* [PATCH] sched: fix sched_entity avg statistics update
@ 2014-01-21 16:12 Vincent Guittot
  2014-01-21 18:38 ` bsegall
  2014-01-22 10:10 ` [PATCH] sched: fix sched_entity avg statistics update Chris Redpath
  0 siblings, 2 replies; 11+ messages in thread
From: Vincent Guittot @ 2014-01-21 16:12 UTC (permalink / raw)
  To: peterz, linux-kernel; +Cc: mingo, pjt, bsegall, linaro-kernel, Vincent Guittot

With the current implementation, the load average statistics of a sched entity
change according to other activity on the CPU, even if that activity occurs
between the running windows of the sched entity and has no influence on the
task's running duration.

When a task wakes up on the same CPU, we currently update last_runnable_update
with the return value of __synchronize_entity_decay without updating
runnable_avg_sum and runnable_avg_period accordingly. In fact, we have to sync
the load_contrib of the se with the rq's blocked_load_contrib before removing
it from the latter (with __synchronize_entity_decay), but we must keep
last_runnable_update unchanged so that runnable_avg_sum/period are updated
correctly during the next update_entity_load_avg.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/fair.c |    3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e64b079..5b0ef90 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2370,8 +2370,7 @@ static inline void enqueue_entity_load_avg(struct cfs_rq *cfs_rq,
 		 * would have made count negative); we must be careful to avoid
 		 * double-accounting blocked time after synchronizing decays.
 		 */
-		se->avg.last_runnable_update += __synchronize_entity_decay(se)
-							<< 20;
+		__synchronize_entity_decay(se);
 	}
 
 	/* migrated tasks did not contribute to our blocked load */
-- 
1.7.9.5
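
To make the mechanism under discussion easier to follow, here is a
deliberately simplified, stand-alone model of the quantities involved.  It
only illustrates the behaviour described in the changelog: the names mirror
the kernel's, but the decay is crudely approximated and all of the real
code's period and overflow handling is omitted.

/*
 * Toy model (not kernel code) of the per-entity load tracking pieces
 * discussed above.  Assumptions: one "period" is 2^20 ns, decay is
 * approximated with a right shift instead of the kernel's y^n table,
 * and partial periods are ignored.
 */
#include <stdint.h>

struct toy_sched_avg {
	uint64_t last_runnable_update;	/* ns: start of the current window */
	uint64_t runnable_avg_sum;	/* decayed ns spent runnable */
	uint64_t runnable_avg_period;	/* decayed ns of elapsed time */
	uint64_t decay_count;		/* full periods spent blocked */
	uint64_t load_avg_contrib;	/* contribution folded into the cfs_rq */
};

/* Decay load_avg_contrib for the periods spent blocked; return that count. */
static uint64_t toy_synchronize_entity_decay(struct toy_sched_avg *sa)
{
	uint64_t periods = sa->decay_count;

	sa->decay_count = 0;
	sa->load_avg_contrib = periods < 64 ?
			sa->load_avg_contrib >> periods : 0;	/* ~y^periods */
	return periods;
}

/* Fold the time since last_runnable_update into the sum/period quotient. */
static void toy_update_runnable_avg(struct toy_sched_avg *sa, uint64_t now_ns,
				    int runnable)
{
	uint64_t delta = now_ns - sa->last_runnable_update;

	sa->last_runnable_update = now_ns;
	if (runnable)
		sa->runnable_avg_sum += delta;	/* the kernel also decays these */
	sa->runnable_avg_period += delta;
}

/*
 * With the hunk above applied, a same-CPU re-wake only calls
 * toy_synchronize_entity_decay(): load_avg_contrib is decayed to match what
 * blocked_load_avg did while the task slept, while last_runnable_update is
 * left alone, so the next toy_update_runnable_avg() charges the whole sleep
 * to runnable_avg_period and the quotient decays by exactly the time spent
 * asleep, independent of other activity on the CPU.  Advancing
 * last_runnable_update by (periods << 20), as the removed line did, hides
 * part of that sleep from the quotient.
 */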



* Re: [PATCH] sched: fix sched_entity avg statistics update
  2014-01-21 16:12 [PATCH] sched: fix sched_entity avg statistics update Vincent Guittot
@ 2014-01-21 18:38 ` bsegall
  2014-01-21 20:06   ` Vincent Guittot
       [not found]   ` <CAKfTPtDEKnM+er0NLxCKN4gK_KP3hnzPd13k7qqYtyWKCgdP4w@mail.gmail.com>
  2014-01-22 10:10 ` [PATCH] sched: fix sched_entity avg statistics update Chris Redpath
  1 sibling, 2 replies; 11+ messages in thread
From: bsegall @ 2014-01-21 18:38 UTC (permalink / raw)
  To: Vincent Guittot; +Cc: peterz, linux-kernel, mingo, pjt, linaro-kernel

Vincent Guittot <vincent.guittot@linaro.org> writes:

> With the current implementation, the load average statistics of a sched entity
> change according to other activity on the CPU even if this activity is done
> between the running window of the sched entity and have no influence on the
> running duration of the task.
>
> When a task wakes up on the same CPU, we currently update last_runnable_update
> with the return  of __synchronize_entity_decay without updating the
> runnable_avg_sum and runnable_avg_period accordingly. In fact, we have to sync
> the load_contrib of the se with the rq's blocked_load_contrib before removing
> it from the latter (with __synchronize_entity_decay) but we must keep
> last_runnable_update unchanged for updating runnable_avg_sum/period during the
> next update_entity_load_avg.

... Gah, that's correct, we had this right the first time. Could you do
this as a full revert of 282cf499f03ec1754b6c8c945c9674b02631fb0f (i.e.
remove the now-inaccurate comment, or maybe replace it with a correct one)?
>
> Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
> ---
>  kernel/sched/fair.c |    3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index e64b079..5b0ef90 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -2370,8 +2370,7 @@ static inline void enqueue_entity_load_avg(struct cfs_rq *cfs_rq,
>  		 * would have made count negative); we must be careful to avoid
>  		 * double-accounting blocked time after synchronizing decays.
>  		 */
> -		se->avg.last_runnable_update += __synchronize_entity_decay(se)
> -							<< 20;
> +		__synchronize_entity_decay(se);
>  	}
>  
>  	/* migrated tasks did not contribute to our blocked load */


* Re: [PATCH] sched: fix sched_entity avg statistics update
  2014-01-21 18:38 ` bsegall
@ 2014-01-21 20:06   ` Vincent Guittot
       [not found]   ` <CAKfTPtDEKnM+er0NLxCKN4gK_KP3hnzPd13k7qqYtyWKCgdP4w@mail.gmail.com>
  1 sibling, 0 replies; 11+ messages in thread
From: Vincent Guittot @ 2014-01-21 20:06 UTC (permalink / raw)
  To: Benjamin Segall
  Cc: Peter Zijlstra, linux-kernel, Ingo Molnar, Paul Turner, linaro-kernel

On 21 January 2014 19:38,  <bsegall@google.com> wrote:
> Vincent Guittot <vincent.guittot@linaro.org> writes:
>
>> With the current implementation, the load average statistics of a sched entity
>> change according to other activity on the CPU even if this activity is done
>> between the running window of the sched entity and have no influence on the
>> running duration of the task.
>>
>> When a task wakes up on the same CPU, we currently update last_runnable_update
>> with the return  of __synchronize_entity_decay without updating the
>> runnable_avg_sum and runnable_avg_period accordingly. In fact, we have to sync
>> the load_contrib of the se with the rq's blocked_load_contrib before removing
>> it from the latter (with __synchronize_entity_decay) but we must keep
>> last_runnable_update unchanged for updating runnable_avg_sum/period during the
>> next update_entity_load_avg.
>
> ... Gah, that's correct, we had this right the first time. Could you do
> this as a full revert of 282cf499f03ec1754b6c8c945c9674b02631fb0f (ie
> remove the now inaccurate comment, or maybe replace it with a correct one).

OK, I'm going to remove the comment as well and replace it with a new description.

Vincent

>>
>> Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
>> ---
>>  kernel/sched/fair.c |    3 +--
>>  1 file changed, 1 insertion(+), 2 deletions(-)
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index e64b079..5b0ef90 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -2370,8 +2370,7 @@ static inline void enqueue_entity_load_avg(struct cfs_rq *cfs_rq,
>>                * would have made count negative); we must be careful to avoid
>>                * double-accounting blocked time after synchronizing decays.
>>                */
>> -             se->avg.last_runnable_update += __synchronize_entity_decay(se)
>> -                                                     << 20;
>> +             __synchronize_entity_decay(se);
>>       }
>>
>>       /* migrated tasks did not contribute to our blocked load */


* Re: [PATCH] sched: fix sched_entity avg statistics update
       [not found]   ` <CAKfTPtDEKnM+er0NLxCKN4gK_KP3hnzPd13k7qqYtyWKCgdP4w@mail.gmail.com>
@ 2014-01-21 20:31     ` Paul Turner
  2014-01-21 20:45       ` Peter Zijlstra
  2014-01-22  7:45       ` [PATCH] Revert "sched: Fix sleep time double accounting in enqueue entity" Vincent Guittot
  0 siblings, 2 replies; 11+ messages in thread
From: Paul Turner @ 2014-01-21 20:31 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: Ben Segall, Ingo Molnar, Peter Zijlstra, Linaro Kernel, LKML

On Tue, Jan 21, 2014 at 12:00 PM, Vincent Guittot
<vincent.guittot@linaro.org> wrote:
>
> On 21 Jan 2014 at 19:39, <bsegall@google.com> wrote:
>
>
>>
>> Vincent Guittot <vincent.guittot@linaro.org> writes:
>>
>> > With the current implementation, the load average statistics of a sched
>> > entity
>> > change according to other activity on the CPU even if this activity is
>> > done
>> > between the running window of the sched entity and have no influence on
>> > the
>> > running duration of the task.
>> >
>> > When a task wakes up on the same CPU, we currently update
>> > last_runnable_update
>> > with the return  of __synchronize_entity_decay without updating the
>> > runnable_avg_sum and runnable_avg_period accordingly. In fact, we have
>> > to sync
>> > the load_contrib of the se with the rq's blocked_load_contrib before
>> > removing
>> > it from the latter (with __synchronize_entity_decay) but we must keep
>> > last_runnable_update unchanged for updating runnable_avg_sum/period
>> > during the
>> > next update_entity_load_avg.
>>
>> ... Gah, that's correct, we had this right the first time. Could you do
>> this as a full revert of 282cf499f03ec1754b6c8c945c9674b02631fb0f (ie
>> remove the now inaccurate comment, or maybe replace it with a correct
>> one).
>
> Ok i'm going to remove comment as well and replace it with a new description
>

I think I need to go through and do a comments patch like we did with
the wake-affine math; it's too easy to make a finicky mistake like
this when not touching this path for a while.

OK, so there are two numerical components we're juggling here:

1) The actual quotient for the current runnable average, stored as
(runnable_avg_sum / runnable_avg_period).  Last updated at
last_runnable_update.
2) The quotient from (1) as of the last time we computed it and accumulated
it within cfs_rq->{runnable, blocked}_load_avg; this is stored in
load_avg_contrib.  We track the passage of off-rq time and migrations
against this value using decay_count.
[ All of the values above are stored on se / se->avg ]

When we are re-enqueuing something and we wish to remove its
contribution from blocked_load_avg, we must update load_avg_contrib in
(2) using the total time it spent off rq (using a jiffy rounded
approximation in decay_count).  However, Alex's patch (which this
reverts) also adjusted the quotient by modifying its last update time
so as to make it look up-to-date, effectively skipping the most recent
idle span.

I think we could make the connection between (1) and (2) more explicit
if we moved the subsequent "if (wakeup)" logic inside the else.  We
can then have a comment that refers to (1) and (2) explicitly, perhaps
something like:

Task re-woke on same cpu (or else migrate_task_rq_fair() would have
made count negative).  Perform an approximate decay on
load_avg_contrib to match blocked_load_avg, and compute a precise
runnable_avg_sum quotient update that will be accumulated into
runnable_load_avg below.
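
A rough sketch of that restructuring, assuming the surrounding 3.13-era
enqueue_entity_load_avg() code (subtract_blocked_load_contrib(),
update_entity_load_avg()) and that the trailing "if (wakeup)" block simply
moves here; an illustration of the idea only, not a tested patch:

	} else {
		/*
		 * Task re-woke on same cpu (or else migrate_task_rq_fair()
		 * would have made count negative).  Perform an approximate
		 * decay on load_avg_contrib to match blocked_load_avg, and
		 * compute a precise runnable_avg_sum quotient update that
		 * will be accumulated into runnable_load_avg below.
		 */
		__synchronize_entity_decay(se);
		if (wakeup) {
			/* this blocked load was tracked on the local rq */
			subtract_blocked_load_contrib(cfs_rq,
						      se->avg.load_avg_contrib);
			update_entity_load_avg(se, 0);
		}
	}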


>
>> >
>> > Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
>> > ---
>> >  kernel/sched/fair.c |    3 +--
>> >  1 file changed, 1 insertion(+), 2 deletions(-)
>> >
>> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> > index e64b079..5b0ef90 100644
>> > --- a/kernel/sched/fair.c
>> > +++ b/kernel/sched/fair.c
>> > @@ -2370,8 +2370,7 @@ static inline void enqueue_entity_load_avg(struct
>> > cfs_rq *cfs_rq,
>> >                * would have made count negative); we must be careful to
>> > avoid
>> >                * double-accounting blocked time after synchronizing
>> > decays.
>> >                */
>> > -             se->avg.last_runnable_update +=
>> > __synchronize_entity_decay(se)
>> > -                                                     << 20;
>> > +             __synchronize_entity_decay(se);
>> >       }
>> >
>> >       /* migrated tasks did not contribute to our blocked load */


* Re: [PATCH] sched: fix sched_entity avg statistics update
  2014-01-21 20:31     ` Paul Turner
@ 2014-01-21 20:45       ` Peter Zijlstra
  2014-01-22  7:45       ` [PATCH] Revert "sched: Fix sleep time double accounting in enqueue entity" Vincent Guittot
  1 sibling, 0 replies; 11+ messages in thread
From: Peter Zijlstra @ 2014-01-21 20:45 UTC (permalink / raw)
  To: Paul Turner; +Cc: Vincent Guittot, Ben Segall, Ingo Molnar, Linaro Kernel, LKML

On Tue, Jan 21, 2014 at 12:31:18PM -0800, Paul Turner wrote:
> I think I need to go through and do a comments patch like we did with
> the wake-affine math; it's too easy to make a finicky mistake like
> this when not touching this path for a while.

If you're going to do that, please consider fixing the XXX on line 4783:

 * [XXX write more on how we solve this.. _after_ merging pjt's patches that
 *      rewrite all of this once again.]


* [PATCH] Revert "sched: Fix sleep time double accounting in enqueue entity"
  2014-01-21 20:31     ` Paul Turner
  2014-01-21 20:45       ` Peter Zijlstra
@ 2014-01-22  7:45       ` Vincent Guittot
  2014-01-22  7:50         ` Vincent Guittot
                           ` (2 more replies)
  1 sibling, 3 replies; 11+ messages in thread
From: Vincent Guittot @ 2014-01-22  7:45 UTC (permalink / raw)
  To: peterz, linux-kernel
  Cc: mingo, pjt, bsegall, alex.shi, linaro-kernel, Vincent Guittot

This reverts commit 282cf499f03ec1754b6c8c945c9674b02631fb0f.

With the current implementation, the load average statistics of a sched entity
change according to other activity on the CPU, even if that activity occurs
between the running windows of the sched entity and has no influence on the
task's running duration.

When a task wakes up on the same CPU, we currently update last_runnable_update
with the return value of __synchronize_entity_decay without updating
runnable_avg_sum and runnable_avg_period accordingly. In fact, we have to sync
the load_contrib of the se with the rq's blocked_load_contrib before removing
it from the latter (with __synchronize_entity_decay), but we must keep
last_runnable_update unchanged so that runnable_avg_sum/period are updated
correctly during the next update_entity_load_avg.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>

---
 kernel/sched/fair.c |    8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e64b079..6d61f20 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2365,13 +2365,7 @@ static inline void enqueue_entity_load_avg(struct cfs_rq *cfs_rq,
 		}
 		wakeup = 0;
 	} else {
-		/*
-		 * Task re-woke on same cpu (or else migrate_task_rq_fair()
-		 * would have made count negative); we must be careful to avoid
-		 * double-accounting blocked time after synchronizing decays.
-		 */
-		se->avg.last_runnable_update += __synchronize_entity_decay(se)
-							<< 20;
+		__synchronize_entity_decay(se);
 	}
 
 	/* migrated tasks did not contribute to our blocked load */
-- 
1.7.9.5



* Re: [PATCH] Revert "sched: Fix sleep time double accounting in enqueue entity"
  2014-01-22  7:45       ` [PATCH] Revert "sched: Fix sleep time double accounting in enqueue entity" Vincent Guittot
@ 2014-01-22  7:50         ` Vincent Guittot
  2014-01-22 17:53         ` bsegall
  2014-01-23 16:46         ` [tip:sched/urgent] " tip-bot for Vincent Guittot
  2 siblings, 0 replies; 11+ messages in thread
From: Vincent Guittot @ 2014-01-22  7:50 UTC (permalink / raw)
  To: Peter Zijlstra, Paul Turner
  Cc: Ingo Molnar, linux-kernel, Benjamin Segall, Alex Shi,
	linaro-kernel, Vincent Guittot

Paul,

I'll leave it to you to send a patch that adds the comment and moves the "if (wakeup)" logic?

Regards
Vincent

On 22 January 2014 08:45, Vincent Guittot <vincent.guittot@linaro.org> wrote:
> This reverts commit 282cf499f03ec1754b6c8c945c9674b02631fb0f.
>
> With the current implementation, the load average statistics of a sched entity
> change according to other activity on the CPU even if this activity is done
> between the running window of the sched entity and have no influence on the
> running duration of the task.
>
> When a task wakes up on the same CPU, we currently update last_runnable_update
> with the return  of __synchronize_entity_decay without updating the
> runnable_avg_sum and runnable_avg_period accordingly. In fact, we have to sync
> the load_contrib of the se with the rq's blocked_load_contrib before removing
> it from the latter (with __synchronize_entity_decay) but we must keep
> last_runnable_update unchanged for updating runnable_avg_sum/period during the
> next update_entity_load_avg.
>
> Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
>
> ---
>  kernel/sched/fair.c |    8 +-------
>  1 file changed, 1 insertion(+), 7 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index e64b079..6d61f20 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -2365,13 +2365,7 @@ static inline void enqueue_entity_load_avg(struct cfs_rq *cfs_rq,
>                 }
>                 wakeup = 0;
>         } else {
> -               /*
> -                * Task re-woke on same cpu (or else migrate_task_rq_fair()
> -                * would have made count negative); we must be careful to avoid
> -                * double-accounting blocked time after synchronizing decays.
> -                */
> -               se->avg.last_runnable_update += __synchronize_entity_decay(se)
> -                                                       << 20;
> +               __synchronize_entity_decay(se);
>         }
>
>         /* migrated tasks did not contribute to our blocked load */
> --
> 1.7.9.5
>


* Re: [PATCH] sched: fix sched_entity avg statistics update
  2014-01-21 16:12 [PATCH] sched: fix sched_entity avg statistics update Vincent Guittot
  2014-01-21 18:38 ` bsegall
@ 2014-01-22 10:10 ` Chris Redpath
  1 sibling, 0 replies; 11+ messages in thread
From: Chris Redpath @ 2014-01-22 10:10 UTC (permalink / raw)
  To: Vincent Guittot, peterz, linux-kernel; +Cc: mingo, pjt, bsegall, linaro-kernel

On 21/01/14 16:12, Vincent Guittot wrote:
> With the current implementation, the load average statistics of a sched entity
> change according to other activity on the CPU even if this activity is done
> between the running window of the sched entity and have no influence on the
> running duration of the task.
>
> When a task wakes up on the same CPU, we currently update last_runnable_update
> with the return  of __synchronize_entity_decay without updating the
> runnable_avg_sum and runnable_avg_period accordingly. In fact, we have to sync
> the load_contrib of the se with the rq's blocked_load_contrib before removing
> it from the latter (with __synchronize_entity_decay) but we must keep
> last_runnable_update unchanged for updating runnable_avg_sum/period during the
> next update_entity_load_avg.
>
> Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
> ---
>   kernel/sched/fair.c |    3 +--
>   1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index e64b079..5b0ef90 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -2370,8 +2370,7 @@ static inline void enqueue_entity_load_avg(struct cfs_rq *cfs_rq,
>   		 * would have made count negative); we must be careful to avoid
>   		 * double-accounting blocked time after synchronizing decays.
>   		 */
> -		se->avg.last_runnable_update += __synchronize_entity_decay(se)
> -							<< 20;
> +		__synchronize_entity_decay(se);
>   	}
>
>   	/* migrated tasks did not contribute to our blocked load */
>

I've noticed this problem too. It becomes more apparent if you closely
inspect load signals and compare them against ideal signals generated from
task runtime traces. IMO it should be fixed.
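
For anyone who wants to reproduce that kind of comparison, a minimal sketch
of an "ideal" runnable-average signal computed from a runtime trace could
look like the following.  This is toy user-space code under the usual
tracking assumptions (1024 us periods, y^32 = 1/2); the trace structure is
made up for illustration and is not any existing tool's format.

#include <math.h>
#include <stdio.h>

/* One runnable interval from a task runtime trace (illustrative format). */
struct run_interval {
	double start_us;
	double end_us;
};

/*
 * Idealised runnable average: step through time in 1024 us periods and
 * apply the geometric decay y (y^32 = 0.5) to both the runnable time and
 * the maximum possible time.  Intervals must be sorted and non-overlapping.
 */
static double ideal_runnable_avg(const struct run_interval *iv, int n,
				 double until_us)
{
	const double period_us = 1024.0;
	const double y = pow(0.5, 1.0 / 32.0);
	double sum = 0.0, period_sum = 0.0;
	int i = 0;

	for (double t = 0.0; t < until_us; t += period_us) {
		double runnable = 0.0;

		/* time spent runnable inside [t, t + period_us) */
		for (; i < n && iv[i].start_us < t + period_us; i++) {
			double lo = iv[i].start_us > t ? iv[i].start_us : t;
			double hi = iv[i].end_us < t + period_us ?
					iv[i].end_us : t + period_us;

			if (hi > lo)
				runnable += hi - lo;
			if (iv[i].end_us > t + period_us)
				break;	/* interval continues next period */
		}
		sum = sum * y + runnable;
		period_sum = period_sum * y + period_us;
	}
	return sum / period_sum;	/* 0.0 .. 1.0, cf. runnable_avg_sum/period */
}

int main(void)
{
	/* 50% duty cycle: runnable 5 ms out of every 10 ms, over 100 ms */
	struct run_interval trace[10];

	for (int i = 0; i < 10; i++) {
		trace[i].start_us = i * 10000.0;
		trace[i].end_us = i * 10000.0 + 5000.0;
	}
	printf("ideal runnable avg: %.3f\n",
	       ideal_runnable_avg(trace, 10, 100000.0));
	return 0;
}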



* Re: [PATCH] Revert "sched: Fix sleep time double accounting in enqueue entity"
  2014-01-22  7:45       ` [PATCH] Revert "sched: Fix sleep time double accounting in enqueue entity" Vincent Guittot
  2014-01-22  7:50         ` Vincent Guittot
@ 2014-01-22 17:53         ` bsegall
  2014-01-22 19:54           ` Paul Turner
  2014-01-23 16:46         ` [tip:sched/urgent] " tip-bot for Vincent Guittot
  2 siblings, 1 reply; 11+ messages in thread
From: bsegall @ 2014-01-22 17:53 UTC (permalink / raw)
  To: Vincent Guittot; +Cc: peterz, linux-kernel, mingo, pjt, alex.shi, linaro-kernel

Vincent Guittot <vincent.guittot@linaro.org> writes:

> This reverts commit 282cf499f03ec1754b6c8c945c9674b02631fb0f.
>
> With the current implementation, the load average statistics of a sched entity
> change according to other activity on the CPU even if this activity is done
> between the running window of the sched entity and have no influence on the
> running duration of the task.
>
> When a task wakes up on the same CPU, we currently update last_runnable_update
> with the return  of __synchronize_entity_decay without updating the
> runnable_avg_sum and runnable_avg_period accordingly. In fact, we have to sync
> the load_contrib of the se with the rq's blocked_load_contrib before removing
> it from the latter (with __synchronize_entity_decay) but we must keep
> last_runnable_update unchanged for updating runnable_avg_sum/period during the
> next update_entity_load_avg.
>
> Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Unless Paul wants to squash this into a possible change to the "if (wakeup)" stuff:
Reviewed-by: Ben Segall <bsegall@google.com>

>
> ---
>  kernel/sched/fair.c |    8 +-------
>  1 file changed, 1 insertion(+), 7 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index e64b079..6d61f20 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -2365,13 +2365,7 @@ static inline void enqueue_entity_load_avg(struct cfs_rq *cfs_rq,
>  		}
>  		wakeup = 0;
>  	} else {
> -		/*
> -		 * Task re-woke on same cpu (or else migrate_task_rq_fair()
> -		 * would have made count negative); we must be careful to avoid
> -		 * double-accounting blocked time after synchronizing decays.
> -		 */
> -		se->avg.last_runnable_update += __synchronize_entity_decay(se)
> -							<< 20;
> +		__synchronize_entity_decay(se);
>  	}
>  
>  	/* migrated tasks did not contribute to our blocked load */


* Re: [PATCH] Revert "sched: Fix sleep time double accounting in enqueue entity"
  2014-01-22 17:53         ` bsegall
@ 2014-01-22 19:54           ` Paul Turner
  0 siblings, 0 replies; 11+ messages in thread
From: Paul Turner @ 2014-01-22 19:54 UTC (permalink / raw)
  To: Benjamin Segall
  Cc: Vincent Guittot, Peter Zijlstra, LKML, Ingo Molnar, Alex Shi,
	Linaro Kernel

On Wed, Jan 22, 2014 at 9:53 AM,  <bsegall@google.com> wrote:
> Vincent Guittot <vincent.guittot@linaro.org> writes:
>
>> This reverts commit 282cf499f03ec1754b6c8c945c9674b02631fb0f.
>>
>> With the current implementation, the load average statistics of a sched entity
>> change according to other activity on the CPU even if this activity is done
>> between the running window of the sched entity and have no influence on the
>> running duration of the task.
>>
>> When a task wakes up on the same CPU, we currently update last_runnable_update
>> with the return  of __synchronize_entity_decay without updating the
>> runnable_avg_sum and runnable_avg_period accordingly. In fact, we have to sync
>> the load_contrib of the se with the rq's blocked_load_contrib before removing
>> it from the latter (with __synchronize_entity_decay) but we must keep
>> last_runnable_update unchanged for updating runnable_avg_sum/period during the
>> next update_entity_load_avg.
>>
>> Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
> Unless paul wants to squash this into a possible change to the if
> (wakeup) stuff:
> Reviewed-by: Ben Segall <bsegall@google.com>
>

I can send that separately as Vincent suggests.  This is good to go.

>>
>> ---
>>  kernel/sched/fair.c |    8 +-------
>>  1 file changed, 1 insertion(+), 7 deletions(-)
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index e64b079..6d61f20 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -2365,13 +2365,7 @@ static inline void enqueue_entity_load_avg(struct cfs_rq *cfs_rq,
>>               }
>>               wakeup = 0;
>>       } else {
>> -             /*
>> -              * Task re-woke on same cpu (or else migrate_task_rq_fair()
>> -              * would have made count negative); we must be careful to avoid
>> -              * double-accounting blocked time after synchronizing decays.
>> -              */
>> -             se->avg.last_runnable_update += __synchronize_entity_decay(se)
>> -                                                     << 20;
>> +             __synchronize_entity_decay(se);
>>       }
>>
>>       /* migrated tasks did not contribute to our blocked load */


* [tip:sched/urgent] Revert "sched: Fix sleep time double accounting in enqueue entity"
  2014-01-22  7:45       ` [PATCH] Revert "sched: Fix sleep time double accounting in enqueue entity" Vincent Guittot
  2014-01-22  7:50         ` Vincent Guittot
  2014-01-22 17:53         ` bsegall
@ 2014-01-23 16:46         ` tip-bot for Vincent Guittot
  2 siblings, 0 replies; 11+ messages in thread
From: tip-bot for Vincent Guittot @ 2014-01-23 16:46 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, bsegall, hpa, mingo, peterz, vincent.guittot, tglx

Commit-ID:  9390675af0835ae1d654d33bfcf16096028550ad
Gitweb:     http://git.kernel.org/tip/9390675af0835ae1d654d33bfcf16096028550ad
Author:     Vincent Guittot <vincent.guittot@linaro.org>
AuthorDate: Wed, 22 Jan 2014 08:45:34 +0100
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Thu, 23 Jan 2014 14:48:34 +0100

Revert "sched: Fix sleep time double accounting in enqueue entity"

This reverts commit 282cf499f03ec1754b6c8c945c9674b02631fb0f.

With the current implementation, the load average statistics of a sched entity
change according to other activity on the CPU, even if that activity occurs
between the running windows of the sched entity and has no influence on the
task's running duration.

When a task wakes up on the same CPU, we currently update last_runnable_update
with the return value of __synchronize_entity_decay without updating
runnable_avg_sum and runnable_avg_period accordingly. In fact, we have to sync
the load_contrib of the se with the rq's blocked_load_contrib before removing
it from the latter (with __synchronize_entity_decay), but we must keep
last_runnable_update unchanged so that runnable_avg_sum/period are updated
correctly during the next update_entity_load_avg.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Reviewed-by: Ben Segall <bsegall@google.com>
Cc: pjt@google.com
Cc: alex.shi@linaro.org
Link: http://lkml.kernel.org/r/1390376734-6800-1-git-send-email-vincent.guittot@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/fair.c | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index b24b6cf..efe6457 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2356,13 +2356,7 @@ static inline void enqueue_entity_load_avg(struct cfs_rq *cfs_rq,
 		}
 		wakeup = 0;
 	} else {
-		/*
-		 * Task re-woke on same cpu (or else migrate_task_rq_fair()
-		 * would have made count negative); we must be careful to avoid
-		 * double-accounting blocked time after synchronizing decays.
-		 */
-		se->avg.last_runnable_update += __synchronize_entity_decay(se)
-							<< 20;
+		__synchronize_entity_decay(se);
 	}
 
 	/* migrated tasks did not contribute to our blocked load */


