From: Vincent Guittot <vincent.guittot@linaro.org>
To: Yuyang Du <yuyang.du@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>,
	Ingo Molnar <mingo@kernel.org>,
	linux-kernel <linux-kernel@vger.kernel.org>,
	Benjamin Segall <bsegall@google.com>,
	Paul Turner <pjt@google.com>,
	Morten Rasmussen <morten.rasmussen@arm.com>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>
Subject: Re: [PATCH v4 2/5] sched/fair: Fix attaching task sched avgs twice when switching to fair or changing task group
Date: Mon, 6 Jun 2016 14:32:39 +0200	[thread overview]
Message-ID: <CAKfTPtC9oTO4znWxyZq7YSw64KzDamst7SFTy_0+amzNzP+dqg@mail.gmail.com> (raw)
In-Reply-To: <1465172441-27727-3-git-send-email-yuyang.du@intel.com>

Hi Yuyang,

On 6 June 2016 at 02:20, Yuyang Du <yuyang.du@intel.com> wrote:
> Vincent reported that the first task moved to a new task group's cfs_rq
> will have its sched avgs attached in attach_task_cfs_rq() and then once
> more when it is enqueued (see https://lkml.org/lkml/2016/5/25/388).
>
> Actually, it is worse. The sched avgs can sometimes be attached twice,
> not only when we change task groups but also when we switch to the fair
> class. These two scenarios are described below.
>
> 1) Switch to fair class:
>
> The sched class change is done like this:
>
>         if (queued)
>           enqueue_task();
>         check_class_changed()
>           switched_from()
>           switched_to()
>
> If the task is on_rq, it should have already been enqueued, which MAY
> have attached its sched avgs to the cfs_rq. If so, we should not attach
> them again in switched_to(); otherwise, we attach them twice.
>
> To address both the on_rq and !on_rq cases, as well as whether the task
> was switched from fair or not, the simplest solution is to reset the
> task's last_update_time to 0 when the task is switched from fair, and
> then let task enqueue do the sched avgs attachment only once.
>
> 2) Change between fair task groups:
>
> The task groups are changed like this:
>
>         if (queued)
>           dequeue_task()
>         task_move_group()
>         if (queued)
>           enqueue_task()
>
> Unlike the switch-to-fair-class case, if the task is on_rq, it will be
> enqueued after we move task groups, so the simplest solution is to reset
> the task's last_update_time in task_move_group() rather than attach the
> sched avgs there, and then let enqueue_task() do the attachment.

According to the review of the previous version
(http://www.gossamer-threads.com/lists/linux/kernel/2450678#2450678),
only one use case can actually hit this issue: switching to the fair
class with a task that has never been queued as a CFS task before. It
will not happen for the other use cases described above. So can you
limit the description in the commit message to just this use case,
unless you have discovered new ones, in which case please describe them
as well.
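
For reference, a rough sketch of the problematic sequence as I
understand it (pseudo-code only; function names taken from the current
core.c/fair.c, details elided):

        __sched_setscheduler()          /* task switches to the fair class */
          if (queued)
            enqueue_task()              /* p->sched_class is already fair */
              enqueue_entity_load_avg()
                /* last_update_time == 0: never queued as CFS before */
                attach_entity_load_avg()        /* first attach */
          check_class_changed()
            switched_to_fair()
              attach_task_cfs_rq()
                attach_entity_load_avg()        /* second attach: the bug */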

Then, the problem with this use case comes from the fact that
last_update_time == 0 has a special meaning (the task has migrated) and
we initialize last_update_time with this value. A much simpler solution
would be to prevent last_update_time from being initialized with this
special value; we can, for example, initialize the last_update_time of a
sched_entity to 1, which is easier than these changes. Furthermore, this
patch means a task is never decayed for the time that elapses between
its detach and its next enqueue on a cfs_rq, which can be quite a long
duration and will make the task's load far higher than it should be.
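
A minimal sketch of that idea (untested, assuming the sched_avg is set
up in init_entity_runnable_average(); the value 1 is just an arbitrary
non-zero example):

        void init_entity_runnable_average(struct sched_entity *se)
        {
                struct sched_avg *sa = &se->avg;

                /*
                 * 0 is reserved to mean "task has migrated / needs to be
                 * attached at enqueue", so never use it as the initial
                 * value; any non-zero value will do.
                 */
                sa->last_update_time = 1;

                /* ... rest of the existing initialization unchanged ... */
        }

The enqueue path would then no longer treat a freshly initialized entity
as migrated, so for this use case only the attach done via
switched_to_fair() would happen.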

>
> Reported-by: Vincent Guittot <vincent.guittot@linaro.org>
> Signed-off-by: Yuyang Du <yuyang.du@intel.com>
> ---
>  kernel/sched/fair.c |   47 +++++++++++++++++++++--------------------------
>  1 file changed, 21 insertions(+), 26 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 34e658b..28b3415 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -2933,7 +2933,8 @@ static inline void update_load_avg(struct sched_entity *se, int update_tg)
>                 update_tg_load_avg(cfs_rq, 0);
>  }
>
> -static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
> +/* Virtually synchronize task with its cfs_rq */
> +static inline void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
>  {
>         se->avg.last_update_time = cfs_rq->avg.last_update_time;
>         cfs_rq->avg.load_avg += se->avg.load_avg;
> @@ -2944,19 +2945,6 @@ static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
>         cfs_rq_util_change(cfs_rq);
>  }
>
> -static inline void attach_age_load_task(struct rq *rq, struct task_struct *p)
> -{
> -       struct sched_entity *se = &p->se;
> -
> -       if (!sched_feat(ATTACH_AGE_LOAD))
> -               return;
> -
> -       if (se->avg.last_update_time) {
> -               __update_load_avg(cfs_rq_of(se)->avg.last_update_time, cpu_of(rq),
> -                                 &se->avg, 0, 0, NULL);
> -       }
> -}
> -
>  static void detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
>  {
>         __update_load_avg(cfs_rq->avg.last_update_time, cpu_of(rq_of(cfs_rq)),
> @@ -3031,6 +3019,11 @@ static inline u64 cfs_rq_last_update_time(struct cfs_rq *cfs_rq)
>  }
>  #endif
>
> +static inline void reset_task_last_update_time(struct task_struct *p)
> +{
> +       p->se.avg.last_update_time = 0;
> +}
> +
>  /*
>   * Task first catches up with cfs_rq, and then subtract
>   * itself from the cfs_rq (task must be off the queue now).
> @@ -3083,10 +3076,8 @@ dequeue_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) {}
>  static inline void remove_entity_load_avg(struct sched_entity *se) {}
>
>  static inline void
> -attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) {}
> -static inline void
>  detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) {}
> -static inline void attach_age_load_task(struct rq *rq, struct task_struct *p) {}
> +static inline void reset_task_last_update_time(struct task_struct *p) {}
>
>  static inline int idle_balance(struct rq *rq)
>  {
> @@ -8372,9 +8363,6 @@ static void attach_task_cfs_rq(struct task_struct *p)
>         se->depth = se->parent ? se->parent->depth + 1 : 0;
>  #endif
>
> -       /* Synchronize task with its cfs_rq */
> -       attach_entity_load_avg(cfs_rq, se);
> -
>         if (!vruntime_normalized(p))
>                 se->vruntime += cfs_rq->min_vruntime;
>  }
> @@ -8382,16 +8370,18 @@ static void attach_task_cfs_rq(struct task_struct *p)
>  static void switched_from_fair(struct rq *rq, struct task_struct *p)
>  {
>         detach_task_cfs_rq(p);
> +       reset_task_last_update_time(p);
> +       /*
> +        * If we change back to fair class, we will attach the sched
> +        * avgs when we are enqueued, which will be done only once. We
> +        * won't have the chance to consistently age the avgs before
> +        * attaching them, so we have to continue with the last updated
> +        * sched avgs when we were detached.
> +        */
>  }
>
>  static void switched_to_fair(struct rq *rq, struct task_struct *p)
>  {
> -       /*
> -        * If we change between classes, age the averages before attaching them.
> -        * XXX: we could have just aged the entire load away if we've been
> -        * absent from the fair class for too long.
> -        */
> -       attach_age_load_task(rq, p);
>         attach_task_cfs_rq(p);
>
>         if (task_on_rq_queued(p)) {
> @@ -8444,6 +8434,11 @@ static void task_move_group_fair(struct task_struct *p)
>         detach_task_cfs_rq(p);
>         set_task_rq(p, task_cpu(p));
>         attach_task_cfs_rq(p);
> +       /*
> +        * This assures we will attach the sched avgs when we are enqueued,
> +        * which will be done only once.
> +        */
> +       reset_task_last_update_time(p);
>  }
>
>  void free_fair_sched_group(struct task_group *tg)
> --
> 1.7.9.5
>
