From: Yuyang Du <yuyang.du@intel.com>
To: peterz@infradead.org, mingo@kernel.org, linux-kernel@vger.kernel.org
Cc: bsegall@google.com, pjt@google.com, morten.rasmussen@arm.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, matt@codeblueprint.co.uk, Yuyang Du <yuyang.du@intel.com>
Subject: [PATCH v5 1/5] sched/fair: Clean up attach_entity_load_avg()
Date: Thu, 9 Jun 2016 07:15:50 +0800
Message-ID: <1465427754-28897-2-git-send-email-yuyang.du@intel.com> (raw)
In-Reply-To: <1465427754-28897-1-git-send-email-yuyang.du@intel.com>

attach_entity_load_avg() is called (indirectly) from:

 - switched_to_fair(): switch between classes to fair
 - task_move_group_fair(): move between task groups
 - enqueue_entity_load_avg(): enqueue entity

Only in switched_to_fair() is it possible that the task's last_update_time is not 0, and therefore only there does the task need a sched avgs update. So move the task sched avgs update to switched_to_fair() only.

In addition, the code is refactored and the code comments are updated.

No functional change.

Signed-off-by: Yuyang Du <yuyang.du@intel.com>
---
 kernel/sched/fair.c | 43 ++++++++++++++++++++-----------------------
 1 file changed, 20 insertions(+), 23 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c6dd8ba..34e658b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2935,24 +2935,6 @@ static inline void update_load_avg(struct sched_entity *se, int update_tg)
 static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
-	if (!sched_feat(ATTACH_AGE_LOAD))
-		goto skip_aging;
-
-	/*
-	 * If we got migrated (either between CPUs or between cgroups) we'll
-	 * have aged the average right before clearing @last_update_time.
-	 */
-	if (se->avg.last_update_time) {
-		__update_load_avg(cfs_rq->avg.last_update_time, cpu_of(rq_of(cfs_rq)),
-				  &se->avg, 0, 0, NULL);
-
-		/*
-		 * XXX: we could have just aged the entire load away if we've been
-		 * absent from the fair class for too long.
-		 */
-	}
-
-skip_aging:
 	se->avg.last_update_time = cfs_rq->avg.last_update_time;
 	cfs_rq->avg.load_avg += se->avg.load_avg;
 	cfs_rq->avg.load_sum += se->avg.load_sum;
@@ -2962,6 +2944,19 @@ skip_aging:
 	cfs_rq_util_change(cfs_rq);
 }
 
+static inline void attach_age_load_task(struct rq *rq, struct task_struct *p)
+{
+	struct sched_entity *se = &p->se;
+
+	if (!sched_feat(ATTACH_AGE_LOAD))
+		return;
+
+	if (se->avg.last_update_time) {
+		__update_load_avg(cfs_rq_of(se)->avg.last_update_time, cpu_of(rq),
+				  &se->avg, 0, 0, NULL);
+	}
+}
+
 static void detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
 	__update_load_avg(cfs_rq->avg.last_update_time, cpu_of(rq_of(cfs_rq)),
@@ -3091,6 +3086,7 @@ static inline void
 attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) {}
 static inline void
 detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) {}
+static inline void attach_age_load_task(struct rq *rq, struct task_struct *p) {}
 
 static inline int idle_balance(struct rq *rq)
 {
@@ -8390,6 +8386,12 @@ static void switched_from_fair(struct rq *rq, struct task_struct *p)
 
 static void switched_to_fair(struct rq *rq, struct task_struct *p)
 {
+	/*
+	 * If we change between classes, age the averages before attaching them.
+	 * XXX: we could have just aged the entire load away if we've been
+	 * absent from the fair class for too long.
+	 */
+	attach_age_load_task(rq, p);
 	attach_task_cfs_rq(p);
 
 	if (task_on_rq_queued(p)) {
@@ -8441,11 +8443,6 @@ static void task_move_group_fair(struct task_struct *p)
 {
 	detach_task_cfs_rq(p);
 	set_task_rq(p, task_cpu(p));
-
-#ifdef CONFIG_SMP
-	/* Tell se's cfs_rq has been changed -- migrated */
-	p->se.avg.last_update_time = 0;
-#endif
 	attach_task_cfs_rq(p);
 }
-- 
1.7.9.5
Thread overview (12+ messages):

2016-06-08 23:15 [PATCH v5 0/5] sched/fair: Fix attach and detach sched avgs for task group change and sched class change — Yuyang Du
2016-06-08 23:15 ` [PATCH v5 1/5] sched/fair: Clean up attach_entity_load_avg() — Yuyang Du [this message]
2016-06-14 11:11   ` Matt Fleming
2016-06-14 12:11     ` Peter Zijlstra
2016-06-14 22:18       ` Yuyang Du
2016-06-08 23:15 ` [PATCH v5 2/5] sched/fair: Fix attaching task sched avgs twice when switching to fair or changing task group — Yuyang Du
2016-06-08 23:15 ` [PATCH v5 3/5] sched/fair: Move load and util avgs from wake_up_new_task() to sched_fork() — Yuyang Du
2016-06-08 23:15 ` [PATCH v5 4/5] sched/fair: Skip detach sched avgs for new task when changing task groups — Yuyang Du
2016-06-14 11:38   ` Matt Fleming
2016-06-14 14:36     ` Peter Zijlstra
2016-06-14 14:45       ` Peter Zijlstra
2016-06-08 23:15 ` [PATCH v5 5/5] sched/fair: Add inline to detach_entity_load_evg() — Yuyang Du