Message-Id: <20170901132748.035370431@infradead.org>
User-Agent: quilt/0.63-1
Date: Fri, 01 Sep 2017 15:21:00 +0200
From: Peter Zijlstra
To: mingo@kernel.org, linux-kernel@vger.kernel.org, tj@kernel.org, josef@toxicpanda.com
Cc: torvalds@linux-foundation.org, vincent.guittot@linaro.org, efault@gmx.de, pjt@google.com, clm@fb.com, dietmar.eggemann@arm.com, morten.rasmussen@arm.com, bsegall@google.com, yuyang.du@intel.com, peterz@infradead.org
Subject: [PATCH -v2 01/18] sched/fair: Clean up calc_cfs_shares()
References: <20170901132059.342024223@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Disposition: inline; filename=peterz-sched-cleanup-update_cfs_shares.patch
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

For consistency's sake, we should have only a single reading of
tg->shares.

Signed-off-by: Peter Zijlstra (Intel)
---
 kernel/sched/fair.c |   31 ++++++++++++-------------------
 1 file changed, 12 insertions(+), 19 deletions(-)

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2633,9 +2633,12 @@ account_entity_dequeue(struct cfs_rq *cf
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
 # ifdef CONFIG_SMP
-static long calc_cfs_shares(struct cfs_rq *cfs_rq, struct task_group *tg)
+static long calc_cfs_shares(struct cfs_rq *cfs_rq)
 {
-        long tg_weight, load, shares;
+        long tg_weight, tg_shares, load, shares;
+        struct task_group *tg = cfs_rq->tg;
+
+        tg_shares = READ_ONCE(tg->shares);
 
         /*
          * This really should be: cfs_rq->avg.load_avg, but instead we use
@@ -2650,7 +2653,7 @@ static long calc_cfs_shares(struct cfs_r
         tg_weight -= cfs_rq->tg_load_avg_contrib;
         tg_weight += load;
 
-        shares = (tg->shares * load);
+        shares = (tg_shares * load);
         if (tg_weight)
                 shares /= tg_weight;
 
@@ -2666,17 +2669,7 @@ static long calc_cfs_shares(struct cfs_r
          * case no task is runnable on a CPU MIN_SHARES=2 should be returned
          * instead of 0.
          */
-        if (shares < MIN_SHARES)
-                shares = MIN_SHARES;
-        if (shares > tg->shares)
-                shares = tg->shares;
-
-        return shares;
-}
-# else /* CONFIG_SMP */
-static inline long calc_cfs_shares(struct cfs_rq *cfs_rq, struct task_group *tg)
-{
-        return tg->shares;
+        return clamp_t(long, shares, MIN_SHARES, tg_shares);
 }
 # endif /* CONFIG_SMP */
 
@@ -2701,7 +2694,6 @@ static inline int throttled_hierarchy(st
 static void update_cfs_shares(struct sched_entity *se)
 {
         struct cfs_rq *cfs_rq = group_cfs_rq(se);
-        struct task_group *tg;
         long shares;
 
         if (!cfs_rq)
@@ -2710,13 +2702,14 @@ static void update_cfs_shares(struct sch
         if (throttled_hierarchy(cfs_rq))
                 return;
 
-        tg = cfs_rq->tg;
-
 #ifndef CONFIG_SMP
-        if (likely(se->load.weight == tg->shares))
+        shares = READ_ONCE(cfs_rq->tg->shares);
+
+        if (likely(se->load.weight == shares))
                 return;
+#else
+        shares = calc_cfs_shares(cfs_rq);
 #endif
-        shares = calc_cfs_shares(cfs_rq, tg);
 
         reweight_entity(cfs_rq_of(se), se, shares);
 }
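
The point of the single READ_ONCE() snapshot is that the scaling and the
clamp then both operate on the same value of tg->shares, even if another
CPU rewrites it concurrently (e.g. through the cpu.shares cgroup file).
Below is a minimal userspace sketch of that pattern, not kernel code: the
names (demo_tg, calc_shares_snapshot, clamp_long) are made up for
illustration, MIN_SHARES is hard-coded to the kernel's value of 2, and
READ_ONCE() is approximated with a relaxed C11 atomic load.

/*
 * Userspace sketch (illustration only, not kernel code): read the
 * concurrently-updated shares value exactly once into a local, then do
 * both the scaling and the clamping against that same snapshot.
 */
#include <stdatomic.h>
#include <stdio.h>

#define MIN_SHARES 2L                     /* kernel's MIN_SHARES is 2 */

struct demo_tg {                          /* hypothetical stand-in for struct task_group */
        _Atomic long shares;              /* may be rewritten concurrently */
};

static long clamp_long(long v, long lo, long hi)
{
        return v < lo ? lo : (v > hi ? hi : v);
}

static long calc_shares_snapshot(struct demo_tg *tg, long load, long tg_weight)
{
        /* single read; every later use sees the same value */
        long tg_shares = atomic_load_explicit(&tg->shares, memory_order_relaxed);
        long shares = tg_shares * load;

        if (tg_weight)
                shares /= tg_weight;

        /* clamp against the snapshot, not against a fresh tg->shares read */
        return clamp_long(shares, MIN_SHARES, tg_shares);
}

int main(void)
{
        struct demo_tg tg = { .shares = 1024 };

        printf("%ld\n", calc_shares_snapshot(&tg, 512, 2048));  /* 256 */
        printf("%ld\n", calc_shares_snapshot(&tg, 0, 2048));    /* 2 (MIN_SHARES) */
        return 0;
}

Built with e.g. "cc -std=c11", the two calls print 256 and 2. With two
separate reads of tg->shares, the scaling and the upper clamp could see
different values; the snapshot keeps the result inside
[MIN_SHARES, tg_shares] for one consistent tg_shares.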