From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <linux-kernel-owner@vger.kernel.org>
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752527AbdIANfT (ORCPT );
	Fri, 1 Sep 2017 09:35:19 -0400
Received: from bombadil.infradead.org ([65.50.211.133]:48243 "EHLO
	bombadil.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1752290AbdIANfO (ORCPT );
	Fri, 1 Sep 2017 09:35:14 -0400
Message-Id: <20170901132748.336490447@infradead.org>
User-Agent: quilt/0.63-1
Date: Fri, 01 Sep 2017 15:21:06 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: mingo@kernel.org, linux-kernel@vger.kernel.org, tj@kernel.org,
	josef@toxicpanda.com
Cc: torvalds@linux-foundation.org, vincent.guittot@linaro.org, efault@gmx.de,
	pjt@google.com, clm@fb.com, dietmar.eggemann@arm.com,
	morten.rasmussen@arm.com, bsegall@google.com, yuyang.du@intel.com,
	peterz@infradead.org
Subject: [PATCH -v2 07/18] sched/fair: Rename {en,de}queue_entity_load_avg()
References: <20170901132059.342024223@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Disposition: inline; filename=peterz-sched-rename-enqueue_entity_load_avg.patch
Sender: linux-kernel-owner@vger.kernel.org
List-ID: <linux-kernel.vger.kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

Since they're now purely about runnable_load, rename them.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 kernel/sched/fair.c |   12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3561,7 +3561,7 @@ static inline void update_load_avg(struc
 
 /* Add the load generated by se into cfs_rq's load average */
 static inline void
-enqueue_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
+enqueue_runnable_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
 	cfs_rq->runnable_load_avg += se->avg.load_avg;
 	cfs_rq->runnable_load_sum += se_weight(se) * se->avg.load_sum;
@@ -3569,7 +3569,7 @@ enqueue_entity_load_avg(struct cfs_rq *c
 
 /* Remove the runnable load generated by se from cfs_rq's runnable load average */
 static inline void
-dequeue_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
+dequeue_runnable_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
 	sub_positive(&cfs_rq->runnable_load_avg, se->avg.load_avg);
 	sub_positive(&cfs_rq->runnable_load_sum, se_weight(se) * se->avg.load_sum);
@@ -3662,9 +3662,9 @@ static inline void update_load_avg(struc
 }
 
 static inline void
-enqueue_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) {}
+enqueue_runnable_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) {}
 static inline void
-dequeue_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) {}
+dequeue_runnable_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) {}
 static inline void remove_entity_load_avg(struct sched_entity *se) {}
 
 static inline void
@@ -3810,7 +3810,7 @@ enqueue_entity(struct cfs_rq *cfs_rq, st
 	 * - Add its new weight to cfs_rq->load.weight
 	 */
 	update_load_avg(cfs_rq, se, UPDATE_TG | DO_ATTACH);
-	enqueue_entity_load_avg(cfs_rq, se);
+	enqueue_runnable_load_avg(cfs_rq, se);
 	update_cfs_shares(se);
 	account_entity_enqueue(cfs_rq, se);
 
@@ -3894,7 +3894,7 @@ dequeue_entity(struct cfs_rq *cfs_rq, st
 	 * of its group cfs_rq.
	 */
 	update_load_avg(cfs_rq, se, UPDATE_TG);
-	dequeue_entity_load_avg(cfs_rq, se);
+	dequeue_runnable_load_avg(cfs_rq, se);
 
 	update_stats_dequeue(cfs_rq, se, flags);
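
[Editor's note, not part of the patch: the new names encode the invariant
that cfs_rq->runnable_load_avg sums only entities currently enqueued,
whereas cfs_rq->avg.load_avg, maintained by the attach/detach paths, also
carries the decayed contribution of blocked entities. Below is a minimal,
illustrative-only userspace sketch of the enqueue/dequeue pairing; the
struct definitions and se_weight() are simplified stand-ins, not the
kernel's, and plain subtraction replaces the kernel's sub_positive()
underflow guard.]

	#include <stdio.h>

	typedef unsigned long long u64;

	struct sched_avg    { unsigned long load_avg; u64 load_sum; };
	struct sched_entity { struct sched_avg avg; unsigned long weight; };
	struct cfs_rq       { unsigned long runnable_load_avg; u64 runnable_load_sum; };

	static unsigned long se_weight(struct sched_entity *se)
	{
		return se->weight;	/* the kernel scales this; plain weight here */
	}

	/* Entity becomes runnable: add its load to the runnable-only sums. */
	static void enqueue_runnable_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
	{
		cfs_rq->runnable_load_avg += se->avg.load_avg;
		cfs_rq->runnable_load_sum += se_weight(se) * se->avg.load_sum;
	}

	/* Entity blocks: remove it again; any blocked load lives elsewhere. */
	static void dequeue_runnable_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
	{
		cfs_rq->runnable_load_avg -= se->avg.load_avg;
		cfs_rq->runnable_load_sum -= se_weight(se) * se->avg.load_sum;
	}

	int main(void)
	{
		struct cfs_rq rq = { 0, 0 };
		struct sched_entity se = { { 1024, 47742 }, 1024 };

		enqueue_runnable_load_avg(&rq, &se);
		printf("runnable_load_avg after enqueue: %lu\n", rq.runnable_load_avg);	/* 1024 */
		dequeue_runnable_load_avg(&rq, &se);
		printf("runnable_load_avg after dequeue: %lu\n", rq.runnable_load_avg);	/* 0 */
		return 0;
	}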