From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: <linux-kernel-owner@vger.kernel.org>
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752161AbbJEBqn (ORCPT );
	Sun, 4 Oct 2015 21:46:43 -0400
Received: from mga03.intel.com ([134.134.136.65]:42676 "EHLO mga03.intel.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1752000AbbJEBp4 (ORCPT );
	Sun, 4 Oct 2015 21:45:56 -0400
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.17,635,1437462000"; d="scan'208";a="819369080"
From: Yuyang Du <yuyang.du@intel.com>
To: mingo@kernel.org, peterz@infradead.org, linux-kernel@vger.kernel.org
Cc: pjt@google.com, bsegall@google.com, morten.rasmussen@arm.com,
	vincent.guittot@linaro.org, dietmar.eggemann@arm.com,
	Yuyang Du <yuyang.du@intel.com>
Subject: [PATCH 4/4] sched/fair: Rename scale_load() and scale_load_down()
Date: Mon, 5 Oct 2015 01:56:59 +0800
Message-Id: <1443981419-16665-5-git-send-email-yuyang.du@intel.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1443981419-16665-1-git-send-email-yuyang.du@intel.com>
References: <1443981419-16665-1-git-send-email-yuyang.du@intel.com>
Sender: linux-kernel-owner@vger.kernel.org
List-ID: <linux-kernel.vger.kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

Rename scale_load() and scale_load_down() to user_to_kernel_load()
and kernel_to_user_load() respectively, so that the names reflect
what the conversions actually do.

Signed-off-by: Yuyang Du <yuyang.du@intel.com>
---
 kernel/sched/core.c  |  8 ++++----
 kernel/sched/fair.c  |  7 ++++---
 kernel/sched/sched.h | 15 ++++++++-------
 3 files changed, 16 insertions(+), 14 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index ffe7b7e..1359871 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -818,12 +818,12 @@ static void set_load_weight(struct task_struct *p)
 	 * SCHED_IDLE tasks get minimal weight:
 	 */
 	if (idle_policy(p->policy)) {
-		load->weight = scale_load(WEIGHT_IDLEPRIO);
+		load->weight = user_to_kernel_load(WEIGHT_IDLEPRIO);
 		load->inv_weight = WMULT_IDLEPRIO;
 		return;
 	}
 
-	load->weight = scale_load(prio_to_weight[prio]);
+	load->weight = user_to_kernel_load(prio_to_weight[prio]);
 	load->inv_weight = prio_to_wmult[prio];
 }
 
@@ -8199,7 +8199,7 @@ static void cpu_cgroup_exit(struct cgroup_subsys_state *css,
 static int cpu_shares_write_u64(struct cgroup_subsys_state *css,
 				struct cftype *cftype, u64 shareval)
 {
-	return sched_group_set_shares(css_tg(css), scale_load(shareval));
+	return sched_group_set_shares(css_tg(css), user_to_kernel_load(shareval));
 }
 
 static u64 cpu_shares_read_u64(struct cgroup_subsys_state *css,
@@ -8207,7 +8207,7 @@ static u64 cpu_shares_read_u64(struct cgroup_subsys_state *css,
 {
 	struct task_group *tg = css_tg(css);
 
-	return (u64) scale_load_down(tg->shares);
+	return (u64) kernel_to_user_load(tg->shares);
 }
 
 #ifdef CONFIG_CFS_BANDWIDTH
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 807d960..72db21e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -189,7 +189,7 @@ static void __update_inv_weight(struct load_weight *lw)
 	if (likely(lw->inv_weight))
 		return;
 
-	w = scale_load_down(lw->weight);
+	w = kernel_to_user_load(lw->weight);
 
 	if (BITS_PER_LONG > 32 && unlikely(w >= WMULT_CONST))
 		lw->inv_weight = 1;
@@ -213,7 +213,7 @@ static void __update_inv_weight(struct load_weight *lw)
  */
 static u64 __calc_delta(u64 delta_exec, unsigned long weight, struct load_weight *lw)
 {
-	u64 fact = scale_load_down(weight);
+	u64 fact = kernel_to_user_load(weight);
 	int shift = WMULT_SHIFT;
 
 	__update_inv_weight(lw);
@@ -8205,7 +8205,8 @@ int sched_group_set_shares(struct task_group *tg, unsigned long shares)
 	if (!tg->se[0])
 		return -EINVAL;
 
-	shares = clamp(shares, scale_load(MIN_SHARES), scale_load(MAX_SHARES));
+	shares = clamp(shares, user_to_kernel_load(MIN_SHARES),
+		       user_to_kernel_load(MAX_SHARES));
 
 	mutex_lock(&shares_mutex);
 	if (tg->shares == shares)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 3d03956..0a1e972 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -54,21 +54,22 @@ static inline void update_cpu_load_active(struct rq *this_rq) { }
  */
 #if 0 /* BITS_PER_LONG > 32 -- currently broken: it increases power usage under light load */
 # define NICE_0_LOAD_SHIFT	(SCHED_RESOLUTION_SHIFT + SCHED_RESOLUTION_SHIFT)
-# define scale_load(w)		((w) << SCHED_RESOLUTION_SHIFT)
-# define scale_load_down(w)	((w) >> SCHED_RESOLUTION_SHIFT)
+# define user_to_kernel_load(w)	((w) << SCHED_RESOLUTION_SHIFT)
+# define kernel_to_user_load(w)	((w) >> SCHED_RESOLUTION_SHIFT)
 #else
 # define NICE_0_LOAD_SHIFT	(SCHED_RESOLUTION_SHIFT)
-# define scale_load(w)		(w)
-# define scale_load_down(w)	(w)
+# define user_to_kernel_load(w)	(w)
+# define kernel_to_user_load(w)	(w)
 #endif
 
 /*
  * Task weight (visible to user) and its load (invisible to user) have
  * independent resolution, but they should be well calibrated. We use
- * scale_load() and scale_load_down(w) to convert between them. The
- * following must be true:
+ * user_to_kernel_load() and kernel_to_user_load(w) to convert between
+ * them. The following must be true:
  *
- * scale_load(prio_to_weight[USER_PRIO(NICE_TO_PRIO(0))]) == NICE_0_LOAD
+ * user_to_kernel_load(prio_to_weight[USER_PRIO(NICE_TO_PRIO(0))]) == NICE_0_LOAD
+ * kernel_to_user_load(NICE_0_LOAD) == prio_to_weight[USER_PRIO(NICE_TO_PRIO(0))]
  *
  */
 #define NICE_0_LOAD		(1L << NICE_0_LOAD_SHIFT)
-- 
2.1.4
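
For illustration, a minimal user-space sketch of the calibration rules
documented in the sched.h comment above, assuming the increased-resolution
case where SCHED_RESOLUTION_SHIFT is 10 (so NICE_0_LOAD_SHIFT is 20) and a
nice-0 weight of 1024. The macros mirror the renamed helpers in the patch,
but this is a standalone approximation, not kernel code:

	#include <assert.h>

	#define SCHED_RESOLUTION_SHIFT	10
	#define NICE_0_LOAD_SHIFT	(SCHED_RESOLUTION_SHIFT + SCHED_RESOLUTION_SHIFT)
	#define NICE_0_LOAD		(1L << NICE_0_LOAD_SHIFT)

	/* User-visible weight -> kernel load: scale up by the extra resolution. */
	#define user_to_kernel_load(w)	((long)(w) << SCHED_RESOLUTION_SHIFT)
	/* Kernel load -> user-visible weight: scale back down. */
	#define kernel_to_user_load(w)	((long)(w) >> SCHED_RESOLUTION_SHIFT)

	int main(void)
	{
		long nice0 = 1024;	/* prio_to_weight[USER_PRIO(NICE_TO_PRIO(0))] */

		/* Both calibration rules from the sched.h comment must hold. */
		assert(user_to_kernel_load(nice0) == NICE_0_LOAD);
		assert(kernel_to_user_load(NICE_0_LOAD) == nice0);
		return 0;
	}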