From: Joel Fernandes
To: linux-kernel@vger.kernel.org
Cc: Joel Fernandes, Srinivas Pandruvada, Len Brown, "Rafael J. Wysocki",
	Viresh Kumar, Ingo Molnar, Peter Zijlstra, Juri Lelli,
	Patrick Bellasi, Steve Muckle, kernel-team@android.com
Subject: [PATCH RFC v2 2/2] sched/fair: Skip frequency update if CPU about to idle
Date: Sun, 3 Sep 2017 13:15:42 -0700
Message-Id: <20170903201542.2929-3-joelaf@google.com>
X-Mailer: git-send-email 2.14.1.581.gf28d330327-goog
In-Reply-To: <20170903201542.2929-1-joelaf@google.com>
References: <20170903201542.2929-1-joelaf@google.com>

In an application playing music, where the music app's thread wakes up
and sleeps periodically on an Android device, it is seen that the
frequency increases slightly on the dequeue and is reduced again on the
following wakeup. This oscillation continues between 300MHz and 350MHz,
even though the task runs at 300MHz whenever it is active. This is
pointless and causes unnecessary wakeups of the governor thread on
slow-switch systems.

This patch prevents a frequency update on the last dequeue, when the
CPU is about to go idle. With this, the number of schedutil governor
thread wakeups is reduced by more than 2x (1389 -> 527).

Cc: Srinivas Pandruvada
Cc: Len Brown
Cc: Rafael J. Wysocki
Cc: Viresh Kumar
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Juri Lelli
Cc: Patrick Bellasi
Cc: Steve Muckle
Cc: kernel-team@android.com
Signed-off-by: Joel Fernandes
---
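Editor's note (after the fold, not for the commit): for reviewers who
want to reproduce the wakeup counts, below is a minimal user-space
sketch of the periodic run/sleep pattern described in the changelog.
The ~5ms busy / ~20ms sleep split is an assumption for illustration;
the original music workload's exact period is not stated. While it
runs, wakeups of the schedutil worker kthread (named "sugov:<cpu>" on
slow-switch systems) can be counted with and without the patch, e.g.
via "trace-cmd record -e sched:sched_wakeup".

/* toy-periodic.c: toy reproducer, not part of the patch.
 * Build: gcc -O2 -o toy-periodic toy-periodic.c
 */
#include <stdint.h>
#include <time.h>

static uint64_t now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ULL + (uint64_t)ts.tv_nsec;
}

int main(void)
{
	const uint64_t busy_ns = 5000000ULL;              /* ~5ms of "work" (assumed) */
	const struct timespec idle_ts = { 0, 20000000L }; /* ~20ms of sleep (assumed) */
	int i;

	for (i = 0; i < 1000; i++) {
		uint64_t start = now_ns();

		/* Active phase: spin so the task has nonzero utilization. */
		while (now_ns() - start < busy_ns)
			;

		/*
		 * Sleep phase: this dequeue is the "last dequeue before
		 * idle" that the patch stops from triggering a frequency
		 * update when this is the only runnable task on the CPU.
		 */
		nanosleep(&idle_ts, NULL);
	}
	return 0;
}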
 kernel/sched/fair.c  | 25 ++++++++++++++++++++++---
 kernel/sched/sched.h |  1 +
 2 files changed, 23 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index bf3595c0badf..82496bb2dad3 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3736,6 +3736,7 @@ static void detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
 #define UPDATE_TG	0x1
 #define SKIP_AGE_LOAD	0x2
 #define DO_ATTACH	0x4
+#define SKIP_CPUFREQ	0x8
 
 /* Update task and its cfs_rq load average */
 static inline void update_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
@@ -3752,7 +3753,7 @@ static inline void update_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
 	if (se->avg.last_update_time && !(flags & SKIP_AGE_LOAD))
 		__update_load_avg_se(now, cpu, cfs_rq, se);
 
-	decayed = update_cfs_rq_load_avg(now, cfs_rq, true);
+	decayed = update_cfs_rq_load_avg(now, cfs_rq, !(flags & SKIP_CPUFREQ));
 	decayed |= propagate_entity_load_avg(se);
 
 	if (!se->avg.last_update_time && (flags & DO_ATTACH)) {
@@ -3850,6 +3851,7 @@ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq, bool update_freq)
 #define UPDATE_TG	0x0
 #define SKIP_AGE_LOAD	0x0
 #define DO_ATTACH	0x0
+#define SKIP_CPUFREQ	0x0
 
 static inline void update_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se, int not_used1)
 {
@@ -4071,6 +4073,8 @@ static __always_inline void return_cfs_rq_runtime(struct cfs_rq *cfs_rq);
 static void
 dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 {
+	int update_flags;
+
 	/*
 	 * Update run-time statistics of the 'current'.
 	 */
@@ -4084,7 +4088,12 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 	 *   - For group entity, update its weight to reflect the new share
 	 *     of its group cfs_rq.
 	 */
-	update_load_avg(cfs_rq, se, UPDATE_TG);
+	update_flags = UPDATE_TG;
+
+	if (flags & DEQUEUE_IDLE)
+		update_flags |= SKIP_CPUFREQ;
+
+	update_load_avg(cfs_rq, se, update_flags);
 	dequeue_runnable_load_avg(cfs_rq, se);
 
 	update_stats_dequeue(cfs_rq, se, flags);
@@ -5231,6 +5240,9 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	struct sched_entity *se = &p->se;
 	int task_sleep = flags & DEQUEUE_SLEEP;
 
+	if (task_sleep && rq->nr_running == 1)
+		flags |= DEQUEUE_IDLE;
+
 	for_each_sched_entity(se) {
 		cfs_rq = cfs_rq_of(se);
 		dequeue_entity(cfs_rq, se, flags);
@@ -5261,13 +5273,20 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	}
 
 	for_each_sched_entity(se) {
+		int update_flags;
+
 		cfs_rq = cfs_rq_of(se);
 		cfs_rq->h_nr_running--;
 
 		if (cfs_rq_throttled(cfs_rq))
 			break;
 
-		update_load_avg(cfs_rq, se, UPDATE_TG);
+		update_flags = UPDATE_TG;
+
+		if (flags & DEQUEUE_IDLE)
+			update_flags |= SKIP_CPUFREQ;
+
+		update_load_avg(cfs_rq, se, update_flags);
 		update_cfs_group(se);
 	}
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index ef16fd0a74fd..44a9048e54dc 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1388,6 +1388,7 @@ extern const u32 sched_prio_to_wmult[40];
 #define DEQUEUE_SAVE		0x02 /* matches ENQUEUE_RESTORE */
 #define DEQUEUE_MOVE		0x04 /* matches ENQUEUE_MOVE */
 #define DEQUEUE_NOCLOCK		0x08 /* matches ENQUEUE_NOCLOCK */
+#define DEQUEUE_IDLE		0x80
 
 #define ENQUEUE_WAKEUP		0x01
 #define ENQUEUE_RESTORE		0x02
-- 
2.14.1.581.gf28d330327-goog