From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
        id S1428422AbdDXVfl (ORCPT );
        Mon, 24 Apr 2017 17:35:41 -0400
Received: from mail-it0-f67.google.com ([209.85.214.67]:33592 "EHLO
        mail-it0-f67.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
        with ESMTP id S975696AbdDXVfc (ORCPT );
        Mon, 24 Apr 2017 17:35:32 -0400
Date: Mon, 24 Apr 2017 14:35:28 -0700
From: Tejun Heo
To: Ingo Molnar , Peter Zijlstra
Cc: linux-kernel@vger.kernel.org, Linus Torvalds , Vincent Guittot ,
        Mike Galbraith , Paul Turner , Chris Mason , kernel-team@fb.com
Subject: [PATCH 3/2] sched/fair: Skip __update_load_avg() on cfs_rq sched_entities
Message-ID: <20170424213528.GB23619@wtj.duckdns.org>
References: <20170424201344.GA14169@wtj.duckdns.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20170424201344.GA14169@wtj.duckdns.org>
User-Agent: Mutt/1.8.0 (2017-02-23)
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

Now that a cfs_rq sched_entity's load_avg always gets propagated from
the associated cfs_rq, there's no point in calling __update_load_avg()
on it.  The two mechanisms compete with each other and we'd always be
using a value close to the propagated one anyway.

Skip __update_load_avg() for cfs_rq sched_entities.  Also, relocate
propagate_entity_load_avg() to signify that propagation is the
counterpart to __update_load_avg() for cfs_rq sched_entities.  This
puts the propagation before update_cfs_rq_load_avg() which shouldn't
disturb anything.

Signed-off-by: Tejun Heo
Cc: Vincent Guittot
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Mike Galbraith
Cc: Paul Turner
---
Hello,

A follow-up patch.  This removes __update_load_avg() on cfs_rq se's as
the value is now constantly kept in sync from cfs_rq.  The patch
doesn't cause any noticeable changes in tests.

Thanks.

 kernel/sched/fair.c | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3293,20 +3293,22 @@ static inline void update_load_avg(struc
 	u64 now = cfs_rq_clock_task(cfs_rq);
 	struct rq *rq = rq_of(cfs_rq);
 	int cpu = cpu_of(rq);
-	int decayed;
+	int decayed = 0;
 
 	/*
 	 * Track task load average for carrying it to new CPU after migrated, and
 	 * track group sched_entity load average for task_h_load calc in migration
 	 */
-	if (se->avg.last_update_time && !(flags & SKIP_AGE_LOAD)) {
-		__update_load_avg(now, cpu, &se->avg,
-			  se->on_rq * scale_load_down(se->load.weight),
-			  cfs_rq->curr == se, NULL);
+	if (entity_is_task(se)) {
+		if (se->avg.last_update_time && !(flags & SKIP_AGE_LOAD))
+			__update_load_avg(now, cpu, &se->avg,
+				se->on_rq * scale_load_down(se->load.weight),
+				cfs_rq->curr == se, NULL);
+	} else {
+		decayed |= propagate_entity_load_avg(se);
 	}
 
-	decayed  = update_cfs_rq_load_avg(now, cfs_rq, true);
-	decayed |= propagate_entity_load_avg(se);
+	decayed |= update_cfs_rq_load_avg(now, cfs_rq, true);
 
 	if (decayed && (flags & UPDATE_TG))
 		update_tg_load_avg(cfs_rq, 0);
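
For readability, here is a minimal sketch of what the update_load_avg()
control flow looks like with this patch applied.  It is reconstructed
from the hunk above rather than copied from a tree, so the function
signature and the cfs_rq_of() line are assumed, and it only compiles
inside kernel/sched/fair.c with the usual scheduler-internal
definitions:

static inline void update_load_avg(struct sched_entity *se, int flags)
{
	struct cfs_rq *cfs_rq = cfs_rq_of(se);
	u64 now = cfs_rq_clock_task(cfs_rq);
	struct rq *rq = rq_of(cfs_rq);
	int cpu = cpu_of(rq);
	int decayed = 0;

	if (entity_is_task(se)) {
		/* task se's still age their own load_avg as before */
		if (se->avg.last_update_time && !(flags & SKIP_AGE_LOAD))
			__update_load_avg(now, cpu, &se->avg,
					  se->on_rq * scale_load_down(se->load.weight),
					  cfs_rq->curr == se, NULL);
	} else {
		/* cfs_rq se's only take the value propagated from their cfs_rq */
		decayed |= propagate_entity_load_avg(se);
	}

	/* the cfs_rq itself is still aged and can report decay */
	decayed |= update_cfs_rq_load_avg(now, cfs_rq, true);

	if (decayed && (flags & UPDATE_TG))
		update_tg_load_avg(cfs_rq, 0);
}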