From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Vincent Donnefort,
    "Peter Zijlstra (Intel)", Ingo Molnar, Dietmar Eggemann,
    Vincent Guittot, Sasha Levin
Subject: [PATCH 5.10 126/299] sched/pelt: Fix task util_est update filtering
Date: Mon, 10 May 2021 12:18:43 +0200
Message-Id: <20210510102009.141358382@linuxfoundation.org>
In-Reply-To: <20210510102004.821838356@linuxfoundation.org>
References: <20210510102004.821838356@linuxfoundation.org>

From: Vincent Donnefort

[ Upstream commit b89997aa88f0b07d8a6414c908af75062103b8c9 ]

Being called for each dequeue, util_est reduces the number of its
updates by filtering out cases where the EWMA signal differs from the
task util_avg by less than 1%. This is a problem for a sudden util_avg
ramp-up: due to the decay from a previous high util_avg, the EWMA might
now be close enough to the new util_avg that no update happens, leaving
ue.enqueued with an out-of-date value.

Taking both util_est members, EWMA and enqueued, into account for the
filtering ensures an up-to-date value for each of them.

For now this is an issue only for the trace probe, which might return
the stale value. Functionally it isn't a problem, as the value is
always accessed through max(enqueued, ewma).

This problem was observed using LISA's UtilConvergence:test_means on
the sd845c board. No regression was observed with Hackbench on sd845c
or with Perf-bench sched pipe on hikey/hikey960.
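For illustration, a minimal standalone sketch of the filtering change
(not kernel code): within_margin() mirrors the helper in
kernel/sched/fair.c, but the bare variables and the numeric values are
made up for the example. The old filter consults only the EWMA diff;
the new one also checks the enqueued diff, so a stale ue.enqueued no
longer survives the filter (in the actual patch that case takes the
goto done path, which writes back ue.enqueued while still skipping the
EWMA update).

	#include <stdbool.h>
	#include <stdio.h>

	#define SCHED_CAPACITY_SCALE	1024
	#define UTIL_EST_MARGIN		(SCHED_CAPACITY_SCALE / 100)

	/* True when a (signed) value lies within +/- margin; same
	 * trick as the kernel helper in kernel/sched/fair.c. */
	static bool within_margin(int value, int margin)
	{
		return ((unsigned int)(value + margin - 1) < (2 * margin - 1));
	}

	int main(void)
	{
		/* Illustrative values: a task ramping back up after decay. */
		int ewma = 200;		/* smoothed estimate		      */
		int enqueued = 150;	/* stale snapshot from last dequeue   */
		int util_avg = 205;	/* fresh utilization after ramp-up    */

		int last_ewma_diff = util_avg - ewma;
		int last_enqueued_diff = util_avg - enqueued;

		/* Old filter: EWMA alone is within ~1% of util_avg, so the
		 * update is skipped and enqueued keeps its stale value. */
		bool old_skip = within_margin(last_ewma_diff, UTIL_EST_MARGIN);

		/* New filter: skip only when *both* members are close enough. */
		bool new_skip = old_skip &&
				within_margin(last_enqueued_diff, UTIL_EST_MARGIN);

		printf("old filter skips update: %d\n", old_skip); /* 1 */
		printf("new filter skips update: %d\n", new_skip); /* 0 */
		return 0;
	}

With these values the EWMA diff (5) is inside the ~1% margin (10) but
the enqueued diff (55) is not, which is exactly the stale-trace-value
scenario the commit message describes.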
Signed-off-by: Vincent Donnefort
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Ingo Molnar
Reviewed-by: Dietmar Eggemann
Reviewed-by: Vincent Guittot
Link: https://lkml.kernel.org/r/20210225165820.1377125-1-vincent.donnefort@arm.com
Signed-off-by: Sasha Levin
---
 kernel/sched/fair.c | 15 ++++++++++++---
 1 file changed, 12 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 348605306027..8f5bbc1469ed 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3948,6 +3948,8 @@ static inline void util_est_dequeue(struct cfs_rq *cfs_rq,
 	trace_sched_util_est_cfs_tp(cfs_rq);
 }
 
+#define UTIL_EST_MARGIN (SCHED_CAPACITY_SCALE / 100)
+
 /*
  * Check if a (signed) value is within a specified (unsigned) margin,
  * based on the observation that:
@@ -3965,7 +3967,7 @@ static inline void util_est_update(struct cfs_rq *cfs_rq,
 				   struct task_struct *p,
 				   bool task_sleep)
 {
-	long last_ewma_diff;
+	long last_ewma_diff, last_enqueued_diff;
 	struct util_est ue;
 
 	if (!sched_feat(UTIL_EST))
@@ -3986,6 +3988,8 @@ static inline void util_est_update(struct cfs_rq *cfs_rq,
 	if (ue.enqueued & UTIL_AVG_UNCHANGED)
 		return;
 
+	last_enqueued_diff = ue.enqueued;
+
 	/*
 	 * Reset EWMA on utilization increases, the moving average is used only
 	 * to smooth utilization decreases.
@@ -3999,12 +4003,17 @@ static inline void util_est_update(struct cfs_rq *cfs_rq,
 	}
 
 	/*
-	 * Skip update of task's estimated utilization when its EWMA is
+	 * Skip update of task's estimated utilization when its members are
 	 * already ~1% close to its last activation value.
 	 */
 	last_ewma_diff = ue.enqueued - ue.ewma;
-	if (within_margin(last_ewma_diff, (SCHED_CAPACITY_SCALE / 100)))
+	last_enqueued_diff -= ue.enqueued;
+	if (within_margin(last_ewma_diff, UTIL_EST_MARGIN)) {
+		if (!within_margin(last_enqueued_diff, UTIL_EST_MARGIN))
+			goto done;
+
 		return;
+	}
 
 	/*
 	 * To avoid overestimation of actual task utilization, skip updates if
-- 
2.30.2