From: Mel Gorman <mgorman@techsingularity.net>
To: Peter Zijlstra
Cc: Ingo Molnar, Vincent Guittot, Valentin Schneider, Aubrey Li,
	Barry Song, Mike Galbraith, Srikar Dronamraju, LKML,
	Mel Gorman
Subject: [PATCH 1/2] sched/fair: Use weight of SD_NUMA domain in find_busiest_group
Date: Thu, 25 Nov 2021 15:19:40 +0000
Message-Id: <20211125151941.8710-2-mgorman@techsingularity.net>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20211125151941.8710-1-mgorman@techsingularity.net>
References: <20211125151941.8710-1-mgorman@techsingularity.net>

find_busiest_group uses the child domain's group weight instead of
the sched_domain's weight that has SD_NUMA set when calculating the
allowed imbalance between NUMA nodes. This is wrong and inconsistent
with find_idlest_group. This patch uses the SD_NUMA weight in both.

Fixes: c4e8f691d926 ("sched/fair: Adjust the allowed NUMA imbalance when SD_NUMA spans multiple LLCS")
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 6e476f6d9435..0a969affca76 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9397,7 +9397,7 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
 	/* Consider allowing a small imbalance between NUMA groups */
 	if (env->sd->flags & SD_NUMA) {
 		env->imbalance = adjust_numa_imbalance(env->imbalance,
-			busiest->sum_nr_running, busiest->group_weight);
+			busiest->sum_nr_running, env->sd->span_weight);
 	}
 
 	return;
-- 
2.31.1
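
[Editorial note, not part of the original mail: the weight argument matters
because adjust_numa_imbalance() only tolerates an imbalance while the number
of running tasks is below roughly a quarter of the domain's weight. The
standalone program below is a sketch that paraphrases that logic as it stood
in kernels of this era; it is not the kernel source itself, and the CPU and
task counts are made-up example values. It shows how passing the child (LLC)
group's weight versus the SD_NUMA span weight flips the decision.]

/*
 * Standalone sketch of the adjust_numa_imbalance() logic this patch
 * feeds. The 25%-busy threshold paraphrases the kernel's
 * allow_numa_imbalance(); node/LLC sizes below are illustrative only.
 */
#include <stdio.h>
#include <stdbool.h>

#define NUMA_IMBALANCE_MIN 2

/* Tolerate imbalance while fewer than 25% of the CPUs are busy. */
static bool allow_numa_imbalance(int dst_running, int dst_weight)
{
	return dst_running < (dst_weight >> 2);
}

static int adjust_numa_imbalance(int imbalance, int dst_running, int dst_weight)
{
	if (!allow_numa_imbalance(dst_running, dst_weight))
		return imbalance;

	/* Forgive a small imbalance, e.g. a communicating task pair. */
	if (imbalance <= NUMA_IMBALANCE_MIN)
		return 0;

	return imbalance;
}

int main(void)
{
	/* Example: a 32-CPU NUMA node built from two 16-CPU LLCs, 5 tasks running. */
	int imbalance = 2, running = 5;
	int group_weight = 16;	/* child (LLC) weight: 5 >= 16/4, imbalance kept */
	int span_weight = 32;	/* SD_NUMA weight:     5 <  32/4, imbalance forgiven */

	printf("group_weight=%d -> imbalance %d\n", group_weight,
	       adjust_numa_imbalance(imbalance, running, group_weight));
	printf("span_weight=%d  -> imbalance %d\n", span_weight,
	       adjust_numa_imbalance(imbalance, running, span_weight));
	return 0;
}

[With these numbers, the child group weight of 16 keeps the imbalance of 2,
while the SD_NUMA span weight of 32 forgives it, matching the behaviour
change the one-line fix introduces and the existing behaviour of
find_idlest_group.]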