From mboxrd@z Thu Jan  1 00:00:00 1970
From: Mel Gorman <mgorman@techsingularity.net>
To: Peter Zijlstra
Cc: Ingo Molnar, Vincent Guittot, Valentin Schneider, Aubrey Li,
	Barry Song, Mike Galbraith, Srikar Dronamraju, Gautham Shenoy,
	LKML, Mel Gorman <mgorman@techsingularity.net>
Subject: [PATCH 1/2] sched/fair: Use weight of SD_NUMA domain in find_busiest_group
Date: Fri, 10 Dec 2021 09:33:06 +0000
Message-Id: <20211210093307.31701-2-mgorman@techsingularity.net>
In-Reply-To: <20211210093307.31701-1-mgorman@techsingularity.net>
References: <20211210093307.31701-1-mgorman@techsingularity.net>

find_busiest_group() uses the child domain's group weight instead of
the weight of the sched_domain that has SD_NUMA set when calculating
the allowed imbalance between NUMA nodes. This is wrong and
inconsistent with find_idlest_group(), which already uses the SD_NUMA
domain's span weight. This patch passes env->sd->span_weight so both
paths apply the same threshold.

Fixes: 7d2b5dd0bcc4 ("sched/numa: Allow a floating imbalance between NUMA nodes")
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 6e476f6d9435..0a969affca76 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9397,7 +9397,7 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
 		/* Consider allowing a small imbalance between NUMA groups */
 		if (env->sd->flags & SD_NUMA) {
 			env->imbalance = adjust_numa_imbalance(env->imbalance,
-				busiest->sum_nr_running, busiest->group_weight);
+				busiest->sum_nr_running, env->sd->span_weight);
 		}
 
 		return;
-- 
2.31.1
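
For context, the helper whose input this changes is adjust_numa_imbalance().
The sketch below is roughly what it and allow_numa_imbalance() look like in
this version of the tree (names and the 25% / NUMA_IMBALANCE_MIN thresholds
are as recalled from kernel/sched/fair.c around v5.15; check the tree for
the exact code):

/*
 * Allow a NUMA imbalance only while the number of busy tasks is below
 * 25% of the CPUs in the domain described by dst_weight.
 */
static inline bool allow_numa_imbalance(int dst_running, int dst_weight)
{
	return (dst_running < (dst_weight >> 2));
}

#define NUMA_IMBALANCE_MIN 2

static inline long adjust_numa_imbalance(int imbalance,
				int dst_running, int dst_weight)
{
	/* Too busy: do not tolerate any imbalance */
	if (!allow_numa_imbalance(dst_running, dst_weight))
		return imbalance;

	/*
	 * Tolerate a small imbalance so a pair of communicating
	 * tasks can remain local while the destination is lightly
	 * loaded.
	 */
	if (imbalance <= NUMA_IMBALANCE_MIN)
		return 0;

	return imbalance;
}

The weight argument matters because the cutoff is dst_weight >> 2:
busiest->group_weight counts only the CPUs of the busiest group (a single
node at this domain level), while env->sd->span_weight counts the CPUs of
every node the SD_NUMA domain spans. find_idlest_group() already passes
sd->span_weight into this 25% check, so before this patch the two paths
tolerated imbalance at very different cutoffs.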