From mboxrd@z Thu Jan  1 00:00:00 1970
From: Mel Gorman <mgorman@techsingularity.net>
To: Peter Zijlstra
Cc: Ingo Molnar, Vincent Guittot, Valentin Schneider, Aubrey Li,
 Barry Song, Mike Galbraith, Srikar Dronamraju, Gautham Shenoy,
 LKML, Mel Gorman
Subject: [PATCH v5 0/2] Adjust NUMA imbalance for multiple LLCs
Date: Thu, 3 Feb 2022 14:46:50 +0000
Message-Id: <20220203144652.12540-1-mgorman@techsingularity.net>
X-Mailer: git-send-email 2.31.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Mailing-List: linux-kernel@vger.kernel.org

Changelog since V4
o Scale imbalance based on the top domain that prefers siblings
o Keep allowed imbalance as 2 up to the point where LLCs can be
  overloaded

Changelog since V3
o Calculate imb_numa_nr for multiple SD_NUMA domains
o Restore behaviour where communicating pairs remain on the same node

Commit 7d2b5dd0bcc4 ("sched/numa: Allow a floating imbalance between
NUMA nodes") allowed an imbalance between NUMA nodes such that
communicating tasks would not be pulled apart by the load balancer.
This works fine when there is a 1:1 relationship between LLC and node,
but it can be suboptimal when a node contains multiple LLCs, as
independent tasks may prematurely be placed on CPUs sharing a cache.

The series addresses two problems -- inconsistent logic when allowing a
NUMA imbalance, and sub-optimal performance when there are many LLCs
per NUMA node.

Mel Gorman (2):
  sched/fair: Improve consistency of allowed NUMA balance calculations
  sched/fair: Adjust the allowed NUMA imbalance when SD_NUMA spans
    multiple LLCs

 include/linux/sched/topology.h |  1 +
 kernel/sched/fair.c            | 30 ++++++++++---------
 kernel/sched/topology.c        | 53 ++++++++++++++++++++++++++++++++++
 3 files changed, 71 insertions(+), 13 deletions(-)

-- 
2.31.1
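To make the idea concrete, here is a minimal user-space sketch of the
policy the cover letter describes. This is NOT the kernel implementation
from the series; the function name, the 25% cut-off, and the way the
threshold is derived from the LLC count are illustrative assumptions
only. The point it demonstrates: a small imbalance (keeping
communicating tasks on one node) is tolerated only while the destination
node is lightly loaded relative to its LLC count, so independent tasks
are not packed onto CPUs that share a cache.

```c
/*
 * Hypothetical sketch of an allowed-NUMA-imbalance check.
 * All names and constants here are assumptions for illustration,
 * not the actual kernel/sched code.
 */
static int allow_numa_imbalance(int dst_running, int node_cpus,
				int nr_llcs)
{
	/*
	 * Assumed threshold: tolerate imbalance until the node's LLCs
	 * could be overloaded, here modelled as 25% of the node's CPUs
	 * divided across its LLCs.
	 */
	int imb_numa_nr = node_cpus / (4 * nr_llcs);

	/* Always tolerate at least a communicating pair. */
	if (imb_numa_nr < 2)
		imb_numa_nr = 2;

	/* Imbalance is allowed while the destination is lightly used. */
	return dst_running < imb_numa_nr;
}
```

With this sketch, a 128-CPU node with 8 LLCs would tolerate an imbalance
of up to 4 tasks; beyond that the load balancer would spread tasks
normally.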