From: Valentin Schneider
To: Srikar Dronamraju, Ingo Molnar, Peter Zijlstra, Michael Ellerman
Cc: LKML, Mel Gorman, Rik van Riel, Srikar Dronamraju, Thomas Gleixner,
    Vincent Guittot, Dietmar Eggemann, linuxppc-dev@lists.ozlabs.org,
    Nathan Lynch, Gautham R Shenoy, Geetika Moolchandani, Laurent Dufour
Subject: Re: [PATCH v2 1/2] sched/topology: Skip updating masks for non-online nodes
In-Reply-To: <20210701041552.112072-2-srikar@linux.vnet.ibm.com>
References: <20210701041552.112072-1-srikar@linux.vnet.ibm.com>
    <20210701041552.112072-2-srikar@linux.vnet.ibm.com>
Date: Thu, 01 Jul 2021 15:28:45 +0100
Message-ID: <875yxu85wi.mognet@arm.com>

On 01/07/21 09:45, Srikar Dronamraju wrote:
> @@ -1891,12 +1894,30 @@ void sched_init_numa(void)
>  void sched_domains_numa_masks_set(unsigned int cpu)
>  {
>  	int node = cpu_to_node(cpu);
> -	int i, j;
> +	int i, j, empty;
>  
> +	empty = cpumask_empty(sched_domains_numa_masks[0][node]);
>  	for (i = 0; i < sched_domains_numa_levels; i++) {
>  		for (j = 0; j < nr_node_ids; j++) {
> -			if (node_distance(j, node) <= sched_domains_numa_distance[i])
> +			if (!node_online(j))
> +				continue;
> +
> +			if (node_distance(j, node) <= sched_domains_numa_distance[i]) {
>  				cpumask_set_cpu(cpu, sched_domains_numa_masks[i][j]);
> +
> +				/*
> +				 * We skip updating numa_masks for offline
> +				 * nodes. However now that the node is
> +				 * finally online, CPUs that were added
> +				 * earlier, should now be accommodated into
> +				 * newly online node's numa mask.
> +				 */
> +				if (node != j && empty) {
> +					cpumask_or(sched_domains_numa_masks[i][node],
> +						   sched_domains_numa_masks[i][node],
> +						   sched_domains_numa_masks[0][j]);
> +				}
> +			}

Hmph, so we're playing games with masks of offline nodes - is that really
necessary? Your modification of sched_init_numa() still scans all of the
nodes (regardless of their online status) to build the distance map, and
that is never updated (sched_init_numa() is pretty much an __init
function). So AFAICT this is all to cope with topology_span_sane() not
applying 'cpu_map' to its masks.
That seemed fine to me back when I wrote it, but in light of having bogus
distance values for offline nodes, not so much... What about the below
instead?

---
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index b77ad49dc14f..c2d9caad4aa6 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -2075,6 +2075,7 @@ static struct sched_domain *build_sched_domain(struct sched_domain_topology_leve
 static bool topology_span_sane(struct sched_domain_topology_level *tl,
 			       const struct cpumask *cpu_map, int cpu)
 {
+	struct cpumask *intersect = sched_domains_tmpmask;
 	int i;
 
 	/* NUMA levels are allowed to overlap */
@@ -2090,14 +2091,17 @@ static bool topology_span_sane(struct sched_domain_topology_level *tl,
 	for_each_cpu(i, cpu_map) {
 		if (i == cpu)
 			continue;
+
 		/*
-		 * We should 'and' all those masks with 'cpu_map' to exactly
-		 * match the topology we're about to build, but that can only
-		 * remove CPUs, which only lessens our ability to detect
-		 * overlaps
+		 * We shouldn't have to bother with cpu_map here, unfortunately
+		 * some architectures (powerpc says hello) have to deal with
+		 * offline NUMA nodes reporting bogus distance values. This can
+		 * lead to funky NODE domain spans, but since those are offline
+		 * we can mask them out.
 		 */
+		cpumask_and(intersect, tl->mask(cpu), tl->mask(i));
 		if (!cpumask_equal(tl->mask(cpu), tl->mask(i)) &&
-		    cpumask_intersects(tl->mask(cpu), tl->mask(i)))
+		    cpumask_intersects(intersect, cpu_map))
 			return false;
 	}
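
FWIW, here's a rough userspace sketch of the difference (plain C, with
uint64_t bitmasks standing in for struct cpumask; none of these names are
kernel API, they're just made up for the example): two unequal NODE-level
spans that only overlap on CPUs of a still-offline node trip the strict
check, but pass once the overlap is masked with cpu_map.

/*
 * Rough illustration only: uint64_t bitmasks stand in for struct cpumask,
 * and none of these helpers are kernel API.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Old check: unequal spans that overlap at all are rejected. */
static bool spans_sane_strict(uint64_t span_a, uint64_t span_b)
{
	return span_a == span_b || !(span_a & span_b);
}

/*
 * New check: only overlap visible through cpu_map counts, so CPUs that
 * leaked into a span via bogus offline-node distances are ignored.
 */
static bool spans_sane_relaxed(uint64_t span_a, uint64_t span_b,
			       uint64_t cpu_map)
{
	return span_a == span_b || !(span_a & span_b & cpu_map);
}

int main(void)
{
	uint64_t cpu_map   = 0x0f;	/* CPUs 0-3 online, 4-7 on an offline node */
	uint64_t span_cpu0 = 0x33;	/* CPUs 0,1 plus offline 4,5 */
	uint64_t span_cpu2 = 0x3c;	/* CPUs 2,3 plus offline 4,5 */

	/* Unequal spans, overlapping only on the offline CPUs 4,5. */
	printf("strict : %s\n",
	       spans_sane_strict(span_cpu0, span_cpu2) ? "sane" : "rejected");
	printf("relaxed: %s\n",
	       spans_sane_relaxed(span_cpu0, span_cpu2, cpu_map) ? "sane" : "rejected");
	return 0;
}

The strict variant prints "rejected", the relaxed one prints "sane", which
is the behaviour the cpumask_and() + cpumask_intersects(intersect, cpu_map)
pair above is meant to give.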