Date: Mon, 19 Oct 2020 15:13:30 +0200
From: Morten Rasmussen
To: Peter Zijlstra
Cc: Jonathan Cameron, linux-acpi@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
	x86@kernel.org, Len Brown, Greg Kroah-Hartman, Sudeep Holla,
	guohanjun@huawei.com, Will Deacon, linuxarm@huawei.com,
	Brice Goglin, valentin.schneider@arm.com
Subject: Re: [RFC PATCH] topology: Represent clusters of CPUs within a die.
Message-ID: <20201019131330.GD8004@e123083-lin>
References: <20201016152702.1513592-1-Jonathan.Cameron@huawei.com>
	<20201019103522.GK2628@hirez.programming.kicks-ass.net>
	<20201019123226.00006705@Huawei.com>
	<20201019125053.GM2628@hirez.programming.kicks-ass.net>
In-Reply-To: <20201019125053.GM2628@hirez.programming.kicks-ass.net>

On Mon, Oct 19, 2020 at 02:50:53PM +0200, Peter Zijlstra wrote:
> On Mon, Oct 19, 2020 at 01:32:26PM +0100, Jonathan Cameron wrote:
> > On Mon, 19 Oct 2020 12:35:22 +0200 Peter Zijlstra wrote:
> > > I'm confused by all of this. The core level is exactly what you seem
> > > to want.
> >
> > It's the level above the core, whether in a multi-threaded core or a
> > single-threaded core. This may correspond to the level at which caches
> > are shared (typically L3). Cores are already well represented via
> > thread_siblings and similar. Extra confusion is that the current
> > core_siblings (deprecated) sysfs interface actually reflects the
> > package level and ignores anything in between core and package (such
> > as die on x86).
>
> That seems wrong. core-mask should be whatever cores share L3. So on an
> Intel Core2-Quad (just to pick an example) you should have 4 CPUs in a
> package, but only 2 CPUs for the core-mask.
>
> It just so happens that L3 and package were the same for a long while in
> x86 land, although recent chips started breaking that trend.
>
> And I know nothing about the core-mask being deprecated; it's what the
> scheduler uses.
It's not going anywhere. Don't get confused between the user-space topology
and the scheduler topology; they are _not_ the same despite having similar
names for some things :-)

> So if your 'cluster' is a group of single cores (possibly with SMT) that
> do not share cache but have a faster cache connection and you want them
> to behave as-if they were a multi-core group that did share cache, then
> core-mask it is.

In the scheduler, yes. There is no core-mask exposed to user-space. We
have to be clear about whether we are discussing scheduler or user-space
topology :-)
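
To make the user-space side of that distinction concrete, here is a minimal
sketch (not part of the patch under discussion) that dumps the sysfs topology
masks for cpu0. It only assumes the standard files under
/sys/devices/system/cpu/cpu0/topology/; the deprecated core_siblings_list
carries the same contents as package_cpus_list, i.e. the package rather than
an LLC mask, which is the naming confusion discussed above.

/* Minimal illustration: print cpu0's user-space topology masks. */
#include <stdio.h>

int main(void)
{
	static const char *files[] = {
		"thread_siblings_list",	/* SMT siblings (legacy name)          */
		"core_cpus_list",	/* newer name for thread_siblings      */
		"core_siblings_list",	/* deprecated name, reflects the package */
		"package_cpus_list",	/* newer name for core_siblings        */
		"die_cpus_list",	/* die level, where it exists          */
	};
	char path[128], buf[256];

	for (unsigned int i = 0; i < sizeof(files) / sizeof(files[0]); i++) {
		snprintf(path, sizeof(path),
			 "/sys/devices/system/cpu/cpu0/topology/%s", files[i]);
		FILE *f = fopen(path, "r");
		if (!f)
			continue;	/* not every file exists on every kernel */
		if (fgets(buf, sizeof(buf), f))
			printf("%-24s %s", files[i], buf);
		fclose(f);
	}
	return 0;
}

None of these masks correspond to the scheduler's core-mask; the scheduler
builds its own levels (SMT/MC/DIE) internally, which is the other half of the
distinction above.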