From: Brice Goglin <Brice.Goglin@inria.fr>
To: Jonathan Cameron <Jonathan.Cameron@huawei.com>,
	linux-acpi@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, x86@kernel.org,
	Len Brown <len.brown@intel.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Sudeep Holla <sudeep.holla@arm.com>,
	guohanjun@huawei.com, Will Deacon <will@kernel.org>,
	linuxarm@huawei.com
Subject: Re: [RFC PATCH] topology: Represent clusters of CPUs within a die.
Date: Mon, 19 Oct 2020 12:00:15 +0200
Message-ID: <942b4d68-8d19-66d8-c84b-d17eba837e9a@inria.fr>
In-Reply-To: <20201016152702.1513592-1-Jonathan.Cameron@huawei.com>

On 16/10/2020 17:27, Jonathan Cameron wrote:
> Both ACPI and DT provide the ability to describe additional layers of
> topology between that of individual cores and higher level constructs
> such as the level at which the last level cache is shared.
> In ACPI this can be represented in PPTT as a Processor Hierarchy
> Node Structure [1] that is the parent of the CPU cores and in turn
> has a parent Processor Hierarchy Node Structure representing
> a higher level of topology.
>
> For example, Kunpeng 920 has clusters of 4 CPUs.  These do not share
> any cache resources, but the interconnect topology is such that
> the cost to transfer ownership of a cacheline between CPUs within
> a cluster is lower than between CPUs in different clusters on the same
> die.   Hence, it can make sense to deliberately schedule threads
> sharing data to a single cluster.
>
> This patch simply exposes this information to userspace libraries
> like hwloc by providing cluster_cpus and related sysfs attributes.
> A PoC of hwloc support is at [2].
>
> Note this patch only handles the ACPI case.
>
> Special consideration is needed for SMT processors, where it is
> necessary to move 2 levels up the hierarchy from the leaf nodes
> (thus skipping the processor core level).
>
> Currently the ID provided is the offset of the Processor
> Hierarchy Node Structure within the PPTT.  Whilst this is unique,
> it is not terribly elegant, so alternative suggestions are welcome.
>
> Note that arm64 / ACPI does not provide any means of identifying
> a die level in the topology, but that may be unrelated to the cluster
> level.
>
> RFC questions:
> 1) Naming
> 2) Related to naming, do we want to represent all potential levels,
>    or is this enough?  On Kunpeng 920, the next level up from cluster
>    happens to be covered by LLC sharing, but in theory more than one
>    level of cluster description might be needed by some future system.
> 3) Do we need DT code in place? I'm not sure any DT-based arm64
>    systems would have enough complexity for this to be useful.
> 4) Other architectures?  Is this useful on x86 for example?


Hello Jonathan

Intel has CPUID registers to describe "tiles" and "modules" too (not
used yet as far as I know). The list of levels could become quite long
if any processor ever exposes those. If having multiple cluster levels
is possible, maybe it's time to think about introducing some sort of
generic levels:

cluster0_id = your cluster_id
cluster0_cpus/cpulist = your cluster_cpus/cpulist
cluster0_type = would optionally contain hardware-specific info such as
"module" or "tile" on x86
cluster_levels = 1
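
To make this concrete, here is a minimal userspace sketch of how a
consumer could enumerate such generic levels. All the attribute names
(cluster_levels, clusterN_cpulist) are hypothetical, just following the
proposal above; error handling is elided:

#include <stdio.h>

/* Sketch only: enumerate the hypothetical clusterN_* attributes of cpu0.
 * The cluster_levels and clusterN_cpulist names follow the proposal
 * above; they are not an existing kernel ABI.
 */
int main(void)
{
	char path[128], buf[256];
	int levels = 0;
	FILE *f;

	/* How many cluster levels does cpu0 expose? (hypothetical file) */
	f = fopen("/sys/devices/system/cpu/cpu0/topology/cluster_levels", "r");
	if (!f)
		return 1;
	if (fscanf(f, "%d", &levels) != 1)
		levels = 0;
	fclose(f);

	for (int i = 0; i < levels; i++) {
		/* clusterN_cpulist: CPUs sharing level N with cpu0 */
		snprintf(path, sizeof(path),
			 "/sys/devices/system/cpu/cpu0/topology/cluster%d_cpulist", i);
		f = fopen(path, "r");
		if (!f)
			continue;
		if (fgets(buf, sizeof(buf), f))
			printf("cluster level %d: cpus %s", i, buf);
		fclose(f);
	}
	return 0;
}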

hwloc already does something like this for some "rare" levels such as
s390 books/drawers (by the way, thanks a lot for the hwloc PoC, very
good job); we call them "Groups" rather than "cluster" as proposed above.
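
The cpulist files (both your cluster_cpus_list and the hypothetical
clusterN_cpulist above) use the usual kernel list format; a Kunpeng 920
cluster would presumably read something like "0-3". A minimal parsing
sketch, assuming well-formed input:

#include <stdio.h>
#include <stdlib.h>

/* Sketch only: parse a kernel cpulist string such as "0-3,8-11" and
 * mark the listed CPUs in a plain array.  Assumes well-formed input.
 */
static void parse_cpulist(const char *s, char *mask, long nmax)
{
	while (*s) {
		char *end;
		long first = strtol(s, &end, 10), last = first;

		/* A '-' introduces the end of a "first-last" range. */
		if (*end == '-')
			last = strtol(end + 1, &end, 10);
		for (long cpu = first; cpu <= last && cpu < nmax; cpu++)
			mask[cpu] = 1;
		/* Ranges are separated by commas. */
		if (*end != ',')
			break;
		s = end + 1;
	}
}

int main(void)
{
	char mask[64] = { 0 };

	/* e.g. the contents of a cluster_cpus_list file */
	parse_cpulist("0-3,8-11", mask, 64);
	for (int i = 0; i < 64; i++)
		if (mask[i])
			printf("cpu%d\n", i);
	return 0;
}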

However, I don't know whether the Linux scheduler would like that. Is it
better to have 10+ levels with static names, or a dynamic number of
levels?

Brice
