From: "Duran, Leo" <leo.duran@amd.com>
To: "'Thomas Gleixner'" <tglx@linutronix.de>,
"Suthikulpanit, Suravee" <Suravee.Suthikulpanit@amd.com>
Cc: Borislav Petkov <bp@alien8.de>, "x86@kernel.org" <x86@kernel.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"Ghannam, Yazen" <Yazen.Ghannam@amd.com>,
Peter Zijlstra <peterz@infradead.org>
Subject: RE: [PATCH 1/2] x86/CPU/AMD: Present package as die instead of socket
Date: Tue, 27 Jun 2017 15:48:55 +0000 [thread overview]
Message-ID: <DM5PR12MB12439895F6D748B8AC1A7997F9DC0@DM5PR12MB1243.namprd12.prod.outlook.com> (raw)
In-Reply-To: <alpine.DEB.2.20.1706271604550.1798@nanos>
Hi Thomas, et al,
Just a quick comment below.
Leo.
> -----Original Message-----
> From: Thomas Gleixner [mailto:tglx@linutronix.de]
> Sent: Tuesday, June 27, 2017 9:21 AM
> To: Suthikulpanit, Suravee <Suravee.Suthikulpanit@amd.com>
> Cc: Borislav Petkov <bp@alien8.de>; x86@kernel.org; linux-
> kernel@vger.kernel.org; Duran, Leo <leo.duran@amd.com>; Ghannam,
> Yazen <Yazen.Ghannam@amd.com>; Peter Zijlstra <peterz@infradead.org>
> Subject: Re: [PATCH 1/2] x86/CPU/AMD: Present package as die instead of
> socket
>
> On Tue, 27 Jun 2017, Suravee Suthikulpanit wrote:
> > On 6/27/17 17:48, Borislav Petkov wrote:
> > > On Tue, Jun 27, 2017 at 01:40:52AM -0500, Suravee Suthikulpanit wrote:
> > > > However, this is not the case on AMD family17h multi-die processor
> > > > platforms, which can have up to 4 dies per socket as shown in the
> > > > following system topology.
> > >
> > > So what exactly does that mean? A die is a package on ZN and you can
> > > have up to 4 packages on a physical socket?
> >
> > Yes. 4 packages (or 4 dies, or 4 NUMA nodes) in a socket.
>
> And why is this relevant at all?
>
> The kernel does not care about sockets. Sockets are electromechanical
> components and completely irrelevant.
>
> The kernel cares about :
>
> Threads - Single scheduling unit
>
> Cores - Contains one or more threads
>
> Packages - Contains one or more cores. The cores share L3.
>
> NUMA Node - Contains one or more Packages which share a memory
> controller.
>
> I'm not aware of x86 systems which have several Packages
> sharing a memory controller, so Package == NUMA Node
> (but I might be wrong here).
>
> Platform - Contains one or more Numa Nodes
[Duran, Leo]
That is my understanding of the intent as well. However, regarding the L3:
The sentence 'The cores share L3.' under 'Packages' may give the impression that all cores in a package share a single L3.
In our case, we define a Package as a group of cores sharing a memory controller, a 'Die' in hardware terms.
Also, it turns out that within a Package there may be separate groups of cores, each with its own L3 (in hardware terms we refer to such a group as a 'Core Complex').
Basically, in our case a Package may contain more than one L3 (i.e., in hardware terms, there may be more than one 'Core Complex' in a 'Die').
The important point is that all logical processors (threads) that share an L3 have a common "cpu_llc_id".
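To make the LLC point concrete, here is a minimal user-space sketch (mine, for illustration only, not part of the patch; it assumes the kernel's cacheinfo sysfs interface is present) that prints which CPUs share cpu0's L3. On Fam17h that list covers a single 'Core Complex', a strict subset of the Package/Die:

#include <stdio.h>

int main(void)
{
	char path[128], buf[256];
	int idx;

	/* Walk cpu0's cache leaves until open fails; report level-3 ones. */
	for (idx = 0; idx < 10; idx++) {
		FILE *f;
		int level = 0;

		snprintf(path, sizeof(path),
			 "/sys/devices/system/cpu/cpu0/cache/index%d/level",
			 idx);
		f = fopen(path, "r");
		if (!f)
			break;
		if (fscanf(f, "%d", &level) != 1)
			level = 0;
		fclose(f);
		if (level != 3)
			continue;

		snprintf(path, sizeof(path),
			 "/sys/devices/system/cpu/cpu0/cache/index%d/shared_cpu_list",
			 idx);
		f = fopen(path, "r");
		if (f) {
			if (fgets(buf, sizeof(buf), f))
				printf("CPUs sharing cpu0's L3: %s", buf);
			fclose(f);
		}
	}
	return 0;
}

Comparing that output against /sys/devices/system/cpu/cpu0/topology/core_siblings_list (the package siblings) shows the L3 domain being narrower than the Package on these parts.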
>
> All the kernel is interested in is the above and the NUMA Node distance so it
> knows about memory access latencies. No sockets, no MCMs, that's all
> completely useless for the scheduler.
>
> So if the current CPUID stuff gives you the same physical package ID for all
> packages in an MCM, then this needs to be fixed at the CPUID/ACPI/BIOS
> level and not hacked around in the kernel.
>
> The only reason why an MCM might need its own ID is when it contains
> infrastructure which is shared between the packages, but again that's
> irrelevant for the scheduler. That'd be only relevant to implement a driver for
> that shared infrastructure.
>
> Thanks,
>
> tglx
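P.S. On the CPUID point: on Fam17h the Die/Node identification does come straight out of CPUID, via leaf Fn8000_001E. A hedged sketch (it assumes an AMD processor implementing that leaf, and GCC's <cpuid.h>; the result is per logical processor, so pin the thread, e.g. with taskset, to ask a particular CPU):

#include <stdio.h>
#include <cpuid.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	/* __get_cpuid() returns 0 if leaf 0x8000001e is not implemented. */
	if (!__get_cpuid(0x8000001e, &eax, &ebx, &ecx, &edx)) {
		fprintf(stderr, "CPUID Fn8000_001E not supported\n");
		return 1;
	}

	/* Per the Fam17h documentation: ECX[7:0] = NodeId,
	 * ECX[10:8] + 1 = nodes per processor. */
	printf("NodeId %u, %u node(s) per processor\n",
	       ecx & 0xff, ((ecx >> 8) & 0x7) + 1);
	return 0;
}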