From: Peter Zijlstra <peterz@infradead.org>
To: Zhang Rui <rui.zhang@intel.com>
Cc: tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	dave.hansen@linux.intel.com, hpa@zytor.com, x86@kernel.org,
	linux-kernel@vger.kernel.org, zhang.jia@linux.alibaba.com,
	len.brown@intel.com
Subject: Re: [RFC PATCH V2 0/1] x86: cpu topology fix and question on x86_max_cores
Date: Mon, 20 Feb 2023 12:08:14 +0100
Message-ID: <Y/NUni00nDuURT1H@hirez.programming.kicks-ass.net>
In-Reply-To: <20230220032856.661884-1-rui.zhang@intel.com>

On Mon, Feb 20, 2023 at 11:28:55AM +0800, Zhang Rui wrote:

> Questions on how to fix cpuinfo_x86.x86_max_cores
> -------------------------------------------------
> 
> Fixing x86_max_cores is more complex. The current kernel uses the
> following logic to derive x86_max_cores:
> 	x86_max_cores = cpus_in_a_package / smp_num_siblings
> But
> 1. There is a known bug in the CPUID.1F handling code, so cpus_in_a_package
>    can be bogus. To fix it, I will add CPUID.1F module-level support.
> 2. x86_max_cores is set and used inconsistently in the current kernel.
>    In short, smp_num_siblings/x86_max_cores
>    2.1 represent the maximum number of *addressable* threads/cores in a
>        core/package when retrieved via CPUID.1 and CPUID.4 on old platforms.
>        CPUID.1 EBX 23:16 "Maximum number of addressable IDs for logical
>        processors in this physical package".
>        CPUID.4 EAX 31:26 "Maximum number of addressable IDs for processor
>        cores in the physical package".
>    2.2 represent the maximum number of *possible* threads/cores in a
>        core/package when retrieved via CPUID.B/1F on non-hybrid platforms.
>        CPUID.B/1F EBX 15:0 "Number of logical processors at this level type.
>        The number reflects configuration as shipped by Intel".
>        For example, in calc_llc_size_per_core()
>           do_div(llc_size, c->x86_max_cores);
>        x86_max_cores is used as the maximum number of *possible* cores in a
>        package.
>    2.3 are used in conflicting ways on other vendors such as AMD, judging
>        from the code. I need help confirming the proper behavior for AMD.
>        For example, in amd_get_topology(),
>           c->x86_coreid_bits = get_count_order(c->x86_max_cores);
>        x86_max_cores is used as the maximum number of *addressable* cores
>        in a package, while in get_nbc_for_node(),
>           cores_per_node = (c->x86_max_cores * smp_num_siblings) / amd_get_nodes_per_socket();
>        x86_max_cores is used as the maximum number of *possible* cores in a
>        package.
> 3. Using
>       x86_max_cores = cpus_in_a_package / smp_num_siblings
>    to get the maximum number of *possible* cores in a package during boot
>    CPU bringup does not work on platforms with asymmetric cores, because,
>    for a given number of threads, we cannot tell how many of them are the
>    first (or only) thread of a core and how many are SMT siblings.
>    For example, on a platform with 6 P-cores and 8 E-cores there are 20
>    threads, and the division yields an x86_max_cores of 10, which is
>    clearly wrong: the package really has 14 cores.
> 
> Given the above situation, I have the following question, and any input
> is really appreciated.
> 
> Is this inconsistency a problem or not?
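
As a rough userspace illustration of the "addressable" vs. "as shipped"
counts described in 2.1/2.2 above, here is a minimal sketch using GCC's
<cpuid.h> helpers. It assumes CPUID.0B subleaf 0 is the SMT level and
subleaf 1 the core level, as on typical parts; it is illustrative only,
not kernel code:

#include <stdio.h>
#include <cpuid.h>

int main(void)
{
        unsigned int eax, ebx, ecx, edx;

        /* CPUID.1 EBX[23:16]: max *addressable* logical CPU IDs per package */
        __get_cpuid(1, &eax, &ebx, &ecx, &edx);
        unsigned int addr_threads = (ebx >> 16) & 0xff;

        /* CPUID.4 (subleaf 0) EAX[31:26]: max *addressable* core IDs - 1 */
        __get_cpuid_count(4, 0, &eax, &ebx, &ecx, &edx);
        unsigned int addr_cores = ((eax >> 26) & 0x3f) + 1;

        /* CPUID.0B EBX[15:0]: logical processors at this level type,
         * "as shipped": subleaf 0 -> threads per core, subleaf 1 ->
         * logical processors per package. */
        __get_cpuid_count(0x0b, 0, &eax, &ebx, &ecx, &edx);
        unsigned int smt_siblings = ebx & 0xffff;
        __get_cpuid_count(0x0b, 1, &eax, &ebx, &ecx, &edx);
        unsigned int pkg_threads = ebx & 0xffff;

        printf("addressable: %u threads, %u cores per package\n",
               addr_threads, addr_cores);
        printf("shipped:     %u threads, %u per core -> %u \"cores\"\n",
               pkg_threads, smt_siblings,
               smt_siblings ? pkg_threads / smt_siblings : 0);
        return 0;
}

On a hybrid part the last division is exactly the problematic
cpus_in_a_package / smp_num_siblings estimate from item 3.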

IIRC x86_max_cores specifically is only ever used in arch-specific code,
the PMU uncore drivers and things like that (grep shows MCE).

Also, perhaps you want to look at calculate_max_logical_packages(). That
has a comment about there not being heterogeneous systems :/
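
If memory serves, that function extrapolates from the boot CPU, roughly
ncpus = booted_cores * max-SMT-threads and then DIV_ROUND_UP(total_cpus,
ncpus). A standalone sketch with hypothetical numbers (not kernel code)
of why that extrapolation goes sideways once core types are mixed:

#include <stdio.h>

#define DIV_ROUND_UP(n, d)      (((n) + (d) - 1) / (d))

int main(void)
{
        /* Hypothetical 4-socket hybrid machine: each package has
         * 6 P-cores (SMT2) + 8 E-cores = 20 CPUs, 80 CPUs total. */
        unsigned int total_cpus = 80;
        unsigned int booted_cores = 14;         /* 6 + 8 */
        unsigned int max_smt_threads = 2;       /* from the P-cores */

        /* The extrapolation assumes 14 * 2 = 28 CPUs per package,
         * so 80 CPUs look like 3 packages instead of 4. */
        unsigned int ncpus = booted_cores * max_smt_threads;
        printf("estimated max packages: %u (real: 4)\n",
               DIV_ROUND_UP(total_cpus, ncpus));
        return 0;
}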

Anyway, the reason I went and had a look there is that I remember Thomas
and I spent entirely too much time trying to figure out a way to size an
array for the number of packages at boot time, and getting it wrong too
many times to recount.

If only there was a sane way to tell these things without actually
bringing everything online first :-(
