linux-kernel.vger.kernel.org archive mirror
* [PATCH V2 1/1] x86/mce/amd: Add PPIN support for AMD MCE
@ 2020-03-20 19:43 Wei Huang
  2020-03-20 21:09 ` Luck, Tony
  2020-03-21  9:39 ` Borislav Petkov
From: Wei Huang @ 2020-03-20 19:43 UTC (permalink / raw)
  To: linux-kernel
  Cc: bp, tony.luck, yazen.ghannam, tglx, mingo, hpa, linux-edac, x86,
	smita.koralahallichannabasappa, wei.huang2

Newer AMD CPUs support a feature called protected processor identification
number (PPIN). Its presence can be detected via CPUID_Fn80000008_EBX[23].
However, CPUID alone is not enough to read the processor serial number:
MSR_AMD_PPIN_CTL also needs to be configured properly. If for any reason
PPIN_EN in MSR_AMD_PPIN_CTL cannot be turned on, for example because the
feature was disabled in BIOS, the X86_FEATURE_AMD_PPIN capability bit has
to be cleared.
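
For context, the enable logic below relies on the two low bits of
MSR_AMD_PPIN_CTL, which mirror the Intel PPIN_CTL layout (the bit names
here are made up for illustration; the MSR addresses are the documented
ones):

	#define MSR_AMD_PPIN_CTL	0xc00102f0
	#define MSR_AMD_PPIN		0xc00102f1

	#define PPIN_CTL_LOCKOUT	BIT_ULL(0)	/* locks PPIN_CTL against further writes */
	#define PPIN_CTL_PPIN_EN	BIT_ULL(1)	/* makes MSR_AMD_PPIN readable */

So (val & 3UL) == 1UL means "locked with PPIN disabled": the serial
number cannot be made readable until the next reset.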

When the X86_FEATURE_AMD_PPIN capability is available, the MCE code can
read the serial number to keep track of the source of MCE errors.
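
For reference (not part of this patch), once PPIN_EN is set, the raw
values can be sanity-checked from userspace through the msr driver
(CONFIG_X86_MSR); a minimal, illustration-only reader for CPU 0:

	#include <fcntl.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		uint64_t ctl, ppin;
		/* Needs root and a loaded msr module (modprobe msr). */
		int fd = open("/dev/cpu/0/msr", O_RDONLY);

		if (fd < 0)
			return 1;
		/* The msr device reads 8 bytes at offset == MSR address. */
		if (pread(fd, &ctl,  sizeof(ctl),  0xc00102f0) != sizeof(ctl) ||
		    pread(fd, &ppin, sizeof(ppin), 0xc00102f1) != sizeof(ppin))
			return 1;
		printf("PPIN_CTL=%#llx PPIN=%#llx\n",
		       (unsigned long long)ctl, (unsigned long long)ppin);
		close(fd);
		return 0;
	}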

Co-developed-by: Smita Koralahalli Channabasappa <smita.koralahallichannabasappa@amd.com>
Signed-off-by: Smita Koralahalli Channabasappa <smita.koralahallichannabasappa@amd.com>
Signed-off-by: Wei Huang <wei.huang2@amd.com>
Acked-by: Tony Luck <tony.luck@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: Yazen Ghannam <yazen.ghannam@amd.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: linux-edac <linux-edac@vger.kernel.org>
Cc: x86-ml <x86@kernel.org>
---
 arch/x86/include/asm/cpufeatures.h |  1 +
 arch/x86/kernel/cpu/amd.c          | 26 ++++++++++++++++++++++++++
 arch/x86/kernel/cpu/mce/core.c     |  2 ++
 3 files changed, 29 insertions(+)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index f3327cb56edf..4b263ffb793b 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -299,6 +299,7 @@
 #define X86_FEATURE_AMD_IBRS		(13*32+14) /* "" Indirect Branch Restricted Speculation */
 #define X86_FEATURE_AMD_STIBP		(13*32+15) /* "" Single Thread Indirect Branch Predictors */
 #define X86_FEATURE_AMD_STIBP_ALWAYS_ON	(13*32+17) /* "" Single Thread Indirect Branch Predictors always-on preferred */
+#define X86_FEATURE_AMD_PPIN		(13*32+23) /* Protected Processor Inventory Number */
 #define X86_FEATURE_AMD_SSBD		(13*32+24) /* "" Speculative Store Bypass Disable */
 #define X86_FEATURE_VIRT_SSBD		(13*32+25) /* Virtualized Speculative Store Bypass Disable */
 #define X86_FEATURE_AMD_SSB_NO		(13*32+26) /* "" Speculative Store Bypass is fixed in hardware. */
diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index 1f875fbe1384..9176db4be69b 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -394,6 +394,31 @@ static void amd_detect_cmp(struct cpuinfo_x86 *c)
 	per_cpu(cpu_llc_id, cpu) = c->phys_proc_id;
 }
 
+static void amd_detect_ppin(struct cpuinfo_x86 *c)
+{
+	unsigned long long val;
+
+	if (cpu_has(c, X86_FEATURE_AMD_PPIN)) {
+		/* Turn off now until MSR is properly configured */
+		clear_cpu_cap(c, X86_FEATURE_AMD_PPIN);
+
+		if (rdmsrl_safe(MSR_AMD_PPIN_CTL, &val))
+			return;
+
+		if ((val & 3UL) == 1UL)
+			return;
+
+		if (!(val & 2UL)) {
+			wrmsrl_safe(MSR_AMD_PPIN_CTL,  val | 2UL);
+			rdmsrl_safe(MSR_AMD_PPIN_CTL, &val);
+		}
+
+		/* MSR_AMD_PPIN_CTL[PPIN_EN] bit is 1, turn feature back on */
+		if (val & 2UL)
+			set_cpu_cap(c, X86_FEATURE_AMD_PPIN);
+	}
+}
+
 u16 amd_get_nb_id(int cpu)
 {
 	return per_cpu(cpu_llc_id, cpu);
@@ -941,6 +966,7 @@ static void init_amd(struct cpuinfo_x86 *c)
 	amd_detect_cmp(c);
 	amd_get_topology(c);
 	srat_detect_node(c);
+	amd_detect_ppin(c);
 
 	init_amd_cacheinfo(c);
 
diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index 2c4f949611e4..57347e899575 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -142,6 +142,8 @@ void mce_setup(struct mce *m)
 
 	if (this_cpu_has(X86_FEATURE_INTEL_PPIN))
 		rdmsrl(MSR_PPIN, m->ppin);
+	else if (this_cpu_has(X86_FEATURE_AMD_PPIN))
+		rdmsrl(MSR_AMD_PPIN, m->ppin);
 
 	m->microcode = boot_cpu_data.microcode;
 }
-- 
2.24.1



* RE: [PATCH V2 1/1] x86/mce/amd: Add PPIN support for AMD MCE
  2020-03-20 19:43 [PATCH V2 1/1] x86/mce/amd: Add PPIN support for AMD MCE Wei Huang
@ 2020-03-20 21:09 ` Luck, Tony
  2020-03-21  9:39 ` Borislav Petkov
From: Luck, Tony @ 2020-03-20 21:09 UTC (permalink / raw)
  To: Wei Huang, linux-kernel
  Cc: bp, yazen.ghannam, tglx, mingo, hpa, linux-edac, x86,
	smita.koralahallichannabasappa

Acked-by: Tony Luck <tony.luck@intel.com>

Still true for this new version.

-Tony



* Re: [PATCH V2 1/1] x86/mce/amd: Add PPIN support for AMD MCE
  2020-03-20 19:43 [PATCH V2 1/1] x86/mce/amd: Add PPIN support for AMD MCE Wei Huang
  2020-03-20 21:09 ` Luck, Tony
@ 2020-03-21  9:39 ` Borislav Petkov
From: Borislav Petkov @ 2020-03-21  9:39 UTC (permalink / raw)
  To: Wei Huang
  Cc: linux-kernel, tony.luck, yazen.ghannam, tglx, mingo, hpa,
	linux-edac, x86, smita.koralahallichannabasappa

On Fri, Mar 20, 2020 at 02:43:05PM -0500, Wei Huang wrote:
[...]

> +static void amd_detect_ppin(struct cpuinfo_x86 *c)
> +{
> +	unsigned long long val;
> +
> +	if (cpu_has(c, X86_FEATURE_AMD_PPIN)) {

Flip check:

	if (!cpu_has...
		return;

> +		/* Turn off now until MSR is properly configured */
> +		clear_cpu_cap(c, X86_FEATURE_AMD_PPIN);

What for? You can make the final decision once, at the end, instead of
toggling that bit here.

> +
> +		if (rdmsrl_safe(MSR_AMD_PPIN_CTL, &val))
> +			return;
> +
> +		if ((val & 3UL) == 1UL)
> +			return;
> +
> +		if (!(val & 2UL)) {
> +			wrmsrl_safe(MSR_AMD_PPIN_CTL,  val | 2UL);
> +			rdmsrl_safe(MSR_AMD_PPIN_CTL, &val);
> +		}

Those need comments explaining what they do, like in the Intel variant.

> +
> +		/* MSR_AMD_PPIN_CTL[PPIN_EN] bit is 1, turn feature back on */
> +		if (val & 2UL)
> +			set_cpu_cap(c, X86_FEATURE_AMD_PPIN);

No, keep the feature bit set and clear it only once you have determined
that it must be off, instead of clearing and setting it again.
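
IOW, make the decision once, at the end. A completely untested sketch
that also folds in the flipped check and the comments asked for above:

	static void amd_detect_ppin(struct cpuinfo_x86 *c)
	{
		unsigned long long val;

		if (!cpu_has(c, X86_FEATURE_AMD_PPIN))
			return;

		/* CPUID says PPIN exists; PPIN_CTL decides whether it is usable. */
		if (rdmsrl_safe(MSR_AMD_PPIN_CTL, &val))
			goto clear_ppin;

		/* PPIN is locked in disabled mode */
		if ((val & 3UL) == 1UL)
			goto clear_ppin;

		/* PPIN is disabled, but not locked: try to enable it */
		if (!(val & 2UL)) {
			wrmsrl_safe(MSR_AMD_PPIN_CTL,  val | 2UL);
			rdmsrl_safe(MSR_AMD_PPIN_CTL, &val);
		}

		/* Keep the capability bit only if PPIN_EN really stuck. */
		if (val & 2UL)
			return;

	clear_ppin:
		clear_cpu_cap(c, X86_FEATURE_AMD_PPIN);
	}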

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

