From: Thomas Gleixner
Subject: Re: [PATCH 3/4] V7 more sampling fun 3
In-Reply-To: <8fbdbe0dbc619f8c9d5f4cf7a1d2d4c8642f2ff3.1586801416.git.mgross@linux.intel.com>
Date: Tue, 14 Apr 2020 12:58:10 +0200
Message-ID: <87mu7enxkt.fsf@nanos.tec.linutronix.de>
To: speck@linutronix.de

Mark,

speck for mark gross writes:

> +static void __init srbds_select_mitigation(void)
> +{
> +	u64 ia32_cap;
> +
> +	if (!boot_cpu_has_bug(X86_BUG_SRBDS))
> +		return;
> +	/*
> +	 * Check to see if this is one of the MDS_NO systems supporting
> +	 * TSX that are only exposed to SRBDS when TSX is enabled.
> +	 */
> +	ia32_cap = x86_read_arch_cap_msr();
> +	if ((ia32_cap & ARCH_CAP_MDS_NO) && !boot_cpu_has(X86_FEATURE_RTM)) {

This does not work for the CPUs which have the IF_TSX thing because your
setup magic does not set X86_BUG_SRBDS for those. See below.

> +		srbds_mitigation = SRBDS_MITIGATION_NOT_AFFECTED_TSX_OFF;
> +		goto out;
> +	}
> +
> +	if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) {
> +		srbds_mitigation = SRBDS_MITIGATION_HYPERVISOR;
> +		goto out;
> +	}
> +
> +	if (!boot_cpu_has(X86_FEATURE_SRBDS_CTRL)) {
> +		srbds_mitigation = SRBDS_MITIGATION_UCODE_NEEDED;
> +		goto out;
> +	}
> +
> +	if (cpu_mitigations_off() || srbds_off) {
> +		if (srbds_mitigation != SRBDS_MITIGATION_NOT_AFFECTED_TSX_OFF)

That's pointless. You already jumped over this part in case of TSX off.

> +			srbds_mitigation = SRBDS_MITIGATION_OFF;
> +	}

The whole goto stuff can be completely avoided:
	if (!boot_cpu_has_bug(X86_BUG_SRBDS))
		return;

	if (boot_cpu_has_bug(X86_BUG_SRBDS_IF_TSX) && !boot_cpu_has(X86_FEATURE_RTM))
		srbds_mitigation = SRBDS_MITIGATION_NOT_AFFECTED_TSX_OFF;
	else if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
		srbds_mitigation = SRBDS_MITIGATION_HYPERVISOR;
	else if (!boot_cpu_has(X86_FEATURE_SRBDS_CTRL))
		srbds_mitigation = SRBDS_MITIGATION_UCODE_NEEDED;
	else if (cpu_mitigations_off() || srbds_off)
		srbds_mitigation = SRBDS_MITIGATION_OFF;

> +out:
> +	update_srbds_msr();
> +	pr_info("%s\n", srbds_strings[srbds_mitigation]);
> +}

> +#define SRBDS		BIT(0)
> +#define SRBDS_IF_TSX	BIT(1)
> +
> +static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
> +	VULNBL_INTEL_STEPPING(IVYBRIDGE,	X86_STEPPING_ANY,		SRBDS),
> +	VULNBL_INTEL_STEPPING(HASWELL,		X86_STEPPING_ANY,		SRBDS),
> +	VULNBL_INTEL_STEPPING(HASWELL_L,	X86_STEPPING_ANY,		SRBDS),
> +	VULNBL_INTEL_STEPPING(HASWELL_G,	X86_STEPPING_ANY,		SRBDS),
> +	VULNBL_INTEL_STEPPING(BROADWELL_G,	X86_STEPPING_ANY,		SRBDS),
> +	VULNBL_INTEL_STEPPING(BROADWELL,	X86_STEPPING_ANY,		SRBDS),
> +	VULNBL_INTEL_STEPPING(SKYLAKE_L,	X86_STEPPING_ANY,		SRBDS),
> +	VULNBL_INTEL_STEPPING(SKYLAKE,		X86_STEPPING_ANY,		SRBDS),
> +	VULNBL_INTEL_STEPPING(KABYLAKE_L,	X86_STEPPINGS(0, 0xA),		SRBDS),
> +	VULNBL_INTEL_STEPPING(KABYLAKE_L,	X86_STEPPINGS(0xB, 0xC),	SRBDS_IF_TSX),
> +	VULNBL_INTEL_STEPPING(KABYLAKE,		X86_STEPPINGS(0, 0xB),		SRBDS),
> +	VULNBL_INTEL_STEPPING(KABYLAKE,		X86_STEPPINGS(0xC, 0xD),	SRBDS_IF_TSX),
> +	{}
> +};
> +
>  static bool __init cpu_matches(unsigned long which, const struct x86_cpu_id *table)
>  {
>  	const struct x86_cpu_id *m = x86_match_cpu(table);
> @@ -1142,6 +1166,34 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
>  	     (ia32_cap & ARCH_CAP_TSX_CTRL_MSR)))
>  		setup_force_cpu_bug(X86_BUG_TAA);
>
> +	if (cpu_matches(SRBDS|SRBDS_IF_TSX, cpu_vuln_blacklist)) {
> +		/*
> +		 * Some parts on the list don't have RDRAND or RDSEED. Make sure
> +		 * they show as "Not affected".
> +		 */
> +		if (!cpu_has(c, X86_FEATURE_RDRAND) &&
> +		    !cpu_has(c, X86_FEATURE_RDSEED))
> +			goto srbds_not_affected;
> +		/*
> +		 * Parts in the blacklist that enumerate MDS_NO are only
> +		 * vulnerable if TSX can be used. To handle cases where TSX
> +		 * gets fused off, check to see if TSX is fused off and thus
> +		 * not affected.
> +		 *
> +		 * When running with up to date microcode, TSX_CTRL is only
> +		 * enumerated on parts where TSX is fused on.
> +		 * When running with microcode not supporting TSX_CTRL, we
> +		 * check for RTM.
> +		 */
> +		if ((ia32_cap & ARCH_CAP_MDS_NO) &&
> +		    !((ia32_cap & ARCH_CAP_TSX_CTRL_MSR) ||
> +		      cpu_has(c, X86_FEATURE_RTM)))

So you added SRBDS_IF_TSX and then you check for both and still have
that check for _all_ CPUs. Also the TSX_CTRL_MSR part is weird. That's
completely irrelevant. The only interesting part is X86_FEATURE_RTM.

What you really want is:

	VULNBL_INTEL_STEPPING(KABYLAKE_L,	X86_STEPPINGS(0xB, 0xC),	SRBDS | SRBDS_IF_TSX),
	VULNBL_INTEL_STEPPING(KABYLAKE,		X86_STEPPINGS(0, 0xB),		SRBDS),
	VULNBL_INTEL_STEPPING(KABYLAKE,		X86_STEPPINGS(0xC, 0xD),	SRBDS | SRBDS_IF_TSX),

	/*
	 * CPUs which have neither RDRAND nor RDSEED are not affected.
	 */
	if (cpu_has(c, X86_FEATURE_RDRAND) || cpu_has(c, X86_FEATURE_RDSEED)) {
		if (cpu_matches(SRBDS, cpu_vuln_blacklist)) {
			setup_force_cpu_bug(X86_BUG_SRBDS);
			/* These CPUs are only vulnerable when TSX is usable. */
			if (cpu_matches(SRBDS_IF_TSX, cpu_vuln_blacklist))
				setup_force_cpu_bug(X86_BUG_SRBDS_IF_TSX);
		}
	}

Hmm?

Thanks,

        tglx
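
P.S. In case the two-flag scheme is not obvious, below is a stand-alone
user space model of the matching logic. It is an illustrative sketch
only: the model numbers, the blacklist contents and the cpu_matches()
stand-in are simplified assumptions, not the real x86_match_cpu()
machinery. The point it demonstrates is that the IF_TSX entries carry
both flags, so a single cpu_matches(SRBDS, ...) query covers them and
the nested query only refines the TSX dependency.

	/* Sketch: how SRBDS/SRBDS_IF_TSX table matching composes the bug bits. */
	#include <stdbool.h>
	#include <stdio.h>

	#define SRBDS		(1UL << 0)
	#define SRBDS_IF_TSX	(1UL << 1)

	/* Simplified stand-in for a cpu_vuln_blacklist[] entry. */
	struct vuln_entry {
		unsigned int	model;	/* hypothetical model number */
		unsigned long	flags;
	};

	static const struct vuln_entry blacklist[] = {
		{ 0x3a, SRBDS },			/* "always affected" class  */
		{ 0x8e, SRBDS | SRBDS_IF_TSX },		/* "affected only with TSX" */
		{ 0, 0 }
	};

	/* Stand-in for cpu_matches(): does an entry for this model carry 'which'? */
	static bool cpu_matches(unsigned long which, unsigned int model)
	{
		const struct vuln_entry *e;

		for (e = blacklist; e->model; e++) {
			if (e->model == model && (e->flags & which))
				return true;
		}
		return false;
	}

	int main(void)
	{
		unsigned int model = 0x8e;	/* pretend KABYLAKE_L-like part */
		bool has_rdrand = true, has_rdseed = true, has_rtm = false;
		bool bug_srbds = false, bug_srbds_if_tsx = false;

		/* CPUs which have neither RDRAND nor RDSEED are not affected. */
		if (has_rdrand || has_rdseed) {
			if (cpu_matches(SRBDS, model)) {
				bug_srbds = true;
				/* Only vulnerable when TSX is usable. */
				if (cpu_matches(SRBDS_IF_TSX, model))
					bug_srbds_if_tsx = true;
			}
		}

		/* Mitigation selection then mirrors the goto-free chain above. */
		if (!bug_srbds)
			puts("Not affected");
		else if (bug_srbds_if_tsx && !has_rtm)
			puts("Not affected (TSX disabled)");
		else
			puts("Vulnerable, SRBDS_CTRL mitigation required");
		return 0;
	}

With the values above (IF_TSX part, RTM fused off) this prints
"Not affected (TSX disabled)", which is exactly the case the original
patch mishandled by never setting X86_BUG_SRBDS for those parts.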