From: Mark Gross
Date: Mon, 16 Mar 2020 17:56:27 -0700
Subject: [MODERATED] [PATCH 2/4] v5 more sampling fun 2
Message-Id:
To: speck@linutronix.de

From: Mark Gross
Subject: [PATCH 2/4] x86/cpu: Clean up cpu_matches()

To make cpu_matches() reusable with alternative matching tables, make
cpu_matches() take an x86_cpu_id table as a parameter instead of
hard-coding cpu_vuln_whitelist.

Signed-off-by: Mark Gross
---
 arch/x86/kernel/cpu/common.c | 25 ++++++++++++++-----------
 1 file changed, 14 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index bed0cb83fe24..2bea1cc8dcb4 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1075,9 +1075,9 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
 	{}
 };
 
-static bool __init cpu_matches(unsigned long which)
+static bool __init cpu_matches(unsigned long which, const struct x86_cpu_id *table)
 {
-	const struct x86_cpu_id *m = x86_match_cpu(cpu_vuln_whitelist);
+	const struct x86_cpu_id *m = x86_match_cpu(table);
 
 	return m && !!(m->driver_data & which);
 }
@@ -1097,31 +1097,34 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 	u64 ia32_cap = x86_read_arch_cap_msr();
 
 	/* Set ITLB_MULTIHIT bug if cpu is not in the whitelist and not mitigated */
-	if (!cpu_matches(NO_ITLB_MULTIHIT) && !(ia32_cap & ARCH_CAP_PSCHANGE_MC_NO))
+	if (!cpu_matches(NO_ITLB_MULTIHIT, cpu_vuln_whitelist) &&
+	    !(ia32_cap & ARCH_CAP_PSCHANGE_MC_NO))
 		setup_force_cpu_bug(X86_BUG_ITLB_MULTIHIT);
 
-	if (cpu_matches(NO_SPECULATION))
+	if (cpu_matches(NO_SPECULATION, cpu_vuln_whitelist))
 		return;
 
 	setup_force_cpu_bug(X86_BUG_SPECTRE_V1);
 
-	if (!cpu_matches(NO_SPECTRE_V2))
+	if (!cpu_matches(NO_SPECTRE_V2, cpu_vuln_whitelist))
 		setup_force_cpu_bug(X86_BUG_SPECTRE_V2);
 
-	if (!cpu_matches(NO_SSB) && !(ia32_cap & ARCH_CAP_SSB_NO) &&
+	if (!cpu_matches(NO_SSB, cpu_vuln_whitelist) &&
+	    !(ia32_cap & ARCH_CAP_SSB_NO) &&
 	    !cpu_has(c, X86_FEATURE_AMD_SSB_NO))
 		setup_force_cpu_bug(X86_BUG_SPEC_STORE_BYPASS);
 
 	if (ia32_cap & ARCH_CAP_IBRS_ALL)
 		setup_force_cpu_cap(X86_FEATURE_IBRS_ENHANCED);
 
-	if (!cpu_matches(NO_MDS) && !(ia32_cap & ARCH_CAP_MDS_NO)) {
+	if (!cpu_matches(NO_MDS, cpu_vuln_whitelist) &&
+	    !(ia32_cap & ARCH_CAP_MDS_NO)) {
 		setup_force_cpu_bug(X86_BUG_MDS);
-		if (cpu_matches(MSBDS_ONLY))
+		if (cpu_matches(MSBDS_ONLY, cpu_vuln_whitelist))
 			setup_force_cpu_bug(X86_BUG_MSBDS_ONLY);
 	}
 
-	if (!cpu_matches(NO_SWAPGS))
+	if (!cpu_matches(NO_SWAPGS, cpu_vuln_whitelist))
 		setup_force_cpu_bug(X86_BUG_SWAPGS);
 
 	/*
@@ -1139,7 +1142,7 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 	     (ia32_cap & ARCH_CAP_TSX_CTRL_MSR)))
 		setup_force_cpu_bug(X86_BUG_TAA);
 
-	if (cpu_matches(NO_MELTDOWN))
+	if (cpu_matches(NO_MELTDOWN, cpu_vuln_whitelist))
 		return;
 
 	/* Rogue Data Cache Load? No! */
@@ -1148,7 +1151,7 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 
 	setup_force_cpu_bug(X86_BUG_CPU_MELTDOWN);
 
-	if (cpu_matches(NO_L1TF))
+	if (cpu_matches(NO_L1TF, cpu_vuln_whitelist))
 		return;
 
 	setup_force_cpu_bug(X86_BUG_L1TF);
-- 
2.17.1
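
Usage note (illustration only, not part of this patch): with the table
argument in place, a follow-up patch can add a second match table and
consult it with the same helper. The table name, the SOME_VULN bit, and
X86_BUG_SOME_VULN below are hypothetical placeholders, not code from
this series; only the call shape of cpu_matches() is taken from the
patch above.

/*
 * Hypothetical second table, matched with the same helper. Each
 * entry's driver_data would carry vulnerability bits, mirroring
 * how cpu_vuln_whitelist is keyed.
 */
static const __initconst struct x86_cpu_id cpu_vuln_blacklist[] = {
	/* affected models would be listed here */
	{}
};

	/* In cpu_set_bug_bits(): same call shape, different table. */
	if (cpu_matches(SOME_VULN, cpu_vuln_blacklist))
		setup_force_cpu_bug(X86_BUG_SOME_VULN);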