linux-kernel.vger.kernel.org archive mirror
* [PATCH v5 00/10] arm64: add system vulnerability sysfs entries
@ 2019-02-27  1:05 Jeremy Linton
  2019-02-27  1:05 ` [PATCH v5 01/10] arm64: Provide a command line to disable spectre_v2 mitigation Jeremy Linton
                   ` (11 more replies)
  0 siblings, 12 replies; 35+ messages in thread
From: Jeremy Linton @ 2019-02-27  1:05 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	Dave.Martin, shankerd, julien.thierry, mlangsdo, stefan.wahren,
	Andre.Przywara, linux-kernel, Jeremy Linton

Arm64 machines should display a human-readable status for their
vulnerability to speculative-execution attacks in
/sys/devices/system/cpu/vulnerabilities.

This series enables that behavior by providing the expected
functions. Those functions expose the cpu errata and feature
states, as well as whether firmware is responding appropriately,
to determine the overall machine status. This means that on a
heterogeneous machine we only claim the machine is mitigated
or safe if we are confident all booted cores are safe or
mitigated.
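
The whole-machine aggregation described above can be sketched in
userspace C (hypothetical names and states, not the kernel's actual
variables, which are per-vulnerability bools):

```c
#include <stdbool.h>
#include <assert.h>

/* Hypothetical per-core states mirroring the series' policy: the
 * machine is only reported safe/mitigated if every booted core is. */
enum core_state { CORE_SAFE, CORE_MITIGATED, CORE_VULNERABLE };

/* "Not affected" only if every booted core is safe. */
static bool machine_safe(const enum core_state *cores, int n)
{
	for (int i = 0; i < n; i++)
		if (cores[i] != CORE_SAFE)
			return false;
	return true;
}

/* "Mitigated" only if every booted core is safe or mitigated. */
static bool machine_mitigated(const enum core_state *cores, int n)
{
	for (int i = 0; i < n; i++)
		if (cores[i] == CORE_VULNERABLE)
			return false;
	return true;
}
```

A single vulnerable core is enough to flip the whole machine's report
to "Vulnerable", which is the conservative choice for big.LITTLE-style
systems.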

v4->v5:
	Revert the changes to remove the CONFIG_EXPERT hidden
	       options, but leave the detection paths building
	       without #ifdef wrappers. Also remove the
	       CONFIG_GENERIC_CPU_VULNERABILITIES #ifdefs
	       as we are 'select'ing the option in the Kconfig.
	       This allows us to keep all three variations of
	       the CONFIG/enable/disable paths without a lot of
	       (CONFIG_X || CONFIG_Y) checks.
	Various bits/pieces moved between the patches in an attempt
		to keep similar features/changes together.

v3->v4:
        Drop the patch which selectively exports sysfs entries
        Remove the CONFIG_EXPERT hidden options which allowed
               the kernel to be built without the vulnerability
               detection code.
        Pick Marc Z's patches which invert the white/black
               lists for spectrev2 and clean up the firmware
               detection logic.
        Document the existing kpti controls
        Add a nospectre_v2 option to boot time disable the
             mitigation

v2->v3:
        Remove "Unknown" states, replace with further blacklists
               and default vulnerable/not affected states.
        Add the ability for an arch port to selectively export
               sysfs vulnerabilities.

v1->v2:
        Add "Unknown" state to ABI/testing docs.
        Minor tweaks.

Jeremy Linton (6):
  arm64: Provide a command line to disable spectre_v2 mitigation
  arm64: add sysfs vulnerability show for meltdown
  arm64: Always enable spectrev2 vulnerability detection
  arm64: add sysfs vulnerability show for spectre v2
  arm64: Always enable ssb vulnerability detection
  arm64: add sysfs vulnerability show for speculative store bypass

Marc Zyngier (2):
  arm64: Advertise mitigation of Spectre-v2, or lack thereof
  arm64: Use firmware to detect CPUs that are not affected by Spectre-v2

Mian Yousaf Kaukab (2):
  arm64: add sysfs vulnerability show for spectre v1
  arm64: enable generic CPU vulnerabilities support

 .../admin-guide/kernel-parameters.txt         |   8 +-
 arch/arm64/Kconfig                            |   1 +
 arch/arm64/include/asm/cpufeature.h           |   4 -
 arch/arm64/kernel/cpu_errata.c                | 239 +++++++++++++-----
 arch/arm64/kernel/cpufeature.c                |  47 +++-
 5 files changed, 216 insertions(+), 83 deletions(-)

-- 
2.20.1


^ permalink raw reply	[flat|nested] 35+ messages in thread

* [PATCH v5 01/10] arm64: Provide a command line to disable spectre_v2 mitigation
  2019-02-27  1:05 [PATCH v5 00/10] arm64: add system vulnerability sysfs entries Jeremy Linton
@ 2019-02-27  1:05 ` Jeremy Linton
  2019-02-28 18:14   ` Suzuki K Poulose
  2019-03-01  6:54   ` Andre Przywara
  2019-02-27  1:05 ` [PATCH v5 02/10] arm64: add sysfs vulnerability show for spectre v1 Jeremy Linton
                   ` (10 subsequent siblings)
  11 siblings, 2 replies; 35+ messages in thread
From: Jeremy Linton @ 2019-02-27  1:05 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	Dave.Martin, shankerd, julien.thierry, mlangsdo, stefan.wahren,
	Andre.Przywara, linux-kernel, Jeremy Linton, Jonathan Corbet,
	linux-doc

There are various reasons, including benchmarking, to disable spectrev2
mitigation on a machine. Provide a command-line option to do so.

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: linux-doc@vger.kernel.org
---
 Documentation/admin-guide/kernel-parameters.txt |  8 ++++----
 arch/arm64/kernel/cpu_errata.c                  | 13 +++++++++++++
 2 files changed, 17 insertions(+), 4 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 858b6c0b9a15..4d4d6a9537ae 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -2842,10 +2842,10 @@
 			check bypass). With this option data leaks are possible
 			in the system.
 
-	nospectre_v2	[X86,PPC_FSL_BOOK3E] Disable all mitigations for the Spectre variant 2
-			(indirect branch prediction) vulnerability. System may
-			allow data leaks with this option, which is equivalent
-			to spectre_v2=off.
+	nospectre_v2	[X86,PPC_FSL_BOOK3E,ARM64] Disable all mitigations for
+			the Spectre variant 2 (indirect branch prediction)
+			vulnerability. System may allow data leaks with this
+			option.
 
 	nospec_store_bypass_disable
 			[HW] Disable all mitigations for the Speculative Store Bypass vulnerability
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 9950bb0cbd52..d2b2c69d31bb 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -220,6 +220,14 @@ static void qcom_link_stack_sanitization(void)
 		     : "=&r" (tmp));
 }
 
+static bool __nospectre_v2;
+static int __init parse_nospectre_v2(char *str)
+{
+	__nospectre_v2 = true;
+	return 0;
+}
+early_param("nospectre_v2", parse_nospectre_v2);
+
 static void
 enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
 {
@@ -231,6 +239,11 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
 	if (!entry->matches(entry, SCOPE_LOCAL_CPU))
 		return;
 
+	if (__nospectre_v2) {
+		pr_info_once("spectrev2 mitigation disabled by command line option\n");
+		return;
+	}
+
 	if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
 		return;
 
-- 
2.20.1
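
For reference, the early_param() pattern used above amounts to
whole-word scanning of the boot command line and latching a bool; a
minimal userspace approximation (the helper name is illustrative, not
kernel API):

```c
#include <stdbool.h>
#include <string.h>
#include <assert.h>

/* Userspace sketch of the "nospectre_v2" early_param above: latch true
 * if the token appears as a whole word (or with an "=value" suffix) on
 * the command line. */
static bool nospectre_v2_requested(const char *cmdline)
{
	const char *p = cmdline;

	while ((p = strstr(p, "nospectre_v2")) != NULL) {
		/* Accept only whole-word matches, as the kernel's
		 * parameter parsing does. */
		bool start_ok = (p == cmdline) || (p[-1] == ' ');
		char end = p[strlen("nospectre_v2")];

		if (start_ok && (end == '\0' || end == ' ' || end == '='))
			return true;
		p++;
	}
	return false;
}
```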



* [PATCH v5 02/10] arm64: add sysfs vulnerability show for spectre v1
  2019-02-27  1:05 [PATCH v5 00/10] arm64: add system vulnerability sysfs entries Jeremy Linton
  2019-02-27  1:05 ` [PATCH v5 01/10] arm64: Provide a command line to disable spectre_v2 mitigation Jeremy Linton
@ 2019-02-27  1:05 ` Jeremy Linton
  2019-02-28 18:29   ` Suzuki K Poulose
  2019-03-01  6:54   ` Andre Przywara
  2019-02-27  1:05 ` [PATCH v5 03/10] arm64: add sysfs vulnerability show for meltdown Jeremy Linton
                   ` (9 subsequent siblings)
  11 siblings, 2 replies; 35+ messages in thread
From: Jeremy Linton @ 2019-02-27  1:05 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	Dave.Martin, shankerd, julien.thierry, mlangsdo, stefan.wahren,
	Andre.Przywara, linux-kernel, Mian Yousaf Kaukab, Jeremy Linton

From: Mian Yousaf Kaukab <ykaukab@suse.de>

spectre v1 has been mitigated, and the mitigation is
always active.

Signed-off-by: Mian Yousaf Kaukab <ykaukab@suse.de>
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
---
 arch/arm64/kernel/cpu_errata.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index d2b2c69d31bb..ad58958becb6 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -755,3 +755,9 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 	{
 	}
 };
+
+ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr,
+		char *buf)
+{
+	return sprintf(buf, "Mitigation: __user pointer sanitization\n");
+}
-- 
2.20.1
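
The cpu_show_*() convention this patch follows is simply "format one
human-readable status line into buf and return its length"; a minimal
userspace sketch, with the device-attribute plumbing omitted and the
function name illustrative:

```c
#include <stdio.h>
#include <string.h>
#include <assert.h>

/* Userspace sketch of a cpu_show_*() handler: write one status line
 * into buf, return the number of bytes written (sysfs show callbacks
 * return a ssize_t the same way). */
static int show_spectre_v1(char *buf, size_t len)
{
	return snprintf(buf, len, "Mitigation: __user pointer sanitization\n");
}
```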



* [PATCH v5 03/10] arm64: add sysfs vulnerability show for meltdown
  2019-02-27  1:05 [PATCH v5 00/10] arm64: add system vulnerability sysfs entries Jeremy Linton
  2019-02-27  1:05 ` [PATCH v5 01/10] arm64: Provide a command line to disable spectre_v2 mitigation Jeremy Linton
  2019-02-27  1:05 ` [PATCH v5 02/10] arm64: add sysfs vulnerability show for spectre v1 Jeremy Linton
@ 2019-02-27  1:05 ` Jeremy Linton
  2019-02-28 18:33   ` Suzuki K Poulose
  2019-03-01  7:11   ` Andre Przywara
  2019-02-27  1:05 ` [PATCH v5 04/10] arm64: Advertise mitigation of Spectre-v2, or lack thereof Jeremy Linton
                   ` (8 subsequent siblings)
  11 siblings, 2 replies; 35+ messages in thread
From: Jeremy Linton @ 2019-02-27  1:05 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	Dave.Martin, shankerd, julien.thierry, mlangsdo, stefan.wahren,
	Andre.Przywara, linux-kernel, Jeremy Linton

Display the mitigation status if active; otherwise
assume the cpu is safe unless it lacks CSV3
and isn't in our whitelist.

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
---
 arch/arm64/kernel/cpufeature.c | 47 ++++++++++++++++++++++++++--------
 1 file changed, 37 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index f6d84e2c92fe..d31bd770acba 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -944,7 +944,7 @@ has_useable_cnp(const struct arm64_cpu_capabilities *entry, int scope)
 	return has_cpuid_feature(entry, scope);
 }
 
-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+static bool __meltdown_safe = true;
 static int __kpti_forced; /* 0: not forced, >0: forced on, <0: forced off */
 
 static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
@@ -963,6 +963,16 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
 		{ /* sentinel */ }
 	};
 	char const *str = "command line option";
+	bool meltdown_safe;
+
+	meltdown_safe = is_midr_in_range_list(read_cpuid_id(), kpti_safe_list);
+
+	/* Defer to CPU feature registers */
+	if (has_cpuid_feature(entry, scope))
+		meltdown_safe = true;
+
+	if (!meltdown_safe)
+		__meltdown_safe = false;
 
 	/*
 	 * For reasons that aren't entirely clear, enabling KPTI on Cavium
@@ -974,6 +984,11 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
 		__kpti_forced = -1;
 	}
 
+	if (!IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0)) {
+		pr_info_once("kernel page table isolation disabled by CONFIG\n");
+		return false;
+	}
+
 	/* Forced? */
 	if (__kpti_forced) {
 		pr_info_once("kernel page table isolation forced %s by %s\n",
@@ -985,14 +1000,10 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
 	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE))
 		return kaslr_offset() > 0;
 
-	/* Don't force KPTI for CPUs that are not vulnerable */
-	if (is_midr_in_range_list(read_cpuid_id(), kpti_safe_list))
-		return false;
-
-	/* Defer to CPU feature registers */
-	return !has_cpuid_feature(entry, scope);
+	return !meltdown_safe;
 }
 
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 static void
 kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
 {
@@ -1022,6 +1033,13 @@ kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
 
 	return;
 }
+#else
+static void
+kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
+{
+}
+#endif	/* CONFIG_UNMAP_KERNEL_AT_EL0 */
+
 
 static int __init parse_kpti(char *str)
 {
@@ -1035,7 +1053,6 @@ static int __init parse_kpti(char *str)
 	return 0;
 }
 early_param("kpti", parse_kpti);
-#endif	/* CONFIG_UNMAP_KERNEL_AT_EL0 */
 
 #ifdef CONFIG_ARM64_HW_AFDBM
 static inline void __cpu_enable_hw_dbm(void)
@@ -1286,7 +1303,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.field_pos = ID_AA64PFR0_EL0_SHIFT,
 		.min_field_value = ID_AA64PFR0_EL0_32BIT_64BIT,
 	},
-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 	{
 		.desc = "Kernel page table isolation (KPTI)",
 		.capability = ARM64_UNMAP_KERNEL_AT_EL0,
@@ -1302,7 +1318,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.matches = unmap_kernel_at_el0,
 		.cpu_enable = kpti_install_ng_mappings,
 	},
-#endif
 	{
 		/* FP/SIMD is not implemented */
 		.capability = ARM64_HAS_NO_FPSIMD,
@@ -2063,3 +2078,15 @@ static int __init enable_mrs_emulation(void)
 }
 
 core_initcall(enable_mrs_emulation);
+
+ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr,
+		char *buf)
+{
+	if (arm64_kernel_unmapped_at_el0())
+		return sprintf(buf, "Mitigation: KPTI\n");
+
+	if (__meltdown_safe)
+		return sprintf(buf, "Not affected\n");
+
+	return sprintf(buf, "Vulnerable\n");
+}
-- 
2.20.1
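
The decision logic this patch adds can be sketched in plain C: a core
counts as meltdown-safe if its MIDR is whitelisted or the ID registers
advertise CSV3, and the sysfs report then prefers "Mitigation: KPTI"
over the aggregated safe flag. Inputs stand in for the real probes;
names are illustrative:

```c
#include <stdbool.h>
#include <string.h>
#include <assert.h>

/* Sketch of the per-CPU check in unmap_kernel_at_el0(): safe if the
 * MIDR is in kpti_safe_list OR the cpuid feature (CSV3) is present. */
static bool cpu_meltdown_safe(bool in_kpti_safe_list, bool has_csv3)
{
	return in_kpti_safe_list || has_csv3;
}

/* Sketch of cpu_show_meltdown()'s reporting order: an active KPTI
 * mapping wins, otherwise report based on the aggregated safe flag. */
static const char *meltdown_status(bool kpti_enabled, bool all_cpus_safe)
{
	if (kpti_enabled)
		return "Mitigation: KPTI";
	if (all_cpus_safe)
		return "Not affected";
	return "Vulnerable";
}
```

Note that KPTI may be enabled for reasons other than Meltdown (e.g.
KASLR), which is why the mitigation check comes before the safe check.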



* [PATCH v5 04/10] arm64: Advertise mitigation of Spectre-v2, or lack thereof
  2019-02-27  1:05 [PATCH v5 00/10] arm64: add system vulnerability sysfs entries Jeremy Linton
                   ` (2 preceding siblings ...)
  2019-02-27  1:05 ` [PATCH v5 03/10] arm64: add sysfs vulnerability show for meltdown Jeremy Linton
@ 2019-02-27  1:05 ` Jeremy Linton
  2019-03-01  6:57   ` Andre Przywara
  2019-02-27  1:05 ` [PATCH v5 05/10] arm64: Use firmware to detect CPUs that are not affected by Spectre-v2 Jeremy Linton
                   ` (7 subsequent siblings)
  11 siblings, 1 reply; 35+ messages in thread
From: Jeremy Linton @ 2019-02-27  1:05 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	Dave.Martin, shankerd, julien.thierry, mlangsdo, stefan.wahren,
	Andre.Przywara, linux-kernel, Jeremy Linton

From: Marc Zyngier <marc.zyngier@arm.com>

We currently have a list of CPUs affected by Spectre-v2, for which
we check that the firmware implements ARCH_WORKAROUND_1. It turns
out that not all firmware implements the required mitigation,
and that we fail to let the user know about it.

Instead, let's slightly revamp our checks to rely on a whitelist
of cores that are known to be non-vulnerable, and let the user
know the status of the mitigation in the kernel log.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
[This makes more sense in front of the sysfs patch]
[Pick pieces of that patch into this and move it earlier]
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
---
 arch/arm64/kernel/cpu_errata.c | 108 +++++++++++++++++----------------
 1 file changed, 56 insertions(+), 52 deletions(-)

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index ad58958becb6..c8972255b365 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -131,9 +131,9 @@ static void __copy_hyp_vect_bpi(int slot, const char *hyp_vecs_start,
 	__flush_icache_range((uintptr_t)dst, (uintptr_t)dst + SZ_2K);
 }
 
-static void __install_bp_hardening_cb(bp_hardening_cb_t fn,
-				      const char *hyp_vecs_start,
-				      const char *hyp_vecs_end)
+static void install_bp_hardening_cb(bp_hardening_cb_t fn,
+				    const char *hyp_vecs_start,
+				    const char *hyp_vecs_end)
 {
 	static DEFINE_RAW_SPINLOCK(bp_lock);
 	int cpu, slot = -1;
@@ -177,23 +177,6 @@ static void __install_bp_hardening_cb(bp_hardening_cb_t fn,
 }
 #endif	/* CONFIG_KVM_INDIRECT_VECTORS */
 
-static void  install_bp_hardening_cb(const struct arm64_cpu_capabilities *entry,
-				     bp_hardening_cb_t fn,
-				     const char *hyp_vecs_start,
-				     const char *hyp_vecs_end)
-{
-	u64 pfr0;
-
-	if (!entry->matches(entry, SCOPE_LOCAL_CPU))
-		return;
-
-	pfr0 = read_cpuid(ID_AA64PFR0_EL1);
-	if (cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_CSV2_SHIFT))
-		return;
-
-	__install_bp_hardening_cb(fn, hyp_vecs_start, hyp_vecs_end);
-}
-
 #include <uapi/linux/psci.h>
 #include <linux/arm-smccc.h>
 #include <linux/psci.h>
@@ -228,31 +211,27 @@ static int __init parse_nospectre_v2(char *str)
 }
 early_param("nospectre_v2", parse_nospectre_v2);
 
-static void
-enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
+/*
+ * -1: No workaround
+ *  0: No workaround required
+ *  1: Workaround installed
+ */
+static int detect_harden_bp_fw(void)
 {
 	bp_hardening_cb_t cb;
 	void *smccc_start, *smccc_end;
 	struct arm_smccc_res res;
 	u32 midr = read_cpuid_id();
 
-	if (!entry->matches(entry, SCOPE_LOCAL_CPU))
-		return;
-
-	if (__nospectre_v2) {
-		pr_info_once("spectrev2 mitigation disabled by command line option\n");
-		return;
-	}
-
 	if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
-		return;
+		return -1;
 
 	switch (psci_ops.conduit) {
 	case PSCI_CONDUIT_HVC:
 		arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
 				  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
 		if ((int)res.a0 < 0)
-			return;
+			return -1;
 		cb = call_hvc_arch_workaround_1;
 		/* This is a guest, no need to patch KVM vectors */
 		smccc_start = NULL;
@@ -263,23 +242,23 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
 		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
 				  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
 		if ((int)res.a0 < 0)
-			return;
+			return -1;
 		cb = call_smc_arch_workaround_1;
 		smccc_start = __smccc_workaround_1_smc_start;
 		smccc_end = __smccc_workaround_1_smc_end;
 		break;
 
 	default:
-		return;
+		return -1;
 	}
 
 	if (((midr & MIDR_CPU_MODEL_MASK) == MIDR_QCOM_FALKOR) ||
 	    ((midr & MIDR_CPU_MODEL_MASK) == MIDR_QCOM_FALKOR_V1))
 		cb = qcom_link_stack_sanitization;
 
-	install_bp_hardening_cb(entry, cb, smccc_start, smccc_end);
+	install_bp_hardening_cb(cb, smccc_start, smccc_end);
 
-	return;
+	return 1;
 }
 #endif	/* CONFIG_HARDEN_BRANCH_PREDICTOR */
 
@@ -521,24 +500,49 @@ cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
 	CAP_MIDR_RANGE_LIST(midr_list)
 
 #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
-
 /*
- * List of CPUs where we need to issue a psci call to
- * harden the branch predictor.
+ * List of CPUs that do not need any Spectre-v2 mitigation at all.
  */
-static const struct midr_range arm64_bp_harden_smccc_cpus[] = {
-	MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
-	MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
-	MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
-	MIDR_ALL_VERSIONS(MIDR_CORTEX_A75),
-	MIDR_ALL_VERSIONS(MIDR_BRCM_VULCAN),
-	MIDR_ALL_VERSIONS(MIDR_CAVIUM_THUNDERX2),
-	MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR_V1),
-	MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR),
-	MIDR_ALL_VERSIONS(MIDR_NVIDIA_DENVER),
-	{},
+static const struct midr_range spectre_v2_safe_list[] = {
+	MIDR_ALL_VERSIONS(MIDR_CORTEX_A35),
+	MIDR_ALL_VERSIONS(MIDR_CORTEX_A53),
+	MIDR_ALL_VERSIONS(MIDR_CORTEX_A55),
+	{ /* sentinel */ }
 };
 
+static bool __maybe_unused
+check_branch_predictor(const struct arm64_cpu_capabilities *entry, int scope)
+{
+	int need_wa;
+
+	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
+
+	/* If the CPU has CSV2 set, we're safe */
+	if (cpuid_feature_extract_unsigned_field(read_cpuid(ID_AA64PFR0_EL1),
+						 ID_AA64PFR0_CSV2_SHIFT))
+		return false;
+
+	/* Alternatively, we have a list of unaffected CPUs */
+	if (is_midr_in_range_list(read_cpuid_id(), spectre_v2_safe_list))
+		return false;
+
+	/* Fallback to firmware detection */
+	need_wa = detect_harden_bp_fw();
+	if (!need_wa)
+		return false;
+
+	/* forced off */
+	if (__nospectre_v2) {
+		pr_info_once("spectrev2 mitigation disabled by command line option\n");
+		return false;
+	}
+
+	if (need_wa < 0)
+		pr_warn_once("ARM_SMCCC_ARCH_WORKAROUND_1 missing from firmware\n");
+
+	return (need_wa > 0);
+}
+
 #endif
 
 #ifdef CONFIG_HARDEN_EL2_VECTORS
@@ -717,8 +721,8 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
 	{
 		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
-		.cpu_enable = enable_smccc_arch_workaround_1,
-		ERRATA_MIDR_RANGE_LIST(arm64_bp_harden_smccc_cpus),
+		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
+		.matches = check_branch_predictor,
 	},
 #endif
 #ifdef CONFIG_HARDEN_EL2_VECTORS
-- 
2.20.1
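
The tri-state contract introduced by detect_harden_bp_fw() above
(-1: no workaround available, 0: not required, 1: workaround
installed) is then reduced by check_branch_predictor() to a per-CPU
capability bool. A hedged standalone sketch of that reduction (the
real code first consults CSV2 and the safe list, and also honours
nospectre_v2):

```c
#include <stdbool.h>
#include <assert.h>

/* Hypothetical reduction of the firmware tri-state to "does this CPU
 * need the ARM64_HARDEN_BRANCH_PREDICTOR capability". */
static bool needs_bp_hardening(int fw_result)
{
	if (fw_result == 0)	/* firmware says not required */
		return false;
	if (fw_result < 0)	/* vulnerable, no workaround to install */
		return false;	/* (the kernel also warns here) */
	return true;		/* workaround installed */
}
```

The asymmetry matters for sysfs: both 0 and -1 leave the capability
unset, but only -1 means the machine should later be reported
vulnerable.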



* [PATCH v5 05/10] arm64: Use firmware to detect CPUs that are not affected by Spectre-v2
  2019-02-27  1:05 [PATCH v5 00/10] arm64: add system vulnerability sysfs entries Jeremy Linton
                   ` (3 preceding siblings ...)
  2019-02-27  1:05 ` [PATCH v5 04/10] arm64: Advertise mitigation of Spectre-v2, or lack thereof Jeremy Linton
@ 2019-02-27  1:05 ` Jeremy Linton
  2019-03-01  6:58   ` Andre Przywara
  2019-02-27  1:05 ` [PATCH v5 06/10] arm64: Always enable spectrev2 vulnerability detection Jeremy Linton
                   ` (6 subsequent siblings)
  11 siblings, 1 reply; 35+ messages in thread
From: Jeremy Linton @ 2019-02-27  1:05 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	Dave.Martin, shankerd, julien.thierry, mlangsdo, stefan.wahren,
	Andre.Przywara, linux-kernel, Jeremy Linton

From: Marc Zyngier <marc.zyngier@arm.com>

The SMCCC ARCH_WORKAROUND_1 service can indicate that although the
firmware knows about the Spectre-v2 mitigation, this particular
CPU is not vulnerable, and it is thus not necessary to call
the firmware on this CPU.

Let's use this information to our benefit.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
---
 arch/arm64/kernel/cpu_errata.c | 32 +++++++++++++++++++++++---------
 1 file changed, 23 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index c8972255b365..77f021e78a28 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -230,22 +230,36 @@ static int detect_harden_bp_fw(void)
 	case PSCI_CONDUIT_HVC:
 		arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
 				  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
-		if ((int)res.a0 < 0)
+		switch ((int)res.a0) {
+		case 1:
+			/* Firmware says we're just fine */
+			return 0;
+		case 0:
+			cb = call_hvc_arch_workaround_1;
+			/* This is a guest, no need to patch KVM vectors */
+			smccc_start = NULL;
+			smccc_end = NULL;
+			break;
+		default:
 			return -1;
-		cb = call_hvc_arch_workaround_1;
-		/* This is a guest, no need to patch KVM vectors */
-		smccc_start = NULL;
-		smccc_end = NULL;
+		}
 		break;
 
 	case PSCI_CONDUIT_SMC:
 		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
 				  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
-		if ((int)res.a0 < 0)
+		switch ((int)res.a0) {
+		case 1:
+			/* Firmware says we're just fine */
+			return 0;
+		case 0:
+			cb = call_smc_arch_workaround_1;
+			smccc_start = __smccc_workaround_1_smc_start;
+			smccc_end = __smccc_workaround_1_smc_end;
+			break;
+		default:
 			return -1;
-		cb = call_smc_arch_workaround_1;
-		smccc_start = __smccc_workaround_1_smc_start;
-		smccc_end = __smccc_workaround_1_smc_end;
+		}
 		break;
 
 	default:
-- 
2.20.1
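
The new a0 handling above can be summarised as a small mapping from
the SMCCC ARCH_WORKAROUND_1 discovery result to the series' tri-state:
a0 == 1 now means "this CPU is not affected", a0 == 0 means "call the
workaround", and negative values mean "no workaround". Hypothetical
helper, not kernel code:

```c
#include <assert.h>

/* Map the firmware's ARCH_WORKAROUND_1 discovery result (res.a0) to
 * detect_harden_bp_fw()'s tri-state return value. */
static int fw_result_to_state(int a0)
{
	switch (a0) {
	case 1:			/* firmware says we're just fine */
		return 0;
	case 0:			/* workaround present, install it */
		return 1;
	default:		/* no workaround */
		return -1;
	}
}
```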



* [PATCH v5 06/10] arm64: Always enable spectrev2 vulnerability detection
  2019-02-27  1:05 [PATCH v5 00/10] arm64: add system vulnerability sysfs entries Jeremy Linton
                   ` (4 preceding siblings ...)
  2019-02-27  1:05 ` [PATCH v5 05/10] arm64: Use firmware to detect CPUs that are not affected by Spectre-v2 Jeremy Linton
@ 2019-02-27  1:05 ` Jeremy Linton
  2019-03-01  6:58   ` Andre Przywara
  2019-02-27  1:05 ` [PATCH v5 07/10] arm64: add sysfs vulnerability show for spectre v2 Jeremy Linton
                   ` (5 subsequent siblings)
  11 siblings, 1 reply; 35+ messages in thread
From: Jeremy Linton @ 2019-02-27  1:05 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	Dave.Martin, shankerd, julien.thierry, mlangsdo, stefan.wahren,
	Andre.Przywara, linux-kernel, Jeremy Linton

The sysfs patches need to display machine vulnerability
status regardless of kernel config. Prepare for that
by breaking out the vulnerability/mitigation detection
code from the logic which implements the mitigation.

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
---
 arch/arm64/kernel/cpu_errata.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 77f021e78a28..a27e1ee750e1 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -109,12 +109,12 @@ cpu_enable_trap_ctr_access(const struct arm64_cpu_capabilities *__unused)
 
 atomic_t arm64_el2_vector_last_slot = ATOMIC_INIT(-1);
 
-#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
 #include <asm/mmu_context.h>
 #include <asm/cacheflush.h>
 
 DEFINE_PER_CPU_READ_MOSTLY(struct bp_hardening_data, bp_hardening_data);
 
+
 #ifdef CONFIG_KVM_INDIRECT_VECTORS
 extern char __smccc_workaround_1_smc_start[];
 extern char __smccc_workaround_1_smc_end[];
@@ -270,11 +270,11 @@ static int detect_harden_bp_fw(void)
 	    ((midr & MIDR_CPU_MODEL_MASK) == MIDR_QCOM_FALKOR_V1))
 		cb = qcom_link_stack_sanitization;
 
-	install_bp_hardening_cb(cb, smccc_start, smccc_end);
+	if (IS_ENABLED(CONFIG_HARDEN_BRANCH_PREDICTOR))
+		install_bp_hardening_cb(cb, smccc_start, smccc_end);
 
 	return 1;
 }
-#endif	/* CONFIG_HARDEN_BRANCH_PREDICTOR */
 
 #ifdef CONFIG_ARM64_SSBD
 DEFINE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
@@ -513,7 +513,6 @@ cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
 	.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,			\
 	CAP_MIDR_RANGE_LIST(midr_list)
 
-#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
 /*
  * List of CPUs that do not need any Spectre-v2 mitigation at all.
  */
@@ -545,6 +544,11 @@ check_branch_predictor(const struct arm64_cpu_capabilities *entry, int scope)
 	if (!need_wa)
 		return false;
 
+	if (!IS_ENABLED(CONFIG_HARDEN_BRANCH_PREDICTOR)) {
+		pr_warn_once("spectrev2 mitigation disabled by configuration\n");
+		return false;
+	}
+
 	/* forced off */
 	if (__nospectre_v2) {
 		pr_info_once("spectrev2 mitigation disabled by command line option\n");
@@ -557,8 +561,6 @@ check_branch_predictor(const struct arm64_cpu_capabilities *entry, int scope)
 	return (need_wa > 0);
 }
 
-#endif
-
 #ifdef CONFIG_HARDEN_EL2_VECTORS
 
 static const struct midr_range arm64_harden_el2_vectors[] = {
@@ -732,13 +734,11 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 		ERRATA_MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
 	},
 #endif
-#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
 	{
 		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
 		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
 		.matches = check_branch_predictor,
 	},
-#endif
 #ifdef CONFIG_HARDEN_EL2_VECTORS
 	{
 		.desc = "EL2 vector hardening",
-- 
2.20.1



* [PATCH v5 07/10] arm64: add sysfs vulnerability show for spectre v2
  2019-02-27  1:05 [PATCH v5 00/10] arm64: add system vulnerability sysfs entries Jeremy Linton
                   ` (5 preceding siblings ...)
  2019-02-27  1:05 ` [PATCH v5 06/10] arm64: Always enable spectrev2 vulnerability detection Jeremy Linton
@ 2019-02-27  1:05 ` Jeremy Linton
  2019-03-01  6:59   ` Andre Przywara
  2019-02-27  1:05 ` [PATCH v5 08/10] arm64: Always enable ssb vulnerability detection Jeremy Linton
                   ` (4 subsequent siblings)
  11 siblings, 1 reply; 35+ messages in thread
From: Jeremy Linton @ 2019-02-27  1:05 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	Dave.Martin, shankerd, julien.thierry, mlangsdo, stefan.wahren,
	Andre.Przywara, linux-kernel, Jeremy Linton

Add code to track whether all the cores in the machine are
vulnerable, and whether all the vulnerable cores have been
mitigated.

Once we have that information we can add the sysfs stub and
provide an accurate view of what is known about the machine.

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
---
 arch/arm64/kernel/cpu_errata.c | 28 +++++++++++++++++++++++++++-
 1 file changed, 27 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index a27e1ee750e1..0f6e8f5d67bc 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -513,6 +513,10 @@ cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
 	.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,			\
 	CAP_MIDR_RANGE_LIST(midr_list)
 
+/* Track overall mitigation state. We are only mitigated if all cores are ok */
+static bool __hardenbp_enab = true;
+static bool __spectrev2_safe = true;
+
 /*
  * List of CPUs that do not need any Spectre-v2 mitigation at all.
  */
@@ -523,6 +527,10 @@ static const struct midr_range spectre_v2_safe_list[] = {
 	{ /* sentinel */ }
 };
 
+/*
+ * Track overall bp hardening for all heterogeneous cores in the machine.
+ * We are only considered "safe" if all booted cores are known safe.
+ */
 static bool __maybe_unused
 check_branch_predictor(const struct arm64_cpu_capabilities *entry, int scope)
 {
@@ -544,19 +552,25 @@ check_branch_predictor(const struct arm64_cpu_capabilities *entry, int scope)
 	if (!need_wa)
 		return false;
 
+	__spectrev2_safe = false;
+
 	if (!IS_ENABLED(CONFIG_HARDEN_BRANCH_PREDICTOR)) {
 		pr_warn_once("spectrev2 mitigation disabled by configuration\n");
+		__hardenbp_enab = false;
 		return false;
 	}
 
 	/* forced off */
 	if (__nospectre_v2) {
 		pr_info_once("spectrev2 mitigation disabled by command line option\n");
+		__hardenbp_enab = false;
 		return false;
 	}
 
-	if (need_wa < 0)
+	if (need_wa < 0) {
 		pr_warn_once("ARM_SMCCC_ARCH_WORKAROUND_1 missing from firmware\n");
+		__hardenbp_enab = false;
+	}
 
 	return (need_wa > 0);
 }
@@ -779,3 +793,15 @@ ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr,
 {
 	return sprintf(buf, "Mitigation: __user pointer sanitization\n");
 }
+
+ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr,
+		char *buf)
+{
+	if (__spectrev2_safe)
+		return sprintf(buf, "Not affected\n");
+
+	if (__hardenbp_enab)
+		return sprintf(buf, "Mitigation: Branch predictor hardening\n");
+
+	return sprintf(buf, "Vulnerable\n");
+}
-- 
2.20.1
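
The reporting order of cpu_show_spectre_v2() above ("safe on all
cores" beats "hardened on all cores" beats "Vulnerable") can be
sketched as a pure function over the two tracking bools; names are
illustrative:

```c
#include <stdbool.h>
#include <string.h>
#include <assert.h>

/* Sketch of the sysfs decision: __spectrev2_safe and __hardenbp_enab
 * both start true and are only ever cleared, so each reflects "true on
 * every booted core". */
static const char *spectre_v2_status(bool all_safe, bool all_hardened)
{
	if (all_safe)
		return "Not affected";
	if (all_hardened)
		return "Mitigation: Branch predictor hardening";
	return "Vulnerable";
}
```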



* [PATCH v5 08/10] arm64: Always enable ssb vulnerability detection
  2019-02-27  1:05 [PATCH v5 00/10] arm64: add system vulnerability sysfs entries Jeremy Linton
                   ` (6 preceding siblings ...)
  2019-02-27  1:05 ` [PATCH v5 07/10] arm64: add sysfs vulnerability show for spectre v2 Jeremy Linton
@ 2019-02-27  1:05 ` Jeremy Linton
  2019-03-01  7:02   ` Andre Przywara
  2019-02-27  1:05 ` [PATCH v5 09/10] arm64: add sysfs vulnerability show for speculative store bypass Jeremy Linton
                   ` (3 subsequent siblings)
  11 siblings, 1 reply; 35+ messages in thread
From: Jeremy Linton @ 2019-02-27  1:05 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	Dave.Martin, shankerd, julien.thierry, mlangsdo, stefan.wahren,
	Andre.Przywara, linux-kernel, Jeremy Linton

The ssb detection logic is necessary regardless of whether
the vulnerability mitigation code is built into the kernel.
Break it out so that the CONFIG option only controls the
mitigation logic and not the vulnerability detection.

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
---
 arch/arm64/include/asm/cpufeature.h |  4 ----
 arch/arm64/kernel/cpu_errata.c      | 11 +++++++----
 2 files changed, 7 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index dfcfba725d72..c2b60a021437 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -628,11 +628,7 @@ static inline int arm64_get_ssbd_state(void)
 #endif
 }
 
-#ifdef CONFIG_ARM64_SSBD
 void arm64_set_ssbd_mitigation(bool state);
-#else
-static inline void arm64_set_ssbd_mitigation(bool state) {}
-#endif
 
 extern int do_emulate_mrs(struct pt_regs *regs, u32 sys_reg, u32 rt);
 
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 0f6e8f5d67bc..5f5611d17dc1 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -276,7 +276,6 @@ static int detect_harden_bp_fw(void)
 	return 1;
 }
 
-#ifdef CONFIG_ARM64_SSBD
 DEFINE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
 
 int ssbd_state __read_mostly = ARM64_SSBD_KERNEL;
@@ -347,6 +346,7 @@ void __init arm64_enable_wa2_handling(struct alt_instr *alt,
 		*updptr = cpu_to_le32(aarch64_insn_gen_nop());
 }
 
+#ifdef CONFIG_ARM64_SSBD
 void arm64_set_ssbd_mitigation(bool state)
 {
 	if (this_cpu_has_cap(ARM64_SSBS)) {
@@ -371,6 +371,12 @@ void arm64_set_ssbd_mitigation(bool state)
 		break;
 	}
 }
+#else
+void arm64_set_ssbd_mitigation(bool state)
+{
+	pr_info_once("SSBD disabled by kernel configuration\n");
+}
+#endif	/* CONFIG_ARM64_SSBD */
 
 static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
 				    int scope)
@@ -468,7 +474,6 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
 
 	return required;
 }
-#endif	/* CONFIG_ARM64_SSBD */
 
 static void __maybe_unused
 cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
@@ -760,14 +765,12 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 		ERRATA_MIDR_RANGE_LIST(arm64_harden_el2_vectors),
 	},
 #endif
-#ifdef CONFIG_ARM64_SSBD
 	{
 		.desc = "Speculative Store Bypass Disable",
 		.capability = ARM64_SSBD,
 		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
 		.matches = has_ssbd_mitigation,
 	},
-#endif
 #ifdef CONFIG_ARM64_ERRATUM_1188873
 	{
 		/* Cortex-A76 r0p0 to r2p0 */
-- 
2.20.1



* [PATCH v5 09/10] arm64: add sysfs vulnerability show for speculative store bypass
  2019-02-27  1:05 [PATCH v5 00/10] arm64: add system vulnerability sysfs entries Jeremy Linton
                   ` (7 preceding siblings ...)
  2019-02-27  1:05 ` [PATCH v5 08/10] arm64: Always enable ssb vulnerability detection Jeremy Linton
@ 2019-02-27  1:05 ` Jeremy Linton
  2019-03-01  7:02   ` Andre Przywara
  2019-02-27  1:05 ` [PATCH v5 10/10] arm64: enable generic CPU vulnerabilities support Jeremy Linton
                   ` (2 subsequent siblings)
  11 siblings, 1 reply; 35+ messages in thread
From: Jeremy Linton @ 2019-02-27  1:05 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	Dave.Martin, shankerd, julien.thierry, mlangsdo, stefan.wahren,
	Andre.Przywara, linux-kernel, Jeremy Linton

Return status based on ssbd_state and the arm64 SSBS feature. If
the mitigation is disabled, or the firmware isn't responding, then
return the expected machine state based on a new blacklist of known
vulnerable cores.

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
---
 arch/arm64/kernel/cpu_errata.c | 43 ++++++++++++++++++++++++++++++++++
 1 file changed, 43 insertions(+)

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 5f5611d17dc1..e1b03f643799 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -279,6 +279,7 @@ static int detect_harden_bp_fw(void)
 DEFINE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
 
 int ssbd_state __read_mostly = ARM64_SSBD_KERNEL;
+static bool __ssb_safe = true;
 
 static const struct ssbd_options {
 	const char	*str;
@@ -387,6 +388,9 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
 
 	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
 
+	if (is_midr_in_range_list(read_cpuid_id(), entry->midr_range_list))
+		__ssb_safe = false;
+
 	if (this_cpu_has_cap(ARM64_SSBS)) {
 		required = false;
 		goto out_printmsg;
@@ -420,6 +424,7 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
 		ssbd_state = ARM64_SSBD_UNKNOWN;
 		return false;
 
+	/* machines with mixed mitigation requirements must not return this */
 	case SMCCC_RET_NOT_REQUIRED:
 		pr_info_once("%s mitigation not required\n", entry->desc);
 		ssbd_state = ARM64_SSBD_MITIGATED;
@@ -475,6 +480,16 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
 	return required;
 }
 
+/* known vulnerable cores */
+static const struct midr_range arm64_ssb_cpus[] = {
+	MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
+	MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
+	MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
+	MIDR_ALL_VERSIONS(MIDR_CORTEX_A75),
+	MIDR_ALL_VERSIONS(MIDR_CORTEX_A76),
+	{},
+};
+
 static void __maybe_unused
 cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
 {
@@ -770,6 +785,7 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 		.capability = ARM64_SSBD,
 		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
 		.matches = has_ssbd_mitigation,
+		.midr_range_list = arm64_ssb_cpus,
 	},
 #ifdef CONFIG_ARM64_ERRATUM_1188873
 	{
@@ -808,3 +824,30 @@ ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr,
 
 	return sprintf(buf, "Vulnerable\n");
 }
+
+ssize_t cpu_show_spec_store_bypass(struct device *dev,
+		struct device_attribute *attr, char *buf)
+{
+	/*
+	 * Two assumptions: first, that ssbd_state reflects the worst case
+	 * for heterogeneous machines; second, that if SSBS is supported,
+	 * it is supported by all cores.
+	 */
+	switch (ssbd_state) {
+	case ARM64_SSBD_MITIGATED:
+		return sprintf(buf, "Not affected\n");
+
+	case ARM64_SSBD_KERNEL:
+	case ARM64_SSBD_FORCE_ENABLE:
+		if (cpus_have_cap(ARM64_SSBS))
+			return sprintf(buf, "Not affected\n");
+		if (IS_ENABLED(CONFIG_ARM64_SSBD))
+			return sprintf(buf,
+			    "Mitigation: Speculative Store Bypass disabled\n");
+	}
+
+	if (__ssb_safe)
+		return sprintf(buf, "Not affected\n");
+
+	return sprintf(buf, "Vulnerable\n");
+}
-- 
2.20.1



* [PATCH v5 10/10] arm64: enable generic CPU vulnerabilities support
  2019-02-27  1:05 [PATCH v5 00/10] arm64: add system vulnerability sysfs entries Jeremy Linton
                   ` (8 preceding siblings ...)
  2019-02-27  1:05 ` [PATCH v5 09/10] arm64: add sysfs vulnerability show for speculative store bypass Jeremy Linton
@ 2019-02-27  1:05 ` Jeremy Linton
  2019-03-01  7:03   ` Andre Przywara
  2019-02-28 12:01 ` [PATCH v5 00/10] arm64: add system vulnerability sysfs entries Catalin Marinas
  2019-03-01 19:35 ` Stefan Wahren
  11 siblings, 1 reply; 35+ messages in thread
From: Jeremy Linton @ 2019-02-27  1:05 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	Dave.Martin, shankerd, julien.thierry, mlangsdo, stefan.wahren,
	Andre.Przywara, linux-kernel, Mian Yousaf Kaukab, Jeremy Linton

From: Mian Yousaf Kaukab <ykaukab@suse.de>

Enable CPU vulnerability show functions for spectre_v1, spectre_v2,
meltdown and store-bypass.

Signed-off-by: Mian Yousaf Kaukab <ykaukab@suse.de>
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
---
 arch/arm64/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index a4168d366127..be9872ee1d61 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -88,6 +88,7 @@ config ARM64
 	select GENERIC_CLOCKEVENTS
 	select GENERIC_CLOCKEVENTS_BROADCAST
 	select GENERIC_CPU_AUTOPROBE
+	select GENERIC_CPU_VULNERABILITIES
 	select GENERIC_EARLY_IOREMAP
 	select GENERIC_IDLE_POLL_SETUP
 	select GENERIC_IRQ_MULTI_HANDLER
-- 
2.20.1



* Re: [PATCH v5 00/10] arm64: add system vulnerability sysfs entries
  2019-02-27  1:05 [PATCH v5 00/10] arm64: add system vulnerability sysfs entries Jeremy Linton
                   ` (9 preceding siblings ...)
  2019-02-27  1:05 ` [PATCH v5 10/10] arm64: enable generic CPU vulnerabilities support Jeremy Linton
@ 2019-02-28 12:01 ` Catalin Marinas
  2019-03-01 19:35 ` Stefan Wahren
  11 siblings, 0 replies; 35+ messages in thread
From: Catalin Marinas @ 2019-02-28 12:01 UTC (permalink / raw)
  To: Jeremy Linton
  Cc: linux-arm-kernel, will.deacon, marc.zyngier, suzuki.poulose,
	Dave.Martin, shankerd, julien.thierry, mlangsdo, stefan.wahren,
	Andre.Przywara, linux-kernel

Hi Jeremy,

On Tue, Feb 26, 2019 at 07:05:34PM -0600, Jeremy Linton wrote:
> Jeremy Linton (6):
>   arm64: Provide a command line to disable spectre_v2 mitigation
>   arm64: add sysfs vulnerability show for meltdown
>   arm64: Always enable spectrev2 vulnerability detection
>   arm64: add sysfs vulnerability show for spectre v2
>   arm64: Always enable ssb vulnerability detection
>   arm64: add sysfs vulnerability show for speculative store bypass
> 
> Marc Zyngier (2):
>   arm64: Advertise mitigation of Spectre-v2, or lack thereof
>   arm64: Use firmware to detect CPUs that are not affected by Spectre-v2
> 
> Mian Yousaf Kaukab (2):
>   arm64: add sysfs vulnerability show for spectre v1
>   arm64: enable generic CPU vulnerabilities support

The patches look fine to me (I'm giving them some testing now). However,
it would be nice if we got the acks/reviewed-by tags from the people
that looked at the previous series (Andre, Suzuki, Julien). You haven't
included any tags in this series; I guess there were sufficient changes
not to carry them over.

If I get the acks by tomorrow, I'll queue them for 5.1, otherwise they'd
have to wait for the next merging window.

Thanks.

-- 
Catalin


* Re: [PATCH v5 01/10] arm64: Provide a command line to disable spectre_v2 mitigation
  2019-02-27  1:05 ` [PATCH v5 01/10] arm64: Provide a command line to disable spectre_v2 mitigation Jeremy Linton
@ 2019-02-28 18:14   ` Suzuki K Poulose
  2019-02-28 18:21     ` Catalin Marinas
  2019-03-01  6:54   ` Andre Przywara
  1 sibling, 1 reply; 35+ messages in thread
From: Suzuki K Poulose @ 2019-02-28 18:14 UTC (permalink / raw)
  To: jeremy.linton, linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, dave.martin,
	shankerd, julien.thierry, mlangsdo, stefan.wahren,
	andre.przywara, linux-kernel, corbet, linux-doc

Hi Jeremy

On 27/02/2019 01:05, Jeremy Linton wrote:
> There are various reasons, including benchmarking, to disable the spectrev2
> mitigation on a machine. Provide a command-line option to do so.
> 
> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
> Cc: Jonathan Corbet <corbet@lwn.net>
> Cc: linux-doc@vger.kernel.org


> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
> index 9950bb0cbd52..d2b2c69d31bb 100644
> --- a/arch/arm64/kernel/cpu_errata.c
> +++ b/arch/arm64/kernel/cpu_errata.c
> @@ -220,6 +220,14 @@ static void qcom_link_stack_sanitization(void)
>   		     : "=&r" (tmp));
>   }
>   
> +static bool __nospectre_v2;
> +static int __init parse_nospectre_v2(char *str)
> +{
> +	__nospectre_v2 = true;
> +	return 0;
> +}
> +early_param("nospectre_v2", parse_nospectre_v2);
> +
>   static void
>   enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
>   {
> @@ -231,6 +239,11 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
>   	if (!entry->matches(entry, SCOPE_LOCAL_CPU))
>   		return;
>   
> +	if (__nospectre_v2) {
> +		pr_info_once("spectrev2 mitigation disabled by command line option\n");
> +		return;
> +	}
> +

Could we not disable the "cap" altogether instead, rather than disabling the
workaround? Or do we need that information?

Cheers
Suzuki


* Re: [PATCH v5 01/10] arm64: Provide a command line to disable spectre_v2 mitigation
  2019-02-28 18:14   ` Suzuki K Poulose
@ 2019-02-28 18:21     ` Catalin Marinas
  2019-02-28 18:25       ` Suzuki K Poulose
  0 siblings, 1 reply; 35+ messages in thread
From: Catalin Marinas @ 2019-02-28 18:21 UTC (permalink / raw)
  To: Suzuki K Poulose
  Cc: jeremy.linton, linux-arm-kernel, will.deacon, marc.zyngier,
	dave.martin, shankerd, julien.thierry, mlangsdo, stefan.wahren,
	andre.przywara, linux-kernel, corbet, linux-doc

On Thu, Feb 28, 2019 at 06:14:34PM +0000, Suzuki K Poulose wrote:
> On 27/02/2019 01:05, Jeremy Linton wrote:
> > There are various reasons, including benchmarking, to disable the spectrev2
> > mitigation on a machine. Provide a command-line option to do so.
> > 
> > Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
> > Cc: Jonathan Corbet <corbet@lwn.net>
> > Cc: linux-doc@vger.kernel.org
> 
> 
> > diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
> > index 9950bb0cbd52..d2b2c69d31bb 100644
> > --- a/arch/arm64/kernel/cpu_errata.c
> > +++ b/arch/arm64/kernel/cpu_errata.c
> > @@ -220,6 +220,14 @@ static void qcom_link_stack_sanitization(void)
> >   		     : "=&r" (tmp));
> >   }
> > +static bool __nospectre_v2;
> > +static int __init parse_nospectre_v2(char *str)
> > +{
> > +	__nospectre_v2 = true;
> > +	return 0;
> > +}
> > +early_param("nospectre_v2", parse_nospectre_v2);
> > +
> >   static void
> >   enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
> >   {
> > @@ -231,6 +239,11 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
> >   	if (!entry->matches(entry, SCOPE_LOCAL_CPU))
> >   		return;
> > +	if (__nospectre_v2) {
> > +		pr_info_once("spectrev2 mitigation disabled by command line option\n");
> > +		return;
> > +	}
> > +
> 
> Could we not disable the "cap" altogether instead, rather than disabling the
> workaround? Or do we need that information?

There are a few ideas here but I think we settled on always reporting in
sysfs even if the mitigation is disabled in .config. So I guess we need
the "cap" around for the reporting part.

-- 
Catalin


* Re: [PATCH v5 01/10] arm64: Provide a command line to disable spectre_v2 mitigation
  2019-02-28 18:21     ` Catalin Marinas
@ 2019-02-28 18:25       ` Suzuki K Poulose
  0 siblings, 0 replies; 35+ messages in thread
From: Suzuki K Poulose @ 2019-02-28 18:25 UTC (permalink / raw)
  To: catalin.marinas
  Cc: jeremy.linton, linux-arm-kernel, will.deacon, marc.zyngier,
	dave.martin, shankerd, julien.thierry, mlangsdo, stefan.wahren,
	andre.przywara, linux-kernel, corbet, linux-doc



On 28/02/2019 18:21, Catalin Marinas wrote:
> On Thu, Feb 28, 2019 at 06:14:34PM +0000, Suzuki K Poulose wrote:
>> On 27/02/2019 01:05, Jeremy Linton wrote:
>>> There are various reasons, including benchmarking, to disable the spectrev2
>>> mitigation on a machine. Provide a command-line option to do so.
>>>
>>> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
>>> Cc: Jonathan Corbet <corbet@lwn.net>
>>> Cc: linux-doc@vger.kernel.org
>>
>>
>>> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
>>> index 9950bb0cbd52..d2b2c69d31bb 100644
>>> --- a/arch/arm64/kernel/cpu_errata.c
>>> +++ b/arch/arm64/kernel/cpu_errata.c
>>> @@ -220,6 +220,14 @@ static void qcom_link_stack_sanitization(void)
>>>    		     : "=&r" (tmp));
>>>    }
>>> +static bool __nospectre_v2;
>>> +static int __init parse_nospectre_v2(char *str)
>>> +{
>>> +	__nospectre_v2 = true;
>>> +	return 0;
>>> +}
>>> +early_param("nospectre_v2", parse_nospectre_v2);
>>> +
>>>    static void
>>>    enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
>>>    {
>>> @@ -231,6 +239,11 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
>>>    	if (!entry->matches(entry, SCOPE_LOCAL_CPU))
>>>    		return;
>>> +	if (__nospectre_v2) {
>>> +		pr_info_once("spectrev2 mitigation disabled by command line option\n");
>>> +		return;
>>> +	}
>>> +
>>
>> Could we not disable the "cap" altogether instead, rather than disabling the
>> workaround? Or do we need that information?
> 
> There are a few ideas here but I think we settled on always reporting in
> sysfs even if the mitigation is disabled in .config. So I guess we need
> the "cap" around for the reporting part.
> 

Thanks Catalin.

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>



* Re: [PATCH v5 02/10] arm64: add sysfs vulnerability show for spectre v1
  2019-02-27  1:05 ` [PATCH v5 02/10] arm64: add sysfs vulnerability show for spectre v1 Jeremy Linton
@ 2019-02-28 18:29   ` Suzuki K Poulose
  2019-03-01  6:54   ` Andre Przywara
  1 sibling, 0 replies; 35+ messages in thread
From: Suzuki K Poulose @ 2019-02-28 18:29 UTC (permalink / raw)
  To: jeremy.linton, linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, dave.martin,
	shankerd, julien.thierry, mlangsdo, stefan.wahren,
	andre.przywara, linux-kernel, ykaukab



On 27/02/2019 01:05, Jeremy Linton wrote:
> From: Mian Yousaf Kaukab <ykaukab@suse.de>
> 
> Spectre v1 has been mitigated, and the mitigation is
> always active.
> 
> Signed-off-by: Mian Yousaf Kaukab <ykaukab@suse.de>
> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
> ---
>   arch/arm64/kernel/cpu_errata.c | 6 ++++++
>   1 file changed, 6 insertions(+)
> 
> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
> index d2b2c69d31bb..ad58958becb6 100644
> --- a/arch/arm64/kernel/cpu_errata.c
> +++ b/arch/arm64/kernel/cpu_errata.c
> @@ -755,3 +755,9 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
>   	{
>   	}
>   };
> +
> +ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr,
> +		char *buf)
> +{
> +	return sprintf(buf, "Mitigation: __user pointer sanitization\n");
> +}
> 

minor nit: This could possibly have been in the cpufeature.c, where we keep
the spectre_v2 routine.

Either way,

Acked-by: Suzuki K Poulose <suzuki.poulose@arm.com>


* Re: [PATCH v5 03/10] arm64: add sysfs vulnerability show for meltdown
  2019-02-27  1:05 ` [PATCH v5 03/10] arm64: add sysfs vulnerability show for meltdown Jeremy Linton
@ 2019-02-28 18:33   ` Suzuki K Poulose
  2019-03-01  7:11   ` Andre Przywara
  1 sibling, 0 replies; 35+ messages in thread
From: Suzuki K Poulose @ 2019-02-28 18:33 UTC (permalink / raw)
  To: jeremy.linton, linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, dave.martin,
	shankerd, julien.thierry, mlangsdo, andre.przywara, linux-kernel



On 27/02/2019 01:05, Jeremy Linton wrote:
> Display the mitigation status if active; otherwise
> assume the CPU is safe unless it lacks CSV3
> and is not in our whitelist.
> 
> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>


* Re: [PATCH v5 01/10] arm64: Provide a command line to disable spectre_v2 mitigation
  2019-02-27  1:05 ` [PATCH v5 01/10] arm64: Provide a command line to disable spectre_v2 mitigation Jeremy Linton
  2019-02-28 18:14   ` Suzuki K Poulose
@ 2019-03-01  6:54   ` Andre Przywara
  1 sibling, 0 replies; 35+ messages in thread
From: Andre Przywara @ 2019-03-01  6:54 UTC (permalink / raw)
  To: Jeremy Linton, linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	Dave.Martin, shankerd, julien.thierry, mlangsdo, stefan.wahren,
	linux-kernel, Jonathan Corbet, linux-doc

Hi,

On 2/26/19 7:05 PM, Jeremy Linton wrote:
> There are various reasons, including benchmarking, to disable the spectrev2
> mitigation on a machine. Provide a command-line option to do so.
> 
> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>

Reviewed-by: Andre Przywara <andre.przywara@arm.com>

Cheers,
Andre.

> Cc: Jonathan Corbet <corbet@lwn.net>
> Cc: linux-doc@vger.kernel.org
> ---
>   Documentation/admin-guide/kernel-parameters.txt |  8 ++++----
>   arch/arm64/kernel/cpu_errata.c                  | 13 +++++++++++++
>   2 files changed, 17 insertions(+), 4 deletions(-)
> 
> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> index 858b6c0b9a15..4d4d6a9537ae 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -2842,10 +2842,10 @@
>   			check bypass). With this option data leaks are possible
>   			in the system.
>   
> -	nospectre_v2	[X86,PPC_FSL_BOOK3E] Disable all mitigations for the Spectre variant 2
> -			(indirect branch prediction) vulnerability. System may
> -			allow data leaks with this option, which is equivalent
> -			to spectre_v2=off.
> +	nospectre_v2	[X86,PPC_FSL_BOOK3E,ARM64] Disable all mitigations for
> +			the Spectre variant 2 (indirect branch prediction)
> +			vulnerability. System may allow data leaks with this
> +			option.
>   
>   	nospec_store_bypass_disable
>   			[HW] Disable all mitigations for the Speculative Store Bypass vulnerability
> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
> index 9950bb0cbd52..d2b2c69d31bb 100644
> --- a/arch/arm64/kernel/cpu_errata.c
> +++ b/arch/arm64/kernel/cpu_errata.c
> @@ -220,6 +220,14 @@ static void qcom_link_stack_sanitization(void)
>   		     : "=&r" (tmp));
>   }
>   
> +static bool __nospectre_v2;
> +static int __init parse_nospectre_v2(char *str)
> +{
> +	__nospectre_v2 = true;
> +	return 0;
> +}
> +early_param("nospectre_v2", parse_nospectre_v2);
> +
>   static void
>   enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
>   {
> @@ -231,6 +239,11 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
>   	if (!entry->matches(entry, SCOPE_LOCAL_CPU))
>   		return;
>   
> +	if (__nospectre_v2) {
> +		pr_info_once("spectrev2 mitigation disabled by command line option\n");
> +		return;
> +	}
> +
>   	if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
>   		return;
>   
> 


* Re: [PATCH v5 02/10] arm64: add sysfs vulnerability show for spectre v1
  2019-02-27  1:05 ` [PATCH v5 02/10] arm64: add sysfs vulnerability show for spectre v1 Jeremy Linton
  2019-02-28 18:29   ` Suzuki K Poulose
@ 2019-03-01  6:54   ` Andre Przywara
  1 sibling, 0 replies; 35+ messages in thread
From: Andre Przywara @ 2019-03-01  6:54 UTC (permalink / raw)
  To: Jeremy Linton, linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	Dave.Martin, shankerd, julien.thierry, mlangsdo, stefan.wahren,
	linux-kernel, Mian Yousaf Kaukab

Hi,

On 2/26/19 7:05 PM, Jeremy Linton wrote:
> From: Mian Yousaf Kaukab <ykaukab@suse.de>
> 
> Spectre v1 has been mitigated, and the mitigation is
> always active.
> 
> Signed-off-by: Mian Yousaf Kaukab <ykaukab@suse.de>
> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
> ---
>   arch/arm64/kernel/cpu_errata.c | 6 ++++++
>   1 file changed, 6 insertions(+)
> 
> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
> index d2b2c69d31bb..ad58958becb6 100644
> --- a/arch/arm64/kernel/cpu_errata.c
> +++ b/arch/arm64/kernel/cpu_errata.c
> @@ -755,3 +755,9 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
>   	{
>   	}
>   };
> +
> +ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr,
> +		char *buf)

w/s issue, but it's not critical:

Reviewed-by: Andre Przywara <andre.przywara@arm.com>

Cheers,
Andre

> +{
> +	return sprintf(buf, "Mitigation: __user pointer sanitization\n");
> +}
> 


* Re: [PATCH v5 04/10] arm64: Advertise mitigation of Spectre-v2, or lack thereof
  2019-02-27  1:05 ` [PATCH v5 04/10] arm64: Advertise mitigation of Spectre-v2, or lack thereof Jeremy Linton
@ 2019-03-01  6:57   ` Andre Przywara
  0 siblings, 0 replies; 35+ messages in thread
From: Andre Przywara @ 2019-03-01  6:57 UTC (permalink / raw)
  To: Jeremy Linton, linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	Dave.Martin, shankerd, julien.thierry, mlangsdo, stefan.wahren,
	linux-kernel

Hi,

On 2/26/19 7:05 PM, Jeremy Linton wrote:
> From: Marc Zyngier <marc.zyngier@arm.com>
> 
> We currently have a list of CPUs affected by Spectre-v2, for which
> we check that the firmware implements ARCH_WORKAROUND_1. It turns
> out that not all firmwares do implement the required mitigation,
> and that we fail to let the user know about it.
> 
> Instead, let's slightly revamp our checks, and rely on a whitelist
> of cores that are known to be non-vulnerable, and let the user know
> the status of the mitigation in the kernel log.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> [This makes more sense in front of the sysfs patch]
> [Pick pieces of that patch into this and move it earlier]
> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>

Indeed a whitelist is much better.

Reviewed-by: Andre Przywara <andre.przywara@arm.com>

Cheers,
Andre.

> ---
>   arch/arm64/kernel/cpu_errata.c | 108 +++++++++++++++++----------------
>   1 file changed, 56 insertions(+), 52 deletions(-)
> 
> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
> index ad58958becb6..c8972255b365 100644
> --- a/arch/arm64/kernel/cpu_errata.c
> +++ b/arch/arm64/kernel/cpu_errata.c
> @@ -131,9 +131,9 @@ static void __copy_hyp_vect_bpi(int slot, const char *hyp_vecs_start,
>   	__flush_icache_range((uintptr_t)dst, (uintptr_t)dst + SZ_2K);
>   }
>   
> -static void __install_bp_hardening_cb(bp_hardening_cb_t fn,
> -				      const char *hyp_vecs_start,
> -				      const char *hyp_vecs_end)
> +static void install_bp_hardening_cb(bp_hardening_cb_t fn,
> +				    const char *hyp_vecs_start,
> +				    const char *hyp_vecs_end)
>   {
>   	static DEFINE_RAW_SPINLOCK(bp_lock);
>   	int cpu, slot = -1;
> @@ -177,23 +177,6 @@ static void __install_bp_hardening_cb(bp_hardening_cb_t fn,
>   }
>   #endif	/* CONFIG_KVM_INDIRECT_VECTORS */
>   
> -static void  install_bp_hardening_cb(const struct arm64_cpu_capabilities *entry,
> -				     bp_hardening_cb_t fn,
> -				     const char *hyp_vecs_start,
> -				     const char *hyp_vecs_end)
> -{
> -	u64 pfr0;
> -
> -	if (!entry->matches(entry, SCOPE_LOCAL_CPU))
> -		return;
> -
> -	pfr0 = read_cpuid(ID_AA64PFR0_EL1);
> -	if (cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_CSV2_SHIFT))
> -		return;
> -
> -	__install_bp_hardening_cb(fn, hyp_vecs_start, hyp_vecs_end);
> -}
> -
>   #include <uapi/linux/psci.h>
>   #include <linux/arm-smccc.h>
>   #include <linux/psci.h>
> @@ -228,31 +211,27 @@ static int __init parse_nospectre_v2(char *str)
>   }
>   early_param("nospectre_v2", parse_nospectre_v2);
>   
> -static void
> -enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
> +/*
> + * -1: No workaround
> + *  0: No workaround required
> + *  1: Workaround installed
> + */
> +static int detect_harden_bp_fw(void)
>   {
>   	bp_hardening_cb_t cb;
>   	void *smccc_start, *smccc_end;
>   	struct arm_smccc_res res;
>   	u32 midr = read_cpuid_id();
>   
> -	if (!entry->matches(entry, SCOPE_LOCAL_CPU))
> -		return;
> -
> -	if (__nospectre_v2) {
> -		pr_info_once("spectrev2 mitigation disabled by command line option\n");
> -		return;
> -	}
> -
>   	if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
> -		return;
> +		return -1;
>   
>   	switch (psci_ops.conduit) {
>   	case PSCI_CONDUIT_HVC:
>   		arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
>   				  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
>   		if ((int)res.a0 < 0)
> -			return;
> +			return -1;
>   		cb = call_hvc_arch_workaround_1;
>   		/* This is a guest, no need to patch KVM vectors */
>   		smccc_start = NULL;
> @@ -263,23 +242,23 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
>   		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
>   				  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
>   		if ((int)res.a0 < 0)
> -			return;
> +			return -1;
>   		cb = call_smc_arch_workaround_1;
>   		smccc_start = __smccc_workaround_1_smc_start;
>   		smccc_end = __smccc_workaround_1_smc_end;
>   		break;
>   
>   	default:
> -		return;
> +		return -1;
>   	}
>   
>   	if (((midr & MIDR_CPU_MODEL_MASK) == MIDR_QCOM_FALKOR) ||
>   	    ((midr & MIDR_CPU_MODEL_MASK) == MIDR_QCOM_FALKOR_V1))
>   		cb = qcom_link_stack_sanitization;
>   
> -	install_bp_hardening_cb(entry, cb, smccc_start, smccc_end);
> +	install_bp_hardening_cb(cb, smccc_start, smccc_end);
>   
> -	return;
> +	return 1;
>   }
>   #endif	/* CONFIG_HARDEN_BRANCH_PREDICTOR */
>   
> @@ -521,24 +500,49 @@ cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
>   	CAP_MIDR_RANGE_LIST(midr_list)
>   
>   #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
> -
>   /*
> - * List of CPUs where we need to issue a psci call to
> - * harden the branch predictor.
> + * List of CPUs that do not need any Spectre-v2 mitigation at all.
>    */
> -static const struct midr_range arm64_bp_harden_smccc_cpus[] = {
> -	MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
> -	MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
> -	MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
> -	MIDR_ALL_VERSIONS(MIDR_CORTEX_A75),
> -	MIDR_ALL_VERSIONS(MIDR_BRCM_VULCAN),
> -	MIDR_ALL_VERSIONS(MIDR_CAVIUM_THUNDERX2),
> -	MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR_V1),
> -	MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR),
> -	MIDR_ALL_VERSIONS(MIDR_NVIDIA_DENVER),
> -	{},
> +static const struct midr_range spectre_v2_safe_list[] = {
> +	MIDR_ALL_VERSIONS(MIDR_CORTEX_A35),
> +	MIDR_ALL_VERSIONS(MIDR_CORTEX_A53),
> +	MIDR_ALL_VERSIONS(MIDR_CORTEX_A55),
> +	{ /* sentinel */ }
>   };
>   
> +static bool __maybe_unused
> +check_branch_predictor(const struct arm64_cpu_capabilities *entry, int scope)
> +{
> +	int need_wa;
> +
> +	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
> +
> +	/* If the CPU has CSV2 set, we're safe */
> +	if (cpuid_feature_extract_unsigned_field(read_cpuid(ID_AA64PFR0_EL1),
> +						 ID_AA64PFR0_CSV2_SHIFT))
> +		return false;
> +
> +	/* Alternatively, we have a list of unaffected CPUs */
> +	if (is_midr_in_range_list(read_cpuid_id(), spectre_v2_safe_list))
> +		return false;
> +
> +	/* Fallback to firmware detection */
> +	need_wa = detect_harden_bp_fw();
> +	if (!need_wa)
> +		return false;
> +
> +	/* forced off */
> +	if (__nospectre_v2) {
> +		pr_info_once("spectrev2 mitigation disabled by command line option\n");
> +		return false;
> +	}
> +
> +	if (need_wa < 0)
> +		pr_warn_once("ARM_SMCCC_ARCH_WORKAROUND_1 missing from firmware\n");
> +
> +	return (need_wa > 0);
> +}
> +
>   #endif
>   
>   #ifdef CONFIG_HARDEN_EL2_VECTORS
> @@ -717,8 +721,8 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
>   #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
>   	{
>   		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
> -		.cpu_enable = enable_smccc_arch_workaround_1,
> -		ERRATA_MIDR_RANGE_LIST(arm64_bp_harden_smccc_cpus),
> +		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
> +		.matches = check_branch_predictor,
>   	},
>   #endif
>   #ifdef CONFIG_HARDEN_EL2_VECTORS
> 


* Re: [PATCH v5 05/10] arm64: Use firmware to detect CPUs that are not affected by Spectre-v2
  2019-02-27  1:05 ` [PATCH v5 05/10] arm64: Use firmware to detect CPUs that are not affected by Spectre-v2 Jeremy Linton
@ 2019-03-01  6:58   ` Andre Przywara
  0 siblings, 0 replies; 35+ messages in thread
From: Andre Przywara @ 2019-03-01  6:58 UTC (permalink / raw)
  To: Jeremy Linton, linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	Dave.Martin, shankerd, julien.thierry, mlangsdo, stefan.wahren,
	linux-kernel

Hi,

On 2/26/19 7:05 PM, Jeremy Linton wrote:
> From: Marc Zyngier <marc.zyngier@arm.com>
> 
> The SMCCC ARCH_WORKAROUND_1 service can indicate that although the
> firmware knows about the Spectre-v2 mitigation, this particular
> CPU is not vulnerable, and it is thus not necessary to call
> the firmware on this CPU.
> 
> Let's use this information to our benefit.

Yes, that matches the firmware interface description.

> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>

Reviewed-by: Andre Przywara <andre.przywara@arm.com>

Cheers,
Andre.

> ---
>   arch/arm64/kernel/cpu_errata.c | 32 +++++++++++++++++++++++---------
>   1 file changed, 23 insertions(+), 9 deletions(-)
> 
> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
> index c8972255b365..77f021e78a28 100644
> --- a/arch/arm64/kernel/cpu_errata.c
> +++ b/arch/arm64/kernel/cpu_errata.c
> @@ -230,22 +230,36 @@ static int detect_harden_bp_fw(void)
>   	case PSCI_CONDUIT_HVC:
>   		arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
>   				  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
> -		if ((int)res.a0 < 0)
> +		switch ((int)res.a0) {
> +		case 1:
> +			/* Firmware says we're just fine */
> +			return 0;
> +		case 0:
> +			cb = call_hvc_arch_workaround_1;
> +			/* This is a guest, no need to patch KVM vectors */
> +			smccc_start = NULL;
> +			smccc_end = NULL;
> +			break;
> +		default:
>   			return -1;
> -		cb = call_hvc_arch_workaround_1;
> -		/* This is a guest, no need to patch KVM vectors */
> -		smccc_start = NULL;
> -		smccc_end = NULL;
> +		}
>   		break;
>   
>   	case PSCI_CONDUIT_SMC:
>   		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
>   				  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
> -		if ((int)res.a0 < 0)
> +		switch ((int)res.a0) {
> +		case 1:
> +			/* Firmware says we're just fine */
> +			return 0;
> +		case 0:
> +			cb = call_smc_arch_workaround_1;
> +			smccc_start = __smccc_workaround_1_smc_start;
> +			smccc_end = __smccc_workaround_1_smc_end;
> +			break;
> +		default:
>   			return -1;
> -		cb = call_smc_arch_workaround_1;
> -		smccc_start = __smccc_workaround_1_smc_start;
> -		smccc_end = __smccc_workaround_1_smc_end;
> +		}
>   		break;
>   
>   	default:
> 

* Re: [PATCH v5 06/10] arm64: Always enable spectrev2 vulnerability detection
  2019-02-27  1:05 ` [PATCH v5 06/10] arm64: Always enable spectrev2 vulnerability detection Jeremy Linton
@ 2019-03-01  6:58   ` Andre Przywara
  0 siblings, 0 replies; 35+ messages in thread
From: Andre Przywara @ 2019-03-01  6:58 UTC (permalink / raw)
  To: Jeremy Linton, linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	Dave.Martin, shankerd, julien.thierry, mlangsdo, stefan.wahren,
	linux-kernel

Hi,

On 2/26/19 7:05 PM, Jeremy Linton wrote:
> The sysfs patches need to display machine vulnerability
> status regardless of kernel config. Prepare for that
> by breaking out the vulnerability/mitigation detection
> code from the logic which implements the mitigation.
> 
> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
> ---
>   arch/arm64/kernel/cpu_errata.c | 16 ++++++++--------
>   1 file changed, 8 insertions(+), 8 deletions(-)
> 
> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
> index 77f021e78a28..a27e1ee750e1 100644
> --- a/arch/arm64/kernel/cpu_errata.c
> +++ b/arch/arm64/kernel/cpu_errata.c
> @@ -109,12 +109,12 @@ cpu_enable_trap_ctr_access(const struct arm64_cpu_capabilities *__unused)
>   
>   atomic_t arm64_el2_vector_last_slot = ATOMIC_INIT(-1);
>   
> -#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
>   #include <asm/mmu_context.h>
>   #include <asm/cacheflush.h>
>   
>   DEFINE_PER_CPU_READ_MOSTLY(struct bp_hardening_data, bp_hardening_data);
>   
> +

extra empty line

Apart from that picky and unimportant nit it looks alright, and it 
compiles both with and without CONFIG_HARDEN_BRANCH_PREDICTOR defined.

Reviewed-by: Andre Przywara <andre.przywara@arm.com>

Cheers,
Andre.

>   #ifdef CONFIG_KVM_INDIRECT_VECTORS
>   extern char __smccc_workaround_1_smc_start[];
>   extern char __smccc_workaround_1_smc_end[];
> @@ -270,11 +270,11 @@ static int detect_harden_bp_fw(void)
>   	    ((midr & MIDR_CPU_MODEL_MASK) == MIDR_QCOM_FALKOR_V1))
>   		cb = qcom_link_stack_sanitization;
>   
> -	install_bp_hardening_cb(cb, smccc_start, smccc_end);
> +	if (IS_ENABLED(CONFIG_HARDEN_BRANCH_PREDICTOR))
> +		install_bp_hardening_cb(cb, smccc_start, smccc_end);
>   
>   	return 1;
>   }
> -#endif	/* CONFIG_HARDEN_BRANCH_PREDICTOR */
>   
>   #ifdef CONFIG_ARM64_SSBD
>   DEFINE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
> @@ -513,7 +513,6 @@ cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
>   	.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,			\
>   	CAP_MIDR_RANGE_LIST(midr_list)
>   
> -#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
>   /*
>    * List of CPUs that do not need any Spectre-v2 mitigation at all.
>    */
> @@ -545,6 +544,11 @@ check_branch_predictor(const struct arm64_cpu_capabilities *entry, int scope)
>   	if (!need_wa)
>   		return false;
>   
> +	if (!IS_ENABLED(CONFIG_HARDEN_BRANCH_PREDICTOR)) {
> +		pr_warn_once("spectrev2 mitigation disabled by configuration\n");
> +		return false;
> +	}
> +
>   	/* forced off */
>   	if (__nospectre_v2) {
>   		pr_info_once("spectrev2 mitigation disabled by command line option\n");
> @@ -557,8 +561,6 @@ check_branch_predictor(const struct arm64_cpu_capabilities *entry, int scope)
>   	return (need_wa > 0);
>   }
>   
> -#endif
> -
>   #ifdef CONFIG_HARDEN_EL2_VECTORS
>   
>   static const struct midr_range arm64_harden_el2_vectors[] = {
> @@ -732,13 +734,11 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
>   		ERRATA_MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
>   	},
>   #endif
> -#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
>   	{
>   		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
>   		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
>   		.matches = check_branch_predictor,
>   	},
> -#endif
>   #ifdef CONFIG_HARDEN_EL2_VECTORS
>   	{
>   		.desc = "EL2 vector hardening",
> 

* Re: [PATCH v5 07/10] arm64: add sysfs vulnerability show for spectre v2
  2019-02-27  1:05 ` [PATCH v5 07/10] arm64: add sysfs vulnerability show for spectre v2 Jeremy Linton
@ 2019-03-01  6:59   ` Andre Przywara
  0 siblings, 0 replies; 35+ messages in thread
From: Andre Przywara @ 2019-03-01  6:59 UTC (permalink / raw)
  To: Jeremy Linton, linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	Dave.Martin, shankerd, julien.thierry, mlangsdo, stefan.wahren,
	linux-kernel

Hi,

On 2/26/19 7:05 PM, Jeremy Linton wrote:
> Add code to track whether all the cores in the machine are
> vulnerable, and whether all the vulnerable cores have been
> mitigated.
> 
> Once we have that information we can add the sysfs stub and
> provide an accurate view of what is known about the machine.
> 
> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
> ---
>   arch/arm64/kernel/cpu_errata.c | 28 +++++++++++++++++++++++++++-
>   1 file changed, 27 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
> index a27e1ee750e1..0f6e8f5d67bc 100644
> --- a/arch/arm64/kernel/cpu_errata.c
> +++ b/arch/arm64/kernel/cpu_errata.c
> @@ -513,6 +513,10 @@ cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
>   	.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,			\
>   	CAP_MIDR_RANGE_LIST(midr_list)
>   
> +/* Track overall mitigation state. We are only mitigated if all cores are ok */
> +static bool __hardenbp_enab = true;
> +static bool __spectrev2_safe = true;
> +
>   /*
>    * List of CPUs that do not need any Spectre-v2 mitigation at all.
>    */
> @@ -523,6 +527,10 @@ static const struct midr_range spectre_v2_safe_list[] = {
>   	{ /* sentinel */ }
>   };
>   
> +/*
> + * Track overall bp hardening for all heterogeneous cores in the machine.
> + * We are only considered "safe" if all booted cores are known safe.
> + */
>   static bool __maybe_unused
>   check_branch_predictor(const struct arm64_cpu_capabilities *entry, int scope)
>   {
> @@ -544,19 +552,25 @@ check_branch_predictor(const struct arm64_cpu_capabilities *entry, int scope)
>   	if (!need_wa)
>   		return false;
>   
> +	__spectrev2_safe = false;
> +
>   	if (!IS_ENABLED(CONFIG_HARDEN_BRANCH_PREDICTOR)) {
>   		pr_warn_once("spectrev2 mitigation disabled by configuration\n");
> +		__hardenbp_enab = false;
>   		return false;
>   	}
>   
>   	/* forced off */
>   	if (__nospectre_v2) {
>   		pr_info_once("spectrev2 mitigation disabled by command line option\n");
> +		__hardenbp_enab = false;
>   		return false;
>   	}
>   
> -	if (need_wa < 0)
> +	if (need_wa < 0) {
>   		pr_warn_once("ARM_SMCCC_ARCH_WORKAROUND_1 missing from firmware\n");
> +		__hardenbp_enab = false;
> +	}
>   
>   	return (need_wa > 0);
>   }
> @@ -779,3 +793,15 @@ ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr,
>   {
>   	return sprintf(buf, "Mitigation: __user pointer sanitization\n");
>   }
> +
> +ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr,
> +		char *buf)

w/s (whitespace) issue on the continuation line

Anyway:
Reviewed-by: Andre Przywara <andre.przywara@arm.com>

Cheers,
Andre.

> +{
> +	if (__spectrev2_safe)
> +		return sprintf(buf, "Not affected\n");
> +
> +	if (__hardenbp_enab)
> +		return sprintf(buf, "Mitigation: Branch predictor hardening\n");
> +
> +	return sprintf(buf, "Vulnerable\n");
> +}
> 

* Re: [PATCH v5 09/10] arm64: add sysfs vulnerability show for speculative store bypass
  2019-02-27  1:05 ` [PATCH v5 09/10] arm64: add sysfs vulnerability show for speculative store bypass Jeremy Linton
@ 2019-03-01  7:02   ` Andre Przywara
  2019-03-01 16:41     ` Jeremy Linton
  0 siblings, 1 reply; 35+ messages in thread
From: Andre Przywara @ 2019-03-01  7:02 UTC (permalink / raw)
  To: Jeremy Linton, linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	Dave.Martin, shankerd, julien.thierry, mlangsdo, stefan.wahren,
	linux-kernel

Hi,

On 2/26/19 7:05 PM, Jeremy Linton wrote:
> Return status based on ssbd_state and the arm64 SSBS feature. If
> the mitigation is disabled, or the firmware isn't responding then
> return the expected machine state based on a new blacklist of known
> vulnerable cores.
> 
> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
> ---
>   arch/arm64/kernel/cpu_errata.c | 43 ++++++++++++++++++++++++++++++++++
>   1 file changed, 43 insertions(+)
> 
> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
> index 5f5611d17dc1..e1b03f643799 100644
> --- a/arch/arm64/kernel/cpu_errata.c
> +++ b/arch/arm64/kernel/cpu_errata.c
> @@ -279,6 +279,7 @@ static int detect_harden_bp_fw(void)
>   DEFINE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
>   
>   int ssbd_state __read_mostly = ARM64_SSBD_KERNEL;
> +static bool __ssb_safe = true;
>   
>   static const struct ssbd_options {
>   	const char	*str;
> @@ -387,6 +388,9 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
>   
>   	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
>   
> +	if (is_midr_in_range_list(read_cpuid_id(), entry->midr_range_list))
> +		__ssb_safe = false;

Is that the only place where we set it to false?
What if the firmware reports that (at least) one core is vulnerable?

> +
>   	if (this_cpu_has_cap(ARM64_SSBS)) {
>   		required = false;
>   		goto out_printmsg;
> @@ -420,6 +424,7 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
>   		ssbd_state = ARM64_SSBD_UNKNOWN;
>   		return false;
>   
> +	/* machines with mixed mitigation requirements must not return this */
>   	case SMCCC_RET_NOT_REQUIRED:
>   		pr_info_once("%s mitigation not required\n", entry->desc);
>   		ssbd_state = ARM64_SSBD_MITIGATED;
> @@ -475,6 +480,16 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
>   	return required;
>   }
>   
> +/* known vulnerable cores */
> +static const struct midr_range arm64_ssb_cpus[] = {
> +	MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
> +	MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
> +	MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
> +	MIDR_ALL_VERSIONS(MIDR_CORTEX_A75),
> +	MIDR_ALL_VERSIONS(MIDR_CORTEX_A76),
> +	{},
> +};
> +
>   static void __maybe_unused
>   cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
>   {
> @@ -770,6 +785,7 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
>   		.capability = ARM64_SSBD,
>   		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
>   		.matches = has_ssbd_mitigation,
> +		.midr_range_list = arm64_ssb_cpus,
>   	},
>   #ifdef CONFIG_ARM64_ERRATUM_1188873
>   	{
> @@ -808,3 +824,30 @@ ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr,
>   
>   	return sprintf(buf, "Vulnerable\n");
>   }
> +
> +ssize_t cpu_show_spec_store_bypass(struct device *dev,
> +		struct device_attribute *attr, char *buf)
> +{
> +	/*
> +	 *  Two assumptions: First, ssbd_state reflects the worse case
> +	 *  for hetrogenous machines, and that if SSBS is supported its

                 heterogeneous

Cheers,
Andre.

> +	 *  supported by all cores.
> +	 */
> +	switch (ssbd_state) {
> +	case ARM64_SSBD_MITIGATED:
> +		return sprintf(buf, "Not affected\n");
> +
> +	case ARM64_SSBD_KERNEL:
> +	case ARM64_SSBD_FORCE_ENABLE:
> +		if (cpus_have_cap(ARM64_SSBS))
> +			return sprintf(buf, "Not affected\n");
> +		if (IS_ENABLED(CONFIG_ARM64_SSBD))
> +			return sprintf(buf,
> +			    "Mitigation: Speculative Store Bypass disabled\n");
> +	}
> +
> +	if (__ssb_safe)
> +		return sprintf(buf, "Not affected\n");
> +
> +	return sprintf(buf, "Vulnerable\n");
> +}
> 

* Re: [PATCH v5 08/10] arm64: Always enable ssb vulnerability detection
  2019-02-27  1:05 ` [PATCH v5 08/10] arm64: Always enable ssb vulnerability detection Jeremy Linton
@ 2019-03-01  7:02   ` Andre Przywara
  2019-03-01 16:16     ` Jeremy Linton
  0 siblings, 1 reply; 35+ messages in thread
From: Andre Przywara @ 2019-03-01  7:02 UTC (permalink / raw)
  To: Jeremy Linton, linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	Dave.Martin, shankerd, julien.thierry, mlangsdo, stefan.wahren,
	linux-kernel

Hi,

On 2/26/19 7:05 PM, Jeremy Linton wrote:
> The ssb detection logic is necessary regardless of whether
> the vulnerability mitigation code is built into the kernel.
> Break it out so that the CONFIG option only controls the
> mitigation logic and not the vulnerability detection.
> 
> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
> ---
>   arch/arm64/include/asm/cpufeature.h |  4 ----
>   arch/arm64/kernel/cpu_errata.c      | 11 +++++++----
>   2 files changed, 7 insertions(+), 8 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index dfcfba725d72..c2b60a021437 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -628,11 +628,7 @@ static inline int arm64_get_ssbd_state(void)
>   #endif
>   }
>   
> -#ifdef CONFIG_ARM64_SSBD
>   void arm64_set_ssbd_mitigation(bool state);
> -#else
> -static inline void arm64_set_ssbd_mitigation(bool state) {}
> -#endif
>   
>   extern int do_emulate_mrs(struct pt_regs *regs, u32 sys_reg, u32 rt);
>   
> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
> index 0f6e8f5d67bc..5f5611d17dc1 100644
> --- a/arch/arm64/kernel/cpu_errata.c
> +++ b/arch/arm64/kernel/cpu_errata.c
> @@ -276,7 +276,6 @@ static int detect_harden_bp_fw(void)
>   	return 1;
>   }
>   
> -#ifdef CONFIG_ARM64_SSBD
>   DEFINE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
>   
>   int ssbd_state __read_mostly = ARM64_SSBD_KERNEL;
> @@ -347,6 +346,7 @@ void __init arm64_enable_wa2_handling(struct alt_instr *alt,
>   		*updptr = cpu_to_le32(aarch64_insn_gen_nop());
>   }
>   
> +#ifdef CONFIG_ARM64_SSBD
>   void arm64_set_ssbd_mitigation(bool state)
>   {
>   	if (this_cpu_has_cap(ARM64_SSBS)) {
> @@ -371,6 +371,12 @@ void arm64_set_ssbd_mitigation(bool state)
>   		break;
>   	}
>   }
> +#else
> +void arm64_set_ssbd_mitigation(bool state)
> +{
> +	pr_info_once("SSBD, disabled by kernel configuration\n");

Is there a stray comma, or is this the continuation of some previous printout?

Regardless of that it looks good and compiles both with and without 
CONFIG_ARM64_SSBD defined:

Reviewed-by: Andre Przywara <andre.przywara@arm.com>

Cheers,
Andre.

> +}
> +#endif	/* CONFIG_ARM64_SSBD */
>   
>   static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
>   				    int scope)
> @@ -468,7 +474,6 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
>   
>   	return required;
>   }
> -#endif	/* CONFIG_ARM64_SSBD */
>   
>   static void __maybe_unused
>   cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
> @@ -760,14 +765,12 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
>   		ERRATA_MIDR_RANGE_LIST(arm64_harden_el2_vectors),
>   	},
>   #endif
> -#ifdef CONFIG_ARM64_SSBD
>   	{
>   		.desc = "Speculative Store Bypass Disable",
>   		.capability = ARM64_SSBD,
>   		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
>   		.matches = has_ssbd_mitigation,
>   	},
> -#endif
>   #ifdef CONFIG_ARM64_ERRATUM_1188873
>   	{
>   		/* Cortex-A76 r0p0 to r2p0 */
> 

* Re: [PATCH v5 10/10] arm64: enable generic CPU vulnerabilites support
  2019-02-27  1:05 ` [PATCH v5 10/10] arm64: enable generic CPU vulnerabilites support Jeremy Linton
@ 2019-03-01  7:03   ` Andre Przywara
  0 siblings, 0 replies; 35+ messages in thread
From: Andre Przywara @ 2019-03-01  7:03 UTC (permalink / raw)
  To: Jeremy Linton, linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	Dave.Martin, shankerd, julien.thierry, mlangsdo, stefan.wahren,
	linux-kernel, Mian Yousaf Kaukab

Hi,

On 2/26/19 7:05 PM, Jeremy Linton wrote:
> From: Mian Yousaf Kaukab <ykaukab@suse.de>
> 
> Enable CPU vulnerabilty show functions for spectre_v1, spectre_v2,
> meltdown and store-bypass.
> 
> Signed-off-by: Mian Yousaf Kaukab <ykaukab@suse.de>
> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>

Reviewed-by: Andre Przywara <andre.przywara@arm.com>

Thanks,
Andre.

> ---
>   arch/arm64/Kconfig | 1 +
>   1 file changed, 1 insertion(+)
> 
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index a4168d366127..be9872ee1d61 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -88,6 +88,7 @@ config ARM64
>   	select GENERIC_CLOCKEVENTS
>   	select GENERIC_CLOCKEVENTS_BROADCAST
>   	select GENERIC_CPU_AUTOPROBE
> +	select GENERIC_CPU_VULNERABILITIES
>   	select GENERIC_EARLY_IOREMAP
>   	select GENERIC_IDLE_POLL_SETUP
>   	select GENERIC_IRQ_MULTI_HANDLER
> 

* Re: [PATCH v5 03/10] arm64: add sysfs vulnerability show for meltdown
  2019-02-27  1:05 ` [PATCH v5 03/10] arm64: add sysfs vulnerability show for meltdown Jeremy Linton
  2019-02-28 18:33   ` Suzuki K Poulose
@ 2019-03-01  7:11   ` Andre Przywara
  2019-03-01 16:12     ` Jeremy Linton
  1 sibling, 1 reply; 35+ messages in thread
From: Andre Przywara @ 2019-03-01  7:11 UTC (permalink / raw)
  To: Jeremy Linton, linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	Dave.Martin, shankerd, julien.thierry, mlangsdo, stefan.wahren,
	linux-kernel

Hi,

On 2/26/19 7:05 PM, Jeremy Linton wrote:
> Display the mitigation status if active, otherwise
> assume the cpu is safe unless it doesn't have CSV3
> and isn't in our whitelist.
> 
> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
> ---
>   arch/arm64/kernel/cpufeature.c | 47 ++++++++++++++++++++++++++--------
>   1 file changed, 37 insertions(+), 10 deletions(-)
> 
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index f6d84e2c92fe..d31bd770acba 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -944,7 +944,7 @@ has_useable_cnp(const struct arm64_cpu_capabilities *entry, int scope)
>   	return has_cpuid_feature(entry, scope);
>   }
>   
> -#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
> +static bool __meltdown_safe = true;
>   static int __kpti_forced; /* 0: not forced, >0: forced on, <0: forced off */
>   
>   static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
> @@ -963,6 +963,16 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
>   		{ /* sentinel */ }
>   	};
>   	char const *str = "command line option";
> +	bool meltdown_safe;
> +
> +	meltdown_safe = is_midr_in_range_list(read_cpuid_id(), kpti_safe_list);
> +
> +	/* Defer to CPU feature registers */
> +	if (has_cpuid_feature(entry, scope))
> +		meltdown_safe = true;
> +
> +	if (!meltdown_safe)
> +		__meltdown_safe = false;
>   
>   	/*
>   	 * For reasons that aren't entirely clear, enabling KPTI on Cavium
> @@ -974,6 +984,11 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
>   		__kpti_forced = -1;
>   	}
>   
> +	if (!IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0)) {
> +		pr_info_once("kernel page table isolation disabled by CONFIG\n");
> +		return false;
> +	}
> +
>   	/* Forced? */
>   	if (__kpti_forced) {
>   		pr_info_once("kernel page table isolation forced %s by %s\n",
> @@ -985,14 +1000,10 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
>   	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE))
>   		return kaslr_offset() > 0;
>   
> -	/* Don't force KPTI for CPUs that are not vulnerable */
> -	if (is_midr_in_range_list(read_cpuid_id(), kpti_safe_list))
> -		return false;
> -
> -	/* Defer to CPU feature registers */
> -	return !has_cpuid_feature(entry, scope);
> +	return !meltdown_safe;
>   }
>   
> +#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
>   static void
>   kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
>   {
> @@ -1022,6 +1033,13 @@ kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
>   
>   	return;
>   }
> +#else
> +static void
> +kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
> +{
> +}
> +#endif	/* CONFIG_UNMAP_KERNEL_AT_EL0 */
> +
>   
>   static int __init parse_kpti(char *str)
>   {
> @@ -1035,7 +1053,6 @@ static int __init parse_kpti(char *str)
>   	return 0;
>   }
>   early_param("kpti", parse_kpti);
> -#endif	/* CONFIG_UNMAP_KERNEL_AT_EL0 */
>   
>   #ifdef CONFIG_ARM64_HW_AFDBM
>   static inline void __cpu_enable_hw_dbm(void)
> @@ -1286,7 +1303,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
>   		.field_pos = ID_AA64PFR0_EL0_SHIFT,
>   		.min_field_value = ID_AA64PFR0_EL0_32BIT_64BIT,
>   	},
> -#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
>   	{
>   		.desc = "Kernel page table isolation (KPTI)",
>   		.capability = ARM64_UNMAP_KERNEL_AT_EL0,
> @@ -1302,7 +1318,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
>   		.matches = unmap_kernel_at_el0,
>   		.cpu_enable = kpti_install_ng_mappings,
>   	},
> -#endif
>   	{
>   		/* FP/SIMD is not implemented */
>   		.capability = ARM64_HAS_NO_FPSIMD,
> @@ -2063,3 +2078,15 @@ static int __init enable_mrs_emulation(void)
>   }
>   
>   core_initcall(enable_mrs_emulation);
> +
> +ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr,
> +		char *buf)
> +{
> +	if (arm64_kernel_unmapped_at_el0())
> +		return sprintf(buf, "Mitigation: KPTI\n");
> +
> +	if (__meltdown_safe)
> +		return sprintf(buf, "Not affected\n");

Shall those two checks be swapped, so that it doesn't report a KPTI 
mitigation when the CPU is safe but KPTI was enabled because of KASLR? 
Or is that a different knob?

Cheers,
Andre.

> +
> +	return sprintf(buf, "Vulnerable\n");
> +}
> 

* Re: [PATCH v5 03/10] arm64: add sysfs vulnerability show for meltdown
  2019-03-01  7:11   ` Andre Przywara
@ 2019-03-01 16:12     ` Jeremy Linton
  2019-03-01 16:20       ` Catalin Marinas
  0 siblings, 1 reply; 35+ messages in thread
From: Jeremy Linton @ 2019-03-01 16:12 UTC (permalink / raw)
  To: Andre Przywara, linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	Dave.Martin, shankerd, julien.thierry, mlangsdo, stefan.wahren,
	linux-kernel

Hi,

On 3/1/19 1:11 AM, Andre Przywara wrote:
> Hi,
> 
> On 2/26/19 7:05 PM, Jeremy Linton wrote:
>> Display the mitigation status if active, otherwise
>> assume the cpu is safe unless it doesn't have CSV3
>> and isn't in our whitelist.
>>
>> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
>> ---
>>   arch/arm64/kernel/cpufeature.c | 47 ++++++++++++++++++++++++++--------
>>   1 file changed, 37 insertions(+), 10 deletions(-)
>>
>> diff --git a/arch/arm64/kernel/cpufeature.c 
>> b/arch/arm64/kernel/cpufeature.c
>> index f6d84e2c92fe..d31bd770acba 100644
>> --- a/arch/arm64/kernel/cpufeature.c
>> +++ b/arch/arm64/kernel/cpufeature.c
>> @@ -944,7 +944,7 @@ has_useable_cnp(const struct 
>> arm64_cpu_capabilities *entry, int scope)
>>       return has_cpuid_feature(entry, scope);
>>   }
>> -#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
>> +static bool __meltdown_safe = true;
>>   static int __kpti_forced; /* 0: not forced, >0: forced on, <0: 
>> forced off */
>>   static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities 
>> *entry,
>> @@ -963,6 +963,16 @@ static bool unmap_kernel_at_el0(const struct 
>> arm64_cpu_capabilities *entry,
>>           { /* sentinel */ }
>>       };
>>       char const *str = "command line option";
>> +    bool meltdown_safe;
>> +
>> +    meltdown_safe = is_midr_in_range_list(read_cpuid_id(), 
>> kpti_safe_list);
>> +
>> +    /* Defer to CPU feature registers */
>> +    if (has_cpuid_feature(entry, scope))
>> +        meltdown_safe = true;
>> +
>> +    if (!meltdown_safe)
>> +        __meltdown_safe = false;
>>       /*
>>        * For reasons that aren't entirely clear, enabling KPTI on Cavium
>> @@ -974,6 +984,11 @@ static bool unmap_kernel_at_el0(const struct 
>> arm64_cpu_capabilities *entry,
>>           __kpti_forced = -1;
>>       }
>> +    if (!IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0)) {
>> +        pr_info_once("kernel page table isolation disabled by 
>> CONFIG\n");
>> +        return false;
>> +    }
>> +
>>       /* Forced? */
>>       if (__kpti_forced) {
>>           pr_info_once("kernel page table isolation forced %s by %s\n",
>> @@ -985,14 +1000,10 @@ static bool unmap_kernel_at_el0(const struct 
>> arm64_cpu_capabilities *entry,
>>       if (IS_ENABLED(CONFIG_RANDOMIZE_BASE))
>>           return kaslr_offset() > 0;
>> -    /* Don't force KPTI for CPUs that are not vulnerable */
>> -    if (is_midr_in_range_list(read_cpuid_id(), kpti_safe_list))
>> -        return false;
>> -
>> -    /* Defer to CPU feature registers */
>> -    return !has_cpuid_feature(entry, scope);
>> +    return !meltdown_safe;
>>   }
>> +#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
>>   static void
>>   kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
>>   {
>> @@ -1022,6 +1033,13 @@ kpti_install_ng_mappings(const struct 
>> arm64_cpu_capabilities *__unused)
>>       return;
>>   }
>> +#else
>> +static void
>> +kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
>> +{
>> +}
>> +#endif    /* CONFIG_UNMAP_KERNEL_AT_EL0 */
>> +
>>   static int __init parse_kpti(char *str)
>>   {
>> @@ -1035,7 +1053,6 @@ static int __init parse_kpti(char *str)
>>       return 0;
>>   }
>>   early_param("kpti", parse_kpti);
>> -#endif    /* CONFIG_UNMAP_KERNEL_AT_EL0 */
>>   #ifdef CONFIG_ARM64_HW_AFDBM
>>   static inline void __cpu_enable_hw_dbm(void)
>> @@ -1286,7 +1303,6 @@ static const struct arm64_cpu_capabilities 
>> arm64_features[] = {
>>           .field_pos = ID_AA64PFR0_EL0_SHIFT,
>>           .min_field_value = ID_AA64PFR0_EL0_32BIT_64BIT,
>>       },
>> -#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
>>       {
>>           .desc = "Kernel page table isolation (KPTI)",
>>           .capability = ARM64_UNMAP_KERNEL_AT_EL0,
>> @@ -1302,7 +1318,6 @@ static const struct arm64_cpu_capabilities 
>> arm64_features[] = {
>>           .matches = unmap_kernel_at_el0,
>>           .cpu_enable = kpti_install_ng_mappings,
>>       },
>> -#endif
>>       {
>>           /* FP/SIMD is not implemented */
>>           .capability = ARM64_HAS_NO_FPSIMD,
>> @@ -2063,3 +2078,15 @@ static int __init enable_mrs_emulation(void)
>>   }
>>   core_initcall(enable_mrs_emulation);
>> +
>> +ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute 
>> *attr,
>> +        char *buf)
>> +{
>> +    if (arm64_kernel_unmapped_at_el0())
>> +        return sprintf(buf, "Mitigation: KPTI\n");
>> +
>> +    if (__meltdown_safe)
>> +        return sprintf(buf, "Not affected\n");
> 
> Shall those two checks be swapped? So it doesn't report about a KPTI 
> mitigation if the CPU is safe, but we enable KPTI because of KASLR 
> having enabled it? Or is that a different knob?

Hmmm, I think having it this way reflects the fact that the machine is 
mitigated independently of whether it needed the mitigation. The 
force-on case is similar: the machine may not have needed the 
mitigation, but it was forced on.


> 
> Cheers,
> Andre.
>  
>> +
>> +    return sprintf(buf, "Vulnerable\n");
>> +}
>>


* Re: [PATCH v5 08/10] arm64: Always enable ssb vulnerability detection
  2019-03-01  7:02   ` Andre Przywara
@ 2019-03-01 16:16     ` Jeremy Linton
  0 siblings, 0 replies; 35+ messages in thread
From: Jeremy Linton @ 2019-03-01 16:16 UTC (permalink / raw)
  To: Andre Przywara, linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	Dave.Martin, shankerd, julien.thierry, mlangsdo, stefan.wahren,
	linux-kernel

On 3/1/19 1:02 AM, Andre Przywara wrote:
> Hi,
> 
> On 2/26/19 7:05 PM, Jeremy Linton wrote:
>> The ssb detection logic is necessary regardless of whether
>> the vulnerability mitigation code is built into the kernel.
>> Break it out so that the CONFIG option only controls the
>> mitigation logic and not the vulnerability detection.
>>
>> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
>> ---
>>   arch/arm64/include/asm/cpufeature.h |  4 ----
>>   arch/arm64/kernel/cpu_errata.c      | 11 +++++++----
>>   2 files changed, 7 insertions(+), 8 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/cpufeature.h 
>> b/arch/arm64/include/asm/cpufeature.h
>> index dfcfba725d72..c2b60a021437 100644
>> --- a/arch/arm64/include/asm/cpufeature.h
>> +++ b/arch/arm64/include/asm/cpufeature.h
>> @@ -628,11 +628,7 @@ static inline int arm64_get_ssbd_state(void)
>>   #endif
>>   }
>> -#ifdef CONFIG_ARM64_SSBD
>>   void arm64_set_ssbd_mitigation(bool state);
>> -#else
>> -static inline void arm64_set_ssbd_mitigation(bool state) {}
>> -#endif
>>   extern int do_emulate_mrs(struct pt_regs *regs, u32 sys_reg, u32 rt);
>> diff --git a/arch/arm64/kernel/cpu_errata.c 
>> b/arch/arm64/kernel/cpu_errata.c
>> index 0f6e8f5d67bc..5f5611d17dc1 100644
>> --- a/arch/arm64/kernel/cpu_errata.c
>> +++ b/arch/arm64/kernel/cpu_errata.c
>> @@ -276,7 +276,6 @@ static int detect_harden_bp_fw(void)
>>       return 1;
>>   }
>> -#ifdef CONFIG_ARM64_SSBD
>>   DEFINE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
>>   int ssbd_state __read_mostly = ARM64_SSBD_KERNEL;
>> @@ -347,6 +346,7 @@ void __init arm64_enable_wa2_handling(struct 
>> alt_instr *alt,
>>           *updptr = cpu_to_le32(aarch64_insn_gen_nop());
>>   }
>> +#ifdef CONFIG_ARM64_SSBD
>>   void arm64_set_ssbd_mitigation(bool state)
>>   {
>>       if (this_cpu_has_cap(ARM64_SSBS)) {
>> @@ -371,6 +371,12 @@ void arm64_set_ssbd_mitigation(bool state)
>>           break;
>>       }
>>   }
>> +#else
>> +void arm64_set_ssbd_mitigation(bool state)
>> +{
>> +    pr_info_once("SSBD, disabled by kernel configuration\n");
> 
> Is there a stray comma or is the continuation of some previous printout?

This is on purpose, because I didn't like the way it read if you expanded 
the acronym. I still don't; maybe a ":" would be more appropriate.


> 
> Regardless of that it looks good and compiles with both 
> CONFIG_ARM64_SSBD defined or not:
> 
> Reviewed-by: Andre Przywara <andre.przywara@arm.com>
> 
> Cheers,
> Andre.
> 
>> +}
>> +#endif    /* CONFIG_ARM64_SSBD */
>>   static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities 
>> *entry,
>>                       int scope)
>> @@ -468,7 +474,6 @@ static bool has_ssbd_mitigation(const struct 
>> arm64_cpu_capabilities *entry,
>>       return required;
>>   }
>> -#endif    /* CONFIG_ARM64_SSBD */
>>   static void __maybe_unused
>>   cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities 
>> *__unused)
>> @@ -760,14 +765,12 @@ const struct arm64_cpu_capabilities 
>> arm64_errata[] = {
>>           ERRATA_MIDR_RANGE_LIST(arm64_harden_el2_vectors),
>>       },
>>   #endif
>> -#ifdef CONFIG_ARM64_SSBD
>>       {
>>           .desc = "Speculative Store Bypass Disable",
>>           .capability = ARM64_SSBD,
>>           .type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
>>           .matches = has_ssbd_mitigation,
>>       },
>> -#endif
>>   #ifdef CONFIG_ARM64_ERRATUM_1188873
>>       {
>>           /* Cortex-A76 r0p0 to r2p0 */
>>


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v5 03/10] arm64: add sysfs vulnerability show for meltdown
  2019-03-01 16:12     ` Jeremy Linton
@ 2019-03-01 16:20       ` Catalin Marinas
  2019-03-01 16:53         ` Jeremy Linton
  0 siblings, 1 reply; 35+ messages in thread
From: Catalin Marinas @ 2019-03-01 16:20 UTC (permalink / raw)
  To: Jeremy Linton
  Cc: Andre Przywara, linux-arm-kernel, will.deacon, marc.zyngier,
	suzuki.poulose, Dave.Martin, shankerd, julien.thierry, mlangsdo,
	stefan.wahren, linux-kernel

On Fri, Mar 01, 2019 at 10:12:09AM -0600, Jeremy Linton wrote:
> On 3/1/19 1:11 AM, Andre Przywara wrote:
> > On 2/26/19 7:05 PM, Jeremy Linton wrote:
> > > Display the mitigation status if active, otherwise
> > > assume the cpu is safe unless it doesn't have CSV3
> > > and isn't in our whitelist.
> > > 
> > > Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
> > > ---
> > >   arch/arm64/kernel/cpufeature.c | 47 ++++++++++++++++++++++++++--------
> > >   1 file changed, 37 insertions(+), 10 deletions(-)
> > > 
> > > diff --git a/arch/arm64/kernel/cpufeature.c
> > > b/arch/arm64/kernel/cpufeature.c
> > > index f6d84e2c92fe..d31bd770acba 100644
> > > --- a/arch/arm64/kernel/cpufeature.c
> > > +++ b/arch/arm64/kernel/cpufeature.c
> > > @@ -944,7 +944,7 @@ has_useable_cnp(const struct
> > > arm64_cpu_capabilities *entry, int scope)
> > >       return has_cpuid_feature(entry, scope);
> > >   }
> > > -#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
> > > +static bool __meltdown_safe = true;
> > >   static int __kpti_forced; /* 0: not forced, >0: forced on, <0:
> > > forced off */
> > >   static bool unmap_kernel_at_el0(const struct
> > > arm64_cpu_capabilities *entry,
> > > @@ -963,6 +963,16 @@ static bool unmap_kernel_at_el0(const struct
> > > arm64_cpu_capabilities *entry,
> > >           { /* sentinel */ }
> > >       };
> > >       char const *str = "command line option";
> > > +    bool meltdown_safe;
> > > +
> > > +    meltdown_safe = is_midr_in_range_list(read_cpuid_id(),
> > > kpti_safe_list);
> > > +
> > > +    /* Defer to CPU feature registers */
> > > +    if (has_cpuid_feature(entry, scope))
> > > +        meltdown_safe = true;
> > > +
> > > +    if (!meltdown_safe)
> > > +        __meltdown_safe = false;
> > >       /*
> > >        * For reasons that aren't entirely clear, enabling KPTI on Cavium
> > > @@ -974,6 +984,11 @@ static bool unmap_kernel_at_el0(const struct
> > > arm64_cpu_capabilities *entry,
> > >           __kpti_forced = -1;
> > >       }
> > > +    if (!IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0)) {
> > > +        pr_info_once("kernel page table isolation disabled by
> > > CONFIG\n");
> > > +        return false;
> > > +    }
> > > +
> > >       /* Forced? */
> > >       if (__kpti_forced) {
> > >           pr_info_once("kernel page table isolation forced %s by %s\n",
> > > @@ -985,14 +1000,10 @@ static bool unmap_kernel_at_el0(const struct
> > > arm64_cpu_capabilities *entry,
> > >       if (IS_ENABLED(CONFIG_RANDOMIZE_BASE))
> > >           return kaslr_offset() > 0;
> > > -    /* Don't force KPTI for CPUs that are not vulnerable */
> > > -    if (is_midr_in_range_list(read_cpuid_id(), kpti_safe_list))
> > > -        return false;
> > > -
> > > -    /* Defer to CPU feature registers */
> > > -    return !has_cpuid_feature(entry, scope);
> > > +    return !meltdown_safe;
> > >   }
> > > +#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
> > >   static void
> > >   kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
> > >   {
> > > @@ -1022,6 +1033,13 @@ kpti_install_ng_mappings(const struct
> > > arm64_cpu_capabilities *__unused)
> > >       return;
> > >   }
> > > +#else
> > > +static void
> > > +kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
> > > +{
> > > +}
> > > +#endif    /* CONFIG_UNMAP_KERNEL_AT_EL0 */
> > > +
> > >   static int __init parse_kpti(char *str)
> > >   {
> > > @@ -1035,7 +1053,6 @@ static int __init parse_kpti(char *str)
> > >       return 0;
> > >   }
> > >   early_param("kpti", parse_kpti);
> > > -#endif    /* CONFIG_UNMAP_KERNEL_AT_EL0 */
> > >   #ifdef CONFIG_ARM64_HW_AFDBM
> > >   static inline void __cpu_enable_hw_dbm(void)
> > > @@ -1286,7 +1303,6 @@ static const struct arm64_cpu_capabilities
> > > arm64_features[] = {
> > >           .field_pos = ID_AA64PFR0_EL0_SHIFT,
> > >           .min_field_value = ID_AA64PFR0_EL0_32BIT_64BIT,
> > >       },
> > > -#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
> > >       {
> > >           .desc = "Kernel page table isolation (KPTI)",
> > >           .capability = ARM64_UNMAP_KERNEL_AT_EL0,
> > > @@ -1302,7 +1318,6 @@ static const struct arm64_cpu_capabilities
> > > arm64_features[] = {
> > >           .matches = unmap_kernel_at_el0,
> > >           .cpu_enable = kpti_install_ng_mappings,
> > >       },
> > > -#endif
> > >       {
> > >           /* FP/SIMD is not implemented */
> > >           .capability = ARM64_HAS_NO_FPSIMD,
> > > @@ -2063,3 +2078,15 @@ static int __init enable_mrs_emulation(void)
> > >   }
> > >   core_initcall(enable_mrs_emulation);
> > > +
> > > +ssize_t cpu_show_meltdown(struct device *dev, struct
> > > device_attribute *attr,
> > > +        char *buf)
> > > +{
> > > +    if (arm64_kernel_unmapped_at_el0())
> > > +        return sprintf(buf, "Mitigation: KPTI\n");
> > > +
> > > +    if (__meltdown_safe)
> > > +        return sprintf(buf, "Not affected\n");
> > 
> > Shall those two checks be swapped? So it doesn't report about a KPTI
> > mitigation if the CPU is safe, but we enable KPTI because of KASLR
> > having enabled it? Or is that a different knob?
> 
> Hmmm, I think having it this way reflects the fact that the machine is
> mitigated independent of whether it needed it. The force on case is similar.
> The machine may not have needed the mitigation but it was forced on.

So is this patchset about showing vulnerabilities _and_ mitigations or
just one of them?

-- 
Catalin


* Re: [PATCH v5 09/10] arm64: add sysfs vulnerability show for speculative store bypass
  2019-03-01  7:02   ` Andre Przywara
@ 2019-03-01 16:41     ` Jeremy Linton
  0 siblings, 0 replies; 35+ messages in thread
From: Jeremy Linton @ 2019-03-01 16:41 UTC (permalink / raw)
  To: Andre Przywara, linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	Dave.Martin, shankerd, julien.thierry, mlangsdo, stefan.wahren,
	linux-kernel

Hi,

On 3/1/19 1:02 AM, Andre Przywara wrote:
> Hi,
> 
> On 2/26/19 7:05 PM, Jeremy Linton wrote:
>> Return status based on ssbd_state and the arm64 SSBS feature. If
>> the mitigation is disabled, or the firmware isn't responding then
>> return the expected machine state based on a new blacklist of known
>> vulnerable cores.
>>
>> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
>> ---
>>   arch/arm64/kernel/cpu_errata.c | 43 ++++++++++++++++++++++++++++++++++
>>   1 file changed, 43 insertions(+)
>>
>> diff --git a/arch/arm64/kernel/cpu_errata.c 
>> b/arch/arm64/kernel/cpu_errata.c
>> index 5f5611d17dc1..e1b03f643799 100644
>> --- a/arch/arm64/kernel/cpu_errata.c
>> +++ b/arch/arm64/kernel/cpu_errata.c
>> @@ -279,6 +279,7 @@ static int detect_harden_bp_fw(void)
>>   DEFINE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
>>   int ssbd_state __read_mostly = ARM64_SSBD_KERNEL;
>> +static bool __ssb_safe = true;
>>   static const struct ssbd_options {
>>       const char    *str;
>> @@ -387,6 +388,9 @@ static bool has_ssbd_mitigation(const struct 
>> arm64_cpu_capabilities *entry,
>>       WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
>> +    if (is_midr_in_range_list(read_cpuid_id(), entry->midr_range_list))
>> +        __ssb_safe = false;
> 
> Is that the only place where we set it to false?
> What about if firmware reports that (at least one core) is vulnerable?

Maybe. Normally, if the firmware is functional enough to report the core
state, then I would expect the kernel mitigation to be enabled. But
you're right: if the mitigation is disabled, then it might be possible
for us to miss the blacklist and report the machine safe even though the
firmware reports it vulnerable.

The core problem, though, is really that the blacklist isn't complete,
because we also report an incorrect state if the firmware fails to
respond. That said, there are still some other interesting paths here
which might fall into the "unknown" case if you get creative enough
(ex: think force-disabling an SSBS-mitigated machine).

Anyway, it's probably worth flagging the machine vulnerable if we get
SMCCC_RET_SUCCESS, to avoid cases which miss the blacklist.


> 
>> +
>>       if (this_cpu_has_cap(ARM64_SSBS)) {
>>           required = false;
>>           goto out_printmsg;
>> @@ -420,6 +424,7 @@ static bool has_ssbd_mitigation(const struct 
>> arm64_cpu_capabilities *entry,
>>           ssbd_state = ARM64_SSBD_UNKNOWN;
>>           return false;
>> +    /* machines with mixed mitigation requirements must not return 
>> this */
>>       case SMCCC_RET_NOT_REQUIRED:
>>           pr_info_once("%s mitigation not required\n", entry->desc);
>>           ssbd_state = ARM64_SSBD_MITIGATED;
>> @@ -475,6 +480,16 @@ static bool has_ssbd_mitigation(const struct 
>> arm64_cpu_capabilities *entry,
>>       return required;
>>   }
>> +/* known vulnerable cores */
>> +static const struct midr_range arm64_ssb_cpus[] = {
>> +    MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
>> +    MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
>> +    MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
>> +    MIDR_ALL_VERSIONS(MIDR_CORTEX_A75),
>> +    MIDR_ALL_VERSIONS(MIDR_CORTEX_A76),
>> +    {},
>> +};
>> +
>>   static void __maybe_unused
>>   cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities 
>> *__unused)
>>   {
>> @@ -770,6 +785,7 @@ const struct arm64_cpu_capabilities arm64_errata[] 
>> = {
>>           .capability = ARM64_SSBD,
>>           .type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
>>           .matches = has_ssbd_mitigation,
>> +        .midr_range_list = arm64_ssb_cpus,
>>       },
>>   #ifdef CONFIG_ARM64_ERRATUM_1188873
>>       {
>> @@ -808,3 +824,30 @@ ssize_t cpu_show_spectre_v2(struct device *dev, 
>> struct device_attribute *attr,
>>       return sprintf(buf, "Vulnerable\n");
>>   }
>> +
>> +ssize_t cpu_show_spec_store_bypass(struct device *dev,
>> +        struct device_attribute *attr, char *buf)
>> +{
>> +    /*
>> +     *  Two assumptions: First, ssbd_state reflects the worse case
>> +     *  for hetrogenous machines, and that if SSBS is supported its
> 
>                  heterogeneous
> 
> Cheers,
> Andre.
> 
>> +     *  supported by all cores.
>> +     */
>> +    switch (ssbd_state) {
>> +    case ARM64_SSBD_MITIGATED:
>> +        return sprintf(buf, "Not affected\n");
>> +
>> +    case ARM64_SSBD_KERNEL:
>> +    case ARM64_SSBD_FORCE_ENABLE:
>> +        if (cpus_have_cap(ARM64_SSBS))
>> +            return sprintf(buf, "Not affected\n");
>> +        if (IS_ENABLED(CONFIG_ARM64_SSBD))
>> +            return sprintf(buf,
>> +                "Mitigation: Speculative Store Bypass disabled\n");
>> +    }
>> +
>> +    if (__ssb_safe)
>> +        return sprintf(buf, "Not affected\n");
>> +
>> +    return sprintf(buf, "Vulnerable\n");
>> +}
>>



* Re: [PATCH v5 03/10] arm64: add sysfs vulnerability show for meltdown
  2019-03-01 16:20       ` Catalin Marinas
@ 2019-03-01 16:53         ` Jeremy Linton
  2019-03-01 17:15           ` Catalin Marinas
  2019-03-01 17:30           ` Andre Przywara
  0 siblings, 2 replies; 35+ messages in thread
From: Jeremy Linton @ 2019-03-01 16:53 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: Andre Przywara, linux-arm-kernel, will.deacon, marc.zyngier,
	suzuki.poulose, Dave.Martin, shankerd, julien.thierry, mlangsdo,
	stefan.wahren, linux-kernel

Hi,

On 3/1/19 10:20 AM, Catalin Marinas wrote:
> On Fri, Mar 01, 2019 at 10:12:09AM -0600, Jeremy Linton wrote:
>> On 3/1/19 1:11 AM, Andre Przywara wrote:
>>> On 2/26/19 7:05 PM, Jeremy Linton wrote:
>>>> Display the mitigation status if active, otherwise
>>>> assume the cpu is safe unless it doesn't have CSV3
>>>> and isn't in our whitelist.
>>>>
>>>> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
>>>> ---
>>>>    arch/arm64/kernel/cpufeature.c | 47 ++++++++++++++++++++++++++--------
>>>>    1 file changed, 37 insertions(+), 10 deletions(-)
>>>>
>>>> diff --git a/arch/arm64/kernel/cpufeature.c
>>>> b/arch/arm64/kernel/cpufeature.c
>>>> index f6d84e2c92fe..d31bd770acba 100644
>>>> --- a/arch/arm64/kernel/cpufeature.c
>>>> +++ b/arch/arm64/kernel/cpufeature.c
>>>> @@ -944,7 +944,7 @@ has_useable_cnp(const struct
>>>> arm64_cpu_capabilities *entry, int scope)
>>>>        return has_cpuid_feature(entry, scope);
>>>>    }
>>>> -#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
>>>> +static bool __meltdown_safe = true;
>>>>    static int __kpti_forced; /* 0: not forced, >0: forced on, <0:
>>>> forced off */
>>>>    static bool unmap_kernel_at_el0(const struct
>>>> arm64_cpu_capabilities *entry,
>>>> @@ -963,6 +963,16 @@ static bool unmap_kernel_at_el0(const struct
>>>> arm64_cpu_capabilities *entry,
>>>>            { /* sentinel */ }
>>>>        };
>>>>        char const *str = "command line option";
>>>> +    bool meltdown_safe;
>>>> +
>>>> +    meltdown_safe = is_midr_in_range_list(read_cpuid_id(),
>>>> kpti_safe_list);
>>>> +
>>>> +    /* Defer to CPU feature registers */
>>>> +    if (has_cpuid_feature(entry, scope))
>>>> +        meltdown_safe = true;
>>>> +
>>>> +    if (!meltdown_safe)
>>>> +        __meltdown_safe = false;
>>>>        /*
>>>>         * For reasons that aren't entirely clear, enabling KPTI on Cavium
>>>> @@ -974,6 +984,11 @@ static bool unmap_kernel_at_el0(const struct
>>>> arm64_cpu_capabilities *entry,
>>>>            __kpti_forced = -1;
>>>>        }
>>>> +    if (!IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0)) {
>>>> +        pr_info_once("kernel page table isolation disabled by
>>>> CONFIG\n");
>>>> +        return false;
>>>> +    }
>>>> +
>>>>        /* Forced? */
>>>>        if (__kpti_forced) {
>>>>            pr_info_once("kernel page table isolation forced %s by %s\n",
>>>> @@ -985,14 +1000,10 @@ static bool unmap_kernel_at_el0(const struct
>>>> arm64_cpu_capabilities *entry,
>>>>        if (IS_ENABLED(CONFIG_RANDOMIZE_BASE))
>>>>            return kaslr_offset() > 0;
>>>> -    /* Don't force KPTI for CPUs that are not vulnerable */
>>>> -    if (is_midr_in_range_list(read_cpuid_id(), kpti_safe_list))
>>>> -        return false;
>>>> -
>>>> -    /* Defer to CPU feature registers */
>>>> -    return !has_cpuid_feature(entry, scope);
>>>> +    return !meltdown_safe;
>>>>    }
>>>> +#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
>>>>    static void
>>>>    kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
>>>>    {
>>>> @@ -1022,6 +1033,13 @@ kpti_install_ng_mappings(const struct
>>>> arm64_cpu_capabilities *__unused)
>>>>        return;
>>>>    }
>>>> +#else
>>>> +static void
>>>> +kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
>>>> +{
>>>> +}
>>>> +#endif    /* CONFIG_UNMAP_KERNEL_AT_EL0 */
>>>> +
>>>>    static int __init parse_kpti(char *str)
>>>>    {
>>>> @@ -1035,7 +1053,6 @@ static int __init parse_kpti(char *str)
>>>>        return 0;
>>>>    }
>>>>    early_param("kpti", parse_kpti);
>>>> -#endif    /* CONFIG_UNMAP_KERNEL_AT_EL0 */
>>>>    #ifdef CONFIG_ARM64_HW_AFDBM
>>>>    static inline void __cpu_enable_hw_dbm(void)
>>>> @@ -1286,7 +1303,6 @@ static const struct arm64_cpu_capabilities
>>>> arm64_features[] = {
>>>>            .field_pos = ID_AA64PFR0_EL0_SHIFT,
>>>>            .min_field_value = ID_AA64PFR0_EL0_32BIT_64BIT,
>>>>        },
>>>> -#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
>>>>        {
>>>>            .desc = "Kernel page table isolation (KPTI)",
>>>>            .capability = ARM64_UNMAP_KERNEL_AT_EL0,
>>>> @@ -1302,7 +1318,6 @@ static const struct arm64_cpu_capabilities
>>>> arm64_features[] = {
>>>>            .matches = unmap_kernel_at_el0,
>>>>            .cpu_enable = kpti_install_ng_mappings,
>>>>        },
>>>> -#endif
>>>>        {
>>>>            /* FP/SIMD is not implemented */
>>>>            .capability = ARM64_HAS_NO_FPSIMD,
>>>> @@ -2063,3 +2078,15 @@ static int __init enable_mrs_emulation(void)
>>>>    }
>>>>    core_initcall(enable_mrs_emulation);
>>>> +
>>>> +ssize_t cpu_show_meltdown(struct device *dev, struct
>>>> device_attribute *attr,
>>>> +        char *buf)
>>>> +{
>>>> +    if (arm64_kernel_unmapped_at_el0())
>>>> +        return sprintf(buf, "Mitigation: KPTI\n");
>>>> +
>>>> +    if (__meltdown_safe)
>>>> +        return sprintf(buf, "Not affected\n");
>>>
>>> Shall those two checks be swapped? So it doesn't report about a KPTI
>>> mitigation if the CPU is safe, but we enable KPTI because of KASLR
>>> having enabled it? Or is that a different knob?
>>
>> Hmmm, I think having it this way reflects the fact that the machine is
>> mitigated independent of whether it needed it. The force on case is similar.
>> The machine may not have needed the mitigation but it was forced on.
> 
> So is this patchset about showing vulnerabilities _and_ mitigations or
> just one of them?
> 

Well, I don't think there is a way to express a mitigated but not 
vulnerable state in the current ABI. This set is mostly just to bring us 
in line with the current ABI expectations.






* Re: [PATCH v5 03/10] arm64: add sysfs vulnerability show for meltdown
  2019-03-01 16:53         ` Jeremy Linton
@ 2019-03-01 17:15           ` Catalin Marinas
  2019-03-01 17:30           ` Andre Przywara
  1 sibling, 0 replies; 35+ messages in thread
From: Catalin Marinas @ 2019-03-01 17:15 UTC (permalink / raw)
  To: Jeremy Linton
  Cc: Andre Przywara, linux-arm-kernel, will.deacon, marc.zyngier,
	suzuki.poulose, Dave.Martin, shankerd, julien.thierry, mlangsdo,
	stefan.wahren, linux-kernel

On Fri, Mar 01, 2019 at 10:53:50AM -0600, Jeremy Linton wrote:
> On 3/1/19 10:20 AM, Catalin Marinas wrote:
> > On Fri, Mar 01, 2019 at 10:12:09AM -0600, Jeremy Linton wrote:
> > > On 3/1/19 1:11 AM, Andre Przywara wrote:
> > > > On 2/26/19 7:05 PM, Jeremy Linton wrote:
> > > > > +ssize_t cpu_show_meltdown(struct device *dev, struct
> > > > > device_attribute *attr,
> > > > > +        char *buf)
> > > > > +{
> > > > > +    if (arm64_kernel_unmapped_at_el0())
> > > > > +        return sprintf(buf, "Mitigation: KPTI\n");
> > > > > +
> > > > > +    if (__meltdown_safe)
> > > > > +        return sprintf(buf, "Not affected\n");
> > > > 
> > > > Shall those two checks be swapped? So it doesn't report about a KPTI
> > > > mitigation if the CPU is safe, but we enable KPTI because of KASLR
> > > > having enabled it? Or is that a different knob?
> > > 
> > > Hmmm, I think having it this way reflects the fact that the machine is
> > > mitigated independent of whether it needed it. The force on case is similar.
> > > The machine may not have needed the mitigation but it was forced on.
> > 
> > So is this patchset about showing vulnerabilities _and_ mitigations or
> > just one of them?
> 
> Well, I don't think there is a way to express a mitigated but not vulnerable
> state in the current ABI. This set is mostly just to bring us in line with
> the current ABI expectations.

Looking at the ABI doc, it states:

	"Not affected"	  CPU is not affected by the vulnerability
	"Vulnerable"	  CPU is affected and no mitigation in effect
	"Mitigation: $M"  CPU is affected and mitigation $M is in effect

So, yes, we don't have mitigated but not vulnerable. Therefore I think
we should stick to "not affected" and swap the lines above as per
Andre's comment. This file is about Meltdown vulnerability and
mitigation, not KASLR hardening.

-- 
Catalin


* Re: [PATCH v5 03/10] arm64: add sysfs vulnerability show for meltdown
  2019-03-01 16:53         ` Jeremy Linton
  2019-03-01 17:15           ` Catalin Marinas
@ 2019-03-01 17:30           ` Andre Przywara
  1 sibling, 0 replies; 35+ messages in thread
From: Andre Przywara @ 2019-03-01 17:30 UTC (permalink / raw)
  To: Jeremy Linton, Catalin Marinas
  Cc: linux-arm-kernel, will.deacon, marc.zyngier, suzuki.poulose,
	Dave.Martin, shankerd, julien.thierry, mlangsdo, stefan.wahren,
	linux-kernel

Hi,

On 3/1/19 10:53 AM, Jeremy Linton wrote:
> Hi,
> 
> On 3/1/19 10:20 AM, Catalin Marinas wrote:
>> On Fri, Mar 01, 2019 at 10:12:09AM -0600, Jeremy Linton wrote:
>>> On 3/1/19 1:11 AM, Andre Przywara wrote:
>>>> On 2/26/19 7:05 PM, Jeremy Linton wrote:
>>>>> Display the mitigation status if active, otherwise
>>>>> assume the cpu is safe unless it doesn't have CSV3
>>>>> and isn't in our whitelist.
>>>>>
>>>>> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
>>>>> ---
>>>>>    arch/arm64/kernel/cpufeature.c | 47 
>>>>> ++++++++++++++++++++++++++--------
>>>>>    1 file changed, 37 insertions(+), 10 deletions(-)
>>>>>
>>>>> diff --git a/arch/arm64/kernel/cpufeature.c
>>>>> b/arch/arm64/kernel/cpufeature.c
>>>>> index f6d84e2c92fe..d31bd770acba 100644
>>>>> --- a/arch/arm64/kernel/cpufeature.c
>>>>> +++ b/arch/arm64/kernel/cpufeature.c
>>>>> @@ -944,7 +944,7 @@ has_useable_cnp(const struct
>>>>> arm64_cpu_capabilities *entry, int scope)
>>>>>        return has_cpuid_feature(entry, scope);
>>>>>    }
>>>>> -#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
>>>>> +static bool __meltdown_safe = true;
>>>>>    static int __kpti_forced; /* 0: not forced, >0: forced on, <0:
>>>>> forced off */
>>>>>    static bool unmap_kernel_at_el0(const struct
>>>>> arm64_cpu_capabilities *entry,
>>>>> @@ -963,6 +963,16 @@ static bool unmap_kernel_at_el0(const struct
>>>>> arm64_cpu_capabilities *entry,
>>>>>            { /* sentinel */ }
>>>>>        };
>>>>>        char const *str = "command line option";
>>>>> +    bool meltdown_safe;
>>>>> +
>>>>> +    meltdown_safe = is_midr_in_range_list(read_cpuid_id(),
>>>>> kpti_safe_list);
>>>>> +
>>>>> +    /* Defer to CPU feature registers */
>>>>> +    if (has_cpuid_feature(entry, scope))
>>>>> +        meltdown_safe = true;
>>>>> +
>>>>> +    if (!meltdown_safe)
>>>>> +        __meltdown_safe = false;
>>>>>        /*
>>>>>         * For reasons that aren't entirely clear, enabling KPTI on 
>>>>> Cavium
>>>>> @@ -974,6 +984,11 @@ static bool unmap_kernel_at_el0(const struct
>>>>> arm64_cpu_capabilities *entry,
>>>>>            __kpti_forced = -1;
>>>>>        }
>>>>> +    if (!IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0)) {
>>>>> +        pr_info_once("kernel page table isolation disabled by
>>>>> CONFIG\n");
>>>>> +        return false;
>>>>> +    }
>>>>> +
>>>>>        /* Forced? */
>>>>>        if (__kpti_forced) {
>>>>>            pr_info_once("kernel page table isolation forced %s by 
>>>>> %s\n",
>>>>> @@ -985,14 +1000,10 @@ static bool unmap_kernel_at_el0(const struct
>>>>> arm64_cpu_capabilities *entry,
>>>>>        if (IS_ENABLED(CONFIG_RANDOMIZE_BASE))
>>>>>            return kaslr_offset() > 0;
>>>>> -    /* Don't force KPTI for CPUs that are not vulnerable */
>>>>> -    if (is_midr_in_range_list(read_cpuid_id(), kpti_safe_list))
>>>>> -        return false;
>>>>> -
>>>>> -    /* Defer to CPU feature registers */
>>>>> -    return !has_cpuid_feature(entry, scope);
>>>>> +    return !meltdown_safe;
>>>>>    }
>>>>> +#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
>>>>>    static void
>>>>>    kpti_install_ng_mappings(const struct arm64_cpu_capabilities 
>>>>> *__unused)
>>>>>    {
>>>>> @@ -1022,6 +1033,13 @@ kpti_install_ng_mappings(const struct
>>>>> arm64_cpu_capabilities *__unused)
>>>>>        return;
>>>>>    }
>>>>> +#else
>>>>> +static void
>>>>> +kpti_install_ng_mappings(const struct arm64_cpu_capabilities 
>>>>> *__unused)
>>>>> +{
>>>>> +}
>>>>> +#endif    /* CONFIG_UNMAP_KERNEL_AT_EL0 */
>>>>> +
>>>>>    static int __init parse_kpti(char *str)
>>>>>    {
>>>>> @@ -1035,7 +1053,6 @@ static int __init parse_kpti(char *str)
>>>>>        return 0;
>>>>>    }
>>>>>    early_param("kpti", parse_kpti);
>>>>> -#endif    /* CONFIG_UNMAP_KERNEL_AT_EL0 */
>>>>>    #ifdef CONFIG_ARM64_HW_AFDBM
>>>>>    static inline void __cpu_enable_hw_dbm(void)
>>>>> @@ -1286,7 +1303,6 @@ static const struct arm64_cpu_capabilities
>>>>> arm64_features[] = {
>>>>>            .field_pos = ID_AA64PFR0_EL0_SHIFT,
>>>>>            .min_field_value = ID_AA64PFR0_EL0_32BIT_64BIT,
>>>>>        },
>>>>> -#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
>>>>>        {
>>>>>            .desc = "Kernel page table isolation (KPTI)",
>>>>>            .capability = ARM64_UNMAP_KERNEL_AT_EL0,
>>>>> @@ -1302,7 +1318,6 @@ static const struct arm64_cpu_capabilities
>>>>> arm64_features[] = {
>>>>>            .matches = unmap_kernel_at_el0,
>>>>>            .cpu_enable = kpti_install_ng_mappings,
>>>>>        },
>>>>> -#endif
>>>>>        {
>>>>>            /* FP/SIMD is not implemented */
>>>>>            .capability = ARM64_HAS_NO_FPSIMD,
>>>>> @@ -2063,3 +2078,15 @@ static int __init enable_mrs_emulation(void)
>>>>>    }
>>>>>    core_initcall(enable_mrs_emulation);
>>>>> +
>>>>> +ssize_t cpu_show_meltdown(struct device *dev, struct
>>>>> device_attribute *attr,
>>>>> +        char *buf)
>>>>> +{
>>>>> +    if (arm64_kernel_unmapped_at_el0())
>>>>> +        return sprintf(buf, "Mitigation: KPTI\n");
>>>>> +
>>>>> +    if (__meltdown_safe)
>>>>> +        return sprintf(buf, "Not affected\n");
>>>>
>>>> Shall those two checks be swapped? So it doesn't report about a KPTI
>>>> mitigation if the CPU is safe, but we enable KPTI because of KASLR
>>>> having enabled it? Or is that a different knob?
>>>
>>> Hmmm, I think having it this way reflects the fact that the machine is
>>> mitigated independent of whether it needed it. The force on case is 
>>> similar.
>>> The machine may not have needed the mitigation but it was forced on.
>>
>> So is this patchset about showing vulnerabilities _and_ mitigations or
>> just one of them?
>>
> 
> Well, I don't think there is a way to express a mitigated but not 
> vulnerable state in the current ABI. This set is mostly just to bring us 
> in line with the current ABI expectations.

Yeah, from a user's point of view this is probably just bikeshedding, 
because the system is safe either way. If there are good arguments later 
on to prefer "Not affected" over "Mitigated", we can still change it.

Cheers,
Andre.


* Re: [PATCH v5 00/10] arm64: add system vulnerability sysfs entries
  2019-02-27  1:05 [PATCH v5 00/10] arm64: add system vulnerability sysfs entries Jeremy Linton
                   ` (10 preceding siblings ...)
  2019-02-28 12:01 ` [PATCH v5 00/10] arm64: add system vulnerability sysfs entries Catalin Marinas
@ 2019-03-01 19:35 ` Stefan Wahren
  11 siblings, 0 replies; 35+ messages in thread
From: Stefan Wahren @ 2019-03-01 19:35 UTC (permalink / raw)
  To: Jeremy Linton, linux-arm-kernel
  Cc: mlangsdo, suzuki.poulose, marc.zyngier, catalin.marinas,
	julien.thierry, will.deacon, linux-kernel, Andre.Przywara,
	Dave.Martin, shankerd

Hi Jeremy,

> Jeremy Linton <jeremy.linton@arm.com> wrote on 27 February 2019 at 02:05:
> 
> 
> Arm64 machines should be displaying a human readable
> vulnerability status to speculative execution attacks in
> /sys/devices/system/cpu/vulnerabilities 
> 
> This series enables that behavior by providing the expected
> functions. Those functions expose the cpu errata and feature
> states, as well as whether firmware is responding appropriately
> to display the overall machine status. This means that in a
> heterogeneous machine we will only claim the machine is mitigated
> or safe if we are confident all booted cores are safe or
> mitigated.
> 

here are the results for Raspberry Pi 3 B:

l1tf:Not affected
meltdown:Not affected
spec_store_bypass:Not affected
spectre_v1:Mitigation: __user pointer sanitization
spectre_v2:Not affected
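[A listing in that file:value shape can be produced by grepping the sysfs directory; the snippet below runs against a mock temporary directory so it works anywhere, but on real hardware the path would be /sys/devices/system/cpu/vulnerabilities/.]

```shell
# Mock of /sys/devices/system/cpu/vulnerabilities/ (hypothetical files)
dir=$(mktemp -d)
printf 'Not affected\n' > "$dir/meltdown"
printf 'Mitigation: __user pointer sanitization\n' > "$dir/spectre_v1"

# grep prints "file:contents", matching the listing above
(cd "$dir" && grep -r . *)
```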

Tested-by: Stefan Wahren <stefan.wahren@i2se.com>


end of thread (newest message: ~2019-03-01 19:35 UTC)

Thread overview: 35+ messages
2019-02-27  1:05 [PATCH v5 00/10] arm64: add system vulnerability sysfs entries Jeremy Linton
2019-02-27  1:05 ` [PATCH v5 01/10] arm64: Provide a command line to disable spectre_v2 mitigation Jeremy Linton
2019-02-28 18:14   ` Suzuki K Poulose
2019-02-28 18:21     ` Catalin Marinas
2019-02-28 18:25       ` Suzuki K Poulose
2019-03-01  6:54   ` Andre Przywara
2019-02-27  1:05 ` [PATCH v5 02/10] arm64: add sysfs vulnerability show for spectre v1 Jeremy Linton
2019-02-28 18:29   ` Suzuki K Poulose
2019-03-01  6:54   ` Andre Przywara
2019-02-27  1:05 ` [PATCH v5 03/10] arm64: add sysfs vulnerability show for meltdown Jeremy Linton
2019-02-28 18:33   ` Suzuki K Poulose
2019-03-01  7:11   ` Andre Przywara
2019-03-01 16:12     ` Jeremy Linton
2019-03-01 16:20       ` Catalin Marinas
2019-03-01 16:53         ` Jeremy Linton
2019-03-01 17:15           ` Catalin Marinas
2019-03-01 17:30           ` Andre Przywara
2019-02-27  1:05 ` [PATCH v5 04/10] arm64: Advertise mitigation of Spectre-v2, or lack thereof Jeremy Linton
2019-03-01  6:57   ` Andre Przywara
2019-02-27  1:05 ` [PATCH v5 05/10] arm64: Use firmware to detect CPUs that are not affected by Spectre-v2 Jeremy Linton
2019-03-01  6:58   ` Andre Przywara
2019-02-27  1:05 ` [PATCH v5 06/10] arm64: Always enable spectrev2 vulnerability detection Jeremy Linton
2019-03-01  6:58   ` Andre Przywara
2019-02-27  1:05 ` [PATCH v5 07/10] arm64: add sysfs vulnerability show for spectre v2 Jeremy Linton
2019-03-01  6:59   ` Andre Przywara
2019-02-27  1:05 ` [PATCH v5 08/10] arm64: Always enable ssb vulnerability detection Jeremy Linton
2019-03-01  7:02   ` Andre Przywara
2019-03-01 16:16     ` Jeremy Linton
2019-02-27  1:05 ` [PATCH v5 09/10] arm64: add sysfs vulnerability show for speculative store bypass Jeremy Linton
2019-03-01  7:02   ` Andre Przywara
2019-03-01 16:41     ` Jeremy Linton
2019-02-27  1:05 ` [PATCH v5 10/10] arm64: enable generic CPU vulnerabilites support Jeremy Linton
2019-03-01  7:03   ` Andre Przywara
2019-02-28 12:01 ` [PATCH v5 00/10] arm64: add system vulnerability sysfs entries Catalin Marinas
2019-03-01 19:35 ` Stefan Wahren
