linux-kernel.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
* [PATCH 0/6] add system vulnerability sysfs entries
@ 2018-12-06 23:44 Jeremy Linton
  2018-12-06 23:44 ` [PATCH 1/6] arm64: kpti: move check for non-vulnerable CPUs to a function Jeremy Linton
                   ` (6 more replies)
  0 siblings, 7 replies; 24+ messages in thread
From: Jeremy Linton @ 2018-12-06 23:44 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	dave.martin, shankerd, mark.rutland, linux-kernel, ykaukab,
	Jeremy Linton

Part of this series was originally by Mian Yousaf Kaukab.

Arm64 machines should display a human-readable
vulnerability status for speculative execution attacks in
/sys/devices/system/cpu/vulnerabilities.

This series enables that behavior by providing the expected
functions. Those functions expose the cpu errata and feature
states, as well as whether firmware is responding appropriately
to display the overall machine status. This means that in a
heterogeneous machine we will only claim the machine is mitigated
or safe if we are confident all booted cores are safe or
mitigated. Otherwise, we will display unknown or unsafe
depending on how much of the machine configuration can
be assured.

Jeremy Linton (2):
  arm64: add sysfs vulnerability show for meltdown
  arm64: add sysfs vulnerability show for spectre v2

Mian Yousaf Kaukab (4):
  arm64: kpti: move check for non-vulnerable CPUs to a function
  arm64: add sysfs vulnerability show for spectre v1
  arm64: add sysfs vulnerability show for speculative store bypass
  arm64: enable generic CPU vulnerabilities support

 arch/arm64/Kconfig             |   1 +
 arch/arm64/kernel/cpu_errata.c | 110 +++++++++++++++++++++++++++++++--
 arch/arm64/kernel/cpufeature.c |  45 +++++++++++---
 3 files changed, 143 insertions(+), 13 deletions(-)

-- 
2.17.2


^ permalink raw reply	[flat|nested] 24+ messages in thread

* [PATCH 1/6] arm64: kpti: move check for non-vulnerable CPUs to a function
  2018-12-06 23:44 [PATCH 0/6] add system vulnerability sysfs entries Jeremy Linton
@ 2018-12-06 23:44 ` Jeremy Linton
  2018-12-13  9:13   ` Julien Thierry
  2018-12-06 23:44 ` [PATCH 2/6] arm64: add sysfs vulnerability show for meltdown Jeremy Linton
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 24+ messages in thread
From: Jeremy Linton @ 2018-12-06 23:44 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	dave.martin, shankerd, mark.rutland, linux-kernel, ykaukab,
	Jeremy Linton

From: Mian Yousaf Kaukab <ykaukab@suse.de>

Add is_cpu_meltdown_safe(), which whitelists known safe cores.

Signed-off-by: Mian Yousaf Kaukab <ykaukab@suse.de>
[Moved location of function]
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
---
 arch/arm64/kernel/cpufeature.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index aec5ecb85737..242898395f68 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -908,8 +908,7 @@ has_useable_cnp(const struct arm64_cpu_capabilities *entry, int scope)
 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 static int __kpti_forced; /* 0: not forced, >0: forced on, <0: forced off */
 
-static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
-				int scope)
+static bool is_cpu_meltdown_safe(void)
 {
 	/* List of CPUs that are not vulnerable and don't need KPTI */
 	static const struct midr_range kpti_safe_list[] = {
@@ -917,6 +916,16 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
 		MIDR_ALL_VERSIONS(MIDR_BRCM_VULCAN),
 		{ /* sentinel */ }
 	};
+	/* Don't force KPTI for CPUs that are not vulnerable */
+	if (is_midr_in_range_list(read_cpuid_id(), kpti_safe_list))
+		return true;
+
+	return false;
+}
+
+static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
+				int scope)
+{
 	char const *str = "command line option";
 
 	/*
@@ -940,8 +949,7 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
 	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE))
 		return true;
 
-	/* Don't force KPTI for CPUs that are not vulnerable */
-	if (is_midr_in_range_list(read_cpuid_id(), kpti_safe_list))
+	if (is_cpu_meltdown_safe())
 		return false;
 
 	/* Defer to CPU feature registers */
-- 
2.17.2


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 2/6] arm64: add sysfs vulnerability show for meltdown
  2018-12-06 23:44 [PATCH 0/6] add system vulnerability sysfs entries Jeremy Linton
  2018-12-06 23:44 ` [PATCH 1/6] arm64: kpti: move check for non-vulnerable CPUs to a function Jeremy Linton
@ 2018-12-06 23:44 ` Jeremy Linton
  2018-12-13  9:23   ` Julien Thierry
  2018-12-06 23:44 ` [PATCH 3/6] arm64: add sysfs vulnerability show for spectre v1 Jeremy Linton
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 24+ messages in thread
From: Jeremy Linton @ 2018-12-06 23:44 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	dave.martin, shankerd, mark.rutland, linux-kernel, ykaukab,
	Jeremy Linton

Add a simple state machine which will track whether
all the online cores in a machine are vulnerable.

Once that is done we have a fairly authoritative view
of the machine vulnerability, which allows us to make a
judgment about machine safety if it hasn't been mitigated.

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
---
 arch/arm64/kernel/cpufeature.c | 31 ++++++++++++++++++++++++++-----
 1 file changed, 26 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 242898395f68..bea9adfef7fa 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -905,6 +905,8 @@ has_useable_cnp(const struct arm64_cpu_capabilities *entry, int scope)
 	return has_cpuid_feature(entry, scope);
 }
 
+static enum { A64_MELT_UNSET, A64_MELT_SAFE, A64_MELT_UNKN } __meltdown_safe = A64_MELT_UNSET;
+
 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 static int __kpti_forced; /* 0: not forced, >0: forced on, <0: forced off */
 
@@ -928,6 +930,15 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
 {
 	char const *str = "command line option";
 
+	bool meltdown_safe = is_cpu_meltdown_safe() ||
+		has_cpuid_feature(entry, scope);
+
+	/* Only safe if all booted cores are known safe */
+	if (meltdown_safe && __meltdown_safe == A64_MELT_UNSET)
+		__meltdown_safe = A64_MELT_SAFE;
+	else if (!meltdown_safe)
+		__meltdown_safe = A64_MELT_UNKN;
+
 	/*
 	 * For reasons that aren't entirely clear, enabling KPTI on Cavium
 	 * ThunderX leads to apparent I-cache corruption of kernel text, which
@@ -949,11 +960,7 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
 	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE))
 		return true;
 
-	if (is_cpu_meltdown_safe())
-		return false;
-
-	/* Defer to CPU feature registers */
-	return !has_cpuid_feature(entry, scope);
+	return !meltdown_safe;
 }
 
 static void
@@ -1920,3 +1927,17 @@ static int __init enable_mrs_emulation(void)
 }
 
 core_initcall(enable_mrs_emulation);
+
+#ifdef CONFIG_GENERIC_CPU_VULNERABILITIES
+ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr,
+		char *buf)
+{
+	if (arm64_kernel_unmapped_at_el0())
+		return sprintf(buf, "Mitigation: KPTI\n");
+
+	if (__meltdown_safe == A64_MELT_SAFE)
+		return sprintf(buf, "Not affected\n");
+
+	return sprintf(buf, "Unknown\n");
+}
+#endif
-- 
2.17.2


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 3/6] arm64: add sysfs vulnerability show for spectre v1
  2018-12-06 23:44 [PATCH 0/6] add system vulnerability sysfs entries Jeremy Linton
  2018-12-06 23:44 ` [PATCH 1/6] arm64: kpti: move check for non-vulnerable CPUs to a function Jeremy Linton
  2018-12-06 23:44 ` [PATCH 2/6] arm64: add sysfs vulnerability show for meltdown Jeremy Linton
@ 2018-12-06 23:44 ` Jeremy Linton
  2018-12-06 23:44 ` [PATCH 4/6] arm64: add sysfs vulnerability show for spectre v2 Jeremy Linton
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 24+ messages in thread
From: Jeremy Linton @ 2018-12-06 23:44 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	dave.martin, shankerd, mark.rutland, linux-kernel, ykaukab,
	Jeremy Linton

From: Mian Yousaf Kaukab <ykaukab@suse.de>

Spectre v1 has been mitigated, and the mitigation is
always active.

Signed-off-by: Mian Yousaf Kaukab <ykaukab@suse.de>
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
---
 arch/arm64/kernel/cpu_errata.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 6ad715d67df8..559ecdee6fd2 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -757,3 +757,13 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 	{
 	}
 };
+
+#ifdef CONFIG_GENERIC_CPU_VULNERABILITIES
+
+ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr,
+		char *buf)
+{
+	return sprintf(buf, "Mitigation: __user pointer sanitization\n");
+}
+
+#endif
-- 
2.17.2


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 4/6] arm64: add sysfs vulnerability show for spectre v2
  2018-12-06 23:44 [PATCH 0/6] add system vulnerability sysfs entries Jeremy Linton
                   ` (2 preceding siblings ...)
  2018-12-06 23:44 ` [PATCH 3/6] arm64: add sysfs vulnerability show for spectre v1 Jeremy Linton
@ 2018-12-06 23:44 ` Jeremy Linton
  2018-12-13 11:09   ` Julien Thierry
  2018-12-06 23:44 ` [PATCH 5/6] arm64: add sysfs vulnerability show for speculative store bypass Jeremy Linton
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 24+ messages in thread
From: Jeremy Linton @ 2018-12-06 23:44 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	dave.martin, shankerd, mark.rutland, linux-kernel, ykaukab,
	Jeremy Linton

Add code to track whether all the cores in the machine are
vulnerable, and whether all the vulnerable cores have been
mitigated.

Once we have that information we can add the sysfs stub and
provide an accurate view of what is known about the machine.

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
---
 arch/arm64/kernel/cpu_errata.c | 72 +++++++++++++++++++++++++++++++---
 1 file changed, 67 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 559ecdee6fd2..6505c93d507e 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -109,6 +109,11 @@ cpu_enable_trap_ctr_access(const struct arm64_cpu_capabilities *__unused)
 
 atomic_t arm64_el2_vector_last_slot = ATOMIC_INIT(-1);
 
+#if defined(CONFIG_HARDEN_BRANCH_PREDICTOR) || defined(CONFIG_GENERIC_CPU_VULNERABILITIES)
+/* Track overall mitigation state. We are only mitigated if all cores are ok */
+static enum { A64_HBP_UNSET, A64_HBP_MIT, A64_HBP_NOTMIT } __hardenbp_enab = A64_HBP_UNSET;
+#endif
+
 #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
 #include <asm/mmu_context.h>
 #include <asm/cacheflush.h>
@@ -231,15 +236,19 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
 	if (!entry->matches(entry, SCOPE_LOCAL_CPU))
 		return;
 
-	if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
+	if (psci_ops.smccc_version == SMCCC_VERSION_1_0) {
+		__hardenbp_enab = A64_HBP_NOTMIT;
 		return;
+	}
 
 	switch (psci_ops.conduit) {
 	case PSCI_CONDUIT_HVC:
 		arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
 				  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
-		if ((int)res.a0 < 0)
+		if ((int)res.a0 < 0) {
+			__hardenbp_enab = A64_HBP_NOTMIT;
 			return;
+		}
 		cb = call_hvc_arch_workaround_1;
 		/* This is a guest, no need to patch KVM vectors */
 		smccc_start = NULL;
@@ -249,14 +258,17 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
 	case PSCI_CONDUIT_SMC:
 		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
 				  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
-		if ((int)res.a0 < 0)
+		if ((int)res.a0 < 0) {
+			__hardenbp_enab = A64_HBP_NOTMIT;
 			return;
+		}
 		cb = call_smc_arch_workaround_1;
 		smccc_start = __smccc_workaround_1_smc_start;
 		smccc_end = __smccc_workaround_1_smc_end;
 		break;
 
 	default:
+		__hardenbp_enab = A64_HBP_NOTMIT;
 		return;
 	}
 
@@ -266,6 +278,9 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
 
 	install_bp_hardening_cb(entry, cb, smccc_start, smccc_end);
 
+	if (__hardenbp_enab == A64_HBP_UNSET)
+		__hardenbp_enab = A64_HBP_MIT;
+
 	return;
 }
 #endif	/* CONFIG_HARDEN_BRANCH_PREDICTOR */
@@ -539,7 +554,36 @@ multi_entry_cap_cpu_enable(const struct arm64_cpu_capabilities *entry)
 			caps->cpu_enable(caps);
 }
 
-#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+#if defined(CONFIG_HARDEN_BRANCH_PREDICTOR) || \
+	defined(CONFIG_GENERIC_CPU_VULNERABILITIES)
+
+static enum { A64_SV2_UNSET, A64_SV2_SAFE, A64_SV2_UNSAFE } __spectrev2_safe = A64_SV2_UNSET;
+
+/*
+ * Track overall bp hardening for all heterogeneous cores in the machine.
+ * We are only considered "safe" if all booted cores are known safe.
+ */
+static bool __maybe_unused
+check_branch_predictor(const struct arm64_cpu_capabilities *entry, int scope)
+{
+	bool is_vul;
+	bool has_csv2;
+	u64 pfr0;
+
+	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
+
+	is_vul = is_midr_in_range_list(read_cpuid_id(), entry->midr_range_list);
+
+	pfr0 = read_cpuid(ID_AA64PFR0_EL1);
+	has_csv2 = cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_CSV2_SHIFT);
+
+	if (is_vul)
+		__spectrev2_safe = A64_SV2_UNSAFE;
+	else if (__spectrev2_safe == A64_SV2_UNSET && has_csv2)
+		__spectrev2_safe = A64_SV2_SAFE;
+
+	return is_vul;
+}
 
 /*
  * List of CPUs where we need to issue a psci call to
@@ -728,7 +772,9 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 	{
 		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
 		.cpu_enable = enable_smccc_arch_workaround_1,
-		ERRATA_MIDR_RANGE_LIST(arm64_bp_harden_smccc_cpus),
+		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
+		.matches = check_branch_predictor,
+		.midr_range_list = arm64_bp_harden_smccc_cpus,
 	},
 #endif
 #ifdef CONFIG_HARDEN_EL2_VECTORS
@@ -766,4 +812,20 @@ ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr,
 	return sprintf(buf, "Mitigation: __user pointer sanitization\n");
 }
 
+ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr,
+		char *buf)
+{
+	switch (__spectrev2_safe) {
+	case A64_SV2_SAFE:
+		return sprintf(buf, "Not affected\n");
+	case A64_SV2_UNSAFE:
+		if (__hardenbp_enab == A64_HBP_MIT)
+			return sprintf(buf,
+				"Mitigation: Branch predictor hardening\n");
+		return sprintf(buf, "Vulnerable\n");
+	default:
+		return sprintf(buf, "Unknown\n");
+	}
+}
+
 #endif
-- 
2.17.2


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 5/6] arm64: add sysfs vulnerability show for speculative store bypass
  2018-12-06 23:44 [PATCH 0/6] add system vulnerability sysfs entries Jeremy Linton
                   ` (3 preceding siblings ...)
  2018-12-06 23:44 ` [PATCH 4/6] arm64: add sysfs vulnerability show for spectre v2 Jeremy Linton
@ 2018-12-06 23:44 ` Jeremy Linton
  2018-12-14 10:34   ` Steven Price
  2018-12-06 23:44 ` [PATCH 6/6] arm64: enable generic CPU vulnerabilities support Jeremy Linton
  2018-12-13 12:07 ` [PATCH 0/6] add system vulnerability sysfs entries Dave Martin
  6 siblings, 1 reply; 24+ messages in thread
From: Jeremy Linton @ 2018-12-06 23:44 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	dave.martin, shankerd, mark.rutland, linux-kernel, ykaukab,
	Jeremy Linton

From: Mian Yousaf Kaukab <ykaukab@suse.de>

Return status based on ssbd_state and the arm64 SSBS feature.
Return the string "Unknown" if CONFIG_ARM64_SSBD is disabled
or ARM_SMCCC_ARCH_WORKAROUND_2 is not available in the firmware.

Signed-off-by: Mian Yousaf Kaukab <ykaukab@suse.de>
[Added SSBS logic]
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
---
 arch/arm64/kernel/cpu_errata.c | 28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 6505c93d507e..8aeb5ca38db8 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -423,6 +423,7 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
 		ssbd_state = ARM64_SSBD_UNKNOWN;
 		return false;
 
+	/* machines with mixed mitigation requirements must not return this */
 	case SMCCC_RET_NOT_REQUIRED:
 		pr_info_once("%s mitigation not required\n", entry->desc);
 		ssbd_state = ARM64_SSBD_MITIGATED;
@@ -828,4 +829,31 @@ ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr,
 	}
 }
 
+ssize_t cpu_show_spec_store_bypass(struct device *dev,
+		struct device_attribute *attr, char *buf)
+{
+	/*
+	 * Two assumptions: first, that arm64_get_ssbd_state() reflects the
+	 * worst case for heterogeneous machines; second, that if SSBS is
+	 * supported, it is supported by all cores.
+	 */
+	switch (arm64_get_ssbd_state()) {
+	case ARM64_SSBD_MITIGATED:
+		return sprintf(buf, "Not affected\n");
+
+	case ARM64_SSBD_KERNEL:
+	case ARM64_SSBD_FORCE_ENABLE:
+		if (cpus_have_cap(ARM64_SSBS))
+			return sprintf(buf, "Not affected\n");
+		return sprintf(buf,
+			"Mitigation: Speculative Store Bypass disabled\n");
+
+	case ARM64_SSBD_FORCE_DISABLE:
+		return sprintf(buf, "Vulnerable\n");
+
+	default: /* ARM64_SSBD_UNKNOWN */
+		return sprintf(buf, "Unknown\n");
+	}
+}
+
 #endif
-- 
2.17.2


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 6/6] arm64: enable generic CPU vulnerabilities support
  2018-12-06 23:44 [PATCH 0/6] add system vulnerability sysfs entries Jeremy Linton
                   ` (4 preceding siblings ...)
  2018-12-06 23:44 ` [PATCH 5/6] arm64: add sysfs vulnerability show for speculative store bypass Jeremy Linton
@ 2018-12-06 23:44 ` Jeremy Linton
  2018-12-13 12:07 ` [PATCH 0/6] add system vulnerability sysfs entries Dave Martin
  6 siblings, 0 replies; 24+ messages in thread
From: Jeremy Linton @ 2018-12-06 23:44 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	dave.martin, shankerd, mark.rutland, linux-kernel, ykaukab,
	Jeremy Linton

From: Mian Yousaf Kaukab <ykaukab@suse.de>

Enable CPU vulnerability show functions for spectre_v1, spectre_v2,
meltdown and store-bypass.

Signed-off-by: Mian Yousaf Kaukab <ykaukab@suse.de>
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
---
 arch/arm64/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index ea2ab0330e3a..4b20d46c959b 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -89,6 +89,7 @@ config ARM64
 	select GENERIC_CLOCKEVENTS
 	select GENERIC_CLOCKEVENTS_BROADCAST
 	select GENERIC_CPU_AUTOPROBE
+	select GENERIC_CPU_VULNERABILITIES
 	select GENERIC_EARLY_IOREMAP
 	select GENERIC_IDLE_POLL_SETUP
 	select GENERIC_IRQ_MULTI_HANDLER
-- 
2.17.2


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* Re: [PATCH 1/6] arm64: kpti: move check for non-vulnerable CPUs to a function
  2018-12-13  9:13   ` Julien Thierry
@ 2018-12-12 14:36     ` Jeremy Linton
  0 siblings, 0 replies; 24+ messages in thread
From: Jeremy Linton @ 2018-12-12 14:36 UTC (permalink / raw)
  To: Julien Thierry, linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	dave.martin, shankerd, mark.rutland, linux-kernel, ykaukab

Hi Julien,

Thanks for looking at this,

On 12/13/2018 03:13 AM, Julien Thierry wrote:
> Hi,
> 
> On 06/12/2018 23:44, Jeremy Linton wrote:
>> From: Mian Yousaf Kaukab <ykaukab@suse.de>
>>
>> Add is_cpu_meltdown_safe(), which whitelists known safe cores.
>>
>> Signed-off-by: Mian Yousaf Kaukab <ykaukab@suse.de>
>> [Moved location of function]
>> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
>> ---
>>   arch/arm64/kernel/cpufeature.c | 16 ++++++++++++----
>>   1 file changed, 12 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
>> index aec5ecb85737..242898395f68 100644
>> --- a/arch/arm64/kernel/cpufeature.c
>> +++ b/arch/arm64/kernel/cpufeature.c
>> @@ -908,8 +908,7 @@ has_useable_cnp(const struct arm64_cpu_capabilities *entry, int scope)
>>   #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
>>   static int __kpti_forced; /* 0: not forced, >0: forced on, <0: forced off */
>>   
>> -static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
>> -				int scope)
>> +static bool is_cpu_meltdown_safe(void)
>>   {
>>   	/* List of CPUs that are not vulnerable and don't need KPTI */
>>   	static const struct midr_range kpti_safe_list[] = {
>> @@ -917,6 +916,16 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
>>   		MIDR_ALL_VERSIONS(MIDR_BRCM_VULCAN),
>>   		{ /* sentinel */ }
>>   	};
>> +	/* Don't force KPTI for CPUs that are not vulnerable */
> 
> This is really a nit, but that comment would make more sense where
> is_cpu_meltdown_safe() is called since unmap_kernel_at_el0 is the one
> deciding whether to apply KPTI, is_cpu_meltdown_safe() just states
> whether the core is safe or not.

That is a good point, thanks.


> 
> Otherwise:
> 
> Reviewed-by: Julien Thierry <julien.thierry@arm.com>
> 
> Cheers,
> 
> Julien
> 
>> +	if (is_midr_in_range_list(read_cpuid_id(), kpti_safe_list))
>> +		return true;
>> +
>> +	return false;
>> +}
>> +
>> +static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
>> +				int scope)
>> +{
>>   	char const *str = "command line option";
>>   
>>   	/*
>> @@ -940,8 +949,7 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
>>   	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE))
>>   		return true;
>>   
>> -	/* Don't force KPTI for CPUs that are not vulnerable */
>> -	if (is_midr_in_range_list(read_cpuid_id(), kpti_safe_list))
>> +	if (is_cpu_meltdown_safe())
>>   		return false;
>>   
>>   	/* Defer to CPU feature registers */
>>
> 


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 2/6] arm64: add sysfs vulnerability show for meltdown
  2018-12-13 10:46     ` Julien Thierry
@ 2018-12-12 14:49       ` Jeremy Linton
  2018-12-14  8:55         ` Julien Thierry
  0 siblings, 1 reply; 24+ messages in thread
From: Jeremy Linton @ 2018-12-12 14:49 UTC (permalink / raw)
  To: Julien Thierry, linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	dave.martin, shankerd, mark.rutland, linux-kernel, ykaukab

Hi Julien,

Thanks for taking a look at this!

On 12/13/2018 04:46 AM, Julien Thierry wrote:
> 
> 
> On 13/12/2018 09:23, Julien Thierry wrote:
>> Hi Jeremy,
>>
>> On 06/12/2018 23:44, Jeremy Linton wrote:
>>> Add a simple state machine which will track whether
>>> all the online cores in a machine are vulnerable.
>>>
>>> Once that is done we have a fairly authoritative view
>>> of the machine vulnerability, which allows us to make a
>>> judgment about machine safety if it hasn't been mitigated.
>>>
>>> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
>>> ---
>>>   arch/arm64/kernel/cpufeature.c | 31 ++++++++++++++++++++++++++-----
>>>   1 file changed, 26 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
>>> index 242898395f68..bea9adfef7fa 100644
>>> --- a/arch/arm64/kernel/cpufeature.c
>>> +++ b/arch/arm64/kernel/cpufeature.c
>>> @@ -905,6 +905,8 @@ has_useable_cnp(const struct arm64_cpu_capabilities *entry, int scope)
>>>   	return has_cpuid_feature(entry, scope);
>>>   }
>>>   
>>> +static enum { A64_MELT_UNSET, A64_MELT_SAFE, A64_MELT_UNKN } __meltdown_safe = A64_MELT_UNSET;
>>> +
>>
>> I'm wondering, do we really need that tri-state?
>>
>> Can't we consider that we are safe and move to unsafe/unknown if any
>> CPU during bring-up is not in the safe list?
>>
>> The only user of this is cpu_show_meltdown, but I don't imagine it'll
>> get called before unmap_kernel_at_el0() is called for the boot CPU which
>> should initialise that state.
>>
>> Or is there another reason for having that UNSET state?
>>
> 
> Ok, I think I get the point of the UNSET: with #ifndef
> CONFIG_UNMAP_KERNEL_AT_EL0 we don't set the state. But does that mean we
> always fall into the "Unknown" case when we don't build KPTI in? Is that
> desirable?
> 
> If so, I'd suggest replacing the tri-state with the following change:
> 
> 
>>> +
>>> +#ifdef CONFIG_GENERIC_CPU_VULNERABILITIES
>>> +ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr,
>>> +		char *buf)
>>> +{
>>> +	if (arm64_kernel_unmapped_at_el0())
>>> +		return sprintf(buf, "Mitigation: KPTI\n");
>>> +
> 
> 	if (!IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0) || !meltdown_safe)
> 		sprintf(buf, "Unknown\n");
> 	else
> 		sprintf(buf, "Not affected\n");

If I'm understanding what you're suggesting:

Isn't this only checking the current core, rather than the whole 
machine? IIRC that was the fundamental complaint with the original set.





^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 0/6] add system vulnerability sysfs entries
  2018-12-13 12:07 ` [PATCH 0/6] add system vulnerability sysfs entries Dave Martin
@ 2018-12-12 15:48   ` Jeremy Linton
  2018-12-13 19:26     ` Dave Martin
  0 siblings, 1 reply; 24+ messages in thread
From: Jeremy Linton @ 2018-12-12 15:48 UTC (permalink / raw)
  To: Dave Martin
  Cc: linux-arm-kernel, mark.rutland, suzuki.poulose, marc.zyngier,
	catalin.marinas, will.deacon, linux-kernel, ykaukab, shankerd

Hi Dave,

Thanks for looking at this!

On 12/13/2018 06:07 AM, Dave Martin wrote:
> On Thu, Dec 06, 2018 at 05:44:02PM -0600, Jeremy Linton wrote:
>> Part of this series was originally by Mian Yousaf Kaukab.
>>
>> Arm64 machines should display a human-readable
>> vulnerability status for speculative execution attacks in
>> /sys/devices/system/cpu/vulnerabilities.
> 
> Is there any agreement on the strings that will be returned in there?
> 
> A quick search didn't find anything obvious upstream.  There is
> documentation proposed in [1], but I don't know what happened to it and
> it doesn't define the mitigation strings at all.  (I didn't follow the
> discussion, so there is likely background here I'm not aware of.)
> 
> If the mitigation strings are meaningful at all, they really ought to be
> documented somewhere since this is ABI.

I think they are in testing? But that documentation is missing the 
"Unknown" state which tends to be our default case for "we don't have a 
complete white/black list", or "mitigation disabled, we don't know if 
you're vulnerable", etc.

I'm not sure I like the "Unknown" state, but we can try to add it to the 
documentation.

> 
>> This series enables that behavior by providing the expected
>> functions. Those functions expose the cpu errata and feature
>> states, as well as whether firmware is responding appropriately
>> to display the overall machine status. This means that in a
>> heterogeneous machine we will only claim the machine is mitigated
>> or safe if we are confident all booted cores are safe or
>> mitigated. Otherwise, we will display unknown or unsafe
>> depending on how much of the machine configuration can
>> be assured.
> 
> Can the vulnerability status change once we enter userspace?

Generally no, although for heterogeneous machines I think the answer
here is yes: a user could check the state and have it read "Not
affected", then bring another core online which causes the state to
change, at which point re-reading /sys may reflect another state. OTOH,
given that we tend to default to mitigated, usually this shouldn't be
an issue unless someone has disabled the mitigation.


> 
> I see no locking or other concurrency protections, and various global
> variables that could be __ro_after_init if nothing will change them
> after boot.

I think the state changes are all protected by the fact that bringing
a core online/offline is serialized.

> 
> If they can change after boot, userspace has no way to be notified,

Is checking on hotplug notification sufficient?


> 
> (I haven't grokked the patches fully, so the answer to this question may
> be reasonably straightforward...)
> 
> 
> Cheers
> ---Dave
> 
> [1] https://lkml.org/lkml/2018/1/8/145
> 


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 1/6] arm64: kpti: move check for non-vulnerable CPUs to a function
  2018-12-06 23:44 ` [PATCH 1/6] arm64: kpti: move check for non-vulnerable CPUs to a function Jeremy Linton
@ 2018-12-13  9:13   ` Julien Thierry
  2018-12-12 14:36     ` Jeremy Linton
  0 siblings, 1 reply; 24+ messages in thread
From: Julien Thierry @ 2018-12-13  9:13 UTC (permalink / raw)
  To: Jeremy Linton, linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	dave.martin, shankerd, mark.rutland, linux-kernel, ykaukab

Hi,

On 06/12/2018 23:44, Jeremy Linton wrote:
> From: Mian Yousaf Kaukab <ykaukab@suse.de>
> 
> Add is_cpu_meltdown_safe(), which whitelists known safe cores.
> 
> Signed-off-by: Mian Yousaf Kaukab <ykaukab@suse.de>
> [Moved location of function]
> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
> ---
>  arch/arm64/kernel/cpufeature.c | 16 ++++++++++++----
>  1 file changed, 12 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index aec5ecb85737..242898395f68 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -908,8 +908,7 @@ has_useable_cnp(const struct arm64_cpu_capabilities *entry, int scope)
>  #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
>  static int __kpti_forced; /* 0: not forced, >0: forced on, <0: forced off */
>  
> -static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
> -				int scope)
> +static bool is_cpu_meltdown_safe(void)
>  {
>  	/* List of CPUs that are not vulnerable and don't need KPTI */
>  	static const struct midr_range kpti_safe_list[] = {
> @@ -917,6 +916,16 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
>  		MIDR_ALL_VERSIONS(MIDR_BRCM_VULCAN),
>  		{ /* sentinel */ }
>  	};
> +	/* Don't force KPTI for CPUs that are not vulnerable */

This is really a nit, but that comment would make more sense where
is_cpu_meltdown_safe() is called since unmap_kernel_at_el0 is the one
deciding whether to apply KPTI, is_cpu_meltdown_safe() just states
whether the core is safe or not.

Otherwise:

Reviewed-by: Julien Thierry <julien.thierry@arm.com>

Cheers,

Julien

> +	if (is_midr_in_range_list(read_cpuid_id(), kpti_safe_list))
> +		return true;
> +
> +	return false;
> +}
> +
> +static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
> +				int scope)
> +{
>  	char const *str = "command line option";
>  
>  	/*
> @@ -940,8 +949,7 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
>  	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE))
>  		return true;
>  
> -	/* Don't force KPTI for CPUs that are not vulnerable */
> -	if (is_midr_in_range_list(read_cpuid_id(), kpti_safe_list))
> +	if (is_cpu_meltdown_safe())
>  		return false;
>  
>  	/* Defer to CPU feature registers */
> 

-- 
Julien Thierry

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 2/6] arm64: add sysfs vulnerability show for meltdown
  2018-12-06 23:44 ` [PATCH 2/6] arm64: add sysfs vulnerability show for meltdown Jeremy Linton
@ 2018-12-13  9:23   ` Julien Thierry
  2018-12-13 10:46     ` Julien Thierry
  0 siblings, 1 reply; 24+ messages in thread
From: Julien Thierry @ 2018-12-13  9:23 UTC (permalink / raw)
  To: Jeremy Linton, linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	dave.martin, shankerd, mark.rutland, linux-kernel, ykaukab

Hi Jeremy,

On 06/12/2018 23:44, Jeremy Linton wrote:
> Add a simple state machine which will track whether
> all the online cores in a machine are vulnerable.
> 
> Once that is done we have a fairly authoritative view
> of the machine vulnerability, which allows us to make a
> judgment about machine safety if it hasn't been mitigated.
> 
> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
> ---
>  arch/arm64/kernel/cpufeature.c | 31 ++++++++++++++++++++++++++-----
>  1 file changed, 26 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 242898395f68..bea9adfef7fa 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -905,6 +905,8 @@ has_useable_cnp(const struct arm64_cpu_capabilities *entry, int scope)
>  	return has_cpuid_feature(entry, scope);
>  }
>  
> +static enum { A64_MELT_UNSET, A64_MELT_SAFE, A64_MELT_UNKN } __meltdown_safe = A64_MELT_UNSET;
> +

I'm wondering, do we really need that tri state?

Can't we consider that we are safe and move to unsafe/unknown if any
CPU during bring-up is not in the safe list?

The only user of this is cpu_show_meltdown, but I don't imagine it'll
get called before unmap_kernel_at_el0() is called for the boot CPU which
should initialise that state.

Or is there another reason for having that UNSET state?

Thanks,

>  #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
>  static int __kpti_forced; /* 0: not forced, >0: forced on, <0: forced off */
>  
> @@ -928,6 +930,15 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
>  {
>  	char const *str = "command line option";
>  
> +	bool meltdown_safe = is_cpu_meltdown_safe() ||
> +		has_cpuid_feature(entry, scope);
> +
> +	/* Only safe if all booted cores are known safe */
> +	if (meltdown_safe && __meltdown_safe == A64_MELT_UNSET)
> +		__meltdown_safe = A64_MELT_SAFE;
> +	else if (!meltdown_safe)
> +		__meltdown_safe = A64_MELT_UNKN;
> +
>  	/*
>  	 * For reasons that aren't entirely clear, enabling KPTI on Cavium
>  	 * ThunderX leads to apparent I-cache corruption of kernel text, which
> @@ -949,11 +960,7 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
>  	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE))
>  		return true;
>  
> -	if (is_cpu_meltdown_safe())
> -		return false;
> -
> -	/* Defer to CPU feature registers */
> -	return !has_cpuid_feature(entry, scope);
> +	return !meltdown_safe;
>  }
>  
>  static void
> @@ -1920,3 +1927,17 @@ static int __init enable_mrs_emulation(void)
>  }
>  
>  core_initcall(enable_mrs_emulation);
> +
> +#ifdef CONFIG_GENERIC_CPU_VULNERABILITIES
> +ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr,
> +		char *buf)
> +{
> +	if (arm64_kernel_unmapped_at_el0())
> +		return sprintf(buf, "Mitigation: KPTI\n");
> +
> +	if (__meltdown_safe == A64_MELT_SAFE)
> +		return sprintf(buf, "Not affected\n");
> +
> +	return sprintf(buf, "Unknown\n");
> +}
> +#endif
> 

-- 
Julien Thierry


* Re: [PATCH 2/6] arm64: add sysfs vulnerability show for meltdown
  2018-12-13  9:23   ` Julien Thierry
@ 2018-12-13 10:46     ` Julien Thierry
  2018-12-12 14:49       ` Jeremy Linton
  0 siblings, 1 reply; 24+ messages in thread
From: Julien Thierry @ 2018-12-13 10:46 UTC (permalink / raw)
  To: Jeremy Linton, linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	dave.martin, shankerd, mark.rutland, linux-kernel, ykaukab



On 13/12/2018 09:23, Julien Thierry wrote:
> Hi Jeremy,
> 
> On 06/12/2018 23:44, Jeremy Linton wrote:
>> Add a simple state machine which will track whether
>> all the online cores in a machine are vulnerable.
>>
>> Once that is done we have a fairly authoritative view
>> of the machine vulnerability, which allows us to make a
>> judgment about machine safety if it hasn't been mitigated.
>>
>> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
>> ---
>>  arch/arm64/kernel/cpufeature.c | 31 ++++++++++++++++++++++++++-----
>>  1 file changed, 26 insertions(+), 5 deletions(-)
>>
>> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
>> index 242898395f68..bea9adfef7fa 100644
>> --- a/arch/arm64/kernel/cpufeature.c
>> +++ b/arch/arm64/kernel/cpufeature.c
>> @@ -905,6 +905,8 @@ has_useable_cnp(const struct arm64_cpu_capabilities *entry, int scope)
>>  	return has_cpuid_feature(entry, scope);
>>  }
>>  
>> +static enum { A64_MELT_UNSET, A64_MELT_SAFE, A64_MELT_UNKN } __meltdown_safe = A64_MELT_UNSET;
>> +
> 
> I'm wondering, do we really need that tri state?
> 
> Can't we consider that we are safe and move to unsafe/unknown if any
> CPU during bring-up is not in the safe list?
> 
> The only user of this is cpu_show_meltdown, but I don't imagine it'll
> get called before unmap_kernel_at_el0() is called for the boot CPU which
> should initialise that state.
> 
> Or is there another reason for having that UNSET state?
> 

Ok, I think I get the point of the UNSET: with #ifndef
CONFIG_UNMAP_KERNEL_AT_EL0 we don't set the state. But does that mean we
always fall in the "Unknown" case when we don't build kpti in? Is that
desirable?

If so, I'd suggest replacing the tri-state with the following change:


>> +
>> +#ifdef CONFIG_GENERIC_CPU_VULNERABILITIES
>> +ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr,
>> +		char *buf)
>> +{
>> +	if (arm64_kernel_unmapped_at_el0())
>> +		return sprintf(buf, "Mitigation: KPTI\n");
>> +

	if (!IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0) || !meltdown_safe)
		return sprintf(buf, "Unknown\n");
	else
		return sprintf(buf, "Not affected\n");


Thanks,

-- 
Julien Thierry


* Re: [PATCH 4/6] arm64: add sysfs vulnerability show for spectre v2
  2018-12-06 23:44 ` [PATCH 4/6] arm64: add sysfs vulnerability show for spectre v2 Jeremy Linton
@ 2018-12-13 11:09   ` Julien Thierry
  2019-01-02 22:19     ` Jeremy Linton
  0 siblings, 1 reply; 24+ messages in thread
From: Julien Thierry @ 2018-12-13 11:09 UTC (permalink / raw)
  To: Jeremy Linton, linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	dave.martin, shankerd, mark.rutland, linux-kernel, ykaukab



On 06/12/2018 23:44, Jeremy Linton wrote:
> Add code to track whether all the cores in the machine are
> vulnerable, and whether all the vulnerable cores have been
> mitigated.
> 
> Once we have that information we can add the sysfs stub and
> provide an accurate view of what is known about the machine.
> 
> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
> ---
>  arch/arm64/kernel/cpu_errata.c | 72 +++++++++++++++++++++++++++++++---
>  1 file changed, 67 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
> index 559ecdee6fd2..6505c93d507e 100644
> --- a/arch/arm64/kernel/cpu_errata.c
> +++ b/arch/arm64/kernel/cpu_errata.c

[...]

> @@ -766,4 +812,20 @@ ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr,
>  	return sprintf(buf, "Mitigation: __user pointer sanitization\n");
>  }
>  
> +ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr,
> +		char *buf)
> +{
> +	switch (__spectrev2_safe) {
> +	case A64_SV2_SAFE:
> +		return sprintf(buf, "Not affected\n");
> +	case A64_SV2_UNSAFE:
> +		if (__hardenbp_enab == A64_HBP_MIT)
> +			return sprintf(buf,
> +				"Mitigation: Branch predictor hardening\n");
> +		return sprintf(buf, "Vulnerable\n");
> +	default:
> +		return sprintf(buf, "Unknown\n");
> +	}

Again I see that we are going to display "Unknown" when the mitigation
is not built in.

Couldn't we make that CONFIG_GENERIC_CPU_VULNERABILITIES check whether a
CPU is vulnerable or not even if the mitigation is not implemented? It's
just checking the list of MIDRs.

Thanks,

-- 
Julien Thierry


* Re: [PATCH 0/6] add system vulnerability sysfs entries
  2018-12-06 23:44 [PATCH 0/6] add system vulnerability sysfs entries Jeremy Linton
                   ` (5 preceding siblings ...)
  2018-12-06 23:44 ` [PATCH 6/6] arm64: enable generic CPU vulnerabilites support Jeremy Linton
@ 2018-12-13 12:07 ` Dave Martin
  2018-12-12 15:48   ` Jeremy Linton
  6 siblings, 1 reply; 24+ messages in thread
From: Dave Martin @ 2018-12-13 12:07 UTC (permalink / raw)
  To: Jeremy Linton
  Cc: linux-arm-kernel, mark.rutland, suzuki.poulose, marc.zyngier,
	catalin.marinas, will.deacon, linux-kernel, ykaukab, shankerd

On Thu, Dec 06, 2018 at 05:44:02PM -0600, Jeremy Linton wrote:
> Part of this series was originally by Mian Yousaf Kaukab.
> 
> Arm64 machines should be displaying a human readable
> vulnerability status to speculative execution attacks in
> /sys/devices/system/cpu/vulnerabilities 

Is there any agreement on the strings that will be returned in there?

A quick search didn't find anything obvious upstream.  There is
documentation proposed in [1], but I don't know what happened to it and
it doesn't define the mitigation strings at all.  (I didn't follow the
discussion, so there is likely background here I'm not aware of.)

If the mitigation strings are meaningful at all, they really ought to be
documented somewhere since this is ABI.

> This series enables that behavior by providing the expected
> functions. Those functions expose the cpu errata and feature
> states, as well as whether firmware is responding appropriately
> to display the overall machine status. This means that in a
> heterogeneous machine we will only claim the machine is mitigated
> or safe if we are confident all booted cores are safe or
> mitigated. Otherwise, we will display unknown or unsafe
> depending on how much of the machine configuration can
> be assured.

Can the vulnerability status change once we enter userspace?

I see no locking or other concurrency protections, and various global
variables that could be __ro_after_init if nothing will change them
after boot.

If they can change after boot, userspace has no way to be notified.

(I haven't grokked the patches fully, so the answer to this question may
be reasonably straightforward...)


Cheers
---Dave

[1] https://lkml.org/lkml/2018/1/8/145


* Re: [PATCH 0/6] add system vulnerability sysfs entries
  2018-12-12 15:48   ` Jeremy Linton
@ 2018-12-13 19:26     ` Dave Martin
  0 siblings, 0 replies; 24+ messages in thread
From: Dave Martin @ 2018-12-13 19:26 UTC (permalink / raw)
  To: Jeremy Linton
  Cc: mark.rutland, suzuki.poulose, marc.zyngier, catalin.marinas,
	will.deacon, linux-kernel, shankerd, ykaukab, linux-arm-kernel

On Wed, Dec 12, 2018 at 09:48:03AM -0600, Jeremy Linton wrote:
> Hi Dave,
> 
> Thanks for looking at this!
> 
> On 12/13/2018 06:07 AM, Dave Martin wrote:
> >On Thu, Dec 06, 2018 at 05:44:02PM -0600, Jeremy Linton wrote:
> >>Part of this series was originally by Mian Yousaf Kaukab.
> >>
> >>Arm64 machines should be displaying a human readable
> >>vulnerability status to speculative execution attacks in
> >>/sys/devices/system/cpu/vulnerabilities
> >
> >Is there any agreement on the strings that will be returned in there?
> >
> >A quick search didn't find anything obvious upstream.  There is
> >documentation proposed in [1], but I don't know what happened to it and
> >it doesn't define the mitigation strings at all.  (I didn't follow the
> >discussion, so there is likely background here I'm not aware of.)
> >
> >If the mitigation strings are meaningful at all, they really ought to be
> >documented somewhere since this is ABI.
> 
> I think they are in testing? But that documentation is missing the "Unknown"
> state, which tends to be our default case for "we don't have a complete
> white/black list", or "mitigation disabled, we don't know if you're
> vulnerable", etc.
> 
> I'm not sure I like the "Unknown" state, but we can try to add it to the
> documentation.
> 
> >
> >>This series enables that behavior by providing the expected
> >>functions. Those functions expose the cpu errata and feature
> >>states, as well as whether firmware is responding appropriately
> >>to display the overall machine status. This means that in a
> >>heterogeneous machine we will only claim the machine is mitigated
> >>or safe if we are confident all booted cores are safe or
> >>mitigated. Otherwise, we will display unknown or unsafe
> >>depending on how much of the machine configuration can
> >>be assured.
> >
> >Can the vulnerability status change once we enter userspace?
> 
> Generally no; for heterogeneous machines I think the answer here is yes: a
> user could check the state and have it read "Not affected", then bring
> another core online which causes the state to change, at which point if they
This feels like a potential bug, since userspace may already be making
assumptions about the vulnerability state by this point.

Shouldn't we reject late cpus that are noncompliant with the status quo
(as for unreconcilable cpu feature mismatches)?  I thought we already
did this for some mitigations.

> re-read /sys it may reflect another state. OTOH, given that we tend to
> default to mitigated usually this shouldn't be an issue unless someone has
> disabled the mitigation.
> 
> 
> >
> >I see no locking or other concurrency protections, and various global
> >variables that could be __ro_after_init if nothing will change them
> >after boot.
> 
> I think the state changes are all protected due to the fact that bringing a
> core online/offline is serialized.

Only with each other, not with random sysfs code running on another
online cpu.

If the concurrently accessed variables are only of fundamental integer
types there is unlikely to be a problem in practice, since transforming
a single, small isolated access into a memcpy() or similar (although
permitted by the language) is very unlikely to be a performance win --
so the compiler is unlikely to do it.

This is rather theoretical: if nobody else is bothering with atomics or
locking here, it may not be worth us adding anything specific just for
arm64.

[...]

Cheers
---Dave


* Re: [PATCH 2/6] arm64: add sysfs vulnerability show for meltdown
  2018-12-12 14:49       ` Jeremy Linton
@ 2018-12-14  8:55         ` Julien Thierry
  0 siblings, 0 replies; 24+ messages in thread
From: Julien Thierry @ 2018-12-14  8:55 UTC (permalink / raw)
  To: Jeremy Linton, linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	dave.martin, shankerd, mark.rutland, linux-kernel, ykaukab

Hi Jeremy,

On 12/12/2018 14:49, Jeremy Linton wrote:
> Hi Julien,
> 
> Thanks for taking a look at this!
> 
> On 12/13/2018 04:46 AM, Julien Thierry wrote:
>>
>>
>> On 13/12/2018 09:23, Julien Thierry wrote:
>>> Hi Jeremy,
>>>
>>> On 06/12/2018 23:44, Jeremy Linton wrote:
>>>> Add a simple state machine which will track whether
>>>> all the online cores in a machine are vulnerable.
>>>>
>>>> Once that is done we have a fairly authoritative view
>>>> of the machine vulnerability, which allows us to make a
>>>> judgment about machine safety if it hasn't been mitigated.
>>>>
>>>> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
>>>> ---
>>>>   arch/arm64/kernel/cpufeature.c | 31 ++++++++++++++++++++++++++-----
>>>>   1 file changed, 26 insertions(+), 5 deletions(-)
>>>>
>>>> diff --git a/arch/arm64/kernel/cpufeature.c
>>>> b/arch/arm64/kernel/cpufeature.c
>>>> index 242898395f68..bea9adfef7fa 100644
>>>> --- a/arch/arm64/kernel/cpufeature.c
>>>> +++ b/arch/arm64/kernel/cpufeature.c
>>>> @@ -905,6 +905,8 @@ has_useable_cnp(const struct
>>>> arm64_cpu_capabilities *entry, int scope)
>>>>       return has_cpuid_feature(entry, scope);
>>>>   }
>>>>   +static enum { A64_MELT_UNSET, A64_MELT_SAFE, A64_MELT_UNKN }
>>>> __meltdown_safe = A64_MELT_UNSET;
>>>> +
>>>
>>> I'm wondering, do we really need that tri state?
>>>
>>> Can't we consider that we are safe and move to unsafe/unknown if any
>>> CPU during bring-up is not in the safe list?
>>>
>>> The only user of this is cpu_show_meltdown, but I don't imagine it'll
>>> get called before unmap_kernel_at_el0() is called for the boot CPU which
>>> should initialise that state.
>>>
>>> Or is there another reason for having that UNSET state?
>>>
>>
>> Ok, I think I get the point of the UNSET: with #ifndef
>> CONFIG_UNMAP_KERNEL_AT_EL0 we don't set the state. But does that mean we
>> always fall in the "Unknown" case when we don't build kpti in? Is that
>> desirable?
>>
>> If so, I'd suggest replacing the tri-state with the following change:
>>
>>
>>>> +
>>>> +#ifdef CONFIG_GENERIC_CPU_VULNERABILITIES
>>>> +ssize_t cpu_show_meltdown(struct device *dev, struct
>>>> device_attribute *attr,
>>>> +        char *buf)
>>>> +{
>>>> +    if (arm64_kernel_unmapped_at_el0())
>>>> +        return sprintf(buf, "Mitigation: KPTI\n");
>>>> +
>>
>>     if (!IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0) || !meltdown_safe)
>>         return sprintf(buf, "Unknown\n");
>>     else
>>         return sprintf(buf, "Not affected\n");
> 
> If I'm understanding what your suggesting:
> 
> Isn't this only checking the current core, rather than the whole
> machine? IIRC that was the fundamental complaint with the original set.
> 

Sorry, yes, I meant to check "!__meltdown_safe". Basically my suggestion
is to replace the static enum variable with a static bool and handle the
"UNSET" case on whether we built the mitigation.

Does that make sense?

However, there's still the same issue as with patch 4. If we don't build
the mitigation, we say that we don't know the status of the system. I
think it would be nice to be able to say that a system is safe even when
the mitigation is not built. People knowing they have a safe system
might be inclined to not build additional stuff they don't need.

Cheers.

-- 
Julien Thierry


* Re: [PATCH 5/6] arm64: add sysfs vulnerability show for speculative store bypass
  2018-12-06 23:44 ` [PATCH 5/6] arm64: add sysfs vulnerability show for speculative store bypass Jeremy Linton
@ 2018-12-14 10:34   ` Steven Price
  2018-12-14 10:36     ` Will Deacon
  0 siblings, 1 reply; 24+ messages in thread
From: Steven Price @ 2018-12-14 10:34 UTC (permalink / raw)
  To: Jeremy Linton, linux-arm-kernel
  Cc: mark.rutland, suzuki.poulose, marc.zyngier, catalin.marinas,
	will.deacon, linux-kernel, ykaukab, dave.martin, shankerd

On 06/12/2018 23:44, Jeremy Linton wrote:
> From: Mian Yousaf Kaukab <ykaukab@suse.de>
> 
> Return status based no ssbd_state and the arm64 SSBS feature.
                      ^^ on

> Return string "Unknown" in case CONFIG_ARM64_SSBD is
> disabled or arch workaround2 is not available
> in the firmware.
> 
> Signed-off-by: Mian Yousaf Kaukab <ykaukab@suse.de>
> [Added SSBS logic]
> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
> ---
>  arch/arm64/kernel/cpu_errata.c | 28 ++++++++++++++++++++++++++++
>  1 file changed, 28 insertions(+)
> 
> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
> index 6505c93d507e..8aeb5ca38db8 100644
> --- a/arch/arm64/kernel/cpu_errata.c
> +++ b/arch/arm64/kernel/cpu_errata.c
> @@ -423,6 +423,7 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
>  		ssbd_state = ARM64_SSBD_UNKNOWN;
>  		return false;
>  
> +	/* machines with mixed mitigation requirements must not return this */
>  	case SMCCC_RET_NOT_REQUIRED:
>  		pr_info_once("%s mitigation not required\n", entry->desc);
>  		ssbd_state = ARM64_SSBD_MITIGATED;
> @@ -828,4 +829,31 @@ ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr,
>  	}
>  }
>  
> +ssize_t cpu_show_spec_store_bypass(struct device *dev,
> +		struct device_attribute *attr, char *buf)
> +{
> +	/*
> +	 *  Two assumptions: First, get_ssbd_state() reflects the worse case
> +	 *  for hetrogenous machines, and that if SSBS is supported its
                                                  ^^^^ SSBD
> +	 *  supported by all cores.
> +	 */
> +	switch (arm64_get_ssbd_state()) {
> +	case ARM64_SSBD_MITIGATED:
> +		return sprintf(buf, "Not affected\n");
> +
> +	case ARM64_SSBD_KERNEL:
> +	case ARM64_SSBD_FORCE_ENABLE:
> +		if (cpus_have_cap(ARM64_SSBS))
> +			return sprintf(buf, "Not affected\n");
> +		return sprintf(buf,
> +			"Mitigation: Speculative Store Bypass disabled\n");

NIT: To me this reads as the mitigation is disabled. Can we call it
"Speculative Store Bypass Disable" (with a capital 'D' and without the
'd at the end)?

Steve

> +
> +	case ARM64_SSBD_FORCE_DISABLE:
> +		return sprintf(buf, "Vulnerable\n");
> +
> +	default: /* ARM64_SSBD_UNKNOWN*/
> +		return sprintf(buf, "Unknown\n");
> +	}
> +}
> +
>  #endif
> 



* Re: [PATCH 5/6] arm64: add sysfs vulnerability show for speculative store bypass
  2018-12-14 10:34   ` Steven Price
@ 2018-12-14 10:36     ` Will Deacon
  2018-12-14 10:41       ` Steven Price
  0 siblings, 1 reply; 24+ messages in thread
From: Will Deacon @ 2018-12-14 10:36 UTC (permalink / raw)
  To: Steven Price
  Cc: Jeremy Linton, linux-arm-kernel, mark.rutland, suzuki.poulose,
	marc.zyngier, catalin.marinas, linux-kernel, ykaukab,
	dave.martin, shankerd

On Fri, Dec 14, 2018 at 10:34:31AM +0000, Steven Price wrote:
> On 06/12/2018 23:44, Jeremy Linton wrote:
> > From: Mian Yousaf Kaukab <ykaukab@suse.de>
> > 
> > Return status based no ssbd_state and the arm64 SSBS feature.
>                       ^^ on
> 
> > Return string "Unknown" in case CONFIG_ARM64_SSBD is
> > disabled or arch workaround2 is not available
> > in the firmware.
> > 
> > Signed-off-by: Mian Yousaf Kaukab <ykaukab@suse.de>
> > [Added SSBS logic]
> > Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
> > ---
> >  arch/arm64/kernel/cpu_errata.c | 28 ++++++++++++++++++++++++++++
> >  1 file changed, 28 insertions(+)
> > 
> > diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
> > index 6505c93d507e..8aeb5ca38db8 100644
> > --- a/arch/arm64/kernel/cpu_errata.c
> > +++ b/arch/arm64/kernel/cpu_errata.c
> > @@ -423,6 +423,7 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
> >  		ssbd_state = ARM64_SSBD_UNKNOWN;
> >  		return false;
> >  
> > +	/* machines with mixed mitigation requirements must not return this */
> >  	case SMCCC_RET_NOT_REQUIRED:
> >  		pr_info_once("%s mitigation not required\n", entry->desc);
> >  		ssbd_state = ARM64_SSBD_MITIGATED;
> > @@ -828,4 +829,31 @@ ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr,
> >  	}
> >  }
> >  
> > +ssize_t cpu_show_spec_store_bypass(struct device *dev,
> > +		struct device_attribute *attr, char *buf)
> > +{
> > +	/*
> > +	 *  Two assumptions: First, get_ssbd_state() reflects the worse case
> > +	 *  for hetrogenous machines, and that if SSBS is supported its
>                                                   ^^^^ SSBD
> > +	 *  supported by all cores.
> > +	 */
> > +	switch (arm64_get_ssbd_state()) {
> > +	case ARM64_SSBD_MITIGATED:
> > +		return sprintf(buf, "Not affected\n");
> > +
> > +	case ARM64_SSBD_KERNEL:
> > +	case ARM64_SSBD_FORCE_ENABLE:
> > +		if (cpus_have_cap(ARM64_SSBS))
> > +			return sprintf(buf, "Not affected\n");
> > +		return sprintf(buf,
> > +			"Mitigation: Speculative Store Bypass disabled\n");
> 
> NIT: To me this reads as the mitigation is disabled. Can we call it
> "Speculative Store Bypass Disable" (with a capital 'D' and without the
> 'd at the end)?

Whilst I agree that the strings are reasonably confusing (especially when
you pile on the double-negatives all the way up the stack!), we really
have no choice but to follow x86's lead with these strings.

I don't think it's worth forking the ABI in an attempt to make this clearer.

Will


* Re: [PATCH 5/6] arm64: add sysfs vulnerability show for speculative store bypass
  2018-12-14 10:36     ` Will Deacon
@ 2018-12-14 10:41       ` Steven Price
  2018-12-14 11:28         ` Dave Martin
  0 siblings, 1 reply; 24+ messages in thread
From: Steven Price @ 2018-12-14 10:41 UTC (permalink / raw)
  To: Will Deacon
  Cc: mark.rutland, suzuki.poulose, marc.zyngier, catalin.marinas,
	ykaukab, linux-kernel, Jeremy Linton, shankerd, dave.martin,
	linux-arm-kernel

On 14/12/2018 10:36, Will Deacon wrote:
> On Fri, Dec 14, 2018 at 10:34:31AM +0000, Steven Price wrote:
>> On 06/12/2018 23:44, Jeremy Linton wrote:
>>> From: Mian Yousaf Kaukab <ykaukab@suse.de>
>>>
>>> Return status based no ssbd_state and the arm64 SSBS feature.
>>                       ^^ on
>>
>>> Return string "Unknown" in case CONFIG_ARM64_SSBD is
>>> disabled or arch workaround2 is not available
>>> in the firmware.
>>>
>>> Signed-off-by: Mian Yousaf Kaukab <ykaukab@suse.de>
>>> [Added SSBS logic]
>>> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
>>> ---
>>>  arch/arm64/kernel/cpu_errata.c | 28 ++++++++++++++++++++++++++++
>>>  1 file changed, 28 insertions(+)
>>>
>>> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
>>> index 6505c93d507e..8aeb5ca38db8 100644
>>> --- a/arch/arm64/kernel/cpu_errata.c
>>> +++ b/arch/arm64/kernel/cpu_errata.c
>>> @@ -423,6 +423,7 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
>>>  		ssbd_state = ARM64_SSBD_UNKNOWN;
>>>  		return false;
>>>  
>>> +	/* machines with mixed mitigation requirements must not return this */
>>>  	case SMCCC_RET_NOT_REQUIRED:
>>>  		pr_info_once("%s mitigation not required\n", entry->desc);
>>>  		ssbd_state = ARM64_SSBD_MITIGATED;
>>> @@ -828,4 +829,31 @@ ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr,
>>>  	}
>>>  }
>>>  
>>> +ssize_t cpu_show_spec_store_bypass(struct device *dev,
>>> +		struct device_attribute *attr, char *buf)
>>> +{
>>> +	/*
>>> +	 *  Two assumptions: First, get_ssbd_state() reflects the worse case
>>> +	 *  for hetrogenous machines, and that if SSBS is supported its
>>                                                   ^^^^ SSBD
>>> +	 *  supported by all cores.
>>> +	 */
>>> +	switch (arm64_get_ssbd_state()) {
>>> +	case ARM64_SSBD_MITIGATED:
>>> +		return sprintf(buf, "Not affected\n");
>>> +
>>> +	case ARM64_SSBD_KERNEL:
>>> +	case ARM64_SSBD_FORCE_ENABLE:
>>> +		if (cpus_have_cap(ARM64_SSBS))
>>> +			return sprintf(buf, "Not affected\n");
>>> +		return sprintf(buf,
>>> +			"Mitigation: Speculative Store Bypass disabled\n");
>>
>> NIT: To me this reads as the mitigation is disabled. Can we call it
>> "Speculative Store Bypass Disable" (with a capital 'D' and without the
>> 'd at the end)?
> 
> Whilst I agree that the strings are reasonably confusing (especially when
> you pile on the double-negatives all the way up the stack!), we really
> have no choice but to follow x86's lead with these strings.
> 
> I don't think it's worth forking the ABI in an attempt to make this clearer.

Ah, sorry I hadn't checked the x86 string - yes we should match that.

Steve

> Will
> 



* Re: [PATCH 5/6] arm64: add sysfs vulnerability show for speculative store bypass
  2018-12-14 10:41       ` Steven Price
@ 2018-12-14 11:28         ` Dave Martin
  2018-12-14 11:33           ` Will Deacon
  0 siblings, 1 reply; 24+ messages in thread
From: Dave Martin @ 2018-12-14 11:28 UTC (permalink / raw)
  To: Steven Price
  Cc: Will Deacon, mark.rutland, suzuki.poulose, marc.zyngier,
	catalin.marinas, ykaukab, linux-kernel, Jeremy Linton,
	linux-arm-kernel, shankerd

On Fri, Dec 14, 2018 at 10:41:42AM +0000, Steven Price wrote:
> On 14/12/2018 10:36, Will Deacon wrote:
> > On Fri, Dec 14, 2018 at 10:34:31AM +0000, Steven Price wrote:
> >> On 06/12/2018 23:44, Jeremy Linton wrote:
> >>> From: Mian Yousaf Kaukab <ykaukab@suse.de>
> >>>
> >>> Return status based no ssbd_state and the arm64 SSBS feature.
> >>                       ^^ on
> >>
> >>> Return string "Unknown" in case CONFIG_ARM64_SSBD is
> >>> disabled or arch workaround2 is not available
> >>> in the firmware.
> >>>
> >>> Signed-off-by: Mian Yousaf Kaukab <ykaukab@suse.de>
> >>> [Added SSBS logic]
> >>> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
> >>> ---
> >>>  arch/arm64/kernel/cpu_errata.c | 28 ++++++++++++++++++++++++++++
> >>>  1 file changed, 28 insertions(+)
> >>>
> >>> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
> >>> index 6505c93d507e..8aeb5ca38db8 100644
> >>> --- a/arch/arm64/kernel/cpu_errata.c
> >>> +++ b/arch/arm64/kernel/cpu_errata.c
> >>> @@ -423,6 +423,7 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
> >>>  		ssbd_state = ARM64_SSBD_UNKNOWN;
> >>>  		return false;
> >>>  
> >>> +	/* machines with mixed mitigation requirements must not return this */
> >>>  	case SMCCC_RET_NOT_REQUIRED:
> >>>  		pr_info_once("%s mitigation not required\n", entry->desc);
> >>>  		ssbd_state = ARM64_SSBD_MITIGATED;
> >>> @@ -828,4 +829,31 @@ ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr,
> >>>  	}
> >>>  }
> >>>  
> >>> +ssize_t cpu_show_spec_store_bypass(struct device *dev,
> >>> +		struct device_attribute *attr, char *buf)
> >>> +{
> >>> +	/*
> >>> +	 *  Two assumptions: First, get_ssbd_state() reflects the worse case
> >>> +	 *  for hetrogenous machines, and that if SSBS is supported its
> >>                                                   ^^^^ SSBD
> >>> +	 *  supported by all cores.
> >>> +	 */
> >>> +	switch (arm64_get_ssbd_state()) {
> >>> +	case ARM64_SSBD_MITIGATED:
> >>> +		return sprintf(buf, "Not affected\n");
> >>> +
> >>> +	case ARM64_SSBD_KERNEL:
> >>> +	case ARM64_SSBD_FORCE_ENABLE:
> >>> +		if (cpus_have_cap(ARM64_SSBS))
> >>> +			return sprintf(buf, "Not affected\n");
> >>> +		return sprintf(buf,
> >>> +			"Mitigation: Speculative Store Bypass disabled\n");
> >>
> >> NIT: To me this reads as the mitigation is disabled. Can we call it
> >> "Speculative Store Bypass Disable" (with a capital 'D' and without the
> >> 'd at the end)?
> > 
> > Whilst I agree that the strings are reasonably confusing (especially when
> > you pile on the double-negatives all the way up the stack!), we really
> > have no choice but to follow x86's lead with these strings.
> > 
> > I don't think it's worth forking the ABI in an attempt to make this clearer.
> 
> Ah, sorry I hadn't checked the x86 string - yes we should match that.

This is rather why I feel these strings are either a) useless or
b) should be documented somewhere.

Putting at least a skeleton document somewhere could be a good start,
and would require little effort.


What decisions do we expect userspace to make based on this information?

Cheers
---Dave


* Re: [PATCH 5/6] arm64: add sysfs vulnerability show for speculative store bypass
  2018-12-14 11:28         ` Dave Martin
@ 2018-12-14 11:33           ` Will Deacon
  0 siblings, 0 replies; 24+ messages in thread
From: Will Deacon @ 2018-12-14 11:33 UTC (permalink / raw)
  To: Dave Martin
  Cc: Steven Price, mark.rutland, suzuki.poulose, marc.zyngier,
	catalin.marinas, ykaukab, linux-kernel, Jeremy Linton,
	linux-arm-kernel, shankerd

On Fri, Dec 14, 2018 at 11:28:16AM +0000, Dave Martin wrote:
> On Fri, Dec 14, 2018 at 10:41:42AM +0000, Steven Price wrote:
> > On 14/12/2018 10:36, Will Deacon wrote:
> > > On Fri, Dec 14, 2018 at 10:34:31AM +0000, Steven Price wrote:
> > >> On 06/12/2018 23:44, Jeremy Linton wrote:
> > >>> From: Mian Yousaf Kaukab <ykaukab@suse.de>
> > >>>
> > >>> Return status based no ssbd_state and the arm64 SSBS feature.
> > >>                       ^^ on
> > >>
> > >>> Return string "Unknown" in case CONFIG_ARM64_SSBD is
> > >>> disabled or arch workaround2 is not available
> > >>> in the firmware.
> > >>>
> > >>> Signed-off-by: Mian Yousaf Kaukab <ykaukab@suse.de>
> > >>> [Added SSBS logic]
> > >>> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
> > >>> ---
> > >>>  arch/arm64/kernel/cpu_errata.c | 28 ++++++++++++++++++++++++++++
> > >>>  1 file changed, 28 insertions(+)
> > >>>
> > >>> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
> > >>> index 6505c93d507e..8aeb5ca38db8 100644
> > >>> --- a/arch/arm64/kernel/cpu_errata.c
> > >>> +++ b/arch/arm64/kernel/cpu_errata.c
> > >>> @@ -423,6 +423,7 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
> > >>>  		ssbd_state = ARM64_SSBD_UNKNOWN;
> > >>>  		return false;
> > >>>  
> > >>> +	/* machines with mixed mitigation requirements must not return this */
> > >>>  	case SMCCC_RET_NOT_REQUIRED:
> > >>>  		pr_info_once("%s mitigation not required\n", entry->desc);
> > >>>  		ssbd_state = ARM64_SSBD_MITIGATED;
> > >>> @@ -828,4 +829,31 @@ ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr,
> > >>>  	}
> > >>>  }
> > >>>  
> > >>> +ssize_t cpu_show_spec_store_bypass(struct device *dev,
> > >>> +		struct device_attribute *attr, char *buf)
> > >>> +{
> > >>> +	/*
> > >>> +	 *  Two assumptions: First, get_ssbd_state() reflects the worse case
> > >>> +	 *  for hetrogenous machines, and that if SSBS is supported its
> > >>                                                   ^^^^ SSBD
> > >>> +	 *  supported by all cores.
> > >>> +	 */
> > >>> +	switch (arm64_get_ssbd_state()) {
> > >>> +	case ARM64_SSBD_MITIGATED:
> > >>> +		return sprintf(buf, "Not affected\n");
> > >>> +
> > >>> +	case ARM64_SSBD_KERNEL:
> > >>> +	case ARM64_SSBD_FORCE_ENABLE:
> > >>> +		if (cpus_have_cap(ARM64_SSBS))
> > >>> +			return sprintf(buf, "Not affected\n");
> > >>> +		return sprintf(buf,
> > >>> +			"Mitigation: Speculative Store Bypass disabled\n");
> > >>
> > >> NIT: To me this reads as the mitigation is disabled. Can we call it
> > >> "Speculative Store Bypass Disable" (with a capital 'D' and without the
> > >> 'd at the end)?
> > > 
> > > Whilst I agree that the strings are reasonably confusing (especially when
> > > you pile on the double-negatives all the way up the stack!), we really
> > > have no choice but to follow x86's lead with these strings.
> > > 
> > > I don't think it's worth forking the ABI in an attempt to make this clearer.
> > 
> > Ah, sorry I hadn't checked the x86 string - yes we should match that.
> 
> This is rather why I feel these strings are either a) useless or
> b) should be documented somewhere.
> 
> Putting at least a skeleton document somewhere could be a good start,
> and would require little effort.
> 
> 
> What decisions do we expect userspace to make based on this information?

There's at least one tool that parses this stuff to tell you whether you
have/need the mitigations:

https://github.com/speed47/spectre-meltdown-checker
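
[Editorial note: a minimal sketch of how such a userspace tool might
interpret these sysfs strings — hypothetical helper names, not the
checker's actual code; the string prefixes are the ones discussed in
this thread:]

```python
import os

VULN_DIR = "/sys/devices/system/cpu/vulnerabilities"

def classify(text: str) -> str:
    """Map a raw sysfs vulnerability string to a coarse status."""
    if text.startswith("Not affected"):
        return "ok"
    if text.startswith("Mitigation:"):
        return "mitigated"
    if text.startswith("Vulnerable"):
        return "vulnerable"
    return "unknown"          # covers "Unknown" and any future strings

def scan(path: str = VULN_DIR) -> dict:
    """Read every vulnerability file, if the directory exists at all."""
    results = {}
    if not os.path.isdir(path):
        # kernel built without CONFIG_GENERIC_CPU_VULNERABILITIES
        return results
    for name in sorted(os.listdir(path)):
        with open(os.path.join(path, name)) as f:
            results[name] = classify(f.read().strip())
    return results
```

This is exactly why the exact string prefixes matter to userspace: a
parser keys off "Not affected" / "Mitigation:" / "Vulnerable", so the
arm64 strings need to match the x86 ones.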

Will

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 4/6] arm64: add sysfs vulnerability show for spectre v2
  2018-12-13 11:09   ` Julien Thierry
@ 2019-01-02 22:19     ` Jeremy Linton
  0 siblings, 0 replies; 24+ messages in thread
From: Jeremy Linton @ 2019-01-02 22:19 UTC (permalink / raw)
  To: Julien Thierry, linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	dave.martin, shankerd, mark.rutland, linux-kernel, ykaukab

Hi,

On 12/13/2018 05:09 AM, Julien Thierry wrote:
> 
> 
> On 06/12/2018 23:44, Jeremy Linton wrote:
>> Add code to track whether all the cores in the machine are
>> vulnerable, and whether all the vulnerable cores have been
>> mitigated.
>>
>> Once we have that information we can add the sysfs stub and
>> provide an accurate view of what is known about the machine.
>>
>> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
>> ---
>>   arch/arm64/kernel/cpu_errata.c | 72 +++++++++++++++++++++++++++++++---
>>   1 file changed, 67 insertions(+), 5 deletions(-)
>>
>> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
>> index 559ecdee6fd2..6505c93d507e 100644
>> --- a/arch/arm64/kernel/cpu_errata.c
>> +++ b/arch/arm64/kernel/cpu_errata.c
> 
> [...]
> 
>> @@ -766,4 +812,20 @@ ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr,
>>   	return sprintf(buf, "Mitigation: __user pointer sanitization\n");
>>   }
>>   
>> +ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr,
>> +		char *buf)
>> +{
>> +	switch (__spectrev2_safe) {
>> +	case A64_SV2_SAFE:
>> +		return sprintf(buf, "Not affected\n");
>> +	case A64_SV2_UNSAFE:
>> +		if (__hardenbp_enab == A64_HBP_MIT)
>> +			return sprintf(buf,
>> +				"Mitigation: Branch predictor hardening\n");
>> +		return sprintf(buf, "Vulnerable\n");
>> +	default:
>> +		return sprintf(buf, "Unknown\n");
>> +	}
> 
> Again I see that we are going to display "Unknown" when the mitigation
> is not built in.
> 
> Couldn't we make that check work under CONFIG_GENERIC_CPU_VULNERABILITIES
> even when the mitigation is not implemented? It's
> just checking the list of MIDRs.


Before I re-post, it's probably worth pointing out that 
spectrev2_safe isn't set the same way as the meltdown safe flag (which 
reflects a whitelist, or cpu_good flag), where the unknown and unsafe 
conditions are currently the same.

spectrev2_safe is a combined white/black list: a blacklist of known 
vulnerable cores, plus cores with CSV2 set indicating they are good. 
This means the unset condition conceptually covers both the check being 
disabled and the core not being either known bad or known good. So you 
still need a dedicated "unknown" state, because the final state isn't 
unknown simply because the mitigation is not compiled in.
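
[Editorial note: the tri-state just described can be sketched in plain
C as follows — hypothetical names (classify(), struct cpu), not the
actual kernel code; a core on the blacklist taints the whole machine,
a core that is neither blacklisted nor advertising CSV2 degrades the
result to "unknown":]

```c
#include <assert.h>
#include <stdbool.h>

enum sv2_state { SV2_UNKNOWN, SV2_SAFE, SV2_UNSAFE };

struct cpu {
	bool blacklisted;	/* MIDR matches a known-vulnerable core */
	bool has_csv2;		/* core advertises it is not vulnerable */
};

static enum sv2_state classify(const struct cpu *cpus, int n)
{
	enum sv2_state state = SV2_SAFE;  /* safe only if every core is known good */

	for (int i = 0; i < n; i++) {
		if (cpus[i].blacklisted)
			return SV2_UNSAFE;   /* any known-bad core: machine is unsafe */
		if (!cpus[i].has_csv2)
			state = SV2_UNKNOWN; /* neither known bad nor known good */
	}
	return state;
}
```

Note that SV2_UNKNOWN falls out of the core enumeration itself, not
from whether the mitigation code is compiled in — which is the point
being made above.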


^ permalink raw reply	[flat|nested] 24+ messages in thread

* [PATCH 3/6] arm64: add sysfs vulnerability show for spectre v1
  2018-08-07 18:14 [PATCH 0/6] arm64: add support for generic cpu vulnerabilities Mian Yousaf Kaukab
@ 2018-08-07 18:14 ` Mian Yousaf Kaukab
  0 siblings, 0 replies; 24+ messages in thread
From: Mian Yousaf Kaukab @ 2018-08-07 18:14 UTC (permalink / raw)
  To: will.deacon, marc.zyngier
  Cc: linux-kernel, linux-arm-kernel, robert.richter, cwu, Mian Yousaf Kaukab

Hard-coded since patches are merged and there are no configuration
options.

Signed-off-by: Mian Yousaf Kaukab <ykaukab@suse.de>
---
 arch/arm64/kernel/cpu_errata.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 996edb4e18ad..92616431ae4e 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -706,4 +706,10 @@ ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr,
 	return sprintf(buf, "Vulnerable\n");
 }
 
+ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr,
+		char *buf)
+{
+	return sprintf(buf, "Mitigation: __user pointer sanitization\n");
+}
+
 #endif
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 24+ messages in thread

end of thread, other threads:[~2019-01-02 22:19 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-12-06 23:44 [PATCH 0/6] add system vulnerability sysfs entries Jeremy Linton
2018-12-06 23:44 ` [PATCH 1/6] arm64: kpti: move check for non-vulnerable CPUs to a function Jeremy Linton
2018-12-13  9:13   ` Julien Thierry
2018-12-12 14:36     ` Jeremy Linton
2018-12-06 23:44 ` [PATCH 2/6] arm64: add sysfs vulnerability show for meltdown Jeremy Linton
2018-12-13  9:23   ` Julien Thierry
2018-12-13 10:46     ` Julien Thierry
2018-12-12 14:49       ` Jeremy Linton
2018-12-14  8:55         ` Julien Thierry
2018-12-06 23:44 ` [PATCH 3/6] arm64: add sysfs vulnerability show for spectre v1 Jeremy Linton
2018-12-06 23:44 ` [PATCH 4/6] arm64: add sysfs vulnerability show for spectre v2 Jeremy Linton
2018-12-13 11:09   ` Julien Thierry
2019-01-02 22:19     ` Jeremy Linton
2018-12-06 23:44 ` [PATCH 5/6] arm64: add sysfs vulnerability show for speculative store bypass Jeremy Linton
2018-12-14 10:34   ` Steven Price
2018-12-14 10:36     ` Will Deacon
2018-12-14 10:41       ` Steven Price
2018-12-14 11:28         ` Dave Martin
2018-12-14 11:33           ` Will Deacon
2018-12-06 23:44 ` [PATCH 6/6] arm64: enable generic CPU vulnerabilites support Jeremy Linton
2018-12-13 12:07 ` [PATCH 0/6] add system vulnerability sysfs entries Dave Martin
2018-12-12 15:48   ` Jeremy Linton
2018-12-13 19:26     ` Dave Martin
  -- strict thread matches above, loose matches on Subject: below --
2018-08-07 18:14 [PATCH 0/6] arm64: add support for generic cpu vulnerabilities Mian Yousaf Kaukab
2018-08-07 18:14 ` [PATCH 3/6] arm64: add sysfs vulnerability show for spectre v1 Mian Yousaf Kaukab
