* [PATCH 0/6] arm64: add support for generic cpu vulnerabilities
@ 2018-08-07 18:14 Mian Yousaf Kaukab
  2018-08-07 18:14 ` [PATCH 1/6] arm64: kpti: move check for non-vulnerable CPUs to a function Mian Yousaf Kaukab
                   ` (5 more replies)
  0 siblings, 6 replies; 10+ messages in thread
From: Mian Yousaf Kaukab @ 2018-08-07 18:14 UTC (permalink / raw)
  To: will.deacon, marc.zyngier
  Cc: linux-kernel, linux-arm-kernel, robert.richter, cwu, Mian Yousaf Kaukab

GENERIC_CPU_VULNERABILITIES provides a common way to report whether a
system is affected by vulnerabilities such as Meltdown and the Spectre
variants. This small series adds support for it on arm64.
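
For reference, the generic infrastructure this series plugs into works
roughly as sketched below (simplified from drivers/base/cpu.c, shown
only for context): the core provides weak cpu_show_*() handlers that
default to "Not affected" and exposes them under
/sys/devices/system/cpu/vulnerabilities/, so an architecture only needs
to override the handlers it can answer more precisely.

ssize_t __weak cpu_show_meltdown(struct device *dev,
				 struct device_attribute *attr, char *buf)
{
	return sprintf(buf, "Not affected\n");
}

static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL);

static struct attribute *cpu_root_vulnerabilities_attrs[] = {
	&dev_attr_meltdown.attr,
	/* ... spectre_v1, spectre_v2, spec_store_bypass ... */
	NULL
};

static const struct attribute_group cpu_root_vulnerabilities_group = {
	.name  = "vulnerabilities",
	.attrs = cpu_root_vulnerabilities_attrs,
};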

Thank you,

Best regards,
Yousaf

Mian Yousaf Kaukab (6):
  arm64: kpti: move check for non-vulnerable CPUs to a function
  arm64: add sysfs vulnerability show for meltdown
  arm64: add sysfs vulnerability show for spectre v1
  arm64: add sysfs vulnerability show for spectre v2
  arm64: add sysfs vulnerability show for speculative store bypass
  arm64: enable generic CPU vulnerabilities support

 arch/arm64/Kconfig                  |  1 +
 arch/arm64/include/asm/cpufeature.h | 16 +++++++
 arch/arm64/kernel/cpu_errata.c      | 84 ++++++++++++++++++++++++++++++++++++-
 arch/arm64/kernel/cpufeature.c      |  9 +---
 4 files changed, 101 insertions(+), 9 deletions(-)

-- 
2.11.0



* [PATCH 1/6] arm64: kpti: move check for non-vulnerable CPUs to a function
  2018-08-07 18:14 [PATCH 0/6] arm64: add support for generic cpu vulnerabilities Mian Yousaf Kaukab
@ 2018-08-07 18:14 ` Mian Yousaf Kaukab
  2018-08-07 18:14 ` [PATCH 2/6] arm64: add sysfs vulnerability show for meltdown Mian Yousaf Kaukab
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Mian Yousaf Kaukab @ 2018-08-07 18:14 UTC (permalink / raw)
  To: will.deacon, marc.zyngier
  Cc: linux-kernel, linux-arm-kernel, robert.richter, cwu, Mian Yousaf Kaukab

Move the check into a helper so that the upcoming generic CPU
vulnerabilities support can call it as well.

Signed-off-by: Mian Yousaf Kaukab <ykaukab@suse.de>
---
 arch/arm64/include/asm/cpufeature.h | 16 ++++++++++++++++
 arch/arm64/kernel/cpufeature.c      |  9 +--------
 2 files changed, 17 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 1717ba1db35d..0b0b5b3e36ba 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -530,6 +530,22 @@ void arm64_set_ssbd_mitigation(bool state);
 static inline void arm64_set_ssbd_mitigation(bool state) {}
 #endif
 
+static inline bool is_cpu_meltdown_safe(void)
+{
+	/* List of CPUs that are not vulnerable and don't need KPTI */
+	static const struct midr_range kpti_safe_list[] = {
+		MIDR_ALL_VERSIONS(MIDR_CAVIUM_THUNDERX2),
+		MIDR_ALL_VERSIONS(MIDR_BRCM_VULCAN),
+		{ /* sentinel */ }
+	};
+
+	/* Don't force KPTI for CPUs that are not vulnerable */
+	if (is_midr_in_range_list(read_cpuid_id(), kpti_safe_list))
+		return true;
+
+	return false;
+}
+
 #endif /* __ASSEMBLY__ */
 
 #endif
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index e238b7932096..6a94f8bce35a 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -865,12 +865,6 @@ static int __kpti_forced; /* 0: not forced, >0: forced on, <0: forced off */
 static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
 				int scope)
 {
-	/* List of CPUs that are not vulnerable and don't need KPTI */
-	static const struct midr_range kpti_safe_list[] = {
-		MIDR_ALL_VERSIONS(MIDR_CAVIUM_THUNDERX2),
-		MIDR_ALL_VERSIONS(MIDR_BRCM_VULCAN),
-		{ /* sentinel */ }
-	};
 	char const *str = "command line option";
 
 	/*
@@ -894,8 +888,7 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
 	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE))
 		return true;
 
-	/* Don't force KPTI for CPUs that are not vulnerable */
-	if (is_midr_in_range_list(read_cpuid_id(), kpti_safe_list))
+	if (is_cpu_meltdown_safe())
 		return false;
 
 	/* Defer to CPU feature registers */
-- 
2.11.0



* [PATCH 2/6] arm64: add sysfs vulnerability show for meltdown
  2018-08-07 18:14 [PATCH 0/6] arm64: add support for generic cpu vulnerabilities Mian Yousaf Kaukab
  2018-08-07 18:14 ` [PATCH 1/6] arm64: kpti: move check for non-vulnerable CPUs to a function Mian Yousaf Kaukab
@ 2018-08-07 18:14 ` Mian Yousaf Kaukab
  2018-08-07 18:14 ` [PATCH 3/6] arm64: add sysfs vulnerability show for spectre v1 Mian Yousaf Kaukab
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Mian Yousaf Kaukab @ 2018-08-07 18:14 UTC (permalink / raw)
  To: will.deacon, marc.zyngier
  Cc: linux-kernel, linux-arm-kernel, robert.richter, cwu, Mian Yousaf Kaukab

Check CSV3 support directly in case CONFIG_UNMAP_KERNEL_AT_EL0 is not
enabled.

Signed-off-by: Mian Yousaf Kaukab <ykaukab@suse.de>
---
 arch/arm64/kernel/cpu_errata.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index dec10898d688..996edb4e18ad 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -22,6 +22,7 @@
 #include <asm/cpu.h>
 #include <asm/cputype.h>
 #include <asm/cpufeature.h>
+#include <asm/mmu.h>
 
 static bool __maybe_unused
 is_affected_midr_range(const struct arm64_cpu_capabilities *entry, int scope)
@@ -683,3 +684,26 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 	{
 	}
 };
+
+#ifdef CONFIG_GENERIC_CPU_VULNERABILITIES
+
+ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr,
+		char *buf)
+{
+	u64 pfr0;
+	u32 csv3;
+
+	if (arm64_kernel_unmapped_at_el0())
+		return sprintf(buf, "Mitigation: KPTI\n");
+
+	pfr0 = read_cpuid(ID_AA64PFR0_EL1);
+	csv3 = cpuid_feature_extract_unsigned_field(pfr0,
+			ID_AA64PFR0_CSV3_SHIFT);
+
+	if (csv3 || is_cpu_meltdown_safe())
+		return sprintf(buf, "Not affected\n");
+
+	return sprintf(buf, "Vulnerable\n");
+}
+
+#endif
-- 
2.11.0
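
For context: arm64_kernel_unmapped_at_el0() always reports false when
CONFIG_UNMAP_KERNEL_AT_EL0 is disabled, which is why the show function
above also consults ID_AA64PFR0_EL1.CSV3 and the meltdown-safe MIDR
list instead of relying on the KPTI state alone. A rough sketch of the
asm/mmu.h helper as it looked at the time (simplified, for reference
only):

static inline bool arm64_kernel_unmapped_at_el0(void)
{
	return IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0) &&
	       cpus_have_const_cap(ARM64_UNMAP_KERNEL_AT_EL0);
}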



* [PATCH 3/6] arm64: add sysfs vulnerability show for spectre v1
  2018-08-07 18:14 [PATCH 0/6] arm64: add support for generic cpu vulnerabilities Mian Yousaf Kaukab
  2018-08-07 18:14 ` [PATCH 1/6] arm64: kpti: move check for non-vulnerable CPUs to a function Mian Yousaf Kaukab
  2018-08-07 18:14 ` [PATCH 2/6] arm64: add sysfs vulnerability show for meltdown Mian Yousaf Kaukab
@ 2018-08-07 18:14 ` Mian Yousaf Kaukab
  2018-08-07 18:14 ` [PATCH 4/6] arm64: add sysfs vulnerability show for spectre v2 Mian Yousaf Kaukab
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Mian Yousaf Kaukab @ 2018-08-07 18:14 UTC (permalink / raw)
  To: will.deacon, marc.zyngier
  Cc: linux-kernel, linux-arm-kernel, robert.richter, cwu, Mian Yousaf Kaukab

Hard-code the mitigation status since the __user pointer sanitization
patches are already merged and there are no configuration options to
disable them.

Signed-off-by: Mian Yousaf Kaukab <ykaukab@suse.de>
---
 arch/arm64/kernel/cpu_errata.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 996edb4e18ad..92616431ae4e 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -706,4 +706,10 @@ ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr,
 	return sprintf(buf, "Vulnerable\n");
 }
 
+ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr,
+		char *buf)
+{
+	return sprintf(buf, "Mitigation: __user pointer sanitization\n");
+}
+
 #endif
-- 
2.11.0
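
For context, "__user pointer sanitization" refers to the Spectre v1
mitigation already merged in the core kernel: untrusted indexes and
user pointers are clamped before any speculative use, typically with
array_index_nospec() from <linux/nospec.h>. A minimal illustration
(example_lookup() and struct example_entry are made up for this sketch,
they are not part of the patch):

#include <linux/errno.h>
#include <linux/nospec.h>

struct example_entry {
	long val;
};

static long example_lookup(const struct example_entry *tbl,
			   unsigned long nr, unsigned long idx)
{
	if (idx >= nr)
		return -EINVAL;
	/* Clamp idx so it cannot reach past nr even under speculation. */
	idx = array_index_nospec(idx, nr);
	return tbl[idx].val;
}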



* [PATCH 4/6] arm64: add sysfs vulnerability show for spectre v2
  2018-08-07 18:14 [PATCH 0/6] arm64: add support for generic cpu vulnerabilities Mian Yousaf Kaukab
                   ` (2 preceding siblings ...)
  2018-08-07 18:14 ` [PATCH 3/6] arm64: add sysfs vulnerability show for spectre v1 Mian Yousaf Kaukab
@ 2018-08-07 18:14 ` Mian Yousaf Kaukab
  2018-08-07 18:14 ` [PATCH 5/6] arm64: add sysfs vulnerability show for speculative store bypass Mian Yousaf Kaukab
  2018-08-07 18:14 ` [PATCH 6/6] arm64: enable generic CPU vulnerabilities support Mian Yousaf Kaukab
  5 siblings, 0 replies; 10+ messages in thread
From: Mian Yousaf Kaukab @ 2018-08-07 18:14 UTC (permalink / raw)
  To: will.deacon, marc.zyngier
  Cc: linux-kernel, linux-arm-kernel, robert.richter, cwu, Mian Yousaf Kaukab

Only report the mitigation as present if the hardening callback has
been successfully installed.

Signed-off-by: Mian Yousaf Kaukab <ykaukab@suse.de>
---
 arch/arm64/kernel/cpu_errata.c | 34 +++++++++++++++++++++++++++++++++-
 1 file changed, 33 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 92616431ae4e..8469d3be7b15 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -481,7 +481,8 @@ multi_entry_cap_cpu_enable(const struct arm64_cpu_capabilities *entry)
 			caps->cpu_enable(caps);
 }
 
-#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+#if defined(CONFIG_HARDEN_BRANCH_PREDICTOR) || \
+	defined(CONFIG_GENERIC_CPU_VULNERABILITIES)
 
 /*
  * List of CPUs where we need to issue a psci call to
@@ -712,4 +713,35 @@ ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr,
 	return sprintf(buf, "Mitigation: __user pointer sanitization\n");
 }
 
+ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr,
+		char *buf)
+{
+	u64 pfr0;
+	struct bp_hardening_data *data;
+
+	pfr0 = read_cpuid(ID_AA64PFR0_EL1);
+	if (cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_CSV2_SHIFT))
+		return sprintf(buf, "Not affected\n");
+
+	if (cpus_have_const_cap(ARM64_HARDEN_BRANCH_PREDICTOR)) {
+		/*
+		 * Hardware is vulnerable. Let's check whether the bp hardening
+		 * callback has been successfully installed.
+		 */
+		data = arm64_get_bp_hardening_data();
+		if (data && data->fn)
+			return sprintf(buf,
+				"Mitigation: Branch predictor hardening\n");
+		else
+			/* For example SMCCC_VERSION_1_0 */
+			return sprintf(buf, "Vulnerable\n");
+	}
+
+	/* In case CONFIG_HARDEN_BRANCH_PREDICTOR is not enabled */
+	if (is_midr_in_range_list(read_cpuid_id(), arm64_bp_harden_smccc_cpus))
+		return sprintf(buf, "Vulnerable\n");
+
+	return sprintf(buf, "Not affected\n");
+}
+
 #endif
-- 
2.11.0
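
For context, the per-CPU data consulted above looks roughly like the
sketch below (simplified from asm/mmu.h of the time): the fn pointer is
filled in by install_bp_hardening_cb(), so a non-NULL fn is what tells
us a hardening callback was actually installed on this CPU.

typedef void (*bp_hardening_cb_t)(void);

struct bp_hardening_data {
	int			hyp_vectors_slot;
	bp_hardening_cb_t	fn;
};

DECLARE_PER_CPU_READ_MOSTLY(struct bp_hardening_data, bp_hardening_data);

static inline struct bp_hardening_data *arm64_get_bp_hardening_data(void)
{
	return this_cpu_ptr(&bp_hardening_data);
}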



* [PATCH 5/6] arm64: add sysfs vulnerability show for speculative store bypass
  2018-08-07 18:14 [PATCH 0/6] arm64: add support for generic cpu vulnerabilities Mian Yousaf Kaukab
                   ` (3 preceding siblings ...)
  2018-08-07 18:14 ` [PATCH 4/6] arm64: add sysfs vulnerability show for spectre v2 Mian Yousaf Kaukab
@ 2018-08-07 18:14 ` Mian Yousaf Kaukab
  2018-08-07 18:14 ` [PATCH 6/6] arm64: enable generic CPU vulnerabilities support Mian Yousaf Kaukab
  5 siblings, 0 replies; 10+ messages in thread
From: Mian Yousaf Kaukab @ 2018-08-07 18:14 UTC (permalink / raw)
  To: will.deacon, marc.zyngier
  Cc: linux-kernel, linux-arm-kernel, robert.richter, cwu, Mian Yousaf Kaukab

Return status based on ssbd_state. Return the string "Unknown" in case
CONFIG_ARM64_SSBD is disabled or ARCH_WORKAROUND_2 is not available
in the firmware.

Signed-off-by: Mian Yousaf Kaukab <ykaukab@suse.de>
---
 arch/arm64/kernel/cpu_errata.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 8469d3be7b15..8b60aa30a3fa 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -744,4 +744,24 @@ ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr,
 	return sprintf(buf, "Not affected\n");
 }
 
+ssize_t cpu_show_spec_store_bypass(struct device *dev,
+		struct device_attribute *attr, char *buf)
+{
+	switch (arm64_get_ssbd_state()) {
+	case ARM64_SSBD_MITIGATED:
+		return sprintf(buf, "Not affected\n");
+
+	case ARM64_SSBD_KERNEL:
+	case ARM64_SSBD_FORCE_ENABLE:
+		return sprintf(buf,
+			"Mitigation: Speculative Store Bypass disabled\n");
+
+	case ARM64_SSBD_FORCE_DISABLE:
+		return sprintf(buf, "Vulnerable\n");
+
+	default: /* ARM64_SSBD_UNKNOWN */
+		return sprintf(buf, "Unknown\n");
+	}
+}
+
 #endif
-- 
2.11.0
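
For reference, the states distinguished by the switch above come from
asm/cpufeature.h; roughly (a simplified sketch of the definitions at
the time):

#define ARM64_SSBD_UNKNOWN		-1
#define ARM64_SSBD_FORCE_DISABLE	0
#define ARM64_SSBD_KERNEL		1
#define ARM64_SSBD_FORCE_ENABLE		2
#define ARM64_SSBD_MITIGATED		3

static inline int arm64_get_ssbd_state(void)
{
#ifdef CONFIG_ARM64_SSBD
	extern int ssbd_state;
	return ssbd_state;
#else
	return ARM64_SSBD_UNKNOWN;
#endif
}

ARM64_SSBD_UNKNOWN is what remains when CONFIG_ARM64_SSBD is disabled
or the firmware does not implement ARCH_WORKAROUND_2, hence the
"Unknown" string in the default case.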



* [PATCH 6/6] arm64: enable generic CPU vulnerabilities support
  2018-08-07 18:14 [PATCH 0/6] arm64: add support for generic cpu vulnerabilities Mian Yousaf Kaukab
                   ` (4 preceding siblings ...)
  2018-08-07 18:14 ` [PATCH 5/6] arm64: add sysfs vulnerability show for speculative store bypass Mian Yousaf Kaukab
@ 2018-08-07 18:14 ` Mian Yousaf Kaukab
  5 siblings, 0 replies; 10+ messages in thread
From: Mian Yousaf Kaukab @ 2018-08-07 18:14 UTC (permalink / raw)
  To: will.deacon, marc.zyngier
  Cc: linux-kernel, linux-arm-kernel, robert.richter, cwu, Mian Yousaf Kaukab

Enable the CPU vulnerability show functions for spectre_v1, spectre_v2,
meltdown and spec_store_bypass.

Signed-off-by: Mian Yousaf Kaukab <ykaukab@suse.de>
---
 arch/arm64/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 0dec01a0c81c..ffd97bc0f5d5 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -84,6 +84,7 @@ config ARM64
 	select GENERIC_CLOCKEVENTS
 	select GENERIC_CLOCKEVENTS_BROADCAST
 	select GENERIC_CPU_AUTOPROBE
+	select GENERIC_CPU_VULNERABILITIES
 	select GENERIC_EARLY_IOREMAP
 	select GENERIC_IDLE_POLL_SETUP
 	select GENERIC_IRQ_MULTI_HANDLER
-- 
2.11.0



* Re: [PATCH 4/6] arm64: add sysfs vulnerability show for spectre v2
  2018-12-13 11:09   ` Julien Thierry
@ 2019-01-02 22:19     ` Jeremy Linton
  0 siblings, 0 replies; 10+ messages in thread
From: Jeremy Linton @ 2019-01-02 22:19 UTC (permalink / raw)
  To: Julien Thierry, linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	dave.martin, shankerd, mark.rutland, linux-kernel, ykaukab

Hi,

On 12/13/2018 05:09 AM, Julien Thierry wrote:
> 
> 
> On 06/12/2018 23:44, Jeremy Linton wrote:
>> Add code to track whether all the cores in the machine are
>> vulnerable, and whether all the vulnerable cores have been
>> mitigated.
>>
>> Once we have that information we can add the sysfs stub and
>> provide an accurate view of what is known about the machine.
>>
>> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
>> ---
>>   arch/arm64/kernel/cpu_errata.c | 72 +++++++++++++++++++++++++++++++---
>>   1 file changed, 67 insertions(+), 5 deletions(-)
>>
>> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
>> index 559ecdee6fd2..6505c93d507e 100644
>> --- a/arch/arm64/kernel/cpu_errata.c
>> +++ b/arch/arm64/kernel/cpu_errata.c
> 
> [...]
> 
>> @@ -766,4 +812,20 @@ ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr,
>>   	return sprintf(buf, "Mitigation: __user pointer sanitization\n");
>>   }
>>   
>> +ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr,
>> +		char *buf)
>> +{
>> +	switch (__spectrev2_safe) {
>> +	case A64_SV2_SAFE:
>> +		return sprintf(buf, "Not affected\n");
>> +	case A64_SV2_UNSAFE:
>> +		if (__hardenbp_enab == A64_HBP_MIT)
>> +			return sprintf(buf,
>> +				"Mitigation: Branch predictor hardening\n");
>> +		return sprintf(buf, "Vulnerable\n");
>> +	default:
>> +		return sprintf(buf, "Unknown\n");
>> +	}
> 
> Again I see that we are going to display "Unknown" when the mitigation
> is not built in.
> 
> Couldn't we make that CONFIG_GENERIC_CPU_VULNERABILITIES check whether a
> CPU is vulnerable or not even if the mitigation is not implemented? It's
> just checking the list of MIDRs.


Before I re-post, it's probably worth pointing out that spectrev2_safe
isn't set the same way as the meltdown-safe flag (which reflects a
whitelist, or "cpu good", flag), where the unknown and unsafe conditions
are currently the same.

spectrev2_safe is a combined white/black list: a black list of known
vulnerable cores, plus cores with CSV2 set indicating they are good.
This means the unset condition conceptually covers the check being
disabled, as well as the core not being on either the known-bad or the
known-good list. So you still need a dedicated "unknown" state, because
the final state isn't unknown simply because the mitigation is not
compiled in.



* Re: [PATCH 4/6] arm64: add sysfs vulnerability show for spectre v2
  2018-12-06 23:44 ` [PATCH 4/6] arm64: add sysfs vulnerability show for spectre v2 Jeremy Linton
@ 2018-12-13 11:09   ` Julien Thierry
  2019-01-02 22:19     ` Jeremy Linton
  0 siblings, 1 reply; 10+ messages in thread
From: Julien Thierry @ 2018-12-13 11:09 UTC (permalink / raw)
  To: Jeremy Linton, linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	dave.martin, shankerd, mark.rutland, linux-kernel, ykaukab



On 06/12/2018 23:44, Jeremy Linton wrote:
> Add code to track whether all the cores in the machine are
> vulnerable, and whether all the vulnerable cores have been
> mitigated.
> 
> Once we have that information we can add the sysfs stub and
> provide an accurate view of what is known about the machine.
> 
> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
> ---
>  arch/arm64/kernel/cpu_errata.c | 72 +++++++++++++++++++++++++++++++---
>  1 file changed, 67 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
> index 559ecdee6fd2..6505c93d507e 100644
> --- a/arch/arm64/kernel/cpu_errata.c
> +++ b/arch/arm64/kernel/cpu_errata.c

[...]

> @@ -766,4 +812,20 @@ ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr,
>  	return sprintf(buf, "Mitigation: __user pointer sanitization\n");
>  }
>  
> +ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr,
> +		char *buf)
> +{
> +	switch (__spectrev2_safe) {
> +	case A64_SV2_SAFE:
> +		return sprintf(buf, "Not affected\n");
> +	case A64_SV2_UNSAFE:
> +		if (__hardenbp_enab == A64_HBP_MIT)
> +			return sprintf(buf,
> +				"Mitigation: Branch predictor hardening\n");
> +		return sprintf(buf, "Vulnerable\n");
> +	default:
> +		return sprintf(buf, "Unknown\n");
> +	}

Again I see that we are going to display "Unknown" when the mitigation
is not built in.

Couldn't we make that CONFIG_GENERIC_CPU_VULNERABILITIES check whether a
CPU is vulnerable or not even if the mitigation is not implemented? It's
just checking the list of MIDRs.

Thanks,

-- 
Julien Thierry


* [PATCH 4/6] arm64: add sysfs vulnerability show for spectre v2
  2018-12-06 23:44 [PATCH 0/6] add system vulnerability sysfs entries Jeremy Linton
@ 2018-12-06 23:44 ` Jeremy Linton
  2018-12-13 11:09   ` Julien Thierry
  0 siblings, 1 reply; 10+ messages in thread
From: Jeremy Linton @ 2018-12-06 23:44 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	dave.martin, shankerd, mark.rutland, linux-kernel, ykaukab,
	Jeremy Linton

Add code to track whether all the cores in the machine are
vulnerable, and whether all the vulnerable cores have been
mitigated.

Once we have that information we can add the sysfs stub and
provide an accurate view of what is known about the machine.

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
---
 arch/arm64/kernel/cpu_errata.c | 72 +++++++++++++++++++++++++++++++---
 1 file changed, 67 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 559ecdee6fd2..6505c93d507e 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -109,6 +109,11 @@ cpu_enable_trap_ctr_access(const struct arm64_cpu_capabilities *__unused)
 
 atomic_t arm64_el2_vector_last_slot = ATOMIC_INIT(-1);
 
+#if defined(CONFIG_HARDEN_BRANCH_PREDICTOR) || defined(CONFIG_GENERIC_CPU_VULNERABILITIES)
+/* Track overall mitigation state. We are only mitigated if all cores are ok */
+static enum { A64_HBP_UNSET, A64_HBP_MIT, A64_HBP_NOTMIT } __hardenbp_enab = A64_HBP_UNSET;
+#endif
+
 #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
 #include <asm/mmu_context.h>
 #include <asm/cacheflush.h>
@@ -231,15 +236,19 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
 	if (!entry->matches(entry, SCOPE_LOCAL_CPU))
 		return;
 
-	if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
+	if (psci_ops.smccc_version == SMCCC_VERSION_1_0) {
+		__hardenbp_enab = A64_HBP_NOTMIT;
 		return;
+	}
 
 	switch (psci_ops.conduit) {
 	case PSCI_CONDUIT_HVC:
 		arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
 				  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
-		if ((int)res.a0 < 0)
+		if ((int)res.a0 < 0) {
+			__hardenbp_enab = A64_HBP_NOTMIT;
 			return;
+		}
 		cb = call_hvc_arch_workaround_1;
 		/* This is a guest, no need to patch KVM vectors */
 		smccc_start = NULL;
@@ -249,14 +258,17 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
 	case PSCI_CONDUIT_SMC:
 		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
 				  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
-		if ((int)res.a0 < 0)
+		if ((int)res.a0 < 0) {
+			__hardenbp_enab = A64_HBP_NOTMIT;
 			return;
+		}
 		cb = call_smc_arch_workaround_1;
 		smccc_start = __smccc_workaround_1_smc_start;
 		smccc_end = __smccc_workaround_1_smc_end;
 		break;
 
 	default:
+		__hardenbp_enab = A64_HBP_NOTMIT;
 		return;
 	}
 
@@ -266,6 +278,9 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
 
 	install_bp_hardening_cb(entry, cb, smccc_start, smccc_end);
 
+	if (__hardenbp_enab == A64_HBP_UNSET)
+		__hardenbp_enab = A64_HBP_MIT;
+
 	return;
 }
 #endif	/* CONFIG_HARDEN_BRANCH_PREDICTOR */
@@ -539,7 +554,36 @@ multi_entry_cap_cpu_enable(const struct arm64_cpu_capabilities *entry)
 			caps->cpu_enable(caps);
 }
 
-#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+#if defined(CONFIG_HARDEN_BRANCH_PREDICTOR) || \
+	defined(CONFIG_GENERIC_CPU_VULNERABILITIES)
+
+static enum { A64_SV2_UNSET, A64_SV2_SAFE, A64_SV2_UNSAFE } __spectrev2_safe = A64_SV2_UNSET;
+
+/*
+ * Track overall bp hardening for all heterogeneous cores in the machine.
+ * We are only considered "safe" if all booted cores are known safe.
+ */
+static bool __maybe_unused
+check_branch_predictor(const struct arm64_cpu_capabilities *entry, int scope)
+{
+	bool is_vul;
+	bool has_csv2;
+	u64 pfr0;
+
+	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
+
+	is_vul = is_midr_in_range_list(read_cpuid_id(), entry->midr_range_list);
+
+	pfr0 = read_cpuid(ID_AA64PFR0_EL1);
+	has_csv2 = cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_CSV2_SHIFT);
+
+	if (is_vul)
+		__spectrev2_safe = A64_SV2_UNSAFE;
+	else if (__spectrev2_safe == A64_SV2_UNSET && has_csv2)
+		__spectrev2_safe = A64_SV2_SAFE;
+
+	return is_vul;
+}
 
 /*
  * List of CPUs where we need to issue a psci call to
@@ -728,7 +772,9 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 	{
 		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
 		.cpu_enable = enable_smccc_arch_workaround_1,
-		ERRATA_MIDR_RANGE_LIST(arm64_bp_harden_smccc_cpus),
+		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
+		.matches = check_branch_predictor,
+		.midr_range_list = arm64_bp_harden_smccc_cpus,
 	},
 #endif
 #ifdef CONFIG_HARDEN_EL2_VECTORS
@@ -766,4 +812,20 @@ ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr,
 	return sprintf(buf, "Mitigation: __user pointer sanitization\n");
 }
 
+ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr,
+		char *buf)
+{
+	switch (__spectrev2_safe) {
+	case A64_SV2_SAFE:
+		return sprintf(buf, "Not affected\n");
+	case A64_SV2_UNSAFE:
+		if (__hardenbp_enab == A64_HBP_MIT)
+			return sprintf(buf,
+				"Mitigation: Branch predictor hardening\n");
+		return sprintf(buf, "Vulnerable\n");
+	default:
+		return sprintf(buf, "Unknown\n");
+	}
+}
+
 #endif
-- 
2.17.2


