linux-kernel.vger.kernel.org archive mirror
* [PATCH v4 00/12] arm64: add system vulnerability sysfs entries
@ 2019-01-25 18:06 Jeremy Linton
  2019-01-25 18:07 ` [PATCH v4 01/12] Documentation: Document arm64 kpti control Jeremy Linton
                   ` (12 more replies)
  0 siblings, 13 replies; 35+ messages in thread
From: Jeremy Linton @ 2019-01-25 18:06 UTC
  To: linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	dave.martin, shankerd, linux-kernel, ykaukab, julien.thierry,
	mlangsdo, steven.price, stefan.wahren, Jeremy Linton

Arm64 machines should display a human-readable
vulnerability status for speculative execution attacks in
/sys/devices/system/cpu/vulnerabilities

This series enables that behavior by providing the expected
functions. Those functions expose the CPU errata and feature
states, as well as whether firmware is responding appropriately,
to display the overall machine status. This means that on a
heterogeneous machine we will only claim the machine is mitigated
or safe if we are confident all booted cores are safe or
mitigated.
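
As a quick illustration, with the series applied the overall status
can be read straight out of sysfs (example output; the exact strings
come from the individual patches):

  $ grep . /sys/devices/system/cpu/vulnerabilities/*
  /sys/devices/system/cpu/vulnerabilities/meltdown:Mitigation: KPTI
  /sys/devices/system/cpu/vulnerabilities/spectre_v1:Mitigation: __user pointer sanitization
  /sys/devices/system/cpu/vulnerabilities/spectre_v2:Mitigation: Branch predictor hardening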

v3->v4:
	Drop the patch which selectively exports sysfs entries
	Remove the CONFIG_EXPERT hidden options which allowed
	       the kernel to be built without the vulnerability
	       detection code.
	Pick up Marc Z's patches which invert the white/black
	       lists for spectrev2 and clean up the firmware
	       detection logic.
	Document the existing kpti controls
	Add a nospectre_v2 option to disable the mitigation
	       at boot time

v2->v3:
	Remove "Unknown" states, replace with further blacklists
	       and default vulnerable/not affected states.
	Add the ability for an arch port to selectively export
	       sysfs vulnerabilities.

v1->v2:
	Add "Unknown" state to ABI/testing docs.
	Minor tweaks.

Jeremy Linton (8):
  Documentation: Document arm64 kpti control
  arm64: Provide a command line to disable spectre_v2 mitigation
  arm64: Remove the ability to build a kernel without ssbd
  arm64: remove the ability to build a kernel without hardened branch
    predictors
  arm64: remove the ability to build a kernel without kpti
  arm64: add sysfs vulnerability show for meltdown
  arm64: add sysfs vulnerability show for spectre v2
  arm64: add sysfs vulnerability show for speculative store bypass

Marc Zyngier (2):
  arm64: Advertise mitigation of Spectre-v2, or lack thereof
  arm64: Use firmware to detect CPUs that are not affected by Spectre-v2

Mian Yousaf Kaukab (2):
  arm64: add sysfs vulnerability show for spectre v1
  arm64: enable generic CPU vulnerabilities support

 .../admin-guide/kernel-parameters.txt         |  14 +-
 arch/arm64/Kconfig                            |  39 +--
 arch/arm64/include/asm/cpufeature.h           |   8 -
 arch/arm64/include/asm/fixmap.h               |   2 -
 arch/arm64/include/asm/kvm_mmu.h              |  19 --
 arch/arm64/include/asm/mmu.h                  |  19 +-
 arch/arm64/include/asm/sdei.h                 |   2 +-
 arch/arm64/kernel/Makefile                    |   3 +-
 arch/arm64/kernel/asm-offsets.c               |   2 -
 arch/arm64/kernel/cpu_errata.c                | 242 ++++++++++++------
 arch/arm64/kernel/cpufeature.c                |  41 ++-
 arch/arm64/kernel/entry.S                     |  15 +-
 arch/arm64/kernel/sdei.c                      |   2 -
 arch/arm64/kernel/vmlinux.lds.S               |   8 -
 arch/arm64/kvm/Kconfig                        |   3 -
 arch/arm64/kvm/hyp/hyp-entry.S                |   4 -
 arch/arm64/kvm/hyp/switch.c                   |   4 -
 arch/arm64/mm/context.c                       |   6 -
 arch/arm64/mm/mmu.c                           |   2 -
 arch/arm64/mm/proc.S                          |   2 -
 20 files changed, 207 insertions(+), 230 deletions(-)

-- 
2.17.2



* [PATCH v4 01/12] Documentation: Document arm64 kpti control
  2019-01-25 18:06 [PATCH v4 00/12] arm64: add system vulnerability sysfs entries Jeremy Linton
@ 2019-01-25 18:07 ` Jeremy Linton
  2019-01-30 18:02   ` Andre Przywara
                     ` (2 more replies)
  2019-01-25 18:07 ` [PATCH v4 02/12] arm64: Provide a command line to disable spectre_v2 mitigation Jeremy Linton
                   ` (11 subsequent siblings)
  12 siblings, 3 replies; 35+ messages in thread
From: Jeremy Linton @ 2019-01-25 18:07 UTC
  To: linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	dave.martin, shankerd, linux-kernel, ykaukab, julien.thierry,
	mlangsdo, steven.price, stefan.wahren, Jeremy Linton,
	Jonathan Corbet, linux-doc

For a while, arm64 has been capable of force-enabling
or disabling the kpti mitigation. Let's make sure the
documentation reflects that.

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: linux-doc@vger.kernel.org
---
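Note: as a usage illustration only (hypothetical command line), the
documentation added here corresponds to booting with:

  ... kpti=0    (force page table isolation off)
  ... kpti=1    (force it on, even on unaffected cores)
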
 Documentation/admin-guide/kernel-parameters.txt | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index b799bcf67d7b..9475f02c79da 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1982,6 +1982,12 @@
 			Built with CONFIG_DEBUG_KMEMLEAK_DEFAULT_OFF=y,
 			the default is off.
 
+	kpti=		[ARM64] Control page table isolation of user
+			and kernel address spaces.
+			Default: enabled on cores which need mitigation.
+			0: force disabled
+			1: force enabled
+
 	kvm.ignore_msrs=[KVM] Ignore guest accesses to unhandled MSRs.
 			Default is 0 (don't ignore, but inject #GP)
 
-- 
2.17.2



* [PATCH v4 02/12] arm64: Provide a command line to disable spectre_v2 mitigation
  2019-01-25 18:06 [PATCH v4 00/12] arm64: add system vulnerability sysfs entries Jeremy Linton
  2019-01-25 18:07 ` [PATCH v4 01/12] Documentation: Document arm64 kpti control Jeremy Linton
@ 2019-01-25 18:07 ` Jeremy Linton
  2019-01-30 18:03   ` Andre Przywara
  2019-01-25 18:07 ` [PATCH v4 03/12] arm64: Remove the ability to build a kernel without ssbd Jeremy Linton
                   ` (10 subsequent siblings)
  12 siblings, 1 reply; 35+ messages in thread
From: Jeremy Linton @ 2019-01-25 18:07 UTC
  To: linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	dave.martin, shankerd, linux-kernel, ykaukab, julien.thierry,
	mlangsdo, steven.price, stefan.wahren, Jeremy Linton,
	Jonathan Corbet, linux-doc

There are various reasons, including benchmarking, to disable the
spectrev2 mitigation on a machine. Provide a command-line option to do so.

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: linux-doc@vger.kernel.org
---
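Note: as a usage illustration (hypothetical command line), the
mitigation can now be disabled on all cores by booting with:

  ... nospectre_v2

The early_param() hook in the diff below only records the flag; the
enable path checks it before installing the workaround.
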
 Documentation/admin-guide/kernel-parameters.txt |  8 ++++----
 arch/arm64/kernel/cpu_errata.c                  | 11 +++++++++++
 2 files changed, 15 insertions(+), 4 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 9475f02c79da..2ae77979488e 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -2849,10 +2849,10 @@
 			check bypass). With this option data leaks are possible
 			in the system.
 
-	nospectre_v2	[X86,PPC_FSL_BOOK3E] Disable all mitigations for the Spectre variant 2
-			(indirect branch prediction) vulnerability. System may
-			allow data leaks with this option, which is equivalent
-			to spectre_v2=off.
+	nospectre_v2	[X86,PPC_FSL_BOOK3E,ARM64] Disable all mitigations for
+			the Spectre variant 2 (indirect branch prediction)
+			vulnerability. System may allow data leaks with this
+			option.
 
 	nospec_store_bypass_disable
 			[HW] Disable all mitigations for the Speculative Store Bypass vulnerability
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 9950bb0cbd52..9a7b5fca51a0 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -220,6 +220,14 @@ static void qcom_link_stack_sanitization(void)
 		     : "=&r" (tmp));
 }
 
+static bool __nospectre_v2;
+static int __init parse_nospectre_v2(char *str)
+{
+	__nospectre_v2 = true;
+	return 0;
+}
+early_param("nospectre_v2", parse_nospectre_v2);
+
 static void
 enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
 {
@@ -231,6 +239,9 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
 	if (!entry->matches(entry, SCOPE_LOCAL_CPU))
 		return;
 
+	if (__nospectre_v2)
+		return;
+
 	if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
 		return;
 
-- 
2.17.2



* [PATCH v4 03/12] arm64: Remove the ability to build a kernel without ssbd
  2019-01-25 18:06 [PATCH v4 00/12] arm64: add system vulnerability sysfs entries Jeremy Linton
  2019-01-25 18:07 ` [PATCH v4 01/12] Documentation: Document arm64 kpti control Jeremy Linton
  2019-01-25 18:07 ` [PATCH v4 02/12] arm64: Provide a command line to disable spectre_v2 mitigation Jeremy Linton
@ 2019-01-25 18:07 ` Jeremy Linton
  2019-01-30 18:04   ` Andre Przywara
  2019-01-25 18:07 ` [PATCH v4 04/12] arm64: remove the ability to build a kernel without hardened branch predictors Jeremy Linton
                   ` (9 subsequent siblings)
  12 siblings, 1 reply; 35+ messages in thread
From: Jeremy Linton @ 2019-01-25 18:07 UTC
  To: linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	dave.martin, shankerd, linux-kernel, ykaukab, julien.thierry,
	mlangsdo, steven.price, stefan.wahren, Jeremy Linton,
	Christoffer Dall, kvmarm

Buried behind EXPERT is the ability to build a kernel without
SSBD; this needlessly clutters up the code and creates the
opportunity for bugs. It also removes the kernel's ability
to determine whether the machine it's running on is vulnerable.

Since it's also possible to disable the mitigation at boot time,
let's remove the config option.

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---
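Note: the boot-time control referred to above is the existing ssbd=
parameter (ssbd=force-on / force-off / kernel, per
Documentation/admin-guide/kernel-parameters.txt), e.g. booting with
ssbd=force-off.
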
 arch/arm64/Kconfig                  | 9 ---------
 arch/arm64/include/asm/cpufeature.h | 8 --------
 arch/arm64/include/asm/kvm_mmu.h    | 7 -------
 arch/arm64/kernel/Makefile          | 3 +--
 arch/arm64/kernel/cpu_errata.c      | 4 ----
 arch/arm64/kernel/cpufeature.c      | 4 ----
 arch/arm64/kernel/entry.S           | 2 --
 arch/arm64/kvm/hyp/hyp-entry.S      | 2 --
 arch/arm64/kvm/hyp/switch.c         | 4 ----
 9 files changed, 1 insertion(+), 42 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index a4168d366127..0baa632bf0a8 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1038,15 +1038,6 @@ config HARDEN_EL2_VECTORS
 
 	  If unsure, say Y.
 
-config ARM64_SSBD
-	bool "Speculative Store Bypass Disable" if EXPERT
-	default y
-	help
-	  This enables mitigation of the bypassing of previous stores
-	  by speculative loads.
-
-	  If unsure, say Y.
-
 config RODATA_FULL_DEFAULT_ENABLED
 	bool "Apply r/o permissions of VM areas also to their linear aliases"
 	default y
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index dfcfba725d72..bbed2067a1a4 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -620,19 +620,11 @@ static inline bool system_supports_generic_auth(void)
 
 static inline int arm64_get_ssbd_state(void)
 {
-#ifdef CONFIG_ARM64_SSBD
 	extern int ssbd_state;
 	return ssbd_state;
-#else
-	return ARM64_SSBD_UNKNOWN;
-#endif
 }
 
-#ifdef CONFIG_ARM64_SSBD
 void arm64_set_ssbd_mitigation(bool state);
-#else
-static inline void arm64_set_ssbd_mitigation(bool state) {}
-#endif
 
 extern int do_emulate_mrs(struct pt_regs *regs, u32 sys_reg, u32 rt);
 
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 8af4b1befa42..a5c152d79820 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -541,7 +541,6 @@ static inline int kvm_map_vectors(void)
 }
 #endif
 
-#ifdef CONFIG_ARM64_SSBD
 DECLARE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
 
 static inline int hyp_map_aux_data(void)
@@ -558,12 +557,6 @@ static inline int hyp_map_aux_data(void)
 	}
 	return 0;
 }
-#else
-static inline int hyp_map_aux_data(void)
-{
-	return 0;
-}
-#endif
 
 #define kvm_phys_to_vttbr(addr)		phys_to_ttbr(addr)
 
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index cd434d0719c1..306336a2fa34 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -19,7 +19,7 @@ obj-y			:= debug-monitors.o entry.o irq.o fpsimd.o		\
 			   return_address.o cpuinfo.o cpu_errata.o		\
 			   cpufeature.o alternative.o cacheinfo.o		\
 			   smp.o smp_spin_table.o topology.o smccc-call.o	\
-			   syscall.o
+			   syscall.o ssbd.o
 
 extra-$(CONFIG_EFI)			:= efi-entry.o
 
@@ -57,7 +57,6 @@ arm64-reloc-test-y := reloc_test_core.o reloc_test_syms.o
 obj-$(CONFIG_CRASH_DUMP)		+= crash_dump.o
 obj-$(CONFIG_CRASH_CORE)		+= crash_core.o
 obj-$(CONFIG_ARM_SDE_INTERFACE)		+= sdei.o
-obj-$(CONFIG_ARM64_SSBD)		+= ssbd.o
 obj-$(CONFIG_ARM64_PTR_AUTH)		+= pointer_auth.o
 
 obj-y					+= vdso/ probes/
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 9a7b5fca51a0..934d50788ca3 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -281,7 +281,6 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
 }
 #endif	/* CONFIG_HARDEN_BRANCH_PREDICTOR */
 
-#ifdef CONFIG_ARM64_SSBD
 DEFINE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
 
 int ssbd_state __read_mostly = ARM64_SSBD_KERNEL;
@@ -473,7 +472,6 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
 
 	return required;
 }
-#endif	/* CONFIG_ARM64_SSBD */
 
 static void __maybe_unused
 cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
@@ -726,14 +724,12 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 		ERRATA_MIDR_RANGE_LIST(arm64_harden_el2_vectors),
 	},
 #endif
-#ifdef CONFIG_ARM64_SSBD
 	{
 		.desc = "Speculative Store Bypass Disable",
 		.capability = ARM64_SSBD,
 		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
 		.matches = has_ssbd_mitigation,
 	},
-#endif
 #ifdef CONFIG_ARM64_ERRATUM_1188873
 	{
 		/* Cortex-A76 r0p0 to r2p0 */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index f6d84e2c92fe..d1a7fd7972f9 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1131,7 +1131,6 @@ static void cpu_has_fwb(const struct arm64_cpu_capabilities *__unused)
 	WARN_ON(val & (7 << 27 | 7 << 21));
 }
 
-#ifdef CONFIG_ARM64_SSBD
 static int ssbs_emulation_handler(struct pt_regs *regs, u32 instr)
 {
 	if (user_mode(regs))
@@ -1171,7 +1170,6 @@ static void cpu_enable_ssbs(const struct arm64_cpu_capabilities *__unused)
 		arm64_set_ssbd_mitigation(true);
 	}
 }
-#endif /* CONFIG_ARM64_SSBD */
 
 #ifdef CONFIG_ARM64_PAN
 static void cpu_enable_pan(const struct arm64_cpu_capabilities *__unused)
@@ -1400,7 +1398,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.field_pos = ID_AA64ISAR0_CRC32_SHIFT,
 		.min_field_value = 1,
 	},
-#ifdef CONFIG_ARM64_SSBD
 	{
 		.desc = "Speculative Store Bypassing Safe (SSBS)",
 		.capability = ARM64_SSBS,
@@ -1412,7 +1409,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.min_field_value = ID_AA64PFR1_SSBS_PSTATE_ONLY,
 		.cpu_enable = cpu_enable_ssbs,
 	},
-#endif
 #ifdef CONFIG_ARM64_CNP
 	{
 		.desc = "Common not Private translations",
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 0ec0c46b2c0c..bee54b7d17b9 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -137,7 +137,6 @@ alternative_else_nop_endif
 	// This macro corrupts x0-x3. It is the caller's duty
 	// to save/restore them if required.
 	.macro	apply_ssbd, state, tmp1, tmp2
-#ifdef CONFIG_ARM64_SSBD
 alternative_cb	arm64_enable_wa2_handling
 	b	.L__asm_ssbd_skip\@
 alternative_cb_end
@@ -151,7 +150,6 @@ alternative_cb	arm64_update_smccc_conduit
 	nop					// Patched to SMC/HVC #0
 alternative_cb_end
 .L__asm_ssbd_skip\@:
-#endif
 	.endm
 
 	.macro	kernel_entry, el, regsize = 64
diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
index 73c1b483ec39..53c9344968d4 100644
--- a/arch/arm64/kvm/hyp/hyp-entry.S
+++ b/arch/arm64/kvm/hyp/hyp-entry.S
@@ -114,7 +114,6 @@ el1_hvc_guest:
 			  ARM_SMCCC_ARCH_WORKAROUND_2)
 	cbnz	w1, el1_trap
 
-#ifdef CONFIG_ARM64_SSBD
 alternative_cb	arm64_enable_wa2_handling
 	b	wa2_end
 alternative_cb_end
@@ -141,7 +140,6 @@ alternative_cb_end
 wa2_end:
 	mov	x2, xzr
 	mov	x1, xzr
-#endif
 
 wa_epilogue:
 	mov	x0, xzr
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index b0b1478094b4..9ce43ae6fc13 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -436,7 +436,6 @@ static inline bool __hyp_text __needs_ssbd_off(struct kvm_vcpu *vcpu)
 
 static void __hyp_text __set_guest_arch_workaround_state(struct kvm_vcpu *vcpu)
 {
-#ifdef CONFIG_ARM64_SSBD
 	/*
 	 * The host runs with the workaround always present. If the
 	 * guest wants it disabled, so be it...
@@ -444,19 +443,16 @@ static void __hyp_text __set_guest_arch_workaround_state(struct kvm_vcpu *vcpu)
 	if (__needs_ssbd_off(vcpu) &&
 	    __hyp_this_cpu_read(arm64_ssbd_callback_required))
 		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2, 0, NULL);
-#endif
 }
 
 static void __hyp_text __set_host_arch_workaround_state(struct kvm_vcpu *vcpu)
 {
-#ifdef CONFIG_ARM64_SSBD
 	/*
 	 * If the guest has disabled the workaround, bring it back on.
 	 */
 	if (__needs_ssbd_off(vcpu) &&
 	    __hyp_this_cpu_read(arm64_ssbd_callback_required))
 		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2, 1, NULL);
-#endif
 }
 
 /* Switch to the guest for VHE systems running in EL2 */
-- 
2.17.2



* [PATCH v4 04/12] arm64: remove the ability to build a kernel without hardened branch predictors
  2019-01-25 18:06 [PATCH v4 00/12] arm64: add system vulnerability sysfs entries Jeremy Linton
                   ` (2 preceding siblings ...)
  2019-01-25 18:07 ` [PATCH v4 03/12] arm64: Remove the ability to build a kernel without ssbd Jeremy Linton
@ 2019-01-25 18:07 ` Jeremy Linton
  2019-01-30 18:04   ` Andre Przywara
  2019-01-25 18:07 ` [PATCH v4 05/12] arm64: remove the ability to build a kernel without kpti Jeremy Linton
                   ` (8 subsequent siblings)
  12 siblings, 1 reply; 35+ messages in thread
From: Jeremy Linton @ 2019-01-25 18:07 UTC
  To: linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	dave.martin, shankerd, linux-kernel, ykaukab, julien.thierry,
	mlangsdo, steven.price, stefan.wahren, Jeremy Linton,
	Christoffer Dall, kvmarm

Buried behind EXPERT is the ability to build a kernel without
hardened branch predictors; this needlessly clutters up the
code and creates the opportunity for bugs. It also
removes the kernel's ability to determine whether the machine
it's running on is vulnerable.

Since it's also possible to disable the mitigation at boot time,
let's remove the config option.

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---
 arch/arm64/Kconfig               | 17 -----------------
 arch/arm64/include/asm/kvm_mmu.h | 12 ------------
 arch/arm64/include/asm/mmu.h     | 12 ------------
 arch/arm64/kernel/cpu_errata.c   | 19 -------------------
 arch/arm64/kernel/entry.S        |  2 --
 arch/arm64/kvm/Kconfig           |  3 ---
 arch/arm64/kvm/hyp/hyp-entry.S   |  2 --
 7 files changed, 67 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 0baa632bf0a8..6b4c6d3fdf4d 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1005,23 +1005,6 @@ config UNMAP_KERNEL_AT_EL0
 
 	  If unsure, say Y.
 
-config HARDEN_BRANCH_PREDICTOR
-	bool "Harden the branch predictor against aliasing attacks" if EXPERT
-	default y
-	help
-	  Speculation attacks against some high-performance processors rely on
-	  being able to manipulate the branch predictor for a victim context by
-	  executing aliasing branches in the attacker context.  Such attacks
-	  can be partially mitigated against by clearing internal branch
-	  predictor state and limiting the prediction logic in some situations.
-
-	  This config option will take CPU-specific actions to harden the
-	  branch predictor against aliasing attacks and may rely on specific
-	  instruction sequences or control bits being set by the system
-	  firmware.
-
-	  If unsure, say Y.
-
 config HARDEN_EL2_VECTORS
 	bool "Harden EL2 vector mapping against system register leak" if EXPERT
 	default y
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index a5c152d79820..9dd680194db9 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -444,7 +444,6 @@ static inline int kvm_read_guest_lock(struct kvm *kvm,
 	return ret;
 }
 
-#ifdef CONFIG_KVM_INDIRECT_VECTORS
 /*
  * EL2 vectors can be mapped and rerouted in a number of ways,
  * depending on the kernel configuration and CPU present:
@@ -529,17 +528,6 @@ static inline int kvm_map_vectors(void)
 
 	return 0;
 }
-#else
-static inline void *kvm_get_hyp_vector(void)
-{
-	return kern_hyp_va(kvm_ksym_ref(__kvm_hyp_vector));
-}
-
-static inline int kvm_map_vectors(void)
-{
-	return 0;
-}
-#endif
 
 DECLARE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
 
diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 3e8063f4f9d3..20fdf71f96c3 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -95,13 +95,9 @@ struct bp_hardening_data {
 	bp_hardening_cb_t	fn;
 };
 
-#if (defined(CONFIG_HARDEN_BRANCH_PREDICTOR) ||	\
-     defined(CONFIG_HARDEN_EL2_VECTORS))
 extern char __bp_harden_hyp_vecs_start[], __bp_harden_hyp_vecs_end[];
 extern atomic_t arm64_el2_vector_last_slot;
-#endif  /* CONFIG_HARDEN_BRANCH_PREDICTOR || CONFIG_HARDEN_EL2_VECTORS */
 
-#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
 DECLARE_PER_CPU_READ_MOSTLY(struct bp_hardening_data, bp_hardening_data);
 
 static inline struct bp_hardening_data *arm64_get_bp_hardening_data(void)
@@ -120,14 +116,6 @@ static inline void arm64_apply_bp_hardening(void)
 	if (d->fn)
 		d->fn();
 }
-#else
-static inline struct bp_hardening_data *arm64_get_bp_hardening_data(void)
-{
-	return NULL;
-}
-
-static inline void arm64_apply_bp_hardening(void)	{ }
-#endif	/* CONFIG_HARDEN_BRANCH_PREDICTOR */
 
 extern void paging_init(void);
 extern void bootmem_init(void);
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 934d50788ca3..de09a3537cd4 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -109,13 +109,11 @@ cpu_enable_trap_ctr_access(const struct arm64_cpu_capabilities *__unused)
 
 atomic_t arm64_el2_vector_last_slot = ATOMIC_INIT(-1);
 
-#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
 #include <asm/mmu_context.h>
 #include <asm/cacheflush.h>
 
 DEFINE_PER_CPU_READ_MOSTLY(struct bp_hardening_data, bp_hardening_data);
 
-#ifdef CONFIG_KVM_INDIRECT_VECTORS
 extern char __smccc_workaround_1_smc_start[];
 extern char __smccc_workaround_1_smc_end[];
 
@@ -165,17 +163,6 @@ static void __install_bp_hardening_cb(bp_hardening_cb_t fn,
 	__this_cpu_write(bp_hardening_data.fn, fn);
 	raw_spin_unlock(&bp_lock);
 }
-#else
-#define __smccc_workaround_1_smc_start		NULL
-#define __smccc_workaround_1_smc_end		NULL
-
-static void __install_bp_hardening_cb(bp_hardening_cb_t fn,
-				      const char *hyp_vecs_start,
-				      const char *hyp_vecs_end)
-{
-	__this_cpu_write(bp_hardening_data.fn, fn);
-}
-#endif	/* CONFIG_KVM_INDIRECT_VECTORS */
 
 static void  install_bp_hardening_cb(const struct arm64_cpu_capabilities *entry,
 				     bp_hardening_cb_t fn,
@@ -279,7 +266,6 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
 
 	return;
 }
-#endif	/* CONFIG_HARDEN_BRANCH_PREDICTOR */
 
 DEFINE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
 
@@ -516,7 +502,6 @@ cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
 	.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,			\
 	CAP_MIDR_RANGE_LIST(midr_list)
 
-#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
 
 /*
  * List of CPUs where we need to issue a psci call to
@@ -535,8 +520,6 @@ static const struct midr_range arm64_bp_harden_smccc_cpus[] = {
 	{},
 };
 
-#endif
-
 #ifdef CONFIG_HARDEN_EL2_VECTORS
 
 static const struct midr_range arm64_harden_el2_vectors[] = {
@@ -710,13 +693,11 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 		ERRATA_MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
 	},
 #endif
-#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
 	{
 		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
 		.cpu_enable = enable_smccc_arch_workaround_1,
 		ERRATA_MIDR_RANGE_LIST(arm64_bp_harden_smccc_cpus),
 	},
-#endif
 #ifdef CONFIG_HARDEN_EL2_VECTORS
 	{
 		.desc = "EL2 vector hardening",
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index bee54b7d17b9..3f0eaaf704c8 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -842,11 +842,9 @@ el0_irq_naked:
 #endif
 
 	ct_user_exit
-#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
 	tbz	x22, #55, 1f
 	bl	do_el0_irq_bp_hardening
 1:
-#endif
 	irq_handler
 
 #ifdef CONFIG_TRACE_IRQFLAGS
diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index a3f85624313e..402bcfb85f25 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -58,9 +58,6 @@ config KVM_ARM_PMU
 	  Adds support for a virtual Performance Monitoring Unit (PMU) in
 	  virtual machines.
 
-config KVM_INDIRECT_VECTORS
-       def_bool KVM && (HARDEN_BRANCH_PREDICTOR || HARDEN_EL2_VECTORS)
-
 source "drivers/vhost/Kconfig"
 
 endif # VIRTUALIZATION
diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
index 53c9344968d4..e02ddf40f113 100644
--- a/arch/arm64/kvm/hyp/hyp-entry.S
+++ b/arch/arm64/kvm/hyp/hyp-entry.S
@@ -272,7 +272,6 @@ ENTRY(__kvm_hyp_vector)
 	valid_vect	el1_error		// Error 32-bit EL1
 ENDPROC(__kvm_hyp_vector)
 
-#ifdef CONFIG_KVM_INDIRECT_VECTORS
 .macro hyp_ventry
 	.align 7
 1:	.rept 27
@@ -331,4 +330,3 @@ ENTRY(__smccc_workaround_1_smc_start)
 	ldp	x0, x1, [sp, #(8 * 2)]
 	add	sp, sp, #(8 * 4)
 ENTRY(__smccc_workaround_1_smc_end)
-#endif
-- 
2.17.2



* [PATCH v4 05/12] arm64: remove the ability to build a kernel without kpti
  2019-01-25 18:06 [PATCH v4 00/12] arm64: add system vulnerability sysfs entries Jeremy Linton
                   ` (3 preceding siblings ...)
  2019-01-25 18:07 ` [PATCH v4 04/12] arm64: remove the ability to build a kernel without hardened branch predictors Jeremy Linton
@ 2019-01-25 18:07 ` Jeremy Linton
  2019-01-30 18:05   ` Andre Przywara
  2019-01-25 18:07 ` [PATCH v4 06/12] arm64: add sysfs vulnerability show for spectre v1 Jeremy Linton
                   ` (7 subsequent siblings)
  12 siblings, 1 reply; 35+ messages in thread
From: Jeremy Linton @ 2019-01-25 18:07 UTC
  To: linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	dave.martin, shankerd, linux-kernel, ykaukab, julien.thierry,
	mlangsdo, steven.price, stefan.wahren, Jeremy Linton

Buried behind EXPERT is the ability to build a kernel without
kpti; this needlessly clutters up the code and creates the
opportunity for bugs. It also removes the kernel's ability
to determine whether the machine it's running on is vulnerable.

Since it's also possible to disable kpti at boot time, let's remove
the config option.

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
---
 arch/arm64/Kconfig              | 12 ------------
 arch/arm64/include/asm/fixmap.h |  2 --
 arch/arm64/include/asm/mmu.h    |  7 +------
 arch/arm64/include/asm/sdei.h   |  2 +-
 arch/arm64/kernel/asm-offsets.c |  2 --
 arch/arm64/kernel/cpufeature.c  |  4 ----
 arch/arm64/kernel/entry.S       | 11 +----------
 arch/arm64/kernel/sdei.c        |  2 --
 arch/arm64/kernel/vmlinux.lds.S |  8 --------
 arch/arm64/mm/context.c         |  6 ------
 arch/arm64/mm/mmu.c             |  2 --
 arch/arm64/mm/proc.S            |  2 --
 12 files changed, 3 insertions(+), 57 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 6b4c6d3fdf4d..09a85410d814 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -993,18 +993,6 @@ config FORCE_MAX_ZONEORDER
 	  However for 4K, we choose a higher default value, 11 as opposed to 10, giving us
 	  4M allocations matching the default size used by generic code.
 
-config UNMAP_KERNEL_AT_EL0
-	bool "Unmap kernel when running in userspace (aka \"KAISER\")" if EXPERT
-	default y
-	help
-	  Speculation attacks against some high-performance processors can
-	  be used to bypass MMU permission checks and leak kernel data to
-	  userspace. This can be defended against by unmapping the kernel
-	  when running in userspace, mapping it back in on exception entry
-	  via a trampoline page in the vector table.
-
-	  If unsure, say Y.
-
 config HARDEN_EL2_VECTORS
 	bool "Harden EL2 vector mapping against system register leak" if EXPERT
 	default y
diff --git a/arch/arm64/include/asm/fixmap.h b/arch/arm64/include/asm/fixmap.h
index ec1e6d6fa14c..62371f07d4ce 100644
--- a/arch/arm64/include/asm/fixmap.h
+++ b/arch/arm64/include/asm/fixmap.h
@@ -58,11 +58,9 @@ enum fixed_addresses {
 	FIX_APEI_GHES_NMI,
 #endif /* CONFIG_ACPI_APEI_GHES */
 
-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 	FIX_ENTRY_TRAMP_DATA,
 	FIX_ENTRY_TRAMP_TEXT,
 #define TRAMP_VALIAS		(__fix_to_virt(FIX_ENTRY_TRAMP_TEXT))
-#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
 	__end_of_permanent_fixed_addresses,
 
 	/*
diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 20fdf71f96c3..9d689661471c 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -42,18 +42,13 @@ typedef struct {
 
 static inline bool arm64_kernel_unmapped_at_el0(void)
 {
-	return IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0) &&
-	       cpus_have_const_cap(ARM64_UNMAP_KERNEL_AT_EL0);
+	return cpus_have_const_cap(ARM64_UNMAP_KERNEL_AT_EL0);
 }
 
 static inline bool arm64_kernel_use_ng_mappings(void)
 {
 	bool tx1_bug;
 
-	/* What's a kpti? Use global mappings if we don't know. */
-	if (!IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0))
-		return false;
-
 	/*
 	 * Note: this function is called before the CPU capabilities have
 	 * been configured, so our early mappings will be global. If we
diff --git a/arch/arm64/include/asm/sdei.h b/arch/arm64/include/asm/sdei.h
index ffe47d766c25..82c3e9b6a4b0 100644
--- a/arch/arm64/include/asm/sdei.h
+++ b/arch/arm64/include/asm/sdei.h
@@ -23,7 +23,7 @@ extern unsigned long sdei_exit_mode;
 asmlinkage void __sdei_asm_handler(unsigned long event_num, unsigned long arg,
 				   unsigned long pc, unsigned long pstate);
 
-/* and its CONFIG_UNMAP_KERNEL_AT_EL0 trampoline */
+/* and its unmap kernel at el0 trampoline */
 asmlinkage void __sdei_asm_entry_trampoline(unsigned long event_num,
 						   unsigned long arg,
 						   unsigned long pc,
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 65b8afc84466..6a6f83de91b8 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -165,9 +165,7 @@ int main(void)
   DEFINE(HIBERN_PBE_NEXT,	offsetof(struct pbe, next));
   DEFINE(ARM64_FTR_SYSVAL,	offsetof(struct arm64_ftr_reg, sys_val));
   BLANK();
-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
   DEFINE(TRAMP_VALIAS,		TRAMP_VALIAS);
-#endif
 #ifdef CONFIG_ARM_SDE_INTERFACE
   DEFINE(SDEI_EVENT_INTREGS,	offsetof(struct sdei_registered_event, interrupted_regs));
   DEFINE(SDEI_EVENT_PRIORITY,	offsetof(struct sdei_registered_event, priority));
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index d1a7fd7972f9..a9e18b9cdc1e 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -944,7 +944,6 @@ has_useable_cnp(const struct arm64_cpu_capabilities *entry, int scope)
 	return has_cpuid_feature(entry, scope);
 }
 
-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 static int __kpti_forced; /* 0: not forced, >0: forced on, <0: forced off */
 
 static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
@@ -1035,7 +1034,6 @@ static int __init parse_kpti(char *str)
 	return 0;
 }
 early_param("kpti", parse_kpti);
-#endif	/* CONFIG_UNMAP_KERNEL_AT_EL0 */
 
 #ifdef CONFIG_ARM64_HW_AFDBM
 static inline void __cpu_enable_hw_dbm(void)
@@ -1284,7 +1282,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.field_pos = ID_AA64PFR0_EL0_SHIFT,
 		.min_field_value = ID_AA64PFR0_EL0_32BIT_64BIT,
 	},
-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 	{
 		.desc = "Kernel page table isolation (KPTI)",
 		.capability = ARM64_UNMAP_KERNEL_AT_EL0,
@@ -1300,7 +1297,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.matches = unmap_kernel_at_el0,
 		.cpu_enable = kpti_install_ng_mappings,
 	},
-#endif
 	{
 		/* FP/SIMD is not implemented */
 		.capability = ARM64_HAS_NO_FPSIMD,
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 3f0eaaf704c8..1d8efc144b04 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -70,7 +70,6 @@
 
 	.macro kernel_ventry, el, label, regsize = 64
 	.align 7
-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 alternative_if ARM64_UNMAP_KERNEL_AT_EL0
 	.if	\el == 0
 	.if	\regsize == 64
@@ -81,7 +80,6 @@ alternative_if ARM64_UNMAP_KERNEL_AT_EL0
 	.endif
 	.endif
 alternative_else_nop_endif
-#endif
 
 	sub	sp, sp, #S_FRAME_SIZE
 #ifdef CONFIG_VMAP_STACK
@@ -345,7 +343,6 @@ alternative_else_nop_endif
 
 	.if	\el == 0
 alternative_insn eret, nop, ARM64_UNMAP_KERNEL_AT_EL0
-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 	bne	4f
 	msr	far_el1, x30
 	tramp_alias	x30, tramp_exit_native
@@ -353,7 +350,7 @@ alternative_insn eret, nop, ARM64_UNMAP_KERNEL_AT_EL0
 4:
 	tramp_alias	x30, tramp_exit_compat
 	br	x30
-#endif
+
 	.else
 	eret
 	.endif
@@ -913,7 +910,6 @@ ENDPROC(el0_svc)
 
 	.popsection				// .entry.text
 
-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 /*
  * Exception vectors trampoline.
  */
@@ -1023,7 +1019,6 @@ __entry_tramp_data_start:
 	.quad	vectors
 	.popsection				// .rodata
 #endif /* CONFIG_RANDOMIZE_BASE */
-#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
 
 /*
  * Register switch for AArch64. The callee-saved registers need to be saved
@@ -1086,7 +1081,6 @@ NOKPROBE(ret_from_fork)
 	b	.
 .endm
 
-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 /*
  * The regular SDEI entry point may have been unmapped along with the rest of
  * the kernel. This trampoline restores the kernel mapping to make the x1 memory
@@ -1146,7 +1140,6 @@ __sdei_asm_trampoline_next_handler:
 	.quad	__sdei_asm_handler
 .popsection		// .rodata
 #endif /* CONFIG_RANDOMIZE_BASE */
-#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
 
 /*
  * Software Delegated Exception entry point.
@@ -1240,10 +1233,8 @@ alternative_if_not ARM64_UNMAP_KERNEL_AT_EL0
 	sdei_handler_exit exit_mode=x2
 alternative_else_nop_endif
 
-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 	tramp_alias	dst=x5, sym=__sdei_asm_exit_trampoline
 	br	x5
-#endif
 ENDPROC(__sdei_asm_handler)
 NOKPROBE(__sdei_asm_handler)
 #endif /* CONFIG_ARM_SDE_INTERFACE */
diff --git a/arch/arm64/kernel/sdei.c b/arch/arm64/kernel/sdei.c
index 5ba4465e44f0..a0dbdb962019 100644
--- a/arch/arm64/kernel/sdei.c
+++ b/arch/arm64/kernel/sdei.c
@@ -157,7 +157,6 @@ unsigned long sdei_arch_get_entry_point(int conduit)
 
 	sdei_exit_mode = (conduit == CONDUIT_HVC) ? SDEI_EXIT_HVC : SDEI_EXIT_SMC;
 
-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 	if (arm64_kernel_unmapped_at_el0()) {
 		unsigned long offset;
 
@@ -165,7 +164,6 @@ unsigned long sdei_arch_get_entry_point(int conduit)
 			 (unsigned long)__entry_tramp_text_start;
 		return TRAMP_VALIAS + offset;
 	} else
-#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
 		return (unsigned long)__sdei_asm_handler;
 
 }
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 7fa008374907..a4dbee11bcb5 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -57,16 +57,12 @@ jiffies = jiffies_64;
 #define HIBERNATE_TEXT
 #endif
 
-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 #define TRAMP_TEXT					\
 	. = ALIGN(PAGE_SIZE);				\
 	__entry_tramp_text_start = .;			\
 	*(.entry.tramp.text)				\
 	. = ALIGN(PAGE_SIZE);				\
 	__entry_tramp_text_end = .;
-#else
-#define TRAMP_TEXT
-#endif
 
 /*
  * The size of the PE/COFF section that covers the kernel image, which
@@ -143,10 +139,8 @@ SECTIONS
 	idmap_pg_dir = .;
 	. += IDMAP_DIR_SIZE;
 
-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 	tramp_pg_dir = .;
 	. += PAGE_SIZE;
-#endif
 
 #ifdef CONFIG_ARM64_SW_TTBR0_PAN
 	reserved_ttbr0 = .;
@@ -257,10 +251,8 @@ ASSERT(__idmap_text_end - (__idmap_text_start & ~(SZ_4K - 1)) <= SZ_4K,
 ASSERT(__hibernate_exit_text_end - (__hibernate_exit_text_start & ~(SZ_4K - 1))
 	<= SZ_4K, "Hibernate exit text too big or misaligned")
 #endif
-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 ASSERT((__entry_tramp_text_end - __entry_tramp_text_start) == PAGE_SIZE,
 	"Entry trampoline text too big")
-#endif
 /*
  * If padding is applied before .head.text, virt<->phys conversions will fail.
  */
diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index 1f0ea2facf24..e99f3e645e06 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -40,15 +40,9 @@ static cpumask_t tlb_flush_pending;
 #define ASID_MASK		(~GENMASK(asid_bits - 1, 0))
 #define ASID_FIRST_VERSION	(1UL << asid_bits)
 
-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 #define NUM_USER_ASIDS		(ASID_FIRST_VERSION >> 1)
 #define asid2idx(asid)		(((asid) & ~ASID_MASK) >> 1)
 #define idx2asid(idx)		(((idx) << 1) & ~ASID_MASK)
-#else
-#define NUM_USER_ASIDS		(ASID_FIRST_VERSION)
-#define asid2idx(asid)		((asid) & ~ASID_MASK)
-#define idx2asid(idx)		asid2idx(idx)
-#endif
 
 /* Get the ASIDBits supported by the current CPU */
 static u32 get_cpu_asid_bits(void)
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index b6f5aa52ac67..97252baf4700 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -570,7 +570,6 @@ static int __init parse_rodata(char *arg)
 }
 early_param("rodata", parse_rodata);
 
-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 static int __init map_entry_trampoline(void)
 {
 	pgprot_t prot = rodata_enabled ? PAGE_KERNEL_ROX : PAGE_KERNEL_EXEC;
@@ -597,7 +596,6 @@ static int __init map_entry_trampoline(void)
 	return 0;
 }
 core_initcall(map_entry_trampoline);
-#endif
 
 /*
  * Create fine-grained mappings for the kernel.
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 73886a5f1f30..e9ca5cbb93bc 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -217,7 +217,6 @@ ENTRY(idmap_cpu_replace_ttbr1)
 ENDPROC(idmap_cpu_replace_ttbr1)
 	.popsection
 
-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 	.pushsection ".idmap.text", "awx"
 
 	.macro	__idmap_kpti_get_pgtable_ent, type
@@ -406,7 +405,6 @@ __idmap_kpti_secondary:
 	.unreq	pte
 ENDPROC(idmap_kpti_install_ng_mappings)
 	.popsection
-#endif
 
 /*
  *	__cpu_setup
-- 
2.17.2



* [PATCH v4 06/12] arm64: add sysfs vulnerability show for spectre v1
  2019-01-25 18:06 [PATCH v4 00/12] arm64: add system vulnerability sysfs entries Jeremy Linton
                   ` (4 preceding siblings ...)
  2019-01-25 18:07 ` [PATCH v4 05/12] arm64: remove the ability to build a kernel without kpti Jeremy Linton
@ 2019-01-25 18:07 ` Jeremy Linton
  2019-01-31 17:52   ` Andre Przywara
  2019-01-25 18:07 ` [PATCH v4 07/12] arm64: add sysfs vulnerability show for meltdown Jeremy Linton
                   ` (6 subsequent siblings)
  12 siblings, 1 reply; 35+ messages in thread
From: Jeremy Linton @ 2019-01-25 18:07 UTC
  To: linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	dave.martin, shankerd, linux-kernel, ykaukab, julien.thierry,
	mlangsdo, steven.price, stefan.wahren, Jeremy Linton

From: Mian Yousaf Kaukab <ykaukab@suse.de>

Spectre v1 has been mitigated, and the mitigation is
always active, so report that fixed state via the sysfs entry.

Signed-off-by: Mian Yousaf Kaukab <ykaukab@suse.de>
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
---
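Note: with this applied, reading the new entry always reports the
fixed string from the show function below:

  $ cat /sys/devices/system/cpu/vulnerabilities/spectre_v1
  Mitigation: __user pointer sanitization
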
 arch/arm64/kernel/cpu_errata.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index de09a3537cd4..ef636acf5604 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -730,3 +730,13 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 	{
 	}
 };
+
+#ifdef CONFIG_GENERIC_CPU_VULNERABILITIES
+
+ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr,
+		char *buf)
+{
+	return sprintf(buf, "Mitigation: __user pointer sanitization\n");
+}
+
+#endif
-- 
2.17.2



* [PATCH v4 07/12] arm64: add sysfs vulnerability show for meltdown
  2019-01-25 18:06 [PATCH v4 00/12] arm64: add system vulnerability sysfs entries Jeremy Linton
                   ` (5 preceding siblings ...)
  2019-01-25 18:07 ` [PATCH v4 06/12] arm64: add sysfs vulnerability show for spectre v1 Jeremy Linton
@ 2019-01-25 18:07 ` Jeremy Linton
  2019-01-31  9:28   ` Julien Thierry
  2019-01-31 17:54   ` Andre Przywara
  2019-01-25 18:07 ` [PATCH v4 08/12] arm64: Advertise mitigation of Spectre-v2, or lack thereof Jeremy Linton
                   ` (5 subsequent siblings)
  12 siblings, 2 replies; 35+ messages in thread
From: Jeremy Linton @ 2019-01-25 18:07 UTC
  To: linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	dave.martin, shankerd, linux-kernel, ykaukab, julien.thierry,
	mlangsdo, steven.price, stefan.wahren, Jeremy Linton

Display the mitigation status if active; otherwise
assume the CPU is safe unless it lacks CSV3
and isn't in our whitelist.

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
---
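Note: the three possible states, matching the show function below:

  $ cat /sys/devices/system/cpu/vulnerabilities/meltdown
  Mitigation: KPTI    (or "Not affected" / "Vulnerable")
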
 arch/arm64/kernel/cpufeature.c | 33 +++++++++++++++++++++++++++------
 1 file changed, 27 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index a9e18b9cdc1e..624dfe0b5cdd 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -944,6 +944,8 @@ has_useable_cnp(const struct arm64_cpu_capabilities *entry, int scope)
 	return has_cpuid_feature(entry, scope);
 }
 
+/* default value is invalid until unmap_kernel_at_el0() runs */
+static bool __meltdown_safe = true;
 static int __kpti_forced; /* 0: not forced, >0: forced on, <0: forced off */
 
 static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
@@ -962,6 +964,16 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
 		{ /* sentinel */ }
 	};
 	char const *str = "command line option";
+	bool meltdown_safe;
+
+	meltdown_safe = is_midr_in_range_list(read_cpuid_id(), kpti_safe_list);
+
+	/* Defer to CPU feature registers */
+	if (has_cpuid_feature(entry, scope))
+		meltdown_safe = true;
+
+	if (!meltdown_safe)
+		__meltdown_safe = false;
 
 	/*
 	 * For reasons that aren't entirely clear, enabling KPTI on Cavium
@@ -984,12 +996,7 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
 	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE))
 		return kaslr_offset() > 0;
 
-	/* Don't force KPTI for CPUs that are not vulnerable */
-	if (is_midr_in_range_list(read_cpuid_id(), kpti_safe_list))
-		return false;
-
-	/* Defer to CPU feature registers */
-	return !has_cpuid_feature(entry, scope);
+	return !meltdown_safe;
 }
 
 static void
@@ -2055,3 +2062,17 @@ static int __init enable_mrs_emulation(void)
 }
 
 core_initcall(enable_mrs_emulation);
+
+#ifdef CONFIG_GENERIC_CPU_VULNERABILITIES
+ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr,
+		char *buf)
+{
+	if (arm64_kernel_unmapped_at_el0())
+		return sprintf(buf, "Mitigation: KPTI\n");
+
+	if (__meltdown_safe)
+		return sprintf(buf, "Not affected\n");
+
+	return sprintf(buf, "Vulnerable\n");
+}
+#endif
-- 
2.17.2



* [PATCH v4 08/12] arm64: Advertise mitigation of Spectre-v2, or lack thereof
  2019-01-25 18:06 [PATCH v4 00/12] arm64: add system vulnerability sysfs entries Jeremy Linton
                   ` (6 preceding siblings ...)
  2019-01-25 18:07 ` [PATCH v4 07/12] arm64: add sysfs vulnerability show for meltdown Jeremy Linton
@ 2019-01-25 18:07 ` Jeremy Linton
  2019-01-31 17:54   ` Andre Przywara
  2019-01-25 18:07 ` [PATCH v4 09/12] arm64: Use firmware to detect CPUs that are not affected by Spectre-v2 Jeremy Linton
                   ` (4 subsequent siblings)
  12 siblings, 1 reply; 35+ messages in thread
From: Jeremy Linton @ 2019-01-25 18:07 UTC
  To: linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	dave.martin, shankerd, linux-kernel, ykaukab, julien.thierry,
	mlangsdo, steven.price, stefan.wahren, Jeremy Linton

From: Marc Zyngier <marc.zyngier@arm.com>

We currently have a list of CPUs affected by Spectre-v2, for which
we check that the firmware implements ARCH_WORKAROUND_1. It turns
out that not all firmware implements the required mitigation,
and that we fail to let the user know about it.

Instead, let's slightly revamp our checks and rely on a whitelist
of cores that are known to be non-vulnerable, and let the user know
the status of the mitigation in the kernel log.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
[This makes more sense in front of the sysfs patch]
[Pick pieces of that patch into this and move it earlier]
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
---
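Note: a stand-alone sketch of the resulting decision order (stub
helpers, not the literal kernel code; detect_harden_bp_fw()'s -1/0/1
convention is taken from the diff below):

	#include <stdio.h>
	#include <stdbool.h>

	/* stub for ID_AA64PFR0_EL1.CSV2 != 0 */
	static bool cpu_has_csv2(void) { return false; }
	/* stub for a MIDR match against spectre_v2_safe_list */
	static bool midr_in_safe_list(void) { return false; }
	/* stub SMCCC probe: -1 no workaround, 0 not required, 1 installed */
	static int detect_harden_bp_fw(void) { return 1; }

	static bool needs_bp_hardening(void)
	{
		int need_wa;

		if (cpu_has_csv2())		/* hardware declares itself safe */
			return false;
		if (midr_in_safe_list())	/* known-safe core */
			return false;

		need_wa = detect_harden_bp_fw();
		if (!need_wa)			/* firmware: not required */
			return false;
		if (need_wa < 0)		/* no firmware mitigation at all */
			fprintf(stderr, "ARM_SMCCC_ARCH_WORKAROUND_1 missing from firmware\n");
		return need_wa > 0;
	}

	int main(void)
	{
		printf("harden branch predictor: %d\n", needs_bp_hardening());
		return 0;
	}
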
 arch/arm64/kernel/cpu_errata.c | 104 +++++++++++++++++----------------
 1 file changed, 54 insertions(+), 50 deletions(-)

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index ef636acf5604..4d23b4d4cfa8 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -129,9 +129,9 @@ static void __copy_hyp_vect_bpi(int slot, const char *hyp_vecs_start,
 	__flush_icache_range((uintptr_t)dst, (uintptr_t)dst + SZ_2K);
 }
 
-static void __install_bp_hardening_cb(bp_hardening_cb_t fn,
-				      const char *hyp_vecs_start,
-				      const char *hyp_vecs_end)
+static void install_bp_hardening_cb(bp_hardening_cb_t fn,
+				    const char *hyp_vecs_start,
+				    const char *hyp_vecs_end)
 {
 	static DEFINE_RAW_SPINLOCK(bp_lock);
 	int cpu, slot = -1;
@@ -164,23 +164,6 @@ static void __install_bp_hardening_cb(bp_hardening_cb_t fn,
 	raw_spin_unlock(&bp_lock);
 }
 
-static void  install_bp_hardening_cb(const struct arm64_cpu_capabilities *entry,
-				     bp_hardening_cb_t fn,
-				     const char *hyp_vecs_start,
-				     const char *hyp_vecs_end)
-{
-	u64 pfr0;
-
-	if (!entry->matches(entry, SCOPE_LOCAL_CPU))
-		return;
-
-	pfr0 = read_cpuid(ID_AA64PFR0_EL1);
-	if (cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_CSV2_SHIFT))
-		return;
-
-	__install_bp_hardening_cb(fn, hyp_vecs_start, hyp_vecs_end);
-}
-
 #include <uapi/linux/psci.h>
 #include <linux/arm-smccc.h>
 #include <linux/psci.h>
@@ -215,29 +198,27 @@ static int __init parse_nospectre_v2(char *str)
 }
 early_param("nospectre_v2", parse_nospectre_v2);
 
-static void
-enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
+/*
+ * -1: No workaround
+ *  0: No workaround required
+ *  1: Workaround installed
+ */
+static int detect_harden_bp_fw(void)
 {
 	bp_hardening_cb_t cb;
 	void *smccc_start, *smccc_end;
 	struct arm_smccc_res res;
 	u32 midr = read_cpuid_id();
 
-	if (!entry->matches(entry, SCOPE_LOCAL_CPU))
-		return;
-
-	if (__nospectre_v2)
-		return;
-
 	if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
-		return;
+		return -1;
 
 	switch (psci_ops.conduit) {
 	case PSCI_CONDUIT_HVC:
 		arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
 				  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
 		if ((int)res.a0 < 0)
-			return;
+			return -1;
 		cb = call_hvc_arch_workaround_1;
 		/* This is a guest, no need to patch KVM vectors */
 		smccc_start = NULL;
@@ -248,23 +229,23 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
 		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
 				  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
 		if ((int)res.a0 < 0)
-			return;
+			return -1;
 		cb = call_smc_arch_workaround_1;
 		smccc_start = __smccc_workaround_1_smc_start;
 		smccc_end = __smccc_workaround_1_smc_end;
 		break;
 
 	default:
-		return;
+		return -1;
 	}
 
 	if (((midr & MIDR_CPU_MODEL_MASK) == MIDR_QCOM_FALKOR) ||
 	    ((midr & MIDR_CPU_MODEL_MASK) == MIDR_QCOM_FALKOR_V1))
 		cb = qcom_link_stack_sanitization;
 
-	install_bp_hardening_cb(entry, cb, smccc_start, smccc_end);
+	install_bp_hardening_cb(cb, smccc_start, smccc_end);
 
-	return;
+	return 1;
 }
 
 DEFINE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
@@ -502,24 +483,47 @@ cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
 	.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,			\
 	CAP_MIDR_RANGE_LIST(midr_list)
 
-
 /*
- * List of CPUs where we need to issue a psci call to
- * harden the branch predictor.
+ * List of CPUs that do not need any Spectre-v2 mitigation at all.
  */
-static const struct midr_range arm64_bp_harden_smccc_cpus[] = {
-	MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
-	MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
-	MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
-	MIDR_ALL_VERSIONS(MIDR_CORTEX_A75),
-	MIDR_ALL_VERSIONS(MIDR_BRCM_VULCAN),
-	MIDR_ALL_VERSIONS(MIDR_CAVIUM_THUNDERX2),
-	MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR_V1),
-	MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR),
-	MIDR_ALL_VERSIONS(MIDR_NVIDIA_DENVER),
-	{},
+static const struct midr_range spectre_v2_safe_list[] = {
+	MIDR_ALL_VERSIONS(MIDR_CORTEX_A35),
+	MIDR_ALL_VERSIONS(MIDR_CORTEX_A53),
+	MIDR_ALL_VERSIONS(MIDR_CORTEX_A55),
+	{ /* sentinel */ }
 };
 
+static bool __maybe_unused
+check_branch_predictor(const struct arm64_cpu_capabilities *entry, int scope)
+{
+	int need_wa;
+
+	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
+
+	/* If the CPU has CSV2 set, we're safe */
+	if (cpuid_feature_extract_unsigned_field(read_cpuid(ID_AA64PFR0_EL1),
+						 ID_AA64PFR0_CSV2_SHIFT))
+		return false;
+
+	/* Alternatively, we have a list of unaffected CPUs */
+	if (is_midr_in_range_list(read_cpuid_id(), spectre_v2_safe_list))
+		return false;
+
+	/* Fallback to firmware detection */
+	need_wa = detect_harden_bp_fw();
+	if (!need_wa)
+		return false;
+
+	if (need_wa < 0)
+		pr_warn_once("ARM_SMCCC_ARCH_WORKAROUND_1 missing from firmware\n");
+
+	/* forced off */
+	if (__nospectre_v2)
+		return false;
+
+	return (need_wa > 0);
+}
+
 #ifdef CONFIG_HARDEN_EL2_VECTORS
 
 static const struct midr_range arm64_harden_el2_vectors[] = {
@@ -695,8 +699,8 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 #endif
 	{
 		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
-		.cpu_enable = enable_smccc_arch_workaround_1,
-		ERRATA_MIDR_RANGE_LIST(arm64_bp_harden_smccc_cpus),
+		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
+		.matches = check_branch_predictor,
 	},
 #ifdef CONFIG_HARDEN_EL2_VECTORS
 	{
-- 
2.17.2



* [PATCH v4 09/12] arm64: Use firmware to detect CPUs that are not affected by Spectre-v2
  2019-01-25 18:06 [PATCH v4 00/12] arm64: add system vulnerability sysfs entries Jeremy Linton
                   ` (7 preceding siblings ...)
  2019-01-25 18:07 ` [PATCH v4 08/12] arm64: Advertise mitigation of Spectre-v2, or lack thereof Jeremy Linton
@ 2019-01-25 18:07 ` Jeremy Linton
  2019-01-31 17:55   ` Andre Przywara
  2019-01-25 18:07 ` [PATCH v4 10/12] arm64: add sysfs vulnerability show for spectre v2 Jeremy Linton
                   ` (3 subsequent siblings)
  12 siblings, 1 reply; 35+ messages in thread
From: Jeremy Linton @ 2019-01-25 18:07 UTC
  To: linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	dave.martin, shankerd, linux-kernel, ykaukab, julien.thierry,
	mlangsdo, steven.price, stefan.wahren, Jeremy Linton

From: Marc Zyngier <marc.zyngier@arm.com>

The SMCCC ARCH_WORKAROUND_1 service can indicate that although the
firmware knows about the Spectre-v2 mitigation, this particular
CPU is not vulnerable, and it is thus not necessary to call
the firmware on this CPU.

Let's use this information to our benefit.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
---
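Note: the a0 values handled in the switch statements below follow the
SMCCC ARCH_FEATURES convention for ARCH_WORKAROUND_1:

  a0 < 0 : workaround not implemented (treat as no workaround)
  a0 == 0: workaround implemented and required; invoke it
  a0 == 1: this CPU is not affected; no firmware call needed
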
 arch/arm64/kernel/cpu_errata.c | 32 +++++++++++++++++++++++---------
 1 file changed, 23 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 4d23b4d4cfa8..024c83ffff99 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -217,22 +217,36 @@ static int detect_harden_bp_fw(void)
 	case PSCI_CONDUIT_HVC:
 		arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
 				  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
-		if ((int)res.a0 < 0)
+		switch ((int)res.a0) {
+		case 1:
+			/* Firmware says we're just fine */
+			return 0;
+		case 0:
+			cb = call_hvc_arch_workaround_1;
+			/* This is a guest, no need to patch KVM vectors */
+			smccc_start = NULL;
+			smccc_end = NULL;
+			break;
+		default:
 			return -1;
-		cb = call_hvc_arch_workaround_1;
-		/* This is a guest, no need to patch KVM vectors */
-		smccc_start = NULL;
-		smccc_end = NULL;
+		}
 		break;
 
 	case PSCI_CONDUIT_SMC:
 		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
 				  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
-		if ((int)res.a0 < 0)
+		switch ((int)res.a0) {
+		case 1:
+			/* Firmware says we're just fine */
+			return 0;
+		case 0:
+			cb = call_smc_arch_workaround_1;
+			smccc_start = __smccc_workaround_1_smc_start;
+			smccc_end = __smccc_workaround_1_smc_end;
+			break;
+		default:
 			return -1;
-		cb = call_smc_arch_workaround_1;
-		smccc_start = __smccc_workaround_1_smc_start;
-		smccc_end = __smccc_workaround_1_smc_end;
+		}
 		break;
 
 	default:
-- 
2.17.2



* [PATCH v4 10/12] arm64: add sysfs vulnerability show for spectre v2
  2019-01-25 18:06 [PATCH v4 00/12] arm64: add system vulnerability sysfs entries Jeremy Linton
                   ` (8 preceding siblings ...)
  2019-01-25 18:07 ` [PATCH v4 09/12] arm64: Use firmware to detect CPUs that are not affected by Spectre-v2 Jeremy Linton
@ 2019-01-25 18:07 ` Jeremy Linton
  2019-01-31 17:55   ` Andre Przywara
  2019-01-25 18:07 ` [PATCH v4 11/12] arm64: add sysfs vulnerability show for speculative store bypass Jeremy Linton
                   ` (2 subsequent siblings)
  12 siblings, 1 reply; 35+ messages in thread
From: Jeremy Linton @ 2019-01-25 18:07 UTC
  To: linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	dave.martin, shankerd, linux-kernel, ykaukab, julien.thierry,
	mlangsdo, steven.price, stefan.wahren, Jeremy Linton

Add code to track whether all the cores in the machine are
vulnerable and whether all the vulnerable cores have been
mitigated.

Once we have that information, we can add the sysfs stub and
provide an accurate view of what is known about the machine.

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
---
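Note: the resulting sysfs states, in order of preference, are:

  $ cat /sys/devices/system/cpu/vulnerabilities/spectre_v2
  Not affected                              (all booted cores safe)
  Mitigation: Branch predictor hardening    (all vulnerable cores mitigated)
  Vulnerable                                (some core lacks a mitigation)
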
 arch/arm64/kernel/cpu_errata.c | 31 +++++++++++++++++++++++++++++--
 1 file changed, 29 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 024c83ffff99..caedf268c972 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -497,6 +497,10 @@ cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
 	.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,			\
 	CAP_MIDR_RANGE_LIST(midr_list)
 
+/* Track overall mitigation state. We are only mitigated if all cores are ok */
+static bool __hardenbp_enab = true;
+static bool __spectrev2_safe = true;
+
 /*
  * List of CPUs that do not need any Spectre-v2 mitigation at all.
  */
@@ -507,6 +511,10 @@ static const struct midr_range spectre_v2_safe_list[] = {
 	{ /* sentinel */ }
 };
 
+/*
+ * Track overall bp hardening for all heterogeneous cores in the machine.
+ * We are only considered "safe" if all booted cores are known safe.
+ */
 static bool __maybe_unused
 check_branch_predictor(const struct arm64_cpu_capabilities *entry, int scope)
 {
@@ -528,12 +536,19 @@ check_branch_predictor(const struct arm64_cpu_capabilities *entry, int scope)
 	if (!need_wa)
 		return false;
 
-	if (need_wa < 0)
+	__spectrev2_safe = false;
+
+	if (need_wa < 0) {
 		pr_warn_once("ARM_SMCCC_ARCH_WORKAROUND_1 missing from firmware\n");
+		__hardenbp_enab = false;
+	}
 
 	/* forced off */
-	if (__nospectre_v2)
+	if (__nospectre_v2) {
+		pr_info_once("spectrev2 mitigation disabled by command line option\n");
+		__hardenbp_enab = false;
 		return false;
+	}
 
 	return (need_wa > 0);
 }
@@ -757,4 +772,16 @@ ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr,
 	return sprintf(buf, "Mitigation: __user pointer sanitization\n");
 }
 
+ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr,
+		char *buf)
+{
+	if (__spectrev2_safe)
+		return sprintf(buf, "Not affected\n");
+
+	if (__hardenbp_enab)
+		return sprintf(buf, "Mitigation: Branch predictor hardening\n");
+
+	return sprintf(buf, "Vulnerable\n");
+}
+
 #endif
-- 
2.17.2


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH v4 11/12] arm64: add sysfs vulnerability show for speculative store bypass
  2019-01-25 18:06 [PATCH v4 00/12] arm64: add system vulnerability sysfs entries Jeremy Linton
                   ` (9 preceding siblings ...)
  2019-01-25 18:07 ` [PATCH v4 10/12] arm64: add sysfs vulnerability show for spectre v2 Jeremy Linton
@ 2019-01-25 18:07 ` Jeremy Linton
  2019-01-31 17:55   ` Andre Przywara
  2019-01-25 18:07 ` [PATCH v4 12/12] arm64: enable generic CPU vulnerabilities support Jeremy Linton
  2019-02-08 20:05 ` [PATCH v4 00/12] arm64: add system vulnerability sysfs entries Stefan Wahren
  12 siblings, 1 reply; 35+ messages in thread
From: Jeremy Linton @ 2019-01-25 18:07 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	dave.martin, shankerd, linux-kernel, ykaukab, julien.thierry,
	mlangsdo, steven.price, stefan.wahren, Jeremy Linton

Return the status based on ssbd_state and the arm64 SSBS feature. If
the mitigation is disabled, or the firmware isn't responding, then
return the expected machine state based on a new blacklist of known
vulnerable cores.
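
As an illustration (hypothetical output; the strings match those
returned by cpu_show_spec_store_bypass() below), a vulnerable machine
running with the kernel-managed mitigation would report:

  $ cat /sys/devices/system/cpu/vulnerabilities/spec_store_bypass
  Mitigation: Speculative Store Bypass disabled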

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
---
 arch/arm64/kernel/cpu_errata.c | 45 ++++++++++++++++++++++++++++++++++
 1 file changed, 45 insertions(+)

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index caedf268c972..e9ae8e5fd7e1 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -265,6 +265,7 @@ static int detect_harden_bp_fw(void)
 DEFINE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
 
 int ssbd_state __read_mostly = ARM64_SSBD_KERNEL;
+static bool __ssb_safe = true;
 
 static const struct ssbd_options {
 	const char	*str;
@@ -362,10 +363,16 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
 {
 	struct arm_smccc_res res;
 	bool required = true;
+	bool is_vul;
 	s32 val;
 
 	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
 
+	is_vul = is_midr_in_range_list(read_cpuid_id(), entry->midr_range_list);
+
+	if (is_vul)
+		__ssb_safe = false;
+
 	if (this_cpu_has_cap(ARM64_SSBS)) {
 		required = false;
 		goto out_printmsg;
@@ -399,6 +406,7 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
 		ssbd_state = ARM64_SSBD_UNKNOWN;
 		return false;
 
+	/* machines with mixed mitigation requirements must not return this */
 	case SMCCC_RET_NOT_REQUIRED:
 		pr_info_once("%s mitigation not required\n", entry->desc);
 		ssbd_state = ARM64_SSBD_MITIGATED;
@@ -454,6 +462,16 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
 	return required;
 }
 
+/* known vulnerable cores */
+static const struct midr_range arm64_ssb_cpus[] = {
+	MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
+	MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
+	MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
+	MIDR_ALL_VERSIONS(MIDR_CORTEX_A75),
+	MIDR_ALL_VERSIONS(MIDR_CORTEX_A76),
+	{},
+};
+
 static void __maybe_unused
 cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
 {
@@ -743,6 +761,7 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 		.capability = ARM64_SSBD,
 		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
 		.matches = has_ssbd_mitigation,
+		.midr_range_list = arm64_ssb_cpus,
 	},
 #ifdef CONFIG_ARM64_ERRATUM_1188873
 	{
@@ -784,4 +803,30 @@ ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr,
 	return sprintf(buf, "Vulnerable\n");
 }
 
+ssize_t cpu_show_spec_store_bypass(struct device *dev,
+		struct device_attribute *attr, char *buf)
+{
+	/*
+	 * Two assumptions: first, that arm64_get_ssbd_state() reflects
+	 * the worst case for heterogeneous machines; second, that if
+	 * SSBS is supported, it's supported by all cores.
+	 */
+	switch (arm64_get_ssbd_state()) {
+	case ARM64_SSBD_MITIGATED:
+		return sprintf(buf, "Not affected\n");
+
+	case ARM64_SSBD_KERNEL:
+	case ARM64_SSBD_FORCE_ENABLE:
+		if (cpus_have_cap(ARM64_SSBS))
+			return sprintf(buf, "Not affected\n");
+		return sprintf(buf,
+			"Mitigation: Speculative Store Bypass disabled\n");
+	}
+
+	if (__ssb_safe)
+		return sprintf(buf, "Not affected\n");
+
+	return sprintf(buf, "Vulnerable\n");
+}
+
 #endif
-- 
2.17.2


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH v4 12/12] arm64: enable generic CPU vulnerabilities support
  2019-01-25 18:06 [PATCH v4 00/12] arm64: add system vulnerability sysfs entries Jeremy Linton
                   ` (10 preceding siblings ...)
  2019-01-25 18:07 ` [PATCH v4 11/12] arm64: add sysfs vulnerability show for speculative store bypass Jeremy Linton
@ 2019-01-25 18:07 ` Jeremy Linton
  2019-01-31 17:56   ` Andre Przywara
  2019-02-08 20:05 ` [PATCH v4 00/12] arm64: add system vulnerability sysfs entries Stefan Wahren
  12 siblings, 1 reply; 35+ messages in thread
From: Jeremy Linton @ 2019-01-25 18:07 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	dave.martin, shankerd, linux-kernel, ykaukab, julien.thierry,
	mlangsdo, steven.price, stefan.wahren, Jeremy Linton

From: Mian Yousaf Kaukab <ykaukab@suse.de>

Enable the CPU vulnerability show functions for spectre_v1, spectre_v2,
meltdown and store-bypass.
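
With this selected, the entries show up under the standard sysfs
location. Illustrative transcript (the directory may contain
additional entries depending on the kernel version):

  $ ls /sys/devices/system/cpu/vulnerabilities
  meltdown  spec_store_bypass  spectre_v1  spectre_v2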

Signed-off-by: Mian Yousaf Kaukab <ykaukab@suse.de>
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
---
 arch/arm64/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 09a85410d814..36a7cfbbfbb3 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -88,6 +88,7 @@ config ARM64
 	select GENERIC_CLOCKEVENTS
 	select GENERIC_CLOCKEVENTS_BROADCAST
 	select GENERIC_CPU_AUTOPROBE
+	select GENERIC_CPU_VULNERABILITIES
 	select GENERIC_EARLY_IOREMAP
 	select GENERIC_IDLE_POLL_SETUP
 	select GENERIC_IRQ_MULTI_HANDLER
-- 
2.17.2


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* Re: [PATCH v4 01/12] Documentation: Document arm64 kpti control
  2019-01-25 18:07 ` [PATCH v4 01/12] Documentation: Document arm64 kpti control Jeremy Linton
@ 2019-01-30 18:02   ` Andre Przywara
  2019-02-06 19:24     ` Jeremy Linton
  2019-01-31 17:58   ` Andre Przywara
  2019-02-07  0:25   ` Jonathan Corbet
  2 siblings, 1 reply; 35+ messages in thread
From: Andre Przywara @ 2019-01-30 18:02 UTC (permalink / raw)
  To: Jeremy Linton
  Cc: linux-arm-kernel, stefan.wahren, Jonathan Corbet, mlangsdo,
	linux-doc, suzuki.poulose, marc.zyngier, catalin.marinas,
	julien.thierry, will.deacon, linux-kernel, steven.price, ykaukab,
	dave.martin, shankerd

On Fri, 25 Jan 2019 12:07:00 -0600
Jeremy Linton <jeremy.linton@arm.com> wrote:

Hi,

> For a while Arm64 has been capable of force enabling
> or disabling the kpti mitigations. Let's make sure the
> documentation reflects that.
> 
> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
> Cc: Jonathan Corbet <corbet@lwn.net>
> Cc: linux-doc@vger.kernel.org
> ---
>  Documentation/admin-guide/kernel-parameters.txt | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> index b799bcf67d7b..9475f02c79da 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -1982,6 +1982,12 @@
>  			Built with CONFIG_DEBUG_KMEMLEAK_DEFAULT_OFF=y,
>  			the default is off.
>  
> +	kpti=		[ARM64] Control page table isolation of user
> +			and kernel address spaces.
> +			Default: enabled on cores which need mitigation.

Would this be a good place to mention that we enable it when
CONFIG_RANDOMIZE_BASE is enabled and we have a valid kaslr_offset? I
found this somewhat surprising; also, it's unrelated to the
vulnerability.

Cheers,
Andre

> +			0: force disabled
> +			1: force enabled
> +
>  	kvm.ignore_msrs=[KVM] Ignore guest accesses to unhandled
> MSRs. Default is 0 (don't ignore, but inject #GP)
>  


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v4 02/12] arm64: Provide a command line to disable spectre_v2 mitigation
  2019-01-25 18:07 ` [PATCH v4 02/12] arm64: Provide a command line to disable spectre_v2 mitigation Jeremy Linton
@ 2019-01-30 18:03   ` Andre Przywara
  0 siblings, 0 replies; 35+ messages in thread
From: Andre Przywara @ 2019-01-30 18:03 UTC (permalink / raw)
  To: Jeremy Linton
  Cc: linux-arm-kernel, stefan.wahren, Jonathan Corbet, mlangsdo,
	linux-doc, suzuki.poulose, marc.zyngier, catalin.marinas,
	julien.thierry, will.deacon, linux-kernel, steven.price, ykaukab,
	dave.martin, shankerd

On Fri, 25 Jan 2019 12:07:01 -0600
Jeremy Linton <jeremy.linton@arm.com> wrote:

Hi,

> There are various reasons, including benchmarking, to disable spectrev2
> mitigation on a machine. Provide a command-line option to do so.
> 
> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
> Cc: Jonathan Corbet <corbet@lwn.net>
> Cc: linux-doc@vger.kernel.org

Reviewed-by: Andre Przywara <andre.przywara@arm.com>

Cheers,
Andre

> ---
>  Documentation/admin-guide/kernel-parameters.txt |  8 ++++----
>  arch/arm64/kernel/cpu_errata.c                  | 11 +++++++++++
>  2 files changed, 15 insertions(+), 4 deletions(-)
> 
> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> index 9475f02c79da..2ae77979488e 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -2849,10 +2849,10 @@
>  			check bypass). With this option data leaks are possible
>  			in the system.
>  
> -	nospectre_v2	[X86,PPC_FSL_BOOK3E] Disable all mitigations for the Spectre variant 2
> -			(indirect branch prediction) vulnerability. System may
> -			allow data leaks with this option, which is equivalent
> -			to spectre_v2=off.
> +	nospectre_v2	[X86,PPC_FSL_BOOK3E,ARM64] Disable all mitigations for
> +			the Spectre variant 2 (indirect branch prediction)
> +			vulnerability. System may allow data leaks with this
> +			option.
>  
>  	nospec_store_bypass_disable
>  			[HW] Disable all mitigations for the Speculative
>  			Store Bypass vulnerability
> 
> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
> index 9950bb0cbd52..9a7b5fca51a0 100644
> --- a/arch/arm64/kernel/cpu_errata.c
> +++ b/arch/arm64/kernel/cpu_errata.c
> @@ -220,6 +220,14 @@ static void qcom_link_stack_sanitization(void)
>  		     : "=&r" (tmp));
>  }
>  
> +static bool __nospectre_v2;
> +static int __init parse_nospectre_v2(char *str)
> +{
> +	__nospectre_v2 = true;
> +	return 0;
> +}
> +early_param("nospectre_v2", parse_nospectre_v2);
> +
>  static void
>  enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
>  {
> @@ -231,6 +239,9 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
>  	if (!entry->matches(entry, SCOPE_LOCAL_CPU))
>  		return;
>  
> +	if (__nospectre_v2)
> +		return;
> +
>  	if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
>  		return;
>  


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v4 03/12] arm64: Remove the ability to build a kernel without ssbd
  2019-01-25 18:07 ` [PATCH v4 03/12] arm64: Remove the ability to build a kernel without ssbd Jeremy Linton
@ 2019-01-30 18:04   ` Andre Przywara
  2019-02-15 18:20     ` Catalin Marinas
  0 siblings, 1 reply; 35+ messages in thread
From: Andre Przywara @ 2019-01-30 18:04 UTC (permalink / raw)
  To: Jeremy Linton
  Cc: linux-arm-kernel, stefan.wahren, mlangsdo, suzuki.poulose,
	marc.zyngier, catalin.marinas, julien.thierry, will.deacon,
	linux-kernel, steven.price, Christoffer Dall, kvmarm, ykaukab,
	dave.martin, shankerd

On Fri, 25 Jan 2019 12:07:02 -0600
Jeremy Linton <jeremy.linton@arm.com> wrote:

Hi,

> Buried behind EXPERT is the ability to build a kernel without
> SSBD. This needlessly clutters up the code and creates
> the opportunity for bugs. It also removes the kernel's ability
> to determine whether the machine it's running on is vulnerable.

I don't know the original motivation for this config option; typically
they are not around for no reason.
I see the benefit of dropping those config options, but we want to make
sure that people don't start hacking around to remove them again.

> Since it's also possible to disable it at boot time, let's remove
> the config option.

Given the level of optimisation a compiler can do with the state being
known at compile time, I would imagine that it's not the same (though
probably very close).

But that's not my call, it would be good to hear some maintainer's
opinion on this.

Apart from the nit mentioned below, the technical part looks correct to
me (also compile tested).

> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
> Cc: Christoffer Dall <christoffer.dall@arm.com>
> Cc: kvmarm@lists.cs.columbia.edu
> ---
>  arch/arm64/Kconfig                  | 9 ---------
>  arch/arm64/include/asm/cpufeature.h | 8 --------
>  arch/arm64/include/asm/kvm_mmu.h    | 7 -------
>  arch/arm64/kernel/Makefile          | 3 +--
>  arch/arm64/kernel/cpu_errata.c      | 4 ----
>  arch/arm64/kernel/cpufeature.c      | 4 ----
>  arch/arm64/kernel/entry.S           | 2 --
>  arch/arm64/kvm/hyp/hyp-entry.S      | 2 --
>  arch/arm64/kvm/hyp/switch.c         | 4 ----
>  9 files changed, 1 insertion(+), 42 deletions(-)
> 
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index a4168d366127..0baa632bf0a8 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -1038,15 +1038,6 @@ config HARDEN_EL2_VECTORS
>  
>  	  If unsure, say Y.
>  
> -config ARM64_SSBD
> -	bool "Speculative Store Bypass Disable" if EXPERT
> -	default y
> -	help
> -	  This enables mitigation of the bypassing of previous stores
> -	  by speculative loads.
> -
> -	  If unsure, say Y.
> -
>  config RODATA_FULL_DEFAULT_ENABLED
>  	bool "Apply r/o permissions of VM areas also to their linear
> aliases" default y
> diff --git a/arch/arm64/include/asm/cpufeature.h
> b/arch/arm64/include/asm/cpufeature.h index
> dfcfba725d72..bbed2067a1a4 100644 ---
> a/arch/arm64/include/asm/cpufeature.h +++
> b/arch/arm64/include/asm/cpufeature.h @@ -620,19 +620,11 @@ static
> inline bool system_supports_generic_auth(void) 
>  static inline int arm64_get_ssbd_state(void)
>  {
> -#ifdef CONFIG_ARM64_SSBD
>  	extern int ssbd_state;

Wouldn't this be a good opportunity to move this declaration outside of
this function, so that it looks less awkward?

Cheers,
Andre.

>  	return ssbd_state;
> -#else
> -	return ARM64_SSBD_UNKNOWN;
> -#endif
>  }
>  
> -#ifdef CONFIG_ARM64_SSBD
>  void arm64_set_ssbd_mitigation(bool state);
> -#else
> -static inline void arm64_set_ssbd_mitigation(bool state) {}
> -#endif
>  
>  extern int do_emulate_mrs(struct pt_regs *regs, u32 sys_reg, u32 rt);
>  
> diff --git a/arch/arm64/include/asm/kvm_mmu.h
> b/arch/arm64/include/asm/kvm_mmu.h index 8af4b1befa42..a5c152d79820
> 100644 --- a/arch/arm64/include/asm/kvm_mmu.h
> +++ b/arch/arm64/include/asm/kvm_mmu.h
> @@ -541,7 +541,6 @@ static inline int kvm_map_vectors(void)
>  }
>  #endif
>  
> -#ifdef CONFIG_ARM64_SSBD
>  DECLARE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
>  
>  static inline int hyp_map_aux_data(void)
> @@ -558,12 +557,6 @@ static inline int hyp_map_aux_data(void)
>  	}
>  	return 0;
>  }
> -#else
> -static inline int hyp_map_aux_data(void)
> -{
> -	return 0;
> -}
> -#endif
>  
>  #define kvm_phys_to_vttbr(addr)		phys_to_ttbr(addr)
>  
> diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
> index cd434d0719c1..306336a2fa34 100644
> --- a/arch/arm64/kernel/Makefile
> +++ b/arch/arm64/kernel/Makefile
> @@ -19,7 +19,7 @@ obj-y			:= debug-monitors.o
> entry.o irq.o fpsimd.o		\ return_address.o cpuinfo.o
> cpu_errata.o		\ cpufeature.o alternative.o
> cacheinfo.o		\ smp.o smp_spin_table.o topology.o
> smccc-call.o	\
> -			   syscall.o
> +			   syscall.o ssbd.o
>  
>  extra-$(CONFIG_EFI)			:= efi-entry.o
>  
> @@ -57,7 +57,6 @@ arm64-reloc-test-y := reloc_test_core.o
> reloc_test_syms.o obj-$(CONFIG_CRASH_DUMP)		+=
> crash_dump.o obj-$(CONFIG_CRASH_CORE)		+= crash_core.o
>  obj-$(CONFIG_ARM_SDE_INTERFACE)		+= sdei.o
> -obj-$(CONFIG_ARM64_SSBD)		+= ssbd.o
>  obj-$(CONFIG_ARM64_PTR_AUTH)		+= pointer_auth.o
>  
>  obj-y					+= vdso/ probes/
> diff --git a/arch/arm64/kernel/cpu_errata.c
> b/arch/arm64/kernel/cpu_errata.c index 9a7b5fca51a0..934d50788ca3
> 100644 --- a/arch/arm64/kernel/cpu_errata.c
> +++ b/arch/arm64/kernel/cpu_errata.c
> @@ -281,7 +281,6 @@ enable_smccc_arch_workaround_1(const struct
> arm64_cpu_capabilities *entry) }
>  #endif	/* CONFIG_HARDEN_BRANCH_PREDICTOR */
>  
> -#ifdef CONFIG_ARM64_SSBD
>  DEFINE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
>  
>  int ssbd_state __read_mostly = ARM64_SSBD_KERNEL;
> @@ -473,7 +472,6 @@ static bool has_ssbd_mitigation(const struct
> arm64_cpu_capabilities *entry, 
>  	return required;
>  }
> -#endif	/* CONFIG_ARM64_SSBD */
>  
>  static void __maybe_unused
>  cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities
> *__unused) @@ -726,14 +724,12 @@ const struct arm64_cpu_capabilities
> arm64_errata[] = { ERRATA_MIDR_RANGE_LIST(arm64_harden_el2_vectors),
>  	},
>  #endif
> -#ifdef CONFIG_ARM64_SSBD
>  	{
>  		.desc = "Speculative Store Bypass Disable",
>  		.capability = ARM64_SSBD,
>  		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
>  		.matches = has_ssbd_mitigation,
>  	},
> -#endif
>  #ifdef CONFIG_ARM64_ERRATUM_1188873
>  	{
>  		/* Cortex-A76 r0p0 to r2p0 */
> diff --git a/arch/arm64/kernel/cpufeature.c
> b/arch/arm64/kernel/cpufeature.c index f6d84e2c92fe..d1a7fd7972f9
> 100644 --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -1131,7 +1131,6 @@ static void cpu_has_fwb(const struct
> arm64_cpu_capabilities *__unused) WARN_ON(val & (7 << 27 | 7 << 21));
>  }
>  
> -#ifdef CONFIG_ARM64_SSBD
>  static int ssbs_emulation_handler(struct pt_regs *regs, u32 instr)
>  {
>  	if (user_mode(regs))
> @@ -1171,7 +1170,6 @@ static void cpu_enable_ssbs(const struct
> arm64_cpu_capabilities *__unused) arm64_set_ssbd_mitigation(true);
>  	}
>  }
> -#endif /* CONFIG_ARM64_SSBD */
>  
>  #ifdef CONFIG_ARM64_PAN
>  static void cpu_enable_pan(const struct arm64_cpu_capabilities
> *__unused) @@ -1400,7 +1398,6 @@ static const struct
> arm64_cpu_capabilities arm64_features[] = { .field_pos =
> ID_AA64ISAR0_CRC32_SHIFT, .min_field_value = 1,
>  	},
> -#ifdef CONFIG_ARM64_SSBD
>  	{
>  		.desc = "Speculative Store Bypassing Safe (SSBS)",
>  		.capability = ARM64_SSBS,
> @@ -1412,7 +1409,6 @@ static const struct arm64_cpu_capabilities
> arm64_features[] = { .min_field_value = ID_AA64PFR1_SSBS_PSTATE_ONLY,
>  		.cpu_enable = cpu_enable_ssbs,
>  	},
> -#endif
>  #ifdef CONFIG_ARM64_CNP
>  	{
>  		.desc = "Common not Private translations",
> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index 0ec0c46b2c0c..bee54b7d17b9 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -137,7 +137,6 @@ alternative_else_nop_endif
>  	// This macro corrupts x0-x3. It is the caller's duty
>  	// to save/restore them if required.
>  	.macro	apply_ssbd, state, tmp1, tmp2
> -#ifdef CONFIG_ARM64_SSBD
>  alternative_cb	arm64_enable_wa2_handling
>  	b	.L__asm_ssbd_skip\@
>  alternative_cb_end
> @@ -151,7 +150,6 @@ alternative_cb	arm64_update_smccc_conduit
>  	nop					// Patched to
> SMC/HVC #0 alternative_cb_end
>  .L__asm_ssbd_skip\@:
> -#endif
>  	.endm
>  
>  	.macro	kernel_entry, el, regsize = 64
> diff --git a/arch/arm64/kvm/hyp/hyp-entry.S
> b/arch/arm64/kvm/hyp/hyp-entry.S index 73c1b483ec39..53c9344968d4
> 100644 --- a/arch/arm64/kvm/hyp/hyp-entry.S
> +++ b/arch/arm64/kvm/hyp/hyp-entry.S
> @@ -114,7 +114,6 @@ el1_hvc_guest:
>  			  ARM_SMCCC_ARCH_WORKAROUND_2)
>  	cbnz	w1, el1_trap
>  
> -#ifdef CONFIG_ARM64_SSBD
>  alternative_cb	arm64_enable_wa2_handling
>  	b	wa2_end
>  alternative_cb_end
> @@ -141,7 +140,6 @@ alternative_cb_end
>  wa2_end:
>  	mov	x2, xzr
>  	mov	x1, xzr
> -#endif
>  
>  wa_epilogue:
>  	mov	x0, xzr
> diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
> index b0b1478094b4..9ce43ae6fc13 100644
> --- a/arch/arm64/kvm/hyp/switch.c
> +++ b/arch/arm64/kvm/hyp/switch.c
> @@ -436,7 +436,6 @@ static inline bool __hyp_text
> __needs_ssbd_off(struct kvm_vcpu *vcpu) 
>  static void __hyp_text __set_guest_arch_workaround_state(struct
> kvm_vcpu *vcpu) {
> -#ifdef CONFIG_ARM64_SSBD
>  	/*
>  	 * The host runs with the workaround always present. If the
>  	 * guest wants it disabled, so be it...
> @@ -444,19 +443,16 @@ static void __hyp_text
> __set_guest_arch_workaround_state(struct kvm_vcpu *vcpu) if
> (__needs_ssbd_off(vcpu) &&
> __hyp_this_cpu_read(arm64_ssbd_callback_required))
> arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2, 0, NULL); -#endif
>  }
>  
>  static void __hyp_text __set_host_arch_workaround_state(struct
> kvm_vcpu *vcpu) {
> -#ifdef CONFIG_ARM64_SSBD
>  	/*
>  	 * If the guest has disabled the workaround, bring it back
> on. */
>  	if (__needs_ssbd_off(vcpu) &&
>  	    __hyp_this_cpu_read(arm64_ssbd_callback_required))
>  		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2, 1,
> NULL); -#endif
>  }
>  
>  /* Switch to the guest for VHE systems running in EL2 */


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v4 04/12] arm64: remove the ability to build a kernel without hardened branch predictors
  2019-01-25 18:07 ` [PATCH v4 04/12] arm64: remove the ability to build a kernel without hardened branch predictors Jeremy Linton
@ 2019-01-30 18:04   ` Andre Przywara
  0 siblings, 0 replies; 35+ messages in thread
From: Andre Przywara @ 2019-01-30 18:04 UTC (permalink / raw)
  To: Jeremy Linton
  Cc: linux-arm-kernel, stefan.wahren, mlangsdo, suzuki.poulose,
	marc.zyngier, catalin.marinas, julien.thierry, will.deacon,
	linux-kernel, steven.price, Christoffer Dall, kvmarm, ykaukab,
	dave.martin, shankerd

On Fri, 25 Jan 2019 12:07:03 -0600
Jeremy Linton <jeremy.linton@arm.com> wrote:

> Buried behind EXPERT is the ability to build a kernel without
> hardened branch predictors. This needlessly clutters up the
> code and creates the opportunity for bugs. It also
> removes the kernel's ability to determine whether the machine it's
> running on is vulnerable.
> 
> Since it's also possible to disable it at boot time, let's remove
> the config option.

Same comment as before about removing the CONFIG_ options here.

> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
> Cc: Christoffer Dall <christoffer.dall@arm.com>
> Cc: kvmarm@lists.cs.columbia.edu
> ---
>  arch/arm64/Kconfig               | 17 -----------------
>  arch/arm64/include/asm/kvm_mmu.h | 12 ------------
>  arch/arm64/include/asm/mmu.h     | 12 ------------
>  arch/arm64/kernel/cpu_errata.c   | 19 -------------------
>  arch/arm64/kernel/entry.S        |  2 --
>  arch/arm64/kvm/Kconfig           |  3 ---
>  arch/arm64/kvm/hyp/hyp-entry.S   |  2 --
>  7 files changed, 67 deletions(-)
> 
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 0baa632bf0a8..6b4c6d3fdf4d 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -1005,23 +1005,6 @@ config UNMAP_KERNEL_AT_EL0
>  
>  	  If unsure, say Y.
>  
> -config HARDEN_BRANCH_PREDICTOR
> -	bool "Harden the branch predictor against aliasing attacks"
> if EXPERT
> -	default y
> -	help
> -	  Speculation attacks against some high-performance
> processors rely on
> -	  being able to manipulate the branch predictor for a victim
> context by
> -	  executing aliasing branches in the attacker context.  Such
> attacks
> -	  can be partially mitigated against by clearing internal
> branch
> -	  predictor state and limiting the prediction logic in some
> situations. -
> -	  This config option will take CPU-specific actions to
> harden the
> -	  branch predictor against aliasing attacks and may rely on
> specific
> -	  instruction sequences or control bits being set by the
> system
> -	  firmware.
> -
> -	  If unsure, say Y.
> -
>  config HARDEN_EL2_VECTORS
>  	bool "Harden EL2 vector mapping against system register
> leak" if EXPERT default y
> diff --git a/arch/arm64/include/asm/kvm_mmu.h
> b/arch/arm64/include/asm/kvm_mmu.h index a5c152d79820..9dd680194db9
> 100644 --- a/arch/arm64/include/asm/kvm_mmu.h
> +++ b/arch/arm64/include/asm/kvm_mmu.h
> @@ -444,7 +444,6 @@ static inline int kvm_read_guest_lock(struct kvm
> *kvm, return ret;
>  }
>  
> -#ifdef CONFIG_KVM_INDIRECT_VECTORS
>  /*
>   * EL2 vectors can be mapped and rerouted in a number of ways,
>   * depending on the kernel configuration and CPU present:

Directly after this comment there is a #include line; can you please
move this up to the beginning of the file, now that it is
unconditional?

> @@ -529,17 +528,6 @@ static inline int kvm_map_vectors(void)
>  
>  	return 0;
>  }
> -#else
> -static inline void *kvm_get_hyp_vector(void)
> -{
> -	return kern_hyp_va(kvm_ksym_ref(__kvm_hyp_vector));
> -}
> -
> -static inline int kvm_map_vectors(void)
> -{
> -	return 0;
> -}
> -#endif
>  
>  DECLARE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
>  
> diff --git a/arch/arm64/include/asm/mmu.h
> b/arch/arm64/include/asm/mmu.h index 3e8063f4f9d3..20fdf71f96c3 100644
> --- a/arch/arm64/include/asm/mmu.h
> +++ b/arch/arm64/include/asm/mmu.h
> @@ -95,13 +95,9 @@ struct bp_hardening_data {
>  	bp_hardening_cb_t	fn;
>  };
>  
> -#if (defined(CONFIG_HARDEN_BRANCH_PREDICTOR) ||	\
> -     defined(CONFIG_HARDEN_EL2_VECTORS))
>  extern char __bp_harden_hyp_vecs_start[], __bp_harden_hyp_vecs_end[];
>  extern atomic_t arm64_el2_vector_last_slot;
> -#endif  /* CONFIG_HARDEN_BRANCH_PREDICTOR ||
> CONFIG_HARDEN_EL2_VECTORS */ 
> -#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
>  DECLARE_PER_CPU_READ_MOSTLY(struct bp_hardening_data,
> bp_hardening_data); 
>  static inline struct bp_hardening_data
> *arm64_get_bp_hardening_data(void) @@ -120,14 +116,6 @@ static inline
> void arm64_apply_bp_hardening(void) if (d->fn)
>  		d->fn();
>  }
> -#else
> -static inline struct bp_hardening_data
> *arm64_get_bp_hardening_data(void) -{
> -	return NULL;
> -}
> -
> -static inline void arm64_apply_bp_hardening(void)	{ }
> -#endif	/* CONFIG_HARDEN_BRANCH_PREDICTOR */
>  
>  extern void paging_init(void);
>  extern void bootmem_init(void);
> diff --git a/arch/arm64/kernel/cpu_errata.c
> b/arch/arm64/kernel/cpu_errata.c index 934d50788ca3..de09a3537cd4
> 100644 --- a/arch/arm64/kernel/cpu_errata.c
> +++ b/arch/arm64/kernel/cpu_errata.c
> @@ -109,13 +109,11 @@ cpu_enable_trap_ctr_access(const struct
> arm64_cpu_capabilities *__unused) 
>  atomic_t arm64_el2_vector_last_slot = ATOMIC_INIT(-1);
>  
> -#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
>  #include <asm/mmu_context.h>
>  #include <asm/cacheflush.h>

Same here, those should move up.

>  DEFINE_PER_CPU_READ_MOSTLY(struct bp_hardening_data,
> bp_hardening_data); 
> -#ifdef CONFIG_KVM_INDIRECT_VECTORS
>  extern char __smccc_workaround_1_smc_start[];
>  extern char __smccc_workaround_1_smc_end[];
>  
> @@ -165,17 +163,6 @@ static void
> __install_bp_hardening_cb(bp_hardening_cb_t fn,
> __this_cpu_write(bp_hardening_data.fn, fn); raw_spin_unlock(&bp_lock);
>  }
> -#else
> -#define __smccc_workaround_1_smc_start		NULL
> -#define __smccc_workaround_1_smc_end		NULL
> -
> -static void __install_bp_hardening_cb(bp_hardening_cb_t fn,
> -				      const char *hyp_vecs_start,
> -				      const char *hyp_vecs_end)
> -{
> -	__this_cpu_write(bp_hardening_data.fn, fn);
> -}
> -#endif	/* CONFIG_KVM_INDIRECT_VECTORS */
>  
>  static void  install_bp_hardening_cb(const struct
> arm64_cpu_capabilities *entry, bp_hardening_cb_t fn,
> @@ -279,7 +266,6 @@ enable_smccc_arch_workaround_1(const struct
> arm64_cpu_capabilities *entry) 
>  	return;
>  }
> -#endif	/* CONFIG_HARDEN_BRANCH_PREDICTOR */
>  
>  DEFINE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
>  
> @@ -516,7 +502,6 @@ cpu_enable_cache_maint_trap(const struct
> arm64_cpu_capabilities *__unused) .type =
> ARM64_CPUCAP_LOCAL_CPU_ERRATUM,			\
> CAP_MIDR_RANGE_LIST(midr_list) 
> -#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
>  
>  /*
>   * List of CPUs where we need to issue a psci call to
> @@ -535,8 +520,6 @@ static const struct midr_range
> arm64_bp_harden_smccc_cpus[] = { {},
>  };
>  
> -#endif
> -
>  #ifdef CONFIG_HARDEN_EL2_VECTORS
>  
>  static const struct midr_range arm64_harden_el2_vectors[] = {
> @@ -710,13 +693,11 @@ const struct arm64_cpu_capabilities
> arm64_errata[] = { ERRATA_MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
>  	},
>  #endif
> -#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
>  	{
>  		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
>  		.cpu_enable = enable_smccc_arch_workaround_1,
>  		ERRATA_MIDR_RANGE_LIST(arm64_bp_harden_smccc_cpus),
>  	},
> -#endif
>  #ifdef CONFIG_HARDEN_EL2_VECTORS
>  	{
>  		.desc = "EL2 vector hardening",
> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index bee54b7d17b9..3f0eaaf704c8 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -842,11 +842,9 @@ el0_irq_naked:
>  #endif
>  
>  	ct_user_exit
> -#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
>  	tbz	x22, #55, 1f
>  	bl	do_el0_irq_bp_hardening
>  1:
> -#endif
>  	irq_handler
>  
>  #ifdef CONFIG_TRACE_IRQFLAGS
> diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
> index a3f85624313e..402bcfb85f25 100644
> --- a/arch/arm64/kvm/Kconfig
> +++ b/arch/arm64/kvm/Kconfig
> @@ -58,9 +58,6 @@ config KVM_ARM_PMU
>  	  Adds support for a virtual Performance Monitoring Unit
> (PMU) in virtual machines.
>  
> -config KVM_INDIRECT_VECTORS
> -       def_bool KVM && (HARDEN_BRANCH_PREDICTOR ||
> HARDEN_EL2_VECTORS) -

That sounds tempting, but breaks compilation when CONFIG_KVM is not
defined (in arch/arm64/kernel/cpu_errata.c). So either we keep
CONFIG_KVM_INDIRECT_VECTORS or we replace the guards in the code with
CONFIG_KVM.
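
If we go the latter route, that would just be something like this
(illustrative sketch only):

  -#ifdef CONFIG_KVM_INDIRECT_VECTORS
  +#ifdef CONFIG_KVM

in cpu_errata.c and hyp-entry.S.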

Cheers,
Andre.

>  source "drivers/vhost/Kconfig"
>  
>  endif # VIRTUALIZATION
> diff --git a/arch/arm64/kvm/hyp/hyp-entry.S
> b/arch/arm64/kvm/hyp/hyp-entry.S index 53c9344968d4..e02ddf40f113
> 100644 --- a/arch/arm64/kvm/hyp/hyp-entry.S
> +++ b/arch/arm64/kvm/hyp/hyp-entry.S
> @@ -272,7 +272,6 @@ ENTRY(__kvm_hyp_vector)
>  	valid_vect	el1_error		// Error 32-bit
> EL1 ENDPROC(__kvm_hyp_vector)
>  
> -#ifdef CONFIG_KVM_INDIRECT_VECTORS
>  .macro hyp_ventry
>  	.align 7
>  1:	.rept 27
> @@ -331,4 +330,3 @@ ENTRY(__smccc_workaround_1_smc_start)
>  	ldp	x0, x1, [sp, #(8 * 2)]
>  	add	sp, sp, #(8 * 4)
>  ENTRY(__smccc_workaround_1_smc_end)
> -#endif


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v4 05/12] arm64: remove the ability to build a kernel without kpti
  2019-01-25 18:07 ` [PATCH v4 05/12] arm64: remove the ability to build a kernel without kpti Jeremy Linton
@ 2019-01-30 18:05   ` Andre Przywara
  0 siblings, 0 replies; 35+ messages in thread
From: Andre Przywara @ 2019-01-30 18:05 UTC (permalink / raw)
  To: Jeremy Linton
  Cc: linux-arm-kernel, stefan.wahren, mlangsdo, suzuki.poulose,
	marc.zyngier, catalin.marinas, julien.thierry, will.deacon,
	linux-kernel, steven.price, ykaukab, dave.martin, shankerd

On Fri, 25 Jan 2019 12:07:04 -0600
Jeremy Linton <jeremy.linton@arm.com> wrote:

> Buried behind EXPERT is the ability to build a kernel without
> kpti. This needlessly clutters up the
> code and creates the opportunity for bugs. It also
> removes the kernel's ability to determine whether the machine it's
> running on is vulnerable.
> 
> Since it's also possible to disable it at boot time, let's remove
> the config option.

Same comment as for the other two before: Disabling at boot time is not
the same as not configuring.

Otherwise looks good to me.

Cheers,
Andre.

> 
> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
> ---
>  arch/arm64/Kconfig              | 12 ------------
>  arch/arm64/include/asm/fixmap.h |  2 --
>  arch/arm64/include/asm/mmu.h    |  7 +------
>  arch/arm64/include/asm/sdei.h   |  2 +-
>  arch/arm64/kernel/asm-offsets.c |  2 --
>  arch/arm64/kernel/cpufeature.c  |  4 ----
>  arch/arm64/kernel/entry.S       | 11 +----------
>  arch/arm64/kernel/sdei.c        |  2 --
>  arch/arm64/kernel/vmlinux.lds.S |  8 --------
>  arch/arm64/mm/context.c         |  6 ------
>  arch/arm64/mm/mmu.c             |  2 --
>  arch/arm64/mm/proc.S            |  2 --
>  12 files changed, 3 insertions(+), 57 deletions(-)
> 
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 6b4c6d3fdf4d..09a85410d814 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -993,18 +993,6 @@ config FORCE_MAX_ZONEORDER
>  	  However for 4K, we choose a higher default value, 11 as
> opposed to 10, giving us 4M allocations matching the default size
> used by generic code. 
> -config UNMAP_KERNEL_AT_EL0
> -	bool "Unmap kernel when running in userspace (aka
> \"KAISER\")" if EXPERT
> -	default y
> -	help
> -	  Speculation attacks against some high-performance
> processors can
> -	  be used to bypass MMU permission checks and leak kernel
> data to
> -	  userspace. This can be defended against by unmapping the
> kernel
> -	  when running in userspace, mapping it back in on exception
> entry
> -	  via a trampoline page in the vector table.
> -
> -	  If unsure, say Y.
> -
>  config HARDEN_EL2_VECTORS
>  	bool "Harden EL2 vector mapping against system register
> leak" if EXPERT default y
> diff --git a/arch/arm64/include/asm/fixmap.h
> b/arch/arm64/include/asm/fixmap.h index ec1e6d6fa14c..62371f07d4ce
> 100644 --- a/arch/arm64/include/asm/fixmap.h
> +++ b/arch/arm64/include/asm/fixmap.h
> @@ -58,11 +58,9 @@ enum fixed_addresses {
>  	FIX_APEI_GHES_NMI,
>  #endif /* CONFIG_ACPI_APEI_GHES */
>  
> -#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
>  	FIX_ENTRY_TRAMP_DATA,
>  	FIX_ENTRY_TRAMP_TEXT,
>  #define TRAMP_VALIAS
> (__fix_to_virt(FIX_ENTRY_TRAMP_TEXT)) -#endif /*
> CONFIG_UNMAP_KERNEL_AT_EL0 */ __end_of_permanent_fixed_addresses,
>  
>  	/*
> diff --git a/arch/arm64/include/asm/mmu.h
> b/arch/arm64/include/asm/mmu.h index 20fdf71f96c3..9d689661471c 100644
> --- a/arch/arm64/include/asm/mmu.h
> +++ b/arch/arm64/include/asm/mmu.h
> @@ -42,18 +42,13 @@ typedef struct {
>  
>  static inline bool arm64_kernel_unmapped_at_el0(void)
>  {
> -	return IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0) &&
> -	       cpus_have_const_cap(ARM64_UNMAP_KERNEL_AT_EL0);
> +	return cpus_have_const_cap(ARM64_UNMAP_KERNEL_AT_EL0);
>  }
>  
>  static inline bool arm64_kernel_use_ng_mappings(void)
>  {
>  	bool tx1_bug;
>  
> -	/* What's a kpti? Use global mappings if we don't know. */
> -	if (!IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0))
> -		return false;
> -
>  	/*
>  	 * Note: this function is called before the CPU capabilities
> have
>  	 * been configured, so our early mappings will be global. If
> we diff --git a/arch/arm64/include/asm/sdei.h
> b/arch/arm64/include/asm/sdei.h index ffe47d766c25..82c3e9b6a4b0
> 100644 --- a/arch/arm64/include/asm/sdei.h
> +++ b/arch/arm64/include/asm/sdei.h
> @@ -23,7 +23,7 @@ extern unsigned long sdei_exit_mode;
>  asmlinkage void __sdei_asm_handler(unsigned long event_num, unsigned
> long arg, unsigned long pc, unsigned long pstate);
>  
> -/* and its CONFIG_UNMAP_KERNEL_AT_EL0 trampoline */
> +/* and its unmap kernel at el0 trampoline */
>  asmlinkage void __sdei_asm_entry_trampoline(unsigned long event_num,
>  						   unsigned long arg,
>  						   unsigned long pc,
> diff --git a/arch/arm64/kernel/asm-offsets.c
> b/arch/arm64/kernel/asm-offsets.c index 65b8afc84466..6a6f83de91b8
> 100644 --- a/arch/arm64/kernel/asm-offsets.c
> +++ b/arch/arm64/kernel/asm-offsets.c
> @@ -165,9 +165,7 @@ int main(void)
>    DEFINE(HIBERN_PBE_NEXT,	offsetof(struct pbe, next));
>    DEFINE(ARM64_FTR_SYSVAL,	offsetof(struct arm64_ftr_reg,
> sys_val)); BLANK();
> -#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
>    DEFINE(TRAMP_VALIAS,		TRAMP_VALIAS);
> -#endif
>  #ifdef CONFIG_ARM_SDE_INTERFACE
>    DEFINE(SDEI_EVENT_INTREGS,	offsetof(struct
> sdei_registered_event, interrupted_regs));
> DEFINE(SDEI_EVENT_PRIORITY,	offsetof(struct
> sdei_registered_event, priority)); diff --git
> a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index d1a7fd7972f9..a9e18b9cdc1e 100644 ---
> a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c
> @@ -944,7 +944,6 @@ has_useable_cnp(const struct
> arm64_cpu_capabilities *entry, int scope) return
> has_cpuid_feature(entry, scope); } 
> -#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
>  static int __kpti_forced; /* 0: not forced, >0: forced on, <0:
> forced off */ 
>  static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities
> *entry, @@ -1035,7 +1034,6 @@ static int __init parse_kpti(char *str)
>  	return 0;
>  }
>  early_param("kpti", parse_kpti);
> -#endif	/* CONFIG_UNMAP_KERNEL_AT_EL0 */
>  
>  #ifdef CONFIG_ARM64_HW_AFDBM
>  static inline void __cpu_enable_hw_dbm(void)
> @@ -1284,7 +1282,6 @@ static const struct arm64_cpu_capabilities
> arm64_features[] = { .field_pos = ID_AA64PFR0_EL0_SHIFT,
>  		.min_field_value = ID_AA64PFR0_EL0_32BIT_64BIT,
>  	},
> -#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
>  	{
>  		.desc = "Kernel page table isolation (KPTI)",
>  		.capability = ARM64_UNMAP_KERNEL_AT_EL0,
> @@ -1300,7 +1297,6 @@ static const struct arm64_cpu_capabilities
> arm64_features[] = { .matches = unmap_kernel_at_el0,
>  		.cpu_enable = kpti_install_ng_mappings,
>  	},
> -#endif
>  	{
>  		/* FP/SIMD is not implemented */
>  		.capability = ARM64_HAS_NO_FPSIMD,
> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index 3f0eaaf704c8..1d8efc144b04 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -70,7 +70,6 @@
>  
>  	.macro kernel_ventry, el, label, regsize = 64
>  	.align 7
> -#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
>  alternative_if ARM64_UNMAP_KERNEL_AT_EL0
>  	.if	\el == 0
>  	.if	\regsize == 64
> @@ -81,7 +80,6 @@ alternative_if ARM64_UNMAP_KERNEL_AT_EL0
>  	.endif
>  	.endif
>  alternative_else_nop_endif
> -#endif
>  
>  	sub	sp, sp, #S_FRAME_SIZE
>  #ifdef CONFIG_VMAP_STACK
> @@ -345,7 +343,6 @@ alternative_else_nop_endif
>  
>  	.if	\el == 0
>  alternative_insn eret, nop, ARM64_UNMAP_KERNEL_AT_EL0
> -#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
>  	bne	4f
>  	msr	far_el1, x30
>  	tramp_alias	x30, tramp_exit_native
> @@ -353,7 +350,7 @@ alternative_insn eret, nop,
> ARM64_UNMAP_KERNEL_AT_EL0 4:
>  	tramp_alias	x30, tramp_exit_compat
>  	br	x30
> -#endif
> +
>  	.else
>  	eret
>  	.endif
> @@ -913,7 +910,6 @@ ENDPROC(el0_svc)
>  
>  	.popsection				// .entry.text
>  
> -#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
>  /*
>   * Exception vectors trampoline.
>   */
> @@ -1023,7 +1019,6 @@ __entry_tramp_data_start:
>  	.quad	vectors
>  	.popsection				// .rodata
>  #endif /* CONFIG_RANDOMIZE_BASE */
> -#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
>  
>  /*
>   * Register switch for AArch64. The callee-saved registers need to
> be saved @@ -1086,7 +1081,6 @@ NOKPROBE(ret_from_fork)
>  	b	.
>  .endm
>  
> -#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
>  /*
>   * The regular SDEI entry point may have been unmapped along with
> the rest of
>   * the kernel. This trampoline restores the kernel mapping to make
> the x1 memory @@ -1146,7 +1140,6 @@
> __sdei_asm_trampoline_next_handler: .quad	__sdei_asm_handler
>  .popsection		// .rodata
>  #endif /* CONFIG_RANDOMIZE_BASE */
> -#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
>  
>  /*
>   * Software Delegated Exception entry point.
> @@ -1240,10 +1233,8 @@ alternative_if_not ARM64_UNMAP_KERNEL_AT_EL0
>  	sdei_handler_exit exit_mode=x2
>  alternative_else_nop_endif
>  
> -#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
>  	tramp_alias	dst=x5, sym=__sdei_asm_exit_trampoline
>  	br	x5
> -#endif
>  ENDPROC(__sdei_asm_handler)
>  NOKPROBE(__sdei_asm_handler)
>  #endif /* CONFIG_ARM_SDE_INTERFACE */
> diff --git a/arch/arm64/kernel/sdei.c b/arch/arm64/kernel/sdei.c
> index 5ba4465e44f0..a0dbdb962019 100644
> --- a/arch/arm64/kernel/sdei.c
> +++ b/arch/arm64/kernel/sdei.c
> @@ -157,7 +157,6 @@ unsigned long sdei_arch_get_entry_point(int
> conduit) 
>  	sdei_exit_mode = (conduit == CONDUIT_HVC) ? SDEI_EXIT_HVC :
> SDEI_EXIT_SMC; 
> -#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
>  	if (arm64_kernel_unmapped_at_el0()) {
>  		unsigned long offset;
>  
> @@ -165,7 +164,6 @@ unsigned long sdei_arch_get_entry_point(int
> conduit) (unsigned long)__entry_tramp_text_start;
>  		return TRAMP_VALIAS + offset;
>  	} else
> -#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
>  		return (unsigned long)__sdei_asm_handler;
>  
>  }
> diff --git a/arch/arm64/kernel/vmlinux.lds.S
> b/arch/arm64/kernel/vmlinux.lds.S index 7fa008374907..a4dbee11bcb5
> 100644 --- a/arch/arm64/kernel/vmlinux.lds.S
> +++ b/arch/arm64/kernel/vmlinux.lds.S
> @@ -57,16 +57,12 @@ jiffies = jiffies_64;
>  #define HIBERNATE_TEXT
>  #endif
>  
> -#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
>  #define TRAMP_TEXT					\
>  	. = ALIGN(PAGE_SIZE);				\
>  	__entry_tramp_text_start = .;			\
>  	*(.entry.tramp.text)				\
>  	. = ALIGN(PAGE_SIZE);				\
>  	__entry_tramp_text_end = .;
> -#else
> -#define TRAMP_TEXT
> -#endif
>  
>  /*
>   * The size of the PE/COFF section that covers the kernel image,
> which @@ -143,10 +139,8 @@ SECTIONS
>  	idmap_pg_dir = .;
>  	. += IDMAP_DIR_SIZE;
>  
> -#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
>  	tramp_pg_dir = .;
>  	. += PAGE_SIZE;
> -#endif
>  
>  #ifdef CONFIG_ARM64_SW_TTBR0_PAN
>  	reserved_ttbr0 = .;
> @@ -257,10 +251,8 @@ ASSERT(__idmap_text_end - (__idmap_text_start &
> ~(SZ_4K - 1)) <= SZ_4K, ASSERT(__hibernate_exit_text_end -
> (__hibernate_exit_text_start & ~(SZ_4K - 1)) <= SZ_4K, "Hibernate
> exit text too big or misaligned") #endif
> -#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
>  ASSERT((__entry_tramp_text_end - __entry_tramp_text_start) ==
> PAGE_SIZE, "Entry trampoline text too big")
> -#endif
>  /*
>   * If padding is applied before .head.text, virt<->phys conversions
> will fail. */
> diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
> index 1f0ea2facf24..e99f3e645e06 100644
> --- a/arch/arm64/mm/context.c
> +++ b/arch/arm64/mm/context.c
> @@ -40,15 +40,9 @@ static cpumask_t tlb_flush_pending;
>  #define ASID_MASK		(~GENMASK(asid_bits - 1, 0))
>  #define ASID_FIRST_VERSION	(1UL << asid_bits)
>  
> -#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
>  #define NUM_USER_ASIDS		(ASID_FIRST_VERSION >> 1)
>  #define asid2idx(asid)		(((asid) & ~ASID_MASK) >> 1)
>  #define idx2asid(idx)		(((idx) << 1) & ~ASID_MASK)
> -#else
> -#define NUM_USER_ASIDS		(ASID_FIRST_VERSION)
> -#define asid2idx(asid)		((asid) & ~ASID_MASK)
> -#define idx2asid(idx)		asid2idx(idx)
> -#endif
>  
>  /* Get the ASIDBits supported by the current CPU */
>  static u32 get_cpu_asid_bits(void)
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index b6f5aa52ac67..97252baf4700 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -570,7 +570,6 @@ static int __init parse_rodata(char *arg)
>  }
>  early_param("rodata", parse_rodata);
>  
> -#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
>  static int __init map_entry_trampoline(void)
>  {
>  	pgprot_t prot = rodata_enabled ? PAGE_KERNEL_ROX :
> PAGE_KERNEL_EXEC; @@ -597,7 +596,6 @@ static int __init
> map_entry_trampoline(void) return 0;
>  }
>  core_initcall(map_entry_trampoline);
> -#endif
>  
>  /*
>   * Create fine-grained mappings for the kernel.
> diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
> index 73886a5f1f30..e9ca5cbb93bc 100644
> --- a/arch/arm64/mm/proc.S
> +++ b/arch/arm64/mm/proc.S
> @@ -217,7 +217,6 @@ ENTRY(idmap_cpu_replace_ttbr1)
>  ENDPROC(idmap_cpu_replace_ttbr1)
>  	.popsection
>  
> -#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
>  	.pushsection ".idmap.text", "awx"
>  
>  	.macro	__idmap_kpti_get_pgtable_ent, type
> @@ -406,7 +405,6 @@ __idmap_kpti_secondary:
>  	.unreq	pte
>  ENDPROC(idmap_kpti_install_ng_mappings)
>  	.popsection
> -#endif
>  
>  /*
>   *	__cpu_setup


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v4 07/12] arm64: add sysfs vulnerability show for meltdown
  2019-01-25 18:07 ` [PATCH v4 07/12] arm64: add sysfs vulnerability show for meltdown Jeremy Linton
@ 2019-01-31  9:28   ` Julien Thierry
  2019-01-31 21:48     ` Jeremy Linton
  2019-01-31 17:54   ` Andre Przywara
  1 sibling, 1 reply; 35+ messages in thread
From: Julien Thierry @ 2019-01-31  9:28 UTC (permalink / raw)
  To: Jeremy Linton, linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	dave.martin, shankerd, linux-kernel, ykaukab, mlangsdo,
	steven.price, stefan.wahren

Hi Jeremy,

On 25/01/2019 18:07, Jeremy Linton wrote:
> Display the mitigation status if active; otherwise
> assume the CPU is safe unless it doesn't have CSV3
> and isn't in our whitelist.
> 
> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
> ---
>  arch/arm64/kernel/cpufeature.c | 33 +++++++++++++++++++++++++++------
>  1 file changed, 27 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index a9e18b9cdc1e..624dfe0b5cdd 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -944,6 +944,8 @@ has_useable_cnp(const struct arm64_cpu_capabilities *entry, int scope)
>  	return has_cpuid_feature(entry, scope);
>  }
>  
> +/* default value is invalid until unmap_kernel_at_el0() runs */
> +static bool __meltdown_safe = true;
>  static int __kpti_forced; /* 0: not forced, >0: forced on, <0: forced off */
>  
>  static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
> @@ -962,6 +964,16 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
>  		{ /* sentinel */ }
>  	};
>  	char const *str = "command line option";
> +	bool meltdown_safe;
> +
> +	meltdown_safe = is_midr_in_range_list(read_cpuid_id(), kpti_safe_list);
> +
> +	/* Defer to CPU feature registers */
> +	if (has_cpuid_feature(entry, scope))
> +		meltdown_safe = true;

Do we need to check the cpuid registers if the CPU is in the known safe
list?

Otherwise:

Reviewed-by: Julien Thierry <julien.thierry@arm.com>

> +
> +	if (!meltdown_safe)
> +		__meltdown_safe = false;
>  
>  	/*
>  	 * For reasons that aren't entirely clear, enabling KPTI on Cavium
> @@ -984,12 +996,7 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
>  	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE))
>  		return kaslr_offset() > 0;
>  
> -	/* Don't force KPTI for CPUs that are not vulnerable */
> -	if (is_midr_in_range_list(read_cpuid_id(), kpti_safe_list))
> -		return false;
> -
> -	/* Defer to CPU feature registers */
> -	return !has_cpuid_feature(entry, scope);
> +	return !meltdown_safe;
>  }
>  
>  static void
> @@ -2055,3 +2062,17 @@ static int __init enable_mrs_emulation(void)
>  }
>  
>  core_initcall(enable_mrs_emulation);
> +
> +#ifdef CONFIG_GENERIC_CPU_VULNERABILITIES
> +ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr,
> +		char *buf)
> +{
> +	if (arm64_kernel_unmapped_at_el0())
> +		return sprintf(buf, "Mitigation: KPTI\n");
> +
> +	if (__meltdown_safe)
> +		return sprintf(buf, "Not affected\n");
> +
> +	return sprintf(buf, "Vulnerable\n");
> +}
> +#endif
> 

-- 
Julien Thierry

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v4 06/12] arm64: add sysfs vulnerability show for spectre v1
  2019-01-25 18:07 ` [PATCH v4 06/12] arm64: add sysfs vulnerability show for spectre v1 Jeremy Linton
@ 2019-01-31 17:52   ` Andre Przywara
  0 siblings, 0 replies; 35+ messages in thread
From: Andre Przywara @ 2019-01-31 17:52 UTC (permalink / raw)
  To: Jeremy Linton
  Cc: linux-arm-kernel, stefan.wahren, mlangsdo, suzuki.poulose,
	marc.zyngier, catalin.marinas, julien.thierry, will.deacon,
	linux-kernel, steven.price, ykaukab, dave.martin, shankerd

On Fri, 25 Jan 2019 12:07:05 -0600
Jeremy Linton <jeremy.linton@arm.com> wrote:

Hi,

> From: Mian Yousaf Kaukab <ykaukab@suse.de>
> 
> Spectre v1 has been mitigated, and the mitigation is
> always active.
> 
> Signed-off-by: Mian Yousaf Kaukab <ykaukab@suse.de>
> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
> ---
>  arch/arm64/kernel/cpu_errata.c | 10 ++++++++++
>  1 file changed, 10 insertions(+)
> 
> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
> index de09a3537cd4..ef636acf5604 100644
> --- a/arch/arm64/kernel/cpu_errata.c
> +++ b/arch/arm64/kernel/cpu_errata.c
> @@ -730,3 +730,13 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
>  	{
>  	}
>  };
> +
> +#ifdef CONFIG_GENERIC_CPU_VULNERABILITIES
> +
> +ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr,
> +		char *buf)

w/s issue. Other than that:

Reviewed-by: Andre Przywara <andre.przywara@arm.com>

Cheers,
Andre.

> +{
> +	return sprintf(buf, "Mitigation: __user pointer sanitization\n");
> +}
> +
> +#endif

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v4 07/12] arm64: add sysfs vulnerability show for meltdown
  2019-01-25 18:07 ` [PATCH v4 07/12] arm64: add sysfs vulnerability show for meltdown Jeremy Linton
  2019-01-31  9:28   ` Julien Thierry
@ 2019-01-31 17:54   ` Andre Przywara
  2019-01-31 21:53     ` Jeremy Linton
  1 sibling, 1 reply; 35+ messages in thread
From: Andre Przywara @ 2019-01-31 17:54 UTC (permalink / raw)
  To: Jeremy Linton
  Cc: linux-arm-kernel, stefan.wahren, mlangsdo, suzuki.poulose,
	marc.zyngier, catalin.marinas, julien.thierry, will.deacon,
	linux-kernel, steven.price, ykaukab, dave.martin, shankerd

On Fri, 25 Jan 2019 12:07:06 -0600
Jeremy Linton <jeremy.linton@arm.com> wrote:

Hi,

> Display the mitigation status if active; otherwise
> assume the CPU is safe unless it doesn't have CSV3
> and isn't in our whitelist.
> 
> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
> ---
>  arch/arm64/kernel/cpufeature.c | 33 +++++++++++++++++++++++++++------
>  1 file changed, 27 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index a9e18b9cdc1e..624dfe0b5cdd 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -944,6 +944,8 @@ has_useable_cnp(const struct arm64_cpu_capabilities *entry, int scope)
> 	return has_cpuid_feature(entry, scope);
> }
>  
> +/* default value is invalid until unmap_kernel_at_el0() runs */

Shall we somehow enforce this? For instance by making __meltdown_safe
an enum, initialised to UNKNOWN?
Then bail out with a BUG_ON or WARN_ON in the sysfs code?

I just want to avoid accidentally reporting "safe" when we actually
aren't.
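
Something like this minimal sketch, perhaps (names made up, purely
illustrative):

	enum meltdown_state { MELTDOWN_UNKNOWN, MELTDOWN_SAFE, MELTDOWN_VULNERABLE };

	/* stays UNKNOWN until unmap_kernel_at_el0() has run */
	static enum meltdown_state __meltdown_state = MELTDOWN_UNKNOWN;

	ssize_t cpu_show_meltdown(struct device *dev,
				  struct device_attribute *attr, char *buf)
	{
		/* catch anyone reading the state before it was determined */
		if (WARN_ON(__meltdown_state == MELTDOWN_UNKNOWN))
			return sprintf(buf, "Unknown\n");
		...
	}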

> +static bool __meltdown_safe = true;
>  static int __kpti_forced; /* 0: not forced, >0: forced on, <0: forced off */ 
>  static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
> @@ -962,6 +964,16 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
> 		{ /* sentinel */ }
>  	};
>  	char const *str = "command line option";
> +	bool meltdown_safe;
> +
> +	meltdown_safe = is_midr_in_range_list(read_cpuid_id(), kpti_safe_list);
> +
> +	/* Defer to CPU feature registers */
> +	if (has_cpuid_feature(entry, scope))
> +		meltdown_safe = true;
> +
> +	if (!meltdown_safe)
> +		__meltdown_safe = false;
>  
>  	/*
>  	 * For reasons that aren't entirely clear, enabling KPTI on Cavium
> @@ -984,12 +996,7 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
> 	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE))
> 		return kaslr_offset() > 0;
>  
> -	/* Don't force KPTI for CPUs that are not vulnerable */
> -	if (is_midr_in_range_list(read_cpuid_id(), kpti_safe_list))
> -		return false;
> -
> -	/* Defer to CPU feature registers */
> -	return !has_cpuid_feature(entry, scope);
> +	return !meltdown_safe;
>  }
>  
>  static void
> @@ -2055,3 +2062,17 @@ static int __init enable_mrs_emulation(void)
>  }
>  
>  core_initcall(enable_mrs_emulation);
> +
> +#ifdef CONFIG_GENERIC_CPU_VULNERABILITIES
> +ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr,
> +		char *buf)

w/s issue.

Cheers,
Andre.

> +{
> +	if (arm64_kernel_unmapped_at_el0())
> +		return sprintf(buf, "Mitigation: KPTI\n");
> +
> +	if (__meltdown_safe)
> +		return sprintf(buf, "Not affected\n");
> +
> +	return sprintf(buf, "Vulnerable\n");
> +}
> +#endif


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v4 08/12] arm64: Advertise mitigation of Spectre-v2, or lack thereof
  2019-01-25 18:07 ` [PATCH v4 08/12] arm64: Advertise mitigation of Spectre-v2, or lack thereof Jeremy Linton
@ 2019-01-31 17:54   ` Andre Przywara
  0 siblings, 0 replies; 35+ messages in thread
From: Andre Przywara @ 2019-01-31 17:54 UTC (permalink / raw)
  To: Jeremy Linton
  Cc: linux-arm-kernel, stefan.wahren, mlangsdo, suzuki.poulose,
	marc.zyngier, catalin.marinas, julien.thierry, will.deacon,
	linux-kernel, steven.price, ykaukab, dave.martin, shankerd

On Fri, 25 Jan 2019 12:07:07 -0600
Jeremy Linton <jeremy.linton@arm.com> wrote:

> From: Marc Zyngier <marc.zyngier@arm.com>
> 
> We currently have a list of CPUs affected by Spectre-v2, for which
> we check that the firmware implements ARCH_WORKAROUND_1. It turns
> out that not all firmwares do implement the required mitigation,
> and that we fail to let the user know about it.
> 
> Instead, let's slightly revamp our checks, and rely on a whitelist
> of cores that are known to be non-vulnerable, and let the user know
> the status of the mitigation in the kernel log.

Yeah, this looks better; I was scratching my head about that
blacklist already.

> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> [This makes more sense in front of the sysfs patch]
> [Pick pieces of that patch into this and move it earlier]
> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>

Reviewed-by: Andre Przywara <andre.przywara@arm.com>

> ---
>  arch/arm64/kernel/cpu_errata.c | 104 +++++++++++++++++----------------
>  1 file changed, 54 insertions(+), 50 deletions(-)
> 
....

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v4 09/12] arm64: Use firmware to detect CPUs that are not affected by Spectre-v2
  2019-01-25 18:07 ` [PATCH v4 09/12] arm64: Use firmware to detect CPUs that are not affected by Spectre-v2 Jeremy Linton
@ 2019-01-31 17:55   ` Andre Przywara
  0 siblings, 0 replies; 35+ messages in thread
From: Andre Przywara @ 2019-01-31 17:55 UTC (permalink / raw)
  To: Jeremy Linton
  Cc: linux-arm-kernel, stefan.wahren, mlangsdo, suzuki.poulose,
	marc.zyngier, catalin.marinas, julien.thierry, will.deacon,
	linux-kernel, steven.price, ykaukab, dave.martin, shankerd

On Fri, 25 Jan 2019 12:07:08 -0600
Jeremy Linton <jeremy.linton@arm.com> wrote:

> From: Marc Zyngier <marc.zyngier@arm.com>
> 
> The SMCCC ARCH_WORKAROUND_1 service can indicate that although the
> firmware knows about the Spectre-v2 mitigation, this particular
> CPU is not vulnerable, and it is thus not necessary to call
> the firmware on this CPU.
> 
> Let's use this information to our benefit.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>

Yes, I stumbled over this in the firmware spec as well:

Reviewed-by: Andre Przywara <andre.przywara@arm.com>

Cheers,
Andre.

> ---
>  arch/arm64/kernel/cpu_errata.c | 32 +++++++++++++++++++++++---------
>  1 file changed, 23 insertions(+), 9 deletions(-)
> 
> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
> index 4d23b4d4cfa8..024c83ffff99 100644
> --- a/arch/arm64/kernel/cpu_errata.c
> +++ b/arch/arm64/kernel/cpu_errata.c
> @@ -217,22 +217,36 @@ static int detect_harden_bp_fw(void)
>  	case PSCI_CONDUIT_HVC:
>  		arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
>  				  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
> -		if ((int)res.a0 < 0)
> +		switch ((int)res.a0) {
> +		case 1:
> +			/* Firmware says we're just fine */
> +			return 0;
> +		case 0:
> +			cb = call_hvc_arch_workaround_1;
> +			/* This is a guest, no need to patch KVM vectors */
> +			smccc_start = NULL;
> +			smccc_end = NULL;
> +			break;
> +		default:
>  			return -1;
> -		cb = call_hvc_arch_workaround_1;
> -		/* This is a guest, no need to patch KVM vectors */
> -		smccc_start = NULL;
> -		smccc_end = NULL;
> +		}
>  		break;
>  
>  	case PSCI_CONDUIT_SMC:
>  		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
>  				  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
> -		if ((int)res.a0 < 0)
> +		switch ((int)res.a0) {
> +		case 1:
> +			/* Firmware says we're just fine */
> +			return 0;
> +		case 0:
> +			cb = call_smc_arch_workaround_1;
> +			smccc_start = __smccc_workaround_1_smc_start;
> +			smccc_end = __smccc_workaround_1_smc_end;
> +			break;
> +		default:
>  			return -1;
> -		cb = call_smc_arch_workaround_1;
> -		smccc_start = __smccc_workaround_1_smc_start;
> -		smccc_end = __smccc_workaround_1_smc_end;
> +		}
>  		break;
>  
>  	default:


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v4 10/12] arm64: add sysfs vulnerability show for spectre v2
  2019-01-25 18:07 ` [PATCH v4 10/12] arm64: add sysfs vulnerability show for spectre v2 Jeremy Linton
@ 2019-01-31 17:55   ` Andre Przywara
  0 siblings, 0 replies; 35+ messages in thread
From: Andre Przywara @ 2019-01-31 17:55 UTC (permalink / raw)
  To: Jeremy Linton
  Cc: linux-arm-kernel, stefan.wahren, mlangsdo, suzuki.poulose,
	marc.zyngier, catalin.marinas, julien.thierry, will.deacon,
	linux-kernel, steven.price, ykaukab, dave.martin, shankerd

On Fri, 25 Jan 2019 12:07:09 -0600
Jeremy Linton <jeremy.linton@arm.com> wrote:

Hi,

> Add code to track whether all the cores in the machine are
> vulnerable, and whether all the vulnerable cores have been
> mitigated.
> 
> Once we have that information we can add the sysfs stub and
> provide an accurate view of what is known about the machine.
> 
> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
> ---
>  arch/arm64/kernel/cpu_errata.c | 31 +++++++++++++++++++++++++++++--
>  1 file changed, 29 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
> index 024c83ffff99..caedf268c972 100644
> --- a/arch/arm64/kernel/cpu_errata.c
> +++ b/arch/arm64/kernel/cpu_errata.c
> @@ -497,6 +497,10 @@ cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
> 	.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,			\
> 	CAP_MIDR_RANGE_LIST(midr_list)
> 
> +/* Track overall mitigation state. We are only mitigated if all cores are ok */
> +static bool __hardenbp_enab = true;
> +static bool __spectrev2_safe = true;
> +
>  /*
>   * List of CPUs that do not need any Spectre-v2 mitigation at all.
>   */
> @@ -507,6 +511,10 @@ static const struct midr_range spectre_v2_safe_list[] = {
> 	{ /* sentinel */ }
>  };
>  
> +/*
> + * Track overall bp hardening for all heterogeneous cores in the machine.
> + * We are only considered "safe" if all booted cores are known safe.
> + */
>  static bool __maybe_unused
>  check_branch_predictor(const struct arm64_cpu_capabilities *entry, int scope)
>  {
> @@ -528,12 +536,19 @@ check_branch_predictor(const struct arm64_cpu_capabilities *entry, int scope)
>  	if (!need_wa)
>  		return false;
>  
> -	if (need_wa < 0)
> +	__spectrev2_safe = false;
> +
> +	if (need_wa < 0) {
>  		pr_warn_once("ARM_SMCCC_ARCH_WORKAROUND_1 missing from firmware\n");
> +		__hardenbp_enab = false;
> +	}
>  
>  	/* forced off */
> -	if (__nospectre_v2)
> +	if (__nospectre_v2) {
> +		pr_info_once("spectrev2 mitigation disabled by command line option\n");
> +		__hardenbp_enab = false;
>  		return false;
> +	}
>  
>  	return (need_wa > 0);
>  }
> @@ -757,4 +772,16 @@ ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr,
>  	return sprintf(buf, "Mitigation: __user pointer sanitization\n");
>  }
>  
> +ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr,
> +		char *buf)

Whitespace issue in the continuation line. Other than that:

Reviewed-by: Andre Przywara <andre.przywara@arm.com>

Cheers,
Andre.

> +{
> +	if (__spectrev2_safe)
> +		return sprintf(buf, "Not affected\n");
> +
> +	if (__hardenbp_enab)
> +		return sprintf(buf, "Mitigation: Branch predictor hardening\n");
> +
> +	return sprintf(buf, "Vulnerable\n");
> +}
> +
>  #endif


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v4 11/12] arm64: add sysfs vulnerability show for speculative store bypass
  2019-01-25 18:07 ` [PATCH v4 11/12] arm64: add sysfs vulnerability show for speculative store bypass Jeremy Linton
@ 2019-01-31 17:55   ` Andre Przywara
  0 siblings, 0 replies; 35+ messages in thread
From: Andre Przywara @ 2019-01-31 17:55 UTC (permalink / raw)
  To: Jeremy Linton
  Cc: linux-arm-kernel, stefan.wahren, mlangsdo, suzuki.poulose,
	marc.zyngier, catalin.marinas, julien.thierry, will.deacon,
	linux-kernel, steven.price, ykaukab, dave.martin, shankerd

On Fri, 25 Jan 2019 12:07:10 -0600
Jeremy Linton <jeremy.linton@arm.com> wrote:

Hi,

> Return status based on ssbd_state and the arm64 SSBS feature. If
> the mitigation is disabled, or the firmware isn't responding, then
> return the expected machine state based on a new blacklist of known
> vulnerable cores.
> 
> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
> ---
>  arch/arm64/kernel/cpu_errata.c | 45 ++++++++++++++++++++++++++++++++++
>  1 file changed, 45 insertions(+)
> 
> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
> index caedf268c972..e9ae8e5fd7e1 100644
> --- a/arch/arm64/kernel/cpu_errata.c
> +++ b/arch/arm64/kernel/cpu_errata.c
> @@ -265,6 +265,7 @@ static int detect_harden_bp_fw(void)
>  DEFINE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
>  
>  int ssbd_state __read_mostly = ARM64_SSBD_KERNEL;
> +static bool __ssb_safe = true;
>  
>  static const struct ssbd_options {
>  	const char	*str;
> @@ -362,10 +363,16 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
>  {
>  	struct arm_smccc_res res;
>  	bool required = true;
> +	bool is_vul;

I don't think you need this variable; you can just call is_midr_in_range_list() directly.
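Concretely, that would be something like this untested sketch:

	if (is_midr_in_range_list(read_cpuid_id(), entry->midr_range_list))
		__ssb_safe = false;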

>  	s32 val;
>  
>  	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
>  
> +	is_vul = is_midr_in_range_list(read_cpuid_id(), entry->midr_range_list);
> +
> +	if (is_vul)
> +		__ssb_safe = false;
> +
>  	if (this_cpu_has_cap(ARM64_SSBS)) {
>  		required = false;
>  		goto out_printmsg;
> @@ -399,6 +406,7 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
>  		ssbd_state = ARM64_SSBD_UNKNOWN;
>  		return false;
>  
> +	/* machines with mixed mitigation requirements must not return this */
>  	case SMCCC_RET_NOT_REQUIRED:
>  		pr_info_once("%s mitigation not required\n", entry->desc);
>  		ssbd_state = ARM64_SSBD_MITIGATED;
> @@ -454,6 +462,16 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
>  	return required;
>  }
>  
> +/* known vulnerable cores */
> +static const struct midr_range arm64_ssb_cpus[] = {
> +	MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
> +	MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
> +	MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
> +	MIDR_ALL_VERSIONS(MIDR_CORTEX_A75),
> +	MIDR_ALL_VERSIONS(MIDR_CORTEX_A76),
> +	{},
> +};
> +
>  static void __maybe_unused
>  cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
>  {
> @@ -743,6 +761,7 @@ const struct arm64_cpu_capabilities arm64_errata[] =
>  	{ .capability = ARM64_SSBD,
>  		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
>  		.matches = has_ssbd_mitigation,
> +		.midr_range_list = arm64_ssb_cpus,
>  	},
>  #ifdef CONFIG_ARM64_ERRATUM_1188873
>  	{
> @@ -784,4 +803,30 @@ ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr,
>  	return sprintf(buf, "Vulnerable\n");
>  }
>  
> +ssize_t cpu_show_spec_store_bypass(struct device *dev,
> +		struct device_attribute *attr, char *buf)

Whitespace issue in the continuation line.

Cheers,
Andre.

> +{
> +	/*
> +	 * Two assumptions: first, that arm64_get_ssbd_state() reflects
> +	 * the worst case for heterogeneous machines; second, that if
> +	 * SSBS is supported, it is supported by all cores.
> +	 */
> +	switch (arm64_get_ssbd_state()) {
> +	case ARM64_SSBD_MITIGATED:
> +		return sprintf(buf, "Not affected\n");
> +
> +	case ARM64_SSBD_KERNEL:
> +	case ARM64_SSBD_FORCE_ENABLE:
> +		if (cpus_have_cap(ARM64_SSBS))
> +			return sprintf(buf, "Not affected\n");
> +		return sprintf(buf,
> +			"Mitigation: Speculative Store Bypass disabled\n");
> +	}
> +
> +	if (__ssb_safe)
> +		return sprintf(buf, "Not affected\n");
> +
> +	return sprintf(buf, "Vulnerable\n");
> +}
> +
>  #endif


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v4 12/12] arm64: enable generic CPU vulnerabilities support
  2019-01-25 18:07 ` [PATCH v4 12/12] arm64: enable generic CPU vulnerabilities support Jeremy Linton
@ 2019-01-31 17:56   ` Andre Przywara
  0 siblings, 0 replies; 35+ messages in thread
From: Andre Przywara @ 2019-01-31 17:56 UTC (permalink / raw)
  To: Jeremy Linton
  Cc: linux-arm-kernel, stefan.wahren, mlangsdo, suzuki.poulose,
	marc.zyngier, catalin.marinas, julien.thierry, will.deacon,
	linux-kernel, steven.price, ykaukab, dave.martin, shankerd

On Fri, 25 Jan 2019 12:07:11 -0600
Jeremy Linton <jeremy.linton@arm.com> wrote:

> From: Mian Yousaf Kaukab <ykaukab@suse.de>
> 
> Enable CPU vulnerability show functions for spectre_v1, spectre_v2,
> meltdown and store-bypass.
> 
> Signed-off-by: Mian Yousaf Kaukab <ykaukab@suse.de>
> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>

Reviewed-by: Andre Przywara <andre.przywara@arm.com>

Cheers,
Andre

> ---
>  arch/arm64/Kconfig | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 09a85410d814..36a7cfbbfbb3 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -88,6 +88,7 @@ config ARM64
>  	select GENERIC_CLOCKEVENTS
>  	select GENERIC_CLOCKEVENTS_BROADCAST
>  	select GENERIC_CPU_AUTOPROBE
> +	select GENERIC_CPU_VULNERABILITIES
>  	select GENERIC_EARLY_IOREMAP
>  	select GENERIC_IDLE_POLL_SETUP
>  	select GENERIC_IRQ_MULTI_HANDLER


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v4 01/12] Documentation: Document arm64 kpti control
  2019-01-25 18:07 ` [PATCH v4 01/12] Documentation: Document arm64 kpti control Jeremy Linton
  2019-01-30 18:02   ` Andre Przywara
@ 2019-01-31 17:58   ` Andre Przywara
  2019-02-07  0:25   ` Jonathan Corbet
  2 siblings, 0 replies; 35+ messages in thread
From: Andre Przywara @ 2019-01-31 17:58 UTC (permalink / raw)
  To: Jeremy Linton
  Cc: linux-arm-kernel, stefan.wahren, Jonathan Corbet, mlangsdo,
	linux-doc, suzuki.poulose, marc.zyngier, catalin.marinas,
	julien.thierry, will.deacon, linux-kernel, steven.price, ykaukab,
	dave.martin, shankerd

On Fri, 25 Jan 2019 12:07:00 -0600
Jeremy Linton <jeremy.linton@arm.com> wrote:

> For a while Arm64 has been capable of force enabling
> or disabling the kpti mitigations. Let's make sure the
> documentation reflects that.
> 
> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
> Cc: Jonathan Corbet <corbet@lwn.net>
> Cc: linux-doc@vger.kernel.org

Reviewed-by: Andre Przywara <andre.przywara@arm.com>

Cheers,
Andre.

> ---
>  Documentation/admin-guide/kernel-parameters.txt | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/Documentation/admin-guide/kernel-parameters.txt
> b/Documentation/admin-guide/kernel-parameters.txt
> index b799bcf67d7b..9475f02c79da 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -1982,6 +1982,12 @@
>  			Built with CONFIG_DEBUG_KMEMLEAK_DEFAULT_OFF=y,
>  			the default is off.
>  
> +	kpti=		[ARM64] Control page table isolation of user
> +			and kernel address spaces.
> +			Default: enabled on cores which need mitigation.
> +			0: force disabled
> +			1: force enabled
> +
>  	kvm.ignore_msrs=[KVM] Ignore guest accesses to unhandled MSRs.
>   			Default is 0 (don't ignore, but inject #GP)
>  

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v4 07/12] arm64: add sysfs vulnerability show for meltdown
  2019-01-31  9:28   ` Julien Thierry
@ 2019-01-31 21:48     ` Jeremy Linton
  0 siblings, 0 replies; 35+ messages in thread
From: Jeremy Linton @ 2019-01-31 21:48 UTC (permalink / raw)
  To: Julien Thierry, linux-arm-kernel
  Cc: catalin.marinas, will.deacon, marc.zyngier, suzuki.poulose,
	dave.martin, shankerd, linux-kernel, ykaukab, mlangsdo,
	steven.price, stefan.wahren

Hi,

On 01/31/2019 03:28 AM, Julien Thierry wrote:
> Hi Jeremy,
> 
> On 25/01/2019 18:07, Jeremy Linton wrote:
>> Display the mitigation status if active; otherwise,
>> assume the CPU is safe unless it doesn't have CSV3
>> and isn't in our whitelist.
>>
>> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
>> ---
>>   arch/arm64/kernel/cpufeature.c | 33 +++++++++++++++++++++++++++------
>>   1 file changed, 27 insertions(+), 6 deletions(-)
>>
>> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
>> index a9e18b9cdc1e..624dfe0b5cdd 100644
>> --- a/arch/arm64/kernel/cpufeature.c
>> +++ b/arch/arm64/kernel/cpufeature.c
>> @@ -944,6 +944,8 @@ has_useable_cnp(const struct arm64_cpu_capabilities *entry, int scope)
>>   	return has_cpuid_feature(entry, scope);
>>   }
>>   
>> +/* default value is invalid until unmap_kernel_at_el0() runs */
>> +static bool __meltdown_safe = true;
>>   static int __kpti_forced; /* 0: not forced, >0: forced on, <0: forced off */
>>   
>>   static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
>> @@ -962,6 +964,16 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
>>   		{ /* sentinel */ }
>>   	};
>>   	char const *str = "command line option";
>> +	bool meltdown_safe;
>> +
>> +	meltdown_safe = is_midr_in_range_list(read_cpuid_id(), kpti_safe_list);
>> +
>> +	/* Defer to CPU feature registers */
>> +	if (has_cpuid_feature(entry, scope))
>> +		meltdown_safe = true;
> 
> Do we need to check the cpuid registers if the CPU is in the known safe
> list?

I don't believe so. In the previous version, where this was broken out,
these checks were simply or'ed together (sketched below). Here it seemed
a little cleaner than adding the extra check or or'ing the results,
since we only ever want to set it safe (never the other way around).
Also, I'm running out of horizontal space, and I want to keep the
'defer to registers' comment.
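For reference, the or'ed form under discussion would look roughly like
this (a sketch only, using the names from the quoted hunk):

	meltdown_safe = is_midr_in_range_list(read_cpuid_id(), kpti_safe_list) ||
			has_cpuid_feature(entry, scope);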


> 
> Otherwise:
> 
> Reviewed-by: Julien Thierry <julien.thierry@arm.com>
> 
>> +
>> +	if (!meltdown_safe)
>> +		__meltdown_safe = false;
>>   
>>   	/*
>>   	 * For reasons that aren't entirely clear, enabling KPTI on Cavium
>> @@ -984,12 +996,7 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
>>   	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE))
>>   		return kaslr_offset() > 0;
>>   
>> -	/* Don't force KPTI for CPUs that are not vulnerable */
>> -	if (is_midr_in_range_list(read_cpuid_id(), kpti_safe_list))
>> -		return false;
>> -
>> -	/* Defer to CPU feature registers */
>> -	return !has_cpuid_feature(entry, scope);
>> +	return !meltdown_safe;
>>   }
>>   
>>   static void
>> @@ -2055,3 +2062,17 @@ static int __init enable_mrs_emulation(void)
>>   }
>>   
>>   core_initcall(enable_mrs_emulation);
>> +
>> +#ifdef CONFIG_GENERIC_CPU_VULNERABILITIES
>> +ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr,
>> +		char *buf)
>> +{
>> +	if (arm64_kernel_unmapped_at_el0())
>> +		return sprintf(buf, "Mitigation: KPTI\n");
>> +
>> +	if (__meltdown_safe)
>> +		return sprintf(buf, "Not affected\n");
>> +
>> +	return sprintf(buf, "Vulnerable\n");
>> +}
>> +#endif
>>
> 


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v4 07/12] arm64: add sysfs vulnerability show for meltdown
  2019-01-31 17:54   ` Andre Przywara
@ 2019-01-31 21:53     ` Jeremy Linton
  0 siblings, 0 replies; 35+ messages in thread
From: Jeremy Linton @ 2019-01-31 21:53 UTC (permalink / raw)
  To: Andre Przywara
  Cc: linux-arm-kernel, stefan.wahren, mlangsdo, suzuki.poulose,
	marc.zyngier, catalin.marinas, julien.thierry, will.deacon,
	linux-kernel, steven.price, ykaukab, dave.martin, shankerd

Hi,

On 01/31/2019 11:54 AM, Andre Przywara wrote:
> On Fri, 25 Jan 2019 12:07:06 -0600
> Jeremy Linton <jeremy.linton@arm.com> wrote:
> 
> Hi,
> 
>> Display the mitigation status if active; otherwise,
>> assume the CPU is safe unless it doesn't have CSV3
>> and isn't in our whitelist.
>>
>> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
>> ---
>>   arch/arm64/kernel/cpufeature.c | 33 +++++++++++++++++++++++++++------
>>   1 file changed, 27 insertions(+), 6 deletions(-)
>>
>> diff --git a/arch/arm64/kernel/cpufeature.c
>> b/arch/arm64/kernel/cpufeature.c
>> index a9e18b9cdc1e..624dfe0b5cdd 100644
>> --- a/arch/arm64/kernel/cpufeature.c
>> +++ b/arch/arm64/kernel/cpufeature.c
>> @@ -944,6 +944,8 @@ has_useable_cnp(const struct arm64_cpu_capabilities *entry, int scope)
>> 	return has_cpuid_feature(entry, scope);
>> }
>>   
>> +/* default value is invalid until unmap_kernel_at_el0() runs */
> 
> Shall we somehow enforce this? For instance by making __meltdown_safe
> an enum, initialised to UNKNOWN?

Hehe, well, I think people complained about my "UNKNOWN" enum. But in
the end, this version is trying to make it clear we shouldn't have any
unknown states remaining.

> Then bail out with a BUG_ON or WARN_ON in the sysfs code?

AFAIK, it shouldn't be possible to actually run the sysfs code before
this gets initialized, so the comment is just making that explicit;
the enum variant is sketched below for reference.
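Andre's enum idea would look roughly like this (a sketch only; the
names are illustrative and this is not what v4 does):

	enum meltdown_state { MELTDOWN_UNKNOWN, MELTDOWN_SAFE, MELTDOWN_VULNERABLE };
	static enum meltdown_state __meltdown_state = MELTDOWN_UNKNOWN;

	ssize_t cpu_show_meltdown(struct device *dev,
				  struct device_attribute *attr, char *buf)
	{
		/* Complain loudly if detection never ran */
		if (WARN_ON(__meltdown_state == MELTDOWN_UNKNOWN))
			return sprintf(buf, "Unknown\n");
		if (arm64_kernel_unmapped_at_el0())
			return sprintf(buf, "Mitigation: KPTI\n");
		if (__meltdown_state == MELTDOWN_SAFE)
			return sprintf(buf, "Not affected\n");
		return sprintf(buf, "Vulnerable\n");
	}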


> 
> I just want to avoid accidentally reporting "safe" when we actually
> aren't.
> 
>> +static bool __meltdown_safe = true;
>>   static int __kpti_forced; /* 0: not forced, >0: forced on, <0: forced off */
>>   static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
>> @@ -962,6 +964,16 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
>> 		{ /* sentinel */ }
>>   	};
>>   	char const *str = "command line option";
>> +	bool meltdown_safe;
>> +
>> +	meltdown_safe = is_midr_in_range_list(read_cpuid_id(), kpti_safe_list);
>> +
>> +	/* Defer to CPU feature registers */
>> +	if (has_cpuid_feature(entry, scope))
>> +		meltdown_safe = true;
>> +
>> +	if (!meltdown_safe)
>> +		__meltdown_safe = false;
>>   
>>   	/*
>>   	 * For reasons that aren't entirely clear, enabling KPTI on Cavium
>> @@ -984,12 +996,7 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
>> 	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE))
>> 		return kaslr_offset() > 0;
>>   
>> -	/* Don't force KPTI for CPUs that are not vulnerable */
>> -	if (is_midr_in_range_list(read_cpuid_id(), kpti_safe_list))
>> -		return false;
>> -
>> -	/* Defer to CPU feature registers */
>> -	return !has_cpuid_feature(entry, scope);
>> +	return !meltdown_safe;
>>   }
>>   
>>   static void
>> @@ -2055,3 +2062,17 @@ static int __init enable_mrs_emulation(void)
>>   }
>>   
>>   core_initcall(enable_mrs_emulation);
>> +
>> +#ifdef CONFIG_GENERIC_CPU_VULNERABILITIES
>> +ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr,
>> +		char *buf)
> 
> Whitespace issue.
> 
> Cheers,
> Andre.
> 
>> +{
>> +	if (arm64_kernel_unmapped_at_el0())
>> +		return sprintf(buf, "Mitigation: KPTI\n");
>> +
>> +	if (__meltdown_safe)
>> +		return sprintf(buf, "Not affected\n");
>> +
>> +	return sprintf(buf, "Vulnerable\n");
>> +}
>> +#endif
> 


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v4 01/12] Documentation: Document arm64 kpti control
  2019-01-30 18:02   ` Andre Przywara
@ 2019-02-06 19:24     ` Jeremy Linton
  2019-02-06 21:06       ` André Przywara
  0 siblings, 1 reply; 35+ messages in thread
From: Jeremy Linton @ 2019-02-06 19:24 UTC (permalink / raw)
  To: Andre Przywara
  Cc: linux-arm-kernel, stefan.wahren, Jonathan Corbet, mlangsdo,
	linux-doc, suzuki.poulose, marc.zyngier, catalin.marinas,
	julien.thierry, will.deacon, linux-kernel, steven.price, ykaukab,
	dave.martin, shankerd

Hi,


I just realized I replied to this off-list.

On 01/30/2019 12:02 PM, Andre Przywara wrote:
> On Fri, 25 Jan 2019 12:07:00 -0600
> Jeremy Linton <jeremy.linton@arm.com> wrote:
> 
> Hi,
> 
>> For a while Arm64 has been capable of force enabling
>> or disabling the kpti mitigations. Let's make sure the
>> documentation reflects that.
>>
>> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
>> Cc: Jonathan Corbet <corbet@lwn.net>
>> Cc: linux-doc@vger.kernel.org
>> ---
>>   Documentation/admin-guide/kernel-parameters.txt | 6 ++++++
>>   1 file changed, 6 insertions(+)
>>
>> diff --git a/Documentation/admin-guide/kernel-parameters.txt
>> b/Documentation/admin-guide/kernel-parameters.txt
>> index b799bcf67d7b..9475f02c79da 100644
>> --- a/Documentation/admin-guide/kernel-parameters.txt
>> +++ b/Documentation/admin-guide/kernel-parameters.txt
>> @@ -1982,6 +1982,12 @@
>>  			Built with CONFIG_DEBUG_KMEMLEAK_DEFAULT_OFF=y,
>>  			the default is off.
>>  
>> +	kpti=		[ARM64] Control page table isolation of user
>> +			and kernel address spaces.
>> +			Default: enabled on cores which need mitigation.
> 
> Would this be a good place to mention that we enable it when
> CONFIG_RANDOMIZE_BASE is enabled and we have a valid kaslr_offset? I
> found this somewhat surprising, especially as it's unrelated to the
> vulnerability.

Maybe, but since this command line option forces kpti on/off regardless
of RANDOMIZE_BASE, I tend to think the Kconfig option is a better place
to mention that RANDOMIZE_BASE forces kpti on.

BTW: Thanks for reviewing this.


> 
> Cheers,
> Andre
> 
>> +			0: force disabled
>> +			1: force enabled
>> +
>>  	kvm.ignore_msrs=[KVM] Ignore guest accesses to unhandled MSRs.
>>  			Default is 0 (don't ignore, but inject #GP)
>>   
> 


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v4 01/12] Documentation: Document arm64 kpti control
  2019-02-06 19:24     ` Jeremy Linton
@ 2019-02-06 21:06       ` André Przywara
  0 siblings, 0 replies; 35+ messages in thread
From: André Przywara @ 2019-02-06 21:06 UTC (permalink / raw)
  To: Jeremy Linton
  Cc: linux-arm-kernel, stefan.wahren, Jonathan Corbet, mlangsdo,
	linux-doc, suzuki.poulose, marc.zyngier, catalin.marinas,
	julien.thierry, will.deacon, linux-kernel, steven.price, ykaukab,
	dave.martin, shankerd

On 06/02/2019 19:24, Jeremy Linton wrote:
> Hi,
> 
> 
> I just realized I replied to this off-list.
> 
> On 01/30/2019 12:02 PM, Andre Przywara wrote:
>> On Fri, 25 Jan 2019 12:07:00 -0600
>> Jeremy Linton <jeremy.linton@arm.com> wrote:
>>
>> Hi,
>>
>>> For a while Arm64 has been capable of force enabling
>>> or disabling the kpti mitigations. Let's make sure the
>>> documentation reflects that.
>>>
>>> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
>>> Cc: Jonathan Corbet <corbet@lwn.net>
>>> Cc: linux-doc@vger.kernel.org
>>> ---
>>>   Documentation/admin-guide/kernel-parameters.txt | 6 ++++++
>>>   1 file changed, 6 insertions(+)
>>>
>>> diff --git a/Documentation/admin-guide/kernel-parameters.txt
>>> b/Documentation/admin-guide/kernel-parameters.txt
>>> index b799bcf67d7b..9475f02c79da 100644
>>> --- a/Documentation/admin-guide/kernel-parameters.txt
>>> +++ b/Documentation/admin-guide/kernel-parameters.txt
>>> @@ -1982,6 +1982,12 @@
>>>  			Built with CONFIG_DEBUG_KMEMLEAK_DEFAULT_OFF=y,
>>>  			the default is off.
>>>  
>>> +	kpti=		[ARM64] Control page table isolation of user
>>> +			and kernel address spaces.
>>> +			Default: enabled on cores which need mitigation.
>>
>> Would this be a good place to mention that we enable it when
>> CONFIG_RANDOMIZE_BASE is enabled and we have a valid kaslr_offset? I
>> found this somewhat surprising, especially as it's unrelated to the
>> vulnerability.
> 
> Maybe, but since this command line option forces kpti on/off regardless
> of RANDOMIZE_BASE, I tend to think the Kconfig option is a better place
> to mention that RANDOMIZE_BASE forces kpti on.

True, kpti= takes precedence either way. Disregard my comment then;
this is indeed not the right place to mention RANDOMIZE_BASE.

Cheers,
Andre.

> 
> BTW: Thanks for reviewing this.
> 
> 
>>
>> Cheers,
>> Andre
>>
>>> +			0: force disabled
>>> +			1: force enabled
>>> +
>>> 	kvm.ignore_msrs=[KVM] Ignore guest accesses to unhandled MSRs.
>>> 			Default is 0 (don't ignore, but inject #GP)
>>>   
>>
> 


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v4 01/12] Documentation: Document arm64 kpti control
  2019-01-25 18:07 ` [PATCH v4 01/12] Documentation: Document arm64 kpti control Jeremy Linton
  2019-01-30 18:02   ` Andre Przywara
  2019-01-31 17:58   ` Andre Przywara
@ 2019-02-07  0:25   ` Jonathan Corbet
  2 siblings, 0 replies; 35+ messages in thread
From: Jonathan Corbet @ 2019-02-07  0:25 UTC (permalink / raw)
  To: Jeremy Linton
  Cc: linux-arm-kernel, catalin.marinas, will.deacon, marc.zyngier,
	suzuki.poulose, dave.martin, shankerd, linux-kernel, ykaukab,
	julien.thierry, mlangsdo, steven.price, stefan.wahren, linux-doc

On Fri, 25 Jan 2019 12:07:00 -0600
Jeremy Linton <jeremy.linton@arm.com> wrote:

> For a while Arm64 has been capable of force enabling
> or disabling the kpti mitigations. Let's make sure the
> documentation reflects that.
> 
> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
> Cc: Jonathan Corbet <corbet@lwn.net>
> Cc: linux-doc@vger.kernel.org

I've applied this, thanks.

jon

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v4 00/12] arm64: add system vulnerability sysfs entries
  2019-01-25 18:06 [PATCH v4 00/12] arm64: add system vulnerability sysfs entries Jeremy Linton
                   ` (11 preceding siblings ...)
  2019-01-25 18:07 ` [PATCH v4 12/12] arm64: enable generic CPU vulnerabilities support Jeremy Linton
@ 2019-02-08 20:05 ` Stefan Wahren
  12 siblings, 0 replies; 35+ messages in thread
From: Stefan Wahren @ 2019-02-08 20:05 UTC (permalink / raw)
  To: Jeremy Linton, linux-arm-kernel
  Cc: mlangsdo, suzuki.poulose, marc.zyngier, catalin.marinas,
	julien.thierry, will.deacon, linux-kernel, steven.price, ykaukab,
	dave.martin, shankerd


> Jeremy Linton <jeremy.linton@arm.com> wrote on 25 January 2019 at 19:06:
> 
> 
> Arm64 machines should be displaying a human readable
> vulnerability status to speculative execution attacks in
> /sys/devices/system/cpu/vulnerabilities 
> 
> This series enables that behavior by providing the expected
> functions. Those functions expose the cpu errata and feature
> states, as well as whether firmware is responding appropriately
> to display the overall machine status. This means that in a
> heterogeneous machine we will only claim the machine is mitigated
> or safe if we are confident all booted cores are safe or
> mitigated.
> 

The whole series is:

Tested-by: Stefan Wahren <stefan.wahren@i2se.com>

with a Raspberry Pi 3 B+

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v4 03/12] arm64: Remove the ability to build a kernel without ssbd
  2019-01-30 18:04   ` Andre Przywara
@ 2019-02-15 18:20     ` Catalin Marinas
  2019-02-15 18:54       ` Jeremy Linton
  0 siblings, 1 reply; 35+ messages in thread
From: Catalin Marinas @ 2019-02-15 18:20 UTC (permalink / raw)
  To: Andre Przywara
  Cc: Jeremy Linton, linux-arm-kernel, stefan.wahren, mlangsdo,
	suzuki.poulose, marc.zyngier, julien.thierry, will.deacon,
	linux-kernel, steven.price, Christoffer Dall, kvmarm, ykaukab,
	dave.martin, shankerd

On Wed, Jan 30, 2019 at 06:04:15PM +0000, Andre Przywara wrote:
> On Fri, 25 Jan 2019 12:07:02 -0600
> Jeremy Linton <jeremy.linton@arm.com> wrote:
> > Buried behind EXPERT is the ability to build a kernel without
> > SSBD. This needlessly clutters up the code and creates the
> > opportunity for bugs. It also removes the kernel's ability to
> > determine whether the machine it's running on is vulnerable.
> 
> I don't know the original motivation for this config option; typically
> they are not around for no reason.
> I see the benefit of dropping those config options, but we want to make
> sure that people don't start hacking around to remove them again.
> 
> > Since it's also possible to disable it at boot time, let's remove
> > the config option.
> 
> Given the level of optimisation a compiler can do with the state being
> known at compile time, I would imagine that it's not the same (though
> probably very close).
> 
> But that's not my call; it would be good to hear some maintainer's
> opinion on this.

Having spoken to Will, we'd rather keep the config options if possible.
Even if they are behind EXPERT and default y, they come in handy when
debugging.

Can we still have the sysfs information regardless of whether the config
is enabled or not? IOW, move the #ifdefs around to always have the
detection while being able to disable the actual workarounds via config?

Are the code paths between config and cmdline disabling identical? At a
quick look I got the impression they are not exactly the same.

-- 
Catalin

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v4 03/12] arm64: Remove the ability to build a kernel without ssbd
  2019-02-15 18:20     ` Catalin Marinas
@ 2019-02-15 18:54       ` Jeremy Linton
  0 siblings, 0 replies; 35+ messages in thread
From: Jeremy Linton @ 2019-02-15 18:54 UTC (permalink / raw)
  To: Catalin Marinas, Andre Przywara
  Cc: linux-arm-kernel, stefan.wahren, mlangsdo, suzuki.poulose,
	marc.zyngier, julien.thierry, will.deacon, linux-kernel,
	steven.price, Christoffer Dall, kvmarm, ykaukab, dave.martin,
	shankerd

Hi,


Thanks for taking a look at this:

On 2/15/19 12:20 PM, Catalin Marinas wrote:
> On Wed, Jan 30, 2019 at 06:04:15PM +0000, Andre Przywara wrote:
>> On Fri, 25 Jan 2019 12:07:02 -0600
>> Jeremy Linton <jeremy.linton@arm.com> wrote:
>>> Buried behind EXPERT is the ability to build a kernel without
>>> SSBD. This needlessly clutters up the code and creates the
>>> opportunity for bugs. It also removes the kernel's ability to
>>> determine whether the machine it's running on is vulnerable.
>>
>> I don't know the original motivation for this config option; typically
>> they are not around for no reason.
>> I see the benefit of dropping those config options, but we want to make
>> sure that people don't start hacking around to remove them again.
>>
>>> Since it's also possible to disable it at boot time, let's remove
>>> the config option.
>>
>> Given the level of optimisation a compiler can do with the state being
>> known at compile time, I would imagine that it's not the same (though
>> probably very close).
>>
>> But that's not my call; it would be good to hear some maintainer's
>> opinion on this.
> 
> Having spoken to Will, we'd rather keep the config options if possible.
> Even if they are behind EXPERT and default y, they come in handy when
> debugging.
> 
> Can we still have the sysfs information regardless of whether the config
> is enabled or not? IOW, move the #ifdefs around to always have the
> detection while being able to disable the actual workarounds via config?

Yes, that is possible, but the ifdef'ing gets even worse (see v3).

> Are the code paths between config and cmdline disabling identical? At a
> quick look I got the impression they are not exactly the same.

No, they do vary slightly. For debugging, I would expect the
CONFIG-disabled code paths to be the ones that accumulate bugs over
time. The command line options just force the runtime
vulnerable/not-vulnerable decision, which exercises the code paths in
general use. For benchmarking, the run-time options are also a better
choice because they don't have any second-order effects caused by code
alignment/etc. changes.

Maybe you're implying the CONFIG_ options should basically force the
command line? That both reduces the code paths and simplifies the
ifdef'ing (sketched below).
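To make that concrete, one possible shape (an illustrative sketch only;
no version of the series does this, and the option named is the one
patch 04 removes):

	/* Kconfig merely presets the runtime toggle added in patch 02 */
	static int __nospectre_v2 = !IS_ENABLED(CONFIG_HARDEN_BRANCH_PREDICTOR);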




^ permalink raw reply	[flat|nested] 35+ messages in thread

end of thread, other threads:[~2019-02-15 18:54 UTC | newest]

Thread overview: 35+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-01-25 18:06 [PATCH v4 00/12] arm64: add system vulnerability sysfs entries Jeremy Linton
2019-01-25 18:07 ` [PATCH v4 01/12] Documentation: Document arm64 kpti control Jeremy Linton
2019-01-30 18:02   ` Andre Przywara
2019-02-06 19:24     ` Jeremy Linton
2019-02-06 21:06       ` André Przywara
2019-01-31 17:58   ` Andre Przywara
2019-02-07  0:25   ` Jonathan Corbet
2019-01-25 18:07 ` [PATCH v4 02/12] arm64: Provide a command line to disable spectre_v2 mitigation Jeremy Linton
2019-01-30 18:03   ` Andre Przywara
2019-01-25 18:07 ` [PATCH v4 03/12] arm64: Remove the ability to build a kernel without ssbd Jeremy Linton
2019-01-30 18:04   ` Andre Przywara
2019-02-15 18:20     ` Catalin Marinas
2019-02-15 18:54       ` Jeremy Linton
2019-01-25 18:07 ` [PATCH v4 04/12] arm64: remove the ability to build a kernel without hardened branch predictors Jeremy Linton
2019-01-30 18:04   ` Andre Przywara
2019-01-25 18:07 ` [PATCH v4 05/12] arm64: remove the ability to build a kernel without kpti Jeremy Linton
2019-01-30 18:05   ` Andre Przywara
2019-01-25 18:07 ` [PATCH v4 06/12] arm64: add sysfs vulnerability show for spectre v1 Jeremy Linton
2019-01-31 17:52   ` Andre Przywara
2019-01-25 18:07 ` [PATCH v4 07/12] arm64: add sysfs vulnerability show for meltdown Jeremy Linton
2019-01-31  9:28   ` Julien Thierry
2019-01-31 21:48     ` Jeremy Linton
2019-01-31 17:54   ` Andre Przywara
2019-01-31 21:53     ` Jeremy Linton
2019-01-25 18:07 ` [PATCH v4 08/12] arm64: Advertise mitigation of Spectre-v2, or lack thereof Jeremy Linton
2019-01-31 17:54   ` Andre Przywara
2019-01-25 18:07 ` [PATCH v4 09/12] arm64: Use firmware to detect CPUs that are not affected by Spectre-v2 Jeremy Linton
2019-01-31 17:55   ` Andre Przywara
2019-01-25 18:07 ` [PATCH v4 10/12] arm64: add sysfs vulnerability show for spectre v2 Jeremy Linton
2019-01-31 17:55   ` Andre Przywara
2019-01-25 18:07 ` [PATCH v4 11/12] arm64: add sysfs vulnerability show for speculative store bypass Jeremy Linton
2019-01-31 17:55   ` Andre Przywara
2019-01-25 18:07 ` [PATCH v4 12/12] arm64: enable generic CPU vulnerabilities support Jeremy Linton
2019-01-31 17:56   ` Andre Przywara
2019-02-08 20:05 ` [PATCH v4 00/12] arm64: add system vulnerability sysfs entries Stefan Wahren
