All of lore.kernel.org
 help / color / mirror / Atom feed
* [PATCH 00/14] arm64 SSBD (aka Spectre-v4) mitigation
@ 2018-05-22 15:06 ` Marc Zyngier
  0 siblings, 0 replies; 110+ messages in thread
From: Marc Zyngier @ 2018-05-22 15:06 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel, kvmarm
  Cc: Will Deacon, Catalin Marinas, Thomas Gleixner, Andy Lutomirski,
	Kees Cook, Greg Kroah-Hartman, Christoffer Dall

Hi all,

This patch series implements the Linux kernel side of the "Spectre-v4"
(CVE-2018-3639) mitigation known as "Speculative Store Bypass Disable"
(SSBD).

More information can be found at:

  https://bugs.chromium.org/p/project-zero/issues/detail?id=1528
  https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability

For all released Arm Cortex-A CPUs that are affected by this issue, then
the preferred mitigation is simply to set a chicken bit in the firmware
during CPU initialisation and therefore no change to Linux is required.
Other CPUs may require the chicken bit to be toggled dynamically (for
example, when switching between user-mode and kernel-mode) and this is
achieved by calling into EL3 via an SMC which has been published as part
of the latest SMCCC specification:

  https://developer.arm.com/cache-speculation-vulnerability-firmware-specification

as well as an ATF update for the released ARM cores affected by SSDB:

  https://github.com/ARM-software/arm-trusted-firmware/pull/1392

These patches provide the following:

  1. Safe probing of firmware to establish which CPUs in the system
     require calling into EL3 as part of the mitigation.

  2. For CPUs that require it, call into EL3 on exception entry/exit
     from EL0 to apply the SSBD mitigation when running at EL1.

  3. A command-line option to force the SSBD mitigation to be always on,
     always off, or dymamically toggled (default) for CPUs that require
     the EL3 call.

  4. An initial implementation of a prctl() backend for arm64 that allows
     userspace tasks to opt-in to the mitigation explicitly. This is
     intended to match the interface provided by x86, and so we rely on
     their core changes here. There still is an annoying issue with
     multithreaded seccomp tasks that get flagged with the mitigation
     whilst they are running in userspace.

  5. An initial implementation of the call via KVM, which exposes the
     mitigation to the guest via an HVC interface. This isn't yet
     complete and doesn't include save/restore functionality for the
     workaround state.

All comments welcome,

	M.

Marc Zyngier (14):
  arm/arm64: smccc: Add SMCCC-specific return codes
  arm64: Call ARCH_WORKAROUND_2 on transitions between EL0 and EL1
  arm64: Add per-cpu infrastructure to call ARCH_WORKAROUND_2
  arm64: Add ARCH_WORKAROUND_2 probing
  arm64: Add 'ssbd' command-line option
  arm64: ssbd: Add global mitigation state accessor
  arm64: ssbd: Skip apply_ssbd if not using dynamic mitigation
  arm64: ssbd: Disable mitigation on CPU resume if required by user
  arm64: ssbd: Introduce thread flag to control userspace mitigation
  arm64: ssbd: Add prctl interface for per-thread mitigation
  arm64: KVM: Add HYP per-cpu accessors
  arm64: KVM: Add ARCH_WORKAROUND_2 support for guests
  arm64: KVM: Handle guest's ARCH_WORKAROUND_2 requests
  arm64: KVM: Add ARCH_WORKAROUND_2 discovery through
    ARCH_FEATURES_FUNC_ID

 Documentation/admin-guide/kernel-parameters.txt |  17 +++
 arch/arm/include/asm/kvm_host.h                 |  12 ++
 arch/arm/include/asm/kvm_mmu.h                  |   5 +
 arch/arm64/Kconfig                              |   9 ++
 arch/arm64/include/asm/cpucaps.h                |   3 +-
 arch/arm64/include/asm/cpufeature.h             |  22 +++
 arch/arm64/include/asm/kvm_asm.h                |  30 +++-
 arch/arm64/include/asm/kvm_host.h               |  26 ++++
 arch/arm64/include/asm/kvm_mmu.h                |  24 ++++
 arch/arm64/include/asm/thread_info.h            |   1 +
 arch/arm64/kernel/Makefile                      |   1 +
 arch/arm64/kernel/asm-offsets.c                 |   1 +
 arch/arm64/kernel/cpu_errata.c                  | 173 ++++++++++++++++++++++++
 arch/arm64/kernel/entry.S                       |  30 ++++
 arch/arm64/kernel/ssbd.c                        | 107 +++++++++++++++
 arch/arm64/kernel/suspend.c                     |   8 ++
 arch/arm64/kvm/hyp/hyp-entry.S                  |  38 +++++-
 arch/arm64/kvm/hyp/switch.c                     |  42 ++++++
 arch/arm64/kvm/reset.c                          |   4 +
 include/linux/arm-smccc.h                       |  10 ++
 virt/kvm/arm/arm.c                              |   4 +
 virt/kvm/arm/psci.c                             |  18 ++-
 22 files changed, 579 insertions(+), 6 deletions(-)
 create mode 100644 arch/arm64/kernel/ssbd.c

-- 
2.14.2

^ permalink raw reply	[flat|nested] 110+ messages in thread

* [PATCH 00/14] arm64 SSBD (aka Spectre-v4) mitigation
@ 2018-05-22 15:06 ` Marc Zyngier
  0 siblings, 0 replies; 110+ messages in thread
From: Marc Zyngier @ 2018-05-22 15:06 UTC (permalink / raw)
  To: linux-arm-kernel

Hi all,

This patch series implements the Linux kernel side of the "Spectre-v4"
(CVE-2018-3639) mitigation known as "Speculative Store Bypass Disable"
(SSBD).

More information can be found at:

  https://bugs.chromium.org/p/project-zero/issues/detail?id=1528
  https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability

For all released Arm Cortex-A CPUs that are affected by this issue, then
the preferred mitigation is simply to set a chicken bit in the firmware
during CPU initialisation and therefore no change to Linux is required.
Other CPUs may require the chicken bit to be toggled dynamically (for
example, when switching between user-mode and kernel-mode) and this is
achieved by calling into EL3 via an SMC which has been published as part
of the latest SMCCC specification:

  https://developer.arm.com/cache-speculation-vulnerability-firmware-specification

as well as an ATF update for the released ARM cores affected by SSDB:

  https://github.com/ARM-software/arm-trusted-firmware/pull/1392

These patches provide the following:

  1. Safe probing of firmware to establish which CPUs in the system
     require calling into EL3 as part of the mitigation.

  2. For CPUs that require it, call into EL3 on exception entry/exit
     from EL0 to apply the SSBD mitigation when running at EL1.

  3. A command-line option to force the SSBD mitigation to be always on,
     always off, or dymamically toggled (default) for CPUs that require
     the EL3 call.

  4. An initial implementation of a prctl() backend for arm64 that allows
     userspace tasks to opt-in to the mitigation explicitly. This is
     intended to match the interface provided by x86, and so we rely on
     their core changes here. There still is an annoying issue with
     multithreaded seccomp tasks that get flagged with the mitigation
     whilst they are running in userspace.

  5. An initial implementation of the call via KVM, which exposes the
     mitigation to the guest via an HVC interface. This isn't yet
     complete and doesn't include save/restore functionality for the
     workaround state.

All comments welcome,

	M.

Marc Zyngier (14):
  arm/arm64: smccc: Add SMCCC-specific return codes
  arm64: Call ARCH_WORKAROUND_2 on transitions between EL0 and EL1
  arm64: Add per-cpu infrastructure to call ARCH_WORKAROUND_2
  arm64: Add ARCH_WORKAROUND_2 probing
  arm64: Add 'ssbd' command-line option
  arm64: ssbd: Add global mitigation state accessor
  arm64: ssbd: Skip apply_ssbd if not using dynamic mitigation
  arm64: ssbd: Disable mitigation on CPU resume if required by user
  arm64: ssbd: Introduce thread flag to control userspace mitigation
  arm64: ssbd: Add prctl interface for per-thread mitigation
  arm64: KVM: Add HYP per-cpu accessors
  arm64: KVM: Add ARCH_WORKAROUND_2 support for guests
  arm64: KVM: Handle guest's ARCH_WORKAROUND_2 requests
  arm64: KVM: Add ARCH_WORKAROUND_2 discovery through
    ARCH_FEATURES_FUNC_ID

 Documentation/admin-guide/kernel-parameters.txt |  17 +++
 arch/arm/include/asm/kvm_host.h                 |  12 ++
 arch/arm/include/asm/kvm_mmu.h                  |   5 +
 arch/arm64/Kconfig                              |   9 ++
 arch/arm64/include/asm/cpucaps.h                |   3 +-
 arch/arm64/include/asm/cpufeature.h             |  22 +++
 arch/arm64/include/asm/kvm_asm.h                |  30 +++-
 arch/arm64/include/asm/kvm_host.h               |  26 ++++
 arch/arm64/include/asm/kvm_mmu.h                |  24 ++++
 arch/arm64/include/asm/thread_info.h            |   1 +
 arch/arm64/kernel/Makefile                      |   1 +
 arch/arm64/kernel/asm-offsets.c                 |   1 +
 arch/arm64/kernel/cpu_errata.c                  | 173 ++++++++++++++++++++++++
 arch/arm64/kernel/entry.S                       |  30 ++++
 arch/arm64/kernel/ssbd.c                        | 107 +++++++++++++++
 arch/arm64/kernel/suspend.c                     |   8 ++
 arch/arm64/kvm/hyp/hyp-entry.S                  |  38 +++++-
 arch/arm64/kvm/hyp/switch.c                     |  42 ++++++
 arch/arm64/kvm/reset.c                          |   4 +
 include/linux/arm-smccc.h                       |  10 ++
 virt/kvm/arm/arm.c                              |   4 +
 virt/kvm/arm/psci.c                             |  18 ++-
 22 files changed, 579 insertions(+), 6 deletions(-)
 create mode 100644 arch/arm64/kernel/ssbd.c

-- 
2.14.2

^ permalink raw reply	[flat|nested] 110+ messages in thread

* [PATCH 01/14] arm/arm64: smccc: Add SMCCC-specific return codes
  2018-05-22 15:06 ` Marc Zyngier
  (?)
@ 2018-05-22 15:06   ` Marc Zyngier
  -1 siblings, 0 replies; 110+ messages in thread
From: Marc Zyngier @ 2018-05-22 15:06 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel, kvmarm
  Cc: Will Deacon, Catalin Marinas, Thomas Gleixner, Andy Lutomirski,
	Kees Cook, Greg Kroah-Hartman, Christoffer Dall

We've so far used the PSCI return codes for SMCCC because they
were extremely similar. But with the new ARM DEN 0070A specification,
"NOT_REQUIRED" (-2) is clashing with PSCI's "PSCI_RET_INVALID_PARAMS".

Let's bite the bullet and add SMCCC specific return codes. Users
can be repainted as and when required.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 include/linux/arm-smccc.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/include/linux/arm-smccc.h b/include/linux/arm-smccc.h
index a031897fca76..c89da86de99f 100644
--- a/include/linux/arm-smccc.h
+++ b/include/linux/arm-smccc.h
@@ -291,5 +291,10 @@ asmlinkage void __arm_smccc_hvc(unsigned long a0, unsigned long a1,
  */
 #define arm_smccc_1_1_hvc(...)	__arm_smccc_1_1(SMCCC_HVC_INST, __VA_ARGS__)
 
+/* Return codes defined in ARM DEN 0070A */
+#define SMCCC_RET_SUCCESS			0
+#define SMCCC_RET_NOT_SUPPORTED			-1
+#define SMCCC_RET_NOT_REQUIRED			-2
+
 #endif /*__ASSEMBLY__*/
 #endif /*__LINUX_ARM_SMCCC_H*/
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 110+ messages in thread

* [PATCH 01/14] arm/arm64: smccc: Add SMCCC-specific return codes
@ 2018-05-22 15:06   ` Marc Zyngier
  0 siblings, 0 replies; 110+ messages in thread
From: Marc Zyngier @ 2018-05-22 15:06 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel, kvmarm
  Cc: Kees Cook, Catalin Marinas, Will Deacon, Christoffer Dall,
	Andy Lutomirski, Greg Kroah-Hartman, Thomas Gleixner

We've so far used the PSCI return codes for SMCCC because they
were extremely similar. But with the new ARM DEN 0070A specification,
"NOT_REQUIRED" (-2) is clashing with PSCI's "PSCI_RET_INVALID_PARAMS".

Let's bite the bullet and add SMCCC specific return codes. Users
can be repainted as and when required.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 include/linux/arm-smccc.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/include/linux/arm-smccc.h b/include/linux/arm-smccc.h
index a031897fca76..c89da86de99f 100644
--- a/include/linux/arm-smccc.h
+++ b/include/linux/arm-smccc.h
@@ -291,5 +291,10 @@ asmlinkage void __arm_smccc_hvc(unsigned long a0, unsigned long a1,
  */
 #define arm_smccc_1_1_hvc(...)	__arm_smccc_1_1(SMCCC_HVC_INST, __VA_ARGS__)
 
+/* Return codes defined in ARM DEN 0070A */
+#define SMCCC_RET_SUCCESS			0
+#define SMCCC_RET_NOT_SUPPORTED			-1
+#define SMCCC_RET_NOT_REQUIRED			-2
+
 #endif /*__ASSEMBLY__*/
 #endif /*__LINUX_ARM_SMCCC_H*/
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 110+ messages in thread

* [PATCH 01/14] arm/arm64: smccc: Add SMCCC-specific return codes
@ 2018-05-22 15:06   ` Marc Zyngier
  0 siblings, 0 replies; 110+ messages in thread
From: Marc Zyngier @ 2018-05-22 15:06 UTC (permalink / raw)
  To: linux-arm-kernel

We've so far used the PSCI return codes for SMCCC because they
were extremely similar. But with the new ARM DEN 0070A specification,
"NOT_REQUIRED" (-2) is clashing with PSCI's "PSCI_RET_INVALID_PARAMS".

Let's bite the bullet and add SMCCC specific return codes. Users
can be repainted as and when required.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 include/linux/arm-smccc.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/include/linux/arm-smccc.h b/include/linux/arm-smccc.h
index a031897fca76..c89da86de99f 100644
--- a/include/linux/arm-smccc.h
+++ b/include/linux/arm-smccc.h
@@ -291,5 +291,10 @@ asmlinkage void __arm_smccc_hvc(unsigned long a0, unsigned long a1,
  */
 #define arm_smccc_1_1_hvc(...)	__arm_smccc_1_1(SMCCC_HVC_INST, __VA_ARGS__)
 
+/* Return codes defined in ARM DEN 0070A */
+#define SMCCC_RET_SUCCESS			0
+#define SMCCC_RET_NOT_SUPPORTED			-1
+#define SMCCC_RET_NOT_REQUIRED			-2
+
 #endif /*__ASSEMBLY__*/
 #endif /*__LINUX_ARM_SMCCC_H*/
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 110+ messages in thread

* [PATCH 02/14] arm64: Call ARCH_WORKAROUND_2 on transitions between EL0 and EL1
  2018-05-22 15:06 ` Marc Zyngier
  (?)
@ 2018-05-22 15:06   ` Marc Zyngier
  -1 siblings, 0 replies; 110+ messages in thread
From: Marc Zyngier @ 2018-05-22 15:06 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel, kvmarm
  Cc: Will Deacon, Catalin Marinas, Thomas Gleixner, Andy Lutomirski,
	Kees Cook, Greg Kroah-Hartman, Christoffer Dall

In order for the kernel to protect itself, let's call the SSBD mitigation
implemented by the higher exception level (either hypervisor or firmware)
on each transition between userspace and kernel.

We must take the PSCI conduit into account in order to target the
right exception level, hence the introduction of a runtime patching
callback.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/kernel/cpu_errata.c | 18 ++++++++++++++++++
 arch/arm64/kernel/entry.S      | 22 ++++++++++++++++++++++
 include/linux/arm-smccc.h      |  5 +++++
 3 files changed, 45 insertions(+)

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index a900befadfe8..46b3aafb631a 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -232,6 +232,24 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
 }
 #endif	/* CONFIG_HARDEN_BRANCH_PREDICTOR */
 
+#ifdef CONFIG_ARM64_SSBD
+void __init arm64_update_smccc_conduit(struct alt_instr *alt,
+				       __le32 *origptr, __le32 *updptr,
+				       int nr_inst)
+{
+	u32 insn;
+
+	BUG_ON(nr_inst != 1);
+
+	if (psci_ops.conduit == PSCI_CONDUIT_HVC)
+		insn = aarch64_insn_get_hvc_value();
+	else
+		insn = aarch64_insn_get_smc_value();
+
+	*updptr = cpu_to_le32(insn);
+}
+#endif	/* CONFIG_ARM64_SSBD */
+
 #define CAP_MIDR_RANGE(model, v_min, r_min, v_max, r_max)	\
 	.matches = is_affected_midr_range,			\
 	.midr_range = MIDR_RANGE(model, v_min, r_min, v_max, r_max)
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index ec2ee720e33e..f33e6aed3037 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -18,6 +18,7 @@
  * along with this program.  If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include <linux/arm-smccc.h>
 #include <linux/init.h>
 #include <linux/linkage.h>
 
@@ -137,6 +138,18 @@ alternative_else_nop_endif
 	add	\dst, \dst, #(\sym - .entry.tramp.text)
 	.endm
 
+	// This macro corrupts x0-x3. It is the caller's duty
+	// to save/restore them if required.
+	.macro	apply_ssbd, state
+#ifdef CONFIG_ARM64_SSBD
+	mov	w0, #ARM_SMCCC_ARCH_WORKAROUND_2
+	mov	w1, #\state
+alternative_cb	arm64_update_smccc_conduit
+	nop					// Patched to SMC/HVC #0
+alternative_cb_end
+#endif
+	.endm
+
 	.macro	kernel_entry, el, regsize = 64
 	.if	\regsize == 32
 	mov	w0, w0				// zero upper 32 bits of x0
@@ -163,6 +176,13 @@ alternative_else_nop_endif
 	ldr	x19, [tsk, #TSK_TI_FLAGS]	// since we can unmask debug
 	disable_step_tsk x19, x20		// exceptions when scheduling.
 
+	apply_ssbd 1
+
+#ifdef CONFIG_ARM64_SSBD
+	ldp	x0, x1, [sp, #16 * 0]
+	ldp	x2, x3, [sp, #16 * 1]
+#endif
+
 	mov	x29, xzr			// fp pointed to user-space
 	.else
 	add	x21, sp, #S_FRAME_SIZE
@@ -303,6 +323,8 @@ alternative_if ARM64_WORKAROUND_845719
 alternative_else_nop_endif
 #endif
 3:
+	apply_ssbd 0
+
 	.endif
 
 	msr	elr_el1, x21			// set up the return data
diff --git a/include/linux/arm-smccc.h b/include/linux/arm-smccc.h
index c89da86de99f..ca1d2cc2cdfa 100644
--- a/include/linux/arm-smccc.h
+++ b/include/linux/arm-smccc.h
@@ -80,6 +80,11 @@
 			   ARM_SMCCC_SMC_32,				\
 			   0, 0x8000)
 
+#define ARM_SMCCC_ARCH_WORKAROUND_2					\
+	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,				\
+			   ARM_SMCCC_SMC_32,				\
+			   0, 0x7fff)
+
 #ifndef __ASSEMBLY__
 
 #include <linux/linkage.h>
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 110+ messages in thread

* [PATCH 02/14] arm64: Call ARCH_WORKAROUND_2 on transitions between EL0 and EL1
@ 2018-05-22 15:06   ` Marc Zyngier
  0 siblings, 0 replies; 110+ messages in thread
From: Marc Zyngier @ 2018-05-22 15:06 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel, kvmarm
  Cc: Kees Cook, Catalin Marinas, Will Deacon, Andy Lutomirski,
	Greg Kroah-Hartman, Thomas Gleixner

In order for the kernel to protect itself, let's call the SSBD mitigation
implemented by the higher exception level (either hypervisor or firmware)
on each transition between userspace and kernel.

We must take the PSCI conduit into account in order to target the
right exception level, hence the introduction of a runtime patching
callback.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/kernel/cpu_errata.c | 18 ++++++++++++++++++
 arch/arm64/kernel/entry.S      | 22 ++++++++++++++++++++++
 include/linux/arm-smccc.h      |  5 +++++
 3 files changed, 45 insertions(+)

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index a900befadfe8..46b3aafb631a 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -232,6 +232,24 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
 }
 #endif	/* CONFIG_HARDEN_BRANCH_PREDICTOR */
 
+#ifdef CONFIG_ARM64_SSBD
+void __init arm64_update_smccc_conduit(struct alt_instr *alt,
+				       __le32 *origptr, __le32 *updptr,
+				       int nr_inst)
+{
+	u32 insn;
+
+	BUG_ON(nr_inst != 1);
+
+	if (psci_ops.conduit == PSCI_CONDUIT_HVC)
+		insn = aarch64_insn_get_hvc_value();
+	else
+		insn = aarch64_insn_get_smc_value();
+
+	*updptr = cpu_to_le32(insn);
+}
+#endif	/* CONFIG_ARM64_SSBD */
+
 #define CAP_MIDR_RANGE(model, v_min, r_min, v_max, r_max)	\
 	.matches = is_affected_midr_range,			\
 	.midr_range = MIDR_RANGE(model, v_min, r_min, v_max, r_max)
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index ec2ee720e33e..f33e6aed3037 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -18,6 +18,7 @@
  * along with this program.  If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include <linux/arm-smccc.h>
 #include <linux/init.h>
 #include <linux/linkage.h>
 
@@ -137,6 +138,18 @@ alternative_else_nop_endif
 	add	\dst, \dst, #(\sym - .entry.tramp.text)
 	.endm
 
+	// This macro corrupts x0-x3. It is the caller's duty
+	// to save/restore them if required.
+	.macro	apply_ssbd, state
+#ifdef CONFIG_ARM64_SSBD
+	mov	w0, #ARM_SMCCC_ARCH_WORKAROUND_2
+	mov	w1, #\state
+alternative_cb	arm64_update_smccc_conduit
+	nop					// Patched to SMC/HVC #0
+alternative_cb_end
+#endif
+	.endm
+
 	.macro	kernel_entry, el, regsize = 64
 	.if	\regsize == 32
 	mov	w0, w0				// zero upper 32 bits of x0
@@ -163,6 +176,13 @@ alternative_else_nop_endif
 	ldr	x19, [tsk, #TSK_TI_FLAGS]	// since we can unmask debug
 	disable_step_tsk x19, x20		// exceptions when scheduling.
 
+	apply_ssbd 1
+
+#ifdef CONFIG_ARM64_SSBD
+	ldp	x0, x1, [sp, #16 * 0]
+	ldp	x2, x3, [sp, #16 * 1]
+#endif
+
 	mov	x29, xzr			// fp pointed to user-space
 	.else
 	add	x21, sp, #S_FRAME_SIZE
@@ -303,6 +323,8 @@ alternative_if ARM64_WORKAROUND_845719
 alternative_else_nop_endif
 #endif
 3:
+	apply_ssbd 0
+
 	.endif
 
 	msr	elr_el1, x21			// set up the return data
diff --git a/include/linux/arm-smccc.h b/include/linux/arm-smccc.h
index c89da86de99f..ca1d2cc2cdfa 100644
--- a/include/linux/arm-smccc.h
+++ b/include/linux/arm-smccc.h
@@ -80,6 +80,11 @@
 			   ARM_SMCCC_SMC_32,				\
 			   0, 0x8000)
 
+#define ARM_SMCCC_ARCH_WORKAROUND_2					\
+	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,				\
+			   ARM_SMCCC_SMC_32,				\
+			   0, 0x7fff)
+
 #ifndef __ASSEMBLY__
 
 #include <linux/linkage.h>
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 110+ messages in thread

* [PATCH 02/14] arm64: Call ARCH_WORKAROUND_2 on transitions between EL0 and EL1
@ 2018-05-22 15:06   ` Marc Zyngier
  0 siblings, 0 replies; 110+ messages in thread
From: Marc Zyngier @ 2018-05-22 15:06 UTC (permalink / raw)
  To: linux-arm-kernel

In order for the kernel to protect itself, let's call the SSBD mitigation
implemented by the higher exception level (either hypervisor or firmware)
on each transition between userspace and kernel.

We must take the PSCI conduit into account in order to target the
right exception level, hence the introduction of a runtime patching
callback.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/kernel/cpu_errata.c | 18 ++++++++++++++++++
 arch/arm64/kernel/entry.S      | 22 ++++++++++++++++++++++
 include/linux/arm-smccc.h      |  5 +++++
 3 files changed, 45 insertions(+)

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index a900befadfe8..46b3aafb631a 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -232,6 +232,24 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
 }
 #endif	/* CONFIG_HARDEN_BRANCH_PREDICTOR */
 
+#ifdef CONFIG_ARM64_SSBD
+void __init arm64_update_smccc_conduit(struct alt_instr *alt,
+				       __le32 *origptr, __le32 *updptr,
+				       int nr_inst)
+{
+	u32 insn;
+
+	BUG_ON(nr_inst != 1);
+
+	if (psci_ops.conduit == PSCI_CONDUIT_HVC)
+		insn = aarch64_insn_get_hvc_value();
+	else
+		insn = aarch64_insn_get_smc_value();
+
+	*updptr = cpu_to_le32(insn);
+}
+#endif	/* CONFIG_ARM64_SSBD */
+
 #define CAP_MIDR_RANGE(model, v_min, r_min, v_max, r_max)	\
 	.matches = is_affected_midr_range,			\
 	.midr_range = MIDR_RANGE(model, v_min, r_min, v_max, r_max)
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index ec2ee720e33e..f33e6aed3037 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -18,6 +18,7 @@
  * along with this program.  If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include <linux/arm-smccc.h>
 #include <linux/init.h>
 #include <linux/linkage.h>
 
@@ -137,6 +138,18 @@ alternative_else_nop_endif
 	add	\dst, \dst, #(\sym - .entry.tramp.text)
 	.endm
 
+	// This macro corrupts x0-x3. It is the caller's duty
+	// to save/restore them if required.
+	.macro	apply_ssbd, state
+#ifdef CONFIG_ARM64_SSBD
+	mov	w0, #ARM_SMCCC_ARCH_WORKAROUND_2
+	mov	w1, #\state
+alternative_cb	arm64_update_smccc_conduit
+	nop					// Patched to SMC/HVC #0
+alternative_cb_end
+#endif
+	.endm
+
 	.macro	kernel_entry, el, regsize = 64
 	.if	\regsize == 32
 	mov	w0, w0				// zero upper 32 bits of x0
@@ -163,6 +176,13 @@ alternative_else_nop_endif
 	ldr	x19, [tsk, #TSK_TI_FLAGS]	// since we can unmask debug
 	disable_step_tsk x19, x20		// exceptions when scheduling.
 
+	apply_ssbd 1
+
+#ifdef CONFIG_ARM64_SSBD
+	ldp	x0, x1, [sp, #16 * 0]
+	ldp	x2, x3, [sp, #16 * 1]
+#endif
+
 	mov	x29, xzr			// fp pointed to user-space
 	.else
 	add	x21, sp, #S_FRAME_SIZE
@@ -303,6 +323,8 @@ alternative_if ARM64_WORKAROUND_845719
 alternative_else_nop_endif
 #endif
 3:
+	apply_ssbd 0
+
 	.endif
 
 	msr	elr_el1, x21			// set up the return data
diff --git a/include/linux/arm-smccc.h b/include/linux/arm-smccc.h
index c89da86de99f..ca1d2cc2cdfa 100644
--- a/include/linux/arm-smccc.h
+++ b/include/linux/arm-smccc.h
@@ -80,6 +80,11 @@
 			   ARM_SMCCC_SMC_32,				\
 			   0, 0x8000)
 
+#define ARM_SMCCC_ARCH_WORKAROUND_2					\
+	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,				\
+			   ARM_SMCCC_SMC_32,				\
+			   0, 0x7fff)
+
 #ifndef __ASSEMBLY__
 
 #include <linux/linkage.h>
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 110+ messages in thread

* [PATCH 03/14] arm64: Add per-cpu infrastructure to call ARCH_WORKAROUND_2
  2018-05-22 15:06 ` Marc Zyngier
@ 2018-05-22 15:06   ` Marc Zyngier
  -1 siblings, 0 replies; 110+ messages in thread
From: Marc Zyngier @ 2018-05-22 15:06 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel, kvmarm
  Cc: Will Deacon, Catalin Marinas, Thomas Gleixner, Andy Lutomirski,
	Kees Cook, Greg Kroah-Hartman, Christoffer Dall

In a heterogeneous system, we can end up with both affected and
unaffected CPUs. Let's check their status before calling into the
firmware.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/kernel/cpu_errata.c |  2 ++
 arch/arm64/kernel/entry.S      | 11 +++++++----
 2 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 46b3aafb631a..0288d6cf560e 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -233,6 +233,8 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
 #endif	/* CONFIG_HARDEN_BRANCH_PREDICTOR */
 
 #ifdef CONFIG_ARM64_SSBD
+DEFINE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
+
 void __init arm64_update_smccc_conduit(struct alt_instr *alt,
 				       __le32 *origptr, __le32 *updptr,
 				       int nr_inst)
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index f33e6aed3037..29ad672a6abd 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -140,8 +140,10 @@ alternative_else_nop_endif
 
 	// This macro corrupts x0-x3. It is the caller's duty
 	// to save/restore them if required.
-	.macro	apply_ssbd, state
+	.macro	apply_ssbd, state, targ, tmp1, tmp2
 #ifdef CONFIG_ARM64_SSBD
+	ldr_this_cpu	\tmp2, arm64_ssbd_callback_required, \tmp1
+	cbz	\tmp2, \targ
 	mov	w0, #ARM_SMCCC_ARCH_WORKAROUND_2
 	mov	w1, #\state
 alternative_cb	arm64_update_smccc_conduit
@@ -176,12 +178,13 @@ alternative_cb_end
 	ldr	x19, [tsk, #TSK_TI_FLAGS]	// since we can unmask debug
 	disable_step_tsk x19, x20		// exceptions when scheduling.
 
-	apply_ssbd 1
+	apply_ssbd 1, 1f, x22, x23
 
 #ifdef CONFIG_ARM64_SSBD
 	ldp	x0, x1, [sp, #16 * 0]
 	ldp	x2, x3, [sp, #16 * 1]
 #endif
+1:
 
 	mov	x29, xzr			// fp pointed to user-space
 	.else
@@ -323,8 +326,8 @@ alternative_if ARM64_WORKAROUND_845719
 alternative_else_nop_endif
 #endif
 3:
-	apply_ssbd 0
-
+	apply_ssbd 0, 5f, x0, x1
+5:
 	.endif
 
 	msr	elr_el1, x21			// set up the return data
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 110+ messages in thread

* [PATCH 03/14] arm64: Add per-cpu infrastructure to call ARCH_WORKAROUND_2
@ 2018-05-22 15:06   ` Marc Zyngier
  0 siblings, 0 replies; 110+ messages in thread
From: Marc Zyngier @ 2018-05-22 15:06 UTC (permalink / raw)
  To: linux-arm-kernel

In a heterogeneous system, we can end up with both affected and
unaffected CPUs. Let's check their status before calling into the
firmware.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/kernel/cpu_errata.c |  2 ++
 arch/arm64/kernel/entry.S      | 11 +++++++----
 2 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 46b3aafb631a..0288d6cf560e 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -233,6 +233,8 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
 #endif	/* CONFIG_HARDEN_BRANCH_PREDICTOR */
 
 #ifdef CONFIG_ARM64_SSBD
+DEFINE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
+
 void __init arm64_update_smccc_conduit(struct alt_instr *alt,
 				       __le32 *origptr, __le32 *updptr,
 				       int nr_inst)
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index f33e6aed3037..29ad672a6abd 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -140,8 +140,10 @@ alternative_else_nop_endif
 
 	// This macro corrupts x0-x3. It is the caller's duty
 	// to save/restore them if required.
-	.macro	apply_ssbd, state
+	.macro	apply_ssbd, state, targ, tmp1, tmp2
 #ifdef CONFIG_ARM64_SSBD
+	ldr_this_cpu	\tmp2, arm64_ssbd_callback_required, \tmp1
+	cbz	\tmp2, \targ
 	mov	w0, #ARM_SMCCC_ARCH_WORKAROUND_2
 	mov	w1, #\state
 alternative_cb	arm64_update_smccc_conduit
@@ -176,12 +178,13 @@ alternative_cb_end
 	ldr	x19, [tsk, #TSK_TI_FLAGS]	// since we can unmask debug
 	disable_step_tsk x19, x20		// exceptions when scheduling.
 
-	apply_ssbd 1
+	apply_ssbd 1, 1f, x22, x23
 
 #ifdef CONFIG_ARM64_SSBD
 	ldp	x0, x1, [sp, #16 * 0]
 	ldp	x2, x3, [sp, #16 * 1]
 #endif
+1:
 
 	mov	x29, xzr			// fp pointed to user-space
 	.else
@@ -323,8 +326,8 @@ alternative_if ARM64_WORKAROUND_845719
 alternative_else_nop_endif
 #endif
 3:
-	apply_ssbd 0
-
+	apply_ssbd 0, 5f, x0, x1
+5:
 	.endif
 
 	msr	elr_el1, x21			// set up the return data
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 110+ messages in thread

* [PATCH 04/14] arm64: Add ARCH_WORKAROUND_2 probing
  2018-05-22 15:06 ` Marc Zyngier
  (?)
@ 2018-05-22 15:06   ` Marc Zyngier
  -1 siblings, 0 replies; 110+ messages in thread
From: Marc Zyngier @ 2018-05-22 15:06 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel, kvmarm
  Cc: Will Deacon, Catalin Marinas, Thomas Gleixner, Andy Lutomirski,
	Kees Cook, Greg Kroah-Hartman, Christoffer Dall

As for Spectre variant-2, we rely on SMCCC 1.1 to provide the
discovery mechanism for detecting the SSBD mitigation.

A new capability is also allocated for that purpose, and a
config option.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/Kconfig               |  9 ++++++
 arch/arm64/include/asm/cpucaps.h |  3 +-
 arch/arm64/kernel/cpu_errata.c   | 69 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 80 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index eb2cf4938f6d..b2103b4df467 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -938,6 +938,15 @@ config HARDEN_EL2_VECTORS
 
 	  If unsure, say Y.
 
+config ARM64_SSBD
+	bool "Speculative Store Bypass Disable" if EXPERT
+	default y
+	help
+	  This enables mitigation of the bypassing of previous stores
+	  by speculative loads.
+
+	  If unsure, say Y.
+
 menuconfig ARMV8_DEPRECATED
 	bool "Emulate deprecated/obsolete ARMv8 instructions"
 	depends on COMPAT
diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index bc51b72fafd4..5b2facf786ba 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -48,7 +48,8 @@
 #define ARM64_HAS_CACHE_IDC			27
 #define ARM64_HAS_CACHE_DIC			28
 #define ARM64_HW_DBM				29
+#define ARM64_SSBD			30
 
-#define ARM64_NCAPS				30
+#define ARM64_NCAPS				31
 
 #endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 0288d6cf560e..7fd6d5b001f5 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -250,6 +250,67 @@ void __init arm64_update_smccc_conduit(struct alt_instr *alt,
 
 	*updptr = cpu_to_le32(insn);
 }
+
+static void do_ssbd(bool state)
+{
+	switch (psci_ops.conduit) {
+	case PSCI_CONDUIT_HVC:
+		arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_WORKAROUND_2, state, NULL);
+		break;
+
+	case PSCI_CONDUIT_SMC:
+		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2, state, NULL);
+		break;
+
+	default:
+		WARN_ON_ONCE(1);
+		break;
+	}
+}
+
+static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
+				    int scope)
+{
+	struct arm_smccc_res res;
+	bool supported = true;
+
+	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
+
+	if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
+		return false;
+
+	/*
+	 * The probe function return value is either negative
+	 * (unsupported or mitigated), positive (unaffected), or zero
+	 * (requires mitigation). We only need to do anything in the
+	 * last case.
+	 */
+	switch (psci_ops.conduit) {
+	case PSCI_CONDUIT_HVC:
+		arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
+				  ARM_SMCCC_ARCH_WORKAROUND_2, &res);
+		if ((int)res.a0 != 0)
+			supported = false;
+		break;
+
+	case PSCI_CONDUIT_SMC:
+		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
+				  ARM_SMCCC_ARCH_WORKAROUND_2, &res);
+		if ((int)res.a0 != 0)
+			supported = false;
+		break;
+
+	default:
+		supported = false;
+	}
+
+	if (supported) {
+		__this_cpu_write(arm64_ssbd_callback_required, 1);
+		do_ssbd(true);
+	}
+
+	return supported;
+}
 #endif	/* CONFIG_ARM64_SSBD */
 
 #define CAP_MIDR_RANGE(model, v_min, r_min, v_max, r_max)	\
@@ -506,6 +567,14 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
 		ERRATA_MIDR_RANGE_LIST(arm64_harden_el2_vectors),
 	},
+#endif
+#ifdef CONFIG_ARM64_SSBD
+	{
+		.desc = "Speculative Store Bypass Disable",
+		.capability = ARM64_SSBD,
+		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
+		.matches = has_ssbd_mitigation,
+	},
 #endif
 	{
 	}
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 110+ messages in thread

* [PATCH 04/14] arm64: Add ARCH_WORKAROUND_2 probing
@ 2018-05-22 15:06   ` Marc Zyngier
  0 siblings, 0 replies; 110+ messages in thread
From: Marc Zyngier @ 2018-05-22 15:06 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel, kvmarm
  Cc: Kees Cook, Catalin Marinas, Will Deacon, Andy Lutomirski,
	Greg Kroah-Hartman, Thomas Gleixner

As for Spectre variant-2, we rely on SMCCC 1.1 to provide the
discovery mechanism for detecting the SSBD mitigation.

A new capability is also allocated for that purpose, and a
config option.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/Kconfig               |  9 ++++++
 arch/arm64/include/asm/cpucaps.h |  3 +-
 arch/arm64/kernel/cpu_errata.c   | 69 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 80 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index eb2cf4938f6d..b2103b4df467 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -938,6 +938,15 @@ config HARDEN_EL2_VECTORS
 
 	  If unsure, say Y.
 
+config ARM64_SSBD
+	bool "Speculative Store Bypass Disable" if EXPERT
+	default y
+	help
+	  This enables mitigation of the bypassing of previous stores
+	  by speculative loads.
+
+	  If unsure, say Y.
+
 menuconfig ARMV8_DEPRECATED
 	bool "Emulate deprecated/obsolete ARMv8 instructions"
 	depends on COMPAT
diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index bc51b72fafd4..5b2facf786ba 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -48,7 +48,8 @@
 #define ARM64_HAS_CACHE_IDC			27
 #define ARM64_HAS_CACHE_DIC			28
 #define ARM64_HW_DBM				29
+#define ARM64_SSBD			30
 
-#define ARM64_NCAPS				30
+#define ARM64_NCAPS				31
 
 #endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 0288d6cf560e..7fd6d5b001f5 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -250,6 +250,67 @@ void __init arm64_update_smccc_conduit(struct alt_instr *alt,
 
 	*updptr = cpu_to_le32(insn);
 }
+
+static void do_ssbd(bool state)
+{
+	switch (psci_ops.conduit) {
+	case PSCI_CONDUIT_HVC:
+		arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_WORKAROUND_2, state, NULL);
+		break;
+
+	case PSCI_CONDUIT_SMC:
+		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2, state, NULL);
+		break;
+
+	default:
+		WARN_ON_ONCE(1);
+		break;
+	}
+}
+
+static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
+				    int scope)
+{
+	struct arm_smccc_res res;
+	bool supported = true;
+
+	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
+
+	if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
+		return false;
+
+	/*
+	 * The probe function return value is either negative
+	 * (unsupported or mitigated), positive (unaffected), or zero
+	 * (requires mitigation). We only need to do anything in the
+	 * last case.
+	 */
+	switch (psci_ops.conduit) {
+	case PSCI_CONDUIT_HVC:
+		arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
+				  ARM_SMCCC_ARCH_WORKAROUND_2, &res);
+		if ((int)res.a0 != 0)
+			supported = false;
+		break;
+
+	case PSCI_CONDUIT_SMC:
+		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
+				  ARM_SMCCC_ARCH_WORKAROUND_2, &res);
+		if ((int)res.a0 != 0)
+			supported = false;
+		break;
+
+	default:
+		supported = false;
+	}
+
+	if (supported) {
+		__this_cpu_write(arm64_ssbd_callback_required, 1);
+		do_ssbd(true);
+	}
+
+	return supported;
+}
 #endif	/* CONFIG_ARM64_SSBD */
 
 #define CAP_MIDR_RANGE(model, v_min, r_min, v_max, r_max)	\
@@ -506,6 +567,14 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
 		ERRATA_MIDR_RANGE_LIST(arm64_harden_el2_vectors),
 	},
+#endif
+#ifdef CONFIG_ARM64_SSBD
+	{
+		.desc = "Speculative Store Bypass Disable",
+		.capability = ARM64_SSBD,
+		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
+		.matches = has_ssbd_mitigation,
+	},
 #endif
 	{
 	}
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 110+ messages in thread

* [PATCH 04/14] arm64: Add ARCH_WORKAROUND_2 probing
@ 2018-05-22 15:06   ` Marc Zyngier
  0 siblings, 0 replies; 110+ messages in thread
From: Marc Zyngier @ 2018-05-22 15:06 UTC (permalink / raw)
  To: linux-arm-kernel

As for Spectre variant-2, we rely on SMCCC 1.1 to provide the
discovery mechanism for detecting the SSBD mitigation.

A new capability is also allocated for that purpose, and a
config option.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/Kconfig               |  9 ++++++
 arch/arm64/include/asm/cpucaps.h |  3 +-
 arch/arm64/kernel/cpu_errata.c   | 69 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 80 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index eb2cf4938f6d..b2103b4df467 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -938,6 +938,15 @@ config HARDEN_EL2_VECTORS
 
 	  If unsure, say Y.
 
+config ARM64_SSBD
+	bool "Speculative Store Bypass Disable" if EXPERT
+	default y
+	help
+	  This enables mitigation of the bypassing of previous stores
+	  by speculative loads.
+
+	  If unsure, say Y.
+
 menuconfig ARMV8_DEPRECATED
 	bool "Emulate deprecated/obsolete ARMv8 instructions"
 	depends on COMPAT
diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index bc51b72fafd4..5b2facf786ba 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -48,7 +48,8 @@
 #define ARM64_HAS_CACHE_IDC			27
 #define ARM64_HAS_CACHE_DIC			28
 #define ARM64_HW_DBM				29
+#define ARM64_SSBD			30
 
-#define ARM64_NCAPS				30
+#define ARM64_NCAPS				31
 
 #endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 0288d6cf560e..7fd6d5b001f5 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -250,6 +250,67 @@ void __init arm64_update_smccc_conduit(struct alt_instr *alt,
 
 	*updptr = cpu_to_le32(insn);
 }
+
+static void do_ssbd(bool state)
+{
+	switch (psci_ops.conduit) {
+	case PSCI_CONDUIT_HVC:
+		arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_WORKAROUND_2, state, NULL);
+		break;
+
+	case PSCI_CONDUIT_SMC:
+		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2, state, NULL);
+		break;
+
+	default:
+		WARN_ON_ONCE(1);
+		break;
+	}
+}
+
+static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
+				    int scope)
+{
+	struct arm_smccc_res res;
+	bool supported = true;
+
+	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
+
+	if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
+		return false;
+
+	/*
+	 * The probe function return value is either negative
+	 * (unsupported or mitigated), positive (unaffected), or zero
+	 * (requires mitigation). We only need to do anything in the
+	 * last case.
+	 */
+	switch (psci_ops.conduit) {
+	case PSCI_CONDUIT_HVC:
+		arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
+				  ARM_SMCCC_ARCH_WORKAROUND_2, &res);
+		if ((int)res.a0 != 0)
+			supported = false;
+		break;
+
+	case PSCI_CONDUIT_SMC:
+		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
+				  ARM_SMCCC_ARCH_WORKAROUND_2, &res);
+		if ((int)res.a0 != 0)
+			supported = false;
+		break;
+
+	default:
+		supported = false;
+	}
+
+	if (supported) {
+		__this_cpu_write(arm64_ssbd_callback_required, 1);
+		do_ssbd(true);
+	}
+
+	return supported;
+}
 #endif	/* CONFIG_ARM64_SSBD */
 
 #define CAP_MIDR_RANGE(model, v_min, r_min, v_max, r_max)	\
@@ -506,6 +567,14 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
 		ERRATA_MIDR_RANGE_LIST(arm64_harden_el2_vectors),
 	},
+#endif
+#ifdef CONFIG_ARM64_SSBD
+	{
+		.desc = "Speculative Store Bypass Disable",
+		.capability = ARM64_SSBD,
+		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
+		.matches = has_ssbd_mitigation,
+	},
 #endif
 	{
 	}
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 110+ messages in thread

* [PATCH 05/14] arm64: Add 'ssbd' command-line option
  2018-05-22 15:06 ` Marc Zyngier
@ 2018-05-22 15:06   ` Marc Zyngier
  -1 siblings, 0 replies; 110+ messages in thread
From: Marc Zyngier @ 2018-05-22 15:06 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel, kvmarm
  Cc: Will Deacon, Catalin Marinas, Thomas Gleixner, Andy Lutomirski,
	Kees Cook, Greg Kroah-Hartman, Christoffer Dall

On a system where the firmware implements ARCH_WORKAROUND_2,
it may be useful to either permanently enable or disable the
workaround for cases where the user decides that they'd rather
not get a trap overhead, and keep the mitigation permanently
on or off instead of switching it on exception entry/exit.

In any case, default to the mitigation being enabled.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 Documentation/admin-guide/kernel-parameters.txt |  17 ++++
 arch/arm64/include/asm/cpufeature.h             |   6 ++
 arch/arm64/kernel/cpu_errata.c                  | 102 ++++++++++++++++++++----
 3 files changed, 109 insertions(+), 16 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index f2040d46f095..646e112c6f63 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -4092,6 +4092,23 @@
 			expediting.  Set to zero to disable automatic
 			expediting.
 
+	ssbd=		[ARM64,HW]
+			Speculative Store Bypass Disable control
+
+			On CPUs that are vulnerable to the Speculative
+			Store Bypass vulnerability and offer a
+			firmware based mitigation, this parameter
+			indicates how the mitigation should be used:
+
+			force-on:  Unconditionnaly enable mitigation for
+				   for both kernel and userspace
+			force-off: Unconditionnaly disable mitigation for
+				   for both kernel and userspace
+			kernel:    Always enable mitigation in the
+				   kernel, and offer a prctl interface
+				   to allow userspace to register its
+				   interest in being mitigated too.
+
 	stack_guard_gap=	[MM]
 			override the default stack gap protection. The value
 			is in page units and it defines how many pages prior
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 09b0f2a80c8f..9bc548e22784 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -537,6 +537,12 @@ static inline u64 read_zcr_features(void)
 	return zcr;
 }
 
+#define ARM64_SSBD_UNKNOWN		-1
+#define ARM64_SSBD_FORCE_DISABLE	0
+#define ARM64_SSBD_EL1_ENTRY		1
+#define ARM64_SSBD_FORCE_ENABLE		2
+#define ARM64_SSBD_MITIGATED		3
+
 #endif /* __ASSEMBLY__ */
 
 #endif
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 7fd6d5b001f5..f1d4e75b0ddd 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -235,6 +235,38 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
 #ifdef CONFIG_ARM64_SSBD
 DEFINE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
 
+int ssbd_state __read_mostly = ARM64_SSBD_EL1_ENTRY;
+
+static const struct ssbd_options {
+	const char	*str;
+	int		state;
+} ssbd_options[] = {
+	{ "force-on",	ARM64_SSBD_FORCE_ENABLE, },
+	{ "force-off",	ARM64_SSBD_FORCE_DISABLE, },
+	{ "kernel",	ARM64_SSBD_EL1_ENTRY, },
+};
+
+static int __init ssbd_cfg(char *buf)
+{
+	int i;
+
+	if (!buf || !buf[0])
+		return -EINVAL;
+
+	for (i = 0; i < ARRAY_SIZE(ssbd_options); i++) {
+		int len = strlen(ssbd_options[i].str);
+
+		if (strncmp(buf, ssbd_options[i].str, len))
+			continue;
+
+		ssbd_state = ssbd_options[i].state;
+		return 0;
+	}
+
+	return -EINVAL;
+}
+early_param("ssbd", ssbd_cfg);
+
 void __init arm64_update_smccc_conduit(struct alt_instr *alt,
 				       __le32 *origptr, __le32 *updptr,
 				       int nr_inst)
@@ -272,44 +304,82 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
 				    int scope)
 {
 	struct arm_smccc_res res;
-	bool supported = true;
+	bool required = true;
+	s32 val;
 
 	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
 
-	if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
+	if (psci_ops.smccc_version == SMCCC_VERSION_1_0) {
+		ssbd_state = ARM64_SSBD_UNKNOWN;
 		return false;
+	}
 
-	/*
-	 * The probe function return value is either negative
-	 * (unsupported or mitigated), positive (unaffected), or zero
-	 * (requires mitigation). We only need to do anything in the
-	 * last case.
-	 */
 	switch (psci_ops.conduit) {
 	case PSCI_CONDUIT_HVC:
 		arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
 				  ARM_SMCCC_ARCH_WORKAROUND_2, &res);
-		if ((int)res.a0 != 0)
-			supported = false;
 		break;
 
 	case PSCI_CONDUIT_SMC:
 		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
 				  ARM_SMCCC_ARCH_WORKAROUND_2, &res);
-		if ((int)res.a0 != 0)
-			supported = false;
 		break;
 
 	default:
-		supported = false;
+		ssbd_state = ARM64_SSBD_UNKNOWN;
+		return false;
 	}
 
-	if (supported) {
-		__this_cpu_write(arm64_ssbd_callback_required, 1);
+	val = (s32)res.a0;
+
+	switch (val) {
+	case SMCCC_RET_NOT_SUPPORTED:
+		ssbd_state = ARM64_SSBD_UNKNOWN;
+		return false;
+
+	case SMCCC_RET_NOT_REQUIRED:
+		ssbd_state = ARM64_SSBD_MITIGATED;
+		return false;
+
+	case SMCCC_RET_SUCCESS:
+		required = true;
+		break;
+
+	case 1:	/* Mitigation not required on this CPU */
+		required = false;
+		break;
+
+	default:
+		WARN_ON(1);
+		return false;
+	}
+
+	switch (ssbd_state) {
+	case ARM64_SSBD_FORCE_DISABLE:
+		pr_info_once("%s disabled from command-line\n", entry->desc);
+		do_ssbd(false);
+		required = false;
+		break;
+
+	case ARM64_SSBD_EL1_ENTRY:
+		if (required) {
+			__this_cpu_write(arm64_ssbd_callback_required, 1);
+			do_ssbd(true);
+		}
+		break;
+
+	case ARM64_SSBD_FORCE_ENABLE:
+		pr_info_once("%s forced from command-line\n", entry->desc);
 		do_ssbd(true);
+		required = true;
+		break;
+
+	default:
+		WARN_ON(1);
+		break;
 	}
 
-	return supported;
+	return required;
 }
 #endif	/* CONFIG_ARM64_SSBD */
 
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 110+ messages in thread

* [PATCH 05/14] arm64: Add 'ssbd' command-line option
@ 2018-05-22 15:06   ` Marc Zyngier
  0 siblings, 0 replies; 110+ messages in thread
From: Marc Zyngier @ 2018-05-22 15:06 UTC (permalink / raw)
  To: linux-arm-kernel

On a system where the firmware implements ARCH_WORKAROUND_2,
it may be useful to either permanently enable or disable the
workaround for cases where the user decides that they'd rather
not get a trap overhead, and keep the mitigation permanently
on or off instead of switching it on exception entry/exit.

In any case, default to the mitigation being enabled.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 Documentation/admin-guide/kernel-parameters.txt |  17 ++++
 arch/arm64/include/asm/cpufeature.h             |   6 ++
 arch/arm64/kernel/cpu_errata.c                  | 102 ++++++++++++++++++++----
 3 files changed, 109 insertions(+), 16 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index f2040d46f095..646e112c6f63 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -4092,6 +4092,23 @@
 			expediting.  Set to zero to disable automatic
 			expediting.
 
+	ssbd=		[ARM64,HW]
+			Speculative Store Bypass Disable control
+
+			On CPUs that are vulnerable to the Speculative
+			Store Bypass vulnerability and offer a
+			firmware based mitigation, this parameter
+			indicates how the mitigation should be used:
+
+			force-on:  Unconditionnaly enable mitigation for
+				   for both kernel and userspace
+			force-off: Unconditionnaly disable mitigation for
+				   for both kernel and userspace
+			kernel:    Always enable mitigation in the
+				   kernel, and offer a prctl interface
+				   to allow userspace to register its
+				   interest in being mitigated too.
+
 	stack_guard_gap=	[MM]
 			override the default stack gap protection. The value
 			is in page units and it defines how many pages prior
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 09b0f2a80c8f..9bc548e22784 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -537,6 +537,12 @@ static inline u64 read_zcr_features(void)
 	return zcr;
 }
 
+#define ARM64_SSBD_UNKNOWN		-1
+#define ARM64_SSBD_FORCE_DISABLE	0
+#define ARM64_SSBD_EL1_ENTRY		1
+#define ARM64_SSBD_FORCE_ENABLE		2
+#define ARM64_SSBD_MITIGATED		3
+
 #endif /* __ASSEMBLY__ */
 
 #endif
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 7fd6d5b001f5..f1d4e75b0ddd 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -235,6 +235,38 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
 #ifdef CONFIG_ARM64_SSBD
 DEFINE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
 
+int ssbd_state __read_mostly = ARM64_SSBD_EL1_ENTRY;
+
+static const struct ssbd_options {
+	const char	*str;
+	int		state;
+} ssbd_options[] = {
+	{ "force-on",	ARM64_SSBD_FORCE_ENABLE, },
+	{ "force-off",	ARM64_SSBD_FORCE_DISABLE, },
+	{ "kernel",	ARM64_SSBD_EL1_ENTRY, },
+};
+
+static int __init ssbd_cfg(char *buf)
+{
+	int i;
+
+	if (!buf || !buf[0])
+		return -EINVAL;
+
+	for (i = 0; i < ARRAY_SIZE(ssbd_options); i++) {
+		int len = strlen(ssbd_options[i].str);
+
+		if (strncmp(buf, ssbd_options[i].str, len))
+			continue;
+
+		ssbd_state = ssbd_options[i].state;
+		return 0;
+	}
+
+	return -EINVAL;
+}
+early_param("ssbd", ssbd_cfg);
+
 void __init arm64_update_smccc_conduit(struct alt_instr *alt,
 				       __le32 *origptr, __le32 *updptr,
 				       int nr_inst)
@@ -272,44 +304,82 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
 				    int scope)
 {
 	struct arm_smccc_res res;
-	bool supported = true;
+	bool required = true;
+	s32 val;
 
 	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
 
-	if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
+	if (psci_ops.smccc_version == SMCCC_VERSION_1_0) {
+		ssbd_state = ARM64_SSBD_UNKNOWN;
 		return false;
+	}
 
-	/*
-	 * The probe function return value is either negative
-	 * (unsupported or mitigated), positive (unaffected), or zero
-	 * (requires mitigation). We only need to do anything in the
-	 * last case.
-	 */
 	switch (psci_ops.conduit) {
 	case PSCI_CONDUIT_HVC:
 		arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
 				  ARM_SMCCC_ARCH_WORKAROUND_2, &res);
-		if ((int)res.a0 != 0)
-			supported = false;
 		break;
 
 	case PSCI_CONDUIT_SMC:
 		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
 				  ARM_SMCCC_ARCH_WORKAROUND_2, &res);
-		if ((int)res.a0 != 0)
-			supported = false;
 		break;
 
 	default:
-		supported = false;
+		ssbd_state = ARM64_SSBD_UNKNOWN;
+		return false;
 	}
 
-	if (supported) {
-		__this_cpu_write(arm64_ssbd_callback_required, 1);
+	val = (s32)res.a0;
+
+	switch (val) {
+	case SMCCC_RET_NOT_SUPPORTED:
+		ssbd_state = ARM64_SSBD_UNKNOWN;
+		return false;
+
+	case SMCCC_RET_NOT_REQUIRED:
+		ssbd_state = ARM64_SSBD_MITIGATED;
+		return false;
+
+	case SMCCC_RET_SUCCESS:
+		required = true;
+		break;
+
+	case 1:	/* Mitigation not required on this CPU */
+		required = false;
+		break;
+
+	default:
+		WARN_ON(1);
+		return false;
+	}
+
+	switch (ssbd_state) {
+	case ARM64_SSBD_FORCE_DISABLE:
+		pr_info_once("%s disabled from command-line\n", entry->desc);
+		do_ssbd(false);
+		required = false;
+		break;
+
+	case ARM64_SSBD_EL1_ENTRY:
+		if (required) {
+			__this_cpu_write(arm64_ssbd_callback_required, 1);
+			do_ssbd(true);
+		}
+		break;
+
+	case ARM64_SSBD_FORCE_ENABLE:
+		pr_info_once("%s forced from command-line\n", entry->desc);
 		do_ssbd(true);
+		required = true;
+		break;
+
+	default:
+		WARN_ON(1);
+		break;
 	}
 
-	return supported;
+	return required;
 }
 #endif	/* CONFIG_ARM64_SSBD */
 
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 110+ messages in thread

* [PATCH 06/14] arm64: ssbd: Add global mitigation state accessor
  2018-05-22 15:06 ` Marc Zyngier
@ 2018-05-22 15:06   ` Marc Zyngier
  -1 siblings, 0 replies; 110+ messages in thread
From: Marc Zyngier @ 2018-05-22 15:06 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel, kvmarm
  Cc: Will Deacon, Catalin Marinas, Thomas Gleixner, Andy Lutomirski,
	Kees Cook, Greg Kroah-Hartman, Christoffer Dall

We're about to need the mitigation state in various parts of the
kernel in order to do the right thing for userspace and guests.

Let's expose an accessor that will let other subsystems know
about the state.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/include/asm/cpufeature.h | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 9bc548e22784..1bacdf57f0af 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -543,6 +543,16 @@ static inline u64 read_zcr_features(void)
 #define ARM64_SSBD_FORCE_ENABLE		2
 #define ARM64_SSBD_MITIGATED		3
 
+static inline int arm64_get_ssbd_state(void)
+{
+#ifdef CONFIG_ARM64_SSBD
+	extern int ssbd_state;
+	return ssbd_state;
+#else
+	return ARM64_SSBD_UNKNOWN;
+#endif
+}
+
 #endif /* __ASSEMBLY__ */
 
 #endif
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 110+ messages in thread

* [PATCH 06/14] arm64: ssbd: Add global mitigation state accessor
@ 2018-05-22 15:06   ` Marc Zyngier
  0 siblings, 0 replies; 110+ messages in thread
From: Marc Zyngier @ 2018-05-22 15:06 UTC (permalink / raw)
  To: linux-arm-kernel

We're about to need the mitigation state in various parts of the
kernel in order to do the right thing for userspace and guests.

Let's expose an accessor that will let other subsystems know
about the state.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/include/asm/cpufeature.h | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 9bc548e22784..1bacdf57f0af 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -543,6 +543,16 @@ static inline u64 read_zcr_features(void)
 #define ARM64_SSBD_FORCE_ENABLE		2
 #define ARM64_SSBD_MITIGATED		3
 
+static inline int arm64_get_ssbd_state(void)
+{
+#ifdef CONFIG_ARM64_SSBD
+	extern int ssbd_state;
+	return ssbd_state;
+#else
+	return ARM64_SSBD_UNKNOWN;
+#endif
+}
+
 #endif /* __ASSEMBLY__ */
 
 #endif
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 110+ messages in thread

* [PATCH 07/14] arm64: ssbd: Skip apply_ssbd if not using dynamic mitigation
  2018-05-22 15:06 ` Marc Zyngier
@ 2018-05-22 15:06   ` Marc Zyngier
  -1 siblings, 0 replies; 110+ messages in thread
From: Marc Zyngier @ 2018-05-22 15:06 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel, kvmarm
  Cc: Will Deacon, Catalin Marinas, Thomas Gleixner, Andy Lutomirski,
	Kees Cook, Greg Kroah-Hartman, Christoffer Dall

In order to avoid checking arm64_ssbd_callback_required on each
kernel entry/exit even if no mitigation is required, let's
add yet another alternative that by default jumps over the mitigation,
and that gets nop'ed out if we're doing dynamic mitigation.

Think of it as a poor man's static key...

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/kernel/cpu_errata.c | 14 ++++++++++++++
 arch/arm64/kernel/entry.S      |  3 +++
 2 files changed, 17 insertions(+)

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index f1d4e75b0ddd..8f686f39b9c1 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -283,6 +283,20 @@ void __init arm64_update_smccc_conduit(struct alt_instr *alt,
 	*updptr = cpu_to_le32(insn);
 }
 
+void __init arm64_enable_wa2_handling(struct alt_instr *alt,
+				      __le32 *origptr, __le32 *updptr,
+				      int nr_inst)
+{
+	BUG_ON(nr_inst != 1);
+	/*
+	 * Only allow mitigation on EL1 entry/exit and guest
+	 * ARCH_WORKAROUND_2 handling if the SSBD state allows it to
+	 * be flipped.
+	 */
+	if (arm64_get_ssbd_state() == ARM64_SSBD_EL1_ENTRY)
+		*updptr = cpu_to_le32(aarch64_insn_gen_nop());
+}
+
 static void do_ssbd(bool state)
 {
 	switch (psci_ops.conduit) {
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 29ad672a6abd..e6f6e2339b22 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -142,6 +142,9 @@ alternative_else_nop_endif
 	// to save/restore them if required.
 	.macro	apply_ssbd, state, targ, tmp1, tmp2
 #ifdef CONFIG_ARM64_SSBD
+alternative_cb	arm64_enable_wa2_handling
+	b	\targ
+alternative_cb_end
 	ldr_this_cpu	\tmp2, arm64_ssbd_callback_required, \tmp1
 	cbz	\tmp2, \targ
 	mov	w0, #ARM_SMCCC_ARCH_WORKAROUND_2
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 110+ messages in thread

* [PATCH 07/14] arm64: ssbd: Skip apply_ssbd if not using dynamic mitigation
@ 2018-05-22 15:06   ` Marc Zyngier
  0 siblings, 0 replies; 110+ messages in thread
From: Marc Zyngier @ 2018-05-22 15:06 UTC (permalink / raw)
  To: linux-arm-kernel

In order to avoid checking arm64_ssbd_callback_required on each
kernel entry/exit even if no mitigation is required, let's
add yet another alternative that by default jumps over the mitigation,
and that gets nop'ed out if we're doing dynamic mitigation.

Think of it as a poor man's static key...

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/kernel/cpu_errata.c | 14 ++++++++++++++
 arch/arm64/kernel/entry.S      |  3 +++
 2 files changed, 17 insertions(+)

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index f1d4e75b0ddd..8f686f39b9c1 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -283,6 +283,20 @@ void __init arm64_update_smccc_conduit(struct alt_instr *alt,
 	*updptr = cpu_to_le32(insn);
 }
 
+void __init arm64_enable_wa2_handling(struct alt_instr *alt,
+				      __le32 *origptr, __le32 *updptr,
+				      int nr_inst)
+{
+	BUG_ON(nr_inst != 1);
+	/*
+	 * Only allow mitigation on EL1 entry/exit and guest
+	 * ARCH_WORKAROUND_2 handling if the SSBD state allows it to
+	 * be flipped.
+	 */
+	if (arm64_get_ssbd_state() == ARM64_SSBD_EL1_ENTRY)
+		*updptr = cpu_to_le32(aarch64_insn_gen_nop());
+}
+
 static void do_ssbd(bool state)
 {
 	switch (psci_ops.conduit) {
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 29ad672a6abd..e6f6e2339b22 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -142,6 +142,9 @@ alternative_else_nop_endif
 	// to save/restore them if required.
 	.macro	apply_ssbd, state, targ, tmp1, tmp2
 #ifdef CONFIG_ARM64_SSBD
+alternative_cb	arm64_enable_wa2_handling
+	b	\targ
+alternative_cb_end
 	ldr_this_cpu	\tmp2, arm64_ssbd_callback_required, \tmp1
 	cbz	\tmp2, \targ
 	mov	w0, #ARM_SMCCC_ARCH_WORKAROUND_2
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 110+ messages in thread

* [PATCH 08/14] arm64: ssbd: Disable mitigation on CPU resume if required by user
  2018-05-22 15:06 ` Marc Zyngier
  (?)
@ 2018-05-22 15:06   ` Marc Zyngier
  -1 siblings, 0 replies; 110+ messages in thread
From: Marc Zyngier @ 2018-05-22 15:06 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel, kvmarm
  Cc: Will Deacon, Catalin Marinas, Thomas Gleixner, Andy Lutomirski,
	Kees Cook, Greg Kroah-Hartman, Christoffer Dall

On a system where firmware can dynamically change the state of the
mitigation, the CPU will always come up with the mitigation enabled,
including when coming back from suspend.

If the user has requested "no mitigation" via a command line option,
let's enforce it by calling into the firmware again to disable it.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/include/asm/cpufeature.h | 6 ++++++
 arch/arm64/kernel/cpu_errata.c      | 8 ++++----
 arch/arm64/kernel/suspend.c         | 8 ++++++++
 3 files changed, 18 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 1bacdf57f0af..d9dcb683259e 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -553,6 +553,12 @@ static inline int arm64_get_ssbd_state(void)
 #endif
 }
 
+#ifdef CONFIG_ARM64_SSBD
+void arm64_set_ssbd_mitigation(bool state);
+#else
+static inline void arm64_set_ssbd_mitigation(bool state) {}
+#endif
+
 #endif /* __ASSEMBLY__ */
 
 #endif
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 8f686f39b9c1..b4c12e9140f0 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -297,7 +297,7 @@ void __init arm64_enable_wa2_handling(struct alt_instr *alt,
 		*updptr = cpu_to_le32(aarch64_insn_gen_nop());
 }
 
-static void do_ssbd(bool state)
+void arm64_set_ssbd_mitigation(bool state)
 {
 	switch (psci_ops.conduit) {
 	case PSCI_CONDUIT_HVC:
@@ -371,20 +371,20 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
 	switch (ssbd_state) {
 	case ARM64_SSBD_FORCE_DISABLE:
 		pr_info_once("%s disabled from command-line\n", entry->desc);
-		do_ssbd(false);
+		arm64_set_ssbd_mitigation(false);
 		required = false;
 		break;
 
 	case ARM64_SSBD_EL1_ENTRY:
 		if (required) {
 			__this_cpu_write(arm64_ssbd_callback_required, 1);
-			do_ssbd(true);
+			arm64_set_ssbd_mitigation(true);
 		}
 		break;
 
 	case ARM64_SSBD_FORCE_ENABLE:
 		pr_info_once("%s forced from command-line\n", entry->desc);
-		do_ssbd(true);
+		arm64_set_ssbd_mitigation(true);
 		required = true;
 		break;
 
diff --git a/arch/arm64/kernel/suspend.c b/arch/arm64/kernel/suspend.c
index a307b9e13392..70c283368b64 100644
--- a/arch/arm64/kernel/suspend.c
+++ b/arch/arm64/kernel/suspend.c
@@ -62,6 +62,14 @@ void notrace __cpu_suspend_exit(void)
 	 */
 	if (hw_breakpoint_restore)
 		hw_breakpoint_restore(cpu);
+
+	/*
+	 * On resume, firmware implementing dynamic mitigation will
+	 * have turned the mitigation on. If the user has forcefully
+	 * disabled it, make sure their wishes are obeyed.
+	 */
+	if (arm64_get_ssbd_state() == ARM64_SSBD_FORCE_DISABLE)
+		arm64_set_ssbd_mitigation(false);
 }
 
 /*
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 110+ messages in thread

* [PATCH 08/14] arm64: ssbd: Disable mitigation on CPU resume if required by user
@ 2018-05-22 15:06   ` Marc Zyngier
  0 siblings, 0 replies; 110+ messages in thread
From: Marc Zyngier @ 2018-05-22 15:06 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel, kvmarm
  Cc: Kees Cook, Catalin Marinas, Will Deacon, Christoffer Dall,
	Andy Lutomirski, Greg Kroah-Hartman, Thomas Gleixner

On a system where firmware can dynamically change the state of the
mitigation, the CPU will always come up with the mitigation enabled,
including when coming back from suspend.

If the user has requested "no mitigation" via a command line option,
let's enforce it by calling into the firmware again to disable it.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/include/asm/cpufeature.h | 6 ++++++
 arch/arm64/kernel/cpu_errata.c      | 8 ++++----
 arch/arm64/kernel/suspend.c         | 8 ++++++++
 3 files changed, 18 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 1bacdf57f0af..d9dcb683259e 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -553,6 +553,12 @@ static inline int arm64_get_ssbd_state(void)
 #endif
 }
 
+#ifdef CONFIG_ARM64_SSBD
+void arm64_set_ssbd_mitigation(bool state);
+#else
+static inline void arm64_set_ssbd_mitigation(bool state) {}
+#endif
+
 #endif /* __ASSEMBLY__ */
 
 #endif
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 8f686f39b9c1..b4c12e9140f0 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -297,7 +297,7 @@ void __init arm64_enable_wa2_handling(struct alt_instr *alt,
 		*updptr = cpu_to_le32(aarch64_insn_gen_nop());
 }
 
-static void do_ssbd(bool state)
+void arm64_set_ssbd_mitigation(bool state)
 {
 	switch (psci_ops.conduit) {
 	case PSCI_CONDUIT_HVC:
@@ -371,20 +371,20 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
 	switch (ssbd_state) {
 	case ARM64_SSBD_FORCE_DISABLE:
 		pr_info_once("%s disabled from command-line\n", entry->desc);
-		do_ssbd(false);
+		arm64_set_ssbd_mitigation(false);
 		required = false;
 		break;
 
 	case ARM64_SSBD_EL1_ENTRY:
 		if (required) {
 			__this_cpu_write(arm64_ssbd_callback_required, 1);
-			do_ssbd(true);
+			arm64_set_ssbd_mitigation(true);
 		}
 		break;
 
 	case ARM64_SSBD_FORCE_ENABLE:
 		pr_info_once("%s forced from command-line\n", entry->desc);
-		do_ssbd(true);
+		arm64_set_ssbd_mitigation(true);
 		required = true;
 		break;
 
diff --git a/arch/arm64/kernel/suspend.c b/arch/arm64/kernel/suspend.c
index a307b9e13392..70c283368b64 100644
--- a/arch/arm64/kernel/suspend.c
+++ b/arch/arm64/kernel/suspend.c
@@ -62,6 +62,14 @@ void notrace __cpu_suspend_exit(void)
 	 */
 	if (hw_breakpoint_restore)
 		hw_breakpoint_restore(cpu);
+
+	/*
+	 * On resume, firmware implementing dynamic mitigation will
+	 * have turned the mitigation on. If the user has forcefully
+	 * disabled it, make sure their wishes are obeyed.
+	 */
+	if (arm64_get_ssbd_state() == ARM64_SSBD_FORCE_DISABLE)
+		arm64_set_ssbd_mitigation(false);
 }
 
 /*
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 110+ messages in thread

* [PATCH 08/14] arm64: ssbd: Disable mitigation on CPU resume if required by user
@ 2018-05-22 15:06   ` Marc Zyngier
  0 siblings, 0 replies; 110+ messages in thread
From: Marc Zyngier @ 2018-05-22 15:06 UTC (permalink / raw)
  To: linux-arm-kernel

On a system where firmware can dynamically change the state of the
mitigation, the CPU will always come up with the mitigation enabled,
including when coming back from suspend.

If the user has requested "no mitigation" via a command line option,
let's enforce it by calling into the firmware again to disable it.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/include/asm/cpufeature.h | 6 ++++++
 arch/arm64/kernel/cpu_errata.c      | 8 ++++----
 arch/arm64/kernel/suspend.c         | 8 ++++++++
 3 files changed, 18 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 1bacdf57f0af..d9dcb683259e 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -553,6 +553,12 @@ static inline int arm64_get_ssbd_state(void)
 #endif
 }
 
+#ifdef CONFIG_ARM64_SSBD
+void arm64_set_ssbd_mitigation(bool state);
+#else
+static inline void arm64_set_ssbd_mitigation(bool state) {}
+#endif
+
 #endif /* __ASSEMBLY__ */
 
 #endif
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 8f686f39b9c1..b4c12e9140f0 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -297,7 +297,7 @@ void __init arm64_enable_wa2_handling(struct alt_instr *alt,
 		*updptr = cpu_to_le32(aarch64_insn_gen_nop());
 }
 
-static void do_ssbd(bool state)
+void arm64_set_ssbd_mitigation(bool state)
 {
 	switch (psci_ops.conduit) {
 	case PSCI_CONDUIT_HVC:
@@ -371,20 +371,20 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
 	switch (ssbd_state) {
 	case ARM64_SSBD_FORCE_DISABLE:
 		pr_info_once("%s disabled from command-line\n", entry->desc);
-		do_ssbd(false);
+		arm64_set_ssbd_mitigation(false);
 		required = false;
 		break;
 
 	case ARM64_SSBD_EL1_ENTRY:
 		if (required) {
 			__this_cpu_write(arm64_ssbd_callback_required, 1);
-			do_ssbd(true);
+			arm64_set_ssbd_mitigation(true);
 		}
 		break;
 
 	case ARM64_SSBD_FORCE_ENABLE:
 		pr_info_once("%s forced from command-line\n", entry->desc);
-		do_ssbd(true);
+		arm64_set_ssbd_mitigation(true);
 		required = true;
 		break;
 
diff --git a/arch/arm64/kernel/suspend.c b/arch/arm64/kernel/suspend.c
index a307b9e13392..70c283368b64 100644
--- a/arch/arm64/kernel/suspend.c
+++ b/arch/arm64/kernel/suspend.c
@@ -62,6 +62,14 @@ void notrace __cpu_suspend_exit(void)
 	 */
 	if (hw_breakpoint_restore)
 		hw_breakpoint_restore(cpu);
+
+	/*
+	 * On resume, firmware implementing dynamic mitigation will
+	 * have turned the mitigation on. If the user has forcefully
+	 * disabled it, make sure their wishes are obeyed.
+	 */
+	if (arm64_get_ssbd_state() == ARM64_SSBD_FORCE_DISABLE)
+		arm64_set_ssbd_mitigation(false);
 }
 
 /*
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 110+ messages in thread

* [PATCH 09/14] arm64: ssbd: Introduce thread flag to control userspace mitigation
  2018-05-22 15:06 ` Marc Zyngier
  (?)
@ 2018-05-22 15:06   ` Marc Zyngier
  -1 siblings, 0 replies; 110+ messages in thread
From: Marc Zyngier @ 2018-05-22 15:06 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel, kvmarm
  Cc: Will Deacon, Catalin Marinas, Thomas Gleixner, Andy Lutomirski,
	Kees Cook, Greg Kroah-Hartman, Christoffer Dall

In order to allow userspace to be mitigated on demand, let's
introduce a new thread flag that prevents the mitigation from
being turned off when exiting to userspace, and doesn't turn
it on on entry into the kernel (with the assumtion that the
mitigation is always enabled in the kernel itself).

This will be used by a prctl interface introduced in a later
patch.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/include/asm/thread_info.h | 1 +
 arch/arm64/kernel/entry.S            | 2 ++
 2 files changed, 3 insertions(+)

diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h
index 740aa03c5f0d..cbcf11b5e637 100644
--- a/arch/arm64/include/asm/thread_info.h
+++ b/arch/arm64/include/asm/thread_info.h
@@ -94,6 +94,7 @@ void arch_release_task_struct(struct task_struct *tsk);
 #define TIF_32BIT		22	/* 32bit process */
 #define TIF_SVE			23	/* Scalable Vector Extension in use */
 #define TIF_SVE_VL_INHERIT	24	/* Inherit sve_vl_onexec across exec */
+#define TIF_SSBD		25	/* Wants SSB mitigation */
 
 #define _TIF_SIGPENDING		(1 << TIF_SIGPENDING)
 #define _TIF_NEED_RESCHED	(1 << TIF_NEED_RESCHED)
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index e6f6e2339b22..28ad8799406f 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -147,6 +147,8 @@ alternative_cb	arm64_enable_wa2_handling
 alternative_cb_end
 	ldr_this_cpu	\tmp2, arm64_ssbd_callback_required, \tmp1
 	cbz	\tmp2, \targ
+	ldr	\tmp2, [tsk, #TSK_TI_FLAGS]
+	tbnz	\tmp2, #TIF_SSBD, \targ
 	mov	w0, #ARM_SMCCC_ARCH_WORKAROUND_2
 	mov	w1, #\state
 alternative_cb	arm64_update_smccc_conduit
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 110+ messages in thread

* [PATCH 09/14] arm64: ssbd: Introduce thread flag to control userspace mitigation
@ 2018-05-22 15:06   ` Marc Zyngier
  0 siblings, 0 replies; 110+ messages in thread
From: Marc Zyngier @ 2018-05-22 15:06 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel, kvmarm
  Cc: Kees Cook, Catalin Marinas, Will Deacon, Christoffer Dall,
	Andy Lutomirski, Greg Kroah-Hartman, Thomas Gleixner

In order to allow userspace to be mitigated on demand, let's
introduce a new thread flag that prevents the mitigation from
being turned off when exiting to userspace, and doesn't turn
it on on entry into the kernel (with the assumtion that the
mitigation is always enabled in the kernel itself).

This will be used by a prctl interface introduced in a later
patch.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/include/asm/thread_info.h | 1 +
 arch/arm64/kernel/entry.S            | 2 ++
 2 files changed, 3 insertions(+)

diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h
index 740aa03c5f0d..cbcf11b5e637 100644
--- a/arch/arm64/include/asm/thread_info.h
+++ b/arch/arm64/include/asm/thread_info.h
@@ -94,6 +94,7 @@ void arch_release_task_struct(struct task_struct *tsk);
 #define TIF_32BIT		22	/* 32bit process */
 #define TIF_SVE			23	/* Scalable Vector Extension in use */
 #define TIF_SVE_VL_INHERIT	24	/* Inherit sve_vl_onexec across exec */
+#define TIF_SSBD		25	/* Wants SSB mitigation */
 
 #define _TIF_SIGPENDING		(1 << TIF_SIGPENDING)
 #define _TIF_NEED_RESCHED	(1 << TIF_NEED_RESCHED)
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index e6f6e2339b22..28ad8799406f 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -147,6 +147,8 @@ alternative_cb	arm64_enable_wa2_handling
 alternative_cb_end
 	ldr_this_cpu	\tmp2, arm64_ssbd_callback_required, \tmp1
 	cbz	\tmp2, \targ
+	ldr	\tmp2, [tsk, #TSK_TI_FLAGS]
+	tbnz	\tmp2, #TIF_SSBD, \targ
 	mov	w0, #ARM_SMCCC_ARCH_WORKAROUND_2
 	mov	w1, #\state
 alternative_cb	arm64_update_smccc_conduit
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 110+ messages in thread

* [PATCH 09/14] arm64: ssbd: Introduce thread flag to control userspace mitigation
@ 2018-05-22 15:06   ` Marc Zyngier
  0 siblings, 0 replies; 110+ messages in thread
From: Marc Zyngier @ 2018-05-22 15:06 UTC (permalink / raw)
  To: linux-arm-kernel

In order to allow userspace to be mitigated on demand, let's
introduce a new thread flag that prevents the mitigation from
being turned off when exiting to userspace, and doesn't turn
it on on entry into the kernel (with the assumtion that the
mitigation is always enabled in the kernel itself).

This will be used by a prctl interface introduced in a later
patch.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/include/asm/thread_info.h | 1 +
 arch/arm64/kernel/entry.S            | 2 ++
 2 files changed, 3 insertions(+)

diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h
index 740aa03c5f0d..cbcf11b5e637 100644
--- a/arch/arm64/include/asm/thread_info.h
+++ b/arch/arm64/include/asm/thread_info.h
@@ -94,6 +94,7 @@ void arch_release_task_struct(struct task_struct *tsk);
 #define TIF_32BIT		22	/* 32bit process */
 #define TIF_SVE			23	/* Scalable Vector Extension in use */
 #define TIF_SVE_VL_INHERIT	24	/* Inherit sve_vl_onexec across exec */
+#define TIF_SSBD		25	/* Wants SSB mitigation */
 
 #define _TIF_SIGPENDING		(1 << TIF_SIGPENDING)
 #define _TIF_NEED_RESCHED	(1 << TIF_NEED_RESCHED)
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index e6f6e2339b22..28ad8799406f 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -147,6 +147,8 @@ alternative_cb	arm64_enable_wa2_handling
 alternative_cb_end
 	ldr_this_cpu	\tmp2, arm64_ssbd_callback_required, \tmp1
 	cbz	\tmp2, \targ
+	ldr	\tmp2, [tsk, #TSK_TI_FLAGS]
+	tbnz	\tmp2, #TIF_SSBD, \targ
 	mov	w0, #ARM_SMCCC_ARCH_WORKAROUND_2
 	mov	w1, #\state
 alternative_cb	arm64_update_smccc_conduit
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 110+ messages in thread

* [PATCH 10/14] arm64: ssbd: Add prctl interface for per-thread mitigation
  2018-05-22 15:06 ` Marc Zyngier
@ 2018-05-22 15:06   ` Marc Zyngier
  -1 siblings, 0 replies; 110+ messages in thread
From: Marc Zyngier @ 2018-05-22 15:06 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel, kvmarm
  Cc: Will Deacon, Catalin Marinas, Thomas Gleixner, Andy Lutomirski,
	Kees Cook, Greg Kroah-Hartman, Christoffer Dall

If running on a system that performs dynamic SSBD mitigation, allow
userspace to request the mitigation for itself. This is implemented
as a prctl call, allowing the mitigation to be enabled or disabled at
will for this particular thread.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/kernel/Makefile |   1 +
 arch/arm64/kernel/ssbd.c   | 107 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 108 insertions(+)
 create mode 100644 arch/arm64/kernel/ssbd.c

diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index bf825f38d206..0025f8691046 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -54,6 +54,7 @@ arm64-obj-$(CONFIG_ARM64_RELOC_TEST)	+= arm64-reloc-test.o
 arm64-reloc-test-y := reloc_test_core.o reloc_test_syms.o
 arm64-obj-$(CONFIG_CRASH_DUMP)		+= crash_dump.o
 arm64-obj-$(CONFIG_ARM_SDE_INTERFACE)	+= sdei.o
+arm64-obj-$(CONFIG_ARM64_SSBD)		+= ssbd.o
 
 obj-y					+= $(arm64-obj-y) vdso/ probes/
 obj-m					+= $(arm64-obj-m)
diff --git a/arch/arm64/kernel/ssbd.c b/arch/arm64/kernel/ssbd.c
new file mode 100644
index 000000000000..34e3c430176b
--- /dev/null
+++ b/arch/arm64/kernel/ssbd.c
@@ -0,0 +1,107 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2018 ARM Ltd, All Rights Reserved.
+ */
+
+#include <linux/sched.h>
+#include <linux/thread_info.h>
+
+#include <asm/cpufeature.h>
+
+/*
+ * prctl interface for SSBD
+ * FIXME: Drop the below ifdefery once the common interface has been merged.
+ */
+#ifdef PR_SPEC_STORE_BYPASS
+static int ssbd_prctl_set(struct task_struct *task, unsigned long ctrl)
+{
+	int state = arm64_get_ssbd_state();
+
+	/* Unsupported or already mitigated */
+	if (state == ARM64_SSBD_UNKNOWN)
+		return -EINVAL;
+	if (state == ARM64_SSBD_MITIGATED)
+		return -EPERM;
+
+	/*
+	 * Things are a bit backward here: the arm64 internal API
+	 * *enables the mitigation* when the userspace API *disables
+	 * speculation*. So much fun.
+	 */
+	switch (ctrl) {
+	case PR_SPEC_ENABLE:
+		/* If speculation is force disabled, enable is not allowed */
+		if (state == ARM64_SSBD_FORCE_ENABLE ||
+		    task_spec_ssb_force_disable(task))
+			return -EPERM;
+		task_clear_spec_ssb_disable(task);
+		clear_tsk_thread_flag(task, TIF_SSBD);
+		break;
+	case PR_SPEC_DISABLE:
+		if (state == ARM64_SSBD_FORCE_DISABLE)
+			return -EPERM;
+		task_set_spec_ssb_disable(task);
+		set_tsk_thread_flag(task, TIF_SSBD);
+		break;
+	case PR_SPEC_FORCE_DISABLE:
+		if (state == ARM64_SSBD_FORCE_DISABLE)
+			return -EPERM;
+		task_set_spec_ssb_disable(task);
+		task_set_spec_ssb_force_disable(task);
+		set_tsk_thread_flag(task, TIF_SSBD);
+		break;
+	default:
+		return -ERANGE;
+	}
+
+	return 0;
+}
+
+int arch_prctl_spec_ctrl_set(struct task_struct *task, unsigned long which,
+			     unsigned long ctrl)
+{
+	switch (which) {
+	case PR_SPEC_STORE_BYPASS:
+		return ssbd_prctl_set(task, ctrl);
+	default:
+		return -ENODEV;
+	}
+}
+
+#ifdef CONFIG_SECCOMP
+void arch_seccomp_spec_mitigate(struct task_struct *task)
+{
+	ssbd_prctl_set(task, PR_SPEC_FORCE_DISABLE);
+}
+#endif
+
+static int ssbd_prctl_get(struct task_struct *task)
+{
+	switch (arm64_get_ssbd_state()) {
+	case ARM64_SSBD_UNKNOWN:
+		return -EINVAL;
+	case ARM64_SSBD_FORCE_ENABLE:
+		return PR_SPEC_DISABLE;
+	case ARM64_SSBD_EL1_ENTRY:
+		if (task_spec_ssb_force_disable(task))
+			return PR_SPEC_PRCTL | PR_SPEC_FORCE_DISABLE;
+		if (task_spec_ssb_disable(task))
+			return PR_SPEC_PRCTL | PR_SPEC_DISABLE;
+		return PR_SPEC_PRCTL | PR_SPEC_ENABLE;
+	case ARM64_SSBD_FORCE_DISABLE:
+		return PR_SPEC_ENABLE;
+	default:
+		return PR_SPEC_NOT_AFFECTED;
+	}
+}
+
+int arch_prctl_spec_ctrl_get(struct task_struct *task, unsigned long which)
+{
+	switch (which) {
+	case PR_SPEC_STORE_BYPASS:
+		return ssbd_prctl_get(task);
+	default:
+		return -ENODEV;
+	}
+}
+#endif	/* PR_SPEC_STORE_BYPASS */
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 110+ messages in thread

* [PATCH 10/14] arm64: ssbd: Add prctl interface for per-thread mitigation
@ 2018-05-22 15:06   ` Marc Zyngier
  0 siblings, 0 replies; 110+ messages in thread
From: Marc Zyngier @ 2018-05-22 15:06 UTC (permalink / raw)
  To: linux-arm-kernel

If running on a system that performs dynamic SSBD mitigation, allow
userspace to request the mitigation for itself. This is implemented
as a prctl call, allowing the mitigation to be enabled or disabled at
will for this particular thread.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/kernel/Makefile |   1 +
 arch/arm64/kernel/ssbd.c   | 107 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 108 insertions(+)
 create mode 100644 arch/arm64/kernel/ssbd.c

diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index bf825f38d206..0025f8691046 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -54,6 +54,7 @@ arm64-obj-$(CONFIG_ARM64_RELOC_TEST)	+= arm64-reloc-test.o
 arm64-reloc-test-y := reloc_test_core.o reloc_test_syms.o
 arm64-obj-$(CONFIG_CRASH_DUMP)		+= crash_dump.o
 arm64-obj-$(CONFIG_ARM_SDE_INTERFACE)	+= sdei.o
+arm64-obj-$(CONFIG_ARM64_SSBD)		+= ssbd.o
 
 obj-y					+= $(arm64-obj-y) vdso/ probes/
 obj-m					+= $(arm64-obj-m)
diff --git a/arch/arm64/kernel/ssbd.c b/arch/arm64/kernel/ssbd.c
new file mode 100644
index 000000000000..34e3c430176b
--- /dev/null
+++ b/arch/arm64/kernel/ssbd.c
@@ -0,0 +1,107 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2018 ARM Ltd, All Rights Reserved.
+ */
+
+#include <linux/sched.h>
+#include <linux/thread_info.h>
+
+#include <asm/cpufeature.h>
+
+/*
+ * prctl interface for SSBD
+ * FIXME: Drop the below ifdefery once the common interface has been merged.
+ */
+#ifdef PR_SPEC_STORE_BYPASS
+static int ssbd_prctl_set(struct task_struct *task, unsigned long ctrl)
+{
+	int state = arm64_get_ssbd_state();
+
+	/* Unsupported or already mitigated */
+	if (state == ARM64_SSBD_UNKNOWN)
+		return -EINVAL;
+	if (state == ARM64_SSBD_MITIGATED)
+		return -EPERM;
+
+	/*
+	 * Things are a bit backward here: the arm64 internal API
+	 * *enables the mitigation* when the userspace API *disables
+	 * speculation*. So much fun.
+	 */
+	switch (ctrl) {
+	case PR_SPEC_ENABLE:
+		/* If speculation is force disabled, enable is not allowed */
+		if (state == ARM64_SSBD_FORCE_ENABLE ||
+		    task_spec_ssb_force_disable(task))
+			return -EPERM;
+		task_clear_spec_ssb_disable(task);
+		clear_tsk_thread_flag(task, TIF_SSBD);
+		break;
+	case PR_SPEC_DISABLE:
+		if (state == ARM64_SSBD_FORCE_DISABLE)
+			return -EPERM;
+		task_set_spec_ssb_disable(task);
+		set_tsk_thread_flag(task, TIF_SSBD);
+		break;
+	case PR_SPEC_FORCE_DISABLE:
+		if (state == ARM64_SSBD_FORCE_DISABLE)
+			return -EPERM;
+		task_set_spec_ssb_disable(task);
+		task_set_spec_ssb_force_disable(task);
+		set_tsk_thread_flag(task, TIF_SSBD);
+		break;
+	default:
+		return -ERANGE;
+	}
+
+	return 0;
+}
+
+int arch_prctl_spec_ctrl_set(struct task_struct *task, unsigned long which,
+			     unsigned long ctrl)
+{
+	switch (which) {
+	case PR_SPEC_STORE_BYPASS:
+		return ssbd_prctl_set(task, ctrl);
+	default:
+		return -ENODEV;
+	}
+}
+
+#ifdef CONFIG_SECCOMP
+void arch_seccomp_spec_mitigate(struct task_struct *task)
+{
+	ssbd_prctl_set(task, PR_SPEC_FORCE_DISABLE);
+}
+#endif
+
+static int ssbd_prctl_get(struct task_struct *task)
+{
+	switch (arm64_get_ssbd_state()) {
+	case ARM64_SSBD_UNKNOWN:
+		return -EINVAL;
+	case ARM64_SSBD_FORCE_ENABLE:
+		return PR_SPEC_DISABLE;
+	case ARM64_SSBD_EL1_ENTRY:
+		if (task_spec_ssb_force_disable(task))
+			return PR_SPEC_PRCTL | PR_SPEC_FORCE_DISABLE;
+		if (task_spec_ssb_disable(task))
+			return PR_SPEC_PRCTL | PR_SPEC_DISABLE;
+		return PR_SPEC_PRCTL | PR_SPEC_ENABLE;
+	case ARM64_SSBD_FORCE_DISABLE:
+		return PR_SPEC_ENABLE;
+	default:
+		return PR_SPEC_NOT_AFFECTED;
+	}
+}
+
+int arch_prctl_spec_ctrl_get(struct task_struct *task, unsigned long which)
+{
+	switch (which) {
+	case PR_SPEC_STORE_BYPASS:
+		return ssbd_prctl_get(task);
+	default:
+		return -ENODEV;
+	}
+}
+#endif	/* PR_SPEC_STORE_BYPASS */
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 110+ messages in thread

* [PATCH 11/14] arm64: KVM: Add HYP per-cpu accessors
  2018-05-22 15:06 ` Marc Zyngier
@ 2018-05-22 15:06   ` Marc Zyngier
  -1 siblings, 0 replies; 110+ messages in thread
From: Marc Zyngier @ 2018-05-22 15:06 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel, kvmarm
  Cc: Will Deacon, Catalin Marinas, Thomas Gleixner, Andy Lutomirski,
	Kees Cook, Greg Kroah-Hartman, Christoffer Dall

As we're going to require to access per-cpu variables at EL2,
let's craft the minimum set of accessors required to implement
reading a per-cpu variable, relying on tpidr_el2 to contain the
per-cpu offset.

Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/include/asm/kvm_asm.h | 27 +++++++++++++++++++++++++--
 1 file changed, 25 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index f6648a3e4152..fefd8cf42c35 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -71,14 +71,37 @@ extern u32 __kvm_get_mdcr_el2(void);
 
 extern u32 __init_stage2_translation(void);
 
+/* Home-grown __this_cpu_{ptr,read} variants that always work at HYP */
+#define __hyp_this_cpu_ptr(sym)						\
+	({								\
+		void *__ptr = hyp_symbol_addr(sym);			\
+		__ptr += read_sysreg(tpidr_el2);			\
+		(typeof(&sym))__ptr;					\
+	 })
+
+#define __hyp_this_cpu_read(sym)					\
+	({								\
+		*__hyp_this_cpu_ptr(sym);				\
+	 })
+
 #else /* __ASSEMBLY__ */
 
-.macro get_host_ctxt reg, tmp
-	adr_l	\reg, kvm_host_cpu_state
+.macro hyp_adr_this_cpu reg, sym, tmp
+	adr_l	\reg, \sym
 	mrs	\tmp, tpidr_el2
 	add	\reg, \reg, \tmp
 .endm
 
+.macro hyp_ldr_this_cpu reg, sym, tmp
+	adr_l	\reg, \sym
+	mrs	\tmp, tpidr_el2
+	ldr	\reg,  [\reg, \tmp]
+.endm
+
+.macro get_host_ctxt reg, tmp
+	hyp_adr_this_cpu \reg, kvm_host_cpu_state, \tmp
+.endm
+
 .macro get_vcpu_ptr vcpu, ctxt
 	get_host_ctxt \ctxt, \vcpu
 	ldr	\vcpu, [\ctxt, #HOST_CONTEXT_VCPU]
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 110+ messages in thread

* [PATCH 11/14] arm64: KVM: Add HYP per-cpu accessors
@ 2018-05-22 15:06   ` Marc Zyngier
  0 siblings, 0 replies; 110+ messages in thread
From: Marc Zyngier @ 2018-05-22 15:06 UTC (permalink / raw)
  To: linux-arm-kernel

As we're going to require to access per-cpu variables at EL2,
let's craft the minimum set of accessors required to implement
reading a per-cpu variable, relying on tpidr_el2 to contain the
per-cpu offset.

Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/include/asm/kvm_asm.h | 27 +++++++++++++++++++++++++--
 1 file changed, 25 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index f6648a3e4152..fefd8cf42c35 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -71,14 +71,37 @@ extern u32 __kvm_get_mdcr_el2(void);
 
 extern u32 __init_stage2_translation(void);
 
+/* Home-grown __this_cpu_{ptr,read} variants that always work at HYP */
+#define __hyp_this_cpu_ptr(sym)						\
+	({								\
+		void *__ptr = hyp_symbol_addr(sym);			\
+		__ptr += read_sysreg(tpidr_el2);			\
+		(typeof(&sym))__ptr;					\
+	 })
+
+#define __hyp_this_cpu_read(sym)					\
+	({								\
+		*__hyp_this_cpu_ptr(sym);				\
+	 })
+
 #else /* __ASSEMBLY__ */
 
-.macro get_host_ctxt reg, tmp
-	adr_l	\reg, kvm_host_cpu_state
+.macro hyp_adr_this_cpu reg, sym, tmp
+	adr_l	\reg, \sym
 	mrs	\tmp, tpidr_el2
 	add	\reg, \reg, \tmp
 .endm
 
+.macro hyp_ldr_this_cpu reg, sym, tmp
+	adr_l	\reg, \sym
+	mrs	\tmp, tpidr_el2
+	ldr	\reg,  [\reg, \tmp]
+.endm
+
+.macro get_host_ctxt reg, tmp
+	hyp_adr_this_cpu \reg, kvm_host_cpu_state, \tmp
+.endm
+
 .macro get_vcpu_ptr vcpu, ctxt
 	get_host_ctxt \ctxt, \vcpu
 	ldr	\vcpu, [\ctxt, #HOST_CONTEXT_VCPU]
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 110+ messages in thread

* [PATCH 12/14] arm64: KVM: Add ARCH_WORKAROUND_2 support for guests
  2018-05-22 15:06 ` Marc Zyngier
@ 2018-05-22 15:06   ` Marc Zyngier
  -1 siblings, 0 replies; 110+ messages in thread
From: Marc Zyngier @ 2018-05-22 15:06 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel, kvmarm
  Cc: Will Deacon, Catalin Marinas, Thomas Gleixner, Andy Lutomirski,
	Kees Cook, Greg Kroah-Hartman, Christoffer Dall

In order to offer ARCH_WORKAROUND_2 support to guests, we need
a bit of infrastructure.

Let's add a flag indicating whether or not the guest uses
SSBD mitigation. Depending on the state of this flag, allow
KVM to disable ARCH_WORKAROUND_2 before entering the guest,
and enable it when exiting it.

Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm/include/asm/kvm_mmu.h    |  5 +++++
 arch/arm64/include/asm/kvm_asm.h  |  3 +++
 arch/arm64/include/asm/kvm_host.h |  3 +++
 arch/arm64/include/asm/kvm_mmu.h  | 24 ++++++++++++++++++++++
 arch/arm64/kvm/hyp/switch.c       | 42 +++++++++++++++++++++++++++++++++++++++
 virt/kvm/arm/arm.c                |  4 ++++
 6 files changed, 81 insertions(+)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 707a1f06dc5d..b0c17d88ed40 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -319,6 +319,11 @@ static inline int kvm_map_vectors(void)
 	return 0;
 }
 
+static inline int hyp_map_aux_data(void)
+{
+	return 0;
+}
+
 #define kvm_phys_to_vttbr(addr)		(addr)
 
 #endif	/* !__ASSEMBLY__ */
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index fefd8cf42c35..d4fbb1356c4c 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -33,6 +33,9 @@
 #define KVM_ARM64_DEBUG_DIRTY_SHIFT	0
 #define KVM_ARM64_DEBUG_DIRTY		(1 << KVM_ARM64_DEBUG_DIRTY_SHIFT)
 
+#define	VCPU_WORKAROUND_2_FLAG_SHIFT	0
+#define	VCPU_WORKAROUND_2_FLAG		(_AC(1, UL) << VCPU_WORKAROUND_2_FLAG_SHIFT)
+
 /* Translate a kernel address of @sym into its equivalent linear mapping */
 #define kvm_ksym_ref(sym)						\
 	({								\
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 469de8acd06f..9bef3f69bdcd 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -216,6 +216,9 @@ struct kvm_vcpu_arch {
 	/* Exception Information */
 	struct kvm_vcpu_fault_info fault;
 
+	/* State of various workarounds, see kvm_asm.h for bit assignment */
+	u64 workaround_flags;
+
 	/* Guest debug state */
 	u64 debug_flags;
 
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 082110993647..eb7a5c2a2bfb 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -457,6 +457,30 @@ static inline int kvm_map_vectors(void)
 }
 #endif
 
+#ifdef CONFIG_ARM64_SSBD
+DECLARE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
+
+static inline int hyp_map_aux_data(void)
+{
+	int cpu, err;
+
+	for_each_possible_cpu(cpu) {
+		u64 *ptr;
+
+		ptr = per_cpu_ptr(&arm64_ssbd_callback_required, cpu);
+		err = create_hyp_mappings(ptr, ptr + 1, PAGE_HYP);
+		if (err)
+			return err;
+	}
+	return 0;
+}
+#else
+static inline int hyp_map_aux_data(void)
+{
+	return 0;
+}
+#endif
+
 #define kvm_phys_to_vttbr(addr)		phys_to_ttbr(addr)
 
 #endif /* __ASSEMBLY__ */
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index d9645236e474..c50cedc447f1 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -15,6 +15,7 @@
  * along with this program.  If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include <linux/arm-smccc.h>
 #include <linux/types.h>
 #include <linux/jump_label.h>
 #include <uapi/linux/psci.h>
@@ -389,6 +390,39 @@ static bool __hyp_text fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 	return false;
 }
 
+static inline bool __hyp_text __needs_ssbd_off(struct kvm_vcpu *vcpu)
+{
+	if (!cpus_have_const_cap(ARM64_SSBD))
+		return false;
+
+	return !(vcpu->arch.workaround_flags & VCPU_WORKAROUND_2_FLAG);
+}
+
+static void __hyp_text __set_guest_arch_workaround_state(struct kvm_vcpu *vcpu)
+{
+#ifdef CONFIG_ARM64_SSBD
+	/*
+	 * The host runs with the workaround always present. If the
+	 * guest wants it disabled, so be it...
+	 */
+	if (__needs_ssbd_off(vcpu) &&
+	    __hyp_this_cpu_read(arm64_ssbd_callback_required))
+		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2, 0, NULL);
+#endif
+}
+
+static void __hyp_text __set_host_arch_workaround_state(struct kvm_vcpu *vcpu)
+{
+#ifdef CONFIG_ARM64_SSBD
+	/*
+	 * If the guest has disabled the workaround, bring it back on.
+	 */
+	if (__needs_ssbd_off(vcpu) &&
+	    __hyp_this_cpu_read(arm64_ssbd_callback_required))
+		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2, 1, NULL);
+#endif
+}
+
 /* Switch to the guest for VHE systems running in EL2 */
 int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 {
@@ -409,6 +443,8 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 	sysreg_restore_guest_state_vhe(guest_ctxt);
 	__debug_switch_to_guest(vcpu);
 
+	__set_guest_arch_workaround_state(vcpu);
+
 	do {
 		/* Jump in the fire! */
 		exit_code = __guest_enter(vcpu, host_ctxt);
@@ -416,6 +452,8 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 		/* And we're baaack! */
 	} while (fixup_guest_exit(vcpu, &exit_code));
 
+	__set_host_arch_workaround_state(vcpu);
+
 	fp_enabled = fpsimd_enabled_vhe();
 
 	sysreg_save_guest_state_vhe(guest_ctxt);
@@ -465,6 +503,8 @@ int __hyp_text __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu)
 	__sysreg_restore_state_nvhe(guest_ctxt);
 	__debug_switch_to_guest(vcpu);
 
+	__set_guest_arch_workaround_state(vcpu);
+
 	do {
 		/* Jump in the fire! */
 		exit_code = __guest_enter(vcpu, host_ctxt);
@@ -472,6 +512,8 @@ int __hyp_text __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu)
 		/* And we're baaack! */
 	} while (fixup_guest_exit(vcpu, &exit_code));
 
+	__set_host_arch_workaround_state(vcpu);
+
 	fp_enabled = __fpsimd_enabled_nvhe();
 
 	__sysreg_save_state_nvhe(guest_ctxt);
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index a4c1b76240df..2d9b4795edb2 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -1490,6 +1490,10 @@ static int init_hyp_mode(void)
 		}
 	}
 
+	err = hyp_map_aux_data();
+	if (err)
+		kvm_err("Cannot map host auxilary data: %d\n", err);
+
 	return 0;
 
 out_err:
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 110+ messages in thread

* [PATCH 12/14] arm64: KVM: Add ARCH_WORKAROUND_2 support for guests
@ 2018-05-22 15:06   ` Marc Zyngier
  0 siblings, 0 replies; 110+ messages in thread
From: Marc Zyngier @ 2018-05-22 15:06 UTC (permalink / raw)
  To: linux-arm-kernel

In order to offer ARCH_WORKAROUND_2 support to guests, we need
a bit of infrastructure.

Let's add a flag indicating whether or not the guest uses
SSBD mitigation. Depending on the state of this flag, allow
KVM to disable ARCH_WORKAROUND_2 before entering the guest,
and enable it when exiting it.

Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm/include/asm/kvm_mmu.h    |  5 +++++
 arch/arm64/include/asm/kvm_asm.h  |  3 +++
 arch/arm64/include/asm/kvm_host.h |  3 +++
 arch/arm64/include/asm/kvm_mmu.h  | 24 ++++++++++++++++++++++
 arch/arm64/kvm/hyp/switch.c       | 42 +++++++++++++++++++++++++++++++++++++++
 virt/kvm/arm/arm.c                |  4 ++++
 6 files changed, 81 insertions(+)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 707a1f06dc5d..b0c17d88ed40 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -319,6 +319,11 @@ static inline int kvm_map_vectors(void)
 	return 0;
 }
 
+static inline int hyp_map_aux_data(void)
+{
+	return 0;
+}
+
 #define kvm_phys_to_vttbr(addr)		(addr)
 
 #endif	/* !__ASSEMBLY__ */
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index fefd8cf42c35..d4fbb1356c4c 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -33,6 +33,9 @@
 #define KVM_ARM64_DEBUG_DIRTY_SHIFT	0
 #define KVM_ARM64_DEBUG_DIRTY		(1 << KVM_ARM64_DEBUG_DIRTY_SHIFT)
 
+#define	VCPU_WORKAROUND_2_FLAG_SHIFT	0
+#define	VCPU_WORKAROUND_2_FLAG		(_AC(1, UL) << VCPU_WORKAROUND_2_FLAG_SHIFT)
+
 /* Translate a kernel address of @sym into its equivalent linear mapping */
 #define kvm_ksym_ref(sym)						\
 	({								\
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 469de8acd06f..9bef3f69bdcd 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -216,6 +216,9 @@ struct kvm_vcpu_arch {
 	/* Exception Information */
 	struct kvm_vcpu_fault_info fault;
 
+	/* State of various workarounds, see kvm_asm.h for bit assignment */
+	u64 workaround_flags;
+
 	/* Guest debug state */
 	u64 debug_flags;
 
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 082110993647..eb7a5c2a2bfb 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -457,6 +457,30 @@ static inline int kvm_map_vectors(void)
 }
 #endif
 
+#ifdef CONFIG_ARM64_SSBD
+DECLARE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
+
+static inline int hyp_map_aux_data(void)
+{
+	int cpu, err;
+
+	for_each_possible_cpu(cpu) {
+		u64 *ptr;
+
+		ptr = per_cpu_ptr(&arm64_ssbd_callback_required, cpu);
+		err = create_hyp_mappings(ptr, ptr + 1, PAGE_HYP);
+		if (err)
+			return err;
+	}
+	return 0;
+}
+#else
+static inline int hyp_map_aux_data(void)
+{
+	return 0;
+}
+#endif
+
 #define kvm_phys_to_vttbr(addr)		phys_to_ttbr(addr)
 
 #endif /* __ASSEMBLY__ */
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index d9645236e474..c50cedc447f1 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -15,6 +15,7 @@
  * along with this program.  If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include <linux/arm-smccc.h>
 #include <linux/types.h>
 #include <linux/jump_label.h>
 #include <uapi/linux/psci.h>
@@ -389,6 +390,39 @@ static bool __hyp_text fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 	return false;
 }
 
+static inline bool __hyp_text __needs_ssbd_off(struct kvm_vcpu *vcpu)
+{
+	if (!cpus_have_const_cap(ARM64_SSBD))
+		return false;
+
+	return !(vcpu->arch.workaround_flags & VCPU_WORKAROUND_2_FLAG);
+}
+
+static void __hyp_text __set_guest_arch_workaround_state(struct kvm_vcpu *vcpu)
+{
+#ifdef CONFIG_ARM64_SSBD
+	/*
+	 * The host runs with the workaround always present. If the
+	 * guest wants it disabled, so be it...
+	 */
+	if (__needs_ssbd_off(vcpu) &&
+	    __hyp_this_cpu_read(arm64_ssbd_callback_required))
+		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2, 0, NULL);
+#endif
+}
+
+static void __hyp_text __set_host_arch_workaround_state(struct kvm_vcpu *vcpu)
+{
+#ifdef CONFIG_ARM64_SSBD
+	/*
+	 * If the guest has disabled the workaround, bring it back on.
+	 */
+	if (__needs_ssbd_off(vcpu) &&
+	    __hyp_this_cpu_read(arm64_ssbd_callback_required))
+		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2, 1, NULL);
+#endif
+}
+
 /* Switch to the guest for VHE systems running in EL2 */
 int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 {
@@ -409,6 +443,8 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 	sysreg_restore_guest_state_vhe(guest_ctxt);
 	__debug_switch_to_guest(vcpu);
 
+	__set_guest_arch_workaround_state(vcpu);
+
 	do {
 		/* Jump in the fire! */
 		exit_code = __guest_enter(vcpu, host_ctxt);
@@ -416,6 +452,8 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 		/* And we're baaack! */
 	} while (fixup_guest_exit(vcpu, &exit_code));
 
+	__set_host_arch_workaround_state(vcpu);
+
 	fp_enabled = fpsimd_enabled_vhe();
 
 	sysreg_save_guest_state_vhe(guest_ctxt);
@@ -465,6 +503,8 @@ int __hyp_text __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu)
 	__sysreg_restore_state_nvhe(guest_ctxt);
 	__debug_switch_to_guest(vcpu);
 
+	__set_guest_arch_workaround_state(vcpu);
+
 	do {
 		/* Jump in the fire! */
 		exit_code = __guest_enter(vcpu, host_ctxt);
@@ -472,6 +512,8 @@ int __hyp_text __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu)
 		/* And we're baaack! */
 	} while (fixup_guest_exit(vcpu, &exit_code));
 
+	__set_host_arch_workaround_state(vcpu);
+
 	fp_enabled = __fpsimd_enabled_nvhe();
 
 	__sysreg_save_state_nvhe(guest_ctxt);
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index a4c1b76240df..2d9b4795edb2 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -1490,6 +1490,10 @@ static int init_hyp_mode(void)
 		}
 	}
 
+	err = hyp_map_aux_data();
+	if (err)
+		kvm_err("Cannot map host auxilary data: %d\n", err);
+
 	return 0;
 
 out_err:
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 110+ messages in thread

* [PATCH 13/14] arm64: KVM: Handle guest's ARCH_WORKAROUND_2 requests
  2018-05-22 15:06 ` Marc Zyngier
@ 2018-05-22 15:06   ` Marc Zyngier
  -1 siblings, 0 replies; 110+ messages in thread
From: Marc Zyngier @ 2018-05-22 15:06 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel, kvmarm
  Cc: Will Deacon, Catalin Marinas, Thomas Gleixner, Andy Lutomirski,
	Kees Cook, Greg Kroah-Hartman, Christoffer Dall

In order to forward the guest's ARCH_WORKAROUND_2 calls to EL3,
add a small(-ish) sequence to handle it at EL2. Special care must
be taken to track the state of the guest itself by updating the
workaround flags. We also rely on patching to enable calls into
the firmware.

Note that since we need to execute branches, this always executes
after the Spectre-v2 mitigation has been applied.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/kernel/asm-offsets.c |  1 +
 arch/arm64/kvm/hyp/hyp-entry.S  | 38 +++++++++++++++++++++++++++++++++++++-
 2 files changed, 38 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 5bdda651bd05..323aeb5f2fe6 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -136,6 +136,7 @@ int main(void)
 #ifdef CONFIG_KVM_ARM_HOST
   DEFINE(VCPU_CONTEXT,		offsetof(struct kvm_vcpu, arch.ctxt));
   DEFINE(VCPU_FAULT_DISR,	offsetof(struct kvm_vcpu, arch.fault.disr_el1));
+  DEFINE(VCPU_WORKAROUND_FLAGS,	offsetof(struct kvm_vcpu, arch.workaround_flags));
   DEFINE(CPU_GP_REGS,		offsetof(struct kvm_cpu_context, gp_regs));
   DEFINE(CPU_USER_PT_REGS,	offsetof(struct kvm_regs, regs));
   DEFINE(CPU_FP_REGS,		offsetof(struct kvm_regs, fp_regs));
diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
index bffece27b5c1..5b1fa37ca1f4 100644
--- a/arch/arm64/kvm/hyp/hyp-entry.S
+++ b/arch/arm64/kvm/hyp/hyp-entry.S
@@ -106,8 +106,44 @@ el1_hvc_guest:
 	 */
 	ldr	x1, [sp]				// Guest's x0
 	eor	w1, w1, #ARM_SMCCC_ARCH_WORKAROUND_1
+	cbz	w1, wa_epilogue
+
+	/* ARM_SMCCC_ARCH_WORKAROUND_2 handling */
+	eor	w1, w1, #(ARM_SMCCC_ARCH_WORKAROUND_1 ^ \
+			  ARM_SMCCC_ARCH_WORKAROUND_2)
 	cbnz	w1, el1_trap
-	mov	x0, x1
+
+#ifdef CONFIG_ARM64_SSBD
+alternative_cb	arm64_enable_wa2_handling
+	b	wa2_end
+alternative_cb_end
+	get_vcpu_ptr	x2, x0
+	ldr	x0, [x2, #VCPU_WORKAROUND_FLAGS]
+
+	/* Sanitize the argument and update the guest flags*/
+	ldr	x1, [sp, #8]			// Guest's x1
+	clz	w1, w1				// Murphy's device:
+	lsr	w1, w1, #5			// w1 = !!w1 without using
+	eor	w1, w1, #1			// the flags...
+	bfi	x0, x1, #VCPU_WORKAROUND_2_FLAG_SHIFT, #1
+	str	x0, [x2, #VCPU_WORKAROUND_FLAGS]
+
+	/* Check that we actually need to perform the call */
+	hyp_ldr_this_cpu x0, arm64_ssbd_callback_required, x2
+	cbz	x0, wa2_end
+
+	mov	w0, #ARM_SMCCC_ARCH_WORKAROUND_2
+	smc	#0
+
+	/* Don't leak data from the SMC call */
+	mov	x3, xzr
+wa2_end:
+	mov	x2, xzr
+	mov	x1, xzr
+#endif
+
+wa_epilogue:
+	mov	x0, xzr
 	add	sp, sp, #16
 	eret
 
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 110+ messages in thread

* [PATCH 13/14] arm64: KVM: Handle guest's ARCH_WORKAROUND_2 requests
@ 2018-05-22 15:06   ` Marc Zyngier
  0 siblings, 0 replies; 110+ messages in thread
From: Marc Zyngier @ 2018-05-22 15:06 UTC (permalink / raw)
  To: linux-arm-kernel

In order to forward the guest's ARCH_WORKAROUND_2 calls to EL3,
add a small(-ish) sequence to handle it at EL2. Special care must
be taken to track the state of the guest itself by updating the
workaround flags. We also rely on patching to enable calls into
the firmware.

Note that since we need to execute branches, this always executes
after the Spectre-v2 mitigation has been applied.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/kernel/asm-offsets.c |  1 +
 arch/arm64/kvm/hyp/hyp-entry.S  | 38 +++++++++++++++++++++++++++++++++++++-
 2 files changed, 38 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 5bdda651bd05..323aeb5f2fe6 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -136,6 +136,7 @@ int main(void)
 #ifdef CONFIG_KVM_ARM_HOST
   DEFINE(VCPU_CONTEXT,		offsetof(struct kvm_vcpu, arch.ctxt));
   DEFINE(VCPU_FAULT_DISR,	offsetof(struct kvm_vcpu, arch.fault.disr_el1));
+  DEFINE(VCPU_WORKAROUND_FLAGS,	offsetof(struct kvm_vcpu, arch.workaround_flags));
   DEFINE(CPU_GP_REGS,		offsetof(struct kvm_cpu_context, gp_regs));
   DEFINE(CPU_USER_PT_REGS,	offsetof(struct kvm_regs, regs));
   DEFINE(CPU_FP_REGS,		offsetof(struct kvm_regs, fp_regs));
diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
index bffece27b5c1..5b1fa37ca1f4 100644
--- a/arch/arm64/kvm/hyp/hyp-entry.S
+++ b/arch/arm64/kvm/hyp/hyp-entry.S
@@ -106,8 +106,44 @@ el1_hvc_guest:
 	 */
 	ldr	x1, [sp]				// Guest's x0
 	eor	w1, w1, #ARM_SMCCC_ARCH_WORKAROUND_1
+	cbz	w1, wa_epilogue
+
+	/* ARM_SMCCC_ARCH_WORKAROUND_2 handling */
+	eor	w1, w1, #(ARM_SMCCC_ARCH_WORKAROUND_1 ^ \
+			  ARM_SMCCC_ARCH_WORKAROUND_2)
 	cbnz	w1, el1_trap
-	mov	x0, x1
+
+#ifdef CONFIG_ARM64_SSBD
+alternative_cb	arm64_enable_wa2_handling
+	b	wa2_end
+alternative_cb_end
+	get_vcpu_ptr	x2, x0
+	ldr	x0, [x2, #VCPU_WORKAROUND_FLAGS]
+
+	/* Sanitize the argument and update the guest flags*/
+	ldr	x1, [sp, #8]			// Guest's x1
+	clz	w1, w1				// Murphy's device:
+	lsr	w1, w1, #5			// w1 = !!w1 without using
+	eor	w1, w1, #1			// the flags...
+	bfi	x0, x1, #VCPU_WORKAROUND_2_FLAG_SHIFT, #1
+	str	x0, [x2, #VCPU_WORKAROUND_FLAGS]
+
+	/* Check that we actually need to perform the call */
+	hyp_ldr_this_cpu x0, arm64_ssbd_callback_required, x2
+	cbz	x0, wa2_end
+
+	mov	w0, #ARM_SMCCC_ARCH_WORKAROUND_2
+	smc	#0
+
+	/* Don't leak data from the SMC call */
+	mov	x3, xzr
+wa2_end:
+	mov	x2, xzr
+	mov	x1, xzr
+#endif
+
+wa_epilogue:
+	mov	x0, xzr
 	add	sp, sp, #16
 	eret
 
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 110+ messages in thread

* [PATCH 14/14] arm64: KVM: Add ARCH_WORKAROUND_2 discovery through ARCH_FEATURES_FUNC_ID
  2018-05-22 15:06 ` Marc Zyngier
@ 2018-05-22 15:06   ` Marc Zyngier
  -1 siblings, 0 replies; 110+ messages in thread
From: Marc Zyngier @ 2018-05-22 15:06 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel, kvmarm
  Cc: Will Deacon, Catalin Marinas, Thomas Gleixner, Andy Lutomirski,
	Kees Cook, Greg Kroah-Hartman, Christoffer Dall

Now that all our infrastructure is in place, let's expose the
availability of ARCH_WORKAROUND_2 to guests. We take this opportunity
to tidy up a couple of SMCCC constants.

Acked-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm/include/asm/kvm_host.h   | 12 ++++++++++++
 arch/arm64/include/asm/kvm_host.h | 23 +++++++++++++++++++++++
 arch/arm64/kvm/reset.c            |  4 ++++
 virt/kvm/arm/psci.c               | 18 ++++++++++++++++--
 4 files changed, 55 insertions(+), 2 deletions(-)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index c7c28c885a19..d478766b56c1 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -315,6 +315,18 @@ static inline bool kvm_arm_harden_branch_predictor(void)
 	return false;
 }
 
+#define KVM_SSBD_UNKNOWN		-1
+#define KVM_SSBD_FORCE_DISABLE		0
+#define KVM_SSBD_EL1_ENTRY		1
+#define KVM_SSBD_FORCE_ENABLE		2
+#define KVM_SSBD_MITIGATED		3
+
+static inline int kvm_arm_have_ssbd(void)
+{
+	/* No way to detect it yet, pretend it is not there. */
+	return KVM_SSBD_UNKNOWN;
+}
+
 static inline void kvm_vcpu_load_sysregs(struct kvm_vcpu *vcpu) {}
 static inline void kvm_vcpu_put_sysregs(struct kvm_vcpu *vcpu) {}
 
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 9bef3f69bdcd..082b0dbb85c6 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -455,6 +455,29 @@ static inline bool kvm_arm_harden_branch_predictor(void)
 	return cpus_have_const_cap(ARM64_HARDEN_BRANCH_PREDICTOR);
 }
 
+#define KVM_SSBD_UNKNOWN		-1
+#define KVM_SSBD_FORCE_DISABLE		0
+#define KVM_SSBD_EL1_ENTRY		1
+#define KVM_SSBD_FORCE_ENABLE		2
+#define KVM_SSBD_MITIGATED		3
+
+static inline int kvm_arm_have_ssbd(void)
+{
+	switch (arm64_get_ssbd_state()) {
+	case ARM64_SSBD_FORCE_DISABLE:
+		return KVM_SSBD_FORCE_DISABLE;
+	case ARM64_SSBD_EL1_ENTRY:
+		return KVM_SSBD_EL1_ENTRY;
+	case ARM64_SSBD_FORCE_ENABLE:
+		return KVM_SSBD_FORCE_ENABLE;
+	case ARM64_SSBD_MITIGATED:
+		return KVM_SSBD_MITIGATED;
+	case ARM64_SSBD_UNKNOWN:
+	default:
+		return KVM_SSBD_UNKNOWN;
+	}
+}
+
 void kvm_vcpu_load_sysregs(struct kvm_vcpu *vcpu);
 void kvm_vcpu_put_sysregs(struct kvm_vcpu *vcpu);
 
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index 3256b9228e75..20a7dfee8494 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -122,6 +122,10 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 	/* Reset PMU */
 	kvm_pmu_vcpu_reset(vcpu);
 
+	/* Default workaround setup is enabled (if supported) */
+	if (kvm_arm_have_ssbd() == KVM_SSBD_EL1_ENTRY)
+		vcpu->arch.workaround_flags |= VCPU_WORKAROUND_2_FLAG;
+
 	/* Reset timer */
 	return kvm_timer_vcpu_reset(vcpu);
 }
diff --git a/virt/kvm/arm/psci.c b/virt/kvm/arm/psci.c
index c4762bef13c6..4843bfa1f986 100644
--- a/virt/kvm/arm/psci.c
+++ b/virt/kvm/arm/psci.c
@@ -405,7 +405,7 @@ static int kvm_psci_call(struct kvm_vcpu *vcpu)
 int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
 {
 	u32 func_id = smccc_get_function(vcpu);
-	u32 val = PSCI_RET_NOT_SUPPORTED;
+	u32 val = SMCCC_RET_NOT_SUPPORTED;
 	u32 feature;
 
 	switch (func_id) {
@@ -417,7 +417,21 @@ int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
 		switch(feature) {
 		case ARM_SMCCC_ARCH_WORKAROUND_1:
 			if (kvm_arm_harden_branch_predictor())
-				val = 0;
+				val = SMCCC_RET_SUCCESS;
+			break;
+		case ARM_SMCCC_ARCH_WORKAROUND_2:
+			switch (kvm_arm_have_ssbd()) {
+			case KVM_SSBD_FORCE_DISABLE:
+			case KVM_SSBD_UNKNOWN:
+				break;
+			case KVM_SSBD_EL1_ENTRY:
+				val = SMCCC_RET_SUCCESS;
+				break;
+			case KVM_SSBD_FORCE_ENABLE:
+			case KVM_SSBD_MITIGATED:
+				val = SMCCC_RET_NOT_REQUIRED;
+				break;
+			}
 			break;
 		}
 		break;
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 110+ messages in thread

* [PATCH 14/14] arm64: KVM: Add ARCH_WORKAROUND_2 discovery through ARCH_FEATURES_FUNC_ID
@ 2018-05-22 15:06   ` Marc Zyngier
  0 siblings, 0 replies; 110+ messages in thread
From: Marc Zyngier @ 2018-05-22 15:06 UTC (permalink / raw)
  To: linux-arm-kernel

Now that all our infrastructure is in place, let's expose the
availability of ARCH_WORKAROUND_2 to guests. We take this opportunity
to tidy up a couple of SMCCC constants.

Acked-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm/include/asm/kvm_host.h   | 12 ++++++++++++
 arch/arm64/include/asm/kvm_host.h | 23 +++++++++++++++++++++++
 arch/arm64/kvm/reset.c            |  4 ++++
 virt/kvm/arm/psci.c               | 18 ++++++++++++++++--
 4 files changed, 55 insertions(+), 2 deletions(-)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index c7c28c885a19..d478766b56c1 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -315,6 +315,18 @@ static inline bool kvm_arm_harden_branch_predictor(void)
 	return false;
 }
 
+#define KVM_SSBD_UNKNOWN		-1
+#define KVM_SSBD_FORCE_DISABLE		0
+#define KVM_SSBD_EL1_ENTRY		1
+#define KVM_SSBD_FORCE_ENABLE		2
+#define KVM_SSBD_MITIGATED		3
+
+static inline int kvm_arm_have_ssbd(void)
+{
+	/* No way to detect it yet, pretend it is not there. */
+	return KVM_SSBD_UNKNOWN;
+}
+
 static inline void kvm_vcpu_load_sysregs(struct kvm_vcpu *vcpu) {}
 static inline void kvm_vcpu_put_sysregs(struct kvm_vcpu *vcpu) {}
 
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 9bef3f69bdcd..082b0dbb85c6 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -455,6 +455,29 @@ static inline bool kvm_arm_harden_branch_predictor(void)
 	return cpus_have_const_cap(ARM64_HARDEN_BRANCH_PREDICTOR);
 }
 
+#define KVM_SSBD_UNKNOWN		-1
+#define KVM_SSBD_FORCE_DISABLE		0
+#define KVM_SSBD_EL1_ENTRY		1
+#define KVM_SSBD_FORCE_ENABLE		2
+#define KVM_SSBD_MITIGATED		3
+
+static inline int kvm_arm_have_ssbd(void)
+{
+	switch (arm64_get_ssbd_state()) {
+	case ARM64_SSBD_FORCE_DISABLE:
+		return KVM_SSBD_FORCE_DISABLE;
+	case ARM64_SSBD_EL1_ENTRY:
+		return KVM_SSBD_EL1_ENTRY;
+	case ARM64_SSBD_FORCE_ENABLE:
+		return KVM_SSBD_FORCE_ENABLE;
+	case ARM64_SSBD_MITIGATED:
+		return KVM_SSBD_MITIGATED;
+	case ARM64_SSBD_UNKNOWN:
+	default:
+		return KVM_SSBD_UNKNOWN;
+	}
+}
+
 void kvm_vcpu_load_sysregs(struct kvm_vcpu *vcpu);
 void kvm_vcpu_put_sysregs(struct kvm_vcpu *vcpu);
 
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index 3256b9228e75..20a7dfee8494 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -122,6 +122,10 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 	/* Reset PMU */
 	kvm_pmu_vcpu_reset(vcpu);
 
+	/* Default workaround setup is enabled (if supported) */
+	if (kvm_arm_have_ssbd() == KVM_SSBD_EL1_ENTRY)
+		vcpu->arch.workaround_flags |= VCPU_WORKAROUND_2_FLAG;
+
 	/* Reset timer */
 	return kvm_timer_vcpu_reset(vcpu);
 }
diff --git a/virt/kvm/arm/psci.c b/virt/kvm/arm/psci.c
index c4762bef13c6..4843bfa1f986 100644
--- a/virt/kvm/arm/psci.c
+++ b/virt/kvm/arm/psci.c
@@ -405,7 +405,7 @@ static int kvm_psci_call(struct kvm_vcpu *vcpu)
 int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
 {
 	u32 func_id = smccc_get_function(vcpu);
-	u32 val = PSCI_RET_NOT_SUPPORTED;
+	u32 val = SMCCC_RET_NOT_SUPPORTED;
 	u32 feature;
 
 	switch (func_id) {
@@ -417,7 +417,21 @@ int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
 		switch(feature) {
 		case ARM_SMCCC_ARCH_WORKAROUND_1:
 			if (kvm_arm_harden_branch_predictor())
-				val = 0;
+				val = SMCCC_RET_SUCCESS;
+			break;
+		case ARM_SMCCC_ARCH_WORKAROUND_2:
+			switch (kvm_arm_have_ssbd()) {
+			case KVM_SSBD_FORCE_DISABLE:
+			case KVM_SSBD_UNKNOWN:
+				break;
+			case KVM_SSBD_EL1_ENTRY:
+				val = SMCCC_RET_SUCCESS;
+				break;
+			case KVM_SSBD_FORCE_ENABLE:
+			case KVM_SSBD_MITIGATED:
+				val = SMCCC_RET_NOT_REQUIRED;
+				break;
+			}
 			break;
 		}
 		break;
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 110+ messages in thread

* Re: [PATCH 05/14] arm64: Add 'ssbd' command-line option
  2018-05-22 15:06   ` Marc Zyngier
  (?)
@ 2018-05-22 15:29     ` Randy Dunlap
  -1 siblings, 0 replies; 110+ messages in thread
From: Randy Dunlap @ 2018-05-22 15:29 UTC (permalink / raw)
  To: Marc Zyngier, linux-arm-kernel, linux-kernel, kvmarm
  Cc: Will Deacon, Catalin Marinas, Thomas Gleixner, Andy Lutomirski,
	Kees Cook, Greg Kroah-Hartman, Christoffer Dall

On 05/22/2018 08:06 AM, Marc Zyngier wrote:
> On a system where the firmware implements ARCH_WORKAROUND_2,
> it may be useful to either permanently enable or disable the
> workaround for cases where the user decides that they'd rather
> not get a trap overhead, and keep the mitigation permanently
> on or off instead of switching it on exception entry/exit.
> 
> In any case, default to the mitigation being enabled.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
>  Documentation/admin-guide/kernel-parameters.txt |  17 ++++
>  arch/arm64/include/asm/cpufeature.h             |   6 ++
>  arch/arm64/kernel/cpu_errata.c                  | 102 ++++++++++++++++++++----
>  3 files changed, 109 insertions(+), 16 deletions(-)
> 
> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> index f2040d46f095..646e112c6f63 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -4092,6 +4092,23 @@
>  			expediting.  Set to zero to disable automatic
>  			expediting.
>  
> +	ssbd=		[ARM64,HW]
> +			Speculative Store Bypass Disable control
> +
> +			On CPUs that are vulnerable to the Speculative
> +			Store Bypass vulnerability and offer a
> +			firmware based mitigation, this parameter
> +			indicates how the mitigation should be used:
> +
> +			force-on:  Unconditionnaly enable mitigation for

			           Unconditionally

> +				   for both kernel and userspace
> +			force-off: Unconditionnaly disable mitigation for

			           Unconditionally

> +				   for both kernel and userspace
> +			kernel:    Always enable mitigation in the
> +				   kernel, and offer a prctl interface
> +				   to allow userspace to register its
> +				   interest in being mitigated too.
> +
>  	stack_guard_gap=	[MM]
>  			override the default stack gap protection. The value
>  			is in page units and it defines how many pages prior



-- 
~Randy

^ permalink raw reply	[flat|nested] 110+ messages in thread

* Re: [PATCH 05/14] arm64: Add 'ssbd' command-line option
@ 2018-05-22 15:29     ` Randy Dunlap
  0 siblings, 0 replies; 110+ messages in thread
From: Randy Dunlap @ 2018-05-22 15:29 UTC (permalink / raw)
  To: Marc Zyngier, linux-arm-kernel, linux-kernel, kvmarm
  Cc: Kees Cook, Catalin Marinas, Will Deacon, Christoffer Dall,
	Andy Lutomirski, Greg Kroah-Hartman, Thomas Gleixner

On 05/22/2018 08:06 AM, Marc Zyngier wrote:
> On a system where the firmware implements ARCH_WORKAROUND_2,
> it may be useful to either permanently enable or disable the
> workaround for cases where the user decides that they'd rather
> not get a trap overhead, and keep the mitigation permanently
> on or off instead of switching it on exception entry/exit.
> 
> In any case, default to the mitigation being enabled.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
>  Documentation/admin-guide/kernel-parameters.txt |  17 ++++
>  arch/arm64/include/asm/cpufeature.h             |   6 ++
>  arch/arm64/kernel/cpu_errata.c                  | 102 ++++++++++++++++++++----
>  3 files changed, 109 insertions(+), 16 deletions(-)
> 
> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> index f2040d46f095..646e112c6f63 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -4092,6 +4092,23 @@
>  			expediting.  Set to zero to disable automatic
>  			expediting.
>  
> +	ssbd=		[ARM64,HW]
> +			Speculative Store Bypass Disable control
> +
> +			On CPUs that are vulnerable to the Speculative
> +			Store Bypass vulnerability and offer a
> +			firmware based mitigation, this parameter
> +			indicates how the mitigation should be used:
> +
> +			force-on:  Unconditionnaly enable mitigation for

			           Unconditionally

> +				   for both kernel and userspace
> +			force-off: Unconditionnaly disable mitigation for

			           Unconditionally

> +				   for both kernel and userspace
> +			kernel:    Always enable mitigation in the
> +				   kernel, and offer a prctl interface
> +				   to allow userspace to register its
> +				   interest in being mitigated too.
> +
>  	stack_guard_gap=	[MM]
>  			override the default stack gap protection. The value
>  			is in page units and it defines how many pages prior



-- 
~Randy

^ permalink raw reply	[flat|nested] 110+ messages in thread

* [PATCH 05/14] arm64: Add 'ssbd' command-line option
@ 2018-05-22 15:29     ` Randy Dunlap
  0 siblings, 0 replies; 110+ messages in thread
From: Randy Dunlap @ 2018-05-22 15:29 UTC (permalink / raw)
  To: linux-arm-kernel

On 05/22/2018 08:06 AM, Marc Zyngier wrote:
> On a system where the firmware implements ARCH_WORKAROUND_2,
> it may be useful to either permanently enable or disable the
> workaround for cases where the user decides that they'd rather
> not get a trap overhead, and keep the mitigation permanently
> on or off instead of switching it on exception entry/exit.
> 
> In any case, default to the mitigation being enabled.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
>  Documentation/admin-guide/kernel-parameters.txt |  17 ++++
>  arch/arm64/include/asm/cpufeature.h             |   6 ++
>  arch/arm64/kernel/cpu_errata.c                  | 102 ++++++++++++++++++++----
>  3 files changed, 109 insertions(+), 16 deletions(-)
> 
> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> index f2040d46f095..646e112c6f63 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -4092,6 +4092,23 @@
>  			expediting.  Set to zero to disable automatic
>  			expediting.
>  
> +	ssbd=		[ARM64,HW]
> +			Speculative Store Bypass Disable control
> +
> +			On CPUs that are vulnerable to the Speculative
> +			Store Bypass vulnerability and offer a
> +			firmware based mitigation, this parameter
> +			indicates how the mitigation should be used:
> +
> +			force-on:  Unconditionnaly enable mitigation for

			           Unconditionally

> +				   for both kernel and userspace
> +			force-off: Unconditionnaly disable mitigation for

			           Unconditionally

> +				   for both kernel and userspace
> +			kernel:    Always enable mitigation in the
> +				   kernel, and offer a prctl interface
> +				   to allow userspace to register its
> +				   interest in being mitigated too.
> +
>  	stack_guard_gap=	[MM]
>  			override the default stack gap protection. The value
>  			is in page units and it defines how many pages prior



-- 
~Randy

^ permalink raw reply	[flat|nested] 110+ messages in thread

* Re: [PATCH 10/14] arm64: ssbd: Add prctl interface for per-thread mitigation
  2018-05-22 15:06   ` Marc Zyngier
@ 2018-05-22 15:48     ` Dominik Brodowski
  -1 siblings, 0 replies; 110+ messages in thread
From: Dominik Brodowski @ 2018-05-22 15:48 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: linux-arm-kernel, linux-kernel, kvmarm, Will Deacon,
	Catalin Marinas, Thomas Gleixner, Andy Lutomirski, Kees Cook,
	Greg Kroah-Hartman, Christoffer Dall


On Tue, May 22, 2018 at 04:06:44PM +0100, Marc Zyngier wrote:
> If running on a system that performs dynamic SSBD mitigation, allow
> userspace to request the mitigation for itself. This is implemented
> as a prctl call, allowing the mitigation to be enabled or disabled at
> will for this particular thread.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
>  arch/arm64/kernel/Makefile |   1 +
>  arch/arm64/kernel/ssbd.c   | 107 +++++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 108 insertions(+)
>  create mode 100644 arch/arm64/kernel/ssbd.c
> 
> diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
> index bf825f38d206..0025f8691046 100644
> --- a/arch/arm64/kernel/Makefile
> +++ b/arch/arm64/kernel/Makefile
> @@ -54,6 +54,7 @@ arm64-obj-$(CONFIG_ARM64_RELOC_TEST)	+= arm64-reloc-test.o
>  arm64-reloc-test-y := reloc_test_core.o reloc_test_syms.o
>  arm64-obj-$(CONFIG_CRASH_DUMP)		+= crash_dump.o
>  arm64-obj-$(CONFIG_ARM_SDE_INTERFACE)	+= sdei.o
> +arm64-obj-$(CONFIG_ARM64_SSBD)		+= ssbd.o
>  
>  obj-y					+= $(arm64-obj-y) vdso/ probes/
>  obj-m					+= $(arm64-obj-m)
> diff --git a/arch/arm64/kernel/ssbd.c b/arch/arm64/kernel/ssbd.c
> new file mode 100644
> index 000000000000..34e3c430176b
> --- /dev/null
> +++ b/arch/arm64/kernel/ssbd.c
> @@ -0,0 +1,107 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2018 ARM Ltd, All Rights Reserved.
> + */
> +
> +#include <linux/sched.h>
> +#include <linux/thread_info.h>
> +
> +#include <asm/cpufeature.h>
> +
> +/*
> + * prctl interface for SSBD
> + * FIXME: Drop the below ifdefery once the common interface has been merged.
> + */
> +#ifdef PR_SPEC_STORE_BYPASS

That FIXME wants to be looked at.

Thanks,
	Dominik

^ permalink raw reply	[flat|nested] 110+ messages in thread

* [PATCH 10/14] arm64: ssbd: Add prctl interface for per-thread mitigation
@ 2018-05-22 15:48     ` Dominik Brodowski
  0 siblings, 0 replies; 110+ messages in thread
From: Dominik Brodowski @ 2018-05-22 15:48 UTC (permalink / raw)
  To: linux-arm-kernel


On Tue, May 22, 2018 at 04:06:44PM +0100, Marc Zyngier wrote:
> If running on a system that performs dynamic SSBD mitigation, allow
> userspace to request the mitigation for itself. This is implemented
> as a prctl call, allowing the mitigation to be enabled or disabled at
> will for this particular thread.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
>  arch/arm64/kernel/Makefile |   1 +
>  arch/arm64/kernel/ssbd.c   | 107 +++++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 108 insertions(+)
>  create mode 100644 arch/arm64/kernel/ssbd.c
> 
> diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
> index bf825f38d206..0025f8691046 100644
> --- a/arch/arm64/kernel/Makefile
> +++ b/arch/arm64/kernel/Makefile
> @@ -54,6 +54,7 @@ arm64-obj-$(CONFIG_ARM64_RELOC_TEST)	+= arm64-reloc-test.o
>  arm64-reloc-test-y := reloc_test_core.o reloc_test_syms.o
>  arm64-obj-$(CONFIG_CRASH_DUMP)		+= crash_dump.o
>  arm64-obj-$(CONFIG_ARM_SDE_INTERFACE)	+= sdei.o
> +arm64-obj-$(CONFIG_ARM64_SSBD)		+= ssbd.o
>  
>  obj-y					+= $(arm64-obj-y) vdso/ probes/
>  obj-m					+= $(arm64-obj-m)
> diff --git a/arch/arm64/kernel/ssbd.c b/arch/arm64/kernel/ssbd.c
> new file mode 100644
> index 000000000000..34e3c430176b
> --- /dev/null
> +++ b/arch/arm64/kernel/ssbd.c
> @@ -0,0 +1,107 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2018 ARM Ltd, All Rights Reserved.
> + */
> +
> +#include <linux/sched.h>
> +#include <linux/thread_info.h>
> +
> +#include <asm/cpufeature.h>
> +
> +/*
> + * prctl interface for SSBD
> + * FIXME: Drop the below ifdefery once the common interface has been merged.
> + */
> +#ifdef PR_SPEC_STORE_BYPASS

That FIXME wants to be looked at.

Thanks,
	Dominik

^ permalink raw reply	[flat|nested] 110+ messages in thread

* Re: [PATCH 10/14] arm64: ssbd: Add prctl interface for per-thread mitigation
  2018-05-22 15:48     ` Dominik Brodowski
  (?)
@ 2018-05-22 16:30       ` Marc Zyngier
  -1 siblings, 0 replies; 110+ messages in thread
From: Marc Zyngier @ 2018-05-22 16:30 UTC (permalink / raw)
  To: Dominik Brodowski
  Cc: linux-arm-kernel, linux-kernel, kvmarm, Will Deacon,
	Catalin Marinas, Thomas Gleixner, Andy Lutomirski, Kees Cook,
	Greg Kroah-Hartman, Christoffer Dall

On Tue, 22 May 2018 16:48:42 +0100,
Dominik Brodowski wrote:
> 
> 
> On Tue, May 22, 2018 at 04:06:44PM +0100, Marc Zyngier wrote:
> > If running on a system that performs dynamic SSBD mitigation, allow
> > userspace to request the mitigation for itself. This is implemented
> > as a prctl call, allowing the mitigation to be enabled or disabled at
> > will for this particular thread.
> > 
> > Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> > ---
> >  arch/arm64/kernel/Makefile |   1 +
> >  arch/arm64/kernel/ssbd.c   | 107 +++++++++++++++++++++++++++++++++++++++++++++
> >  2 files changed, 108 insertions(+)
> >  create mode 100644 arch/arm64/kernel/ssbd.c
> > 
> > diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
> > index bf825f38d206..0025f8691046 100644
> > --- a/arch/arm64/kernel/Makefile
> > +++ b/arch/arm64/kernel/Makefile
> > @@ -54,6 +54,7 @@ arm64-obj-$(CONFIG_ARM64_RELOC_TEST)	+= arm64-reloc-test.o
> >  arm64-reloc-test-y := reloc_test_core.o reloc_test_syms.o
> >  arm64-obj-$(CONFIG_CRASH_DUMP)		+= crash_dump.o
> >  arm64-obj-$(CONFIG_ARM_SDE_INTERFACE)	+= sdei.o
> > +arm64-obj-$(CONFIG_ARM64_SSBD)		+= ssbd.o
> >  
> >  obj-y					+= $(arm64-obj-y) vdso/ probes/
> >  obj-m					+= $(arm64-obj-m)
> > diff --git a/arch/arm64/kernel/ssbd.c b/arch/arm64/kernel/ssbd.c
> > new file mode 100644
> > index 000000000000..34e3c430176b
> > --- /dev/null
> > +++ b/arch/arm64/kernel/ssbd.c
> > @@ -0,0 +1,107 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +/*
> > + * Copyright (C) 2018 ARM Ltd, All Rights Reserved.
> > + */
> > +
> > +#include <linux/sched.h>
> > +#include <linux/thread_info.h>
> > +
> > +#include <asm/cpufeature.h>
> > +
> > +/*
> > + * prctl interface for SSBD
> > + * FIXME: Drop the below ifdefery once the common interface has been merged.
> > + */
> > +#ifdef PR_SPEC_STORE_BYPASS
> 
> That FIXME wants to be looked at.

It is what allowed the series to compile with mainline until last
night. Once I rebase on top of -rc7, I'll remove it.

Thanks,

	M.

-- 
Jazz is not dead, it just smell funny.

^ permalink raw reply	[flat|nested] 110+ messages in thread

* Re: [PATCH 10/14] arm64: ssbd: Add prctl interface for per-thread mitigation
@ 2018-05-22 16:30       ` Marc Zyngier
  0 siblings, 0 replies; 110+ messages in thread
From: Marc Zyngier @ 2018-05-22 16:30 UTC (permalink / raw)
  To: Dominik Brodowski
  Cc: linux-arm-kernel, linux-kernel, kvmarm, Will Deacon,
	Catalin Marinas, Thomas Gleixner, Andy Lutomirski, Kees Cook,
	Greg Kroah-Hartman, Christoffer Dall

On Tue, 22 May 2018 16:48:42 +0100,
Dominik Brodowski wrote:
> 
> 
> On Tue, May 22, 2018 at 04:06:44PM +0100, Marc Zyngier wrote:
> > If running on a system that performs dynamic SSBD mitigation, allow
> > userspace to request the mitigation for itself. This is implemented
> > as a prctl call, allowing the mitigation to be enabled or disabled at
> > will for this particular thread.
> > 
> > Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> > ---
> >  arch/arm64/kernel/Makefile |   1 +
> >  arch/arm64/kernel/ssbd.c   | 107 +++++++++++++++++++++++++++++++++++++++++++++
> >  2 files changed, 108 insertions(+)
> >  create mode 100644 arch/arm64/kernel/ssbd.c
> > 
> > diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
> > index bf825f38d206..0025f8691046 100644
> > --- a/arch/arm64/kernel/Makefile
> > +++ b/arch/arm64/kernel/Makefile
> > @@ -54,6 +54,7 @@ arm64-obj-$(CONFIG_ARM64_RELOC_TEST)	+= arm64-reloc-test.o
> >  arm64-reloc-test-y := reloc_test_core.o reloc_test_syms.o
> >  arm64-obj-$(CONFIG_CRASH_DUMP)		+= crash_dump.o
> >  arm64-obj-$(CONFIG_ARM_SDE_INTERFACE)	+= sdei.o
> > +arm64-obj-$(CONFIG_ARM64_SSBD)		+= ssbd.o
> >  
> >  obj-y					+= $(arm64-obj-y) vdso/ probes/
> >  obj-m					+= $(arm64-obj-m)
> > diff --git a/arch/arm64/kernel/ssbd.c b/arch/arm64/kernel/ssbd.c
> > new file mode 100644
> > index 000000000000..34e3c430176b
> > --- /dev/null
> > +++ b/arch/arm64/kernel/ssbd.c
> > @@ -0,0 +1,107 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +/*
> > + * Copyright (C) 2018 ARM Ltd, All Rights Reserved.
> > + */
> > +
> > +#include <linux/sched.h>
> > +#include <linux/thread_info.h>
> > +
> > +#include <asm/cpufeature.h>
> > +
> > +/*
> > + * prctl interface for SSBD
> > + * FIXME: Drop the below ifdefery once the common interface has been merged.
> > + */
> > +#ifdef PR_SPEC_STORE_BYPASS
> 
> That FIXME wants to be looked at.

It is what allowed the series to compile with mainline until last
night. Once I rebase on top of -rc7, I'll remove it.

Thanks,

	M.

-- 
Jazz is not dead, it just smell funny.

^ permalink raw reply	[flat|nested] 110+ messages in thread

* [PATCH 10/14] arm64: ssbd: Add prctl interface for per-thread mitigation
@ 2018-05-22 16:30       ` Marc Zyngier
  0 siblings, 0 replies; 110+ messages in thread
From: Marc Zyngier @ 2018-05-22 16:30 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, 22 May 2018 16:48:42 +0100,
Dominik Brodowski wrote:
> 
> 
> On Tue, May 22, 2018 at 04:06:44PM +0100, Marc Zyngier wrote:
> > If running on a system that performs dynamic SSBD mitigation, allow
> > userspace to request the mitigation for itself. This is implemented
> > as a prctl call, allowing the mitigation to be enabled or disabled at
> > will for this particular thread.
> > 
> > Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> > ---
> >  arch/arm64/kernel/Makefile |   1 +
> >  arch/arm64/kernel/ssbd.c   | 107 +++++++++++++++++++++++++++++++++++++++++++++
> >  2 files changed, 108 insertions(+)
> >  create mode 100644 arch/arm64/kernel/ssbd.c
> > 
> > diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
> > index bf825f38d206..0025f8691046 100644
> > --- a/arch/arm64/kernel/Makefile
> > +++ b/arch/arm64/kernel/Makefile
> > @@ -54,6 +54,7 @@ arm64-obj-$(CONFIG_ARM64_RELOC_TEST)	+= arm64-reloc-test.o
> >  arm64-reloc-test-y := reloc_test_core.o reloc_test_syms.o
> >  arm64-obj-$(CONFIG_CRASH_DUMP)		+= crash_dump.o
> >  arm64-obj-$(CONFIG_ARM_SDE_INTERFACE)	+= sdei.o
> > +arm64-obj-$(CONFIG_ARM64_SSBD)		+= ssbd.o
> >  
> >  obj-y					+= $(arm64-obj-y) vdso/ probes/
> >  obj-m					+= $(arm64-obj-m)
> > diff --git a/arch/arm64/kernel/ssbd.c b/arch/arm64/kernel/ssbd.c
> > new file mode 100644
> > index 000000000000..34e3c430176b
> > --- /dev/null
> > +++ b/arch/arm64/kernel/ssbd.c
> > @@ -0,0 +1,107 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +/*
> > + * Copyright (C) 2018 ARM Ltd, All Rights Reserved.
> > + */
> > +
> > +#include <linux/sched.h>
> > +#include <linux/thread_info.h>
> > +
> > +#include <asm/cpufeature.h>
> > +
> > +/*
> > + * prctl interface for SSBD
> > + * FIXME: Drop the below ifdefery once the common interface has been merged.
> > + */
> > +#ifdef PR_SPEC_STORE_BYPASS
> 
> That FIXME wants to be looked at.

It is what allowed the series to compile with mainline until last
night. Once I rebase on top of -rc7, I'll remove it.

Thanks,

	M.

-- 
Jazz is not dead, it just smell funny.

^ permalink raw reply	[flat|nested] 110+ messages in thread

* Re: [PATCH 02/14] arm64: Call ARCH_WORKAROUND_2 on transitions between EL0 and EL1
  2018-05-22 15:06   ` Marc Zyngier
@ 2018-05-23  9:23     ` Julien Grall
  -1 siblings, 0 replies; 110+ messages in thread
From: Julien Grall @ 2018-05-23  9:23 UTC (permalink / raw)
  To: Marc Zyngier, linux-arm-kernel, linux-kernel, kvmarm
  Cc: Kees Cook, Catalin Marinas, Will Deacon, Andy Lutomirski,
	Greg Kroah-Hartman, Thomas Gleixner

Hi Marc,

On 05/22/2018 04:06 PM, Marc Zyngier wrote:
> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index ec2ee720e33e..f33e6aed3037 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -18,6 +18,7 @@
>    * along with this program.  If not, see <http://www.gnu.org/licenses/>.
>    */
>   
> +#include <linux/arm-smccc.h>
>   #include <linux/init.h>
>   #include <linux/linkage.h>
>   
> @@ -137,6 +138,18 @@ alternative_else_nop_endif
>   	add	\dst, \dst, #(\sym - .entry.tramp.text)
>   	.endm
>   
> +	// This macro corrupts x0-x3. It is the caller's duty
> +	// to save/restore them if required.

NIT: Shouldn't you use /* ... */ for multi-line comments?

Regardless that:

Reviewed-by: Julien Grall <julien.grall@arm.com>

Cheers,

-- 
Julien Grall

^ permalink raw reply	[flat|nested] 110+ messages in thread

* [PATCH 02/14] arm64: Call ARCH_WORKAROUND_2 on transitions between EL0 and EL1
@ 2018-05-23  9:23     ` Julien Grall
  0 siblings, 0 replies; 110+ messages in thread
From: Julien Grall @ 2018-05-23  9:23 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Marc,

On 05/22/2018 04:06 PM, Marc Zyngier wrote:
> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index ec2ee720e33e..f33e6aed3037 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -18,6 +18,7 @@
>    * along with this program.  If not, see <http://www.gnu.org/licenses/>.
>    */
>   
> +#include <linux/arm-smccc.h>
>   #include <linux/init.h>
>   #include <linux/linkage.h>
>   
> @@ -137,6 +138,18 @@ alternative_else_nop_endif
>   	add	\dst, \dst, #(\sym - .entry.tramp.text)
>   	.endm
>   
> +	// This macro corrupts x0-x3. It is the caller's duty
> +	// to save/restore them if required.

NIT: Shouldn't you use /* ... */ for multi-line comments?

Regardless that:

Reviewed-by: Julien Grall <julien.grall@arm.com>

Cheers,

-- 
Julien Grall

^ permalink raw reply	[flat|nested] 110+ messages in thread

* Re: [PATCH 03/14] arm64: Add per-cpu infrastructure to call ARCH_WORKAROUND_2
  2018-05-22 15:06   ` Marc Zyngier
@ 2018-05-23 10:03     ` Julien Grall
  -1 siblings, 0 replies; 110+ messages in thread
From: Julien Grall @ 2018-05-23 10:03 UTC (permalink / raw)
  To: Marc Zyngier, linux-arm-kernel, linux-kernel, kvmarm
  Cc: Kees Cook, Catalin Marinas, Will Deacon, Christoffer Dall,
	Andy Lutomirski, Greg Kroah-Hartman, Thomas Gleixner

Hi Marc,

On 05/22/2018 04:06 PM, Marc Zyngier wrote:
> In a heterogeneous system, we can end up with both affected and
> unaffected CPUs. Let's check their status before calling into the
> firmware.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

Reviewed-by: Julien Grall <julien.grall@arm.com>

Cheers,

> ---
>   arch/arm64/kernel/cpu_errata.c |  2 ++
>   arch/arm64/kernel/entry.S      | 11 +++++++----
>   2 files changed, 9 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
> index 46b3aafb631a..0288d6cf560e 100644
> --- a/arch/arm64/kernel/cpu_errata.c
> +++ b/arch/arm64/kernel/cpu_errata.c
> @@ -233,6 +233,8 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
>   #endif	/* CONFIG_HARDEN_BRANCH_PREDICTOR */
>   
>   #ifdef CONFIG_ARM64_SSBD
> +DEFINE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
> +
>   void __init arm64_update_smccc_conduit(struct alt_instr *alt,
>   				       __le32 *origptr, __le32 *updptr,
>   				       int nr_inst)
> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index f33e6aed3037..29ad672a6abd 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -140,8 +140,10 @@ alternative_else_nop_endif
>   
>   	// This macro corrupts x0-x3. It is the caller's duty
>   	// to save/restore them if required.
> -	.macro	apply_ssbd, state
> +	.macro	apply_ssbd, state, targ, tmp1, tmp2
>   #ifdef CONFIG_ARM64_SSBD
> +	ldr_this_cpu	\tmp2, arm64_ssbd_callback_required, \tmp1
> +	cbz	\tmp2, \targ
>   	mov	w0, #ARM_SMCCC_ARCH_WORKAROUND_2
>   	mov	w1, #\state
>   alternative_cb	arm64_update_smccc_conduit
> @@ -176,12 +178,13 @@ alternative_cb_end
>   	ldr	x19, [tsk, #TSK_TI_FLAGS]	// since we can unmask debug
>   	disable_step_tsk x19, x20		// exceptions when scheduling.
>   
> -	apply_ssbd 1
> +	apply_ssbd 1, 1f, x22, x23
>   
>   #ifdef CONFIG_ARM64_SSBD
>   	ldp	x0, x1, [sp, #16 * 0]
>   	ldp	x2, x3, [sp, #16 * 1]
>   #endif
> +1:
>   
>   	mov	x29, xzr			// fp pointed to user-space
>   	.else
> @@ -323,8 +326,8 @@ alternative_if ARM64_WORKAROUND_845719
>   alternative_else_nop_endif
>   #endif
>   3:
> -	apply_ssbd 0
> -
> +	apply_ssbd 0, 5f, x0, x1
> +5:
>   	.endif
>   
>   	msr	elr_el1, x21			// set up the return data
> 

-- 
Julien Grall

^ permalink raw reply	[flat|nested] 110+ messages in thread

* [PATCH 03/14] arm64: Add per-cpu infrastructure to call ARCH_WORKAROUND_2
@ 2018-05-23 10:03     ` Julien Grall
  0 siblings, 0 replies; 110+ messages in thread
From: Julien Grall @ 2018-05-23 10:03 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Marc,

On 05/22/2018 04:06 PM, Marc Zyngier wrote:
> In a heterogeneous system, we can end up with both affected and
> unaffected CPUs. Let's check their status before calling into the
> firmware.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

Reviewed-by: Julien Grall <julien.grall@arm.com>

Cheers,

> ---
>   arch/arm64/kernel/cpu_errata.c |  2 ++
>   arch/arm64/kernel/entry.S      | 11 +++++++----
>   2 files changed, 9 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
> index 46b3aafb631a..0288d6cf560e 100644
> --- a/arch/arm64/kernel/cpu_errata.c
> +++ b/arch/arm64/kernel/cpu_errata.c
> @@ -233,6 +233,8 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
>   #endif	/* CONFIG_HARDEN_BRANCH_PREDICTOR */
>   
>   #ifdef CONFIG_ARM64_SSBD
> +DEFINE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
> +
>   void __init arm64_update_smccc_conduit(struct alt_instr *alt,
>   				       __le32 *origptr, __le32 *updptr,
>   				       int nr_inst)
> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index f33e6aed3037..29ad672a6abd 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -140,8 +140,10 @@ alternative_else_nop_endif
>   
>   	// This macro corrupts x0-x3. It is the caller's duty
>   	// to save/restore them if required.
> -	.macro	apply_ssbd, state
> +	.macro	apply_ssbd, state, targ, tmp1, tmp2
>   #ifdef CONFIG_ARM64_SSBD
> +	ldr_this_cpu	\tmp2, arm64_ssbd_callback_required, \tmp1
> +	cbz	\tmp2, \targ
>   	mov	w0, #ARM_SMCCC_ARCH_WORKAROUND_2
>   	mov	w1, #\state
>   alternative_cb	arm64_update_smccc_conduit
> @@ -176,12 +178,13 @@ alternative_cb_end
>   	ldr	x19, [tsk, #TSK_TI_FLAGS]	// since we can unmask debug
>   	disable_step_tsk x19, x20		// exceptions when scheduling.
>   
> -	apply_ssbd 1
> +	apply_ssbd 1, 1f, x22, x23
>   
>   #ifdef CONFIG_ARM64_SSBD
>   	ldp	x0, x1, [sp, #16 * 0]
>   	ldp	x2, x3, [sp, #16 * 1]
>   #endif
> +1:
>   
>   	mov	x29, xzr			// fp pointed to user-space
>   	.else
> @@ -323,8 +326,8 @@ alternative_if ARM64_WORKAROUND_845719
>   alternative_else_nop_endif
>   #endif
>   3:
> -	apply_ssbd 0
> -
> +	apply_ssbd 0, 5f, x0, x1
> +5:
>   	.endif
>   
>   	msr	elr_el1, x21			// set up the return data
> 

-- 
Julien Grall

^ permalink raw reply	[flat|nested] 110+ messages in thread

* Re: [PATCH 04/14] arm64: Add ARCH_WORKAROUND_2 probing
  2018-05-22 15:06   ` Marc Zyngier
@ 2018-05-23 10:06     ` Julien Grall
  -1 siblings, 0 replies; 110+ messages in thread
From: Julien Grall @ 2018-05-23 10:06 UTC (permalink / raw)
  To: Marc Zyngier, linux-arm-kernel, linux-kernel, kvmarm
  Cc: Kees Cook, Catalin Marinas, Will Deacon, Andy Lutomirski,
	Greg Kroah-Hartman, Thomas Gleixner

Hi Marc,

On 05/22/2018 04:06 PM, Marc Zyngier wrote:
> As for Spectre variant-2, we rely on SMCCC 1.1 to provide the
> discovery mechanism for detecting the SSBD mitigation.
> 
> A new capability is also allocated for that purpose, and a
> config option.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
>   arch/arm64/Kconfig               |  9 ++++++
>   arch/arm64/include/asm/cpucaps.h |  3 +-
>   arch/arm64/kernel/cpu_errata.c   | 69 ++++++++++++++++++++++++++++++++++++++++
>   3 files changed, 80 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index eb2cf4938f6d..b2103b4df467 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -938,6 +938,15 @@ config HARDEN_EL2_VECTORS
>   
>   	  If unsure, say Y.
>   
> +config ARM64_SSBD
> +	bool "Speculative Store Bypass Disable" if EXPERT
> +	default y
> +	help
> +	  This enables mitigation of the bypassing of previous stores
> +	  by speculative loads.
> +
> +	  If unsure, say Y.
> +
>   menuconfig ARMV8_DEPRECATED
>   	bool "Emulate deprecated/obsolete ARMv8 instructions"
>   	depends on COMPAT
> diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
> index bc51b72fafd4..5b2facf786ba 100644
> --- a/arch/arm64/include/asm/cpucaps.h
> +++ b/arch/arm64/include/asm/cpucaps.h
> @@ -48,7 +48,8 @@
>   #define ARM64_HAS_CACHE_IDC			27
>   #define ARM64_HAS_CACHE_DIC			28
>   #define ARM64_HW_DBM				29
> +#define ARM64_SSBD			30

NIT: Could you indent 30 the same way as the other number?

Reviewed-by: Julien Grall <julien.grall@arm.com>

Cheers,

-- 
Julien Grall

^ permalink raw reply	[flat|nested] 110+ messages in thread

* [PATCH 04/14] arm64: Add ARCH_WORKAROUND_2 probing
@ 2018-05-23 10:06     ` Julien Grall
  0 siblings, 0 replies; 110+ messages in thread
From: Julien Grall @ 2018-05-23 10:06 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Marc,

On 05/22/2018 04:06 PM, Marc Zyngier wrote:
> As for Spectre variant-2, we rely on SMCCC 1.1 to provide the
> discovery mechanism for detecting the SSBD mitigation.
> 
> A new capability is also allocated for that purpose, and a
> config option.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
>   arch/arm64/Kconfig               |  9 ++++++
>   arch/arm64/include/asm/cpucaps.h |  3 +-
>   arch/arm64/kernel/cpu_errata.c   | 69 ++++++++++++++++++++++++++++++++++++++++
>   3 files changed, 80 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index eb2cf4938f6d..b2103b4df467 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -938,6 +938,15 @@ config HARDEN_EL2_VECTORS
>   
>   	  If unsure, say Y.
>   
> +config ARM64_SSBD
> +	bool "Speculative Store Bypass Disable" if EXPERT
> +	default y
> +	help
> +	  This enables mitigation of the bypassing of previous stores
> +	  by speculative loads.
> +
> +	  If unsure, say Y.
> +
>   menuconfig ARMV8_DEPRECATED
>   	bool "Emulate deprecated/obsolete ARMv8 instructions"
>   	depends on COMPAT
> diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
> index bc51b72fafd4..5b2facf786ba 100644
> --- a/arch/arm64/include/asm/cpucaps.h
> +++ b/arch/arm64/include/asm/cpucaps.h
> @@ -48,7 +48,8 @@
>   #define ARM64_HAS_CACHE_IDC			27
>   #define ARM64_HAS_CACHE_DIC			28
>   #define ARM64_HW_DBM				29
> +#define ARM64_SSBD			30

NIT: Could you indent 30 the same way as the other number?

Reviewed-by: Julien Grall <julien.grall@arm.com>

Cheers,

-- 
Julien Grall

^ permalink raw reply	[flat|nested] 110+ messages in thread

* Re: [PATCH 05/14] arm64: Add 'ssbd' command-line option
  2018-05-22 15:06   ` Marc Zyngier
@ 2018-05-23 10:08     ` Julien Grall
  -1 siblings, 0 replies; 110+ messages in thread
From: Julien Grall @ 2018-05-23 10:08 UTC (permalink / raw)
  To: Marc Zyngier, linux-arm-kernel, linux-kernel, kvmarm
  Cc: Kees Cook, Catalin Marinas, Will Deacon, Andy Lutomirski,
	Greg Kroah-Hartman, Thomas Gleixner

Hi Marc,

On 05/22/2018 04:06 PM, Marc Zyngier wrote:
> On a system where the firmware implements ARCH_WORKAROUND_2,
> it may be useful to either permanently enable or disable the
> workaround for cases where the user decides that they'd rather
> not get a trap overhead, and keep the mitigation permanently
> on or off instead of switching it on exception entry/exit.
> 
> In any case, default to the mitigation being enabled.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

Reviewed-by: Julien Grall <julien.grall@arm.com>

Cheers,

> ---
>   Documentation/admin-guide/kernel-parameters.txt |  17 ++++
>   arch/arm64/include/asm/cpufeature.h             |   6 ++
>   arch/arm64/kernel/cpu_errata.c                  | 102 ++++++++++++++++++++----
>   3 files changed, 109 insertions(+), 16 deletions(-)
> 
> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> index f2040d46f095..646e112c6f63 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -4092,6 +4092,23 @@
>   			expediting.  Set to zero to disable automatic
>   			expediting.
>   
> +	ssbd=		[ARM64,HW]
> +			Speculative Store Bypass Disable control
> +
> +			On CPUs that are vulnerable to the Speculative
> +			Store Bypass vulnerability and offer a
> +			firmware based mitigation, this parameter
> +			indicates how the mitigation should be used:
> +
> +			force-on:  Unconditionnaly enable mitigation for
> +				   for both kernel and userspace
> +			force-off: Unconditionnaly disable mitigation for
> +				   for both kernel and userspace
> +			kernel:    Always enable mitigation in the
> +				   kernel, and offer a prctl interface
> +				   to allow userspace to register its
> +				   interest in being mitigated too.
> +
>   	stack_guard_gap=	[MM]
>   			override the default stack gap protection. The value
>   			is in page units and it defines how many pages prior
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index 09b0f2a80c8f..9bc548e22784 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -537,6 +537,12 @@ static inline u64 read_zcr_features(void)
>   	return zcr;
>   }
>   
> +#define ARM64_SSBD_UNKNOWN		-1
> +#define ARM64_SSBD_FORCE_DISABLE	0
> +#define ARM64_SSBD_EL1_ENTRY		1
> +#define ARM64_SSBD_FORCE_ENABLE		2
> +#define ARM64_SSBD_MITIGATED		3
> +
>   #endif /* __ASSEMBLY__ */
>   
>   #endif
> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
> index 7fd6d5b001f5..f1d4e75b0ddd 100644
> --- a/arch/arm64/kernel/cpu_errata.c
> +++ b/arch/arm64/kernel/cpu_errata.c
> @@ -235,6 +235,38 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
>   #ifdef CONFIG_ARM64_SSBD
>   DEFINE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
>   
> +int ssbd_state __read_mostly = ARM64_SSBD_EL1_ENTRY;
> +
> +static const struct ssbd_options {
> +	const char	*str;
> +	int		state;
> +} ssbd_options[] = {
> +	{ "force-on",	ARM64_SSBD_FORCE_ENABLE, },
> +	{ "force-off",	ARM64_SSBD_FORCE_DISABLE, },
> +	{ "kernel",	ARM64_SSBD_EL1_ENTRY, },
> +};
> +
> +static int __init ssbd_cfg(char *buf)
> +{
> +	int i;
> +
> +	if (!buf || !buf[0])
> +		return -EINVAL;
> +
> +	for (i = 0; i < ARRAY_SIZE(ssbd_options); i++) {
> +		int len = strlen(ssbd_options[i].str);
> +
> +		if (strncmp(buf, ssbd_options[i].str, len))
> +			continue;
> +
> +		ssbd_state = ssbd_options[i].state;
> +		return 0;
> +	}
> +
> +	return -EINVAL;
> +}
> +early_param("ssbd", ssbd_cfg);
> +
>   void __init arm64_update_smccc_conduit(struct alt_instr *alt,
>   				       __le32 *origptr, __le32 *updptr,
>   				       int nr_inst)
> @@ -272,44 +304,82 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
>   				    int scope)
>   {
>   	struct arm_smccc_res res;
> -	bool supported = true;
> +	bool required = true;
> +	s32 val;
>   
>   	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
>   
> -	if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
> +	if (psci_ops.smccc_version == SMCCC_VERSION_1_0) {
> +		ssbd_state = ARM64_SSBD_UNKNOWN;
>   		return false;
> +	}
>   
> -	/*
> -	 * The probe function return value is either negative
> -	 * (unsupported or mitigated), positive (unaffected), or zero
> -	 * (requires mitigation). We only need to do anything in the
> -	 * last case.
> -	 */
>   	switch (psci_ops.conduit) {
>   	case PSCI_CONDUIT_HVC:
>   		arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
>   				  ARM_SMCCC_ARCH_WORKAROUND_2, &res);
> -		if ((int)res.a0 != 0)
> -			supported = false;
>   		break;
>   
>   	case PSCI_CONDUIT_SMC:
>   		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
>   				  ARM_SMCCC_ARCH_WORKAROUND_2, &res);
> -		if ((int)res.a0 != 0)
> -			supported = false;
>   		break;
>   
>   	default:
> -		supported = false;
> +		ssbd_state = ARM64_SSBD_UNKNOWN;
> +		return false;
>   	}
>   
> -	if (supported) {
> -		__this_cpu_write(arm64_ssbd_callback_required, 1);
> +	val = (s32)res.a0;
> +
> +	switch (val) {
> +	case SMCCC_RET_NOT_SUPPORTED:
> +		ssbd_state = ARM64_SSBD_UNKNOWN;
> +		return false;
> +
> +	case SMCCC_RET_NOT_REQUIRED:
> +		ssbd_state = ARM64_SSBD_MITIGATED;
> +		return false;
> +
> +	case SMCCC_RET_SUCCESS:
> +		required = true;
> +		break;
> +
> +	case 1:	/* Mitigation not required on this CPU */
> +		required = false;
> +		break;
> +
> +	default:
> +		WARN_ON(1);
> +		return false;
> +	}
> +
> +	switch (ssbd_state) {
> +	case ARM64_SSBD_FORCE_DISABLE:
> +		pr_info_once("%s disabled from command-line\n", entry->desc);
> +		do_ssbd(false);
> +		required = false;
> +		break;
> +
> +	case ARM64_SSBD_EL1_ENTRY:
> +		if (required) {
> +			__this_cpu_write(arm64_ssbd_callback_required, 1);
> +			do_ssbd(true);
> +		}
> +		break;
> +
> +	case ARM64_SSBD_FORCE_ENABLE:
> +		pr_info_once("%s forced from command-line\n", entry->desc);
>   		do_ssbd(true);
> +		required = true;
> +		break;
> +
> +	default:
> +		WARN_ON(1);
> +		break;
>   	}
>   
> -	return supported;
> +	return required;
>   }
>   #endif	/* CONFIG_ARM64_SSBD */
>   
> 

-- 
Julien Grall

^ permalink raw reply	[flat|nested] 110+ messages in thread

* [PATCH 05/14] arm64: Add 'ssbd' command-line option
@ 2018-05-23 10:08     ` Julien Grall
  0 siblings, 0 replies; 110+ messages in thread
From: Julien Grall @ 2018-05-23 10:08 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Marc,

On 05/22/2018 04:06 PM, Marc Zyngier wrote:
> On a system where the firmware implements ARCH_WORKAROUND_2,
> it may be useful to either permanently enable or disable the
> workaround for cases where the user decides that they'd rather
> not get a trap overhead, and keep the mitigation permanently
> on or off instead of switching it on exception entry/exit.
> 
> In any case, default to the mitigation being enabled.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

Reviewed-by: Julien Grall <julien.grall@arm.com>

Cheers,

> ---
>   Documentation/admin-guide/kernel-parameters.txt |  17 ++++
>   arch/arm64/include/asm/cpufeature.h             |   6 ++
>   arch/arm64/kernel/cpu_errata.c                  | 102 ++++++++++++++++++++----
>   3 files changed, 109 insertions(+), 16 deletions(-)
> 
> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> index f2040d46f095..646e112c6f63 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -4092,6 +4092,23 @@
>   			expediting.  Set to zero to disable automatic
>   			expediting.
>   
> +	ssbd=		[ARM64,HW]
> +			Speculative Store Bypass Disable control
> +
> +			On CPUs that are vulnerable to the Speculative
> +			Store Bypass vulnerability and offer a
> +			firmware based mitigation, this parameter
> +			indicates how the mitigation should be used:
> +
> +			force-on:  Unconditionnaly enable mitigation for
> +				   for both kernel and userspace
> +			force-off: Unconditionnaly disable mitigation for
> +				   for both kernel and userspace
> +			kernel:    Always enable mitigation in the
> +				   kernel, and offer a prctl interface
> +				   to allow userspace to register its
> +				   interest in being mitigated too.
> +
>   	stack_guard_gap=	[MM]
>   			override the default stack gap protection. The value
>   			is in page units and it defines how many pages prior
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index 09b0f2a80c8f..9bc548e22784 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -537,6 +537,12 @@ static inline u64 read_zcr_features(void)
>   	return zcr;
>   }
>   
> +#define ARM64_SSBD_UNKNOWN		-1
> +#define ARM64_SSBD_FORCE_DISABLE	0
> +#define ARM64_SSBD_EL1_ENTRY		1
> +#define ARM64_SSBD_FORCE_ENABLE		2
> +#define ARM64_SSBD_MITIGATED		3
> +
>   #endif /* __ASSEMBLY__ */
>   
>   #endif
> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
> index 7fd6d5b001f5..f1d4e75b0ddd 100644
> --- a/arch/arm64/kernel/cpu_errata.c
> +++ b/arch/arm64/kernel/cpu_errata.c
> @@ -235,6 +235,38 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
>   #ifdef CONFIG_ARM64_SSBD
>   DEFINE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
>   
> +int ssbd_state __read_mostly = ARM64_SSBD_EL1_ENTRY;
> +
> +static const struct ssbd_options {
> +	const char	*str;
> +	int		state;
> +} ssbd_options[] = {
> +	{ "force-on",	ARM64_SSBD_FORCE_ENABLE, },
> +	{ "force-off",	ARM64_SSBD_FORCE_DISABLE, },
> +	{ "kernel",	ARM64_SSBD_EL1_ENTRY, },
> +};
> +
> +static int __init ssbd_cfg(char *buf)
> +{
> +	int i;
> +
> +	if (!buf || !buf[0])
> +		return -EINVAL;
> +
> +	for (i = 0; i < ARRAY_SIZE(ssbd_options); i++) {
> +		int len = strlen(ssbd_options[i].str);
> +
> +		if (strncmp(buf, ssbd_options[i].str, len))
> +			continue;
> +
> +		ssbd_state = ssbd_options[i].state;
> +		return 0;
> +	}
> +
> +	return -EINVAL;
> +}
> +early_param("ssbd", ssbd_cfg);
> +
>   void __init arm64_update_smccc_conduit(struct alt_instr *alt,
>   				       __le32 *origptr, __le32 *updptr,
>   				       int nr_inst)
> @@ -272,44 +304,82 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
>   				    int scope)
>   {
>   	struct arm_smccc_res res;
> -	bool supported = true;
> +	bool required = true;
> +	s32 val;
>   
>   	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
>   
> -	if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
> +	if (psci_ops.smccc_version == SMCCC_VERSION_1_0) {
> +		ssbd_state = ARM64_SSBD_UNKNOWN;
>   		return false;
> +	}
>   
> -	/*
> -	 * The probe function return value is either negative
> -	 * (unsupported or mitigated), positive (unaffected), or zero
> -	 * (requires mitigation). We only need to do anything in the
> -	 * last case.
> -	 */
>   	switch (psci_ops.conduit) {
>   	case PSCI_CONDUIT_HVC:
>   		arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
>   				  ARM_SMCCC_ARCH_WORKAROUND_2, &res);
> -		if ((int)res.a0 != 0)
> -			supported = false;
>   		break;
>   
>   	case PSCI_CONDUIT_SMC:
>   		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
>   				  ARM_SMCCC_ARCH_WORKAROUND_2, &res);
> -		if ((int)res.a0 != 0)
> -			supported = false;
>   		break;
>   
>   	default:
> -		supported = false;
> +		ssbd_state = ARM64_SSBD_UNKNOWN;
> +		return false;
>   	}
>   
> -	if (supported) {
> -		__this_cpu_write(arm64_ssbd_callback_required, 1);
> +	val = (s32)res.a0;
> +
> +	switch (val) {
> +	case SMCCC_RET_NOT_SUPPORTED:
> +		ssbd_state = ARM64_SSBD_UNKNOWN;
> +		return false;
> +
> +	case SMCCC_RET_NOT_REQUIRED:
> +		ssbd_state = ARM64_SSBD_MITIGATED;
> +		return false;
> +
> +	case SMCCC_RET_SUCCESS:
> +		required = true;
> +		break;
> +
> +	case 1:	/* Mitigation not required on this CPU */
> +		required = false;
> +		break;
> +
> +	default:
> +		WARN_ON(1);
> +		return false;
> +	}
> +
> +	switch (ssbd_state) {
> +	case ARM64_SSBD_FORCE_DISABLE:
> +		pr_info_once("%s disabled from command-line\n", entry->desc);
> +		do_ssbd(false);
> +		required = false;
> +		break;
> +
> +	case ARM64_SSBD_EL1_ENTRY:
> +		if (required) {
> +			__this_cpu_write(arm64_ssbd_callback_required, 1);
> +			do_ssbd(true);
> +		}
> +		break;
> +
> +	case ARM64_SSBD_FORCE_ENABLE:
> +		pr_info_once("%s forced from command-line\n", entry->desc);
>   		do_ssbd(true);
> +		required = true;
> +		break;
> +
> +	default:
> +		WARN_ON(1);
> +		break;
>   	}
>   
> -	return supported;
> +	return required;
>   }
>   #endif	/* CONFIG_ARM64_SSBD */
>   
> 

-- 
Julien Grall

^ permalink raw reply	[flat|nested] 110+ messages in thread

* Re: [PATCH 06/14] arm64: ssbd: Add global mitigation state accessor
  2018-05-22 15:06   ` Marc Zyngier
@ 2018-05-23 10:11     ` Julien Grall
  -1 siblings, 0 replies; 110+ messages in thread
From: Julien Grall @ 2018-05-23 10:11 UTC (permalink / raw)
  To: Marc Zyngier, linux-arm-kernel, linux-kernel, kvmarm
  Cc: Kees Cook, Catalin Marinas, Will Deacon, Andy Lutomirski,
	Greg Kroah-Hartman, Thomas Gleixner

Hi Marc,

On 05/22/2018 04:06 PM, Marc Zyngier wrote:
> We're about to need the mitigation state in various parts of the
> kernel in order to do the right thing for userspace and guests.
> 
> Let's expose an accessor that will let other subsystems know
> about the state.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

Reviewed-by: Julien Grall <julien.grall@arm.com>

Cheers,

> ---
>   arch/arm64/include/asm/cpufeature.h | 10 ++++++++++
>   1 file changed, 10 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index 9bc548e22784..1bacdf57f0af 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -543,6 +543,16 @@ static inline u64 read_zcr_features(void)
>   #define ARM64_SSBD_FORCE_ENABLE		2
>   #define ARM64_SSBD_MITIGATED		3
>   
> +static inline int arm64_get_ssbd_state(void)
> +{
> +#ifdef CONFIG_ARM64_SSBD
> +	extern int ssbd_state;
> +	return ssbd_state;
> +#else
> +	return ARM64_SSBD_UNKNOWN;
> +#endif
> +}
> +
>   #endif /* __ASSEMBLY__ */
>   
>   #endif
> 

-- 
Julien Grall

^ permalink raw reply	[flat|nested] 110+ messages in thread

* [PATCH 06/14] arm64: ssbd: Add global mitigation state accessor
@ 2018-05-23 10:11     ` Julien Grall
  0 siblings, 0 replies; 110+ messages in thread
From: Julien Grall @ 2018-05-23 10:11 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Marc,

On 05/22/2018 04:06 PM, Marc Zyngier wrote:
> We're about to need the mitigation state in various parts of the
> kernel in order to do the right thing for userspace and guests.
> 
> Let's expose an accessor that will let other subsystems know
> about the state.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

Reviewed-by: Julien Grall <julien.grall@arm.com>

Cheers,

> ---
>   arch/arm64/include/asm/cpufeature.h | 10 ++++++++++
>   1 file changed, 10 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index 9bc548e22784..1bacdf57f0af 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -543,6 +543,16 @@ static inline u64 read_zcr_features(void)
>   #define ARM64_SSBD_FORCE_ENABLE		2
>   #define ARM64_SSBD_MITIGATED		3
>   
> +static inline int arm64_get_ssbd_state(void)
> +{
> +#ifdef CONFIG_ARM64_SSBD
> +	extern int ssbd_state;
> +	return ssbd_state;
> +#else
> +	return ARM64_SSBD_UNKNOWN;
> +#endif
> +}
> +
>   #endif /* __ASSEMBLY__ */
>   
>   #endif
> 

-- 
Julien Grall

^ permalink raw reply	[flat|nested] 110+ messages in thread

* Re: [PATCH 07/14] arm64: ssbd: Skip apply_ssbd if not using dynamic mitigation
  2018-05-22 15:06   ` Marc Zyngier
@ 2018-05-23 10:13     ` Julien Grall
  -1 siblings, 0 replies; 110+ messages in thread
From: Julien Grall @ 2018-05-23 10:13 UTC (permalink / raw)
  To: Marc Zyngier, linux-arm-kernel, linux-kernel, kvmarm
  Cc: Kees Cook, Catalin Marinas, Will Deacon, Andy Lutomirski,
	Greg Kroah-Hartman, Thomas Gleixner

Hi Marc,

On 05/22/2018 04:06 PM, Marc Zyngier wrote:
> In order to avoid checking arm64_ssbd_callback_required on each
> kernel entry/exit even if no mitigation is required, let's
> add yet another alternative that by default jumps over the mitigation,
> and that gets nop'ed out if we're doing dynamic mitigation.
> 
> Think of it as a poor man's static key...
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

Reviewed-by: Julien Grall <julien.grall@arm.com>

Cheers,


> ---
>   arch/arm64/kernel/cpu_errata.c | 14 ++++++++++++++
>   arch/arm64/kernel/entry.S      |  3 +++
>   2 files changed, 17 insertions(+)
> 
> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
> index f1d4e75b0ddd..8f686f39b9c1 100644
> --- a/arch/arm64/kernel/cpu_errata.c
> +++ b/arch/arm64/kernel/cpu_errata.c
> @@ -283,6 +283,20 @@ void __init arm64_update_smccc_conduit(struct alt_instr *alt,
>   	*updptr = cpu_to_le32(insn);
>   }
>   
> +void __init arm64_enable_wa2_handling(struct alt_instr *alt,
> +				      __le32 *origptr, __le32 *updptr,
> +				      int nr_inst)
> +{
> +	BUG_ON(nr_inst != 1);
> +	/*
> +	 * Only allow mitigation on EL1 entry/exit and guest
> +	 * ARCH_WORKAROUND_2 handling if the SSBD state allows it to
> +	 * be flipped.
> +	 */
> +	if (arm64_get_ssbd_state() == ARM64_SSBD_EL1_ENTRY)
> +		*updptr = cpu_to_le32(aarch64_insn_gen_nop());
> +}
> +
>   static void do_ssbd(bool state)
>   {
>   	switch (psci_ops.conduit) {
> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index 29ad672a6abd..e6f6e2339b22 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -142,6 +142,9 @@ alternative_else_nop_endif
>   	// to save/restore them if required.
>   	.macro	apply_ssbd, state, targ, tmp1, tmp2
>   #ifdef CONFIG_ARM64_SSBD
> +alternative_cb	arm64_enable_wa2_handling
> +	b	\targ
> +alternative_cb_end
>   	ldr_this_cpu	\tmp2, arm64_ssbd_callback_required, \tmp1
>   	cbz	\tmp2, \targ
>   	mov	w0, #ARM_SMCCC_ARCH_WORKAROUND_2
> 

-- 
Julien Grall

^ permalink raw reply	[flat|nested] 110+ messages in thread

* [PATCH 07/14] arm64: ssbd: Skip apply_ssbd if not using dynamic mitigation
@ 2018-05-23 10:13     ` Julien Grall
  0 siblings, 0 replies; 110+ messages in thread
From: Julien Grall @ 2018-05-23 10:13 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Marc,

On 05/22/2018 04:06 PM, Marc Zyngier wrote:
> In order to avoid checking arm64_ssbd_callback_required on each
> kernel entry/exit even if no mitigation is required, let's
> add yet another alternative that by default jumps over the mitigation,
> and that gets nop'ed out if we're doing dynamic mitigation.
> 
> Think of it as a poor man's static key...
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

Reviewed-by: Julien Grall <julien.grall@arm.com>

Cheers,


> ---
>   arch/arm64/kernel/cpu_errata.c | 14 ++++++++++++++
>   arch/arm64/kernel/entry.S      |  3 +++
>   2 files changed, 17 insertions(+)
> 
> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
> index f1d4e75b0ddd..8f686f39b9c1 100644
> --- a/arch/arm64/kernel/cpu_errata.c
> +++ b/arch/arm64/kernel/cpu_errata.c
> @@ -283,6 +283,20 @@ void __init arm64_update_smccc_conduit(struct alt_instr *alt,
>   	*updptr = cpu_to_le32(insn);
>   }
>   
> +void __init arm64_enable_wa2_handling(struct alt_instr *alt,
> +				      __le32 *origptr, __le32 *updptr,
> +				      int nr_inst)
> +{
> +	BUG_ON(nr_inst != 1);
> +	/*
> +	 * Only allow mitigation on EL1 entry/exit and guest
> +	 * ARCH_WORKAROUND_2 handling if the SSBD state allows it to
> +	 * be flipped.
> +	 */
> +	if (arm64_get_ssbd_state() == ARM64_SSBD_EL1_ENTRY)
> +		*updptr = cpu_to_le32(aarch64_insn_gen_nop());
> +}
> +
>   static void do_ssbd(bool state)
>   {
>   	switch (psci_ops.conduit) {
> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index 29ad672a6abd..e6f6e2339b22 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -142,6 +142,9 @@ alternative_else_nop_endif
>   	// to save/restore them if required.
>   	.macro	apply_ssbd, state, targ, tmp1, tmp2
>   #ifdef CONFIG_ARM64_SSBD
> +alternative_cb	arm64_enable_wa2_handling
> +	b	\targ
> +alternative_cb_end
>   	ldr_this_cpu	\tmp2, arm64_ssbd_callback_required, \tmp1
>   	cbz	\tmp2, \targ
>   	mov	w0, #ARM_SMCCC_ARCH_WORKAROUND_2
> 

-- 
Julien Grall

^ permalink raw reply	[flat|nested] 110+ messages in thread

* Re: [PATCH 08/14] arm64: ssbd: Disable mitigation on CPU resume if required by user
  2018-05-22 15:06   ` Marc Zyngier
@ 2018-05-23 10:52     ` Julien Grall
  -1 siblings, 0 replies; 110+ messages in thread
From: Julien Grall @ 2018-05-23 10:52 UTC (permalink / raw)
  To: Marc Zyngier, linux-arm-kernel, linux-kernel, kvmarm
  Cc: Kees Cook, Catalin Marinas, Will Deacon, Christoffer Dall,
	Andy Lutomirski, Greg Kroah-Hartman, Thomas Gleixner

Hi,

On 05/22/2018 04:06 PM, Marc Zyngier wrote:
> On a system where firmware can dynamically change the state of the
> mitigation, the CPU will always come up with the mitigation enabled,
> including when coming back from suspend.
> 
> If the user has requested "no mitigation" via a command line option,
> let's enforce it by calling into the firmware again to disable it.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

Reviewed-by: Julien Grall <julien.grall@arm.com>

Cheers,

> ---
>   arch/arm64/include/asm/cpufeature.h | 6 ++++++
>   arch/arm64/kernel/cpu_errata.c      | 8 ++++----
>   arch/arm64/kernel/suspend.c         | 8 ++++++++
>   3 files changed, 18 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index 1bacdf57f0af..d9dcb683259e 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -553,6 +553,12 @@ static inline int arm64_get_ssbd_state(void)
>   #endif
>   }
>   
> +#ifdef CONFIG_ARM64_SSBD
> +void arm64_set_ssbd_mitigation(bool state);
> +#else
> +static inline void arm64_set_ssbd_mitigation(bool state) {}
> +#endif
> +
>   #endif /* __ASSEMBLY__ */
>   
>   #endif
> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
> index 8f686f39b9c1..b4c12e9140f0 100644
> --- a/arch/arm64/kernel/cpu_errata.c
> +++ b/arch/arm64/kernel/cpu_errata.c
> @@ -297,7 +297,7 @@ void __init arm64_enable_wa2_handling(struct alt_instr *alt,
>   		*updptr = cpu_to_le32(aarch64_insn_gen_nop());
>   }
>   
> -static void do_ssbd(bool state)
> +void arm64_set_ssbd_mitigation(bool state)
>   {
>   	switch (psci_ops.conduit) {
>   	case PSCI_CONDUIT_HVC:
> @@ -371,20 +371,20 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
>   	switch (ssbd_state) {
>   	case ARM64_SSBD_FORCE_DISABLE:
>   		pr_info_once("%s disabled from command-line\n", entry->desc);
> -		do_ssbd(false);
> +		arm64_set_ssbd_mitigation(false);
>   		required = false;
>   		break;
>   
>   	case ARM64_SSBD_EL1_ENTRY:
>   		if (required) {
>   			__this_cpu_write(arm64_ssbd_callback_required, 1);
> -			do_ssbd(true);
> +			arm64_set_ssbd_mitigation(true);
>   		}
>   		break;
>   
>   	case ARM64_SSBD_FORCE_ENABLE:
>   		pr_info_once("%s forced from command-line\n", entry->desc);
> -		do_ssbd(true);
> +		arm64_set_ssbd_mitigation(true);
>   		required = true;
>   		break;
>   
> diff --git a/arch/arm64/kernel/suspend.c b/arch/arm64/kernel/suspend.c
> index a307b9e13392..70c283368b64 100644
> --- a/arch/arm64/kernel/suspend.c
> +++ b/arch/arm64/kernel/suspend.c
> @@ -62,6 +62,14 @@ void notrace __cpu_suspend_exit(void)
>   	 */
>   	if (hw_breakpoint_restore)
>   		hw_breakpoint_restore(cpu);
> +
> +	/*
> +	 * On resume, firmware implementing dynamic mitigation will
> +	 * have turned the mitigation on. If the user has forcefully
> +	 * disabled it, make sure their wishes are obeyed.
> +	 */
> +	if (arm64_get_ssbd_state() == ARM64_SSBD_FORCE_DISABLE)
> +		arm64_set_ssbd_mitigation(false);
>   }
>   
>   /*
> 

-- 
Julien Grall

^ permalink raw reply	[flat|nested] 110+ messages in thread

* [PATCH 08/14] arm64: ssbd: Disable mitigation on CPU resume if required by user
@ 2018-05-23 10:52     ` Julien Grall
  0 siblings, 0 replies; 110+ messages in thread
From: Julien Grall @ 2018-05-23 10:52 UTC (permalink / raw)
  To: linux-arm-kernel

Hi,

On 05/22/2018 04:06 PM, Marc Zyngier wrote:
> On a system where firmware can dynamically change the state of the
> mitigation, the CPU will always come up with the mitigation enabled,
> including when coming back from suspend.
> 
> If the user has requested "no mitigation" via a command line option,
> let's enforce it by calling into the firmware again to disable it.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

Reviewed-by: Julien Grall <julien.grall@arm.com>

Cheers,

> ---
>   arch/arm64/include/asm/cpufeature.h | 6 ++++++
>   arch/arm64/kernel/cpu_errata.c      | 8 ++++----
>   arch/arm64/kernel/suspend.c         | 8 ++++++++
>   3 files changed, 18 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index 1bacdf57f0af..d9dcb683259e 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -553,6 +553,12 @@ static inline int arm64_get_ssbd_state(void)
>   #endif
>   }
>   
> +#ifdef CONFIG_ARM64_SSBD
> +void arm64_set_ssbd_mitigation(bool state);
> +#else
> +static inline void arm64_set_ssbd_mitigation(bool state) {}
> +#endif
> +
>   #endif /* __ASSEMBLY__ */
>   
>   #endif
> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
> index 8f686f39b9c1..b4c12e9140f0 100644
> --- a/arch/arm64/kernel/cpu_errata.c
> +++ b/arch/arm64/kernel/cpu_errata.c
> @@ -297,7 +297,7 @@ void __init arm64_enable_wa2_handling(struct alt_instr *alt,
>   		*updptr = cpu_to_le32(aarch64_insn_gen_nop());
>   }
>   
> -static void do_ssbd(bool state)
> +void arm64_set_ssbd_mitigation(bool state)
>   {
>   	switch (psci_ops.conduit) {
>   	case PSCI_CONDUIT_HVC:
> @@ -371,20 +371,20 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
>   	switch (ssbd_state) {
>   	case ARM64_SSBD_FORCE_DISABLE:
>   		pr_info_once("%s disabled from command-line\n", entry->desc);
> -		do_ssbd(false);
> +		arm64_set_ssbd_mitigation(false);
>   		required = false;
>   		break;
>   
>   	case ARM64_SSBD_EL1_ENTRY:
>   		if (required) {
>   			__this_cpu_write(arm64_ssbd_callback_required, 1);
> -			do_ssbd(true);
> +			arm64_set_ssbd_mitigation(true);
>   		}
>   		break;
>   
>   	case ARM64_SSBD_FORCE_ENABLE:
>   		pr_info_once("%s forced from command-line\n", entry->desc);
> -		do_ssbd(true);
> +		arm64_set_ssbd_mitigation(true);
>   		required = true;
>   		break;
>   
> diff --git a/arch/arm64/kernel/suspend.c b/arch/arm64/kernel/suspend.c
> index a307b9e13392..70c283368b64 100644
> --- a/arch/arm64/kernel/suspend.c
> +++ b/arch/arm64/kernel/suspend.c
> @@ -62,6 +62,14 @@ void notrace __cpu_suspend_exit(void)
>   	 */
>   	if (hw_breakpoint_restore)
>   		hw_breakpoint_restore(cpu);
> +
> +	/*
> +	 * On resume, firmware implementing dynamic mitigation will
> +	 * have turned the mitigation on. If the user has forcefully
> +	 * disabled it, make sure their wishes are obeyed.
> +	 */
> +	if (arm64_get_ssbd_state() == ARM64_SSBD_FORCE_DISABLE)
> +		arm64_set_ssbd_mitigation(false);
>   }
>   
>   /*
> 

-- 
Julien Grall

^ permalink raw reply	[flat|nested] 110+ messages in thread

* Re: [PATCH 04/14] arm64: Add ARCH_WORKAROUND_2 probing
  2018-05-22 15:06   ` Marc Zyngier
@ 2018-05-24  9:58     ` Suzuki K Poulose
  -1 siblings, 0 replies; 110+ messages in thread
From: Suzuki K Poulose @ 2018-05-24  9:58 UTC (permalink / raw)
  To: Marc Zyngier, linux-arm-kernel, linux-kernel, kvmarm
  Cc: Kees Cook, Catalin Marinas, Will Deacon, Andy Lutomirski,
	Greg Kroah-Hartman, Thomas Gleixner

On 22/05/18 16:06, Marc Zyngier wrote:
> As for Spectre variant-2, we rely on SMCCC 1.1 to provide the
> discovery mechanism for detecting the SSBD mitigation.
> 
> A new capability is also allocated for that purpose, and a
> config option.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>


> +static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
> +				    int scope)
> +{
> +	struct arm_smccc_res res;
> +	bool supported = true;
> +
> +	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
> +
> +	if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
> +		return false;
> +
> +	/*
> +	 * The probe function return value is either negative
> +	 * (unsupported or mitigated), positive (unaffected), or zero
> +	 * (requires mitigation). We only need to do anything in the
> +	 * last case.
> +	 */
> +	switch (psci_ops.conduit) {
> +	case PSCI_CONDUIT_HVC:
> +		arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
> +				  ARM_SMCCC_ARCH_WORKAROUND_2, &res);
> +		if ((int)res.a0 != 0)
> +			supported = false;
> +		break;
> +
> +	case PSCI_CONDUIT_SMC:
> +		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
> +				  ARM_SMCCC_ARCH_WORKAROUND_2, &res);
> +		if ((int)res.a0 != 0)
> +			supported = false;
> +		break;
> +
> +	default:
> +		supported = false;
> +	}
> +
> +	if (supported) {
> +		__this_cpu_write(arm64_ssbd_callback_required, 1);
> +		do_ssbd(true);
> +	}


Marc,

As discussed, we have minor issue with the "corner case". If a CPU
is hotplugged in which requires the mitigation, after the system has
finalised the cap to "not available", the CPU could go ahead and
do the "work around" as above, while not effectively doing anything
about it at runtime for KVM guests (as thats the only place where
we rely on the CAP being set).

But, yes this is real corner case. There is no easy way to solve it
other than

1) Allow late modifications to CPU hwcaps

OR

2) Penalise the fastpath to always check per-cpu setting.


Regardless,

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>

^ permalink raw reply	[flat|nested] 110+ messages in thread

* [PATCH 04/14] arm64: Add ARCH_WORKAROUND_2 probing
@ 2018-05-24  9:58     ` Suzuki K Poulose
  0 siblings, 0 replies; 110+ messages in thread
From: Suzuki K Poulose @ 2018-05-24  9:58 UTC (permalink / raw)
  To: linux-arm-kernel

On 22/05/18 16:06, Marc Zyngier wrote:
> As for Spectre variant-2, we rely on SMCCC 1.1 to provide the
> discovery mechanism for detecting the SSBD mitigation.
> 
> A new capability is also allocated for that purpose, and a
> config option.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>


> +static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
> +				    int scope)
> +{
> +	struct arm_smccc_res res;
> +	bool supported = true;
> +
> +	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
> +
> +	if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
> +		return false;
> +
> +	/*
> +	 * The probe function return value is either negative
> +	 * (unsupported or mitigated), positive (unaffected), or zero
> +	 * (requires mitigation). We only need to do anything in the
> +	 * last case.
> +	 */
> +	switch (psci_ops.conduit) {
> +	case PSCI_CONDUIT_HVC:
> +		arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
> +				  ARM_SMCCC_ARCH_WORKAROUND_2, &res);
> +		if ((int)res.a0 != 0)
> +			supported = false;
> +		break;
> +
> +	case PSCI_CONDUIT_SMC:
> +		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
> +				  ARM_SMCCC_ARCH_WORKAROUND_2, &res);
> +		if ((int)res.a0 != 0)
> +			supported = false;
> +		break;
> +
> +	default:
> +		supported = false;
> +	}
> +
> +	if (supported) {
> +		__this_cpu_write(arm64_ssbd_callback_required, 1);
> +		do_ssbd(true);
> +	}


Marc,

As discussed, we have minor issue with the "corner case". If a CPU
is hotplugged in which requires the mitigation, after the system has
finalised the cap to "not available", the CPU could go ahead and
do the "work around" as above, while not effectively doing anything
about it at runtime for KVM guests (as thats the only place where
we rely on the CAP being set).

But, yes this is real corner case. There is no easy way to solve it
other than

1) Allow late modifications to CPU hwcaps

OR

2) Penalise the fastpath to always check per-cpu setting.


Regardless,

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>

^ permalink raw reply	[flat|nested] 110+ messages in thread

* Re: [PATCH 02/14] arm64: Call ARCH_WORKAROUND_2 on transitions between EL0 and EL1
  2018-05-23  9:23     ` Julien Grall
@ 2018-05-24 10:52       ` Mark Rutland
  -1 siblings, 0 replies; 110+ messages in thread
From: Mark Rutland @ 2018-05-24 10:52 UTC (permalink / raw)
  To: Julien Grall
  Cc: Marc Zyngier, linux-arm-kernel, linux-kernel, kvmarm, Kees Cook,
	Catalin Marinas, Will Deacon, Andy Lutomirski,
	Greg Kroah-Hartman, Thomas Gleixner

On Wed, May 23, 2018 at 10:23:20AM +0100, Julien Grall wrote:
> Hi Marc,
> 
> On 05/22/2018 04:06 PM, Marc Zyngier wrote:
> > diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> > index ec2ee720e33e..f33e6aed3037 100644
> > --- a/arch/arm64/kernel/entry.S
> > +++ b/arch/arm64/kernel/entry.S
> > @@ -18,6 +18,7 @@
> >    * along with this program.  If not, see <http://www.gnu.org/licenses/>.
> >    */
> > +#include <linux/arm-smccc.h>
> >   #include <linux/init.h>
> >   #include <linux/linkage.h>
> > @@ -137,6 +138,18 @@ alternative_else_nop_endif
> >   	add	\dst, \dst, #(\sym - .entry.tramp.text)
> >   	.endm
> > +	// This macro corrupts x0-x3. It is the caller's duty
> > +	// to save/restore them if required.
> 
> NIT: Shouldn't you use /* ... */ for multi-line comments?

There's no requirement to do so, and IIRC even Torvalds prefers '//'
comments for multi-line things these days.

Mark.

^ permalink raw reply	[flat|nested] 110+ messages in thread

* [PATCH 02/14] arm64: Call ARCH_WORKAROUND_2 on transitions between EL0 and EL1
@ 2018-05-24 10:52       ` Mark Rutland
  0 siblings, 0 replies; 110+ messages in thread
From: Mark Rutland @ 2018-05-24 10:52 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, May 23, 2018 at 10:23:20AM +0100, Julien Grall wrote:
> Hi Marc,
> 
> On 05/22/2018 04:06 PM, Marc Zyngier wrote:
> > diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> > index ec2ee720e33e..f33e6aed3037 100644
> > --- a/arch/arm64/kernel/entry.S
> > +++ b/arch/arm64/kernel/entry.S
> > @@ -18,6 +18,7 @@
> >    * along with this program.  If not, see <http://www.gnu.org/licenses/>.
> >    */
> > +#include <linux/arm-smccc.h>
> >   #include <linux/init.h>
> >   #include <linux/linkage.h>
> > @@ -137,6 +138,18 @@ alternative_else_nop_endif
> >   	add	\dst, \dst, #(\sym - .entry.tramp.text)
> >   	.endm
> > +	// This macro corrupts x0-x3. It is the caller's duty
> > +	// to save/restore them if required.
> 
> NIT: Shouldn't you use /* ... */ for multi-line comments?

There's no requirement to do so, and IIRC even Torvalds prefers '//'
comments for multi-line things these days.

Mark.

^ permalink raw reply	[flat|nested] 110+ messages in thread

* Re: [PATCH 01/14] arm/arm64: smccc: Add SMCCC-specific return codes
  2018-05-22 15:06   ` Marc Zyngier
@ 2018-05-24 10:55     ` Mark Rutland
  -1 siblings, 0 replies; 110+ messages in thread
From: Mark Rutland @ 2018-05-24 10:55 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: linux-arm-kernel, linux-kernel, kvmarm, Kees Cook,
	Catalin Marinas, Will Deacon, Andy Lutomirski,
	Greg Kroah-Hartman, Thomas Gleixner

On Tue, May 22, 2018 at 04:06:35PM +0100, Marc Zyngier wrote:
> We've so far used the PSCI return codes for SMCCC because they
> were extremely similar. But with the new ARM DEN 0070A specification,
> "NOT_REQUIRED" (-2) is clashing with PSCI's "PSCI_RET_INVALID_PARAMS".
> 
> Let's bite the bullet and add SMCCC specific return codes. Users
> can be repainted as and when required.
> 
> Acked-by: Will Deacon <will.deacon@arm.com>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

Reviewed-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> ---
>  include/linux/arm-smccc.h | 5 +++++
>  1 file changed, 5 insertions(+)
> 
> diff --git a/include/linux/arm-smccc.h b/include/linux/arm-smccc.h
> index a031897fca76..c89da86de99f 100644
> --- a/include/linux/arm-smccc.h
> +++ b/include/linux/arm-smccc.h
> @@ -291,5 +291,10 @@ asmlinkage void __arm_smccc_hvc(unsigned long a0, unsigned long a1,
>   */
>  #define arm_smccc_1_1_hvc(...)	__arm_smccc_1_1(SMCCC_HVC_INST, __VA_ARGS__)
>  
> +/* Return codes defined in ARM DEN 0070A */
> +#define SMCCC_RET_SUCCESS			0
> +#define SMCCC_RET_NOT_SUPPORTED			-1
> +#define SMCCC_RET_NOT_REQUIRED			-2
> +
>  #endif /*__ASSEMBLY__*/
>  #endif /*__LINUX_ARM_SMCCC_H*/
> -- 
> 2.14.2
> 
> _______________________________________________
> kvmarm mailing list
> kvmarm@lists.cs.columbia.edu
> https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 110+ messages in thread

* [PATCH 01/14] arm/arm64: smccc: Add SMCCC-specific return codes
@ 2018-05-24 10:55     ` Mark Rutland
  0 siblings, 0 replies; 110+ messages in thread
From: Mark Rutland @ 2018-05-24 10:55 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, May 22, 2018 at 04:06:35PM +0100, Marc Zyngier wrote:
> We've so far used the PSCI return codes for SMCCC because they
> were extremely similar. But with the new ARM DEN 0070A specification,
> "NOT_REQUIRED" (-2) is clashing with PSCI's "PSCI_RET_INVALID_PARAMS".
> 
> Let's bite the bullet and add SMCCC specific return codes. Users
> can be repainted as and when required.
> 
> Acked-by: Will Deacon <will.deacon@arm.com>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

Reviewed-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> ---
>  include/linux/arm-smccc.h | 5 +++++
>  1 file changed, 5 insertions(+)
> 
> diff --git a/include/linux/arm-smccc.h b/include/linux/arm-smccc.h
> index a031897fca76..c89da86de99f 100644
> --- a/include/linux/arm-smccc.h
> +++ b/include/linux/arm-smccc.h
> @@ -291,5 +291,10 @@ asmlinkage void __arm_smccc_hvc(unsigned long a0, unsigned long a1,
>   */
>  #define arm_smccc_1_1_hvc(...)	__arm_smccc_1_1(SMCCC_HVC_INST, __VA_ARGS__)
>  
> +/* Return codes defined in ARM DEN 0070A */
> +#define SMCCC_RET_SUCCESS			0
> +#define SMCCC_RET_NOT_SUPPORTED			-1
> +#define SMCCC_RET_NOT_REQUIRED			-2
> +
>  #endif /*__ASSEMBLY__*/
>  #endif /*__LINUX_ARM_SMCCC_H*/
> -- 
> 2.14.2
> 
> _______________________________________________
> kvmarm mailing list
> kvmarm at lists.cs.columbia.edu
> https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 110+ messages in thread

* Re: [PATCH 02/14] arm64: Call ARCH_WORKAROUND_2 on transitions between EL0 and EL1
  2018-05-22 15:06   ` Marc Zyngier
@ 2018-05-24 11:00     ` Mark Rutland
  -1 siblings, 0 replies; 110+ messages in thread
From: Mark Rutland @ 2018-05-24 11:00 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: linux-arm-kernel, linux-kernel, kvmarm, Kees Cook,
	Catalin Marinas, Will Deacon, Andy Lutomirski,
	Greg Kroah-Hartman, Thomas Gleixner

On Tue, May 22, 2018 at 04:06:36PM +0100, Marc Zyngier wrote:
> In order for the kernel to protect itself, let's call the SSBD mitigation
> implemented by the higher exception level (either hypervisor or firmware)
> on each transition between userspace and kernel.
> 
> We must take the PSCI conduit into account in order to target the
> right exception level, hence the introduction of a runtime patching
> callback.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
>  arch/arm64/kernel/cpu_errata.c | 18 ++++++++++++++++++
>  arch/arm64/kernel/entry.S      | 22 ++++++++++++++++++++++
>  include/linux/arm-smccc.h      |  5 +++++
>  3 files changed, 45 insertions(+)
> 
> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
> index a900befadfe8..46b3aafb631a 100644
> --- a/arch/arm64/kernel/cpu_errata.c
> +++ b/arch/arm64/kernel/cpu_errata.c
> @@ -232,6 +232,24 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
>  }
>  #endif	/* CONFIG_HARDEN_BRANCH_PREDICTOR */
>  
> +#ifdef CONFIG_ARM64_SSBD
> +void __init arm64_update_smccc_conduit(struct alt_instr *alt,
> +				       __le32 *origptr, __le32 *updptr,
> +				       int nr_inst)
> +{
> +	u32 insn;
> +
> +	BUG_ON(nr_inst != 1);
> +
> +	if (psci_ops.conduit == PSCI_CONDUIT_HVC)
> +		insn = aarch64_insn_get_hvc_value();
> +	else
> +		insn = aarch64_insn_get_smc_value();

Shouldn't this also handle the case where there is no conduit?

See below comment in apply_ssbd for rationale.

> +
> +	*updptr = cpu_to_le32(insn);
> +}
> +#endif	/* CONFIG_ARM64_SSBD */
> +
>  #define CAP_MIDR_RANGE(model, v_min, r_min, v_max, r_max)	\
>  	.matches = is_affected_midr_range,			\
>  	.midr_range = MIDR_RANGE(model, v_min, r_min, v_max, r_max)
> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index ec2ee720e33e..f33e6aed3037 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -18,6 +18,7 @@
>   * along with this program.  If not, see <http://www.gnu.org/licenses/>.
>   */
>  
> +#include <linux/arm-smccc.h>
>  #include <linux/init.h>
>  #include <linux/linkage.h>
>  
> @@ -137,6 +138,18 @@ alternative_else_nop_endif
>  	add	\dst, \dst, #(\sym - .entry.tramp.text)
>  	.endm
>  
> +	// This macro corrupts x0-x3. It is the caller's duty
> +	// to save/restore them if required.
> +	.macro	apply_ssbd, state
> +#ifdef CONFIG_ARM64_SSBD
> +	mov	w0, #ARM_SMCCC_ARCH_WORKAROUND_2
> +	mov	w1, #\state
> +alternative_cb	arm64_update_smccc_conduit
> +	nop					// Patched to SMC/HVC #0
> +alternative_cb_end
> +#endif
> +	.endm

If my system doesn't have SMCCC1.1, or the FW doesn't have an
implementation of ARCH_WORKAROUND_2, does this stay as a NOP?

It looks like this would be patched to an SMC, which would be fatal on
systems without EL3 FW.

> +
>  	.macro	kernel_entry, el, regsize = 64
>  	.if	\regsize == 32
>  	mov	w0, w0				// zero upper 32 bits of x0
> @@ -163,6 +176,13 @@ alternative_else_nop_endif
>  	ldr	x19, [tsk, #TSK_TI_FLAGS]	// since we can unmask debug
>  	disable_step_tsk x19, x20		// exceptions when scheduling.
>  
> +	apply_ssbd 1


... and thus kernel_entry would be fatal.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 110+ messages in thread

* [PATCH 02/14] arm64: Call ARCH_WORKAROUND_2 on transitions between EL0 and EL1
@ 2018-05-24 11:00     ` Mark Rutland
  0 siblings, 0 replies; 110+ messages in thread
From: Mark Rutland @ 2018-05-24 11:00 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, May 22, 2018 at 04:06:36PM +0100, Marc Zyngier wrote:
> In order for the kernel to protect itself, let's call the SSBD mitigation
> implemented by the higher exception level (either hypervisor or firmware)
> on each transition between userspace and kernel.
> 
> We must take the PSCI conduit into account in order to target the
> right exception level, hence the introduction of a runtime patching
> callback.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
>  arch/arm64/kernel/cpu_errata.c | 18 ++++++++++++++++++
>  arch/arm64/kernel/entry.S      | 22 ++++++++++++++++++++++
>  include/linux/arm-smccc.h      |  5 +++++
>  3 files changed, 45 insertions(+)
> 
> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
> index a900befadfe8..46b3aafb631a 100644
> --- a/arch/arm64/kernel/cpu_errata.c
> +++ b/arch/arm64/kernel/cpu_errata.c
> @@ -232,6 +232,24 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
>  }
>  #endif	/* CONFIG_HARDEN_BRANCH_PREDICTOR */
>  
> +#ifdef CONFIG_ARM64_SSBD
> +void __init arm64_update_smccc_conduit(struct alt_instr *alt,
> +				       __le32 *origptr, __le32 *updptr,
> +				       int nr_inst)
> +{
> +	u32 insn;
> +
> +	BUG_ON(nr_inst != 1);
> +
> +	if (psci_ops.conduit == PSCI_CONDUIT_HVC)
> +		insn = aarch64_insn_get_hvc_value();
> +	else
> +		insn = aarch64_insn_get_smc_value();

Shouldn't this also handle the case where there is no conduit?

See below comment in apply_ssbd for rationale.

> +
> +	*updptr = cpu_to_le32(insn);
> +}
> +#endif	/* CONFIG_ARM64_SSBD */
> +
>  #define CAP_MIDR_RANGE(model, v_min, r_min, v_max, r_max)	\
>  	.matches = is_affected_midr_range,			\
>  	.midr_range = MIDR_RANGE(model, v_min, r_min, v_max, r_max)
> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index ec2ee720e33e..f33e6aed3037 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -18,6 +18,7 @@
>   * along with this program.  If not, see <http://www.gnu.org/licenses/>.
>   */
>  
> +#include <linux/arm-smccc.h>
>  #include <linux/init.h>
>  #include <linux/linkage.h>
>  
> @@ -137,6 +138,18 @@ alternative_else_nop_endif
>  	add	\dst, \dst, #(\sym - .entry.tramp.text)
>  	.endm
>  
> +	// This macro corrupts x0-x3. It is the caller's duty
> +	// to save/restore them if required.
> +	.macro	apply_ssbd, state
> +#ifdef CONFIG_ARM64_SSBD
> +	mov	w0, #ARM_SMCCC_ARCH_WORKAROUND_2
> +	mov	w1, #\state
> +alternative_cb	arm64_update_smccc_conduit
> +	nop					// Patched to SMC/HVC #0
> +alternative_cb_end
> +#endif
> +	.endm

If my system doesn't have SMCCC1.1, or the FW doesn't have an
implementation of ARCH_WORKAROUND_2, does this stay as a NOP?

It looks like this would be patched to an SMC, which would be fatal on
systems without EL3 FW.

> +
>  	.macro	kernel_entry, el, regsize = 64
>  	.if	\regsize == 32
>  	mov	w0, w0				// zero upper 32 bits of x0
> @@ -163,6 +176,13 @@ alternative_else_nop_endif
>  	ldr	x19, [tsk, #TSK_TI_FLAGS]	// since we can unmask debug
>  	disable_step_tsk x19, x20		// exceptions when scheduling.
>  
> +	apply_ssbd 1


... and thus kernel_entry would be fatal.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 110+ messages in thread

* Re: [PATCH 03/14] arm64: Add per-cpu infrastructure to call ARCH_WORKAROUND_2
  2018-05-22 15:06   ` Marc Zyngier
@ 2018-05-24 11:14     ` Mark Rutland
  -1 siblings, 0 replies; 110+ messages in thread
From: Mark Rutland @ 2018-05-24 11:14 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: linux-arm-kernel, linux-kernel, kvmarm, Kees Cook,
	Catalin Marinas, Will Deacon, Christoffer Dall, Andy Lutomirski,
	Greg Kroah-Hartman, Thomas Gleixner

On Tue, May 22, 2018 at 04:06:37PM +0100, Marc Zyngier wrote:
> In a heterogeneous system, we can end up with both affected and
> unaffected CPUs. Let's check their status before calling into the
> firmware.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

Ah, I guess this may fix the issue I noted with the prior patch,
assuming we only set arm64_ssbd_callback_required for a CPU when the FW
supports the mitigation.

If so, if you fold this together with the prior patch:

Reviewed-by: Mark Rutland <mark.rutland@arm.com>

Thanks,
Mark.

> ---
>  arch/arm64/kernel/cpu_errata.c |  2 ++
>  arch/arm64/kernel/entry.S      | 11 +++++++----
>  2 files changed, 9 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
> index 46b3aafb631a..0288d6cf560e 100644
> --- a/arch/arm64/kernel/cpu_errata.c
> +++ b/arch/arm64/kernel/cpu_errata.c
> @@ -233,6 +233,8 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
>  #endif	/* CONFIG_HARDEN_BRANCH_PREDICTOR */
>  
>  #ifdef CONFIG_ARM64_SSBD
> +DEFINE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
> +
>  void __init arm64_update_smccc_conduit(struct alt_instr *alt,
>  				       __le32 *origptr, __le32 *updptr,
>  				       int nr_inst)
> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index f33e6aed3037..29ad672a6abd 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -140,8 +140,10 @@ alternative_else_nop_endif
>  
>  	// This macro corrupts x0-x3. It is the caller's duty
>  	// to save/restore them if required.
> -	.macro	apply_ssbd, state
> +	.macro	apply_ssbd, state, targ, tmp1, tmp2
>  #ifdef CONFIG_ARM64_SSBD
> +	ldr_this_cpu	\tmp2, arm64_ssbd_callback_required, \tmp1
> +	cbz	\tmp2, \targ
>  	mov	w0, #ARM_SMCCC_ARCH_WORKAROUND_2
>  	mov	w1, #\state
>  alternative_cb	arm64_update_smccc_conduit
> @@ -176,12 +178,13 @@ alternative_cb_end
>  	ldr	x19, [tsk, #TSK_TI_FLAGS]	// since we can unmask debug
>  	disable_step_tsk x19, x20		// exceptions when scheduling.
>  
> -	apply_ssbd 1
> +	apply_ssbd 1, 1f, x22, x23
>  
>  #ifdef CONFIG_ARM64_SSBD
>  	ldp	x0, x1, [sp, #16 * 0]
>  	ldp	x2, x3, [sp, #16 * 1]
>  #endif
> +1:
>  
>  	mov	x29, xzr			// fp pointed to user-space
>  	.else
> @@ -323,8 +326,8 @@ alternative_if ARM64_WORKAROUND_845719
>  alternative_else_nop_endif
>  #endif
>  3:
> -	apply_ssbd 0
> -
> +	apply_ssbd 0, 5f, x0, x1
> +5:
>  	.endif
>  
>  	msr	elr_el1, x21			// set up the return data
> -- 
> 2.14.2
> 
> 
> _______________________________________________
> linux-arm-kernel mailing list
> linux-arm-kernel@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 110+ messages in thread

* [PATCH 03/14] arm64: Add per-cpu infrastructure to call ARCH_WORKAROUND_2
@ 2018-05-24 11:14     ` Mark Rutland
  0 siblings, 0 replies; 110+ messages in thread
From: Mark Rutland @ 2018-05-24 11:14 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, May 22, 2018 at 04:06:37PM +0100, Marc Zyngier wrote:
> In a heterogeneous system, we can end up with both affected and
> unaffected CPUs. Let's check their status before calling into the
> firmware.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

Ah, I guess this may fix the issue I noted with the prior patch,
assuming we only set arm64_ssbd_callback_required for a CPU when the FW
supports the mitigation.

If so, if you fold this together with the prior patch:

Reviewed-by: Mark Rutland <mark.rutland@arm.com>

Thanks,
Mark.

> ---
>  arch/arm64/kernel/cpu_errata.c |  2 ++
>  arch/arm64/kernel/entry.S      | 11 +++++++----
>  2 files changed, 9 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
> index 46b3aafb631a..0288d6cf560e 100644
> --- a/arch/arm64/kernel/cpu_errata.c
> +++ b/arch/arm64/kernel/cpu_errata.c
> @@ -233,6 +233,8 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
>  #endif	/* CONFIG_HARDEN_BRANCH_PREDICTOR */
>  
>  #ifdef CONFIG_ARM64_SSBD
> +DEFINE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
> +
>  void __init arm64_update_smccc_conduit(struct alt_instr *alt,
>  				       __le32 *origptr, __le32 *updptr,
>  				       int nr_inst)
> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index f33e6aed3037..29ad672a6abd 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -140,8 +140,10 @@ alternative_else_nop_endif
>  
>  	// This macro corrupts x0-x3. It is the caller's duty
>  	// to save/restore them if required.
> -	.macro	apply_ssbd, state
> +	.macro	apply_ssbd, state, targ, tmp1, tmp2
>  #ifdef CONFIG_ARM64_SSBD
> +	ldr_this_cpu	\tmp2, arm64_ssbd_callback_required, \tmp1
> +	cbz	\tmp2, \targ
>  	mov	w0, #ARM_SMCCC_ARCH_WORKAROUND_2
>  	mov	w1, #\state
>  alternative_cb	arm64_update_smccc_conduit
> @@ -176,12 +178,13 @@ alternative_cb_end
>  	ldr	x19, [tsk, #TSK_TI_FLAGS]	// since we can unmask debug
>  	disable_step_tsk x19, x20		// exceptions when scheduling.
>  
> -	apply_ssbd 1
> +	apply_ssbd 1, 1f, x22, x23
>  
>  #ifdef CONFIG_ARM64_SSBD
>  	ldp	x0, x1, [sp, #16 * 0]
>  	ldp	x2, x3, [sp, #16 * 1]
>  #endif
> +1:
>  
>  	mov	x29, xzr			// fp pointed to user-space
>  	.else
> @@ -323,8 +326,8 @@ alternative_if ARM64_WORKAROUND_845719
>  alternative_else_nop_endif
>  #endif
>  3:
> -	apply_ssbd 0
> -
> +	apply_ssbd 0, 5f, x0, x1
> +5:
>  	.endif
>  
>  	msr	elr_el1, x21			// set up the return data
> -- 
> 2.14.2
> 
> 
> _______________________________________________
> linux-arm-kernel mailing list
> linux-arm-kernel at lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 110+ messages in thread

* Re: [PATCH 02/14] arm64: Call ARCH_WORKAROUND_2 on transitions between EL0 and EL1
  2018-05-24 11:00     ` Mark Rutland
@ 2018-05-24 11:23       ` Mark Rutland
  -1 siblings, 0 replies; 110+ messages in thread
From: Mark Rutland @ 2018-05-24 11:23 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: linux-arm-kernel, linux-kernel, kvmarm, Kees Cook,
	Catalin Marinas, Will Deacon, Andy Lutomirski,
	Greg Kroah-Hartman, Thomas Gleixner

On Thu, May 24, 2018 at 12:00:58PM +0100, Mark Rutland wrote:
> On Tue, May 22, 2018 at 04:06:36PM +0100, Marc Zyngier wrote:
> > In order for the kernel to protect itself, let's call the SSBD mitigation
> > implemented by the higher exception level (either hypervisor or firmware)
> > on each transition between userspace and kernel.
> > 
> > We must take the PSCI conduit into account in order to target the
> > right exception level, hence the introduction of a runtime patching
> > callback.
> > 
> > Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> > ---
> >  arch/arm64/kernel/cpu_errata.c | 18 ++++++++++++++++++
> >  arch/arm64/kernel/entry.S      | 22 ++++++++++++++++++++++
> >  include/linux/arm-smccc.h      |  5 +++++
> >  3 files changed, 45 insertions(+)
> > 
> > diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
> > index a900befadfe8..46b3aafb631a 100644
> > --- a/arch/arm64/kernel/cpu_errata.c
> > +++ b/arch/arm64/kernel/cpu_errata.c
> > @@ -232,6 +232,24 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
> >  }
> >  #endif	/* CONFIG_HARDEN_BRANCH_PREDICTOR */
> >  
> > +#ifdef CONFIG_ARM64_SSBD
> > +void __init arm64_update_smccc_conduit(struct alt_instr *alt,
> > +				       __le32 *origptr, __le32 *updptr,
> > +				       int nr_inst)
> > +{
> > +	u32 insn;
> > +
> > +	BUG_ON(nr_inst != 1);
> > +
> > +	if (psci_ops.conduit == PSCI_CONDUIT_HVC)
> > +		insn = aarch64_insn_get_hvc_value();
> > +	else
> > +		insn = aarch64_insn_get_smc_value();
> 
> Shouldn't this also handle the case where there is no conduit?

Due to the config symbol not being defined yet, and various other fixups
in later patches, this is actually benign.

However, if you make this:

	switch (psci_ops.conduit) {
	case PSCI_CONDUIT_NONE:
		return;
	case PSCI_CONDUIT_HVC:
		insn = aarch64_insn_get_hvc_value();
		break;
	case PSCI_CONDUIT_SMC:
		insn = aarch64_insn_get_smc_value();
		break;
	}

... then we won't even bother patching the nop in the default case
regardless, which is nicer, IMO.

With that:

Reviewed-by: Mark Rutland <mark.rutland@arm.com>

Thanks,
Mark.

> 
> See below comment in apply_ssbd for rationale.
> 
> > +
> > +	*updptr = cpu_to_le32(insn);
> > +}
> > +#endif	/* CONFIG_ARM64_SSBD */
> > +
> >  #define CAP_MIDR_RANGE(model, v_min, r_min, v_max, r_max)	\
> >  	.matches = is_affected_midr_range,			\
> >  	.midr_range = MIDR_RANGE(model, v_min, r_min, v_max, r_max)
> > diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> > index ec2ee720e33e..f33e6aed3037 100644
> > --- a/arch/arm64/kernel/entry.S
> > +++ b/arch/arm64/kernel/entry.S
> > @@ -18,6 +18,7 @@
> >   * along with this program.  If not, see <http://www.gnu.org/licenses/>.
> >   */
> >  
> > +#include <linux/arm-smccc.h>
> >  #include <linux/init.h>
> >  #include <linux/linkage.h>
> >  
> > @@ -137,6 +138,18 @@ alternative_else_nop_endif
> >  	add	\dst, \dst, #(\sym - .entry.tramp.text)
> >  	.endm
> >  
> > +	// This macro corrupts x0-x3. It is the caller's duty
> > +	// to save/restore them if required.
> > +	.macro	apply_ssbd, state
> > +#ifdef CONFIG_ARM64_SSBD
> > +	mov	w0, #ARM_SMCCC_ARCH_WORKAROUND_2
> > +	mov	w1, #\state
> > +alternative_cb	arm64_update_smccc_conduit
> > +	nop					// Patched to SMC/HVC #0
> > +alternative_cb_end
> > +#endif
> > +	.endm
> 
> If my system doesn't have SMCCC1.1, or the FW doesn't have an
> implementation of ARCH_WORKAROUND_2, does this stay as a NOP?
> 
> It looks like this would be patched to an SMC, which would be fatal on
> systems without EL3 FW.
> 
> > +
> >  	.macro	kernel_entry, el, regsize = 64
> >  	.if	\regsize == 32
> >  	mov	w0, w0				// zero upper 32 bits of x0
> > @@ -163,6 +176,13 @@ alternative_else_nop_endif
> >  	ldr	x19, [tsk, #TSK_TI_FLAGS]	// since we can unmask debug
> >  	disable_step_tsk x19, x20		// exceptions when scheduling.
> >  
> > +	apply_ssbd 1
> 
> 
> ... and thus kernel_entry would be fatal.
> 
> Thanks,
> Mark.

^ permalink raw reply	[flat|nested] 110+ messages in thread

* [PATCH 02/14] arm64: Call ARCH_WORKAROUND_2 on transitions between EL0 and EL1
@ 2018-05-24 11:23       ` Mark Rutland
  0 siblings, 0 replies; 110+ messages in thread
From: Mark Rutland @ 2018-05-24 11:23 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, May 24, 2018 at 12:00:58PM +0100, Mark Rutland wrote:
> On Tue, May 22, 2018 at 04:06:36PM +0100, Marc Zyngier wrote:
> > In order for the kernel to protect itself, let's call the SSBD mitigation
> > implemented by the higher exception level (either hypervisor or firmware)
> > on each transition between userspace and kernel.
> > 
> > We must take the PSCI conduit into account in order to target the
> > right exception level, hence the introduction of a runtime patching
> > callback.
> > 
> > Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> > ---
> >  arch/arm64/kernel/cpu_errata.c | 18 ++++++++++++++++++
> >  arch/arm64/kernel/entry.S      | 22 ++++++++++++++++++++++
> >  include/linux/arm-smccc.h      |  5 +++++
> >  3 files changed, 45 insertions(+)
> > 
> > diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
> > index a900befadfe8..46b3aafb631a 100644
> > --- a/arch/arm64/kernel/cpu_errata.c
> > +++ b/arch/arm64/kernel/cpu_errata.c
> > @@ -232,6 +232,24 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
> >  }
> >  #endif	/* CONFIG_HARDEN_BRANCH_PREDICTOR */
> >  
> > +#ifdef CONFIG_ARM64_SSBD
> > +void __init arm64_update_smccc_conduit(struct alt_instr *alt,
> > +				       __le32 *origptr, __le32 *updptr,
> > +				       int nr_inst)
> > +{
> > +	u32 insn;
> > +
> > +	BUG_ON(nr_inst != 1);
> > +
> > +	if (psci_ops.conduit == PSCI_CONDUIT_HVC)
> > +		insn = aarch64_insn_get_hvc_value();
> > +	else
> > +		insn = aarch64_insn_get_smc_value();
> 
> Shouldn't this also handle the case where there is no conduit?

Due to the config symbol not being defined yet, and various other fixups
in later patches, this is actually benign.

However, if you make this:

	switch (psci_ops.conduit) {
	case PSCI_CONDUIT_NONE:
		return;
	case PSCI_CONDUIT_HVC:
		insn = aarch64_insn_get_hvc_value();
		break;
	case PSCI_CONDUIT_SMC:
		insn = aarch64_insn_get_smc_value();
		break;
	}

... then we won't even bother patching the nop in the default case
regardless, which is nicer, IMO.

With that:

Reviewed-by: Mark Rutland <mark.rutland@arm.com>

Thanks,
Mark.

> 
> See below comment in apply_ssbd for rationale.
> 
> > +
> > +	*updptr = cpu_to_le32(insn);
> > +}
> > +#endif	/* CONFIG_ARM64_SSBD */
> > +
> >  #define CAP_MIDR_RANGE(model, v_min, r_min, v_max, r_max)	\
> >  	.matches = is_affected_midr_range,			\
> >  	.midr_range = MIDR_RANGE(model, v_min, r_min, v_max, r_max)
> > diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> > index ec2ee720e33e..f33e6aed3037 100644
> > --- a/arch/arm64/kernel/entry.S
> > +++ b/arch/arm64/kernel/entry.S
> > @@ -18,6 +18,7 @@
> >   * along with this program.  If not, see <http://www.gnu.org/licenses/>.
> >   */
> >  
> > +#include <linux/arm-smccc.h>
> >  #include <linux/init.h>
> >  #include <linux/linkage.h>
> >  
> > @@ -137,6 +138,18 @@ alternative_else_nop_endif
> >  	add	\dst, \dst, #(\sym - .entry.tramp.text)
> >  	.endm
> >  
> > +	// This macro corrupts x0-x3. It is the caller's duty
> > +	// to save/restore them if required.
> > +	.macro	apply_ssbd, state
> > +#ifdef CONFIG_ARM64_SSBD
> > +	mov	w0, #ARM_SMCCC_ARCH_WORKAROUND_2
> > +	mov	w1, #\state
> > +alternative_cb	arm64_update_smccc_conduit
> > +	nop					// Patched to SMC/HVC #0
> > +alternative_cb_end
> > +#endif
> > +	.endm
> 
> If my system doesn't have SMCCC1.1, or the FW doesn't have an
> implementation of ARCH_WORKAROUND_2, does this stay as a NOP?
> 
> It looks like this would be patched to an SMC, which would be fatal on
> systems without EL3 FW.
> 
> > +
> >  	.macro	kernel_entry, el, regsize = 64
> >  	.if	\regsize == 32
> >  	mov	w0, w0				// zero upper 32 bits of x0
> > @@ -163,6 +176,13 @@ alternative_else_nop_endif
> >  	ldr	x19, [tsk, #TSK_TI_FLAGS]	// since we can unmask debug
> >  	disable_step_tsk x19, x20		// exceptions when scheduling.
> >  
> > +	apply_ssbd 1
> 
> 
> ... and thus kernel_entry would be fatal.
> 
> Thanks,
> Mark.

^ permalink raw reply	[flat|nested] 110+ messages in thread

* Re: [PATCH 04/14] arm64: Add ARCH_WORKAROUND_2 probing
  2018-05-22 15:06   ` Marc Zyngier
@ 2018-05-24 11:27     ` Mark Rutland
  -1 siblings, 0 replies; 110+ messages in thread
From: Mark Rutland @ 2018-05-24 11:27 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: linux-arm-kernel, linux-kernel, kvmarm, Kees Cook,
	Catalin Marinas, Will Deacon, Andy Lutomirski,
	Greg Kroah-Hartman, Thomas Gleixner

On Tue, May 22, 2018 at 04:06:38PM +0100, Marc Zyngier wrote:
> As for Spectre variant-2, we rely on SMCCC 1.1 to provide the
> discovery mechanism for detecting the SSBD mitigation.
> 
> A new capability is also allocated for that purpose, and a
> config option.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

Reviewed-by: Mark Rutland <mark.rutland@arm.com>

[...]

> +static void do_ssbd(bool state)
> +{
> +	switch (psci_ops.conduit) {
> +	case PSCI_CONDUIT_HVC:
> +		arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_WORKAROUND_2, state, NULL);
> +		break;
> +
> +	case PSCI_CONDUIT_SMC:
> +		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2, state, NULL);
> +		break;
> +
> +	default:
> +		WARN_ON_ONCE(1);
> +		break;
> +	}
> +}
> +
> +static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
> +				    int scope)
> +{
> +	struct arm_smccc_res res;
> +	bool supported = true;
> +
> +	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
> +
> +	if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
> +		return false;
> +
> +	/*
> +	 * The probe function return value is either negative
> +	 * (unsupported or mitigated), positive (unaffected), or zero
> +	 * (requires mitigation). We only need to do anything in the
> +	 * last case.
> +	 */
> +	switch (psci_ops.conduit) {
> +	case PSCI_CONDUIT_HVC:
> +		arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
> +				  ARM_SMCCC_ARCH_WORKAROUND_2, &res);
> +		if ((int)res.a0 != 0)
> +			supported = false;
> +		break;
> +
> +	case PSCI_CONDUIT_SMC:
> +		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
> +				  ARM_SMCCC_ARCH_WORKAROUND_2, &res);
> +		if ((int)res.a0 != 0)
> +			supported = false;
> +		break;

Once this is merged, I'll rebase my SMCCCC conduit cleanup atop.

Mark.

^ permalink raw reply	[flat|nested] 110+ messages in thread

* [PATCH 04/14] arm64: Add ARCH_WORKAROUND_2 probing
@ 2018-05-24 11:27     ` Mark Rutland
  0 siblings, 0 replies; 110+ messages in thread
From: Mark Rutland @ 2018-05-24 11:27 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, May 22, 2018 at 04:06:38PM +0100, Marc Zyngier wrote:
> As for Spectre variant-2, we rely on SMCCC 1.1 to provide the
> discovery mechanism for detecting the SSBD mitigation.
> 
> A new capability is also allocated for that purpose, and a
> config option.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

Reviewed-by: Mark Rutland <mark.rutland@arm.com>

[...]

> +static void do_ssbd(bool state)
> +{
> +	switch (psci_ops.conduit) {
> +	case PSCI_CONDUIT_HVC:
> +		arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_WORKAROUND_2, state, NULL);
> +		break;
> +
> +	case PSCI_CONDUIT_SMC:
> +		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2, state, NULL);
> +		break;
> +
> +	default:
> +		WARN_ON_ONCE(1);
> +		break;
> +	}
> +}
> +
> +static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
> +				    int scope)
> +{
> +	struct arm_smccc_res res;
> +	bool supported = true;
> +
> +	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
> +
> +	if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
> +		return false;
> +
> +	/*
> +	 * The probe function return value is either negative
> +	 * (unsupported or mitigated), positive (unaffected), or zero
> +	 * (requires mitigation). We only need to do anything in the
> +	 * last case.
> +	 */
> +	switch (psci_ops.conduit) {
> +	case PSCI_CONDUIT_HVC:
> +		arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
> +				  ARM_SMCCC_ARCH_WORKAROUND_2, &res);
> +		if ((int)res.a0 != 0)
> +			supported = false;
> +		break;
> +
> +	case PSCI_CONDUIT_SMC:
> +		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
> +				  ARM_SMCCC_ARCH_WORKAROUND_2, &res);
> +		if ((int)res.a0 != 0)
> +			supported = false;
> +		break;

Once this is merged, I'll rebase my SMCCCC conduit cleanup atop.

Mark.

^ permalink raw reply	[flat|nested] 110+ messages in thread

* Re: [PATCH 02/14] arm64: Call ARCH_WORKAROUND_2 on transitions between EL0 and EL1
  2018-05-24 11:23       ` Mark Rutland
@ 2018-05-24 11:28         ` Marc Zyngier
  -1 siblings, 0 replies; 110+ messages in thread
From: Marc Zyngier @ 2018-05-24 11:28 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, linux-kernel, kvmarm, Kees Cook,
	Catalin Marinas, Will Deacon, Andy Lutomirski,
	Greg Kroah-Hartman, Thomas Gleixner

On 24/05/18 12:23, Mark Rutland wrote:
> On Thu, May 24, 2018 at 12:00:58PM +0100, Mark Rutland wrote:
>> On Tue, May 22, 2018 at 04:06:36PM +0100, Marc Zyngier wrote:
>>> In order for the kernel to protect itself, let's call the SSBD mitigation
>>> implemented by the higher exception level (either hypervisor or firmware)
>>> on each transition between userspace and kernel.
>>>
>>> We must take the PSCI conduit into account in order to target the
>>> right exception level, hence the introduction of a runtime patching
>>> callback.
>>>
>>> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
>>> ---
>>>  arch/arm64/kernel/cpu_errata.c | 18 ++++++++++++++++++
>>>  arch/arm64/kernel/entry.S      | 22 ++++++++++++++++++++++
>>>  include/linux/arm-smccc.h      |  5 +++++
>>>  3 files changed, 45 insertions(+)
>>>
>>> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
>>> index a900befadfe8..46b3aafb631a 100644
>>> --- a/arch/arm64/kernel/cpu_errata.c
>>> +++ b/arch/arm64/kernel/cpu_errata.c
>>> @@ -232,6 +232,24 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
>>>  }
>>>  #endif	/* CONFIG_HARDEN_BRANCH_PREDICTOR */
>>>  
>>> +#ifdef CONFIG_ARM64_SSBD
>>> +void __init arm64_update_smccc_conduit(struct alt_instr *alt,
>>> +				       __le32 *origptr, __le32 *updptr,
>>> +				       int nr_inst)
>>> +{
>>> +	u32 insn;
>>> +
>>> +	BUG_ON(nr_inst != 1);
>>> +
>>> +	if (psci_ops.conduit == PSCI_CONDUIT_HVC)
>>> +		insn = aarch64_insn_get_hvc_value();
>>> +	else
>>> +		insn = aarch64_insn_get_smc_value();
>>
>> Shouldn't this also handle the case where there is no conduit?
> 
> Due to the config symbol not being defined yet, and various other fixups
> in later patches, this is actually benign.
> 
> However, if you make this:
> 
> 	switch (psci_ops.conduit) {
> 	case PSCI_CONDUIT_NONE:
> 		return;
> 	case PSCI_CONDUIT_HVC:
> 		insn = aarch64_insn_get_hvc_value();
> 		break;
> 	case PSCI_CONDUIT_SMC:
> 		insn = aarch64_insn_get_smc_value();
> 		break;
> 	}
> 
> ... then we won't even bother patching the nop in the default case
> regardless, which is nicer, IMO.

Yup, looks better to me too. I'll fold that in.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 110+ messages in thread

* [PATCH 02/14] arm64: Call ARCH_WORKAROUND_2 on transitions between EL0 and EL1
@ 2018-05-24 11:28         ` Marc Zyngier
  0 siblings, 0 replies; 110+ messages in thread
From: Marc Zyngier @ 2018-05-24 11:28 UTC (permalink / raw)
  To: linux-arm-kernel

On 24/05/18 12:23, Mark Rutland wrote:
> On Thu, May 24, 2018 at 12:00:58PM +0100, Mark Rutland wrote:
>> On Tue, May 22, 2018 at 04:06:36PM +0100, Marc Zyngier wrote:
>>> In order for the kernel to protect itself, let's call the SSBD mitigation
>>> implemented by the higher exception level (either hypervisor or firmware)
>>> on each transition between userspace and kernel.
>>>
>>> We must take the PSCI conduit into account in order to target the
>>> right exception level, hence the introduction of a runtime patching
>>> callback.
>>>
>>> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
>>> ---
>>>  arch/arm64/kernel/cpu_errata.c | 18 ++++++++++++++++++
>>>  arch/arm64/kernel/entry.S      | 22 ++++++++++++++++++++++
>>>  include/linux/arm-smccc.h      |  5 +++++
>>>  3 files changed, 45 insertions(+)
>>>
>>> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
>>> index a900befadfe8..46b3aafb631a 100644
>>> --- a/arch/arm64/kernel/cpu_errata.c
>>> +++ b/arch/arm64/kernel/cpu_errata.c
>>> @@ -232,6 +232,24 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
>>>  }
>>>  #endif	/* CONFIG_HARDEN_BRANCH_PREDICTOR */
>>>  
>>> +#ifdef CONFIG_ARM64_SSBD
>>> +void __init arm64_update_smccc_conduit(struct alt_instr *alt,
>>> +				       __le32 *origptr, __le32 *updptr,
>>> +				       int nr_inst)
>>> +{
>>> +	u32 insn;
>>> +
>>> +	BUG_ON(nr_inst != 1);
>>> +
>>> +	if (psci_ops.conduit == PSCI_CONDUIT_HVC)
>>> +		insn = aarch64_insn_get_hvc_value();
>>> +	else
>>> +		insn = aarch64_insn_get_smc_value();
>>
>> Shouldn't this also handle the case where there is no conduit?
> 
> Due to the config symbol not being defined yet, and various other fixups
> in later patches, this is actually benign.
> 
> However, if you make this:
> 
> 	switch (psci_ops.conduit) {
> 	case PSCI_CONDUIT_NONE:
> 		return;
> 	case PSCI_CONDUIT_HVC:
> 		insn = aarch64_insn_get_hvc_value();
> 		break;
> 	case PSCI_CONDUIT_SMC:
> 		insn = aarch64_insn_get_smc_value();
> 		break;
> 	}
> 
> ... then we won't even bother patching the nop in the default case
> regardless, which is nicer, IMO.

Yup, looks better to me too. I'll fold that in.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 110+ messages in thread

* Re: [PATCH 04/14] arm64: Add ARCH_WORKAROUND_2 probing
  2018-05-24  9:58     ` Suzuki K Poulose
@ 2018-05-24 11:39       ` Will Deacon
  -1 siblings, 0 replies; 110+ messages in thread
From: Will Deacon @ 2018-05-24 11:39 UTC (permalink / raw)
  To: Suzuki K Poulose
  Cc: Marc Zyngier, linux-arm-kernel, linux-kernel, kvmarm, Kees Cook,
	Catalin Marinas, Andy Lutomirski, Greg Kroah-Hartman,
	Thomas Gleixner

On Thu, May 24, 2018 at 10:58:43AM +0100, Suzuki K Poulose wrote:
> On 22/05/18 16:06, Marc Zyngier wrote:
> >As for Spectre variant-2, we rely on SMCCC 1.1 to provide the
> >discovery mechanism for detecting the SSBD mitigation.
> >
> >A new capability is also allocated for that purpose, and a
> >config option.
> >
> >Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> 
> 
> >+static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
> >+				    int scope)
> >+{
> >+	struct arm_smccc_res res;
> >+	bool supported = true;
> >+
> >+	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
> >+
> >+	if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
> >+		return false;
> >+
> >+	/*
> >+	 * The probe function return value is either negative
> >+	 * (unsupported or mitigated), positive (unaffected), or zero
> >+	 * (requires mitigation). We only need to do anything in the
> >+	 * last case.
> >+	 */
> >+	switch (psci_ops.conduit) {
> >+	case PSCI_CONDUIT_HVC:
> >+		arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
> >+				  ARM_SMCCC_ARCH_WORKAROUND_2, &res);
> >+		if ((int)res.a0 != 0)
> >+			supported = false;
> >+		break;
> >+
> >+	case PSCI_CONDUIT_SMC:
> >+		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
> >+				  ARM_SMCCC_ARCH_WORKAROUND_2, &res);
> >+		if ((int)res.a0 != 0)
> >+			supported = false;
> >+		break;
> >+
> >+	default:
> >+		supported = false;
> >+	}
> >+
> >+	if (supported) {
> >+		__this_cpu_write(arm64_ssbd_callback_required, 1);
> >+		do_ssbd(true);
> >+	}
> 
> 
> Marc,
> 
> As discussed, we have minor issue with the "corner case". If a CPU
> is hotplugged in which requires the mitigation, after the system has
> finalised the cap to "not available", the CPU could go ahead and
> do the "work around" as above, while not effectively doing anything
> about it at runtime for KVM guests (as thats the only place where
> we rely on the CAP being set).
> 
> But, yes this is real corner case. There is no easy way to solve it
> other than
> 
> 1) Allow late modifications to CPU hwcaps
> 
> OR
> 
> 2) Penalise the fastpath to always check per-cpu setting.

Shouldn't we just avoid bring up CPUs that require the mitigation after
we've finalised the capability to say that it's not required? Assuming this
is just another issue with maxcpus=, then I don't much care for it.

Will

^ permalink raw reply	[flat|nested] 110+ messages in thread

* [PATCH 04/14] arm64: Add ARCH_WORKAROUND_2 probing
@ 2018-05-24 11:39       ` Will Deacon
  0 siblings, 0 replies; 110+ messages in thread
From: Will Deacon @ 2018-05-24 11:39 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, May 24, 2018 at 10:58:43AM +0100, Suzuki K Poulose wrote:
> On 22/05/18 16:06, Marc Zyngier wrote:
> >As for Spectre variant-2, we rely on SMCCC 1.1 to provide the
> >discovery mechanism for detecting the SSBD mitigation.
> >
> >A new capability is also allocated for that purpose, and a
> >config option.
> >
> >Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> 
> 
> >+static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
> >+				    int scope)
> >+{
> >+	struct arm_smccc_res res;
> >+	bool supported = true;
> >+
> >+	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
> >+
> >+	if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
> >+		return false;
> >+
> >+	/*
> >+	 * The probe function return value is either negative
> >+	 * (unsupported or mitigated), positive (unaffected), or zero
> >+	 * (requires mitigation). We only need to do anything in the
> >+	 * last case.
> >+	 */
> >+	switch (psci_ops.conduit) {
> >+	case PSCI_CONDUIT_HVC:
> >+		arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
> >+				  ARM_SMCCC_ARCH_WORKAROUND_2, &res);
> >+		if ((int)res.a0 != 0)
> >+			supported = false;
> >+		break;
> >+
> >+	case PSCI_CONDUIT_SMC:
> >+		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
> >+				  ARM_SMCCC_ARCH_WORKAROUND_2, &res);
> >+		if ((int)res.a0 != 0)
> >+			supported = false;
> >+		break;
> >+
> >+	default:
> >+		supported = false;
> >+	}
> >+
> >+	if (supported) {
> >+		__this_cpu_write(arm64_ssbd_callback_required, 1);
> >+		do_ssbd(true);
> >+	}
> 
> 
> Marc,
> 
> As discussed, we have minor issue with the "corner case". If a CPU
> is hotplugged in which requires the mitigation, after the system has
> finalised the cap to "not available", the CPU could go ahead and
> do the "work around" as above, while not effectively doing anything
> about it at runtime for KVM guests (as thats the only place where
> we rely on the CAP being set).
> 
> But, yes this is real corner case. There is no easy way to solve it
> other than
> 
> 1) Allow late modifications to CPU hwcaps
> 
> OR
> 
> 2) Penalise the fastpath to always check per-cpu setting.

Shouldn't we just avoid bring up CPUs that require the mitigation after
we've finalised the capability to say that it's not required? Assuming this
is just another issue with maxcpus=, then I don't much care for it.

Will

^ permalink raw reply	[flat|nested] 110+ messages in thread

* Re: [PATCH 05/14] arm64: Add 'ssbd' command-line option
  2018-05-22 15:06   ` Marc Zyngier
@ 2018-05-24 11:40     ` Mark Rutland
  -1 siblings, 0 replies; 110+ messages in thread
From: Mark Rutland @ 2018-05-24 11:40 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: linux-arm-kernel, linux-kernel, kvmarm, Will Deacon,
	Catalin Marinas, Thomas Gleixner, Andy Lutomirski, Kees Cook,
	Greg Kroah-Hartman, Christoffer Dall

On Tue, May 22, 2018 at 04:06:39PM +0100, Marc Zyngier wrote:
> On a system where the firmware implements ARCH_WORKAROUND_2,
> it may be useful to either permanently enable or disable the
> workaround for cases where the user decides that they'd rather
> not get a trap overhead, and keep the mitigation permanently
> on or off instead of switching it on exception entry/exit.
> 
> In any case, default to the mitigation being enabled.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
>  Documentation/admin-guide/kernel-parameters.txt |  17 ++++
>  arch/arm64/include/asm/cpufeature.h             |   6 ++
>  arch/arm64/kernel/cpu_errata.c                  | 102 ++++++++++++++++++++----
>  3 files changed, 109 insertions(+), 16 deletions(-)
> 
> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> index f2040d46f095..646e112c6f63 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -4092,6 +4092,23 @@
>  			expediting.  Set to zero to disable automatic
>  			expediting.
>  
> +	ssbd=		[ARM64,HW]
> +			Speculative Store Bypass Disable control
> +
> +			On CPUs that are vulnerable to the Speculative
> +			Store Bypass vulnerability and offer a
> +			firmware based mitigation, this parameter
> +			indicates how the mitigation should be used:
> +
> +			force-on:  Unconditionnaly enable mitigation for
> +				   for both kernel and userspace
> +			force-off: Unconditionnaly disable mitigation for
> +				   for both kernel and userspace
> +			kernel:    Always enable mitigation in the
> +				   kernel, and offer a prctl interface
> +				   to allow userspace to register its
> +				   interest in being mitigated too.
> +
>  	stack_guard_gap=	[MM]
>  			override the default stack gap protection. The value
>  			is in page units and it defines how many pages prior
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index 09b0f2a80c8f..9bc548e22784 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -537,6 +537,12 @@ static inline u64 read_zcr_features(void)
>  	return zcr;
>  }
>  
> +#define ARM64_SSBD_UNKNOWN		-1
> +#define ARM64_SSBD_FORCE_DISABLE	0
> +#define ARM64_SSBD_EL1_ENTRY		1

The EL1_ENTRY part of the name is a bit misleading, since this doesn't
apply to EL1->EL1 exceptions (and as with many other bits of the arm64
code, it's arguably misleading in the VHE case).

Perhaps ARM64_SSBD_KERNEL, which would align with the parameter name?

Not a big deal either way, and otherwise this looks good to me.
Regardless:

Reviewed-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> +#define ARM64_SSBD_FORCE_ENABLE		2
> +#define ARM64_SSBD_MITIGATED		3
> +
>  #endif /* __ASSEMBLY__ */
>  
>  #endif
> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
> index 7fd6d5b001f5..f1d4e75b0ddd 100644
> --- a/arch/arm64/kernel/cpu_errata.c
> +++ b/arch/arm64/kernel/cpu_errata.c
> @@ -235,6 +235,38 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
>  #ifdef CONFIG_ARM64_SSBD
>  DEFINE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
>  
> +int ssbd_state __read_mostly = ARM64_SSBD_EL1_ENTRY;
> +
> +static const struct ssbd_options {
> +	const char	*str;
> +	int		state;
> +} ssbd_options[] = {
> +	{ "force-on",	ARM64_SSBD_FORCE_ENABLE, },
> +	{ "force-off",	ARM64_SSBD_FORCE_DISABLE, },
> +	{ "kernel",	ARM64_SSBD_EL1_ENTRY, },
> +};
> +
> +static int __init ssbd_cfg(char *buf)
> +{
> +	int i;
> +
> +	if (!buf || !buf[0])
> +		return -EINVAL;
> +
> +	for (i = 0; i < ARRAY_SIZE(ssbd_options); i++) {
> +		int len = strlen(ssbd_options[i].str);
> +
> +		if (strncmp(buf, ssbd_options[i].str, len))
> +			continue;
> +
> +		ssbd_state = ssbd_options[i].state;
> +		return 0;
> +	}
> +
> +	return -EINVAL;
> +}
> +early_param("ssbd", ssbd_cfg);
> +
>  void __init arm64_update_smccc_conduit(struct alt_instr *alt,
>  				       __le32 *origptr, __le32 *updptr,
>  				       int nr_inst)
> @@ -272,44 +304,82 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
>  				    int scope)
>  {
>  	struct arm_smccc_res res;
> -	bool supported = true;
> +	bool required = true;
> +	s32 val;
>  
>  	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
>  
> -	if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
> +	if (psci_ops.smccc_version == SMCCC_VERSION_1_0) {
> +		ssbd_state = ARM64_SSBD_UNKNOWN;
>  		return false;
> +	}
>  
> -	/*
> -	 * The probe function return value is either negative
> -	 * (unsupported or mitigated), positive (unaffected), or zero
> -	 * (requires mitigation). We only need to do anything in the
> -	 * last case.
> -	 */
>  	switch (psci_ops.conduit) {
>  	case PSCI_CONDUIT_HVC:
>  		arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
>  				  ARM_SMCCC_ARCH_WORKAROUND_2, &res);
> -		if ((int)res.a0 != 0)
> -			supported = false;
>  		break;
>  
>  	case PSCI_CONDUIT_SMC:
>  		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
>  				  ARM_SMCCC_ARCH_WORKAROUND_2, &res);
> -		if ((int)res.a0 != 0)
> -			supported = false;
>  		break;
>  
>  	default:
> -		supported = false;
> +		ssbd_state = ARM64_SSBD_UNKNOWN;
> +		return false;
>  	}
>  
> -	if (supported) {
> -		__this_cpu_write(arm64_ssbd_callback_required, 1);
> +	val = (s32)res.a0;
> +
> +	switch (val) {
> +	case SMCCC_RET_NOT_SUPPORTED:
> +		ssbd_state = ARM64_SSBD_UNKNOWN;
> +		return false;
> +
> +	case SMCCC_RET_NOT_REQUIRED:
> +		ssbd_state = ARM64_SSBD_MITIGATED;
> +		return false;
> +
> +	case SMCCC_RET_SUCCESS:
> +		required = true;
> +		break;
> +
> +	case 1:	/* Mitigation not required on this CPU */
> +		required = false;
> +		break;
> +
> +	default:
> +		WARN_ON(1);
> +		return false;
> +	}
> +
> +	switch (ssbd_state) {
> +	case ARM64_SSBD_FORCE_DISABLE:
> +		pr_info_once("%s disabled from command-line\n", entry->desc);
> +		do_ssbd(false);
> +		required = false;
> +		break;
> +
> +	case ARM64_SSBD_EL1_ENTRY:
> +		if (required) {
> +			__this_cpu_write(arm64_ssbd_callback_required, 1);
> +			do_ssbd(true);
> +		}
> +		break;
> +
> +	case ARM64_SSBD_FORCE_ENABLE:
> +		pr_info_once("%s forced from command-line\n", entry->desc);
>  		do_ssbd(true);
> +		required = true;
> +		break;
> +
> +	default:
> +		WARN_ON(1);
> +		break;
>  	}
>  
> -	return supported;
> +	return required;
>  }
>  #endif	/* CONFIG_ARM64_SSBD */
>  
> -- 
> 2.14.2
> 

^ permalink raw reply	[flat|nested] 110+ messages in thread

* [PATCH 05/14] arm64: Add 'ssbd' command-line option
@ 2018-05-24 11:40     ` Mark Rutland
  0 siblings, 0 replies; 110+ messages in thread
From: Mark Rutland @ 2018-05-24 11:40 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, May 22, 2018 at 04:06:39PM +0100, Marc Zyngier wrote:
> On a system where the firmware implements ARCH_WORKAROUND_2,
> it may be useful to either permanently enable or disable the
> workaround for cases where the user decides that they'd rather
> not get a trap overhead, and keep the mitigation permanently
> on or off instead of switching it on exception entry/exit.
> 
> In any case, default to the mitigation being enabled.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
>  Documentation/admin-guide/kernel-parameters.txt |  17 ++++
>  arch/arm64/include/asm/cpufeature.h             |   6 ++
>  arch/arm64/kernel/cpu_errata.c                  | 102 ++++++++++++++++++++----
>  3 files changed, 109 insertions(+), 16 deletions(-)
> 
> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> index f2040d46f095..646e112c6f63 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -4092,6 +4092,23 @@
>  			expediting.  Set to zero to disable automatic
>  			expediting.
>  
> +	ssbd=		[ARM64,HW]
> +			Speculative Store Bypass Disable control
> +
> +			On CPUs that are vulnerable to the Speculative
> +			Store Bypass vulnerability and offer a
> +			firmware based mitigation, this parameter
> +			indicates how the mitigation should be used:
> +
> +			force-on:  Unconditionnaly enable mitigation for
> +				   for both kernel and userspace
> +			force-off: Unconditionnaly disable mitigation for
> +				   for both kernel and userspace
> +			kernel:    Always enable mitigation in the
> +				   kernel, and offer a prctl interface
> +				   to allow userspace to register its
> +				   interest in being mitigated too.
> +
>  	stack_guard_gap=	[MM]
>  			override the default stack gap protection. The value
>  			is in page units and it defines how many pages prior
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index 09b0f2a80c8f..9bc548e22784 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -537,6 +537,12 @@ static inline u64 read_zcr_features(void)
>  	return zcr;
>  }
>  
> +#define ARM64_SSBD_UNKNOWN		-1
> +#define ARM64_SSBD_FORCE_DISABLE	0
> +#define ARM64_SSBD_EL1_ENTRY		1

The EL1_ENTRY part of the name is a bit misleading, since this doesn't
apply to EL1->EL1 exceptions (and as with many other bits of the arm64
code, it's arguably misleading in the VHE case).

Perhaps ARM64_SSBD_KERNEL, which would align with the parameter name?

Not a big deal either way, and otherwise this looks good to me.
Regardless:

Reviewed-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> +#define ARM64_SSBD_FORCE_ENABLE		2
> +#define ARM64_SSBD_MITIGATED		3
> +
>  #endif /* __ASSEMBLY__ */
>  
>  #endif
> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
> index 7fd6d5b001f5..f1d4e75b0ddd 100644
> --- a/arch/arm64/kernel/cpu_errata.c
> +++ b/arch/arm64/kernel/cpu_errata.c
> @@ -235,6 +235,38 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
>  #ifdef CONFIG_ARM64_SSBD
>  DEFINE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
>  
> +int ssbd_state __read_mostly = ARM64_SSBD_EL1_ENTRY;
> +
> +static const struct ssbd_options {
> +	const char	*str;
> +	int		state;
> +} ssbd_options[] = {
> +	{ "force-on",	ARM64_SSBD_FORCE_ENABLE, },
> +	{ "force-off",	ARM64_SSBD_FORCE_DISABLE, },
> +	{ "kernel",	ARM64_SSBD_EL1_ENTRY, },
> +};
> +
> +static int __init ssbd_cfg(char *buf)
> +{
> +	int i;
> +
> +	if (!buf || !buf[0])
> +		return -EINVAL;
> +
> +	for (i = 0; i < ARRAY_SIZE(ssbd_options); i++) {
> +		int len = strlen(ssbd_options[i].str);
> +
> +		if (strncmp(buf, ssbd_options[i].str, len))
> +			continue;
> +
> +		ssbd_state = ssbd_options[i].state;
> +		return 0;
> +	}
> +
> +	return -EINVAL;
> +}
> +early_param("ssbd", ssbd_cfg);
> +
>  void __init arm64_update_smccc_conduit(struct alt_instr *alt,
>  				       __le32 *origptr, __le32 *updptr,
>  				       int nr_inst)
> @@ -272,44 +304,82 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
>  				    int scope)
>  {
>  	struct arm_smccc_res res;
> -	bool supported = true;
> +	bool required = true;
> +	s32 val;
>  
>  	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
>  
> -	if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
> +	if (psci_ops.smccc_version == SMCCC_VERSION_1_0) {
> +		ssbd_state = ARM64_SSBD_UNKNOWN;
>  		return false;
> +	}
>  
> -	/*
> -	 * The probe function return value is either negative
> -	 * (unsupported or mitigated), positive (unaffected), or zero
> -	 * (requires mitigation). We only need to do anything in the
> -	 * last case.
> -	 */
>  	switch (psci_ops.conduit) {
>  	case PSCI_CONDUIT_HVC:
>  		arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
>  				  ARM_SMCCC_ARCH_WORKAROUND_2, &res);
> -		if ((int)res.a0 != 0)
> -			supported = false;
>  		break;
>  
>  	case PSCI_CONDUIT_SMC:
>  		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
>  				  ARM_SMCCC_ARCH_WORKAROUND_2, &res);
> -		if ((int)res.a0 != 0)
> -			supported = false;
>  		break;
>  
>  	default:
> -		supported = false;
> +		ssbd_state = ARM64_SSBD_UNKNOWN;
> +		return false;
>  	}
>  
> -	if (supported) {
> -		__this_cpu_write(arm64_ssbd_callback_required, 1);
> +	val = (s32)res.a0;
> +
> +	switch (val) {
> +	case SMCCC_RET_NOT_SUPPORTED:
> +		ssbd_state = ARM64_SSBD_UNKNOWN;
> +		return false;
> +
> +	case SMCCC_RET_NOT_REQUIRED:
> +		ssbd_state = ARM64_SSBD_MITIGATED;
> +		return false;
> +
> +	case SMCCC_RET_SUCCESS:
> +		required = true;
> +		break;
> +
> +	case 1:	/* Mitigation not required on this CPU */
> +		required = false;
> +		break;
> +
> +	default:
> +		WARN_ON(1);
> +		return false;
> +	}
> +
> +	switch (ssbd_state) {
> +	case ARM64_SSBD_FORCE_DISABLE:
> +		pr_info_once("%s disabled from command-line\n", entry->desc);
> +		do_ssbd(false);
> +		required = false;
> +		break;
> +
> +	case ARM64_SSBD_EL1_ENTRY:
> +		if (required) {
> +			__this_cpu_write(arm64_ssbd_callback_required, 1);
> +			do_ssbd(true);
> +		}
> +		break;
> +
> +	case ARM64_SSBD_FORCE_ENABLE:
> +		pr_info_once("%s forced from command-line\n", entry->desc);
>  		do_ssbd(true);
> +		required = true;
> +		break;
> +
> +	default:
> +		WARN_ON(1);
> +		break;
>  	}
>  
> -	return supported;
> +	return required;
>  }
>  #endif	/* CONFIG_ARM64_SSBD */
>  
> -- 
> 2.14.2
> 

^ permalink raw reply	[flat|nested] 110+ messages in thread

* Re: [PATCH 06/14] arm64: ssbd: Add global mitigation state accessor
  2018-05-22 15:06   ` Marc Zyngier
@ 2018-05-24 11:41     ` Mark Rutland
  -1 siblings, 0 replies; 110+ messages in thread
From: Mark Rutland @ 2018-05-24 11:41 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: linux-arm-kernel, linux-kernel, kvmarm, Will Deacon,
	Catalin Marinas, Thomas Gleixner, Andy Lutomirski, Kees Cook,
	Greg Kroah-Hartman, Christoffer Dall

On Tue, May 22, 2018 at 04:06:40PM +0100, Marc Zyngier wrote:
> We're about to need the mitigation state in various parts of the
> kernel in order to do the right thing for userspace and guests.
> 
> Let's expose an accessor that will let other subsystems know
> about the state.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

Reviewed-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> ---
>  arch/arm64/include/asm/cpufeature.h | 10 ++++++++++
>  1 file changed, 10 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index 9bc548e22784..1bacdf57f0af 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -543,6 +543,16 @@ static inline u64 read_zcr_features(void)
>  #define ARM64_SSBD_FORCE_ENABLE		2
>  #define ARM64_SSBD_MITIGATED		3
>  
> +static inline int arm64_get_ssbd_state(void)
> +{
> +#ifdef CONFIG_ARM64_SSBD
> +	extern int ssbd_state;
> +	return ssbd_state;
> +#else
> +	return ARM64_SSBD_UNKNOWN;
> +#endif
> +}
> +
>  #endif /* __ASSEMBLY__ */
>  
>  #endif
> -- 
> 2.14.2
> 

^ permalink raw reply	[flat|nested] 110+ messages in thread

* [PATCH 06/14] arm64: ssbd: Add global mitigation state accessor
@ 2018-05-24 11:41     ` Mark Rutland
  0 siblings, 0 replies; 110+ messages in thread
From: Mark Rutland @ 2018-05-24 11:41 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, May 22, 2018 at 04:06:40PM +0100, Marc Zyngier wrote:
> We're about to need the mitigation state in various parts of the
> kernel in order to do the right thing for userspace and guests.
> 
> Let's expose an accessor that will let other subsystems know
> about the state.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

Reviewed-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> ---
>  arch/arm64/include/asm/cpufeature.h | 10 ++++++++++
>  1 file changed, 10 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index 9bc548e22784..1bacdf57f0af 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -543,6 +543,16 @@ static inline u64 read_zcr_features(void)
>  #define ARM64_SSBD_FORCE_ENABLE		2
>  #define ARM64_SSBD_MITIGATED		3
>  
> +static inline int arm64_get_ssbd_state(void)
> +{
> +#ifdef CONFIG_ARM64_SSBD
> +	extern int ssbd_state;
> +	return ssbd_state;
> +#else
> +	return ARM64_SSBD_UNKNOWN;
> +#endif
> +}
> +
>  #endif /* __ASSEMBLY__ */
>  
>  #endif
> -- 
> 2.14.2
> 

^ permalink raw reply	[flat|nested] 110+ messages in thread

* Re: [PATCH 07/14] arm64: ssbd: Skip apply_ssbd if not using dynamic mitigation
  2018-05-22 15:06   ` Marc Zyngier
@ 2018-05-24 11:43     ` Mark Rutland
  -1 siblings, 0 replies; 110+ messages in thread
From: Mark Rutland @ 2018-05-24 11:43 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: linux-arm-kernel, linux-kernel, kvmarm, Will Deacon,
	Catalin Marinas, Thomas Gleixner, Andy Lutomirski, Kees Cook,
	Greg Kroah-Hartman, Christoffer Dall

On Tue, May 22, 2018 at 04:06:41PM +0100, Marc Zyngier wrote:
> In order to avoid checking arm64_ssbd_callback_required on each
> kernel entry/exit even if no mitigation is required, let's
> add yet another alternative that by default jumps over the mitigation,
> and that gets nop'ed out if we're doing dynamic mitigation.
> 
> Think of it as a poor man's static key...

I guess in future we can magic up a more general asm static key if we
need them elsewhere.

> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

Reviewed-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> ---
>  arch/arm64/kernel/cpu_errata.c | 14 ++++++++++++++
>  arch/arm64/kernel/entry.S      |  3 +++
>  2 files changed, 17 insertions(+)
> 
> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
> index f1d4e75b0ddd..8f686f39b9c1 100644
> --- a/arch/arm64/kernel/cpu_errata.c
> +++ b/arch/arm64/kernel/cpu_errata.c
> @@ -283,6 +283,20 @@ void __init arm64_update_smccc_conduit(struct alt_instr *alt,
>  	*updptr = cpu_to_le32(insn);
>  }
>  
> +void __init arm64_enable_wa2_handling(struct alt_instr *alt,
> +				      __le32 *origptr, __le32 *updptr,
> +				      int nr_inst)
> +{
> +	BUG_ON(nr_inst != 1);
> +	/*
> +	 * Only allow mitigation on EL1 entry/exit and guest
> +	 * ARCH_WORKAROUND_2 handling if the SSBD state allows it to
> +	 * be flipped.
> +	 */
> +	if (arm64_get_ssbd_state() == ARM64_SSBD_EL1_ENTRY)
> +		*updptr = cpu_to_le32(aarch64_insn_gen_nop());
> +}
> +
>  static void do_ssbd(bool state)
>  {
>  	switch (psci_ops.conduit) {
> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index 29ad672a6abd..e6f6e2339b22 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -142,6 +142,9 @@ alternative_else_nop_endif
>  	// to save/restore them if required.
>  	.macro	apply_ssbd, state, targ, tmp1, tmp2
>  #ifdef CONFIG_ARM64_SSBD
> +alternative_cb	arm64_enable_wa2_handling
> +	b	\targ
> +alternative_cb_end
>  	ldr_this_cpu	\tmp2, arm64_ssbd_callback_required, \tmp1
>  	cbz	\tmp2, \targ
>  	mov	w0, #ARM_SMCCC_ARCH_WORKAROUND_2
> -- 
> 2.14.2
> 

^ permalink raw reply	[flat|nested] 110+ messages in thread

* [PATCH 07/14] arm64: ssbd: Skip apply_ssbd if not using dynamic mitigation
@ 2018-05-24 11:43     ` Mark Rutland
  0 siblings, 0 replies; 110+ messages in thread
From: Mark Rutland @ 2018-05-24 11:43 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, May 22, 2018 at 04:06:41PM +0100, Marc Zyngier wrote:
> In order to avoid checking arm64_ssbd_callback_required on each
> kernel entry/exit even if no mitigation is required, let's
> add yet another alternative that by default jumps over the mitigation,
> and that gets nop'ed out if we're doing dynamic mitigation.
> 
> Think of it as a poor man's static key...

I guess in future we can magic up a more general asm static key if we
need them elsewhere.

> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

Reviewed-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> ---
>  arch/arm64/kernel/cpu_errata.c | 14 ++++++++++++++
>  arch/arm64/kernel/entry.S      |  3 +++
>  2 files changed, 17 insertions(+)
> 
> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
> index f1d4e75b0ddd..8f686f39b9c1 100644
> --- a/arch/arm64/kernel/cpu_errata.c
> +++ b/arch/arm64/kernel/cpu_errata.c
> @@ -283,6 +283,20 @@ void __init arm64_update_smccc_conduit(struct alt_instr *alt,
>  	*updptr = cpu_to_le32(insn);
>  }
>  
> +void __init arm64_enable_wa2_handling(struct alt_instr *alt,
> +				      __le32 *origptr, __le32 *updptr,
> +				      int nr_inst)
> +{
> +	BUG_ON(nr_inst != 1);
> +	/*
> +	 * Only allow mitigation on EL1 entry/exit and guest
> +	 * ARCH_WORKAROUND_2 handling if the SSBD state allows it to
> +	 * be flipped.
> +	 */
> +	if (arm64_get_ssbd_state() == ARM64_SSBD_EL1_ENTRY)
> +		*updptr = cpu_to_le32(aarch64_insn_gen_nop());
> +}
> +
>  static void do_ssbd(bool state)
>  {
>  	switch (psci_ops.conduit) {
> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index 29ad672a6abd..e6f6e2339b22 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -142,6 +142,9 @@ alternative_else_nop_endif
>  	// to save/restore them if required.
>  	.macro	apply_ssbd, state, targ, tmp1, tmp2
>  #ifdef CONFIG_ARM64_SSBD
> +alternative_cb	arm64_enable_wa2_handling
> +	b	\targ
> +alternative_cb_end
>  	ldr_this_cpu	\tmp2, arm64_ssbd_callback_required, \tmp1
>  	cbz	\tmp2, \targ
>  	mov	w0, #ARM_SMCCC_ARCH_WORKAROUND_2
> -- 
> 2.14.2
> 

^ permalink raw reply	[flat|nested] 110+ messages in thread

* Re: [PATCH 05/14] arm64: Add 'ssbd' command-line option
  2018-05-24 11:40     ` Mark Rutland
@ 2018-05-24 11:52       ` Marc Zyngier
  -1 siblings, 0 replies; 110+ messages in thread
From: Marc Zyngier @ 2018-05-24 11:52 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, linux-kernel, kvmarm, Will Deacon,
	Catalin Marinas, Thomas Gleixner, Andy Lutomirski, Kees Cook,
	Greg Kroah-Hartman, Christoffer Dall

On 24/05/18 12:40, Mark Rutland wrote:
> On Tue, May 22, 2018 at 04:06:39PM +0100, Marc Zyngier wrote:
>> On a system where the firmware implements ARCH_WORKAROUND_2,
>> it may be useful to either permanently enable or disable the
>> workaround for cases where the user decides that they'd rather
>> not get a trap overhead, and keep the mitigation permanently
>> on or off instead of switching it on exception entry/exit.
>>
>> In any case, default to the mitigation being enabled.
>>
>> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
>> ---
>>  Documentation/admin-guide/kernel-parameters.txt |  17 ++++
>>  arch/arm64/include/asm/cpufeature.h             |   6 ++
>>  arch/arm64/kernel/cpu_errata.c                  | 102 ++++++++++++++++++++----
>>  3 files changed, 109 insertions(+), 16 deletions(-)
>>
>> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
>> index f2040d46f095..646e112c6f63 100644
>> --- a/Documentation/admin-guide/kernel-parameters.txt
>> +++ b/Documentation/admin-guide/kernel-parameters.txt
>> @@ -4092,6 +4092,23 @@
>>  			expediting.  Set to zero to disable automatic
>>  			expediting.
>>  
>> +	ssbd=		[ARM64,HW]
>> +			Speculative Store Bypass Disable control
>> +
>> +			On CPUs that are vulnerable to the Speculative
>> +			Store Bypass vulnerability and offer a
>> +			firmware based mitigation, this parameter
>> +			indicates how the mitigation should be used:
>> +
>> +			force-on:  Unconditionnaly enable mitigation for
>> +				   for both kernel and userspace
>> +			force-off: Unconditionnaly disable mitigation for
>> +				   for both kernel and userspace
>> +			kernel:    Always enable mitigation in the
>> +				   kernel, and offer a prctl interface
>> +				   to allow userspace to register its
>> +				   interest in being mitigated too.
>> +
>>  	stack_guard_gap=	[MM]
>>  			override the default stack gap protection. The value
>>  			is in page units and it defines how many pages prior
>> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
>> index 09b0f2a80c8f..9bc548e22784 100644
>> --- a/arch/arm64/include/asm/cpufeature.h
>> +++ b/arch/arm64/include/asm/cpufeature.h
>> @@ -537,6 +537,12 @@ static inline u64 read_zcr_features(void)
>>  	return zcr;
>>  }
>>  
>> +#define ARM64_SSBD_UNKNOWN		-1
>> +#define ARM64_SSBD_FORCE_DISABLE	0
>> +#define ARM64_SSBD_EL1_ENTRY		1
> 
> The EL1_ENTRY part of the name is a bit misleading, since this doesn't
> apply to EL1->EL1 exceptions (and as with many other bits of the arm64
> code, it's arguably misleading in the VHE case).
> 
> Perhaps ARM64_SSBD_KERNEL, which would align with the parameter name?

I was just waiting for someone to sort out the naming for me, thanks for
falling into that trap! ;-)

I'll update that.

> Not a big deal either way, and otherwise this looks good to me.
> Regardless:
> 
> Reviewed-by: Mark Rutland <mark.rutland@arm.com>

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 110+ messages in thread

* [PATCH 05/14] arm64: Add 'ssbd' command-line option
@ 2018-05-24 11:52       ` Marc Zyngier
  0 siblings, 0 replies; 110+ messages in thread
From: Marc Zyngier @ 2018-05-24 11:52 UTC (permalink / raw)
  To: linux-arm-kernel

On 24/05/18 12:40, Mark Rutland wrote:
> On Tue, May 22, 2018 at 04:06:39PM +0100, Marc Zyngier wrote:
>> On a system where the firmware implements ARCH_WORKAROUND_2,
>> it may be useful to either permanently enable or disable the
>> workaround for cases where the user decides that they'd rather
>> not get a trap overhead, and keep the mitigation permanently
>> on or off instead of switching it on exception entry/exit.
>>
>> In any case, default to the mitigation being enabled.
>>
>> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
>> ---
>>  Documentation/admin-guide/kernel-parameters.txt |  17 ++++
>>  arch/arm64/include/asm/cpufeature.h             |   6 ++
>>  arch/arm64/kernel/cpu_errata.c                  | 102 ++++++++++++++++++++----
>>  3 files changed, 109 insertions(+), 16 deletions(-)
>>
>> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
>> index f2040d46f095..646e112c6f63 100644
>> --- a/Documentation/admin-guide/kernel-parameters.txt
>> +++ b/Documentation/admin-guide/kernel-parameters.txt
>> @@ -4092,6 +4092,23 @@
>>  			expediting.  Set to zero to disable automatic
>>  			expediting.
>>  
>> +	ssbd=		[ARM64,HW]
>> +			Speculative Store Bypass Disable control
>> +
>> +			On CPUs that are vulnerable to the Speculative
>> +			Store Bypass vulnerability and offer a
>> +			firmware based mitigation, this parameter
>> +			indicates how the mitigation should be used:
>> +
>> +			force-on:  Unconditionnaly enable mitigation for
>> +				   for both kernel and userspace
>> +			force-off: Unconditionnaly disable mitigation for
>> +				   for both kernel and userspace
>> +			kernel:    Always enable mitigation in the
>> +				   kernel, and offer a prctl interface
>> +				   to allow userspace to register its
>> +				   interest in being mitigated too.
>> +
>>  	stack_guard_gap=	[MM]
>>  			override the default stack gap protection. The value
>>  			is in page units and it defines how many pages prior
>> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
>> index 09b0f2a80c8f..9bc548e22784 100644
>> --- a/arch/arm64/include/asm/cpufeature.h
>> +++ b/arch/arm64/include/asm/cpufeature.h
>> @@ -537,6 +537,12 @@ static inline u64 read_zcr_features(void)
>>  	return zcr;
>>  }
>>  
>> +#define ARM64_SSBD_UNKNOWN		-1
>> +#define ARM64_SSBD_FORCE_DISABLE	0
>> +#define ARM64_SSBD_EL1_ENTRY		1
> 
> The EL1_ENTRY part of the name is a bit misleading, since this doesn't
> apply to EL1->EL1 exceptions (and as with many other bits of the arm64
> code, it's arguably misleading in the VHE case).
> 
> Perhaps ARM64_SSBD_KERNEL, which would align with the parameter name?

I was just waiting for someone to sort out the naming for me, thanks for
falling into that trap! ;-)

I'll update that.

> Not a big deal either way, and otherwise this looks good to me.
> Regardless:
> 
> Reviewed-by: Mark Rutland <mark.rutland@arm.com>

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 110+ messages in thread

* Re: [PATCH 08/14] arm64: ssbd: Disable mitigation on CPU resume if required by user
  2018-05-22 15:06   ` Marc Zyngier
@ 2018-05-24 11:55     ` Mark Rutland
  -1 siblings, 0 replies; 110+ messages in thread
From: Mark Rutland @ 2018-05-24 11:55 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: linux-arm-kernel, linux-kernel, kvmarm, Kees Cook,
	Catalin Marinas, Will Deacon, Christoffer Dall, Andy Lutomirski,
	Greg Kroah-Hartman, Thomas Gleixner

On Tue, May 22, 2018 at 04:06:42PM +0100, Marc Zyngier wrote:
> On a system where firmware can dynamically change the state of the
> mitigation, the CPU will always come up with the mitigation enabled,
> including when coming back from suspend.
> 
> If the user has requested "no mitigation" via a command line option,
> let's enforce it by calling into the firmware again to disable it.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
>  arch/arm64/include/asm/cpufeature.h | 6 ++++++
>  arch/arm64/kernel/cpu_errata.c      | 8 ++++----
>  arch/arm64/kernel/suspend.c         | 8 ++++++++
>  3 files changed, 18 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index 1bacdf57f0af..d9dcb683259e 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -553,6 +553,12 @@ static inline int arm64_get_ssbd_state(void)
>  #endif
>  }
>  
> +#ifdef CONFIG_ARM64_SSBD
> +void arm64_set_ssbd_mitigation(bool state);
> +#else
> +static inline void arm64_set_ssbd_mitigation(bool state) {}
> +#endif
> +
>  #endif /* __ASSEMBLY__ */
>  
>  #endif
> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
> index 8f686f39b9c1..b4c12e9140f0 100644
> --- a/arch/arm64/kernel/cpu_errata.c
> +++ b/arch/arm64/kernel/cpu_errata.c
> @@ -297,7 +297,7 @@ void __init arm64_enable_wa2_handling(struct alt_instr *alt,
>  		*updptr = cpu_to_le32(aarch64_insn_gen_nop());
>  }
>  
> -static void do_ssbd(bool state)
> +void arm64_set_ssbd_mitigation(bool state)

Using this name from the outset would be nice, if you're happy to fold
that earlier in the seires. Not a big deal either way.

>  {
>  	switch (psci_ops.conduit) {
>  	case PSCI_CONDUIT_HVC:
> @@ -371,20 +371,20 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
>  	switch (ssbd_state) {
>  	case ARM64_SSBD_FORCE_DISABLE:
>  		pr_info_once("%s disabled from command-line\n", entry->desc);
> -		do_ssbd(false);
> +		arm64_set_ssbd_mitigation(false);
>  		required = false;
>  		break;
>  
>  	case ARM64_SSBD_EL1_ENTRY:
>  		if (required) {
>  			__this_cpu_write(arm64_ssbd_callback_required, 1);
> -			do_ssbd(true);
> +			arm64_set_ssbd_mitigation(true);
>  		}
>  		break;
>  
>  	case ARM64_SSBD_FORCE_ENABLE:
>  		pr_info_once("%s forced from command-line\n", entry->desc);
> -		do_ssbd(true);
> +		arm64_set_ssbd_mitigation(true);
>  		required = true;
>  		break;
>  
> diff --git a/arch/arm64/kernel/suspend.c b/arch/arm64/kernel/suspend.c
> index a307b9e13392..70c283368b64 100644
> --- a/arch/arm64/kernel/suspend.c
> +++ b/arch/arm64/kernel/suspend.c
> @@ -62,6 +62,14 @@ void notrace __cpu_suspend_exit(void)
>  	 */
>  	if (hw_breakpoint_restore)
>  		hw_breakpoint_restore(cpu);
> +
> +	/*
> +	 * On resume, firmware implementing dynamic mitigation will
> +	 * have turned the mitigation on. If the user has forcefully
> +	 * disabled it, make sure their wishes are obeyed.
> +	 */
> +	if (arm64_get_ssbd_state() == ARM64_SSBD_FORCE_DISABLE)
> +		arm64_set_ssbd_mitigation(false);
>  }

This looks fine for idle and suspend-to-ram, so:

Reviewed-by: Mark Rutland <mark.rutland@arm.com>

However, for suspend-to-disk (i.e hibernate), the kernel doing the
resume might have SSBD force-disabled, while this kernel (which has just
been resumed) wants it enabled.

I think we also need something in swsusp_arch_suspend(), right after the
call to __cpu_suspend_exit() to re-enable that.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 110+ messages in thread

* [PATCH 08/14] arm64: ssbd: Disable mitigation on CPU resume if required by user
@ 2018-05-24 11:55     ` Mark Rutland
  0 siblings, 0 replies; 110+ messages in thread
From: Mark Rutland @ 2018-05-24 11:55 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, May 22, 2018 at 04:06:42PM +0100, Marc Zyngier wrote:
> On a system where firmware can dynamically change the state of the
> mitigation, the CPU will always come up with the mitigation enabled,
> including when coming back from suspend.
> 
> If the user has requested "no mitigation" via a command line option,
> let's enforce it by calling into the firmware again to disable it.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
>  arch/arm64/include/asm/cpufeature.h | 6 ++++++
>  arch/arm64/kernel/cpu_errata.c      | 8 ++++----
>  arch/arm64/kernel/suspend.c         | 8 ++++++++
>  3 files changed, 18 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index 1bacdf57f0af..d9dcb683259e 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -553,6 +553,12 @@ static inline int arm64_get_ssbd_state(void)
>  #endif
>  }
>  
> +#ifdef CONFIG_ARM64_SSBD
> +void arm64_set_ssbd_mitigation(bool state);
> +#else
> +static inline void arm64_set_ssbd_mitigation(bool state) {}
> +#endif
> +
>  #endif /* __ASSEMBLY__ */
>  
>  #endif
> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
> index 8f686f39b9c1..b4c12e9140f0 100644
> --- a/arch/arm64/kernel/cpu_errata.c
> +++ b/arch/arm64/kernel/cpu_errata.c
> @@ -297,7 +297,7 @@ void __init arm64_enable_wa2_handling(struct alt_instr *alt,
>  		*updptr = cpu_to_le32(aarch64_insn_gen_nop());
>  }
>  
> -static void do_ssbd(bool state)
> +void arm64_set_ssbd_mitigation(bool state)

Using this name from the outset would be nice, if you're happy to fold
that earlier in the seires. Not a big deal either way.

>  {
>  	switch (psci_ops.conduit) {
>  	case PSCI_CONDUIT_HVC:
> @@ -371,20 +371,20 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
>  	switch (ssbd_state) {
>  	case ARM64_SSBD_FORCE_DISABLE:
>  		pr_info_once("%s disabled from command-line\n", entry->desc);
> -		do_ssbd(false);
> +		arm64_set_ssbd_mitigation(false);
>  		required = false;
>  		break;
>  
>  	case ARM64_SSBD_EL1_ENTRY:
>  		if (required) {
>  			__this_cpu_write(arm64_ssbd_callback_required, 1);
> -			do_ssbd(true);
> +			arm64_set_ssbd_mitigation(true);
>  		}
>  		break;
>  
>  	case ARM64_SSBD_FORCE_ENABLE:
>  		pr_info_once("%s forced from command-line\n", entry->desc);
> -		do_ssbd(true);
> +		arm64_set_ssbd_mitigation(true);
>  		required = true;
>  		break;
>  
> diff --git a/arch/arm64/kernel/suspend.c b/arch/arm64/kernel/suspend.c
> index a307b9e13392..70c283368b64 100644
> --- a/arch/arm64/kernel/suspend.c
> +++ b/arch/arm64/kernel/suspend.c
> @@ -62,6 +62,14 @@ void notrace __cpu_suspend_exit(void)
>  	 */
>  	if (hw_breakpoint_restore)
>  		hw_breakpoint_restore(cpu);
> +
> +	/*
> +	 * On resume, firmware implementing dynamic mitigation will
> +	 * have turned the mitigation on. If the user has forcefully
> +	 * disabled it, make sure their wishes are obeyed.
> +	 */
> +	if (arm64_get_ssbd_state() == ARM64_SSBD_FORCE_DISABLE)
> +		arm64_set_ssbd_mitigation(false);
>  }

This looks fine for idle and suspend-to-ram, so:

Reviewed-by: Mark Rutland <mark.rutland@arm.com>

However, for suspend-to-disk (i.e hibernate), the kernel doing the
resume might have SSBD force-disabled, while this kernel (which has just
been resumed) wants it enabled.

I think we also need something in swsusp_arch_suspend(), right after the
call to __cpu_suspend_exit() to re-enable that.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 110+ messages in thread

* Re: [PATCH 09/14] arm64: ssbd: Introduce thread flag to control userspace mitigation
  2018-05-22 15:06   ` Marc Zyngier
@ 2018-05-24 12:01     ` Mark Rutland
  -1 siblings, 0 replies; 110+ messages in thread
From: Mark Rutland @ 2018-05-24 12:01 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: linux-arm-kernel, linux-kernel, kvmarm, Kees Cook,
	Catalin Marinas, Will Deacon, Andy Lutomirski,
	Greg Kroah-Hartman, Thomas Gleixner

On Tue, May 22, 2018 at 04:06:43PM +0100, Marc Zyngier wrote:
> In order to allow userspace to be mitigated on demand, let's
> introduce a new thread flag that prevents the mitigation from
> being turned off when exiting to userspace, and doesn't turn
> it on on entry into the kernel (with the assumtion that the

Nit: s/assumtion/assumption/

> mitigation is always enabled in the kernel itself).
> 
> This will be used by a prctl interface introduced in a later
> patch.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

On the assumption that this flag cannot be flipped while a task is in
userspace:

Reviewed-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> ---
>  arch/arm64/include/asm/thread_info.h | 1 +
>  arch/arm64/kernel/entry.S            | 2 ++
>  2 files changed, 3 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h
> index 740aa03c5f0d..cbcf11b5e637 100644
> --- a/arch/arm64/include/asm/thread_info.h
> +++ b/arch/arm64/include/asm/thread_info.h
> @@ -94,6 +94,7 @@ void arch_release_task_struct(struct task_struct *tsk);
>  #define TIF_32BIT		22	/* 32bit process */
>  #define TIF_SVE			23	/* Scalable Vector Extension in use */
>  #define TIF_SVE_VL_INHERIT	24	/* Inherit sve_vl_onexec across exec */
> +#define TIF_SSBD		25	/* Wants SSB mitigation */
>  
>  #define _TIF_SIGPENDING		(1 << TIF_SIGPENDING)
>  #define _TIF_NEED_RESCHED	(1 << TIF_NEED_RESCHED)
> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index e6f6e2339b22..28ad8799406f 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -147,6 +147,8 @@ alternative_cb	arm64_enable_wa2_handling
>  alternative_cb_end
>  	ldr_this_cpu	\tmp2, arm64_ssbd_callback_required, \tmp1
>  	cbz	\tmp2, \targ
> +	ldr	\tmp2, [tsk, #TSK_TI_FLAGS]
> +	tbnz	\tmp2, #TIF_SSBD, \targ
>  	mov	w0, #ARM_SMCCC_ARCH_WORKAROUND_2
>  	mov	w1, #\state
>  alternative_cb	arm64_update_smccc_conduit
> -- 
> 2.14.2
> 
> _______________________________________________
> kvmarm mailing list
> kvmarm@lists.cs.columbia.edu
> https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 110+ messages in thread

* [PATCH 09/14] arm64: ssbd: Introduce thread flag to control userspace mitigation
@ 2018-05-24 12:01     ` Mark Rutland
  0 siblings, 0 replies; 110+ messages in thread
From: Mark Rutland @ 2018-05-24 12:01 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, May 22, 2018 at 04:06:43PM +0100, Marc Zyngier wrote:
> In order to allow userspace to be mitigated on demand, let's
> introduce a new thread flag that prevents the mitigation from
> being turned off when exiting to userspace, and doesn't turn
> it on on entry into the kernel (with the assumtion that the

Nit: s/assumtion/assumption/

> mitigation is always enabled in the kernel itself).
> 
> This will be used by a prctl interface introduced in a later
> patch.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

On the assumption that this flag cannot be flipped while a task is in
userspace:

Reviewed-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> ---
>  arch/arm64/include/asm/thread_info.h | 1 +
>  arch/arm64/kernel/entry.S            | 2 ++
>  2 files changed, 3 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h
> index 740aa03c5f0d..cbcf11b5e637 100644
> --- a/arch/arm64/include/asm/thread_info.h
> +++ b/arch/arm64/include/asm/thread_info.h
> @@ -94,6 +94,7 @@ void arch_release_task_struct(struct task_struct *tsk);
>  #define TIF_32BIT		22	/* 32bit process */
>  #define TIF_SVE			23	/* Scalable Vector Extension in use */
>  #define TIF_SVE_VL_INHERIT	24	/* Inherit sve_vl_onexec across exec */
> +#define TIF_SSBD		25	/* Wants SSB mitigation */
>  
>  #define _TIF_SIGPENDING		(1 << TIF_SIGPENDING)
>  #define _TIF_NEED_RESCHED	(1 << TIF_NEED_RESCHED)
> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index e6f6e2339b22..28ad8799406f 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -147,6 +147,8 @@ alternative_cb	arm64_enable_wa2_handling
>  alternative_cb_end
>  	ldr_this_cpu	\tmp2, arm64_ssbd_callback_required, \tmp1
>  	cbz	\tmp2, \targ
> +	ldr	\tmp2, [tsk, #TSK_TI_FLAGS]
> +	tbnz	\tmp2, #TIF_SSBD, \targ
>  	mov	w0, #ARM_SMCCC_ARCH_WORKAROUND_2
>  	mov	w1, #\state
>  alternative_cb	arm64_update_smccc_conduit
> -- 
> 2.14.2
> 
> _______________________________________________
> kvmarm mailing list
> kvmarm at lists.cs.columbia.edu
> https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 110+ messages in thread

* Re: [PATCH 10/14] arm64: ssbd: Add prctl interface for per-thread mitigation
  2018-05-22 15:06   ` Marc Zyngier
@ 2018-05-24 12:10     ` Mark Rutland
  -1 siblings, 0 replies; 110+ messages in thread
From: Mark Rutland @ 2018-05-24 12:10 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: linux-arm-kernel, linux-kernel, kvmarm, Will Deacon,
	Catalin Marinas, Thomas Gleixner, Andy Lutomirski, Kees Cook,
	Greg Kroah-Hartman, Christoffer Dall

On Tue, May 22, 2018 at 04:06:44PM +0100, Marc Zyngier wrote:
> If running on a system that performs dynamic SSBD mitigation, allow
> userspace to request the mitigation for itself. This is implemented
> as a prctl call, allowing the mitigation to be enabled or disabled at
> will for this particular thread.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
>  arch/arm64/kernel/Makefile |   1 +
>  arch/arm64/kernel/ssbd.c   | 107 +++++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 108 insertions(+)
>  create mode 100644 arch/arm64/kernel/ssbd.c
> 
> diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
> index bf825f38d206..0025f8691046 100644
> --- a/arch/arm64/kernel/Makefile
> +++ b/arch/arm64/kernel/Makefile
> @@ -54,6 +54,7 @@ arm64-obj-$(CONFIG_ARM64_RELOC_TEST)	+= arm64-reloc-test.o
>  arm64-reloc-test-y := reloc_test_core.o reloc_test_syms.o
>  arm64-obj-$(CONFIG_CRASH_DUMP)		+= crash_dump.o
>  arm64-obj-$(CONFIG_ARM_SDE_INTERFACE)	+= sdei.o
> +arm64-obj-$(CONFIG_ARM64_SSBD)		+= ssbd.o
>  
>  obj-y					+= $(arm64-obj-y) vdso/ probes/
>  obj-m					+= $(arm64-obj-m)
> diff --git a/arch/arm64/kernel/ssbd.c b/arch/arm64/kernel/ssbd.c
> new file mode 100644
> index 000000000000..34e3c430176b
> --- /dev/null
> +++ b/arch/arm64/kernel/ssbd.c
> @@ -0,0 +1,107 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2018 ARM Ltd, All Rights Reserved.
> + */
> +

#include <linux/errno.h>

... for the error numbers you return below.

> +#include <linux/sched.h>
> +#include <linux/thread_info.h>
> +
> +#include <asm/cpufeature.h>
> +
> +/*
> + * prctl interface for SSBD
> + * FIXME: Drop the below ifdefery once the common interface has been merged.
> + */
> +#ifdef PR_SPEC_STORE_BYPASS
> +static int ssbd_prctl_set(struct task_struct *task, unsigned long ctrl)
> +{
> +	int state = arm64_get_ssbd_state();
> +
> +	/* Unsupported or already mitigated */
> +	if (state == ARM64_SSBD_UNKNOWN)
> +		return -EINVAL;
> +	if (state == ARM64_SSBD_MITIGATED)
> +		return -EPERM;
> +
> +	/*
> +	 * Things are a bit backward here: the arm64 internal API
> +	 * *enables the mitigation* when the userspace API *disables
> +	 * speculation*. So much fun.
> +	 */
> +	switch (ctrl) {
> +	case PR_SPEC_ENABLE:
> +		/* If speculation is force disabled, enable is not allowed */
> +		if (state == ARM64_SSBD_FORCE_ENABLE ||
> +		    task_spec_ssb_force_disable(task))
> +			return -EPERM;
> +		task_clear_spec_ssb_disable(task);
> +		clear_tsk_thread_flag(task, TIF_SSBD);
> +		break;
> +	case PR_SPEC_DISABLE:
> +		if (state == ARM64_SSBD_FORCE_DISABLE)
> +			return -EPERM;
> +		task_set_spec_ssb_disable(task);
> +		set_tsk_thread_flag(task, TIF_SSBD);
> +		break;
> +	case PR_SPEC_FORCE_DISABLE:
> +		if (state == ARM64_SSBD_FORCE_DISABLE)
> +			return -EPERM;
> +		task_set_spec_ssb_disable(task);
> +		task_set_spec_ssb_force_disable(task);
> +		set_tsk_thread_flag(task, TIF_SSBD);
> +		break;
> +	default:
> +		return -ERANGE;
> +	}
> +
> +	return 0;
> +}

I'll have to take a look at the core implementation to make sense of
the rest.

Mark.

^ permalink raw reply	[flat|nested] 110+ messages in thread

* [PATCH 10/14] arm64: ssbd: Add prctl interface for per-thread mitigation
@ 2018-05-24 12:10     ` Mark Rutland
  0 siblings, 0 replies; 110+ messages in thread
From: Mark Rutland @ 2018-05-24 12:10 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, May 22, 2018 at 04:06:44PM +0100, Marc Zyngier wrote:
> If running on a system that performs dynamic SSBD mitigation, allow
> userspace to request the mitigation for itself. This is implemented
> as a prctl call, allowing the mitigation to be enabled or disabled at
> will for this particular thread.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
>  arch/arm64/kernel/Makefile |   1 +
>  arch/arm64/kernel/ssbd.c   | 107 +++++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 108 insertions(+)
>  create mode 100644 arch/arm64/kernel/ssbd.c
> 
> diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
> index bf825f38d206..0025f8691046 100644
> --- a/arch/arm64/kernel/Makefile
> +++ b/arch/arm64/kernel/Makefile
> @@ -54,6 +54,7 @@ arm64-obj-$(CONFIG_ARM64_RELOC_TEST)	+= arm64-reloc-test.o
>  arm64-reloc-test-y := reloc_test_core.o reloc_test_syms.o
>  arm64-obj-$(CONFIG_CRASH_DUMP)		+= crash_dump.o
>  arm64-obj-$(CONFIG_ARM_SDE_INTERFACE)	+= sdei.o
> +arm64-obj-$(CONFIG_ARM64_SSBD)		+= ssbd.o
>  
>  obj-y					+= $(arm64-obj-y) vdso/ probes/
>  obj-m					+= $(arm64-obj-m)
> diff --git a/arch/arm64/kernel/ssbd.c b/arch/arm64/kernel/ssbd.c
> new file mode 100644
> index 000000000000..34e3c430176b
> --- /dev/null
> +++ b/arch/arm64/kernel/ssbd.c
> @@ -0,0 +1,107 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2018 ARM Ltd, All Rights Reserved.
> + */
> +

#include <linux/errno.h>

... for the error numbers you return below.

> +#include <linux/sched.h>
> +#include <linux/thread_info.h>
> +
> +#include <asm/cpufeature.h>
> +
> +/*
> + * prctl interface for SSBD
> + * FIXME: Drop the below ifdefery once the common interface has been merged.
> + */
> +#ifdef PR_SPEC_STORE_BYPASS
> +static int ssbd_prctl_set(struct task_struct *task, unsigned long ctrl)
> +{
> +	int state = arm64_get_ssbd_state();
> +
> +	/* Unsupported or already mitigated */
> +	if (state == ARM64_SSBD_UNKNOWN)
> +		return -EINVAL;
> +	if (state == ARM64_SSBD_MITIGATED)
> +		return -EPERM;
> +
> +	/*
> +	 * Things are a bit backward here: the arm64 internal API
> +	 * *enables the mitigation* when the userspace API *disables
> +	 * speculation*. So much fun.
> +	 */
> +	switch (ctrl) {
> +	case PR_SPEC_ENABLE:
> +		/* If speculation is force disabled, enable is not allowed */
> +		if (state == ARM64_SSBD_FORCE_ENABLE ||
> +		    task_spec_ssb_force_disable(task))
> +			return -EPERM;
> +		task_clear_spec_ssb_disable(task);
> +		clear_tsk_thread_flag(task, TIF_SSBD);
> +		break;
> +	case PR_SPEC_DISABLE:
> +		if (state == ARM64_SSBD_FORCE_DISABLE)
> +			return -EPERM;
> +		task_set_spec_ssb_disable(task);
> +		set_tsk_thread_flag(task, TIF_SSBD);
> +		break;
> +	case PR_SPEC_FORCE_DISABLE:
> +		if (state == ARM64_SSBD_FORCE_DISABLE)
> +			return -EPERM;
> +		task_set_spec_ssb_disable(task);
> +		task_set_spec_ssb_force_disable(task);
> +		set_tsk_thread_flag(task, TIF_SSBD);
> +		break;
> +	default:
> +		return -ERANGE;
> +	}
> +
> +	return 0;
> +}

I'll have to take a look at the core implementation to make sense of
the rest.

Mark.

^ permalink raw reply	[flat|nested] 110+ messages in thread

* Re: [PATCH 02/14] arm64: Call ARCH_WORKAROUND_2 on transitions between EL0 and EL1
  2018-05-24 10:52       ` Mark Rutland
@ 2018-05-24 12:10         ` Robin Murphy
  -1 siblings, 0 replies; 110+ messages in thread
From: Robin Murphy @ 2018-05-24 12:10 UTC (permalink / raw)
  To: Mark Rutland, Julien Grall
  Cc: Kees Cook, Marc Zyngier, Catalin Marinas, Will Deacon,
	linux-kernel, Andy Lutomirski, Greg Kroah-Hartman,
	Thomas Gleixner, kvmarm, linux-arm-kernel

On 24/05/18 11:52, Mark Rutland wrote:
> On Wed, May 23, 2018 at 10:23:20AM +0100, Julien Grall wrote:
>> Hi Marc,
>>
>> On 05/22/2018 04:06 PM, Marc Zyngier wrote:
>>> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
>>> index ec2ee720e33e..f33e6aed3037 100644
>>> --- a/arch/arm64/kernel/entry.S
>>> +++ b/arch/arm64/kernel/entry.S
>>> @@ -18,6 +18,7 @@
>>>     * along with this program.  If not, see <http://www.gnu.org/licenses/>.
>>>     */
>>> +#include <linux/arm-smccc.h>
>>>    #include <linux/init.h>
>>>    #include <linux/linkage.h>
>>> @@ -137,6 +138,18 @@ alternative_else_nop_endif
>>>    	add	\dst, \dst, #(\sym - .entry.tramp.text)
>>>    	.endm
>>> +	// This macro corrupts x0-x3. It is the caller's duty
>>> +	// to save/restore them if required.
>>
>> NIT: Shouldn't you use /* ... */ for multi-line comments?
> 
> There's no requirement to do so, and IIRC even Torvalds prefers '//'
> comments for multi-line things these days.

Also, this is an assembly code, not C; '//' is the actual A64 assembler 
comment syntax so is arguably more appropriate here in spite of being 
moot thanks to preprocessing.

Robin.

^ permalink raw reply	[flat|nested] 110+ messages in thread

* [PATCH 02/14] arm64: Call ARCH_WORKAROUND_2 on transitions between EL0 and EL1
@ 2018-05-24 12:10         ` Robin Murphy
  0 siblings, 0 replies; 110+ messages in thread
From: Robin Murphy @ 2018-05-24 12:10 UTC (permalink / raw)
  To: linux-arm-kernel

On 24/05/18 11:52, Mark Rutland wrote:
> On Wed, May 23, 2018 at 10:23:20AM +0100, Julien Grall wrote:
>> Hi Marc,
>>
>> On 05/22/2018 04:06 PM, Marc Zyngier wrote:
>>> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
>>> index ec2ee720e33e..f33e6aed3037 100644
>>> --- a/arch/arm64/kernel/entry.S
>>> +++ b/arch/arm64/kernel/entry.S
>>> @@ -18,6 +18,7 @@
>>>     * along with this program.  If not, see <http://www.gnu.org/licenses/>.
>>>     */
>>> +#include <linux/arm-smccc.h>
>>>    #include <linux/init.h>
>>>    #include <linux/linkage.h>
>>> @@ -137,6 +138,18 @@ alternative_else_nop_endif
>>>    	add	\dst, \dst, #(\sym - .entry.tramp.text)
>>>    	.endm
>>> +	// This macro corrupts x0-x3. It is the caller's duty
>>> +	// to save/restore them if required.
>>
>> NIT: Shouldn't you use /* ... */ for multi-line comments?
> 
> There's no requirement to do so, and IIRC even Torvalds prefers '//'
> comments for multi-line things these days.

Also, this is an assembly code, not C; '//' is the actual A64 assembler 
comment syntax so is arguably more appropriate here in spite of being 
moot thanks to preprocessing.

Robin.

^ permalink raw reply	[flat|nested] 110+ messages in thread

* Re: [PATCH 11/14] arm64: KVM: Add HYP per-cpu accessors
  2018-05-22 15:06   ` Marc Zyngier
@ 2018-05-24 12:11     ` Mark Rutland
  -1 siblings, 0 replies; 110+ messages in thread
From: Mark Rutland @ 2018-05-24 12:11 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: linux-arm-kernel, linux-kernel, kvmarm, Will Deacon,
	Catalin Marinas, Thomas Gleixner, Andy Lutomirski, Kees Cook,
	Greg Kroah-Hartman, Christoffer Dall

On Tue, May 22, 2018 at 04:06:45PM +0100, Marc Zyngier wrote:
> As we're going to require to access per-cpu variables at EL2,
> let's craft the minimum set of accessors required to implement
> reading a per-cpu variable, relying on tpidr_el2 to contain the
> per-cpu offset.
> 
> Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

Reviewed-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> ---
>  arch/arm64/include/asm/kvm_asm.h | 27 +++++++++++++++++++++++++--
>  1 file changed, 25 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
> index f6648a3e4152..fefd8cf42c35 100644
> --- a/arch/arm64/include/asm/kvm_asm.h
> +++ b/arch/arm64/include/asm/kvm_asm.h
> @@ -71,14 +71,37 @@ extern u32 __kvm_get_mdcr_el2(void);
>  
>  extern u32 __init_stage2_translation(void);
>  
> +/* Home-grown __this_cpu_{ptr,read} variants that always work at HYP */
> +#define __hyp_this_cpu_ptr(sym)						\
> +	({								\
> +		void *__ptr = hyp_symbol_addr(sym);			\
> +		__ptr += read_sysreg(tpidr_el2);			\
> +		(typeof(&sym))__ptr;					\
> +	 })
> +
> +#define __hyp_this_cpu_read(sym)					\
> +	({								\
> +		*__hyp_this_cpu_ptr(sym);				\
> +	 })
> +
>  #else /* __ASSEMBLY__ */
>  
> -.macro get_host_ctxt reg, tmp
> -	adr_l	\reg, kvm_host_cpu_state
> +.macro hyp_adr_this_cpu reg, sym, tmp
> +	adr_l	\reg, \sym
>  	mrs	\tmp, tpidr_el2
>  	add	\reg, \reg, \tmp
>  .endm
>  
> +.macro hyp_ldr_this_cpu reg, sym, tmp
> +	adr_l	\reg, \sym
> +	mrs	\tmp, tpidr_el2
> +	ldr	\reg,  [\reg, \tmp]
> +.endm
> +
> +.macro get_host_ctxt reg, tmp
> +	hyp_adr_this_cpu \reg, kvm_host_cpu_state, \tmp
> +.endm
> +
>  .macro get_vcpu_ptr vcpu, ctxt
>  	get_host_ctxt \ctxt, \vcpu
>  	ldr	\vcpu, [\ctxt, #HOST_CONTEXT_VCPU]
> -- 
> 2.14.2
> 

^ permalink raw reply	[flat|nested] 110+ messages in thread

* [PATCH 11/14] arm64: KVM: Add HYP per-cpu accessors
@ 2018-05-24 12:11     ` Mark Rutland
  0 siblings, 0 replies; 110+ messages in thread
From: Mark Rutland @ 2018-05-24 12:11 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, May 22, 2018 at 04:06:45PM +0100, Marc Zyngier wrote:
> As we're going to require to access per-cpu variables at EL2,
> let's craft the minimum set of accessors required to implement
> reading a per-cpu variable, relying on tpidr_el2 to contain the
> per-cpu offset.
> 
> Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

Reviewed-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> ---
>  arch/arm64/include/asm/kvm_asm.h | 27 +++++++++++++++++++++++++--
>  1 file changed, 25 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
> index f6648a3e4152..fefd8cf42c35 100644
> --- a/arch/arm64/include/asm/kvm_asm.h
> +++ b/arch/arm64/include/asm/kvm_asm.h
> @@ -71,14 +71,37 @@ extern u32 __kvm_get_mdcr_el2(void);
>  
>  extern u32 __init_stage2_translation(void);
>  
> +/* Home-grown __this_cpu_{ptr,read} variants that always work at HYP */
> +#define __hyp_this_cpu_ptr(sym)						\
> +	({								\
> +		void *__ptr = hyp_symbol_addr(sym);			\
> +		__ptr += read_sysreg(tpidr_el2);			\
> +		(typeof(&sym))__ptr;					\
> +	 })
> +
> +#define __hyp_this_cpu_read(sym)					\
> +	({								\
> +		*__hyp_this_cpu_ptr(sym);				\
> +	 })
> +
>  #else /* __ASSEMBLY__ */
>  
> -.macro get_host_ctxt reg, tmp
> -	adr_l	\reg, kvm_host_cpu_state
> +.macro hyp_adr_this_cpu reg, sym, tmp
> +	adr_l	\reg, \sym
>  	mrs	\tmp, tpidr_el2
>  	add	\reg, \reg, \tmp
>  .endm
>  
> +.macro hyp_ldr_this_cpu reg, sym, tmp
> +	adr_l	\reg, \sym
> +	mrs	\tmp, tpidr_el2
> +	ldr	\reg,  [\reg, \tmp]
> +.endm
> +
> +.macro get_host_ctxt reg, tmp
> +	hyp_adr_this_cpu \reg, kvm_host_cpu_state, \tmp
> +.endm
> +
>  .macro get_vcpu_ptr vcpu, ctxt
>  	get_host_ctxt \ctxt, \vcpu
>  	ldr	\vcpu, [\ctxt, #HOST_CONTEXT_VCPU]
> -- 
> 2.14.2
> 

^ permalink raw reply	[flat|nested] 110+ messages in thread

* Re: [PATCH 12/14] arm64: KVM: Add ARCH_WORKAROUND_2 support for guests
  2018-05-22 15:06   ` Marc Zyngier
@ 2018-05-24 12:15     ` Mark Rutland
  -1 siblings, 0 replies; 110+ messages in thread
From: Mark Rutland @ 2018-05-24 12:15 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: linux-arm-kernel, linux-kernel, kvmarm, Will Deacon,
	Catalin Marinas, Thomas Gleixner, Andy Lutomirski, Kees Cook,
	Greg Kroah-Hartman, Christoffer Dall

On Tue, May 22, 2018 at 04:06:46PM +0100, Marc Zyngier wrote:
> In order to offer ARCH_WORKAROUND_2 support to guests, we need
> a bit of infrastructure.
> 
> Let's add a flag indicating whether or not the guest uses
> SSBD mitigation. Depending on the state of this flag, allow
> KVM to disable ARCH_WORKAROUND_2 before entering the guest,
> and enable it when exiting it.
> 
> Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

Reviewed-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> ---
>  arch/arm/include/asm/kvm_mmu.h    |  5 +++++
>  arch/arm64/include/asm/kvm_asm.h  |  3 +++
>  arch/arm64/include/asm/kvm_host.h |  3 +++
>  arch/arm64/include/asm/kvm_mmu.h  | 24 ++++++++++++++++++++++
>  arch/arm64/kvm/hyp/switch.c       | 42 +++++++++++++++++++++++++++++++++++++++
>  virt/kvm/arm/arm.c                |  4 ++++
>  6 files changed, 81 insertions(+)
> 
> diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
> index 707a1f06dc5d..b0c17d88ed40 100644
> --- a/arch/arm/include/asm/kvm_mmu.h
> +++ b/arch/arm/include/asm/kvm_mmu.h
> @@ -319,6 +319,11 @@ static inline int kvm_map_vectors(void)
>  	return 0;
>  }
>  
> +static inline int hyp_map_aux_data(void)
> +{
> +	return 0;
> +}
> +
>  #define kvm_phys_to_vttbr(addr)		(addr)
>  
>  #endif	/* !__ASSEMBLY__ */
> diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
> index fefd8cf42c35..d4fbb1356c4c 100644
> --- a/arch/arm64/include/asm/kvm_asm.h
> +++ b/arch/arm64/include/asm/kvm_asm.h
> @@ -33,6 +33,9 @@
>  #define KVM_ARM64_DEBUG_DIRTY_SHIFT	0
>  #define KVM_ARM64_DEBUG_DIRTY		(1 << KVM_ARM64_DEBUG_DIRTY_SHIFT)
>  
> +#define	VCPU_WORKAROUND_2_FLAG_SHIFT	0
> +#define	VCPU_WORKAROUND_2_FLAG		(_AC(1, UL) << VCPU_WORKAROUND_2_FLAG_SHIFT)
> +
>  /* Translate a kernel address of @sym into its equivalent linear mapping */
>  #define kvm_ksym_ref(sym)						\
>  	({								\
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 469de8acd06f..9bef3f69bdcd 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -216,6 +216,9 @@ struct kvm_vcpu_arch {
>  	/* Exception Information */
>  	struct kvm_vcpu_fault_info fault;
>  
> +	/* State of various workarounds, see kvm_asm.h for bit assignment */
> +	u64 workaround_flags;
> +
>  	/* Guest debug state */
>  	u64 debug_flags;
>  
> diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
> index 082110993647..eb7a5c2a2bfb 100644
> --- a/arch/arm64/include/asm/kvm_mmu.h
> +++ b/arch/arm64/include/asm/kvm_mmu.h
> @@ -457,6 +457,30 @@ static inline int kvm_map_vectors(void)
>  }
>  #endif
>  
> +#ifdef CONFIG_ARM64_SSBD
> +DECLARE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
> +
> +static inline int hyp_map_aux_data(void)
> +{
> +	int cpu, err;
> +
> +	for_each_possible_cpu(cpu) {
> +		u64 *ptr;
> +
> +		ptr = per_cpu_ptr(&arm64_ssbd_callback_required, cpu);
> +		err = create_hyp_mappings(ptr, ptr + 1, PAGE_HYP);
> +		if (err)
> +			return err;
> +	}
> +	return 0;
> +}
> +#else
> +static inline int hyp_map_aux_data(void)
> +{
> +	return 0;
> +}
> +#endif
> +
>  #define kvm_phys_to_vttbr(addr)		phys_to_ttbr(addr)
>  
>  #endif /* __ASSEMBLY__ */
> diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
> index d9645236e474..c50cedc447f1 100644
> --- a/arch/arm64/kvm/hyp/switch.c
> +++ b/arch/arm64/kvm/hyp/switch.c
> @@ -15,6 +15,7 @@
>   * along with this program.  If not, see <http://www.gnu.org/licenses/>.
>   */
>  
> +#include <linux/arm-smccc.h>
>  #include <linux/types.h>
>  #include <linux/jump_label.h>
>  #include <uapi/linux/psci.h>
> @@ -389,6 +390,39 @@ static bool __hyp_text fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
>  	return false;
>  }
>  
> +static inline bool __hyp_text __needs_ssbd_off(struct kvm_vcpu *vcpu)
> +{
> +	if (!cpus_have_const_cap(ARM64_SSBD))
> +		return false;
> +
> +	return !(vcpu->arch.workaround_flags & VCPU_WORKAROUND_2_FLAG);
> +}
> +
> +static void __hyp_text __set_guest_arch_workaround_state(struct kvm_vcpu *vcpu)
> +{
> +#ifdef CONFIG_ARM64_SSBD
> +	/*
> +	 * The host runs with the workaround always present. If the
> +	 * guest wants it disabled, so be it...
> +	 */
> +	if (__needs_ssbd_off(vcpu) &&
> +	    __hyp_this_cpu_read(arm64_ssbd_callback_required))
> +		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2, 0, NULL);
> +#endif
> +}
> +
> +static void __hyp_text __set_host_arch_workaround_state(struct kvm_vcpu *vcpu)
> +{
> +#ifdef CONFIG_ARM64_SSBD
> +	/*
> +	 * If the guest has disabled the workaround, bring it back on.
> +	 */
> +	if (__needs_ssbd_off(vcpu) &&
> +	    __hyp_this_cpu_read(arm64_ssbd_callback_required))
> +		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2, 1, NULL);
> +#endif
> +}
> +
>  /* Switch to the guest for VHE systems running in EL2 */
>  int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
>  {
> @@ -409,6 +443,8 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
>  	sysreg_restore_guest_state_vhe(guest_ctxt);
>  	__debug_switch_to_guest(vcpu);
>  
> +	__set_guest_arch_workaround_state(vcpu);
> +
>  	do {
>  		/* Jump in the fire! */
>  		exit_code = __guest_enter(vcpu, host_ctxt);
> @@ -416,6 +452,8 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
>  		/* And we're baaack! */
>  	} while (fixup_guest_exit(vcpu, &exit_code));
>  
> +	__set_host_arch_workaround_state(vcpu);
> +
>  	fp_enabled = fpsimd_enabled_vhe();
>  
>  	sysreg_save_guest_state_vhe(guest_ctxt);
> @@ -465,6 +503,8 @@ int __hyp_text __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu)
>  	__sysreg_restore_state_nvhe(guest_ctxt);
>  	__debug_switch_to_guest(vcpu);
>  
> +	__set_guest_arch_workaround_state(vcpu);
> +
>  	do {
>  		/* Jump in the fire! */
>  		exit_code = __guest_enter(vcpu, host_ctxt);
> @@ -472,6 +512,8 @@ int __hyp_text __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu)
>  		/* And we're baaack! */
>  	} while (fixup_guest_exit(vcpu, &exit_code));
>  
> +	__set_host_arch_workaround_state(vcpu);
> +
>  	fp_enabled = __fpsimd_enabled_nvhe();
>  
>  	__sysreg_save_state_nvhe(guest_ctxt);
> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> index a4c1b76240df..2d9b4795edb2 100644
> --- a/virt/kvm/arm/arm.c
> +++ b/virt/kvm/arm/arm.c
> @@ -1490,6 +1490,10 @@ static int init_hyp_mode(void)
>  		}
>  	}
>  
> +	err = hyp_map_aux_data();
> +	if (err)
> +		kvm_err("Cannot map host auxilary data: %d\n", err);
> +
>  	return 0;
>  
>  out_err:
> -- 
> 2.14.2
> 

^ permalink raw reply	[flat|nested] 110+ messages in thread

* [PATCH 12/14] arm64: KVM: Add ARCH_WORKAROUND_2 support for guests
@ 2018-05-24 12:15     ` Mark Rutland
  0 siblings, 0 replies; 110+ messages in thread
From: Mark Rutland @ 2018-05-24 12:15 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, May 22, 2018 at 04:06:46PM +0100, Marc Zyngier wrote:
> In order to offer ARCH_WORKAROUND_2 support to guests, we need
> a bit of infrastructure.
> 
> Let's add a flag indicating whether or not the guest uses
> SSBD mitigation. Depending on the state of this flag, allow
> KVM to disable ARCH_WORKAROUND_2 before entering the guest,
> and enable it when exiting it.
> 
> Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

Reviewed-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> ---
>  arch/arm/include/asm/kvm_mmu.h    |  5 +++++
>  arch/arm64/include/asm/kvm_asm.h  |  3 +++
>  arch/arm64/include/asm/kvm_host.h |  3 +++
>  arch/arm64/include/asm/kvm_mmu.h  | 24 ++++++++++++++++++++++
>  arch/arm64/kvm/hyp/switch.c       | 42 +++++++++++++++++++++++++++++++++++++++
>  virt/kvm/arm/arm.c                |  4 ++++
>  6 files changed, 81 insertions(+)
> 
> diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
> index 707a1f06dc5d..b0c17d88ed40 100644
> --- a/arch/arm/include/asm/kvm_mmu.h
> +++ b/arch/arm/include/asm/kvm_mmu.h
> @@ -319,6 +319,11 @@ static inline int kvm_map_vectors(void)
>  	return 0;
>  }
>  
> +static inline int hyp_map_aux_data(void)
> +{
> +	return 0;
> +}
> +
>  #define kvm_phys_to_vttbr(addr)		(addr)
>  
>  #endif	/* !__ASSEMBLY__ */
> diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
> index fefd8cf42c35..d4fbb1356c4c 100644
> --- a/arch/arm64/include/asm/kvm_asm.h
> +++ b/arch/arm64/include/asm/kvm_asm.h
> @@ -33,6 +33,9 @@
>  #define KVM_ARM64_DEBUG_DIRTY_SHIFT	0
>  #define KVM_ARM64_DEBUG_DIRTY		(1 << KVM_ARM64_DEBUG_DIRTY_SHIFT)
>  
> +#define	VCPU_WORKAROUND_2_FLAG_SHIFT	0
> +#define	VCPU_WORKAROUND_2_FLAG		(_AC(1, UL) << VCPU_WORKAROUND_2_FLAG_SHIFT)
> +
>  /* Translate a kernel address of @sym into its equivalent linear mapping */
>  #define kvm_ksym_ref(sym)						\
>  	({								\
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 469de8acd06f..9bef3f69bdcd 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -216,6 +216,9 @@ struct kvm_vcpu_arch {
>  	/* Exception Information */
>  	struct kvm_vcpu_fault_info fault;
>  
> +	/* State of various workarounds, see kvm_asm.h for bit assignment */
> +	u64 workaround_flags;
> +
>  	/* Guest debug state */
>  	u64 debug_flags;
>  
> diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
> index 082110993647..eb7a5c2a2bfb 100644
> --- a/arch/arm64/include/asm/kvm_mmu.h
> +++ b/arch/arm64/include/asm/kvm_mmu.h
> @@ -457,6 +457,30 @@ static inline int kvm_map_vectors(void)
>  }
>  #endif
>  
> +#ifdef CONFIG_ARM64_SSBD
> +DECLARE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
> +
> +static inline int hyp_map_aux_data(void)
> +{
> +	int cpu, err;
> +
> +	for_each_possible_cpu(cpu) {
> +		u64 *ptr;
> +
> +		ptr = per_cpu_ptr(&arm64_ssbd_callback_required, cpu);
> +		err = create_hyp_mappings(ptr, ptr + 1, PAGE_HYP);
> +		if (err)
> +			return err;
> +	}
> +	return 0;
> +}
> +#else
> +static inline int hyp_map_aux_data(void)
> +{
> +	return 0;
> +}
> +#endif
> +
>  #define kvm_phys_to_vttbr(addr)		phys_to_ttbr(addr)
>  
>  #endif /* __ASSEMBLY__ */
> diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
> index d9645236e474..c50cedc447f1 100644
> --- a/arch/arm64/kvm/hyp/switch.c
> +++ b/arch/arm64/kvm/hyp/switch.c
> @@ -15,6 +15,7 @@
>   * along with this program.  If not, see <http://www.gnu.org/licenses/>.
>   */
>  
> +#include <linux/arm-smccc.h>
>  #include <linux/types.h>
>  #include <linux/jump_label.h>
>  #include <uapi/linux/psci.h>
> @@ -389,6 +390,39 @@ static bool __hyp_text fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
>  	return false;
>  }
>  
> +static inline bool __hyp_text __needs_ssbd_off(struct kvm_vcpu *vcpu)
> +{
> +	if (!cpus_have_const_cap(ARM64_SSBD))
> +		return false;
> +
> +	return !(vcpu->arch.workaround_flags & VCPU_WORKAROUND_2_FLAG);
> +}
> +
> +static void __hyp_text __set_guest_arch_workaround_state(struct kvm_vcpu *vcpu)
> +{
> +#ifdef CONFIG_ARM64_SSBD
> +	/*
> +	 * The host runs with the workaround always present. If the
> +	 * guest wants it disabled, so be it...
> +	 */
> +	if (__needs_ssbd_off(vcpu) &&
> +	    __hyp_this_cpu_read(arm64_ssbd_callback_required))
> +		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2, 0, NULL);
> +#endif
> +}
> +
> +static void __hyp_text __set_host_arch_workaround_state(struct kvm_vcpu *vcpu)
> +{
> +#ifdef CONFIG_ARM64_SSBD
> +	/*
> +	 * If the guest has disabled the workaround, bring it back on.
> +	 */
> +	if (__needs_ssbd_off(vcpu) &&
> +	    __hyp_this_cpu_read(arm64_ssbd_callback_required))
> +		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2, 1, NULL);
> +#endif
> +}
> +
>  /* Switch to the guest for VHE systems running in EL2 */
>  int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
>  {
> @@ -409,6 +443,8 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
>  	sysreg_restore_guest_state_vhe(guest_ctxt);
>  	__debug_switch_to_guest(vcpu);
>  
> +	__set_guest_arch_workaround_state(vcpu);
> +
>  	do {
>  		/* Jump in the fire! */
>  		exit_code = __guest_enter(vcpu, host_ctxt);
> @@ -416,6 +452,8 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
>  		/* And we're baaack! */
>  	} while (fixup_guest_exit(vcpu, &exit_code));
>  
> +	__set_host_arch_workaround_state(vcpu);
> +
>  	fp_enabled = fpsimd_enabled_vhe();
>  
>  	sysreg_save_guest_state_vhe(guest_ctxt);
> @@ -465,6 +503,8 @@ int __hyp_text __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu)
>  	__sysreg_restore_state_nvhe(guest_ctxt);
>  	__debug_switch_to_guest(vcpu);
>  
> +	__set_guest_arch_workaround_state(vcpu);
> +
>  	do {
>  		/* Jump in the fire! */
>  		exit_code = __guest_enter(vcpu, host_ctxt);
> @@ -472,6 +512,8 @@ int __hyp_text __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu)
>  		/* And we're baaack! */
>  	} while (fixup_guest_exit(vcpu, &exit_code));
>  
> +	__set_host_arch_workaround_state(vcpu);
> +
>  	fp_enabled = __fpsimd_enabled_nvhe();
>  
>  	__sysreg_save_state_nvhe(guest_ctxt);
> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> index a4c1b76240df..2d9b4795edb2 100644
> --- a/virt/kvm/arm/arm.c
> +++ b/virt/kvm/arm/arm.c
> @@ -1490,6 +1490,10 @@ static int init_hyp_mode(void)
>  		}
>  	}
>  
> +	err = hyp_map_aux_data();
> +	if (err)
> +		kvm_err("Cannot map host auxilary data: %d\n", err);
> +
>  	return 0;
>  
>  out_err:
> -- 
> 2.14.2
> 

^ permalink raw reply	[flat|nested] 110+ messages in thread

* Re: [PATCH 09/14] arm64: ssbd: Introduce thread flag to control userspace mitigation
  2018-05-24 12:01     ` Mark Rutland
@ 2018-05-24 12:16       ` Marc Zyngier
  -1 siblings, 0 replies; 110+ messages in thread
From: Marc Zyngier @ 2018-05-24 12:16 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, linux-kernel, kvmarm, Kees Cook,
	Catalin Marinas, Will Deacon, Andy Lutomirski,
	Greg Kroah-Hartman, Thomas Gleixner

On 24/05/18 13:01, Mark Rutland wrote:
> On Tue, May 22, 2018 at 04:06:43PM +0100, Marc Zyngier wrote:
>> In order to allow userspace to be mitigated on demand, let's
>> introduce a new thread flag that prevents the mitigation from
>> being turned off when exiting to userspace, and doesn't turn
>> it on on entry into the kernel (with the assumtion that the
> 
> Nit: s/assumtion/assumption/
> 
>> mitigation is always enabled in the kernel itself).
>>
>> This will be used by a prctl interface introduced in a later
>> patch.
>>
>> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> 
> On the assumption that this flag cannot be flipped while a task is in
> userspace:

Well, that's the case unless you get into the seccomp thing, which does
change TIF_SSBD on all threads of the task, without taking it to the
kernel first. That nicely breaks the state machine, and you end-up
running non-mitigated in the kernel. Oops.

I have a couple of patches fixing that, using a second flag
(TIF_SSBD_PENDING) that gets turned into the real thing on exit to
userspace. It's pretty ugly though.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 110+ messages in thread

* [PATCH 09/14] arm64: ssbd: Introduce thread flag to control userspace mitigation
@ 2018-05-24 12:16       ` Marc Zyngier
  0 siblings, 0 replies; 110+ messages in thread
From: Marc Zyngier @ 2018-05-24 12:16 UTC (permalink / raw)
  To: linux-arm-kernel

On 24/05/18 13:01, Mark Rutland wrote:
> On Tue, May 22, 2018 at 04:06:43PM +0100, Marc Zyngier wrote:
>> In order to allow userspace to be mitigated on demand, let's
>> introduce a new thread flag that prevents the mitigation from
>> being turned off when exiting to userspace, and doesn't turn
>> it on on entry into the kernel (with the assumtion that the
> 
> Nit: s/assumtion/assumption/
> 
>> mitigation is always enabled in the kernel itself).
>>
>> This will be used by a prctl interface introduced in a later
>> patch.
>>
>> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> 
> On the assumption that this flag cannot be flipped while a task is in
> userspace:

Well, that's the case unless you get into the seccomp thing, which does
change TIF_SSBD on all threads of the task, without taking it to the
kernel first. That nicely breaks the state machine, and you end-up
running non-mitigated in the kernel. Oops.

I have a couple of patches fixing that, using a second flag
(TIF_SSBD_PENDING) that gets turned into the real thing on exit to
userspace. It's pretty ugly though.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 110+ messages in thread

* Re: [PATCH 09/14] arm64: ssbd: Introduce thread flag to control userspace mitigation
  2018-05-24 12:16       ` Marc Zyngier
@ 2018-05-24 12:19         ` Will Deacon
  -1 siblings, 0 replies; 110+ messages in thread
From: Will Deacon @ 2018-05-24 12:19 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Mark Rutland, linux-arm-kernel, linux-kernel, kvmarm, Kees Cook,
	Catalin Marinas, Andy Lutomirski, Greg Kroah-Hartman,
	Thomas Gleixner

On Thu, May 24, 2018 at 01:16:38PM +0100, Marc Zyngier wrote:
> On 24/05/18 13:01, Mark Rutland wrote:
> > On Tue, May 22, 2018 at 04:06:43PM +0100, Marc Zyngier wrote:
> >> In order to allow userspace to be mitigated on demand, let's
> >> introduce a new thread flag that prevents the mitigation from
> >> being turned off when exiting to userspace, and doesn't turn
> >> it on on entry into the kernel (with the assumtion that the
> > 
> > Nit: s/assumtion/assumption/
> > 
> >> mitigation is always enabled in the kernel itself).
> >>
> >> This will be used by a prctl interface introduced in a later
> >> patch.
> >>
> >> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> > 
> > On the assumption that this flag cannot be flipped while a task is in
> > userspace:
> 
> Well, that's the case unless you get into the seccomp thing, which does
> change TIF_SSBD on all threads of the task, without taking it to the
> kernel first. That nicely breaks the state machine, and you end-up
> running non-mitigated in the kernel. Oops.
> 
> I have a couple of patches fixing that, using a second flag
> (TIF_SSBD_PENDING) that gets turned into the real thing on exit to
> userspace. It's pretty ugly though.

... which introduces the need for atomics on the entry path too :(

I would /much/ rather kill the seccomp implicit enabling of the mitigation,
or at least have a way to opt-out per arch since it doesn't seem to be
technically justified imo.

Will

^ permalink raw reply	[flat|nested] 110+ messages in thread

* [PATCH 09/14] arm64: ssbd: Introduce thread flag to control userspace mitigation
@ 2018-05-24 12:19         ` Will Deacon
  0 siblings, 0 replies; 110+ messages in thread
From: Will Deacon @ 2018-05-24 12:19 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, May 24, 2018 at 01:16:38PM +0100, Marc Zyngier wrote:
> On 24/05/18 13:01, Mark Rutland wrote:
> > On Tue, May 22, 2018 at 04:06:43PM +0100, Marc Zyngier wrote:
> >> In order to allow userspace to be mitigated on demand, let's
> >> introduce a new thread flag that prevents the mitigation from
> >> being turned off when exiting to userspace, and doesn't turn
> >> it on on entry into the kernel (with the assumtion that the
> > 
> > Nit: s/assumtion/assumption/
> > 
> >> mitigation is always enabled in the kernel itself).
> >>
> >> This will be used by a prctl interface introduced in a later
> >> patch.
> >>
> >> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> > 
> > On the assumption that this flag cannot be flipped while a task is in
> > userspace:
> 
> Well, that's the case unless you get into the seccomp thing, which does
> change TIF_SSBD on all threads of the task, without taking it to the
> kernel first. That nicely breaks the state machine, and you end-up
> running non-mitigated in the kernel. Oops.
> 
> I have a couple of patches fixing that, using a second flag
> (TIF_SSBD_PENDING) that gets turned into the real thing on exit to
> userspace. It's pretty ugly though.

... which introduces the need for atomics on the entry path too :(

I would /much/ rather kill the seccomp implicit enabling of the mitigation,
or at least have a way to opt-out per arch since it doesn't seem to be
technically justified imo.

Will

^ permalink raw reply	[flat|nested] 110+ messages in thread

* Re: [PATCH 13/14] arm64: KVM: Handle guest's ARCH_WORKAROUND_2 requests
  2018-05-22 15:06   ` Marc Zyngier
@ 2018-05-24 12:22     ` Mark Rutland
  -1 siblings, 0 replies; 110+ messages in thread
From: Mark Rutland @ 2018-05-24 12:22 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: linux-arm-kernel, linux-kernel, kvmarm, Will Deacon,
	Catalin Marinas, Thomas Gleixner, Andy Lutomirski, Kees Cook,
	Greg Kroah-Hartman, Christoffer Dall

On Tue, May 22, 2018 at 04:06:47PM +0100, Marc Zyngier wrote:
> In order to forward the guest's ARCH_WORKAROUND_2 calls to EL3,
> add a small(-ish) sequence to handle it at EL2. Special care must
> be taken to track the state of the guest itself by updating the
> workaround flags. We also rely on patching to enable calls into
> the firmware.
> 
> Note that since we need to execute branches, this always executes
> after the Spectre-v2 mitigation has been applied.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
>  arch/arm64/kernel/asm-offsets.c |  1 +
>  arch/arm64/kvm/hyp/hyp-entry.S  | 38 +++++++++++++++++++++++++++++++++++++-
>  2 files changed, 38 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
> index 5bdda651bd05..323aeb5f2fe6 100644
> --- a/arch/arm64/kernel/asm-offsets.c
> +++ b/arch/arm64/kernel/asm-offsets.c
> @@ -136,6 +136,7 @@ int main(void)
>  #ifdef CONFIG_KVM_ARM_HOST
>    DEFINE(VCPU_CONTEXT,		offsetof(struct kvm_vcpu, arch.ctxt));
>    DEFINE(VCPU_FAULT_DISR,	offsetof(struct kvm_vcpu, arch.fault.disr_el1));
> +  DEFINE(VCPU_WORKAROUND_FLAGS,	offsetof(struct kvm_vcpu, arch.workaround_flags));
>    DEFINE(CPU_GP_REGS,		offsetof(struct kvm_cpu_context, gp_regs));
>    DEFINE(CPU_USER_PT_REGS,	offsetof(struct kvm_regs, regs));
>    DEFINE(CPU_FP_REGS,		offsetof(struct kvm_regs, fp_regs));
> diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
> index bffece27b5c1..5b1fa37ca1f4 100644
> --- a/arch/arm64/kvm/hyp/hyp-entry.S
> +++ b/arch/arm64/kvm/hyp/hyp-entry.S
> @@ -106,8 +106,44 @@ el1_hvc_guest:
>  	 */
>  	ldr	x1, [sp]				// Guest's x0
>  	eor	w1, w1, #ARM_SMCCC_ARCH_WORKAROUND_1
> +	cbz	w1, wa_epilogue
> +
> +	/* ARM_SMCCC_ARCH_WORKAROUND_2 handling */
> +	eor	w1, w1, #(ARM_SMCCC_ARCH_WORKAROUND_1 ^ \
> +			  ARM_SMCCC_ARCH_WORKAROUND_2)

... that took me a second. Lovely. :)

>  	cbnz	w1, el1_trap
> -	mov	x0, x1
> +
> +#ifdef CONFIG_ARM64_SSBD
> +alternative_cb	arm64_enable_wa2_handling
> +	b	wa2_end
> +alternative_cb_end
> +	get_vcpu_ptr	x2, x0
> +	ldr	x0, [x2, #VCPU_WORKAROUND_FLAGS]
> +
> +	/* Sanitize the argument and update the guest flags*/

Nit: space before the trailing '*/'. Either that or use a '//' comment.

Otherwise, this looks fine, so with that fixed:

Reviewed-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> +	ldr	x1, [sp, #8]			// Guest's x1
> +	clz	w1, w1				// Murphy's device:
> +	lsr	w1, w1, #5			// w1 = !!w1 without using
> +	eor	w1, w1, #1			// the flags...
> +	bfi	x0, x1, #VCPU_WORKAROUND_2_FLAG_SHIFT, #1
> +	str	x0, [x2, #VCPU_WORKAROUND_FLAGS]
> +
> +	/* Check that we actually need to perform the call */
> +	hyp_ldr_this_cpu x0, arm64_ssbd_callback_required, x2
> +	cbz	x0, wa2_end
> +
> +	mov	w0, #ARM_SMCCC_ARCH_WORKAROUND_2
> +	smc	#0
> +
> +	/* Don't leak data from the SMC call */
> +	mov	x3, xzr
> +wa2_end:
> +	mov	x2, xzr
> +	mov	x1, xzr
> +#endif
> +
> +wa_epilogue:
> +	mov	x0, xzr
>  	add	sp, sp, #16
>  	eret
>  
> -- 
> 2.14.2
> 

^ permalink raw reply	[flat|nested] 110+ messages in thread

* [PATCH 13/14] arm64: KVM: Handle guest's ARCH_WORKAROUND_2 requests
@ 2018-05-24 12:22     ` Mark Rutland
  0 siblings, 0 replies; 110+ messages in thread
From: Mark Rutland @ 2018-05-24 12:22 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, May 22, 2018 at 04:06:47PM +0100, Marc Zyngier wrote:
> In order to forward the guest's ARCH_WORKAROUND_2 calls to EL3,
> add a small(-ish) sequence to handle it at EL2. Special care must
> be taken to track the state of the guest itself by updating the
> workaround flags. We also rely on patching to enable calls into
> the firmware.
> 
> Note that since we need to execute branches, this always executes
> after the Spectre-v2 mitigation has been applied.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
>  arch/arm64/kernel/asm-offsets.c |  1 +
>  arch/arm64/kvm/hyp/hyp-entry.S  | 38 +++++++++++++++++++++++++++++++++++++-
>  2 files changed, 38 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
> index 5bdda651bd05..323aeb5f2fe6 100644
> --- a/arch/arm64/kernel/asm-offsets.c
> +++ b/arch/arm64/kernel/asm-offsets.c
> @@ -136,6 +136,7 @@ int main(void)
>  #ifdef CONFIG_KVM_ARM_HOST
>    DEFINE(VCPU_CONTEXT,		offsetof(struct kvm_vcpu, arch.ctxt));
>    DEFINE(VCPU_FAULT_DISR,	offsetof(struct kvm_vcpu, arch.fault.disr_el1));
> +  DEFINE(VCPU_WORKAROUND_FLAGS,	offsetof(struct kvm_vcpu, arch.workaround_flags));
>    DEFINE(CPU_GP_REGS,		offsetof(struct kvm_cpu_context, gp_regs));
>    DEFINE(CPU_USER_PT_REGS,	offsetof(struct kvm_regs, regs));
>    DEFINE(CPU_FP_REGS,		offsetof(struct kvm_regs, fp_regs));
> diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
> index bffece27b5c1..5b1fa37ca1f4 100644
> --- a/arch/arm64/kvm/hyp/hyp-entry.S
> +++ b/arch/arm64/kvm/hyp/hyp-entry.S
> @@ -106,8 +106,44 @@ el1_hvc_guest:
>  	 */
>  	ldr	x1, [sp]				// Guest's x0
>  	eor	w1, w1, #ARM_SMCCC_ARCH_WORKAROUND_1
> +	cbz	w1, wa_epilogue
> +
> +	/* ARM_SMCCC_ARCH_WORKAROUND_2 handling */
> +	eor	w1, w1, #(ARM_SMCCC_ARCH_WORKAROUND_1 ^ \
> +			  ARM_SMCCC_ARCH_WORKAROUND_2)

... that took me a second. Lovely. :)

>  	cbnz	w1, el1_trap
> -	mov	x0, x1
> +
> +#ifdef CONFIG_ARM64_SSBD
> +alternative_cb	arm64_enable_wa2_handling
> +	b	wa2_end
> +alternative_cb_end
> +	get_vcpu_ptr	x2, x0
> +	ldr	x0, [x2, #VCPU_WORKAROUND_FLAGS]
> +
> +	/* Sanitize the argument and update the guest flags*/

Nit: space before the trailing '*/'. Either that or use a '//' comment.

Otherwise, this looks fine, so with that fixed:

Reviewed-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> +	ldr	x1, [sp, #8]			// Guest's x1
> +	clz	w1, w1				// Murphy's device:
> +	lsr	w1, w1, #5			// w1 = !!w1 without using
> +	eor	w1, w1, #1			// the flags...
> +	bfi	x0, x1, #VCPU_WORKAROUND_2_FLAG_SHIFT, #1
> +	str	x0, [x2, #VCPU_WORKAROUND_FLAGS]
> +
> +	/* Check that we actually need to perform the call */
> +	hyp_ldr_this_cpu x0, arm64_ssbd_callback_required, x2
> +	cbz	x0, wa2_end
> +
> +	mov	w0, #ARM_SMCCC_ARCH_WORKAROUND_2
> +	smc	#0
> +
> +	/* Don't leak data from the SMC call */
> +	mov	x3, xzr
> +wa2_end:
> +	mov	x2, xzr
> +	mov	x1, xzr
> +#endif
> +
> +wa_epilogue:
> +	mov	x0, xzr
>  	add	sp, sp, #16
>  	eret
>  
> -- 
> 2.14.2
> 

^ permalink raw reply	[flat|nested] 110+ messages in thread

* Re: [PATCH 10/14] arm64: ssbd: Add prctl interface for per-thread mitigation
  2018-05-22 15:06   ` Marc Zyngier
@ 2018-05-24 12:24     ` Will Deacon
  -1 siblings, 0 replies; 110+ messages in thread
From: Will Deacon @ 2018-05-24 12:24 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: linux-arm-kernel, linux-kernel, kvmarm, Catalin Marinas,
	Thomas Gleixner, Andy Lutomirski, Kees Cook, Greg Kroah-Hartman,
	Christoffer Dall

On Tue, May 22, 2018 at 04:06:44PM +0100, Marc Zyngier wrote:
> If running on a system that performs dynamic SSBD mitigation, allow
> userspace to request the mitigation for itself. This is implemented
> as a prctl call, allowing the mitigation to be enabled or disabled at
> will for this particular thread.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
>  arch/arm64/kernel/Makefile |   1 +
>  arch/arm64/kernel/ssbd.c   | 107 +++++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 108 insertions(+)
>  create mode 100644 arch/arm64/kernel/ssbd.c
> 
> diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
> index bf825f38d206..0025f8691046 100644
> --- a/arch/arm64/kernel/Makefile
> +++ b/arch/arm64/kernel/Makefile
> @@ -54,6 +54,7 @@ arm64-obj-$(CONFIG_ARM64_RELOC_TEST)	+= arm64-reloc-test.o
>  arm64-reloc-test-y := reloc_test_core.o reloc_test_syms.o
>  arm64-obj-$(CONFIG_CRASH_DUMP)		+= crash_dump.o
>  arm64-obj-$(CONFIG_ARM_SDE_INTERFACE)	+= sdei.o
> +arm64-obj-$(CONFIG_ARM64_SSBD)		+= ssbd.o
>  
>  obj-y					+= $(arm64-obj-y) vdso/ probes/
>  obj-m					+= $(arm64-obj-m)
> diff --git a/arch/arm64/kernel/ssbd.c b/arch/arm64/kernel/ssbd.c
> new file mode 100644
> index 000000000000..34e3c430176b
> --- /dev/null
> +++ b/arch/arm64/kernel/ssbd.c
> @@ -0,0 +1,107 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2018 ARM Ltd, All Rights Reserved.
> + */
> +
> +#include <linux/sched.h>
> +#include <linux/thread_info.h>
> +
> +#include <asm/cpufeature.h>
> +
> +/*
> + * prctl interface for SSBD
> + * FIXME: Drop the below ifdefery once the common interface has been merged.
> + */
> +#ifdef PR_SPEC_STORE_BYPASS
> +static int ssbd_prctl_set(struct task_struct *task, unsigned long ctrl)
> +{
> +	int state = arm64_get_ssbd_state();
> +
> +	/* Unsupported or already mitigated */
> +	if (state == ARM64_SSBD_UNKNOWN)
> +		return -EINVAL;
> +	if (state == ARM64_SSBD_MITIGATED)
> +		return -EPERM;

I'm not sure this is the best thing to do. If the firmware says that the
CPU is mitigated, we should probably return 0 for PR_SPEC_DISABLE but
-EPERM for PR_SPEC_ENABLE (i.e. the part that doesn't work is disabling
the mitigation).

Will

^ permalink raw reply	[flat|nested] 110+ messages in thread

* [PATCH 10/14] arm64: ssbd: Add prctl interface for per-thread mitigation
@ 2018-05-24 12:24     ` Will Deacon
  0 siblings, 0 replies; 110+ messages in thread
From: Will Deacon @ 2018-05-24 12:24 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, May 22, 2018 at 04:06:44PM +0100, Marc Zyngier wrote:
> If running on a system that performs dynamic SSBD mitigation, allow
> userspace to request the mitigation for itself. This is implemented
> as a prctl call, allowing the mitigation to be enabled or disabled at
> will for this particular thread.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
>  arch/arm64/kernel/Makefile |   1 +
>  arch/arm64/kernel/ssbd.c   | 107 +++++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 108 insertions(+)
>  create mode 100644 arch/arm64/kernel/ssbd.c
> 
> diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
> index bf825f38d206..0025f8691046 100644
> --- a/arch/arm64/kernel/Makefile
> +++ b/arch/arm64/kernel/Makefile
> @@ -54,6 +54,7 @@ arm64-obj-$(CONFIG_ARM64_RELOC_TEST)	+= arm64-reloc-test.o
>  arm64-reloc-test-y := reloc_test_core.o reloc_test_syms.o
>  arm64-obj-$(CONFIG_CRASH_DUMP)		+= crash_dump.o
>  arm64-obj-$(CONFIG_ARM_SDE_INTERFACE)	+= sdei.o
> +arm64-obj-$(CONFIG_ARM64_SSBD)		+= ssbd.o
>  
>  obj-y					+= $(arm64-obj-y) vdso/ probes/
>  obj-m					+= $(arm64-obj-m)
> diff --git a/arch/arm64/kernel/ssbd.c b/arch/arm64/kernel/ssbd.c
> new file mode 100644
> index 000000000000..34e3c430176b
> --- /dev/null
> +++ b/arch/arm64/kernel/ssbd.c
> @@ -0,0 +1,107 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2018 ARM Ltd, All Rights Reserved.
> + */
> +
> +#include <linux/sched.h>
> +#include <linux/thread_info.h>
> +
> +#include <asm/cpufeature.h>
> +
> +/*
> + * prctl interface for SSBD
> + * FIXME: Drop the below ifdefery once the common interface has been merged.
> + */
> +#ifdef PR_SPEC_STORE_BYPASS
> +static int ssbd_prctl_set(struct task_struct *task, unsigned long ctrl)
> +{
> +	int state = arm64_get_ssbd_state();
> +
> +	/* Unsupported or already mitigated */
> +	if (state == ARM64_SSBD_UNKNOWN)
> +		return -EINVAL;
> +	if (state == ARM64_SSBD_MITIGATED)
> +		return -EPERM;

I'm not sure this is the best thing to do. If the firmware says that the
CPU is mitigated, we should probably return 0 for PR_SPEC_DISABLE but
-EPERM for PR_SPEC_ENABLE (i.e. the part that doesn't work is disabling
the mitigation).

Will

^ permalink raw reply	[flat|nested] 110+ messages in thread

* Re: [PATCH 14/14] arm64: KVM: Add ARCH_WORKAROUND_2 discovery through ARCH_FEATURES_FUNC_ID
  2018-05-22 15:06   ` Marc Zyngier
@ 2018-05-24 12:25     ` Mark Rutland
  -1 siblings, 0 replies; 110+ messages in thread
From: Mark Rutland @ 2018-05-24 12:25 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: linux-arm-kernel, linux-kernel, kvmarm, Will Deacon,
	Catalin Marinas, Thomas Gleixner, Andy Lutomirski, Kees Cook,
	Greg Kroah-Hartman, Christoffer Dall

On Tue, May 22, 2018 at 04:06:48PM +0100, Marc Zyngier wrote:
> Now that all our infrastructure is in place, let's expose the
> availability of ARCH_WORKAROUND_2 to guests. We take this opportunity
> to tidy up a couple of SMCCC constants.
> 
> Acked-by: Christoffer Dall <christoffer.dall@arm.com>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

Reviewed-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> ---
>  arch/arm/include/asm/kvm_host.h   | 12 ++++++++++++
>  arch/arm64/include/asm/kvm_host.h | 23 +++++++++++++++++++++++
>  arch/arm64/kvm/reset.c            |  4 ++++
>  virt/kvm/arm/psci.c               | 18 ++++++++++++++++--
>  4 files changed, 55 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
> index c7c28c885a19..d478766b56c1 100644
> --- a/arch/arm/include/asm/kvm_host.h
> +++ b/arch/arm/include/asm/kvm_host.h
> @@ -315,6 +315,18 @@ static inline bool kvm_arm_harden_branch_predictor(void)
>  	return false;
>  }
>  
> +#define KVM_SSBD_UNKNOWN		-1
> +#define KVM_SSBD_FORCE_DISABLE		0
> +#define KVM_SSBD_EL1_ENTRY		1
> +#define KVM_SSBD_FORCE_ENABLE		2
> +#define KVM_SSBD_MITIGATED		3
> +
> +static inline int kvm_arm_have_ssbd(void)
> +{
> +	/* No way to detect it yet, pretend it is not there. */
> +	return KVM_SSBD_UNKNOWN;
> +}
> +
>  static inline void kvm_vcpu_load_sysregs(struct kvm_vcpu *vcpu) {}
>  static inline void kvm_vcpu_put_sysregs(struct kvm_vcpu *vcpu) {}
>  
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 9bef3f69bdcd..082b0dbb85c6 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -455,6 +455,29 @@ static inline bool kvm_arm_harden_branch_predictor(void)
>  	return cpus_have_const_cap(ARM64_HARDEN_BRANCH_PREDICTOR);
>  }
>  
> +#define KVM_SSBD_UNKNOWN		-1
> +#define KVM_SSBD_FORCE_DISABLE		0
> +#define KVM_SSBD_EL1_ENTRY		1
> +#define KVM_SSBD_FORCE_ENABLE		2
> +#define KVM_SSBD_MITIGATED		3
> +
> +static inline int kvm_arm_have_ssbd(void)
> +{
> +	switch (arm64_get_ssbd_state()) {
> +	case ARM64_SSBD_FORCE_DISABLE:
> +		return KVM_SSBD_FORCE_DISABLE;
> +	case ARM64_SSBD_EL1_ENTRY:
> +		return KVM_SSBD_EL1_ENTRY;
> +	case ARM64_SSBD_FORCE_ENABLE:
> +		return KVM_SSBD_FORCE_ENABLE;
> +	case ARM64_SSBD_MITIGATED:
> +		return KVM_SSBD_MITIGATED;
> +	case ARM64_SSBD_UNKNOWN:
> +	default:
> +		return KVM_SSBD_UNKNOWN;
> +	}
> +}
> +
>  void kvm_vcpu_load_sysregs(struct kvm_vcpu *vcpu);
>  void kvm_vcpu_put_sysregs(struct kvm_vcpu *vcpu);
>  
> diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
> index 3256b9228e75..20a7dfee8494 100644
> --- a/arch/arm64/kvm/reset.c
> +++ b/arch/arm64/kvm/reset.c
> @@ -122,6 +122,10 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
>  	/* Reset PMU */
>  	kvm_pmu_vcpu_reset(vcpu);
>  
> +	/* Default workaround setup is enabled (if supported) */
> +	if (kvm_arm_have_ssbd() == KVM_SSBD_EL1_ENTRY)
> +		vcpu->arch.workaround_flags |= VCPU_WORKAROUND_2_FLAG;
> +
>  	/* Reset timer */
>  	return kvm_timer_vcpu_reset(vcpu);
>  }
> diff --git a/virt/kvm/arm/psci.c b/virt/kvm/arm/psci.c
> index c4762bef13c6..4843bfa1f986 100644
> --- a/virt/kvm/arm/psci.c
> +++ b/virt/kvm/arm/psci.c
> @@ -405,7 +405,7 @@ static int kvm_psci_call(struct kvm_vcpu *vcpu)
>  int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
>  {
>  	u32 func_id = smccc_get_function(vcpu);
> -	u32 val = PSCI_RET_NOT_SUPPORTED;
> +	u32 val = SMCCC_RET_NOT_SUPPORTED;
>  	u32 feature;
>  
>  	switch (func_id) {
> @@ -417,7 +417,21 @@ int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
>  		switch(feature) {
>  		case ARM_SMCCC_ARCH_WORKAROUND_1:
>  			if (kvm_arm_harden_branch_predictor())
> -				val = 0;
> +				val = SMCCC_RET_SUCCESS;
> +			break;
> +		case ARM_SMCCC_ARCH_WORKAROUND_2:
> +			switch (kvm_arm_have_ssbd()) {
> +			case KVM_SSBD_FORCE_DISABLE:
> +			case KVM_SSBD_UNKNOWN:
> +				break;
> +			case KVM_SSBD_EL1_ENTRY:
> +				val = SMCCC_RET_SUCCESS;
> +				break;
> +			case KVM_SSBD_FORCE_ENABLE:
> +			case KVM_SSBD_MITIGATED:
> +				val = SMCCC_RET_NOT_REQUIRED;
> +				break;
> +			}
>  			break;
>  		}
>  		break;
> -- 
> 2.14.2
> 

^ permalink raw reply	[flat|nested] 110+ messages in thread

* [PATCH 14/14] arm64: KVM: Add ARCH_WORKAROUND_2 discovery through ARCH_FEATURES_FUNC_ID
@ 2018-05-24 12:25     ` Mark Rutland
  0 siblings, 0 replies; 110+ messages in thread
From: Mark Rutland @ 2018-05-24 12:25 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, May 22, 2018 at 04:06:48PM +0100, Marc Zyngier wrote:
> Now that all our infrastructure is in place, let's expose the
> availability of ARCH_WORKAROUND_2 to guests. We take this opportunity
> to tidy up a couple of SMCCC constants.
> 
> Acked-by: Christoffer Dall <christoffer.dall@arm.com>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

Reviewed-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> ---
>  arch/arm/include/asm/kvm_host.h   | 12 ++++++++++++
>  arch/arm64/include/asm/kvm_host.h | 23 +++++++++++++++++++++++
>  arch/arm64/kvm/reset.c            |  4 ++++
>  virt/kvm/arm/psci.c               | 18 ++++++++++++++++--
>  4 files changed, 55 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
> index c7c28c885a19..d478766b56c1 100644
> --- a/arch/arm/include/asm/kvm_host.h
> +++ b/arch/arm/include/asm/kvm_host.h
> @@ -315,6 +315,18 @@ static inline bool kvm_arm_harden_branch_predictor(void)
>  	return false;
>  }
>  
> +#define KVM_SSBD_UNKNOWN		-1
> +#define KVM_SSBD_FORCE_DISABLE		0
> +#define KVM_SSBD_EL1_ENTRY		1
> +#define KVM_SSBD_FORCE_ENABLE		2
> +#define KVM_SSBD_MITIGATED		3
> +
> +static inline int kvm_arm_have_ssbd(void)
> +{
> +	/* No way to detect it yet, pretend it is not there. */
> +	return KVM_SSBD_UNKNOWN;
> +}
> +
>  static inline void kvm_vcpu_load_sysregs(struct kvm_vcpu *vcpu) {}
>  static inline void kvm_vcpu_put_sysregs(struct kvm_vcpu *vcpu) {}
>  
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 9bef3f69bdcd..082b0dbb85c6 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -455,6 +455,29 @@ static inline bool kvm_arm_harden_branch_predictor(void)
>  	return cpus_have_const_cap(ARM64_HARDEN_BRANCH_PREDICTOR);
>  }
>  
> +#define KVM_SSBD_UNKNOWN		-1
> +#define KVM_SSBD_FORCE_DISABLE		0
> +#define KVM_SSBD_EL1_ENTRY		1
> +#define KVM_SSBD_FORCE_ENABLE		2
> +#define KVM_SSBD_MITIGATED		3
> +
> +static inline int kvm_arm_have_ssbd(void)
> +{
> +	switch (arm64_get_ssbd_state()) {
> +	case ARM64_SSBD_FORCE_DISABLE:
> +		return KVM_SSBD_FORCE_DISABLE;
> +	case ARM64_SSBD_EL1_ENTRY:
> +		return KVM_SSBD_EL1_ENTRY;
> +	case ARM64_SSBD_FORCE_ENABLE:
> +		return KVM_SSBD_FORCE_ENABLE;
> +	case ARM64_SSBD_MITIGATED:
> +		return KVM_SSBD_MITIGATED;
> +	case ARM64_SSBD_UNKNOWN:
> +	default:
> +		return KVM_SSBD_UNKNOWN;
> +	}
> +}
> +
>  void kvm_vcpu_load_sysregs(struct kvm_vcpu *vcpu);
>  void kvm_vcpu_put_sysregs(struct kvm_vcpu *vcpu);
>  
> diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
> index 3256b9228e75..20a7dfee8494 100644
> --- a/arch/arm64/kvm/reset.c
> +++ b/arch/arm64/kvm/reset.c
> @@ -122,6 +122,10 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
>  	/* Reset PMU */
>  	kvm_pmu_vcpu_reset(vcpu);
>  
> +	/* Default workaround setup is enabled (if supported) */
> +	if (kvm_arm_have_ssbd() == KVM_SSBD_EL1_ENTRY)
> +		vcpu->arch.workaround_flags |= VCPU_WORKAROUND_2_FLAG;
> +
>  	/* Reset timer */
>  	return kvm_timer_vcpu_reset(vcpu);
>  }
> diff --git a/virt/kvm/arm/psci.c b/virt/kvm/arm/psci.c
> index c4762bef13c6..4843bfa1f986 100644
> --- a/virt/kvm/arm/psci.c
> +++ b/virt/kvm/arm/psci.c
> @@ -405,7 +405,7 @@ static int kvm_psci_call(struct kvm_vcpu *vcpu)
>  int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
>  {
>  	u32 func_id = smccc_get_function(vcpu);
> -	u32 val = PSCI_RET_NOT_SUPPORTED;
> +	u32 val = SMCCC_RET_NOT_SUPPORTED;
>  	u32 feature;
>  
>  	switch (func_id) {
> @@ -417,7 +417,21 @@ int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
>  		switch(feature) {
>  		case ARM_SMCCC_ARCH_WORKAROUND_1:
>  			if (kvm_arm_harden_branch_predictor())
> -				val = 0;
> +				val = SMCCC_RET_SUCCESS;
> +			break;
> +		case ARM_SMCCC_ARCH_WORKAROUND_2:
> +			switch (kvm_arm_have_ssbd()) {
> +			case KVM_SSBD_FORCE_DISABLE:
> +			case KVM_SSBD_UNKNOWN:
> +				break;
> +			case KVM_SSBD_EL1_ENTRY:
> +				val = SMCCC_RET_SUCCESS;
> +				break;
> +			case KVM_SSBD_FORCE_ENABLE:
> +			case KVM_SSBD_MITIGATED:
> +				val = SMCCC_RET_NOT_REQUIRED;
> +				break;
> +			}
>  			break;
>  		}
>  		break;
> -- 
> 2.14.2
> 

^ permalink raw reply	[flat|nested] 110+ messages in thread

* Re: [PATCH 09/14] arm64: ssbd: Introduce thread flag to control userspace mitigation
  2018-05-24 12:19         ` Will Deacon
@ 2018-05-24 12:36           ` Marc Zyngier
  -1 siblings, 0 replies; 110+ messages in thread
From: Marc Zyngier @ 2018-05-24 12:36 UTC (permalink / raw)
  To: Will Deacon
  Cc: Mark Rutland, linux-arm-kernel, linux-kernel, kvmarm, Kees Cook,
	Catalin Marinas, Andy Lutomirski, Greg Kroah-Hartman,
	Thomas Gleixner

On 24/05/18 13:19, Will Deacon wrote:
> On Thu, May 24, 2018 at 01:16:38PM +0100, Marc Zyngier wrote:
>> On 24/05/18 13:01, Mark Rutland wrote:
>>> On Tue, May 22, 2018 at 04:06:43PM +0100, Marc Zyngier wrote:
>>>> In order to allow userspace to be mitigated on demand, let's
>>>> introduce a new thread flag that prevents the mitigation from
>>>> being turned off when exiting to userspace, and doesn't turn
>>>> it on on entry into the kernel (with the assumtion that the
>>>
>>> Nit: s/assumtion/assumption/
>>>
>>>> mitigation is always enabled in the kernel itself).
>>>>
>>>> This will be used by a prctl interface introduced in a later
>>>> patch.
>>>>
>>>> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
>>>
>>> On the assumption that this flag cannot be flipped while a task is in
>>> userspace:
>>
>> Well, that's the case unless you get into the seccomp thing, which does
>> change TIF_SSBD on all threads of the task, without taking it to the
>> kernel first. That nicely breaks the state machine, and you end-up
>> running non-mitigated in the kernel. Oops.
>>
>> I have a couple of patches fixing that, using a second flag
>> (TIF_SSBD_PENDING) that gets turned into the real thing on exit to
>> userspace. It's pretty ugly though.
> 
> ... which introduces the need for atomics on the entry path too :(

Oh, I'm not saying it is nice. It would hit us on the exception return
to userspace for all tasks (and not only the mitigated ones). I'd rather
not have this at all.

> I would /much/ rather kill the seccomp implicit enabling of the mitigation,
> or at least have a way to opt-out per arch since it doesn't seem to be
> technically justified imo.
I agree. The semantics are really odd (the thread still runs unmitigated
until it traps into the kernel), and I don't really get why seccomp
tasks should get a special treatment compared to the rest of the userspace.

But 4.17 is only something like 10 days away, so whatever we decide,
we'd better decide it soon.


	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 110+ messages in thread

* [PATCH 09/14] arm64: ssbd: Introduce thread flag to control userspace mitigation
@ 2018-05-24 12:36           ` Marc Zyngier
  0 siblings, 0 replies; 110+ messages in thread
From: Marc Zyngier @ 2018-05-24 12:36 UTC (permalink / raw)
  To: linux-arm-kernel

On 24/05/18 13:19, Will Deacon wrote:
> On Thu, May 24, 2018 at 01:16:38PM +0100, Marc Zyngier wrote:
>> On 24/05/18 13:01, Mark Rutland wrote:
>>> On Tue, May 22, 2018 at 04:06:43PM +0100, Marc Zyngier wrote:
>>>> In order to allow userspace to be mitigated on demand, let's
>>>> introduce a new thread flag that prevents the mitigation from
>>>> being turned off when exiting to userspace, and doesn't turn
>>>> it on on entry into the kernel (with the assumtion that the
>>>
>>> Nit: s/assumtion/assumption/
>>>
>>>> mitigation is always enabled in the kernel itself).
>>>>
>>>> This will be used by a prctl interface introduced in a later
>>>> patch.
>>>>
>>>> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
>>>
>>> On the assumption that this flag cannot be flipped while a task is in
>>> userspace:
>>
>> Well, that's the case unless you get into the seccomp thing, which does
>> change TIF_SSBD on all threads of the task, without taking it to the
>> kernel first. That nicely breaks the state machine, and you end-up
>> running non-mitigated in the kernel. Oops.
>>
>> I have a couple of patches fixing that, using a second flag
>> (TIF_SSBD_PENDING) that gets turned into the real thing on exit to
>> userspace. It's pretty ugly though.
> 
> ... which introduces the need for atomics on the entry path too :(

Oh, I'm not saying it is nice. It would hit us on the exception return
to userspace for all tasks (and not only the mitigated ones). I'd rather
not have this at all.

> I would /much/ rather kill the seccomp implicit enabling of the mitigation,
> or at least have a way to opt-out per arch since it doesn't seem to be
> technically justified imo.
I agree. The semantics are really odd (the thread still runs unmitigated
until it traps into the kernel), and I don't really get why seccomp
tasks should get a special treatment compared to the rest of the userspace.

But 4.17 is only something like 10 days away, so whatever we decide,
we'd better decide it soon.


	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 110+ messages in thread

* Re: [PATCH 04/14] arm64: Add ARCH_WORKAROUND_2 probing
  2018-05-24 11:39       ` Will Deacon
@ 2018-05-24 13:34         ` Suzuki K Poulose
  -1 siblings, 0 replies; 110+ messages in thread
From: Suzuki K Poulose @ 2018-05-24 13:34 UTC (permalink / raw)
  To: Will Deacon
  Cc: Marc Zyngier, linux-arm-kernel, linux-kernel, kvmarm, Kees Cook,
	Catalin Marinas, Andy Lutomirski, Greg Kroah-Hartman,
	Thomas Gleixner

On 24/05/18 12:39, Will Deacon wrote:
> On Thu, May 24, 2018 at 10:58:43AM +0100, Suzuki K Poulose wrote:
>> On 22/05/18 16:06, Marc Zyngier wrote:
>>> As for Spectre variant-2, we rely on SMCCC 1.1 to provide the
>>> discovery mechanism for detecting the SSBD mitigation.
>>>
>>> A new capability is also allocated for that purpose, and a
>>> config option.
>>>
>>> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
>>
>>
>>> +static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
>>> +				    int scope)
>>> +{
>>> +	struct arm_smccc_res res;
>>> +	bool supported = true;
>>> +
>>> +	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
>>> +
>>> +	if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
>>> +		return false;
>>> +
>>> +	/*
>>> +	 * The probe function return value is either negative
>>> +	 * (unsupported or mitigated), positive (unaffected), or zero
>>> +	 * (requires mitigation). We only need to do anything in the
>>> +	 * last case.
>>> +	 */
>>> +	switch (psci_ops.conduit) {
>>> +	case PSCI_CONDUIT_HVC:
>>> +		arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
>>> +				  ARM_SMCCC_ARCH_WORKAROUND_2, &res);
>>> +		if ((int)res.a0 != 0)
>>> +			supported = false;
>>> +		break;
>>> +
>>> +	case PSCI_CONDUIT_SMC:
>>> +		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
>>> +				  ARM_SMCCC_ARCH_WORKAROUND_2, &res);
>>> +		if ((int)res.a0 != 0)
>>> +			supported = false;
>>> +		break;
>>> +
>>> +	default:
>>> +		supported = false;
>>> +	}
>>> +
>>> +	if (supported) {
>>> +		__this_cpu_write(arm64_ssbd_callback_required, 1);
>>> +		do_ssbd(true);
>>> +	}
>>
>>
>> Marc,
>>
>> As discussed, we have minor issue with the "corner case". If a CPU
>> is hotplugged in which requires the mitigation, after the system has
>> finalised the cap to "not available", the CPU could go ahead and
>> do the "work around" as above, while not effectively doing anything
>> about it at runtime for KVM guests (as thats the only place where
>> we rely on the CAP being set).
>>
>> But, yes this is real corner case. There is no easy way to solve it
>> other than
>>
>> 1) Allow late modifications to CPU hwcaps
>>
>> OR
>>
>> 2) Penalise the fastpath to always check per-cpu setting.
> 
> Shouldn't we just avoid bring up CPUs that require the mitigation after
> we've finalised the capability to say that it's not required? Assuming this
> is just another issue with maxcpus=, then I don't much care for it.
Ah! Sorry, yes we do kill the CPU. But it is just that it will set the
ssbd_callback_required flag and issue the do_ssbd(), which is not an issue.

Yes this can only be triggered by maxcpus=.

Suzuki

^ permalink raw reply	[flat|nested] 110+ messages in thread

* [PATCH 04/14] arm64: Add ARCH_WORKAROUND_2 probing
@ 2018-05-24 13:34         ` Suzuki K Poulose
  0 siblings, 0 replies; 110+ messages in thread
From: Suzuki K Poulose @ 2018-05-24 13:34 UTC (permalink / raw)
  To: linux-arm-kernel

On 24/05/18 12:39, Will Deacon wrote:
> On Thu, May 24, 2018 at 10:58:43AM +0100, Suzuki K Poulose wrote:
>> On 22/05/18 16:06, Marc Zyngier wrote:
>>> As for Spectre variant-2, we rely on SMCCC 1.1 to provide the
>>> discovery mechanism for detecting the SSBD mitigation.
>>>
>>> A new capability is also allocated for that purpose, and a
>>> config option.
>>>
>>> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
>>
>>
>>> +static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
>>> +				    int scope)
>>> +{
>>> +	struct arm_smccc_res res;
>>> +	bool supported = true;
>>> +
>>> +	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
>>> +
>>> +	if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
>>> +		return false;
>>> +
>>> +	/*
>>> +	 * The probe function return value is either negative
>>> +	 * (unsupported or mitigated), positive (unaffected), or zero
>>> +	 * (requires mitigation). We only need to do anything in the
>>> +	 * last case.
>>> +	 */
>>> +	switch (psci_ops.conduit) {
>>> +	case PSCI_CONDUIT_HVC:
>>> +		arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
>>> +				  ARM_SMCCC_ARCH_WORKAROUND_2, &res);
>>> +		if ((int)res.a0 != 0)
>>> +			supported = false;
>>> +		break;
>>> +
>>> +	case PSCI_CONDUIT_SMC:
>>> +		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
>>> +				  ARM_SMCCC_ARCH_WORKAROUND_2, &res);
>>> +		if ((int)res.a0 != 0)
>>> +			supported = false;
>>> +		break;
>>> +
>>> +	default:
>>> +		supported = false;
>>> +	}
>>> +
>>> +	if (supported) {
>>> +		__this_cpu_write(arm64_ssbd_callback_required, 1);
>>> +		do_ssbd(true);
>>> +	}
>>
>>
>> Marc,
>>
>> As discussed, we have minor issue with the "corner case". If a CPU
>> is hotplugged in which requires the mitigation, after the system has
>> finalised the cap to "not available", the CPU could go ahead and
>> do the "work around" as above, while not effectively doing anything
>> about it at runtime for KVM guests (as thats the only place where
>> we rely on the CAP being set).
>>
>> But, yes this is real corner case. There is no easy way to solve it
>> other than
>>
>> 1) Allow late modifications to CPU hwcaps
>>
>> OR
>>
>> 2) Penalise the fastpath to always check per-cpu setting.
> 
> Shouldn't we just avoid bring up CPUs that require the mitigation after
> we've finalised the capability to say that it's not required? Assuming this
> is just another issue with maxcpus=, then I don't much care for it.
Ah! Sorry, yes we do kill the CPU. But it is just that it will set the
ssbd_callback_required flag and issue the do_ssbd(), which is not an issue.

Yes this can only be triggered by maxcpus=.

Suzuki

^ permalink raw reply	[flat|nested] 110+ messages in thread

* [PATCH 06/14] arm64: ssbd: Add global mitigation state accessor
  2018-07-20  9:47 [PATCH 00/14] arm64: 4.17 backport of the SSBD mitigation patches Marc Zyngier
@ 2018-07-20  9:47 ` Marc Zyngier
  0 siblings, 0 replies; 110+ messages in thread
From: Marc Zyngier @ 2018-07-20  9:47 UTC (permalink / raw)
  To: stable; +Cc: Will Deacon, Catalin Marinas, Mark Rutland, Christoffer Dall

commit c32e1736ca03904c03de0e4459a673be194f56fd upstream.

We're about to need the mitigation state in various parts of the
kernel in order to do the right thing for userspace and guests.

Let's expose an accessor that will let other subsystems know
about the state.

Reviewed-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
---
 arch/arm64/include/asm/cpufeature.h | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index b50650f3e496..b0fc3224ce8a 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -543,6 +543,16 @@ static inline u64 read_zcr_features(void)
 #define ARM64_SSBD_FORCE_ENABLE		2
 #define ARM64_SSBD_MITIGATED		3
 
+static inline int arm64_get_ssbd_state(void)
+{
+#ifdef CONFIG_ARM64_SSBD
+	extern int ssbd_state;
+	return ssbd_state;
+#else
+	return ARM64_SSBD_UNKNOWN;
+#endif
+}
+
 #endif /* __ASSEMBLY__ */
 
 #endif
-- 
2.18.0

^ permalink raw reply related	[flat|nested] 110+ messages in thread

end of thread, other threads:[~2018-07-20 10:35 UTC | newest]

Thread overview: 110+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-05-22 15:06 [PATCH 00/14] arm64 SSBD (aka Spectre-v4) mitigation Marc Zyngier
2018-05-22 15:06 ` Marc Zyngier
2018-05-22 15:06 ` [PATCH 01/14] arm/arm64: smccc: Add SMCCC-specific return codes Marc Zyngier
2018-05-22 15:06   ` Marc Zyngier
2018-05-22 15:06   ` Marc Zyngier
2018-05-24 10:55   ` Mark Rutland
2018-05-24 10:55     ` Mark Rutland
2018-05-22 15:06 ` [PATCH 02/14] arm64: Call ARCH_WORKAROUND_2 on transitions between EL0 and EL1 Marc Zyngier
2018-05-22 15:06   ` Marc Zyngier
2018-05-22 15:06   ` Marc Zyngier
2018-05-23  9:23   ` Julien Grall
2018-05-23  9:23     ` Julien Grall
2018-05-24 10:52     ` Mark Rutland
2018-05-24 10:52       ` Mark Rutland
2018-05-24 12:10       ` Robin Murphy
2018-05-24 12:10         ` Robin Murphy
2018-05-24 11:00   ` Mark Rutland
2018-05-24 11:00     ` Mark Rutland
2018-05-24 11:23     ` Mark Rutland
2018-05-24 11:23       ` Mark Rutland
2018-05-24 11:28       ` Marc Zyngier
2018-05-24 11:28         ` Marc Zyngier
2018-05-22 15:06 ` [PATCH 03/14] arm64: Add per-cpu infrastructure to call ARCH_WORKAROUND_2 Marc Zyngier
2018-05-22 15:06   ` Marc Zyngier
2018-05-23 10:03   ` Julien Grall
2018-05-23 10:03     ` Julien Grall
2018-05-24 11:14   ` Mark Rutland
2018-05-24 11:14     ` Mark Rutland
2018-05-22 15:06 ` [PATCH 04/14] arm64: Add ARCH_WORKAROUND_2 probing Marc Zyngier
2018-05-22 15:06   ` Marc Zyngier
2018-05-22 15:06   ` Marc Zyngier
2018-05-23 10:06   ` Julien Grall
2018-05-23 10:06     ` Julien Grall
2018-05-24  9:58   ` Suzuki K Poulose
2018-05-24  9:58     ` Suzuki K Poulose
2018-05-24 11:39     ` Will Deacon
2018-05-24 11:39       ` Will Deacon
2018-05-24 13:34       ` Suzuki K Poulose
2018-05-24 13:34         ` Suzuki K Poulose
2018-05-24 11:27   ` Mark Rutland
2018-05-24 11:27     ` Mark Rutland
2018-05-22 15:06 ` [PATCH 05/14] arm64: Add 'ssbd' command-line option Marc Zyngier
2018-05-22 15:06   ` Marc Zyngier
2018-05-22 15:29   ` Randy Dunlap
2018-05-22 15:29     ` Randy Dunlap
2018-05-22 15:29     ` Randy Dunlap
2018-05-23 10:08   ` Julien Grall
2018-05-23 10:08     ` Julien Grall
2018-05-24 11:40   ` Mark Rutland
2018-05-24 11:40     ` Mark Rutland
2018-05-24 11:52     ` Marc Zyngier
2018-05-24 11:52       ` Marc Zyngier
2018-05-22 15:06 ` [PATCH 06/14] arm64: ssbd: Add global mitigation state accessor Marc Zyngier
2018-05-22 15:06   ` Marc Zyngier
2018-05-23 10:11   ` Julien Grall
2018-05-23 10:11     ` Julien Grall
2018-05-24 11:41   ` Mark Rutland
2018-05-24 11:41     ` Mark Rutland
2018-05-22 15:06 ` [PATCH 07/14] arm64: ssbd: Skip apply_ssbd if not using dynamic mitigation Marc Zyngier
2018-05-22 15:06   ` Marc Zyngier
2018-05-23 10:13   ` Julien Grall
2018-05-23 10:13     ` Julien Grall
2018-05-24 11:43   ` Mark Rutland
2018-05-24 11:43     ` Mark Rutland
2018-05-22 15:06 ` [PATCH 08/14] arm64: ssbd: Disable mitigation on CPU resume if required by user Marc Zyngier
2018-05-22 15:06   ` Marc Zyngier
2018-05-22 15:06   ` Marc Zyngier
2018-05-23 10:52   ` Julien Grall
2018-05-23 10:52     ` Julien Grall
2018-05-24 11:55   ` Mark Rutland
2018-05-24 11:55     ` Mark Rutland
2018-05-22 15:06 ` [PATCH 09/14] arm64: ssbd: Introduce thread flag to control userspace mitigation Marc Zyngier
2018-05-22 15:06   ` Marc Zyngier
2018-05-22 15:06   ` Marc Zyngier
2018-05-24 12:01   ` Mark Rutland
2018-05-24 12:01     ` Mark Rutland
2018-05-24 12:16     ` Marc Zyngier
2018-05-24 12:16       ` Marc Zyngier
2018-05-24 12:19       ` Will Deacon
2018-05-24 12:19         ` Will Deacon
2018-05-24 12:36         ` Marc Zyngier
2018-05-24 12:36           ` Marc Zyngier
2018-05-22 15:06 ` [PATCH 10/14] arm64: ssbd: Add prctl interface for per-thread mitigation Marc Zyngier
2018-05-22 15:06   ` Marc Zyngier
2018-05-22 15:48   ` Dominik Brodowski
2018-05-22 15:48     ` Dominik Brodowski
2018-05-22 16:30     ` Marc Zyngier
2018-05-22 16:30       ` Marc Zyngier
2018-05-22 16:30       ` Marc Zyngier
2018-05-24 12:10   ` Mark Rutland
2018-05-24 12:10     ` Mark Rutland
2018-05-24 12:24   ` Will Deacon
2018-05-24 12:24     ` Will Deacon
2018-05-22 15:06 ` [PATCH 11/14] arm64: KVM: Add HYP per-cpu accessors Marc Zyngier
2018-05-22 15:06   ` Marc Zyngier
2018-05-24 12:11   ` Mark Rutland
2018-05-24 12:11     ` Mark Rutland
2018-05-22 15:06 ` [PATCH 12/14] arm64: KVM: Add ARCH_WORKAROUND_2 support for guests Marc Zyngier
2018-05-22 15:06   ` Marc Zyngier
2018-05-24 12:15   ` Mark Rutland
2018-05-24 12:15     ` Mark Rutland
2018-05-22 15:06 ` [PATCH 13/14] arm64: KVM: Handle guest's ARCH_WORKAROUND_2 requests Marc Zyngier
2018-05-22 15:06   ` Marc Zyngier
2018-05-24 12:22   ` Mark Rutland
2018-05-24 12:22     ` Mark Rutland
2018-05-22 15:06 ` [PATCH 14/14] arm64: KVM: Add ARCH_WORKAROUND_2 discovery through ARCH_FEATURES_FUNC_ID Marc Zyngier
2018-05-22 15:06   ` Marc Zyngier
2018-05-24 12:25   ` Mark Rutland
2018-05-24 12:25     ` Mark Rutland
2018-07-20  9:47 [PATCH 00/14] arm64: 4.17 backport of the SSBD mitigation patches Marc Zyngier
2018-07-20  9:47 ` [PATCH 06/14] arm64: ssbd: Add global mitigation state accessor Marc Zyngier

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.