* [RFC/RFT PATCH 00/16] arm64: backport SSBS handling to v4.19-stable
@ 2019-10-04 12:04 Ard Biesheuvel
  2019-10-04 12:04 ` [RFC/RFT PATCH 01/16] arm64: cpufeature: Detect SSBS and advertise to userspace Ard Biesheuvel
                   ` (16 more replies)
  0 siblings, 17 replies; 24+ messages in thread
From: Ard Biesheuvel @ 2019-10-04 12:04 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Suzuki K Poulose, Catalin Marinas, Ard Biesheuvel,
	Jeremy Linton, Andre Przywara, Marc Zyngier, Will Deacon

This is a fairly mechanical backport to v4.19 of the changes needed to support
managing the SSBS state, which controls whether Speculative Store Bypass is
permitted.

I have included Jeremy's sysfs changes as well, since they are equally
suitable for stable and make for a much cleaner backport, so it made
little sense to handle them separately.

These patches are presented for review, and are not being cc'ed to the
-stable maintainers just yet.

Cc: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Jeremy Linton <jeremy.linton@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>

Jeremy Linton (6):
  arm64: add sysfs vulnerability show for meltdown
  arm64: Provide a command line to disable spectre_v2 mitigation
  arm64: Always enable spectre-v2 vulnerability detection
  arm64: Always enable ssb vulnerability detection
  arm64: add sysfs vulnerability show for spectre-v2
  arm64: add sysfs vulnerability show for speculative store bypass

Marc Zyngier (2):
  arm64: Advertise mitigation of Spectre-v2, or lack thereof
  arm64: Force SSBS on context switch

Mark Rutland (1):
  arm64: fix SSBS sanitization

Mian Yousaf Kaukab (2):
  arm64: Add sysfs vulnerability show for spectre-v1
  arm64: enable generic CPU vulnerabilites support

Will Deacon (5):
  arm64: cpufeature: Detect SSBS and advertise to userspace
  arm64: ssbd: Add support for PSTATE.SSBS rather than trapping to EL3
  KVM: arm64: Set SCTLR_EL2.DSSBS if SSBD is forcefully disabled and
    !vhe
  arm64: docs: Document SSBS HWCAP
  arm64: ssbs: Don't treat CPUs with SSBS as unaffected by SSB

 Documentation/admin-guide/kernel-parameters.txt |   8 +-
 Documentation/arm64/elf_hwcaps.txt              |   4 +
 arch/arm64/Kconfig                              |   1 +
 arch/arm64/include/asm/cpucaps.h                |   3 +-
 arch/arm64/include/asm/cpufeature.h             |   4 -
 arch/arm64/include/asm/kvm_host.h               |  11 +
 arch/arm64/include/asm/processor.h              |  17 ++
 arch/arm64/include/asm/ptrace.h                 |   1 +
 arch/arm64/include/asm/sysreg.h                 |  19 +-
 arch/arm64/include/uapi/asm/hwcap.h             |   1 +
 arch/arm64/include/uapi/asm/ptrace.h            |   1 +
 arch/arm64/kernel/cpu_errata.c                  | 257 +++++++++++++-------
 arch/arm64/kernel/cpufeature.c                  | 122 ++++++++--
 arch/arm64/kernel/cpuinfo.c                     |   1 +
 arch/arm64/kernel/process.c                     |  31 +++
 arch/arm64/kernel/ptrace.c                      |  15 +-
 arch/arm64/kernel/ssbd.c                        |  21 ++
 arch/arm64/kvm/hyp/sysreg-sr.c                  |  11 +
 18 files changed, 407 insertions(+), 121 deletions(-)

-- 
2.20.1



* [RFC/RFT PATCH 01/16] arm64: cpufeature: Detect SSBS and advertise to userspace
  2019-10-04 12:04 [RFC/RFT PATCH 00/16] arm64: backport SSBS handling to v4.19-stable Ard Biesheuvel
@ 2019-10-04 12:04 ` Ard Biesheuvel
  2019-10-08 14:35   ` Mark Rutland
  2019-10-04 12:04 ` [RFC/RFT PATCH 02/16] arm64: ssbd: Add support for PSTATE.SSBS rather than trapping to EL3 Ard Biesheuvel
                   ` (15 subsequent siblings)
  16 siblings, 1 reply; 24+ messages in thread
From: Ard Biesheuvel @ 2019-10-04 12:04 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Suzuki K Poulose, Catalin Marinas, Ard Biesheuvel,
	Will Deacon, Jeremy Linton, Andre Przywara, Marc Zyngier,
	Will Deacon

From: Will Deacon <will.deacon@arm.com>

Armv8.5 introduces a new PSTATE bit known as Speculative Store Bypass
Safe (SSBS) which can be used as a mitigation against Spectre variant 4.

Additionally, a CPU may provide instructions to manipulate PSTATE.SSBS
directly, so that userspace can toggle the SSBS control without trapping
to the kernel.

This patch probes for the existence of SSBS and advertises the new
instructions to userspace if they exist.
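
As an aside (not part of the patch): on CPUs that implement the
"PSTATE manipulated via instructions" level, EL0 code can flip the
control directly through the SSBS special register (architectural
encoding S3_3_C4_C2_6, where bit 12 mirrors PSTATE.SSBS). A minimal,
hypothetical sketch:

  /* PSTATE.SSBS == 1 permits speculative store bypass,
   * PSTATE.SSBS == 0 requests the safe behaviour. */
  static inline void set_ssbs_bit(int permitted)
  {
          unsigned long val = permitted ? (1UL << 12) : 0UL;

          /* MSR (register form) to the SSBS special register */
          asm volatile("msr S3_3_C4_C2_6, %0" : : "r" (val) : "memory");
  }

Userspace should only attempt this after checking the HWCAP advertised
by this patch.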

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit d71be2b6c0e19180b5f80a6d42039cc074a693a2)
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/include/asm/cpucaps.h    |  3 ++-
 arch/arm64/include/asm/sysreg.h     | 16 ++++++++++++----
 arch/arm64/include/uapi/asm/hwcap.h |  1 +
 arch/arm64/kernel/cpufeature.c      | 19 +++++++++++++++++--
 arch/arm64/kernel/cpuinfo.c         |  1 +
 5 files changed, 33 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index 25ce9056cf64..c3de0bbf0e9a 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -52,7 +52,8 @@
 #define ARM64_MISMATCHED_CACHE_TYPE		31
 #define ARM64_HAS_STAGE2_FWB			32
 #define ARM64_WORKAROUND_1463225		33
+#define ARM64_SSBS				34
 
-#define ARM64_NCAPS				34
+#define ARM64_NCAPS				35
 
 #endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index c1470931b897..2fc6242baf11 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -419,6 +419,7 @@
 #define SYS_ICH_LR15_EL2		__SYS__LR8_EL2(7)
 
 /* Common SCTLR_ELx flags. */
+#define SCTLR_ELx_DSSBS	(1UL << 44)
 #define SCTLR_ELx_EE    (1 << 25)
 #define SCTLR_ELx_IESB	(1 << 21)
 #define SCTLR_ELx_WXN	(1 << 19)
@@ -439,7 +440,7 @@
 			 (1 << 10) | (1 << 13) | (1 << 14) | (1 << 15) | \
 			 (1 << 17) | (1 << 20) | (1 << 24) | (1 << 26) | \
 			 (1 << 27) | (1 << 30) | (1 << 31) | \
-			 (0xffffffffUL << 32))
+			 (0xffffefffUL << 32))
 
 #ifdef CONFIG_CPU_BIG_ENDIAN
 #define ENDIAN_SET_EL2		SCTLR_ELx_EE
@@ -453,7 +454,7 @@
 #define SCTLR_EL2_SET	(SCTLR_ELx_IESB   | ENDIAN_SET_EL2   | SCTLR_EL2_RES1)
 #define SCTLR_EL2_CLEAR	(SCTLR_ELx_M      | SCTLR_ELx_A    | SCTLR_ELx_C   | \
 			 SCTLR_ELx_SA     | SCTLR_ELx_I    | SCTLR_ELx_WXN | \
-			 ENDIAN_CLEAR_EL2 | SCTLR_EL2_RES0)
+			 SCTLR_ELx_DSSBS | ENDIAN_CLEAR_EL2 | SCTLR_EL2_RES0)
 
 #if (SCTLR_EL2_SET ^ SCTLR_EL2_CLEAR) != 0xffffffffffffffff
 #error "Inconsistent SCTLR_EL2 set/clear bits"
@@ -477,7 +478,7 @@
 			 (1 << 29))
 #define SCTLR_EL1_RES0  ((1 << 6)  | (1 << 10) | (1 << 13) | (1 << 17) | \
 			 (1 << 27) | (1 << 30) | (1 << 31) | \
-			 (0xffffffffUL << 32))
+			 (0xffffefffUL << 32))
 
 #ifdef CONFIG_CPU_BIG_ENDIAN
 #define ENDIAN_SET_EL1		(SCTLR_EL1_E0E | SCTLR_ELx_EE)
@@ -494,7 +495,7 @@
 			 ENDIAN_SET_EL1 | SCTLR_EL1_UCI  | SCTLR_EL1_RES1)
 #define SCTLR_EL1_CLEAR	(SCTLR_ELx_A   | SCTLR_EL1_CP15BEN | SCTLR_EL1_ITD    |\
 			 SCTLR_EL1_UMA | SCTLR_ELx_WXN     | ENDIAN_CLEAR_EL1 |\
-			 SCTLR_EL1_RES0)
+			 SCTLR_ELx_DSSBS | SCTLR_EL1_RES0)
 
 #if (SCTLR_EL1_SET ^ SCTLR_EL1_CLEAR) != 0xffffffffffffffff
 #error "Inconsistent SCTLR_EL1 set/clear bits"
@@ -544,6 +545,13 @@
 #define ID_AA64PFR0_EL0_64BIT_ONLY	0x1
 #define ID_AA64PFR0_EL0_32BIT_64BIT	0x2
 
+/* id_aa64pfr1 */
+#define ID_AA64PFR1_SSBS_SHIFT		4
+
+#define ID_AA64PFR1_SSBS_PSTATE_NI	0
+#define ID_AA64PFR1_SSBS_PSTATE_ONLY	1
+#define ID_AA64PFR1_SSBS_PSTATE_INSNS	2
+
 /* id_aa64mmfr0 */
 #define ID_AA64MMFR0_TGRAN4_SHIFT	28
 #define ID_AA64MMFR0_TGRAN64_SHIFT	24
diff --git a/arch/arm64/include/uapi/asm/hwcap.h b/arch/arm64/include/uapi/asm/hwcap.h
index 17c65c8f33cb..2bcd6e4f3474 100644
--- a/arch/arm64/include/uapi/asm/hwcap.h
+++ b/arch/arm64/include/uapi/asm/hwcap.h
@@ -48,5 +48,6 @@
 #define HWCAP_USCAT		(1 << 25)
 #define HWCAP_ILRCPC		(1 << 26)
 #define HWCAP_FLAGM		(1 << 27)
+#define HWCAP_SSBS		(1 << 28)
 
 #endif /* _UAPI__ASM_HWCAP_H */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 859d63cc99a3..58146d636a83 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -164,6 +164,11 @@ static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
 	ARM64_FTR_END,
 };
 
+static const struct arm64_ftr_bits ftr_id_aa64pfr1[] = {
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_SSBS_SHIFT, 4, ID_AA64PFR1_SSBS_PSTATE_NI),
+	ARM64_FTR_END,
+};
+
 static const struct arm64_ftr_bits ftr_id_aa64mmfr0[] = {
 	/*
 	 * We already refuse to boot CPUs that don't support our configured
@@ -379,7 +384,7 @@ static const struct __ftr_reg_entry {
 
 	/* Op1 = 0, CRn = 0, CRm = 4 */
 	ARM64_FTR_REG(SYS_ID_AA64PFR0_EL1, ftr_id_aa64pfr0),
-	ARM64_FTR_REG(SYS_ID_AA64PFR1_EL1, ftr_raz),
+	ARM64_FTR_REG(SYS_ID_AA64PFR1_EL1, ftr_id_aa64pfr1),
 	ARM64_FTR_REG(SYS_ID_AA64ZFR0_EL1, ftr_raz),
 
 	/* Op1 = 0, CRn = 0, CRm = 5 */
@@ -669,7 +674,6 @@ void update_cpu_features(int cpu,
 
 	/*
 	 * EL3 is not our concern.
-	 * ID_AA64PFR1 is currently RES0.
 	 */
 	taint |= check_update_ftr_reg(SYS_ID_AA64PFR0_EL1, cpu,
 				      info->reg_id_aa64pfr0, boot->reg_id_aa64pfr0);
@@ -1254,6 +1258,16 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.cpu_enable = cpu_enable_hw_dbm,
 	},
 #endif
+	{
+		.desc = "Speculative Store Bypassing Safe (SSBS)",
+		.capability = ARM64_SSBS,
+		.type = ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE,
+		.matches = has_cpuid_feature,
+		.sys_reg = SYS_ID_AA64PFR1_EL1,
+		.field_pos = ID_AA64PFR1_SSBS_SHIFT,
+		.sign = FTR_UNSIGNED,
+		.min_field_value = ID_AA64PFR1_SSBS_PSTATE_ONLY,
+	},
 	{},
 };
 
@@ -1299,6 +1313,7 @@ static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = {
 #ifdef CONFIG_ARM64_SVE
 	HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_SVE_SHIFT, FTR_UNSIGNED, ID_AA64PFR0_SVE, CAP_HWCAP, HWCAP_SVE),
 #endif
+	HWCAP_CAP(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_SSBS_SHIFT, FTR_UNSIGNED, ID_AA64PFR1_SSBS_PSTATE_INSNS, CAP_HWCAP, HWCAP_SSBS),
 	{},
 };
 
diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
index e9ab7b3ed317..dce971f2c167 100644
--- a/arch/arm64/kernel/cpuinfo.c
+++ b/arch/arm64/kernel/cpuinfo.c
@@ -81,6 +81,7 @@ static const char *const hwcap_str[] = {
 	"uscat",
 	"ilrcpc",
 	"flagm",
+	"ssbs",
 	NULL
 };
 
-- 
2.20.1



* [RFC/RFT PATCH 02/16] arm64: ssbd: Add support for PSTATE.SSBS rather than trapping to EL3
  2019-10-04 12:04 [RFC/RFT PATCH 00/16] arm64: backport SSBS handling to v4.19-stable Ard Biesheuvel
  2019-10-04 12:04 ` [RFC/RFT PATCH 01/16] arm64: cpufeature: Detect SSBS and advertise to userspace Ard Biesheuvel
@ 2019-10-04 12:04 ` Ard Biesheuvel
  2019-10-04 12:04 ` [RFC/RFT PATCH 03/16] KVM: arm64: Set SCTLR_EL2.DSSBS if SSBD is forcefully disabled and !vhe Ard Biesheuvel
                   ` (14 subsequent siblings)
  16 siblings, 0 replies; 24+ messages in thread
From: Ard Biesheuvel @ 2019-10-04 12:04 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Suzuki K Poulose, Catalin Marinas, Ard Biesheuvel,
	Will Deacon, Jeremy Linton, Andre Przywara, Marc Zyngier,
	Will Deacon

From: Will Deacon <will.deacon@arm.com>

On CPUs with support for PSTATE.SSBS, the kernel can toggle the SSBD
state without needing to call into firmware.

This patch hooks into the existing SSBD infrastructure so that SSBS is
used on CPUs that support it, but it's all made horribly complicated by
the very real possibility of big/little systems that don't uniformly
provide the new capability.
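
For reference (not part of the patch), the per-task knob this hooks
into is the existing speculation-control prctl; a minimal sketch of a
task opting out of Speculative Store Bypass, with the uapi constants
duplicated in case older headers lack them:

  #include <stdio.h>
  #include <sys/prctl.h>

  #ifndef PR_SET_SPECULATION_CTRL         /* pre-4.17 uapi headers */
  #define PR_GET_SPECULATION_CTRL 52
  #define PR_SET_SPECULATION_CTRL 53
  #define PR_SPEC_STORE_BYPASS    0
  #define PR_SPEC_DISABLE         (1UL << 2)
  #endif

  int main(void)
  {
          /* With this patch, SSBS-capable CPUs honour the request by
           * clearing PSTATE.SSBS instead of trapping to firmware. */
          if (prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS,
                    PR_SPEC_DISABLE, 0, 0))
                  perror("prctl");

          printf("ssb state: 0x%x\n",
                 prctl(PR_GET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS,
                       0, 0, 0));
          return 0;
  }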

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 8f04e8e6e29c93421a95b61cad62e3918425eac7)
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/include/asm/processor.h   |  7 +++
 arch/arm64/include/asm/ptrace.h      |  1 +
 arch/arm64/include/asm/sysreg.h      |  3 ++
 arch/arm64/include/uapi/asm/ptrace.h |  1 +
 arch/arm64/kernel/cpu_errata.c       | 26 ++++++++++-
 arch/arm64/kernel/cpufeature.c       | 45 ++++++++++++++++++++
 arch/arm64/kernel/process.c          |  4 ++
 arch/arm64/kernel/ssbd.c             | 21 +++++++++
 8 files changed, 106 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index def5a5e807f0..ad208bd402f7 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -182,6 +182,10 @@ static inline void start_thread(struct pt_regs *regs, unsigned long pc,
 {
 	start_thread_common(regs, pc);
 	regs->pstate = PSR_MODE_EL0t;
+
+	if (arm64_get_ssbd_state() != ARM64_SSBD_FORCE_ENABLE)
+		regs->pstate |= PSR_SSBS_BIT;
+
 	regs->sp = sp;
 }
 
@@ -198,6 +202,9 @@ static inline void compat_start_thread(struct pt_regs *regs, unsigned long pc,
 	regs->pstate |= PSR_AA32_E_BIT;
 #endif
 
+	if (arm64_get_ssbd_state() != ARM64_SSBD_FORCE_ENABLE)
+		regs->pstate |= PSR_AA32_SSBS_BIT;
+
 	regs->compat_sp = sp;
 }
 #endif
diff --git a/arch/arm64/include/asm/ptrace.h b/arch/arm64/include/asm/ptrace.h
index 177b851ca6d9..6bc43889d11e 100644
--- a/arch/arm64/include/asm/ptrace.h
+++ b/arch/arm64/include/asm/ptrace.h
@@ -50,6 +50,7 @@
 #define PSR_AA32_I_BIT		0x00000080
 #define PSR_AA32_A_BIT		0x00000100
 #define PSR_AA32_E_BIT		0x00000200
+#define PSR_AA32_SSBS_BIT	0x00800000
 #define PSR_AA32_DIT_BIT	0x01000000
 #define PSR_AA32_Q_BIT		0x08000000
 #define PSR_AA32_V_BIT		0x10000000
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 2fc6242baf11..3091ae5975a3 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -86,11 +86,14 @@
 
 #define REG_PSTATE_PAN_IMM		sys_reg(0, 0, 4, 0, 4)
 #define REG_PSTATE_UAO_IMM		sys_reg(0, 0, 4, 0, 3)
+#define REG_PSTATE_SSBS_IMM		sys_reg(0, 3, 4, 0, 1)
 
 #define SET_PSTATE_PAN(x) __emit_inst(0xd5000000 | REG_PSTATE_PAN_IMM |	\
 				      (!!x)<<8 | 0x1f)
 #define SET_PSTATE_UAO(x) __emit_inst(0xd5000000 | REG_PSTATE_UAO_IMM |	\
 				      (!!x)<<8 | 0x1f)
+#define SET_PSTATE_SSBS(x) __emit_inst(0xd5000000 | REG_PSTATE_SSBS_IMM | \
+				       (!!x)<<8 | 0x1f)
 
 #define SYS_DC_ISW			sys_insn(1, 0, 7, 6, 2)
 #define SYS_DC_CSW			sys_insn(1, 0, 7, 10, 2)
diff --git a/arch/arm64/include/uapi/asm/ptrace.h b/arch/arm64/include/uapi/asm/ptrace.h
index 5dff8eccd17d..b0fd1d300154 100644
--- a/arch/arm64/include/uapi/asm/ptrace.h
+++ b/arch/arm64/include/uapi/asm/ptrace.h
@@ -46,6 +46,7 @@
 #define PSR_I_BIT	0x00000080
 #define PSR_A_BIT	0x00000100
 #define PSR_D_BIT	0x00000200
+#define PSR_SSBS_BIT	0x00001000
 #define PSR_PAN_BIT	0x00400000
 #define PSR_UAO_BIT	0x00800000
 #define PSR_V_BIT	0x10000000
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index dc6c535cbd13..7fe3a60d1086 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -312,6 +312,14 @@ void __init arm64_enable_wa2_handling(struct alt_instr *alt,
 
 void arm64_set_ssbd_mitigation(bool state)
 {
+	if (this_cpu_has_cap(ARM64_SSBS)) {
+		if (state)
+			asm volatile(SET_PSTATE_SSBS(0));
+		else
+			asm volatile(SET_PSTATE_SSBS(1));
+		return;
+	}
+
 	switch (psci_ops.conduit) {
 	case PSCI_CONDUIT_HVC:
 		arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_WORKAROUND_2, state, NULL);
@@ -336,6 +344,11 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
 
 	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
 
+	if (this_cpu_has_cap(ARM64_SSBS)) {
+		required = false;
+		goto out_printmsg;
+	}
+
 	if (psci_ops.smccc_version == SMCCC_VERSION_1_0) {
 		ssbd_state = ARM64_SSBD_UNKNOWN;
 		return false;
@@ -384,7 +397,6 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
 
 	switch (ssbd_state) {
 	case ARM64_SSBD_FORCE_DISABLE:
-		pr_info_once("%s disabled from command-line\n", entry->desc);
 		arm64_set_ssbd_mitigation(false);
 		required = false;
 		break;
@@ -397,7 +409,6 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
 		break;
 
 	case ARM64_SSBD_FORCE_ENABLE:
-		pr_info_once("%s forced from command-line\n", entry->desc);
 		arm64_set_ssbd_mitigation(true);
 		required = true;
 		break;
@@ -407,6 +418,17 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
 		break;
 	}
 
+out_printmsg:
+	switch (ssbd_state) {
+	case ARM64_SSBD_FORCE_DISABLE:
+		pr_info_once("%s disabled from command-line\n", entry->desc);
+		break;
+
+	case ARM64_SSBD_FORCE_ENABLE:
+		pr_info_once("%s forced from command-line\n", entry->desc);
+		break;
+	}
+
 	return required;
 }
 #endif	/* CONFIG_ARM64_SSBD */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 58146d636a83..18fd61f6d578 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1071,6 +1071,48 @@ static void cpu_has_fwb(const struct arm64_cpu_capabilities *__unused)
 	WARN_ON(val & (7 << 27 | 7 << 21));
 }
 
+#ifdef CONFIG_ARM64_SSBD
+static int ssbs_emulation_handler(struct pt_regs *regs, u32 instr)
+{
+	if (user_mode(regs))
+		return 1;
+
+	if (instr & BIT(CRm_shift))
+		regs->pstate |= PSR_SSBS_BIT;
+	else
+		regs->pstate &= ~PSR_SSBS_BIT;
+
+	arm64_skip_faulting_instruction(regs, 4);
+	return 0;
+}
+
+static struct undef_hook ssbs_emulation_hook = {
+	.instr_mask	= ~(1U << CRm_shift),
+	.instr_val	= 0xd500001f | REG_PSTATE_SSBS_IMM,
+	.fn		= ssbs_emulation_handler,
+};
+
+static void cpu_enable_ssbs(const struct arm64_cpu_capabilities *__unused)
+{
+	static bool undef_hook_registered = false;
+	static DEFINE_SPINLOCK(hook_lock);
+
+	spin_lock(&hook_lock);
+	if (!undef_hook_registered) {
+		register_undef_hook(&ssbs_emulation_hook);
+		undef_hook_registered = true;
+	}
+	spin_unlock(&hook_lock);
+
+	if (arm64_get_ssbd_state() == ARM64_SSBD_FORCE_DISABLE) {
+		sysreg_clear_set(sctlr_el1, 0, SCTLR_ELx_DSSBS);
+		arm64_set_ssbd_mitigation(false);
+	} else {
+		arm64_set_ssbd_mitigation(true);
+	}
+}
+#endif /* CONFIG_ARM64_SSBD */
+
 static const struct arm64_cpu_capabilities arm64_features[] = {
 	{
 		.desc = "GIC system register CPU interface",
@@ -1258,6 +1300,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.cpu_enable = cpu_enable_hw_dbm,
 	},
 #endif
+#ifdef CONFIG_ARM64_SSBD
 	{
 		.desc = "Speculative Store Bypassing Safe (SSBS)",
 		.capability = ARM64_SSBS,
@@ -1267,7 +1310,9 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.field_pos = ID_AA64PFR1_SSBS_SHIFT,
 		.sign = FTR_UNSIGNED,
 		.min_field_value = ID_AA64PFR1_SSBS_PSTATE_ONLY,
+		.cpu_enable = cpu_enable_ssbs,
 	},
+#endif
 	{},
 };
 
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 7f1628effe6d..ce99c58cd1f1 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -358,6 +358,10 @@ int copy_thread(unsigned long clone_flags, unsigned long stack_start,
 		if (IS_ENABLED(CONFIG_ARM64_UAO) &&
 		    cpus_have_const_cap(ARM64_HAS_UAO))
 			childregs->pstate |= PSR_UAO_BIT;
+
+		if (arm64_get_ssbd_state() == ARM64_SSBD_FORCE_DISABLE)
+			childregs->pstate |= PSR_SSBS_BIT;
+
 		p->thread.cpu_context.x19 = stack_start;
 		p->thread.cpu_context.x20 = stk_sz;
 	}
diff --git a/arch/arm64/kernel/ssbd.c b/arch/arm64/kernel/ssbd.c
index 388f8fc13080..f496fb2f7122 100644
--- a/arch/arm64/kernel/ssbd.c
+++ b/arch/arm64/kernel/ssbd.c
@@ -3,13 +3,31 @@
  * Copyright (C) 2018 ARM Ltd, All Rights Reserved.
  */
 
+#include <linux/compat.h>
 #include <linux/errno.h>
 #include <linux/prctl.h>
 #include <linux/sched.h>
+#include <linux/sched/task_stack.h>
 #include <linux/thread_info.h>
 
 #include <asm/cpufeature.h>
 
+static void ssbd_ssbs_enable(struct task_struct *task)
+{
+	u64 val = is_compat_thread(task_thread_info(task)) ?
+		  PSR_AA32_SSBS_BIT : PSR_SSBS_BIT;
+
+	task_pt_regs(task)->pstate |= val;
+}
+
+static void ssbd_ssbs_disable(struct task_struct *task)
+{
+	u64 val = is_compat_thread(task_thread_info(task)) ?
+		  PSR_AA32_SSBS_BIT : PSR_SSBS_BIT;
+
+	task_pt_regs(task)->pstate &= ~val;
+}
+
 /*
  * prctl interface for SSBD
  * FIXME: Drop the below ifdefery once merged in 4.18.
@@ -47,12 +65,14 @@ static int ssbd_prctl_set(struct task_struct *task, unsigned long ctrl)
 			return -EPERM;
 		task_clear_spec_ssb_disable(task);
 		clear_tsk_thread_flag(task, TIF_SSBD);
+		ssbd_ssbs_enable(task);
 		break;
 	case PR_SPEC_DISABLE:
 		if (state == ARM64_SSBD_FORCE_DISABLE)
 			return -EPERM;
 		task_set_spec_ssb_disable(task);
 		set_tsk_thread_flag(task, TIF_SSBD);
+		ssbd_ssbs_disable(task);
 		break;
 	case PR_SPEC_FORCE_DISABLE:
 		if (state == ARM64_SSBD_FORCE_DISABLE)
@@ -60,6 +80,7 @@ static int ssbd_prctl_set(struct task_struct *task, unsigned long ctrl)
 		task_set_spec_ssb_disable(task);
 		task_set_spec_ssb_force_disable(task);
 		set_tsk_thread_flag(task, TIF_SSBD);
+		ssbd_ssbs_disable(task);
 		break;
 	default:
 		return -ERANGE;
-- 
2.20.1



* [RFC/RFT PATCH 03/16] KVM: arm64: Set SCTLR_EL2.DSSBS if SSBD is forcefully disabled and !vhe
  2019-10-04 12:04 [RFC/RFT PATCH 00/16] arm64: backport SSBS handling to v4.19-stable Ard Biesheuvel
  2019-10-04 12:04 ` [RFC/RFT PATCH 01/16] arm64: cpufeature: Detect SSBS and advertise to userspace Ard Biesheuvel
  2019-10-04 12:04 ` [RFC/RFT PATCH 02/16] arm64: ssbd: Add support for PSTATE.SSBS rather than trapping to EL3 Ard Biesheuvel
@ 2019-10-04 12:04 ` Ard Biesheuvel
  2019-10-04 12:04 ` [RFC/RFT PATCH 04/16] arm64: docs: Document SSBS HWCAP Ard Biesheuvel
                   ` (13 subsequent siblings)
  16 siblings, 0 replies; 24+ messages in thread
From: Ard Biesheuvel @ 2019-10-04 12:04 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Suzuki K Poulose, Catalin Marinas, Ard Biesheuvel,
	Will Deacon, Christoffer Dall, Jeremy Linton, Andre Przywara,
	Marc Zyngier, Will Deacon

From: Will Deacon <will.deacon@arm.com>

When running without VHE, it is necessary to set SCTLR_EL2.DSSBS if SSBD
has been forcefully disabled on the kernel command-line.

Acked-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 7c36447ae5a090729e7b129f24705bb231a07e0b)
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/include/asm/kvm_host.h | 11 +++++++++++
 arch/arm64/kvm/hyp/sysreg-sr.c    | 11 +++++++++++
 2 files changed, 22 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 6abe4002945f..367b2e0b6d76 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -398,6 +398,8 @@ struct kvm_vcpu *kvm_mpidr_to_vcpu(struct kvm *kvm, unsigned long mpidr);
 
 DECLARE_PER_CPU(kvm_cpu_context_t, kvm_host_cpu_state);
 
+void __kvm_enable_ssbs(void);
+
 static inline void __cpu_init_hyp_mode(phys_addr_t pgd_ptr,
 				       unsigned long hyp_stack_ptr,
 				       unsigned long vector_ptr)
@@ -418,6 +420,15 @@ static inline void __cpu_init_hyp_mode(phys_addr_t pgd_ptr,
 	 */
 	BUG_ON(!static_branch_likely(&arm64_const_caps_ready));
 	__kvm_call_hyp((void *)pgd_ptr, hyp_stack_ptr, vector_ptr, tpidr_el2);
+
+	/*
+	 * Disabling SSBD on a non-VHE system requires us to enable SSBS
+	 * at EL2.
+	 */
+	if (!has_vhe() && this_cpu_has_cap(ARM64_SSBS) &&
+	    arm64_get_ssbd_state() == ARM64_SSBD_FORCE_DISABLE) {
+		kvm_call_hyp(__kvm_enable_ssbs);
+	}
 }
 
 static inline bool kvm_arch_check_sve_has_vhe(void)
diff --git a/arch/arm64/kvm/hyp/sysreg-sr.c b/arch/arm64/kvm/hyp/sysreg-sr.c
index 963d669ae3a2..7414b76191c2 100644
--- a/arch/arm64/kvm/hyp/sysreg-sr.c
+++ b/arch/arm64/kvm/hyp/sysreg-sr.c
@@ -293,3 +293,14 @@ void kvm_vcpu_put_sysregs(struct kvm_vcpu *vcpu)
 
 	vcpu->arch.sysregs_loaded_on_cpu = false;
 }
+
+void __hyp_text __kvm_enable_ssbs(void)
+{
+	u64 tmp;
+
+	asm volatile(
+	"mrs	%0, sctlr_el2\n"
+	"orr	%0, %0, %1\n"
+	"msr	sctlr_el2, %0"
+	: "=&r" (tmp) : "L" (SCTLR_ELx_DSSBS));
+}
-- 
2.20.1



* [RFC/RFT PATCH 04/16] arm64: docs: Document SSBS HWCAP
  2019-10-04 12:04 [RFC/RFT PATCH 00/16] arm64: backport SSBS handling to v4.19-stable Ard Biesheuvel
                   ` (2 preceding siblings ...)
  2019-10-04 12:04 ` [RFC/RFT PATCH 03/16] KVM: arm64: Set SCTLR_EL2.DSSBS if SSBD is forcefully disabled and !vhe Ard Biesheuvel
@ 2019-10-04 12:04 ` Ard Biesheuvel
  2019-10-04 12:04 ` [RFC/RFT PATCH 05/16] arm64: fix SSBS sanitization Ard Biesheuvel
                   ` (12 subsequent siblings)
  16 siblings, 0 replies; 24+ messages in thread
From: Ard Biesheuvel @ 2019-10-04 12:04 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Suzuki K Poulose, Catalin Marinas, Ard Biesheuvel,
	Will Deacon, Jeremy Linton, Andre Przywara, Marc Zyngier,
	Will Deacon

From: Will Deacon <will.deacon@arm.com>

We advertise the MRS/MSR instructions for toggling SSBS at EL0 using an
HWCAP, so document it along with the others.
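
For completeness, the usual consumer-side check looks like the sketch
below (the fallback #define is only needed while toolchain headers
predate this series):

  #include <stdio.h>
  #include <sys/auxv.h>

  #ifndef HWCAP_SSBS
  #define HWCAP_SSBS      (1 << 28)
  #endif

  int main(void)
  {
          /* mirrors the "ssbs" flag printed in /proc/cpuinfo */
          printf("ssbs: %s\n",
                 (getauxval(AT_HWCAP) & HWCAP_SSBS) ? "yes" : "no");
          return 0;
  }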

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit ee91176120bd584aa10c564e7e9fdcaf397190a1)
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 Documentation/arm64/elf_hwcaps.txt | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/Documentation/arm64/elf_hwcaps.txt b/Documentation/arm64/elf_hwcaps.txt
index d6aff2c5e9e2..6feaffe90e22 100644
--- a/Documentation/arm64/elf_hwcaps.txt
+++ b/Documentation/arm64/elf_hwcaps.txt
@@ -178,3 +178,7 @@ HWCAP_ILRCPC
 HWCAP_FLAGM
 
     Functionality implied by ID_AA64ISAR0_EL1.TS == 0b0001.
+
+HWCAP_SSBS
+
+    Functionality implied by ID_AA64PFR1_EL1.SSBS == 0b0010.
-- 
2.20.1



* [RFC/RFT PATCH 05/16] arm64: fix SSBS sanitization
  2019-10-04 12:04 [RFC/RFT PATCH 00/16] arm64: backport SSBS handling to v4.19-stable Ard Biesheuvel
                   ` (3 preceding siblings ...)
  2019-10-04 12:04 ` [RFC/RFT PATCH 04/16] arm64: docs: Document SSBS HWCAP Ard Biesheuvel
@ 2019-10-04 12:04 ` Ard Biesheuvel
  2019-10-04 12:04 ` [RFC/RFT PATCH 06/16] arm64: Add sysfs vulnerability show for spectre-v1 Ard Biesheuvel
                   ` (11 subsequent siblings)
  16 siblings, 0 replies; 24+ messages in thread
From: Ard Biesheuvel @ 2019-10-04 12:04 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Suzuki K Poulose, Catalin Marinas, Ard Biesheuvel,
	Will Deacon, Jeremy Linton, Andre Przywara, Marc Zyngier,
	Will Deacon

From: Mark Rutland <mark.rutland@arm.com>

In valid_user_regs() we treat SSBS as a RES0 bit, and consequently it is
unexpectedly cleared when we restore a sigframe or fiddle with GPRs via
ptrace.

This patch fixes valid_user_regs() to account for this, updating the
function to refer to the latest ARM ARM (ARM DDI 0487D.a). For AArch32
tasks, SSBS appears in bit 23 of SPSR_EL1, matching its position in the
AArch32-native PSR format, and we don't need to translate it as we have
to for DIT.

There are no other bit assignments that we need to account for today.
As the recent documentation describes the DIT bit, we can drop our
comment regarding DIT.

While removing SSBS from the RES0 masks, existing inconsistent
whitespace is corrected.
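
For the AArch64 mask, the change simply punches the SSBS hole (bit 12)
into what used to be a contiguous bits 10-20 RES0 range; the AArch32
mask likewise releases bit 23. A quick sanity check, assuming the
GENMASK_ULL() semantics of include/linux/bits.h:

  #include <stdio.h>

  #define GENMASK_ULL(h, l) \
          (((~0ULL) << (l)) & (~0ULL >> (63 - (h))))

  int main(void)
  {
          unsigned long long before = GENMASK_ULL(20, 10);
          unsigned long long after  = GENMASK_ULL(20, 13) |
                                      GENMASK_ULL(11, 10);

          /* prints 0x1000, i.e. PSR_SSBS_BIT: the only bit released */
          printf("%#llx\n", before ^ after);
          return 0;
  }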

Fixes: d71be2b6c0e19180 ("arm64: cpufeature: Detect SSBS and advertise to userspace")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
(cherry picked from commit f54dada8274643e3ff4436df0ea124aeedc43cae)
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/kernel/ptrace.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
index 6219486fa25f..0211c3c7533b 100644
--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -1666,19 +1666,20 @@ void syscall_trace_exit(struct pt_regs *regs)
 }
 
 /*
- * SPSR_ELx bits which are always architecturally RES0 per ARM DDI 0487C.a
- * We also take into account DIT (bit 24), which is not yet documented, and
- * treat PAN and UAO as RES0 bits, as they are meaningless at EL0, and may be
- * allocated an EL0 meaning in future.
+ * SPSR_ELx bits which are always architecturally RES0 per ARM DDI 0487D.a.
+ * We permit userspace to set SSBS (AArch64 bit 12, AArch32 bit 23) which is
+ * not described in ARM DDI 0487D.a.
+ * We treat PAN and UAO as RES0 bits, as they are meaningless at EL0, and may
+ * be allocated an EL0 meaning in future.
  * Userspace cannot use these until they have an architectural meaning.
  * Note that this follows the SPSR_ELx format, not the AArch32 PSR format.
  * We also reserve IL for the kernel; SS is handled dynamically.
  */
 #define SPSR_EL1_AARCH64_RES0_BITS \
-	(GENMASK_ULL(63,32) | GENMASK_ULL(27, 25) | GENMASK_ULL(23, 22) | \
-	 GENMASK_ULL(20, 10) | GENMASK_ULL(5, 5))
+	(GENMASK_ULL(63, 32) | GENMASK_ULL(27, 25) | GENMASK_ULL(23, 22) | \
+	 GENMASK_ULL(20, 13) | GENMASK_ULL(11, 10) | GENMASK_ULL(5, 5))
 #define SPSR_EL1_AARCH32_RES0_BITS \
-	(GENMASK_ULL(63,32) | GENMASK_ULL(23, 22) | GENMASK_ULL(20,20))
+	(GENMASK_ULL(63, 32) | GENMASK_ULL(22, 22) | GENMASK_ULL(20, 20))
 
 static int valid_compat_regs(struct user_pt_regs *regs)
 {
-- 
2.20.1



* [RFC/RFT PATCH 06/16] arm64: Add sysfs vulnerability show for spectre-v1
  2019-10-04 12:04 [RFC/RFT PATCH 00/16] arm64: backport SSBS handling to v4.19-stable Ard Biesheuvel
                   ` (4 preceding siblings ...)
  2019-10-04 12:04 ` [RFC/RFT PATCH 05/16] arm64: fix SSBS sanitization Ard Biesheuvel
@ 2019-10-04 12:04 ` Ard Biesheuvel
  2019-10-04 12:04 ` [RFC/RFT PATCH 07/16] arm64: add sysfs vulnerability show for meltdown Ard Biesheuvel
                   ` (10 subsequent siblings)
  16 siblings, 0 replies; 24+ messages in thread
From: Ard Biesheuvel @ 2019-10-04 12:04 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Stefan Wahren, Suzuki K Poulose, Catalin Marinas,
	Ard Biesheuvel, Mian Yousaf Kaukab, Jeremy Linton,
	Andre Przywara, Marc Zyngier, Will Deacon, Will Deacon

From: Mian Yousaf Kaukab <ykaukab@suse.de>

spectre-v1 has been mitigated and the mitigation is always active.
Report this to userspace via sysfs.

Signed-off-by: Mian Yousaf Kaukab <ykaukab@suse.de>
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
Acked-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
(cherry picked from commit 3891ebccace188af075ce143d8b072b65e90f695)
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/kernel/cpu_errata.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 7fe3a60d1086..3758ba538a43 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -729,3 +729,9 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 	{
 	}
 };
+
+ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr,
+			    char *buf)
+{
+	return sprintf(buf, "Mitigation: __user pointer sanitization\n");
+}
-- 
2.20.1



* [RFC/RFT PATCH 07/16] arm64: add sysfs vulnerability show for meltdown
  2019-10-04 12:04 [RFC/RFT PATCH 00/16] arm64: backport SSBS handling to v4.19-stable Ard Biesheuvel
                   ` (5 preceding siblings ...)
  2019-10-04 12:04 ` [RFC/RFT PATCH 06/16] arm64: Add sysfs vulnerability show for spectre-v1 Ard Biesheuvel
@ 2019-10-04 12:04 ` Ard Biesheuvel
  2019-10-04 12:04 ` [RFC/RFT PATCH 08/16] arm64: enable generic CPU vulnerabilites support Ard Biesheuvel
                   ` (9 subsequent siblings)
  16 siblings, 0 replies; 24+ messages in thread
From: Ard Biesheuvel @ 2019-10-04 12:04 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Stefan Wahren, Suzuki K Poulose, Catalin Marinas,
	Ard Biesheuvel, Will Deacon, Jeremy Linton, Andre Przywara,
	Marc Zyngier, Will Deacon

From: Jeremy Linton <jeremy.linton@arm.com>

We implement page table isolation as a mitigation for meltdown.
Report this to userspace via sysfs.

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
(cherry picked from commit 1b3ccf4be0e7be8c4bd8522066b6cbc92591e912)
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/kernel/cpufeature.c | 58 +++++++++++++++-----
 1 file changed, 44 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 18fd61f6d578..e93bbadc0cf1 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -889,7 +889,7 @@ static bool has_cache_dic(const struct arm64_cpu_capabilities *entry,
 	return ctr & BIT(CTR_DIC_SHIFT);
 }
 
-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+static bool __meltdown_safe = true;
 static int __kpti_forced; /* 0: not forced, >0: forced on, <0: forced off */
 
 static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
@@ -908,6 +908,16 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
 		{ /* sentinel */ }
 	};
 	char const *str = "command line option";
+	bool meltdown_safe;
+
+	meltdown_safe = is_midr_in_range_list(read_cpuid_id(), kpti_safe_list);
+
+	/* Defer to CPU feature registers */
+	if (has_cpuid_feature(entry, scope))
+		meltdown_safe = true;
+
+	if (!meltdown_safe)
+		__meltdown_safe = false;
 
 	/*
 	 * For reasons that aren't entirely clear, enabling KPTI on Cavium
@@ -919,6 +929,19 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
 		__kpti_forced = -1;
 	}
 
+	/* Useful for KASLR robustness */
+	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) && kaslr_offset() > 0) {
+		if (!__kpti_forced) {
+			str = "KASLR";
+			__kpti_forced = 1;
+		}
+	}
+
+	if (!IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0)) {
+		pr_info_once("kernel page table isolation disabled by kernel configuration\n");
+		return false;
+	}
+
 	/* Forced? */
 	if (__kpti_forced) {
 		pr_info_once("kernel page table isolation forced %s by %s\n",
@@ -926,18 +949,10 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
 		return __kpti_forced > 0;
 	}
 
-	/* Useful for KASLR robustness */
-	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE))
-		return true;
-
-	/* Don't force KPTI for CPUs that are not vulnerable */
-	if (is_midr_in_range_list(read_cpuid_id(), kpti_safe_list))
-		return false;
-
-	/* Defer to CPU feature registers */
-	return !has_cpuid_feature(entry, scope);
+	return !meltdown_safe;
 }
 
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 static void
 kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
 {
@@ -962,6 +977,12 @@ kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
 
 	return;
 }
+#else
+static void
+kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
+{
+}
+#endif	/* CONFIG_UNMAP_KERNEL_AT_EL0 */
 
 static int __init parse_kpti(char *str)
 {
@@ -975,7 +996,6 @@ static int __init parse_kpti(char *str)
 	return 0;
 }
 early_param("kpti", parse_kpti);
-#endif	/* CONFIG_UNMAP_KERNEL_AT_EL0 */
 
 #ifdef CONFIG_ARM64_HW_AFDBM
 static inline void __cpu_enable_hw_dbm(void)
@@ -1196,7 +1216,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.field_pos = ID_AA64PFR0_EL0_SHIFT,
 		.min_field_value = ID_AA64PFR0_EL0_32BIT_64BIT,
 	},
-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 	{
 		.desc = "Kernel page table isolation (KPTI)",
 		.capability = ARM64_UNMAP_KERNEL_AT_EL0,
@@ -1212,7 +1231,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.matches = unmap_kernel_at_el0,
 		.cpu_enable = kpti_install_ng_mappings,
 	},
-#endif
 	{
 		/* FP/SIMD is not implemented */
 		.capability = ARM64_HAS_NO_FPSIMD,
@@ -1853,3 +1871,15 @@ void cpu_clear_disr(const struct arm64_cpu_capabilities *__unused)
 	/* Firmware may have left a deferred SError in this register. */
 	write_sysreg_s(0, SYS_DISR_EL1);
 }
+
+ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr,
+			  char *buf)
+{
+	if (__meltdown_safe)
+		return sprintf(buf, "Not affected\n");
+
+	if (arm64_kernel_unmapped_at_el0())
+		return sprintf(buf, "Mitigation: PTI\n");
+
+	return sprintf(buf, "Vulnerable\n");
+}
-- 
2.20.1



* [RFC/RFT PATCH 08/16] arm64: enable generic CPU vulnerabilites support
  2019-10-04 12:04 [RFC/RFT PATCH 00/16] arm64: backport SSBS handling to v4.19-stable Ard Biesheuvel
                   ` (6 preceding siblings ...)
  2019-10-04 12:04 ` [RFC/RFT PATCH 07/16] arm64: add sysfs vulnerability show for meltdown Ard Biesheuvel
@ 2019-10-04 12:04 ` Ard Biesheuvel
  2019-10-04 12:04   ` Ard Biesheuvel
                   ` (8 subsequent siblings)
  16 siblings, 0 replies; 24+ messages in thread
From: Ard Biesheuvel @ 2019-10-04 12:04 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Stefan Wahren, Suzuki K Poulose, Catalin Marinas,
	Ard Biesheuvel, Mian Yousaf Kaukab, Jeremy Linton,
	Andre Przywara, Marc Zyngier, Will Deacon, Will Deacon

From: Mian Yousaf Kaukab <ykaukab@suse.de>

Enable CPU vulnerability show functions for spectre_v1, spectre_v2,
meltdown and store-bypass.
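
Once enabled, the standard files under
/sys/devices/system/cpu/vulnerabilities/ are populated; a small
illustrative reader (not part of the patch):

  #include <stdio.h>

  int main(void)
  {
          static const char *const files[] = {
                  "/sys/devices/system/cpu/vulnerabilities/spectre_v1",
                  "/sys/devices/system/cpu/vulnerabilities/spectre_v2",
                  "/sys/devices/system/cpu/vulnerabilities/meltdown",
                  "/sys/devices/system/cpu/vulnerabilities/spec_store_bypass",
          };

          for (unsigned int i = 0; i < sizeof(files) / sizeof(files[0]); i++) {
                  char line[128];
                  FILE *f = fopen(files[i], "r");

                  if (f && fgets(line, sizeof(line), f))
                          printf("%s: %s", files[i], line);
                  if (f)
                          fclose(f);
          }
          return 0;
  }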

Signed-off-by: Mian Yousaf Kaukab <ykaukab@suse.de>
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
(cherry picked from commit 61ae1321f06c4489c724c803e9b8363dea576da3)
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index e3ebece79617..51fe21f5d078 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -84,6 +84,7 @@ config ARM64
 	select GENERIC_CLOCKEVENTS
 	select GENERIC_CLOCKEVENTS_BROADCAST
 	select GENERIC_CPU_AUTOPROBE
+	select GENERIC_CPU_VULNERABILITIES
 	select GENERIC_EARLY_IOREMAP
 	select GENERIC_IDLE_POLL_SETUP
 	select GENERIC_IRQ_MULTI_HANDLER
-- 
2.20.1



* [RFC/RFT PATCH 09/16] arm64: Provide a command line to disable spectre_v2 mitigation
  2019-10-04 12:04 [RFC/RFT PATCH 00/16] arm64: backport SSBS handling to v4.19-stable Ard Biesheuvel
@ 2019-10-04 12:04   ` Ard Biesheuvel
  2019-10-04 12:04 ` [RFC/RFT PATCH 02/16] arm64: ssbd: Add support for PSTATE.SSBS rather than trapping to EL3 Ard Biesheuvel
                     ` (15 subsequent siblings)
  16 siblings, 0 replies; 24+ messages in thread
From: Ard Biesheuvel @ 2019-10-04 12:04 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Ard Biesheuvel, Will Deacon, Catalin Marinas, Marc Zyngier,
	Mark Rutland, Suzuki K Poulose, Jeremy Linton, Andre Przywara,
	Stefan Wahren, Jonathan Corbet, linux-doc, Will Deacon

From: Jeremy Linton <jeremy.linton@arm.com>

There are various reasons, such as benchmarking, to disable spectrev2
mitigation on a machine. Provide a command-line option to do so.

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: linux-doc@vger.kernel.org
Signed-off-by: Will Deacon <will.deacon@arm.com>
(cherry picked from commit e5ce5e7267ddcbe13ab9ead2542524e1b7993e5a)
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 Documentation/admin-guide/kernel-parameters.txt |  8 ++++----
 arch/arm64/kernel/cpu_errata.c                  | 13 +++++++++++++
 2 files changed, 17 insertions(+), 4 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index e8ddf0ef232e..cc2f5c9a8161 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -2866,10 +2866,10 @@
 			(bounds check bypass). With this option data leaks
 			are possible in the system.
 
-	nospectre_v2	[X86,PPC_FSL_BOOK3E] Disable all mitigations for the Spectre variant 2
-			(indirect branch prediction) vulnerability. System may
-			allow data leaks with this option, which is equivalent
-			to spectre_v2=off.
+	nospectre_v2	[X86,PPC_FSL_BOOK3E,ARM64] Disable all mitigations for
+			the Spectre variant 2 (indirect branch prediction)
+			vulnerability. System may allow data leaks with this
+			option.
 
 	nospec_store_bypass_disable
 			[HW] Disable all mitigations for the Speculative Store Bypass vulnerability
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 3758ba538a43..5a7fa90c668f 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -189,6 +189,14 @@ static void qcom_link_stack_sanitization(void)
 		     : "=&r" (tmp));
 }
 
+static bool __nospectre_v2;
+static int __init parse_nospectre_v2(char *str)
+{
+	__nospectre_v2 = true;
+	return 0;
+}
+early_param("nospectre_v2", parse_nospectre_v2);
+
 static void
 enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
 {
@@ -200,6 +208,11 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
 	if (!entry->matches(entry, SCOPE_LOCAL_CPU))
 		return;
 
+	if (__nospectre_v2) {
+		pr_info_once("spectrev2 mitigation disabled by command line option\n");
+		return;
+	}
+
 	if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
 		return;
 
-- 
2.20.1



* [RFC/RFT PATCH 10/16] arm64: Advertise mitigation of Spectre-v2, or lack thereof
  2019-10-04 12:04 [RFC/RFT PATCH 00/16] arm64: backport SSBS handling to v4.19-stable Ard Biesheuvel
                   ` (8 preceding siblings ...)
  2019-10-04 12:04   ` Ard Biesheuvel
@ 2019-10-04 12:04 ` Ard Biesheuvel
  2019-10-04 12:04 ` [RFC/RFT PATCH 11/16] arm64: Always enable spectre-v2 vulnerability detection Ard Biesheuvel
                   ` (6 subsequent siblings)
  16 siblings, 0 replies; 24+ messages in thread
From: Ard Biesheuvel @ 2019-10-04 12:04 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Stefan Wahren, Suzuki K Poulose, Marc Zyngier,
	Catalin Marinas, Ard Biesheuvel, Will Deacon, Jeremy Linton,
	Andre Przywara, Marc Zyngier, Will Deacon

From: Marc Zyngier <marc.zyngier@arm.com>

We currently have a list of CPUs affected by Spectre-v2, for which
we check that the firmware implements ARCH_WORKAROUND_1. It turns
out that not all firmware implements the required mitigation, and
that we fail to let the user know about it.

Instead, let's slightly revamp our checks, and rely on a whitelist
of cores that are known to be non-vulnerable, and let the user know
the status of the mitigation in the kernel log.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
(cherry picked from commit 73f38166095947f3b86b02fbed6bd592223a7ac8)
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/kernel/cpu_errata.c | 109 ++++++++++----------
 1 file changed, 56 insertions(+), 53 deletions(-)

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 5a7fa90c668f..def847873d21 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -109,9 +109,9 @@ static void __copy_hyp_vect_bpi(int slot, const char *hyp_vecs_start,
 	__flush_icache_range((uintptr_t)dst, (uintptr_t)dst + SZ_2K);
 }
 
-static void __install_bp_hardening_cb(bp_hardening_cb_t fn,
-				      const char *hyp_vecs_start,
-				      const char *hyp_vecs_end)
+static void install_bp_hardening_cb(bp_hardening_cb_t fn,
+				    const char *hyp_vecs_start,
+				    const char *hyp_vecs_end)
 {
 	static DEFINE_SPINLOCK(bp_lock);
 	int cpu, slot = -1;
@@ -138,7 +138,7 @@ static void __install_bp_hardening_cb(bp_hardening_cb_t fn,
 #define __smccc_workaround_1_smc_start		NULL
 #define __smccc_workaround_1_smc_end		NULL
 
-static void __install_bp_hardening_cb(bp_hardening_cb_t fn,
+static void install_bp_hardening_cb(bp_hardening_cb_t fn,
 				      const char *hyp_vecs_start,
 				      const char *hyp_vecs_end)
 {
@@ -146,23 +146,6 @@ static void __install_bp_hardening_cb(bp_hardening_cb_t fn,
 }
 #endif	/* CONFIG_KVM_INDIRECT_VECTORS */
 
-static void  install_bp_hardening_cb(const struct arm64_cpu_capabilities *entry,
-				     bp_hardening_cb_t fn,
-				     const char *hyp_vecs_start,
-				     const char *hyp_vecs_end)
-{
-	u64 pfr0;
-
-	if (!entry->matches(entry, SCOPE_LOCAL_CPU))
-		return;
-
-	pfr0 = read_cpuid(ID_AA64PFR0_EL1);
-	if (cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_CSV2_SHIFT))
-		return;
-
-	__install_bp_hardening_cb(fn, hyp_vecs_start, hyp_vecs_end);
-}
-
 #include <uapi/linux/psci.h>
 #include <linux/arm-smccc.h>
 #include <linux/psci.h>
@@ -197,31 +180,27 @@ static int __init parse_nospectre_v2(char *str)
 }
 early_param("nospectre_v2", parse_nospectre_v2);
 
-static void
-enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
+/*
+ * -1: No workaround
+ *  0: No workaround required
+ *  1: Workaround installed
+ */
+static int detect_harden_bp_fw(void)
 {
 	bp_hardening_cb_t cb;
 	void *smccc_start, *smccc_end;
 	struct arm_smccc_res res;
 	u32 midr = read_cpuid_id();
 
-	if (!entry->matches(entry, SCOPE_LOCAL_CPU))
-		return;
-
-	if (__nospectre_v2) {
-		pr_info_once("spectrev2 mitigation disabled by command line option\n");
-		return;
-	}
-
 	if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
-		return;
+		return -1;
 
 	switch (psci_ops.conduit) {
 	case PSCI_CONDUIT_HVC:
 		arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
 				  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
 		if ((int)res.a0 < 0)
-			return;
+			return -1;
 		cb = call_hvc_arch_workaround_1;
 		/* This is a guest, no need to patch KVM vectors */
 		smccc_start = NULL;
@@ -232,23 +211,23 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
 		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
 				  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
 		if ((int)res.a0 < 0)
-			return;
+			return -1;
 		cb = call_smc_arch_workaround_1;
 		smccc_start = __smccc_workaround_1_smc_start;
 		smccc_end = __smccc_workaround_1_smc_end;
 		break;
 
 	default:
-		return;
+		return -1;
 	}
 
 	if (((midr & MIDR_CPU_MODEL_MASK) == MIDR_QCOM_FALKOR) ||
 	    ((midr & MIDR_CPU_MODEL_MASK) == MIDR_QCOM_FALKOR_V1))
 		cb = qcom_link_stack_sanitization;
 
-	install_bp_hardening_cb(entry, cb, smccc_start, smccc_end);
+	install_bp_hardening_cb(cb, smccc_start, smccc_end);
 
-	return;
+	return 1;
 }
 #endif	/* CONFIG_HARDEN_BRANCH_PREDICTOR */
 
@@ -532,24 +511,48 @@ multi_entry_cap_cpu_enable(const struct arm64_cpu_capabilities *entry)
 }
 
 #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
-
 /*
- * List of CPUs where we need to issue a psci call to
- * harden the branch predictor.
+ * List of CPUs that do not need any Spectre-v2 mitigation at all.
  */
-static const struct midr_range arm64_bp_harden_smccc_cpus[] = {
-	MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
-	MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
-	MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
-	MIDR_ALL_VERSIONS(MIDR_CORTEX_A75),
-	MIDR_ALL_VERSIONS(MIDR_BRCM_VULCAN),
-	MIDR_ALL_VERSIONS(MIDR_CAVIUM_THUNDERX2),
-	MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR_V1),
-	MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR),
-	MIDR_ALL_VERSIONS(MIDR_NVIDIA_DENVER),
-	{},
+static const struct midr_range spectre_v2_safe_list[] = {
+	MIDR_ALL_VERSIONS(MIDR_CORTEX_A35),
+	MIDR_ALL_VERSIONS(MIDR_CORTEX_A53),
+	MIDR_ALL_VERSIONS(MIDR_CORTEX_A55),
+	{ /* sentinel */ }
 };
 
+static bool __maybe_unused
+check_branch_predictor(const struct arm64_cpu_capabilities *entry, int scope)
+{
+	int need_wa;
+
+	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
+
+	/* If the CPU has CSV2 set, we're safe */
+	if (cpuid_feature_extract_unsigned_field(read_cpuid(ID_AA64PFR0_EL1),
+						 ID_AA64PFR0_CSV2_SHIFT))
+		return false;
+
+	/* Alternatively, we have a list of unaffected CPUs */
+	if (is_midr_in_range_list(read_cpuid_id(), spectre_v2_safe_list))
+		return false;
+
+	/* Fallback to firmware detection */
+	need_wa = detect_harden_bp_fw();
+	if (!need_wa)
+		return false;
+
+	/* forced off */
+	if (__nospectre_v2) {
+		pr_info_once("spectrev2 mitigation disabled by command line option\n");
+		return false;
+	}
+
+	if (need_wa < 0)
+		pr_warn_once("ARM_SMCCC_ARCH_WORKAROUND_1 missing from firmware\n");
+
+	return (need_wa > 0);
+}
 #endif
 
 #ifdef CONFIG_HARDEN_EL2_VECTORS
@@ -712,8 +715,8 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
 	{
 		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
-		.cpu_enable = enable_smccc_arch_workaround_1,
-		ERRATA_MIDR_RANGE_LIST(arm64_bp_harden_smccc_cpus),
+		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
+		.matches = check_branch_predictor,
 	},
 #endif
 #ifdef CONFIG_HARDEN_EL2_VECTORS
-- 
2.20.1



* [RFC/RFT PATCH 11/16] arm64: Always enable spectre-v2 vulnerability detection
  2019-10-04 12:04 [RFC/RFT PATCH 00/16] arm64: backport SSBS handling to v4.19-stable Ard Biesheuvel
                   ` (9 preceding siblings ...)
  2019-10-04 12:04 ` [RFC/RFT PATCH 10/16] arm64: Advertise mitigation of Spectre-v2, or lack thereof Ard Biesheuvel
@ 2019-10-04 12:04 ` Ard Biesheuvel
  2019-10-08 15:05   ` Mark Rutland
  2019-10-04 12:04 ` [RFC/RFT PATCH 12/16] arm64: Always enable ssb " Ard Biesheuvel
                   ` (5 subsequent siblings)
  16 siblings, 1 reply; 24+ messages in thread
From: Ard Biesheuvel @ 2019-10-04 12:04 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Stefan Wahren, Suzuki K Poulose, Catalin Marinas,
	Ard Biesheuvel, Will Deacon, Jeremy Linton, Andre Przywara,
	Marc Zyngier, Will Deacon

From: Jeremy Linton <jeremy.linton@arm.com>

Ensure we are always able to detect whether or not the CPU is affected
by Spectre-v2, so that we can later advertise this to userspace.

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
(cherry picked from commit 8c1e3d2bb44cbb998cb28ff9a18f105fee7f1eb3)
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/kernel/cpu_errata.c | 47 ++++----------------
 1 file changed, 8 insertions(+), 39 deletions(-)
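
One note on the shape of the change: detection now always runs, and
only the act of installing the mitigation is gated on the Kconfig
symbol, so the result can still be reported on kernels built without
CONFIG_HARDEN_BRANCH_PREDICTOR. A minimal sketch of the pattern
(illustrative names, not the exact kernel symbols):

static bool detect_and_maybe_mitigate(void)
{
	/* the probe is unconditional, so sysfs can report the result */
	bool affected = cpu_is_affected();	/* placeholder probe */

	if (affected && IS_ENABLED(CONFIG_HARDEN_BRANCH_PREDICTOR))
		install_bp_hardening();		/* placeholder install */

	return affected;
}

Since IS_ENABLED() folds to a compile-time constant, the install
branch is eliminated on !CONFIG_HARDEN_BRANCH_PREDICTOR builds while
the probe still compiles and runs.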

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index def847873d21..ae7d6761262f 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -87,7 +87,6 @@ cpu_enable_trap_ctr_access(const struct arm64_cpu_capabilities *__unused)
 
 atomic_t arm64_el2_vector_last_slot = ATOMIC_INIT(-1);
 
-#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
 #include <asm/mmu_context.h>
 #include <asm/cacheflush.h>
 
@@ -225,11 +224,11 @@ static int detect_harden_bp_fw(void)
 	    ((midr & MIDR_CPU_MODEL_MASK) == MIDR_QCOM_FALKOR_V1))
 		cb = qcom_link_stack_sanitization;
 
-	install_bp_hardening_cb(cb, smccc_start, smccc_end);
+	if (IS_ENABLED(CONFIG_HARDEN_BRANCH_PREDICTOR))
+		install_bp_hardening_cb(cb, smccc_start, smccc_end);
 
 	return 1;
 }
-#endif	/* CONFIG_HARDEN_BRANCH_PREDICTOR */
 
 #ifdef CONFIG_ARM64_SSBD
 DEFINE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
@@ -478,39 +477,6 @@ has_cortex_a76_erratum_1463225(const struct arm64_cpu_capabilities *entry,
 	.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,			\
 	CAP_MIDR_RANGE_LIST(midr_list)
 
-/*
- * Generic helper for handling capabilties with multiple (match,enable) pairs
- * of call backs, sharing the same capability bit.
- * Iterate over each entry to see if at least one matches.
- */
-static bool __maybe_unused
-multi_entry_cap_matches(const struct arm64_cpu_capabilities *entry, int scope)
-{
-	const struct arm64_cpu_capabilities *caps;
-
-	for (caps = entry->match_list; caps->matches; caps++)
-		if (caps->matches(caps, scope))
-			return true;
-
-	return false;
-}
-
-/*
- * Take appropriate action for all matching entries in the shared capability
- * entry.
- */
-static void __maybe_unused
-multi_entry_cap_cpu_enable(const struct arm64_cpu_capabilities *entry)
-{
-	const struct arm64_cpu_capabilities *caps;
-
-	for (caps = entry->match_list; caps->matches; caps++)
-		if (caps->matches(caps, SCOPE_LOCAL_CPU) &&
-		    caps->cpu_enable)
-			caps->cpu_enable(caps);
-}
-
-#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
 /*
  * List of CPUs that do not need any Spectre-v2 mitigation at all.
  */
@@ -542,6 +508,12 @@ check_branch_predictor(const struct arm64_cpu_capabilities *entry, int scope)
 	if (!need_wa)
 		return false;
 
+	if (!IS_ENABLED(CONFIG_HARDEN_BRANCH_PREDICTOR)) {
+		pr_warn_once("spectrev2 mitigation disabled by kernel configuration\n");
+		__hardenbp_enab = false;
+		return false;
+	}
+
 	/* forced off */
 	if (__nospectre_v2) {
 		pr_info_once("spectrev2 mitigation disabled by command line option\n");
@@ -553,7 +525,6 @@ check_branch_predictor(const struct arm64_cpu_capabilities *entry, int scope)
 
 	return (need_wa > 0);
 }
-#endif
 
 #ifdef CONFIG_HARDEN_EL2_VECTORS
 
@@ -712,13 +683,11 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 		ERRATA_MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
 	},
 #endif
-#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
 	{
 		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
 		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
 		.matches = check_branch_predictor,
 	},
-#endif
 #ifdef CONFIG_HARDEN_EL2_VECTORS
 	{
 		.desc = "EL2 vector hardening",
-- 
2.20.1


* [RFC/RFT PATCH 12/16] arm64: Always enable ssb vulnerability detection
  2019-10-04 12:04 [RFC/RFT PATCH 00/16] arm64: backport SSBS handling to v4.19-stable Ard Biesheuvel
                   ` (10 preceding siblings ...)
  2019-10-04 12:04 ` [RFC/RFT PATCH 11/16] arm64: Always enable spectre-v2 vulnerability detection Ard Biesheuvel
@ 2019-10-04 12:04 ` Ard Biesheuvel
  2019-10-04 12:04 ` [RFC/RFT PATCH 13/16] arm64: add sysfs vulnerability show for spectre-v2 Ard Biesheuvel
                   ` (4 subsequent siblings)
  16 siblings, 0 replies; 24+ messages in thread
From: Ard Biesheuvel @ 2019-10-04 12:04 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Stefan Wahren, Suzuki K Poulose, Catalin Marinas,
	Ard Biesheuvel, Will Deacon, Jeremy Linton, Andre Przywara,
	Marc Zyngier, Will Deacon

From: Jeremy Linton <jeremy.linton@arm.com>

Ensure we are always able to detect whether or not the CPU is affected
by SSB, so that we can later advertise this to userspace.

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
[will: Use IS_ENABLED instead of #ifdef]
Signed-off-by: Will Deacon <will.deacon@arm.com>
(cherry picked from commit d42281b6e49510f078ace15a8ea10f71e6262581)
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/include/asm/cpufeature.h | 4 ----
 arch/arm64/kernel/cpu_errata.c      | 9 +++++----
 2 files changed, 5 insertions(+), 8 deletions(-)
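
For context, the per-task control that ssbd_state feeds is the
generic speculation prctl interface (merged in v4.17). A hedged
userspace sketch using the PR_SPEC_STORE_BYPASS constants from
<linux/prctl.h>; on kernels without the mitigation wired up the
calls simply fail:

#include <stdio.h>
#include <sys/prctl.h>
#include <linux/prctl.h>

int main(void)
{
	/* ask the kernel to disable speculative store bypass for us */
	if (prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS,
		  PR_SPEC_DISABLE, 0, 0))
		perror("PR_SET_SPECULATION_CTRL");

	/* query the state: a bitmask of PR_SPEC_* flags */
	printf("ssb ctrl: %ld\n",
	       (long)prctl(PR_GET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS,
			   0, 0, 0));
	return 0;
}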

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 510f687d269a..dda6e5056810 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -525,11 +525,7 @@ static inline int arm64_get_ssbd_state(void)
 #endif
 }
 
-#ifdef CONFIG_ARM64_SSBD
 void arm64_set_ssbd_mitigation(bool state);
-#else
-static inline void arm64_set_ssbd_mitigation(bool state) {}
-#endif
 
 #endif /* __ASSEMBLY__ */
 
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index ae7d6761262f..78ce2e27396d 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -230,7 +230,6 @@ static int detect_harden_bp_fw(void)
 	return 1;
 }
 
-#ifdef CONFIG_ARM64_SSBD
 DEFINE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
 
 int ssbd_state __read_mostly = ARM64_SSBD_KERNEL;
@@ -303,6 +302,11 @@ void __init arm64_enable_wa2_handling(struct alt_instr *alt,
 
 void arm64_set_ssbd_mitigation(bool state)
 {
+	if (!IS_ENABLED(CONFIG_ARM64_SSBD)) {
+		pr_info_once("SSBD disabled by kernel configuration\n");
+		return;
+	}
+
 	if (this_cpu_has_cap(ARM64_SSBS)) {
 		if (state)
 			asm volatile(SET_PSTATE_SSBS(0));
@@ -422,7 +426,6 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
 
 	return required;
 }
-#endif	/* CONFIG_ARM64_SSBD */
 
 #ifdef CONFIG_ARM64_ERRATUM_1463225
 DEFINE_PER_CPU(int, __in_cortex_a76_erratum_1463225_wa);
@@ -695,14 +698,12 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 		ERRATA_MIDR_RANGE_LIST(arm64_harden_el2_vectors),
 	},
 #endif
-#ifdef CONFIG_ARM64_SSBD
 	{
 		.desc = "Speculative Store Bypass Disable",
 		.capability = ARM64_SSBD,
 		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
 		.matches = has_ssbd_mitigation,
 	},
-#endif
 #ifdef CONFIG_ARM64_ERRATUM_1463225
 	{
 		.desc = "ARM erratum 1463225",
-- 
2.20.1


* [RFC/RFT PATCH 13/16] arm64: add sysfs vulnerability show for spectre-v2
  2019-10-04 12:04 [RFC/RFT PATCH 00/16] arm64: backport SSBS handling to v4.19-stable Ard Biesheuvel
                   ` (11 preceding siblings ...)
  2019-10-04 12:04 ` [RFC/RFT PATCH 12/16] arm64: Always enable ssb " Ard Biesheuvel
@ 2019-10-04 12:04 ` Ard Biesheuvel
  2019-10-04 12:04 ` [RFC/RFT PATCH 14/16] arm64: add sysfs vulnerability show for speculative store bypass Ard Biesheuvel
                   ` (3 subsequent siblings)
  16 siblings, 0 replies; 24+ messages in thread
From: Ard Biesheuvel @ 2019-10-04 12:04 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Stefan Wahren, Suzuki K Poulose, Catalin Marinas,
	Ard Biesheuvel, Will Deacon, Jeremy Linton, Andre Przywara,
	Marc Zyngier, Will Deacon

From: Jeremy Linton <jeremy.linton@arm.com>

Track whether all the cores in the machine are vulnerable to Spectre-v2,
and whether all the vulnerable cores have been mitigated. We then expose
this information to userspace via sysfs.

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
(cherry picked from commit d2532e27b5638bb2e2dd52b80b7ea2ec65135377)
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/kernel/cpu_errata.c | 27 +++++++++++++++++++-
 1 file changed, 26 insertions(+), 1 deletion(-)
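
The user-visible artifact is the sysfs file
/sys/devices/system/cpu/vulnerabilities/spectre_v2, which yields one
of the three strings added below. A small illustrative reader in C:

#include <stdio.h>

int main(void)
{
	char buf[64];
	FILE *f = fopen("/sys/devices/system/cpu/vulnerabilities/spectre_v2", "r");

	if (!f)
		return 1;	/* file absent on older kernels */
	if (fgets(buf, sizeof(buf), f))
		/* "Not affected", "Mitigation: Branch predictor
		 * hardening" or "Vulnerable", matching
		 * cpu_show_spectre_v2() below */
		fputs(buf, stdout);
	fclose(f);
	return 0;
}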

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 78ce2e27396d..6c8e8a5bfabf 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -480,6 +480,10 @@ has_cortex_a76_erratum_1463225(const struct arm64_cpu_capabilities *entry,
 	.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,			\
 	CAP_MIDR_RANGE_LIST(midr_list)
 
+/* Track overall mitigation state. We are only mitigated if all cores are ok */
+static bool __hardenbp_enab = true;
+static bool __spectrev2_safe = true;
+
 /*
  * List of CPUs that do not need any Spectre-v2 mitigation at all.
  */
@@ -490,6 +494,10 @@ static const struct midr_range spectre_v2_safe_list[] = {
 	{ /* sentinel */ }
 };
 
+/*
+ * Track overall bp hardening for all heterogeneous cores in the machine.
+ * We are only considered "safe" if all booted cores are known safe.
+ */
 static bool __maybe_unused
 check_branch_predictor(const struct arm64_cpu_capabilities *entry, int scope)
 {
@@ -511,6 +519,8 @@ check_branch_predictor(const struct arm64_cpu_capabilities *entry, int scope)
 	if (!need_wa)
 		return false;
 
+	__spectrev2_safe = false;
+
 	if (!IS_ENABLED(CONFIG_HARDEN_BRANCH_PREDICTOR)) {
 		pr_warn_once("spectrev2 mitigation disabled by kernel configuration\n");
 		__hardenbp_enab = false;
@@ -520,11 +530,14 @@ check_branch_predictor(const struct arm64_cpu_capabilities *entry, int scope)
 	/* forced off */
 	if (__nospectre_v2) {
 		pr_info_once("spectrev2 mitigation disabled by command line option\n");
+		__hardenbp_enab = false;
 		return false;
 	}
 
-	if (need_wa < 0)
+	if (need_wa < 0) {
 		pr_warn_once("ARM_SMCCC_ARCH_WORKAROUND_1 missing from firmware\n");
+		__hardenbp_enab = false;
+	}
 
 	return (need_wa > 0);
 }
@@ -721,3 +734,15 @@ ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr,
 {
 	return sprintf(buf, "Mitigation: __user pointer sanitization\n");
 }
+
+ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr,
+		char *buf)
+{
+	if (__spectrev2_safe)
+		return sprintf(buf, "Not affected\n");
+
+	if (__hardenbp_enab)
+		return sprintf(buf, "Mitigation: Branch predictor hardening\n");
+
+	return sprintf(buf, "Vulnerable\n");
+}
-- 
2.20.1


* [RFC/RFT PATCH 14/16] arm64: add sysfs vulnerability show for speculative store bypass
  2019-10-04 12:04 [RFC/RFT PATCH 00/16] arm64: backport SSBS handling to v4.19-stable Ard Biesheuvel
                   ` (12 preceding siblings ...)
  2019-10-04 12:04 ` [RFC/RFT PATCH 13/16] arm64: add sysfs vulnerability show for spectre-v2 Ard Biesheuvel
@ 2019-10-04 12:04 ` Ard Biesheuvel
  2019-10-04 12:04 ` [RFC/RFT PATCH 15/16] arm64: ssbs: Don't treat CPUs with SSBS as unaffected by SSB Ard Biesheuvel
                   ` (2 subsequent siblings)
  16 siblings, 0 replies; 24+ messages in thread
From: Ard Biesheuvel @ 2019-10-04 12:04 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Stefan Wahren, Suzuki K Poulose, Catalin Marinas,
	Ard Biesheuvel, Will Deacon, Jeremy Linton, Andre Przywara,
	Marc Zyngier, Will Deacon

From: Jeremy Linton <jeremy.linton@arm.com>

Return status based on ssbd_state and __ssb_safe. If the
mitigation is disabled, or the firmware isn't responding, then
return the expected machine state based on a whitelist of known
good cores.

Given a heterogeneous machine, the overall machine vulnerability
defaults to safe but is reset to unsafe when we miss the whitelist
and the firmware doesn't explicitly tell us the core is safe.
To make that work, we delay transitioning to vulnerable until we
know whether the firmware is responding, which avoids the case
where we miss the whitelist but the firmware then reports that
the core is not vulnerable. If all the cores in the machine have
SSBS, then __ssb_safe will remain true.

Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/kernel/cpu_errata.c | 42 ++++++++++++++++++++
 1 file changed, 42 insertions(+)
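
Condensing the ordering described above into a sketch (the FW_*
values stand in for the SMCCC return codes; this is not the literal
kernel code):

static bool ssb_safe = true;	/* machine-wide default: safe */

static void update_ssb_state(bool on_safe_list, int fw_answer)
{
	switch (fw_answer) {
	case FW_NOT_REQUIRED:		/* firmware vouches for the core */
		return;
	case FW_MITIGATION_REQUIRED:	/* firmware says: mitigate */
		ssb_safe = false;
		return;
	default:			/* no usable firmware answer */
		if (!on_safe_list)
			ssb_safe = false;	/* nobody vouches for it */
	}
}

Each booted CPU runs this; a single off-list core without a firmware
answer flips the whole machine to vulnerable, as intended.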

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 6c8e8a5bfabf..534111eab864 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -233,6 +233,7 @@ static int detect_harden_bp_fw(void)
 DEFINE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
 
 int ssbd_state __read_mostly = ARM64_SSBD_KERNEL;
+static bool __ssb_safe = true;
 
 static const struct ssbd_options {
 	const char	*str;
@@ -336,6 +337,7 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
 	struct arm_smccc_res res;
 	bool required = true;
 	s32 val;
+	bool this_cpu_safe = false;
 
 	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
 
@@ -344,8 +346,14 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
 		goto out_printmsg;
 	}
 
+	/* delay setting __ssb_safe until we get a firmware response */
+	if (is_midr_in_range_list(read_cpuid_id(), entry->midr_range_list))
+		this_cpu_safe = true;
+
 	if (psci_ops.smccc_version == SMCCC_VERSION_1_0) {
 		ssbd_state = ARM64_SSBD_UNKNOWN;
+		if (!this_cpu_safe)
+			__ssb_safe = false;
 		return false;
 	}
 
@@ -362,6 +370,8 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
 
 	default:
 		ssbd_state = ARM64_SSBD_UNKNOWN;
+		if (!this_cpu_safe)
+			__ssb_safe = false;
 		return false;
 	}
 
@@ -370,14 +380,18 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
 	switch (val) {
 	case SMCCC_RET_NOT_SUPPORTED:
 		ssbd_state = ARM64_SSBD_UNKNOWN;
+		if (!this_cpu_safe)
+			__ssb_safe = false;
 		return false;
 
+	/* machines with mixed mitigation requirements must not return this */
 	case SMCCC_RET_NOT_REQUIRED:
 		pr_info_once("%s mitigation not required\n", entry->desc);
 		ssbd_state = ARM64_SSBD_MITIGATED;
 		return false;
 
 	case SMCCC_RET_SUCCESS:
+		__ssb_safe = false;
 		required = true;
 		break;
 
@@ -387,6 +401,8 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
 
 	default:
 		WARN_ON(1);
+		if (!this_cpu_safe)
+			__ssb_safe = false;
 		return false;
 	}
 
@@ -427,6 +443,14 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
 	return required;
 }
 
+/* known invulnerable cores */
+static const struct midr_range arm64_ssb_cpus[] = {
+	MIDR_ALL_VERSIONS(MIDR_CORTEX_A35),
+	MIDR_ALL_VERSIONS(MIDR_CORTEX_A53),
+	MIDR_ALL_VERSIONS(MIDR_CORTEX_A55),
+	{},
+};
+
 #ifdef CONFIG_ARM64_ERRATUM_1463225
 DEFINE_PER_CPU(int, __in_cortex_a76_erratum_1463225_wa);
 
@@ -716,6 +740,7 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 		.capability = ARM64_SSBD,
 		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
 		.matches = has_ssbd_mitigation,
+		.midr_range_list = arm64_ssb_cpus,
 	},
 #ifdef CONFIG_ARM64_ERRATUM_1463225
 	{
@@ -746,3 +771,20 @@ ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr,
 
 	return sprintf(buf, "Vulnerable\n");
 }
+
+ssize_t cpu_show_spec_store_bypass(struct device *dev,
+		struct device_attribute *attr, char *buf)
+{
+	if (__ssb_safe)
+		return sprintf(buf, "Not affected\n");
+
+	switch (ssbd_state) {
+	case ARM64_SSBD_KERNEL:
+	case ARM64_SSBD_FORCE_ENABLE:
+		if (IS_ENABLED(CONFIG_ARM64_SSBD))
+			return sprintf(buf,
+			    "Mitigation: Speculative Store Bypass disabled via prctl\n");
+	}
+
+	return sprintf(buf, "Vulnerable\n");
+}
-- 
2.20.1


* [RFC/RFT PATCH 15/16] arm64: ssbs: Don't treat CPUs with SSBS as unaffected by SSB
  2019-10-04 12:04 [RFC/RFT PATCH 00/16] arm64: backport SSBS handling to v4.19-stable Ard Biesheuvel
                   ` (13 preceding siblings ...)
  2019-10-04 12:04 ` [RFC/RFT PATCH 14/16] arm64: add sysfs vulnerability show for speculative store bypass Ard Biesheuvel
@ 2019-10-04 12:04 ` Ard Biesheuvel
  2019-10-04 12:04 ` [RFC/RFT PATCH 16/16] arm64: Force SSBS on context switch Ard Biesheuvel
  2019-10-08  8:12 ` [RFC/RFT PATCH 00/16] arm64: backport SSBS handling to v4.19-stable Ard Biesheuvel
  16 siblings, 0 replies; 24+ messages in thread
From: Ard Biesheuvel @ 2019-10-04 12:04 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Suzuki K Poulose, Catalin Marinas, Ard Biesheuvel,
	Will Deacon, Jeremy Linton, Andre Przywara, Marc Zyngier,
	Will Deacon

From: Will Deacon <will.deacon@arm.com>

SSBS provides a relatively cheap mitigation for SSB, but it is still a
mitigation and its presence does not indicate that the CPU is unaffected
by the vulnerability.

Tweak the mitigation logic so that we report the correct string in sysfs.

Signed-off-by: Will Deacon <will.deacon@arm.com>
(cherry picked from commit eb337cdfcd5dd3b10522c2f34140a73a4c285c30)
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/kernel/cpu_errata.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 534111eab864..a9ad932160cc 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -341,7 +341,13 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
 
 	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
 
+	/* delay setting __ssb_safe until we get a firmware response */
+	if (is_midr_in_range_list(read_cpuid_id(), entry->midr_range_list))
+		this_cpu_safe = true;
+
 	if (this_cpu_has_cap(ARM64_SSBS)) {
+		if (!this_cpu_safe)
+			__ssb_safe = false;
 		required = false;
 		goto out_printmsg;
 	}
-- 
2.20.1


* [RFC/RFT PATCH 16/16] arm64: Force SSBS on context switch
  2019-10-04 12:04 [RFC/RFT PATCH 00/16] arm64: backport SSBS handling to v4.19-stable Ard Biesheuvel
                   ` (14 preceding siblings ...)
  2019-10-04 12:04 ` [RFC/RFT PATCH 15/16] arm64: ssbs: Don't treat CPUs with SSBS as unaffected by SSB Ard Biesheuvel
@ 2019-10-04 12:04 ` Ard Biesheuvel
  2019-10-08  8:12 ` [RFC/RFT PATCH 00/16] arm64: backport SSBS handling to v4.19-stable Ard Biesheuvel
  16 siblings, 0 replies; 24+ messages in thread
From: Ard Biesheuvel @ 2019-10-04 12:04 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Suzuki K Poulose, Marc Zyngier, Catalin Marinas,
	Ard Biesheuvel, Jeremy Linton, Andre Przywara, Marc Zyngier,
	Will Deacon

From: Marc Zyngier <marc.zyngier@arm.com>

On a CPU that doesn't support SSBS, PSTATE[12] is RES0.  In a system
where only some of the CPUs implement SSBS, we end up losing track of
the SSBS bit across task migration.

To address this issue, let's force the SSBS bit on context switch.

Fixes: 8f04e8e6e29c ("arm64: ssbd: Add support for PSTATE.SSBS rather than trapping to EL3")
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
[will: inverted logic and added comments]
Signed-off-by: Will Deacon <will@kernel.org>
(cherry picked from commit cbdf8a189a66001c36007bf0f5c975d0376c5c3a)
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/include/asm/processor.h | 14 ++++++++--
 arch/arm64/kernel/process.c        | 29 +++++++++++++++++++-
 2 files changed, 40 insertions(+), 3 deletions(-)
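
One way to observe the symptom from userspace, assuming HWCAP_SSBS is
advertised (the S3_3_C4_C2_6 encoding below matches the kernel's
sysreg definition of SSBS; this is a sketch, not a regression test):

#include <stdio.h>
#include <sys/auxv.h>

#ifndef HWCAP_SSBS
#define HWCAP_SSBS	(1 << 28)	/* from the hwcap patch in this series */
#endif

static unsigned long read_ssbs(void)
{
	unsigned long v;

	/* MRS from the SSBS special register; bit 12 mirrors PSTATE.SSBS */
	asm volatile("mrs %0, S3_3_C4_C2_6" : "=r" (v));
	return v;
}

int main(void)
{
	if (!(getauxval(AT_HWCAP) & HWCAP_SSBS))
		return 1;	/* SSBS instructions not advertised */

	/* without this patch, migrating to a non-SSBS CPU could have
	 * silently cleared the bit for this task */
	printf("PSTATE.SSBS = %lu\n", (read_ssbs() >> 12) & 1);
	return 0;
}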

diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index ad208bd402f7..773ea8e0e442 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -177,6 +177,16 @@ static inline void start_thread_common(struct pt_regs *regs, unsigned long pc)
 	regs->pc = pc;
 }
 
+static inline void set_ssbs_bit(struct pt_regs *regs)
+{
+	regs->pstate |= PSR_SSBS_BIT;
+}
+
+static inline void set_compat_ssbs_bit(struct pt_regs *regs)
+{
+	regs->pstate |= PSR_AA32_SSBS_BIT;
+}
+
 static inline void start_thread(struct pt_regs *regs, unsigned long pc,
 				unsigned long sp)
 {
@@ -184,7 +194,7 @@ static inline void start_thread(struct pt_regs *regs, unsigned long pc,
 	regs->pstate = PSR_MODE_EL0t;
 
 	if (arm64_get_ssbd_state() != ARM64_SSBD_FORCE_ENABLE)
-		regs->pstate |= PSR_SSBS_BIT;
+		set_ssbs_bit(regs);
 
 	regs->sp = sp;
 }
@@ -203,7 +213,7 @@ static inline void compat_start_thread(struct pt_regs *regs, unsigned long pc,
 #endif
 
 	if (arm64_get_ssbd_state() != ARM64_SSBD_FORCE_ENABLE)
-		regs->pstate |= PSR_AA32_SSBS_BIT;
+		set_compat_ssbs_bit(regs);
 
 	regs->compat_sp = sp;
 }
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index ce99c58cd1f1..bc2226608e13 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -360,7 +360,7 @@ int copy_thread(unsigned long clone_flags, unsigned long stack_start,
 			childregs->pstate |= PSR_UAO_BIT;
 
 		if (arm64_get_ssbd_state() == ARM64_SSBD_FORCE_DISABLE)
-			childregs->pstate |= PSR_SSBS_BIT;
+			set_ssbs_bit(childregs);
 
 		p->thread.cpu_context.x19 = stack_start;
 		p->thread.cpu_context.x20 = stk_sz;
@@ -401,6 +401,32 @@ void uao_thread_switch(struct task_struct *next)
 	}
 }
 
+/*
+ * Force SSBS state on context-switch, since it may be lost after migrating
+ * from a CPU which treats the bit as RES0 in a heterogeneous system.
+ */
+static void ssbs_thread_switch(struct task_struct *next)
+{
+	struct pt_regs *regs = task_pt_regs(next);
+
+	/*
+	 * Nothing to do for kernel threads, but 'regs' may be junk
+	 * (e.g. idle task) so check the flags and bail early.
+	 */
+	if (unlikely(next->flags & PF_KTHREAD))
+		return;
+
+	/* If the mitigation is enabled, then we leave SSBS clear. */
+	if ((arm64_get_ssbd_state() == ARM64_SSBD_FORCE_ENABLE) ||
+	    test_tsk_thread_flag(next, TIF_SSBD))
+		return;
+
+	if (compat_user_mode(regs))
+		set_compat_ssbs_bit(regs);
+	else if (user_mode(regs))
+		set_ssbs_bit(regs);
+}
+
 /*
  * We store our current task in sp_el0, which is clobbered by userspace. Keep a
  * shadow copy so that we can restore this upon entry from userspace.
@@ -429,6 +455,7 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
 	contextidr_thread_switch(next);
 	entry_task_switch(next);
 	uao_thread_switch(next);
+	ssbs_thread_switch(next);
 
 	/*
 	 * Complete any pending TLB or cache maintenance on this CPU in case
-- 
2.20.1


* Re: [RFC/RFT PATCH 00/16] arm64: backport SSBS handling to v4.19-stable
  2019-10-04 12:04 [RFC/RFT PATCH 00/16] arm64: backport SSBS handling to v4.19-stable Ard Biesheuvel
                   ` (15 preceding siblings ...)
  2019-10-04 12:04 ` [RFC/RFT PATCH 16/16] arm64: Force SSBS on context switch Ard Biesheuvel
@ 2019-10-08  8:12 ` Ard Biesheuvel
  2019-10-08 15:09   ` Mark Rutland
  16 siblings, 1 reply; 24+ messages in thread
From: Ard Biesheuvel @ 2019-10-08  8:12 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Suzuki K Poulose, Catalin Marinas, Jeremy Linton,
	Andre Przywara, Marc Zyngier, Will Deacon

On Fri, 4 Oct 2019 at 14:04, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
>
> This is a fairly mechanical backport to v4.19 of the changes needed to support
> managing the SSBS state, which controls whether Speculative Store Bypass is
> permitted.
>
> I have included Jeremy's sysfs changes as well, since they are equally
> suitable for stable and made for a much cleaner backport, so it made
> little sense to handle them separately.
>
> These patches are presented for review, and are not being cc'ed to the
> -stable maintainers just yet.
>
> Cc: Will Deacon <will@kernel.org>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Marc Zyngier <maz@kernel.org>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
> Cc: Jeremy Linton <jeremy.linton@arm.com>
> Cc: Andre Przywara <andre.przywara@arm.com>
>

If nobody has any objections, I'll send these out to -stable end of today.

* Re: [RFC/RFT PATCH 01/16] arm64: cpufeature: Detect SSBS and advertise to userspace
  2019-10-04 12:04 ` [RFC/RFT PATCH 01/16] arm64: cpufeature: Detect SSBS and advertise to userspace Ard Biesheuvel
@ 2019-10-08 14:35   ` Mark Rutland
  2019-10-08 14:39     ` Ard Biesheuvel
  0 siblings, 1 reply; 24+ messages in thread
From: Mark Rutland @ 2019-10-08 14:35 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Suzuki K Poulose, Catalin Marinas, Will Deacon, Jeremy Linton,
	Andre Przywara, Marc Zyngier, Will Deacon, linux-arm-kernel

Hi Ard, 

I have a few general backport notes below, and one issue with this
patch/series.

On Fri, Oct 04, 2019 at 02:04:15PM +0200, Ard Biesheuvel wrote:
> From: Will Deacon <will.deacon@arm.com>
> 
> Armv8.5 introduces a new PSTATE bit known as Speculative Store Bypass
> Safe (SSBS) which can be used as a mitigation against Spectre variant 4.
> 
> Additionally, a CPU may provide instructions to manipulate PSTATE.SSBS
> directly, so that userspace can toggle the SSBS control without trapping
> to the kernel.
> 
> This patch probes for the existence of SSBS and advertises the new instructions
> to userspace if they exist.
> 
> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> Signed-off-by: Will Deacon <will.deacon@arm.com>
> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> (cherry picked from commit d71be2b6c0e19180b5f80a6d42039cc074a693a2)

This should be the first line, formatted like:

commit d71be2b6c0e19180b5f80a6d42039cc074a693a2 upstream.

... as documented in Documentation/process/stable-kernel-rules.rst.
I see that's sometimes not followed, but I think we should stick to that
unless there's a strong reason not to.

If you had to make any substantial changes, I'd recommend a note above
your S-o-B, e.g.:

[v4.9: Renamed foo() to bar() to match xxyyzz()]

... and regardless I usually add a [vX.Y backport] trailing my S-o-B to
make it clear from the log where that happened.

For this patch specifically, I believe that you also need to backport
commit:

  f54dada8274643e3 ("arm64: fix SSBS sanitization")

... which fixes the SSBS bit being cleared across sigreturns.

Otherwise, this patch looks good to me.

Thanks,
Mark.


> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> ---
>  arch/arm64/include/asm/cpucaps.h    |  3 ++-
>  arch/arm64/include/asm/sysreg.h     | 16 ++++++++++++----
>  arch/arm64/include/uapi/asm/hwcap.h |  1 +
>  arch/arm64/kernel/cpufeature.c      | 19 +++++++++++++++++--
>  arch/arm64/kernel/cpuinfo.c         |  1 +
>  5 files changed, 33 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
> index 25ce9056cf64..c3de0bbf0e9a 100644
> --- a/arch/arm64/include/asm/cpucaps.h
> +++ b/arch/arm64/include/asm/cpucaps.h
> @@ -52,7 +52,8 @@
>  #define ARM64_MISMATCHED_CACHE_TYPE		31
>  #define ARM64_HAS_STAGE2_FWB			32
>  #define ARM64_WORKAROUND_1463225		33
> +#define ARM64_SSBS				34
>  
> -#define ARM64_NCAPS				34
> +#define ARM64_NCAPS				35
>  
>  #endif /* __ASM_CPUCAPS_H */
> diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> index c1470931b897..2fc6242baf11 100644
> --- a/arch/arm64/include/asm/sysreg.h
> +++ b/arch/arm64/include/asm/sysreg.h
> @@ -419,6 +419,7 @@
>  #define SYS_ICH_LR15_EL2		__SYS__LR8_EL2(7)
>  
>  /* Common SCTLR_ELx flags. */
> +#define SCTLR_ELx_DSSBS	(1UL << 44)
>  #define SCTLR_ELx_EE    (1 << 25)
>  #define SCTLR_ELx_IESB	(1 << 21)
>  #define SCTLR_ELx_WXN	(1 << 19)
> @@ -439,7 +440,7 @@
>  			 (1 << 10) | (1 << 13) | (1 << 14) | (1 << 15) | \
>  			 (1 << 17) | (1 << 20) | (1 << 24) | (1 << 26) | \
>  			 (1 << 27) | (1 << 30) | (1 << 31) | \
> -			 (0xffffffffUL << 32))
> +			 (0xffffefffUL << 32))
>  
>  #ifdef CONFIG_CPU_BIG_ENDIAN
>  #define ENDIAN_SET_EL2		SCTLR_ELx_EE
> @@ -453,7 +454,7 @@
>  #define SCTLR_EL2_SET	(SCTLR_ELx_IESB   | ENDIAN_SET_EL2   | SCTLR_EL2_RES1)
>  #define SCTLR_EL2_CLEAR	(SCTLR_ELx_M      | SCTLR_ELx_A    | SCTLR_ELx_C   | \
>  			 SCTLR_ELx_SA     | SCTLR_ELx_I    | SCTLR_ELx_WXN | \
> -			 ENDIAN_CLEAR_EL2 | SCTLR_EL2_RES0)
> +			 SCTLR_ELx_DSSBS | ENDIAN_CLEAR_EL2 | SCTLR_EL2_RES0)
>  
>  #if (SCTLR_EL2_SET ^ SCTLR_EL2_CLEAR) != 0xffffffffffffffff
>  #error "Inconsistent SCTLR_EL2 set/clear bits"
> @@ -477,7 +478,7 @@
>  			 (1 << 29))
>  #define SCTLR_EL1_RES0  ((1 << 6)  | (1 << 10) | (1 << 13) | (1 << 17) | \
>  			 (1 << 27) | (1 << 30) | (1 << 31) | \
> -			 (0xffffffffUL << 32))
> +			 (0xffffefffUL << 32))
>  
>  #ifdef CONFIG_CPU_BIG_ENDIAN
>  #define ENDIAN_SET_EL1		(SCTLR_EL1_E0E | SCTLR_ELx_EE)
> @@ -494,7 +495,7 @@
>  			 ENDIAN_SET_EL1 | SCTLR_EL1_UCI  | SCTLR_EL1_RES1)
>  #define SCTLR_EL1_CLEAR	(SCTLR_ELx_A   | SCTLR_EL1_CP15BEN | SCTLR_EL1_ITD    |\
>  			 SCTLR_EL1_UMA | SCTLR_ELx_WXN     | ENDIAN_CLEAR_EL1 |\
> -			 SCTLR_EL1_RES0)
> +			 SCTLR_ELx_DSSBS | SCTLR_EL1_RES0)
>  
>  #if (SCTLR_EL1_SET ^ SCTLR_EL1_CLEAR) != 0xffffffffffffffff
>  #error "Inconsistent SCTLR_EL1 set/clear bits"
> @@ -544,6 +545,13 @@
>  #define ID_AA64PFR0_EL0_64BIT_ONLY	0x1
>  #define ID_AA64PFR0_EL0_32BIT_64BIT	0x2
>  
> +/* id_aa64pfr1 */
> +#define ID_AA64PFR1_SSBS_SHIFT		4
> +
> +#define ID_AA64PFR1_SSBS_PSTATE_NI	0
> +#define ID_AA64PFR1_SSBS_PSTATE_ONLY	1
> +#define ID_AA64PFR1_SSBS_PSTATE_INSNS	2
> +
>  /* id_aa64mmfr0 */
>  #define ID_AA64MMFR0_TGRAN4_SHIFT	28
>  #define ID_AA64MMFR0_TGRAN64_SHIFT	24
> diff --git a/arch/arm64/include/uapi/asm/hwcap.h b/arch/arm64/include/uapi/asm/hwcap.h
> index 17c65c8f33cb..2bcd6e4f3474 100644
> --- a/arch/arm64/include/uapi/asm/hwcap.h
> +++ b/arch/arm64/include/uapi/asm/hwcap.h
> @@ -48,5 +48,6 @@
>  #define HWCAP_USCAT		(1 << 25)
>  #define HWCAP_ILRCPC		(1 << 26)
>  #define HWCAP_FLAGM		(1 << 27)
> +#define HWCAP_SSBS		(1 << 28)
>  
>  #endif /* _UAPI__ASM_HWCAP_H */
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 859d63cc99a3..58146d636a83 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -164,6 +164,11 @@ static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
>  	ARM64_FTR_END,
>  };
>  
> +static const struct arm64_ftr_bits ftr_id_aa64pfr1[] = {
> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_SSBS_SHIFT, 4, ID_AA64PFR1_SSBS_PSTATE_NI),
> +	ARM64_FTR_END,
> +};
> +
>  static const struct arm64_ftr_bits ftr_id_aa64mmfr0[] = {
>  	/*
>  	 * We already refuse to boot CPUs that don't support our configured
> @@ -379,7 +384,7 @@ static const struct __ftr_reg_entry {
>  
>  	/* Op1 = 0, CRn = 0, CRm = 4 */
>  	ARM64_FTR_REG(SYS_ID_AA64PFR0_EL1, ftr_id_aa64pfr0),
> -	ARM64_FTR_REG(SYS_ID_AA64PFR1_EL1, ftr_raz),
> +	ARM64_FTR_REG(SYS_ID_AA64PFR1_EL1, ftr_id_aa64pfr1),
>  	ARM64_FTR_REG(SYS_ID_AA64ZFR0_EL1, ftr_raz),
>  
>  	/* Op1 = 0, CRn = 0, CRm = 5 */
> @@ -669,7 +674,6 @@ void update_cpu_features(int cpu,
>  
>  	/*
>  	 * EL3 is not our concern.
> -	 * ID_AA64PFR1 is currently RES0.
>  	 */
>  	taint |= check_update_ftr_reg(SYS_ID_AA64PFR0_EL1, cpu,
>  				      info->reg_id_aa64pfr0, boot->reg_id_aa64pfr0);
> @@ -1254,6 +1258,16 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
>  		.cpu_enable = cpu_enable_hw_dbm,
>  	},
>  #endif
> +	{
> +		.desc = "Speculative Store Bypassing Safe (SSBS)",
> +		.capability = ARM64_SSBS,
> +		.type = ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE,
> +		.matches = has_cpuid_feature,
> +		.sys_reg = SYS_ID_AA64PFR1_EL1,
> +		.field_pos = ID_AA64PFR1_SSBS_SHIFT,
> +		.sign = FTR_UNSIGNED,
> +		.min_field_value = ID_AA64PFR1_SSBS_PSTATE_ONLY,
> +	},
>  	{},
>  };
>  
> @@ -1299,6 +1313,7 @@ static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = {
>  #ifdef CONFIG_ARM64_SVE
>  	HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_SVE_SHIFT, FTR_UNSIGNED, ID_AA64PFR0_SVE, CAP_HWCAP, HWCAP_SVE),
>  #endif
> +	HWCAP_CAP(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_SSBS_SHIFT, FTR_UNSIGNED, ID_AA64PFR1_SSBS_PSTATE_INSNS, CAP_HWCAP, HWCAP_SSBS),
>  	{},
>  };
>  
> diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
> index e9ab7b3ed317..dce971f2c167 100644
> --- a/arch/arm64/kernel/cpuinfo.c
> +++ b/arch/arm64/kernel/cpuinfo.c
> @@ -81,6 +81,7 @@ static const char *const hwcap_str[] = {
>  	"uscat",
>  	"ilrcpc",
>  	"flagm",
> +	"ssbs",
>  	NULL
>  };
>  
> -- 
> 2.20.1
> 

* Re: [RFC/RFT PATCH 01/16] arm64: cpufeature: Detect SSBS and advertise to userspace
  2019-10-08 14:35   ` Mark Rutland
@ 2019-10-08 14:39     ` Ard Biesheuvel
  0 siblings, 0 replies; 24+ messages in thread
From: Ard Biesheuvel @ 2019-10-08 14:39 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Suzuki K Poulose, Catalin Marinas, Will Deacon, Jeremy Linton,
	Andre Przywara, Marc Zyngier, Will Deacon, linux-arm-kernel

On Tue, 8 Oct 2019 at 16:35, Mark Rutland <mark.rutland@arm.com> wrote:
>
> Hi Ard,
>
> I have a few general backport notes below, and one issue with this
> patch/series.
>
> On Fri, Oct 04, 2019 at 02:04:15PM +0200, Ard Biesheuvel wrote:
> > From: Will Deacon <will.deacon@arm.com>
> >
> > Armv8.5 introduces a new PSTATE bit known as Speculative Store Bypass
> > Safe (SSBS) which can be used as a mitigation against Spectre variant 4.
> >
> > Additionally, a CPU may provide instructions to manipulate PSTATE.SSBS
> > directly, so that userspace can toggle the SSBS control without trapping
> > to the kernel.
> >
> > This patch probes for the existence of SSBS and advertises the new instructions
> > to userspace if they exist.
> >
> > Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> > Signed-off-by: Will Deacon <will.deacon@arm.com>
> > Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> > (cherry picked from commit d71be2b6c0e19180b5f80a6d42039cc074a693a2)
>
> This should be the first line, formatted like:
>
> commit d71be2b6c0e19180b5f80a6d42039cc074a693a2 upstream.
>
> ... as documented in Documentation/process/stable-kernel-rules.rst.
> I see that's sometimes not followed, but I think we should stick to that
> unless there's a strong reason not to.
>

Indeed, but I have adopted the

[ Upstream commit xxx ]

style in the meantime in [0] (and [1]).


> If you had to make any substantial changes, I'd recommend a note above
> your S-o-B, e.g.:
>
> [v4.9: Renamed foo() to bar() to match xxyyzz()]
>

Sure.

> ... and regardless I usually add a [vX.Y backport] trailing my S-o-B to
> make it clear from the log where that happened.
>
> For this patch specifically, I believe that you also need to backport
> commit:
>
>   f54dada8274643e3 ("arm64: fix SSBS sanitization")
>
> ... which fixes the SSBS bit being cleared across sigreturns.
>

Coming up in 05/16


> Otherwise, this patch looks good to me.
>


Thanks.


[0] https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/log/?h=ssbs-v4.19-stable
[1] https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/log/?h=ssbs-v4.14-stable

* Re: [RFC/RFT PATCH 11/16] arm64: Always enable spectre-v2 vulnerability detection
  2019-10-04 12:04 ` [RFC/RFT PATCH 11/16] arm64: Always enable spectre-v2 vulnerability detection Ard Biesheuvel
@ 2019-10-08 15:05   ` Mark Rutland
  0 siblings, 0 replies; 24+ messages in thread
From: Mark Rutland @ 2019-10-08 15:05 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Stefan Wahren, Suzuki K Poulose, Catalin Marinas, Will Deacon,
	Jeremy Linton, Andre Przywara, Marc Zyngier, Will Deacon,
	linux-arm-kernel

On Fri, Oct 04, 2019 at 02:04:25PM +0200, Ard Biesheuvel wrote:
> From: Jeremy Linton <jeremy.linton@arm.com>
> 
> Ensure we are always able to detect whether or not the CPU is affected
> by Spectre-v2, so that we can later advertise this to userspace.
> 
> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
> Reviewed-by: Andre Przywara <andre.przywara@arm.com>
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
> Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
> Signed-off-by: Will Deacon <will.deacon@arm.com>
> (cherry picked from commit 8c1e3d2bb44cbb998cb28ff9a18f105fee7f1eb3)
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> ---
>  arch/arm64/kernel/cpu_errata.c | 47 ++++----------------
>  1 file changed, 8 insertions(+), 39 deletions(-)
> 
> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
> index def847873d21..ae7d6761262f 100644
> --- a/arch/arm64/kernel/cpu_errata.c
> +++ b/arch/arm64/kernel/cpu_errata.c

> -/*
> - * Generic helper for handling capabilties with multiple (match,enable) pairs
> - * of call backs, sharing the same capability bit.
> - * Iterate over each entry to see if at least one matches.
> - */
> -static bool __maybe_unused
> -multi_entry_cap_matches(const struct arm64_cpu_capabilities *entry, int scope)
> -{
> -	const struct arm64_cpu_capabilities *caps;
> -
> -	for (caps = entry->match_list; caps->matches; caps++)
> -		if (caps->matches(caps, scope))
> -			return true;
> -
> -	return false;
> -}
> -
> -/*
> - * Take appropriate action for all matching entries in the shared capability
> - * entry.
> - */
> -static void __maybe_unused
> -multi_entry_cap_cpu_enable(const struct arm64_cpu_capabilities *entry)
> -{
> -	const struct arm64_cpu_capabilities *caps;
> -
> -	for (caps = entry->match_list; caps->matches; caps++)
> -		if (caps->matches(caps, SCOPE_LOCAL_CPU) &&
> -		    caps->cpu_enable)
> -			caps->cpu_enable(caps);
> -}
> -

Bad rebase? These weren't removed in the upstream commit, and I can't
spot a reason to do so here.

Thanks,
Mark.

* Re: [RFC/RFT PATCH 00/16] arm64: backport SSBS handling to v4.19-stable
  2019-10-08  8:12 ` [RFC/RFT PATCH 00/16] arm64: backport SSBS handling to v4.19-stable Ard Biesheuvel
@ 2019-10-08 15:09   ` Mark Rutland
  2019-10-08 15:10     ` Ard Biesheuvel
  0 siblings, 1 reply; 24+ messages in thread
From: Mark Rutland @ 2019-10-08 15:09 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Suzuki K Poulose, Catalin Marinas, Jeremy Linton, Andre Przywara,
	Marc Zyngier, Will Deacon, linux-arm-kernel

On Tue, Oct 08, 2019 at 10:12:14AM +0200, Ard Biesheuvel wrote:
> On Fri, 4 Oct 2019 at 14:04, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
> >
> > This is a fairly mechanical backport to v4.19 of the changes needed to support
> > managing the SSBS state, which controls whether Speculative Store Bypass is
> > permitted.
> >
> > I have included Jeremy's sysfs changes as well, since they are equally
> > suitable for stable and made for a much cleaner backport, so it made
> > little sense to handle them separately.
> >
> > These patches are presented for review, and are not being cc'ed to the
> > -stable maintainers just yet.
> >
> > Cc: Will Deacon <will@kernel.org>
> > Cc: Catalin Marinas <catalin.marinas@arm.com>
> > Cc: Marc Zyngier <maz@kernel.org>
> > Cc: Mark Rutland <mark.rutland@arm.com>
> > Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
> > Cc: Jeremy Linton <jeremy.linton@arm.com>
> > Cc: Andre Przywara <andre.przywara@arm.com>
> >
> 
> If nobody has any objections, I'll send these out to -stable end of today.

Other than patch 11, this looks good to me!

Mark.

* Re: [RFC/RFT PATCH 00/16] arm64: backport SSBS handling to v4.19-stable
  2019-10-08 15:09   ` Mark Rutland
@ 2019-10-08 15:10     ` Ard Biesheuvel
  0 siblings, 0 replies; 24+ messages in thread
From: Ard Biesheuvel @ 2019-10-08 15:10 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Suzuki K Poulose, Catalin Marinas, Jeremy Linton, Andre Przywara,
	Marc Zyngier, Will Deacon, linux-arm-kernel

On Tue, 8 Oct 2019 at 17:09, Mark Rutland <mark.rutland@arm.com> wrote:
>
> On Tue, Oct 08, 2019 at 10:12:14AM +0200, Ard Biesheuvel wrote:
> > On Fri, 4 Oct 2019 at 14:04, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
> > >
> > > This is a fairly mechanical backport to v4.19 of the changes needed to support
> > > managing the SSBS state, which controls whether Speculative Store Bypass is
> > > permitted.
> > >
> > > I have included Jeremy's sysfs changes as well, since they are equally
> > > suitable for stable and made for a much cleaner backport, so it made
> > > little sense to handle them separately.
> > >
> > > These patches are presented for review, and are not being cc'ed to the
> > > -stable maintainers just yet.
> > >
> > > Cc: Will Deacon <will@kernel.org>
> > > Cc: Catalin Marinas <catalin.marinas@arm.com>
> > > Cc: Marc Zyngier <maz@kernel.org>
> > > Cc: Mark Rutland <mark.rutland@arm.com>
> > > Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
> > > Cc: Jeremy Linton <jeremy.linton@arm.com>
> > > Cc: Andre Przywara <andre.przywara@arm.com>
> > >
> >
> > If nobody has any objections, I'll send these out to -stable end of today.
>
> Other than patch 11, this looks good to me!
>

Thanks Mark.
