* [PATCH v3 00/13] arm64 kpti hardening and variant 2 workarounds
@ 2018-01-08 17:32 Will Deacon
  2018-01-08 17:32 ` [PATCH v3 01/13] arm64: use RET instruction for exiting the trampoline Will Deacon
                   ` (13 more replies)
  0 siblings, 14 replies; 24+ messages in thread
From: Will Deacon @ 2018-01-08 17:32 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: catalin.marinas, ard.biesheuvel, marc.zyngier, lorenzo.pieralisi,
	christoffer.dall, linux-kernel, shankerd, jnair, Will Deacon

Hi all,

This is version three of the patches previously posted here:

  v1: http://lists.infradead.org/pipermail/linux-arm-kernel/2018-January/551838.html
  v2: http://lists.infradead.org/pipermail/linux-arm-kernel/2018-January/552085.html

Changes since v2:

  * Fix typo in comment
  * Include Falkor hardening from Shanker
  * Add ThunderX2 MIDRs (subsequent patches under review)
  * Avoid applying hardening from preemptible context
  * Fix stack offsets in hyp SMC call

Cheers,

Will

--->8

Jayachandran C (1):
  arm64: cputype: Add MIDR values for Cavium ThunderX2 CPUs

Marc Zyngier (3):
  arm64: Move post_ttbr_update_workaround to C code
  arm64: KVM: Use per-CPU vector when BP hardening is enabled
  arm64: KVM: Make PSCI_VERSION a fast path

Shanker Donthineni (1):
  arm64: Implement branch predictor hardening for Falkor

Will Deacon (8):
  arm64: use RET instruction for exiting the trampoline
  arm64: Kconfig: Reword UNMAP_KERNEL_AT_EL0 kconfig entry
  arm64: Take into account ID_AA64PFR0_EL1.CSV3
  arm64: cpufeature: Pass capability structure to ->enable callback
  drivers/firmware: Expose psci_get_version through psci_ops structure
  arm64: Add skeleton to harden the branch predictor against aliasing
    attacks
  arm64: cputype: Add missing MIDR values for Cortex-A72 and Cortex-A75
  arm64: Implement branch predictor hardening for affected Cortex-A CPUs

 arch/arm/include/asm/kvm_mmu.h     |  10 +++
 arch/arm64/Kconfig                 |  30 +++++--
 arch/arm64/include/asm/assembler.h |  13 ---
 arch/arm64/include/asm/cpucaps.h   |   4 +-
 arch/arm64/include/asm/cputype.h   |   7 ++
 arch/arm64/include/asm/kvm_asm.h   |   2 +
 arch/arm64/include/asm/kvm_mmu.h   |  38 +++++++++
 arch/arm64/include/asm/mmu.h       |  37 +++++++++
 arch/arm64/include/asm/sysreg.h    |   2 +
 arch/arm64/kernel/Makefile         |   4 +
 arch/arm64/kernel/bpi.S            |  87 ++++++++++++++++++++
 arch/arm64/kernel/cpu_errata.c     | 161 +++++++++++++++++++++++++++++++++++++
 arch/arm64/kernel/cpufeature.c     |  13 ++-
 arch/arm64/kernel/entry.S          |  19 ++++-
 arch/arm64/kvm/hyp/entry.S         |  12 +++
 arch/arm64/kvm/hyp/switch.c        |  25 +++++-
 arch/arm64/mm/context.c            |  11 +++
 arch/arm64/mm/fault.c              |  17 ++++
 arch/arm64/mm/proc.S               |   3 +-
 drivers/firmware/psci.c            |   2 +
 include/linux/psci.h               |   1 +
 virt/kvm/arm/arm.c                 |   8 +-
 22 files changed, 474 insertions(+), 32 deletions(-)
 create mode 100644 arch/arm64/kernel/bpi.S

-- 
2.1.4

* [PATCH v3 01/13] arm64: use RET instruction for exiting the trampoline
  2018-01-08 17:32 [PATCH v3 00/13] arm64 kpti hardening and variant 2 workarounds Will Deacon
@ 2018-01-08 17:32 ` Will Deacon
  2018-01-08 17:32 ` [PATCH v3 02/13] arm64: Kconfig: Reword UNMAP_KERNEL_AT_EL0 kconfig entry Will Deacon
                   ` (12 subsequent siblings)
  13 siblings, 0 replies; 24+ messages in thread
From: Will Deacon @ 2018-01-08 17:32 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: catalin.marinas, ard.biesheuvel, marc.zyngier, lorenzo.pieralisi,
	christoffer.dall, linux-kernel, shankerd, jnair, Will Deacon

Speculation attacks against the entry trampoline can potentially resteer
the speculative instruction stream through the indirect branch and into
arbitrary gadgets within the kernel.

This patch defends against these attacks by forcing a misprediction
through the return stack: a dummy BL instruction loads an entry into
the stack, so that the predicted program flow of the subsequent RET
instruction is to a branch-to-self instruction which is finally resolved
as a branch to the kernel vectors with speculation suppressed.

Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/kernel/entry.S | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 031392ee5f47..6ceed4877daf 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -1029,6 +1029,14 @@ alternative_else_nop_endif
 	.if	\regsize == 64
 	msr	tpidrro_el0, x30	// Restored in kernel_ventry
 	.endif
+	/*
+	 * Defend against branch aliasing attacks by pushing a dummy
+	 * entry onto the return stack and using a RET instruction to
+	 * enter the full-fat kernel vectors.
+	 */
+	bl	2f
+	b	.
+2:
 	tramp_map_kernel	x30
 #ifdef CONFIG_RANDOMIZE_BASE
 	adr	x30, tramp_vectors + PAGE_SIZE
@@ -1041,7 +1049,7 @@ alternative_insn isb, nop, ARM64_WORKAROUND_QCOM_FALKOR_E1003
 	msr	vbar_el1, x30
 	add	x30, x30, #(1b - tramp_vectors)
 	isb
-	br	x30
+	ret
 	.endm
 
 	.macro tramp_exit, regsize = 64
-- 
2.1.4

* [PATCH v3 02/13] arm64: Kconfig: Reword UNMAP_KERNEL_AT_EL0 kconfig entry
  2018-01-08 17:32 [PATCH v3 00/13] arm64 kpti hardening and variant 2 workarounds Will Deacon
  2018-01-08 17:32 ` [PATCH v3 01/13] arm64: use RET instruction for exiting the trampoline Will Deacon
@ 2018-01-08 17:32 ` Will Deacon
  2018-01-09 17:17   ` Christoph Hellwig
  2018-01-08 17:32 ` [PATCH v3 03/13] arm64: Take into account ID_AA64PFR0_EL1.CSV3 Will Deacon
                   ` (11 subsequent siblings)
  13 siblings, 1 reply; 24+ messages in thread
From: Will Deacon @ 2018-01-08 17:32 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: catalin.marinas, ard.biesheuvel, marc.zyngier, lorenzo.pieralisi,
	christoffer.dall, linux-kernel, shankerd, jnair, Will Deacon

Although CONFIG_UNMAP_KERNEL_AT_EL0 does make KASLR more robust, it's
actually more useful as a mitigation against speculation attacks that
can leak arbitrary kernel data to userspace.

Reword the Kconfig help message to reflect this, and hide the prompt
behind EXPERT so that the option remains enabled by default for the
majority of users.

Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/Kconfig | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 3af1657fcac3..efaaa3a66b95 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -834,15 +834,14 @@ config FORCE_MAX_ZONEORDER
 	  4M allocations matching the default size used by generic code.
 
 config UNMAP_KERNEL_AT_EL0
-	bool "Unmap kernel when running in userspace (aka \"KAISER\")"
+	bool "Unmap kernel when running in userspace (aka \"KAISER\")" if EXPERT
 	default y
 	help
-	  Some attacks against KASLR make use of the timing difference between
-	  a permission fault which could arise from a page table entry that is
-	  present in the TLB, and a translation fault which always requires a
-	  page table walk. This option defends against these attacks by unmapping
-	  the kernel whilst running in userspace, therefore forcing translation
-	  faults for all of kernel space.
+	  Speculation attacks against some high-performance processors can
+	  be used to bypass MMU permission checks and leak kernel data to
+	  userspace. This can be defended against by unmapping the kernel
+	  when running in userspace, mapping it back in on exception entry
+	  via a trampoline page in the vector table.
 
 	  If unsure, say Y.
 
-- 
2.1.4

* [PATCH v3 03/13] arm64: Take into account ID_AA64PFR0_EL1.CSV3
  2018-01-08 17:32 [PATCH v3 00/13] arm64 kpti hardening and variant 2 workarounds Will Deacon
  2018-01-08 17:32 ` [PATCH v3 01/13] arm64: use RET instruction for exiting the trampoline Will Deacon
  2018-01-08 17:32 ` [PATCH v3 02/13] arm64: Kconfig: Reword UNMAP_KERNEL_AT_EL0 kconfig entry Will Deacon
@ 2018-01-08 17:32 ` Will Deacon
  2018-01-08 17:32 ` [PATCH v3 04/13] arm64: cpufeature: Pass capability structure to ->enable callback Will Deacon
                   ` (10 subsequent siblings)
  13 siblings, 0 replies; 24+ messages in thread
From: Will Deacon @ 2018-01-08 17:32 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: catalin.marinas, ard.biesheuvel, marc.zyngier, lorenzo.pieralisi,
	christoffer.dall, linux-kernel, shankerd, jnair, Will Deacon

For non-KASLR kernels where the KPTI behaviour has not been overridden
on the command line, we can use ID_AA64PFR0_EL1.CSV3 to determine whether
or not we should unmap the kernel whilst running at EL0.

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/sysreg.h | 1 +
 arch/arm64/kernel/cpufeature.c  | 8 +++++++-
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 08cc88574659..ae519bbd3f9e 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -437,6 +437,7 @@
 #define ID_AA64ISAR1_DPB_SHIFT		0
 
 /* id_aa64pfr0 */
+#define ID_AA64PFR0_CSV3_SHIFT		60
 #define ID_AA64PFR0_SVE_SHIFT		32
 #define ID_AA64PFR0_GIC_SHIFT		24
 #define ID_AA64PFR0_ASIMD_SHIFT		20
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 9f0545dfe497..d723fc071f39 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -145,6 +145,7 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
 };
 
 static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV3_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_SVE_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_GIC_SHIFT, 4, 0),
 	S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_ASIMD_SHIFT, 4, ID_AA64PFR0_ASIMD_NI),
@@ -851,6 +852,8 @@ static int __kpti_forced; /* 0: not forced, >0: forced on, <0: forced off */
 static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
 				int __unused)
 {
+	u64 pfr0 = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
+
 	/* Forced on command line? */
 	if (__kpti_forced) {
 		pr_info_once("kernel page table isolation forced %s by command line option\n",
@@ -862,7 +865,9 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
 	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE))
 		return true;
 
-	return false;
+	/* Defer to CPU feature registers */
+	return !cpuid_feature_extract_unsigned_field(pfr0,
+						     ID_AA64PFR0_CSV3_SHIFT);
 }
 
 static int __init parse_kpti(char *str)
@@ -967,6 +972,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 	},
 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 	{
+		.desc = "Kernel page table isolation (KPTI)",
 		.capability = ARM64_UNMAP_KERNEL_AT_EL0,
 		.def_scope = SCOPE_SYSTEM,
 		.matches = unmap_kernel_at_el0,
-- 
2.1.4

* [PATCH v3 04/13] arm64: cpufeature: Pass capability structure to ->enable callback
  2018-01-08 17:32 [PATCH v3 00/13] arm64 kpti hardening and variant 2 workarounds Will Deacon
                   ` (2 preceding siblings ...)
  2018-01-08 17:32 ` [PATCH v3 03/13] arm64: Take into account ID_AA64PFR0_EL1.CSV3 Will Deacon
@ 2018-01-08 17:32 ` Will Deacon
  2018-01-08 17:32 ` [PATCH v3 05/13] drivers/firmware: Expose psci_get_version through psci_ops structure Will Deacon
                   ` (9 subsequent siblings)
  13 siblings, 0 replies; 24+ messages in thread
From: Will Deacon @ 2018-01-08 17:32 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: catalin.marinas, ard.biesheuvel, marc.zyngier, lorenzo.pieralisi,
	christoffer.dall, linux-kernel, shankerd, jnair, Will Deacon

In order to invoke the CPU capability ->matches callback from the ->enable
callback for applying local-CPU workarounds, we need a handle on the
capability structure.

This patch passes a pointer to the capability structure to the ->enable
callback.
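
As an illustration (a sketch with hypothetical names, not code from this
patch), an ->enable callback can now recover the capability and re-run
its ->matches check with local-CPU scope before applying a CPU-local
workaround:

  #include <asm/cpufeature.h>

  /*
   * Hypothetical example callback -- illustrative only; the name and
   * body are not part of this series. The void * argument now carries
   * the capability structure.
   */
  static int example_enable_workaround(void *data)
  {
  	const struct arm64_cpu_capabilities *entry = data;

  	/* Only act on CPUs that actually match this capability. */
  	if (!entry->matches(entry, SCOPE_LOCAL_CPU))
  		return 0;

  	/* ...apply the CPU-local workaround on this CPU... */
  	return 0;
  }

This is the pattern the branch predictor hardening patches later in the
series rely on.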

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/kernel/cpufeature.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index d723fc071f39..55712ab4e3bf 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1152,7 +1152,7 @@ void __init enable_cpu_capabilities(const struct arm64_cpu_capabilities *caps)
 			 * uses an IPI, giving us a PSTATE that disappears when
 			 * we return.
 			 */
-			stop_machine(caps->enable, NULL, cpu_online_mask);
+			stop_machine(caps->enable, (void *)caps, cpu_online_mask);
 		}
 	}
 }
@@ -1195,7 +1195,7 @@ verify_local_cpu_features(const struct arm64_cpu_capabilities *caps)
 			cpu_die_early();
 		}
 		if (caps->enable)
-			caps->enable(NULL);
+			caps->enable((void *)caps);
 	}
 }
 
-- 
2.1.4

* [PATCH v3 05/13] drivers/firmware: Expose psci_get_version through psci_ops structure
  2018-01-08 17:32 [PATCH v3 00/13] arm64 kpti hardening and variant 2 workarounds Will Deacon
                   ` (3 preceding siblings ...)
  2018-01-08 17:32 ` [PATCH v3 04/13] arm64: cpufeature: Pass capability structure to ->enable callback Will Deacon
@ 2018-01-08 17:32 ` Will Deacon
  2018-01-08 17:32 ` [PATCH v3 06/13] arm64: Move post_ttbr_update_workaround to C code Will Deacon
                   ` (8 subsequent siblings)
  13 siblings, 0 replies; 24+ messages in thread
From: Will Deacon @ 2018-01-08 17:32 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: catalin.marinas, ard.biesheuvel, marc.zyngier, lorenzo.pieralisi,
	christoffer.dall, linux-kernel, shankerd, jnair, Will Deacon

Entry into recent versions of ARM Trusted Firmware will invalidate the CPU
branch predictor state in order to protect against aliasing attacks.

This patch exposes the PSCI "VERSION" function via psci_ops, so that it
can be invoked outside of the PSCI driver where necessary.
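
As a usage sketch (illustrative only; the function below is hypothetical
and not part of this series), a caller outside the PSCI driver can now
do something like:

  #include <linux/psci.h>

  /* Hypothetical example: get_version is only populated for PSCI 0.2+
   * firmware, so check it before calling.
   */
  static u32 example_firmware_psci_version(void)
  {
  	if (!psci_ops.get_version)
  		return 0;

  	return psci_ops.get_version();
  }

A later patch in this series performs exactly this check before installing
psci_ops.get_version as a branch predictor invalidation callback.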

Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 drivers/firmware/psci.c | 2 ++
 include/linux/psci.h    | 1 +
 2 files changed, 3 insertions(+)

diff --git a/drivers/firmware/psci.c b/drivers/firmware/psci.c
index d687ca3d5049..8b25d31e8401 100644
--- a/drivers/firmware/psci.c
+++ b/drivers/firmware/psci.c
@@ -496,6 +496,8 @@ static void __init psci_init_migrate(void)
 static void __init psci_0_2_set_functions(void)
 {
 	pr_info("Using standard PSCI v0.2 function IDs\n");
+	psci_ops.get_version = psci_get_version;
+
 	psci_function_id[PSCI_FN_CPU_SUSPEND] =
 					PSCI_FN_NATIVE(0_2, CPU_SUSPEND);
 	psci_ops.cpu_suspend = psci_cpu_suspend;
diff --git a/include/linux/psci.h b/include/linux/psci.h
index bdea1cb5e1db..6306ab10af18 100644
--- a/include/linux/psci.h
+++ b/include/linux/psci.h
@@ -26,6 +26,7 @@ int psci_cpu_init_idle(unsigned int cpu);
 int psci_cpu_suspend_enter(unsigned long index);
 
 struct psci_operations {
+	u32 (*get_version)(void);
 	int (*cpu_suspend)(u32 state, unsigned long entry_point);
 	int (*cpu_off)(u32 state);
 	int (*cpu_on)(unsigned long cpuid, unsigned long entry_point);
-- 
2.1.4

* [PATCH v3 06/13] arm64: Move post_ttbr_update_workaround to C code
  2018-01-08 17:32 [PATCH v3 00/13] arm64 kpti hardening and variant 2 workarounds Will Deacon
                   ` (4 preceding siblings ...)
  2018-01-08 17:32 ` [PATCH v3 05/13] drivers/firmware: Expose psci_get_version through psci_ops structure Will Deacon
@ 2018-01-08 17:32 ` Will Deacon
  2018-01-08 17:32 ` [PATCH v3 07/13] arm64: Add skeleton to harden the branch predictor against aliasing attacks Will Deacon
                   ` (7 subsequent siblings)
  13 siblings, 0 replies; 24+ messages in thread
From: Will Deacon @ 2018-01-08 17:32 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: catalin.marinas, ard.biesheuvel, marc.zyngier, lorenzo.pieralisi,
	christoffer.dall, linux-kernel, shankerd, jnair, Will Deacon

From: Marc Zyngier <marc.zyngier@arm.com>

We will soon need to invoke a CPU-specific function pointer after changing
page tables, so move post_ttbr_update_workaround out into C code to make
this possible.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/assembler.h | 13 -------------
 arch/arm64/kernel/entry.S          |  2 +-
 arch/arm64/mm/context.c            |  9 +++++++++
 arch/arm64/mm/proc.S               |  3 +--
 4 files changed, 11 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index c45bc94f15d0..cee60ce0da52 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -476,17 +476,4 @@ alternative_endif
 	mrs	\rd, sp_el0
 	.endm
 
-/*
- * Errata workaround post TTBRx_EL1 update.
- */
-	.macro	post_ttbr_update_workaround
-#ifdef CONFIG_CAVIUM_ERRATUM_27456
-alternative_if ARM64_WORKAROUND_CAVIUM_27456
-	ic	iallu
-	dsb	nsh
-	isb
-alternative_else_nop_endif
-#endif
-	.endm
-
 #endif	/* __ASM_ASSEMBLER_H */
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 6ceed4877daf..80b539845da6 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -277,7 +277,7 @@ alternative_else_nop_endif
 	 * Cavium erratum 27456 (broadcast TLBI instructions may cause I-cache
 	 * corruption).
 	 */
-	post_ttbr_update_workaround
+	bl	post_ttbr_update_workaround
 	.endif
 1:
 	.if	\el != 0
diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index 1cb3bc92ae5c..5f7097d0cd12 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -239,6 +239,15 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
 		cpu_switch_mm(mm->pgd, mm);
 }
 
+/* Errata workaround post TTBRx_EL1 update. */
+asmlinkage void post_ttbr_update_workaround(void)
+{
+	asm(ALTERNATIVE("nop; nop; nop",
+			"ic iallu; dsb nsh; isb",
+			ARM64_WORKAROUND_CAVIUM_27456,
+			CONFIG_CAVIUM_ERRATUM_27456));
+}
+
 static int asids_init(void)
 {
 	asid_bits = get_cpu_asid_bits();
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 3146dc96f05b..6affb68a9a14 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -145,8 +145,7 @@ ENTRY(cpu_do_switch_mm)
 	isb
 	msr	ttbr0_el1, x0			// now update TTBR0
 	isb
-	post_ttbr_update_workaround
-	ret
+	b	post_ttbr_update_workaround	// Back to C code...
 ENDPROC(cpu_do_switch_mm)
 
 	.pushsection ".idmap.text", "ax"
-- 
2.1.4

* [PATCH v3 07/13] arm64: Add skeleton to harden the branch predictor against aliasing attacks
  2018-01-08 17:32 [PATCH v3 00/13] arm64 kpti hardening and variant 2 workarounds Will Deacon
                   ` (5 preceding siblings ...)
  2018-01-08 17:32 ` [PATCH v3 06/13] arm64: Move post_ttbr_update_workaround to C code Will Deacon
@ 2018-01-08 17:32 ` Will Deacon
  2018-01-09 12:55   ` Philippe Ombredanne
  2018-01-08 17:32 ` [PATCH v3 08/13] arm64: KVM: Use per-CPU vector when BP hardening is enabled Will Deacon
                   ` (6 subsequent siblings)
  13 siblings, 1 reply; 24+ messages in thread
From: Will Deacon @ 2018-01-08 17:32 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: catalin.marinas, ard.biesheuvel, marc.zyngier, lorenzo.pieralisi,
	christoffer.dall, linux-kernel, shankerd, jnair, Will Deacon

Aliasing attacks against CPU branch predictors can allow an attacker to
redirect speculative control flow on some CPUs and potentially divulge
information from one context to another.

This patch adds initial skeleton code behind a new Kconfig option to
enable implementation-specific mitigations against these attacks for
CPUs that are affected.
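
The shape of the mechanism is roughly as follows (a simplified sketch
with hypothetical names, not the code added by this patch): each CPU
carries a callback slot, affected CPUs install an implementation-specific
invalidation routine into it at boot, and the entry paths invoke whatever
is installed.

  #include <linux/percpu.h>

  typedef void (*example_bp_cb_t)(void);

  /* One callback slot per CPU (hypothetical names throughout). */
  static DEFINE_PER_CPU_READ_MOSTLY(example_bp_cb_t, example_bp_hardening_fn);

  /* Boot time: called from the capability ->enable hook on affected CPUs. */
  static void example_install_cb(example_bp_cb_t fn)
  {
  	__this_cpu_write(example_bp_hardening_fn, fn);
  }

  /* Entry paths that need the branch predictor invalidated. */
  static void example_apply_bp_hardening(void)
  {
  	example_bp_cb_t fn = __this_cpu_read(example_bp_hardening_fn);

  	if (fn)
  		fn();
  }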

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/Kconfig               | 17 +++++++++
 arch/arm64/include/asm/cpucaps.h |  3 +-
 arch/arm64/include/asm/mmu.h     | 37 ++++++++++++++++++++
 arch/arm64/include/asm/sysreg.h  |  1 +
 arch/arm64/kernel/Makefile       |  4 +++
 arch/arm64/kernel/bpi.S          | 55 +++++++++++++++++++++++++++++
 arch/arm64/kernel/cpu_errata.c   | 74 ++++++++++++++++++++++++++++++++++++++++
 arch/arm64/kernel/cpufeature.c   |  1 +
 arch/arm64/kernel/entry.S        |  7 ++--
 arch/arm64/mm/context.c          |  2 ++
 arch/arm64/mm/fault.c            | 17 +++++++++
 11 files changed, 215 insertions(+), 3 deletions(-)
 create mode 100644 arch/arm64/kernel/bpi.S

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index efaaa3a66b95..cea44b95187c 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -845,6 +845,23 @@ config UNMAP_KERNEL_AT_EL0
 
 	  If unsure, say Y.
 
+config HARDEN_BRANCH_PREDICTOR
+	bool "Harden the branch predictor against aliasing attacks" if EXPERT
+	default y
+	help
+	  Speculation attacks against some high-performance processors rely on
+	  being able to manipulate the branch predictor for a victim context by
+	  executing aliasing branches in the attacker context.  Such attacks
+	  can be partially mitigated against by clearing internal branch
+	  predictor state and limiting the prediction logic in some situations.
+
+	  This config option will take CPU-specific actions to harden the
+	  branch predictor against aliasing attacks and may rely on specific
+	  instruction sequences or control bits being set by the system
+	  firmware.
+
+	  If unsure, say Y.
+
 menuconfig ARMV8_DEPRECATED
 	bool "Emulate deprecated/obsolete ARMv8 instructions"
 	depends on COMPAT
diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index b4537ffd1018..51616e77fe6b 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -42,7 +42,8 @@
 #define ARM64_HAS_DCPOP				21
 #define ARM64_SVE				22
 #define ARM64_UNMAP_KERNEL_AT_EL0		23
+#define ARM64_HARDEN_BRANCH_PREDICTOR		24
 
-#define ARM64_NCAPS				24
+#define ARM64_NCAPS				25
 
 #endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 6f7bdb89817f..6dd83d75b82a 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -41,6 +41,43 @@ static inline bool arm64_kernel_unmapped_at_el0(void)
 	       cpus_have_const_cap(ARM64_UNMAP_KERNEL_AT_EL0);
 }
 
+typedef void (*bp_hardening_cb_t)(void);
+
+struct bp_hardening_data {
+	int			hyp_vectors_slot;
+	bp_hardening_cb_t	fn;
+};
+
+#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+extern char __bp_harden_hyp_vecs_start[], __bp_harden_hyp_vecs_end[];
+
+DECLARE_PER_CPU_READ_MOSTLY(struct bp_hardening_data, bp_hardening_data);
+
+static inline struct bp_hardening_data *arm64_get_bp_hardening_data(void)
+{
+	return this_cpu_ptr(&bp_hardening_data);
+}
+
+static inline void arm64_apply_bp_hardening(void)
+{
+	struct bp_hardening_data *d;
+
+	if (!cpus_have_const_cap(ARM64_HARDEN_BRANCH_PREDICTOR))
+		return;
+
+	d = arm64_get_bp_hardening_data();
+	if (d->fn)
+		d->fn();
+}
+#else
+static inline struct bp_hardening_data *arm64_get_bp_hardening_data(void)
+{
+	return NULL;
+}
+
+static inline void arm64_apply_bp_hardening(void)	{ }
+#endif	/* CONFIG_HARDEN_BRANCH_PREDICTOR */
+
 extern void paging_init(void);
 extern void bootmem_init(void);
 extern void __iomem *early_io_map(phys_addr_t phys, unsigned long virt);
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index ae519bbd3f9e..871744973ece 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -438,6 +438,7 @@
 
 /* id_aa64pfr0 */
 #define ID_AA64PFR0_CSV3_SHIFT		60
+#define ID_AA64PFR0_CSV2_SHIFT		56
 #define ID_AA64PFR0_SVE_SHIFT		32
 #define ID_AA64PFR0_GIC_SHIFT		24
 #define ID_AA64PFR0_ASIMD_SHIFT		20
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 067baace74a0..0c760db04858 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -53,6 +53,10 @@ arm64-obj-$(CONFIG_ARM64_RELOC_TEST)	+= arm64-reloc-test.o
 arm64-reloc-test-y := reloc_test_core.o reloc_test_syms.o
 arm64-obj-$(CONFIG_CRASH_DUMP)		+= crash_dump.o
 
+ifeq ($(CONFIG_KVM),y)
+arm64-obj-$(CONFIG_HARDEN_BRANCH_PREDICTOR)	+= bpi.o
+endif
+
 obj-y					+= $(arm64-obj-y) vdso/ probes/
 obj-m					+= $(arm64-obj-m)
 head-y					:= head.o
diff --git a/arch/arm64/kernel/bpi.S b/arch/arm64/kernel/bpi.S
new file mode 100644
index 000000000000..06a931eb2673
--- /dev/null
+++ b/arch/arm64/kernel/bpi.S
@@ -0,0 +1,55 @@
+/*
+ * Contains CPU specific branch predictor invalidation sequences
+ *
+ * Copyright (C) 2018 ARM Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/linkage.h>
+
+.macro ventry target
+	.rept 31
+	nop
+	.endr
+	b	\target
+.endm
+
+.macro vectors target
+	ventry \target + 0x000
+	ventry \target + 0x080
+	ventry \target + 0x100
+	ventry \target + 0x180
+
+	ventry \target + 0x200
+	ventry \target + 0x280
+	ventry \target + 0x300
+	ventry \target + 0x380
+
+	ventry \target + 0x400
+	ventry \target + 0x480
+	ventry \target + 0x500
+	ventry \target + 0x580
+
+	ventry \target + 0x600
+	ventry \target + 0x680
+	ventry \target + 0x700
+	ventry \target + 0x780
+.endm
+
+	.align	11
+ENTRY(__bp_harden_hyp_vecs_start)
+	.rept 4
+	vectors __kvm_hyp_vector
+	.endr
+ENTRY(__bp_harden_hyp_vecs_end)
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 0e27f86ee709..16ea5c6f314e 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -46,6 +46,80 @@ static int cpu_enable_trap_ctr_access(void *__unused)
 	return 0;
 }
 
+#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+#include <asm/mmu_context.h>
+#include <asm/cacheflush.h>
+
+DEFINE_PER_CPU_READ_MOSTLY(struct bp_hardening_data, bp_hardening_data);
+
+#ifdef CONFIG_KVM
+static void __copy_hyp_vect_bpi(int slot, const char *hyp_vecs_start,
+				const char *hyp_vecs_end)
+{
+	void *dst = lm_alias(__bp_harden_hyp_vecs_start + slot * SZ_2K);
+	int i;
+
+	for (i = 0; i < SZ_2K; i += 0x80)
+		memcpy(dst + i, hyp_vecs_start, hyp_vecs_end - hyp_vecs_start);
+
+	flush_icache_range((uintptr_t)dst, (uintptr_t)dst + SZ_2K);
+}
+
+static void __install_bp_hardening_cb(bp_hardening_cb_t fn,
+				      const char *hyp_vecs_start,
+				      const char *hyp_vecs_end)
+{
+	static int last_slot = -1;
+	static DEFINE_SPINLOCK(bp_lock);
+	int cpu, slot = -1;
+
+	spin_lock(&bp_lock);
+	for_each_possible_cpu(cpu) {
+		if (per_cpu(bp_hardening_data.fn, cpu) == fn) {
+			slot = per_cpu(bp_hardening_data.hyp_vectors_slot, cpu);
+			break;
+		}
+	}
+
+	if (slot == -1) {
+		last_slot++;
+		BUG_ON(((__bp_harden_hyp_vecs_end - __bp_harden_hyp_vecs_start)
+			/ SZ_2K) <= last_slot);
+		slot = last_slot;
+		__copy_hyp_vect_bpi(slot, hyp_vecs_start, hyp_vecs_end);
+	}
+
+	__this_cpu_write(bp_hardening_data.hyp_vectors_slot, slot);
+	__this_cpu_write(bp_hardening_data.fn, fn);
+	spin_unlock(&bp_lock);
+}
+#else
+static void __install_bp_hardening_cb(bp_hardening_cb_t fn,
+				      const char *hyp_vecs_start,
+				      const char *hyp_vecs_end)
+{
+	__this_cpu_write(bp_hardening_data.fn, fn);
+}
+#endif	/* CONFIG_KVM */
+
+static void  install_bp_hardening_cb(const struct arm64_cpu_capabilities *entry,
+				     bp_hardening_cb_t fn,
+				     const char *hyp_vecs_start,
+				     const char *hyp_vecs_end)
+{
+	u64 pfr0;
+
+	if (!entry->matches(entry, SCOPE_LOCAL_CPU))
+		return;
+
+	pfr0 = read_cpuid(ID_AA64PFR0_EL1);
+	if (cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_CSV2_SHIFT))
+		return;
+
+	__install_bp_hardening_cb(fn, hyp_vecs_start, hyp_vecs_end);
+}
+#endif	/* CONFIG_HARDEN_BRANCH_PREDICTOR */
+
 #define MIDR_RANGE(model, min, max) \
 	.def_scope = SCOPE_LOCAL_CPU, \
 	.matches = is_affected_midr_range, \
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 55712ab4e3bf..9d4d82c11528 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -146,6 +146,7 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
 
 static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV3_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV2_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_SVE_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_GIC_SHIFT, 4, 0),
 	S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_ASIMD_SHIFT, 4, ID_AA64PFR0_ASIMD_NI),
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 80b539845da6..07a7d4db8ec4 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -721,12 +721,15 @@ el0_ia:
 	 * Instruction abort handling
 	 */
 	mrs	x26, far_el1
-	enable_daif
+	enable_da_f
+#ifdef CONFIG_TRACE_IRQFLAGS
+	bl	trace_hardirqs_off
+#endif
 	ct_user_exit
 	mov	x0, x26
 	mov	x1, x25
 	mov	x2, sp
-	bl	do_mem_abort
+	bl	do_el0_ia_bp_hardening
 	b	ret_to_user
 el0_fpsimd_acc:
 	/*
diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index 5f7097d0cd12..d99b36555a16 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -246,6 +246,8 @@ asmlinkage void post_ttbr_update_workaround(void)
 			"ic iallu; dsb nsh; isb",
 			ARM64_WORKAROUND_CAVIUM_27456,
 			CONFIG_CAVIUM_ERRATUM_27456));
+
+	arm64_apply_bp_hardening();
 }
 
 static int asids_init(void)
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 22168cd0dde7..0e671ddf4855 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -708,6 +708,23 @@ asmlinkage void __exception do_mem_abort(unsigned long addr, unsigned int esr,
 	arm64_notify_die("", regs, &info, esr);
 }
 
+asmlinkage void __exception do_el0_ia_bp_hardening(unsigned long addr,
+						   unsigned int esr,
+						   struct pt_regs *regs)
+{
+	/*
+	 * We've taken an instruction abort from userspace and not yet
+	 * re-enabled IRQs. If the address is a kernel address, apply
+	 * BP hardening prior to enabling IRQs and pre-emption.
+	 */
+	if (addr > TASK_SIZE)
+		arm64_apply_bp_hardening();
+
+	local_irq_enable();
+	do_mem_abort(addr, esr, regs);
+}
+
+
 asmlinkage void __exception do_sp_pc_abort(unsigned long addr,
 					   unsigned int esr,
 					   struct pt_regs *regs)
-- 
2.1.4

* [PATCH v3 08/13] arm64: KVM: Use per-CPU vector when BP hardening is enabled
  2018-01-08 17:32 [PATCH v3 00/13] arm64 kpti hardening and variant 2 workarounds Will Deacon
                   ` (6 preceding siblings ...)
  2018-01-08 17:32 ` [PATCH v3 07/13] arm64: Add skeleton to harden the branch predictor against aliasing attacks Will Deacon
@ 2018-01-08 17:32 ` Will Deacon
  2018-01-08 17:32 ` [PATCH v3 09/13] arm64: KVM: Make PSCI_VERSION a fast path Will Deacon
                   ` (5 subsequent siblings)
  13 siblings, 0 replies; 24+ messages in thread
From: Will Deacon @ 2018-01-08 17:32 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: catalin.marinas, ard.biesheuvel, marc.zyngier, lorenzo.pieralisi,
	christoffer.dall, linux-kernel, shankerd, jnair, Will Deacon

From: Marc Zyngier <marc.zyngier@arm.com>

Now that we have per-CPU vectors, let's plug them into the KVM/arm64 code.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm/include/asm/kvm_mmu.h   | 10 ++++++++++
 arch/arm64/include/asm/kvm_mmu.h | 38 ++++++++++++++++++++++++++++++++++++++
 arch/arm64/kvm/hyp/switch.c      |  2 +-
 virt/kvm/arm/arm.c               |  8 +++++++-
 4 files changed, 56 insertions(+), 2 deletions(-)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index fa6f2174276b..eb46fc81a440 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -221,6 +221,16 @@ static inline unsigned int kvm_get_vmid_bits(void)
 	return 8;
 }
 
+static inline void *kvm_get_hyp_vector(void)
+{
+	return kvm_ksym_ref(__kvm_hyp_vector);
+}
+
+static inline int kvm_map_vectors(void)
+{
+	return 0;
+}
+
 #endif	/* !__ASSEMBLY__ */
 
 #endif /* __ARM_KVM_MMU_H__ */
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 672c8684d5c2..2d6d4bd9de52 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -309,5 +309,43 @@ static inline unsigned int kvm_get_vmid_bits(void)
 	return (cpuid_feature_extract_unsigned_field(reg, ID_AA64MMFR1_VMIDBITS_SHIFT) == 2) ? 16 : 8;
 }
 
+#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+#include <asm/mmu.h>
+
+static inline void *kvm_get_hyp_vector(void)
+{
+	struct bp_hardening_data *data = arm64_get_bp_hardening_data();
+	void *vect = kvm_ksym_ref(__kvm_hyp_vector);
+
+	if (data->fn) {
+		vect = __bp_harden_hyp_vecs_start +
+		       data->hyp_vectors_slot * SZ_2K;
+
+		if (!has_vhe())
+			vect = lm_alias(vect);
+	}
+
+	return vect;
+}
+
+static inline int kvm_map_vectors(void)
+{
+	return create_hyp_mappings(kvm_ksym_ref(__bp_harden_hyp_vecs_start),
+				   kvm_ksym_ref(__bp_harden_hyp_vecs_end),
+				   PAGE_HYP_EXEC);
+}
+
+#else
+static inline void *kvm_get_hyp_vector(void)
+{
+	return kvm_ksym_ref(__kvm_hyp_vector);
+}
+
+static inline int kvm_map_vectors(void)
+{
+	return 0;
+}
+#endif
+
 #endif /* __ASSEMBLY__ */
 #endif /* __ARM64_KVM_MMU_H__ */
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index f7c651f3a8c0..8d4f3c9d6dc4 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -52,7 +52,7 @@ static void __hyp_text __activate_traps_vhe(void)
 	val &= ~(CPACR_EL1_FPEN | CPACR_EL1_ZEN);
 	write_sysreg(val, cpacr_el1);
 
-	write_sysreg(__kvm_hyp_vector, vbar_el1);
+	write_sysreg(kvm_get_hyp_vector(), vbar_el1);
 }
 
 static void __hyp_text __activate_traps_nvhe(void)
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 6b60c98a6e22..1c9fdb6db124 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -1158,7 +1158,7 @@ static void cpu_init_hyp_mode(void *dummy)
 	pgd_ptr = kvm_mmu_get_httbr();
 	stack_page = __this_cpu_read(kvm_arm_hyp_stack_page);
 	hyp_stack_ptr = stack_page + PAGE_SIZE;
-	vector_ptr = (unsigned long)kvm_ksym_ref(__kvm_hyp_vector);
+	vector_ptr = (unsigned long)kvm_get_hyp_vector();
 
 	__cpu_init_hyp_mode(pgd_ptr, hyp_stack_ptr, vector_ptr);
 	__cpu_init_stage2();
@@ -1403,6 +1403,12 @@ static int init_hyp_mode(void)
 		goto out_err;
 	}
 
+	err = kvm_map_vectors();
+	if (err) {
+		kvm_err("Cannot map vectors\n");
+		goto out_err;
+	}
+
 	/*
 	 * Map the Hyp stack pages
 	 */
-- 
2.1.4

* [PATCH v3 09/13] arm64: KVM: Make PSCI_VERSION a fast path
  2018-01-08 17:32 [PATCH v3 00/13] arm64 kpti hardening and variant 2 workarounds Will Deacon
                   ` (7 preceding siblings ...)
  2018-01-08 17:32 ` [PATCH v3 08/13] arm64: KVM: Use per-CPU vector when BP hardening is enabled Will Deacon
@ 2018-01-08 17:32 ` Will Deacon
  2018-01-08 17:32 ` [PATCH v3 10/13] arm64: cputype: Add missing MIDR values for Cortex-A72 and Cortex-A75 Will Deacon
                   ` (4 subsequent siblings)
  13 siblings, 0 replies; 24+ messages in thread
From: Will Deacon @ 2018-01-08 17:32 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: catalin.marinas, ard.biesheuvel, marc.zyngier, lorenzo.pieralisi,
	christoffer.dall, linux-kernel, shankerd, jnair, Will Deacon

From: Marc Zyngier <marc.zyngier@arm.com>

For those CPUs that require PSCI to perform a BP invalidation,
going all the way to the PSCI code for not much is a waste of
precious cycles. Let's terminate that call as early as possible.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/kvm/hyp/switch.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index 8d4f3c9d6dc4..4d273f6d0e69 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -17,6 +17,7 @@
 
 #include <linux/types.h>
 #include <linux/jump_label.h>
+#include <uapi/linux/psci.h>
 
 #include <asm/kvm_asm.h>
 #include <asm/kvm_emulate.h>
@@ -341,6 +342,18 @@ int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 	if (exit_code == ARM_EXCEPTION_TRAP && !__populate_fault_info(vcpu))
 		goto again;
 
+	if (exit_code == ARM_EXCEPTION_TRAP &&
+	    (kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_HVC64 ||
+	     kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_HVC32) &&
+	    vcpu_get_reg(vcpu, 0) == PSCI_0_2_FN_PSCI_VERSION) {
+		u64 val = PSCI_RET_NOT_SUPPORTED;
+		if (test_bit(KVM_ARM_VCPU_PSCI_0_2, vcpu->arch.features))
+			val = 2;
+
+		vcpu_set_reg(vcpu, 0, val);
+		goto again;
+	}
+
 	if (static_branch_unlikely(&vgic_v2_cpuif_trap) &&
 	    exit_code == ARM_EXCEPTION_TRAP) {
 		bool valid;
-- 
2.1.4

* [PATCH v3 10/13] arm64: cputype: Add missing MIDR values for Cortex-A72 and Cortex-A75
  2018-01-08 17:32 [PATCH v3 00/13] arm64 kpti hardening and variant 2 workarounds Will Deacon
                   ` (8 preceding siblings ...)
  2018-01-08 17:32 ` [PATCH v3 09/13] arm64: KVM: Make PSCI_VERSION a fast path Will Deacon
@ 2018-01-08 17:32 ` Will Deacon
  2018-01-08 17:32 ` [PATCH v3 11/13] arm64: Implement branch predictor hardening for affected Cortex-A CPUs Will Deacon
                   ` (3 subsequent siblings)
  13 siblings, 0 replies; 24+ messages in thread
From: Will Deacon @ 2018-01-08 17:32 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: catalin.marinas, ard.biesheuvel, marc.zyngier, lorenzo.pieralisi,
	christoffer.dall, linux-kernel, shankerd, jnair, Will Deacon

Hook up MIDR values for the Cortex-A72 and Cortex-A75 CPUs, since they
will soon need MIDR matches for hardening the branch predictor.

Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/cputype.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h
index 235e77d98261..84385b94e70b 100644
--- a/arch/arm64/include/asm/cputype.h
+++ b/arch/arm64/include/asm/cputype.h
@@ -79,8 +79,10 @@
 #define ARM_CPU_PART_AEM_V8		0xD0F
 #define ARM_CPU_PART_FOUNDATION		0xD00
 #define ARM_CPU_PART_CORTEX_A57		0xD07
+#define ARM_CPU_PART_CORTEX_A72		0xD08
 #define ARM_CPU_PART_CORTEX_A53		0xD03
 #define ARM_CPU_PART_CORTEX_A73		0xD09
+#define ARM_CPU_PART_CORTEX_A75		0xD0A
 
 #define APM_CPU_PART_POTENZA		0x000
 
@@ -94,7 +96,9 @@
 
 #define MIDR_CORTEX_A53 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A53)
 #define MIDR_CORTEX_A57 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A57)
+#define MIDR_CORTEX_A72 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A72)
 #define MIDR_CORTEX_A73 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A73)
+#define MIDR_CORTEX_A75 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A75)
 #define MIDR_THUNDERX	MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX)
 #define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX)
 #define MIDR_THUNDERX_83XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_83XX)
-- 
2.1.4

* [PATCH v3 11/13] arm64: Implement branch predictor hardening for affected Cortex-A CPUs
  2018-01-08 17:32 [PATCH v3 00/13] arm64 kpti hardening and variant 2 workarounds Will Deacon
                   ` (9 preceding siblings ...)
  2018-01-08 17:32 ` [PATCH v3 10/13] arm64: cputype: Add missing MIDR values for Cortex-A72 and Cortex-A75 Will Deacon
@ 2018-01-08 17:32 ` Will Deacon
  2018-01-09 16:12   ` Suzuki K Poulose
  2018-01-08 17:32 ` [PATCH v3 12/13] arm64: Implement branch predictor hardening for Falkor Will Deacon
                   ` (2 subsequent siblings)
  13 siblings, 1 reply; 24+ messages in thread
From: Will Deacon @ 2018-01-08 17:32 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: catalin.marinas, ard.biesheuvel, marc.zyngier, lorenzo.pieralisi,
	christoffer.dall, linux-kernel, shankerd, jnair, Will Deacon

Cortex-A57, A72, A73 and A75 are susceptible to branch predictor aliasing
and can theoretically be attacked by malicious code.

This patch implements a PSCI-based mitigation for these CPUs when available.
The call into firmware will invalidate the branch predictor state, preventing
any malicious entries from affecting other victim contexts.

Co-developed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/kernel/bpi.S        | 24 ++++++++++++++++++++++++
 arch/arm64/kernel/cpu_errata.c | 42 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 66 insertions(+)

diff --git a/arch/arm64/kernel/bpi.S b/arch/arm64/kernel/bpi.S
index 06a931eb2673..dec95bd82e31 100644
--- a/arch/arm64/kernel/bpi.S
+++ b/arch/arm64/kernel/bpi.S
@@ -53,3 +53,27 @@ ENTRY(__bp_harden_hyp_vecs_start)
 	vectors __kvm_hyp_vector
 	.endr
 ENTRY(__bp_harden_hyp_vecs_end)
+ENTRY(__psci_hyp_bp_inval_start)
+	sub	sp, sp, #(8 * 18)
+	stp	x16, x17, [sp, #(16 * 0)]
+	stp	x14, x15, [sp, #(16 * 1)]
+	stp	x12, x13, [sp, #(16 * 2)]
+	stp	x10, x11, [sp, #(16 * 3)]
+	stp	x8, x9, [sp, #(16 * 4)]
+	stp	x6, x7, [sp, #(16 * 5)]
+	stp	x4, x5, [sp, #(16 * 6)]
+	stp	x2, x3, [sp, #(16 * 7)]
+	stp	x0, x1, [sp, #(16 * 8)]
+	mov	x0, #0x84000000
+	smc	#0
+	ldp	x16, x17, [sp, #(16 * 0)]
+	ldp	x14, x15, [sp, #(16 * 1)]
+	ldp	x12, x13, [sp, #(16 * 2)]
+	ldp	x10, x11, [sp, #(16 * 3)]
+	ldp	x8, x9, [sp, #(16 * 4)]
+	ldp	x6, x7, [sp, #(16 * 5)]
+	ldp	x4, x5, [sp, #(16 * 6)]
+	ldp	x2, x3, [sp, #(16 * 7)]
+	ldp	x0, x1, [sp, #(16 * 8)]
+	add	sp, sp, #(8 * 18)
+ENTRY(__psci_hyp_bp_inval_end)
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 16ea5c6f314e..cb0fb3796bb8 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -53,6 +53,8 @@ static int cpu_enable_trap_ctr_access(void *__unused)
 DEFINE_PER_CPU_READ_MOSTLY(struct bp_hardening_data, bp_hardening_data);
 
 #ifdef CONFIG_KVM
+extern char __psci_hyp_bp_inval_start[], __psci_hyp_bp_inval_end[];
+
 static void __copy_hyp_vect_bpi(int slot, const char *hyp_vecs_start,
 				const char *hyp_vecs_end)
 {
@@ -94,6 +96,9 @@ static void __install_bp_hardening_cb(bp_hardening_cb_t fn,
 	spin_unlock(&bp_lock);
 }
 #else
+#define __psci_hyp_bp_inval_start	NULL
+#define __psci_hyp_bp_inval_end		NULL
+
 static void __install_bp_hardening_cb(bp_hardening_cb_t fn,
 				      const char *hyp_vecs_start,
 				      const char *hyp_vecs_end)
@@ -118,6 +123,21 @@ static void  install_bp_hardening_cb(const struct arm64_cpu_capabilities *entry,
 
 	__install_bp_hardening_cb(fn, hyp_vecs_start, hyp_vecs_end);
 }
+
+#include <linux/psci.h>
+
+static int enable_psci_bp_hardening(void *data)
+{
+	const struct arm64_cpu_capabilities *entry = data;
+
+	if (psci_ops.get_version)
+		install_bp_hardening_cb(entry,
+				       (bp_hardening_cb_t)psci_ops.get_version,
+				       __psci_hyp_bp_inval_start,
+				       __psci_hyp_bp_inval_end);
+
+	return 0;
+}
 #endif	/* CONFIG_HARDEN_BRANCH_PREDICTOR */
 
 #define MIDR_RANGE(model, min, max) \
@@ -261,6 +281,28 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 		MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
 	},
 #endif
+#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+	{
+		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
+		MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
+		.enable = enable_psci_bp_hardening,
+	},
+	{
+		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
+		MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
+		.enable = enable_psci_bp_hardening,
+	},
+	{
+		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
+		MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
+		.enable = enable_psci_bp_hardening,
+	},
+	{
+		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
+		MIDR_ALL_VERSIONS(MIDR_CORTEX_A75),
+		.enable = enable_psci_bp_hardening,
+	},
+#endif
 	{
 	}
 };
-- 
2.1.4

* [PATCH v3 12/13] arm64: Implement branch predictor hardening for Falkor
  2018-01-08 17:32 [PATCH v3 00/13] arm64 kpti hardening and variant 2 workarounds Will Deacon
                   ` (10 preceding siblings ...)
  2018-01-08 17:32 ` [PATCH v3 11/13] arm64: Implement branch predictor hardening for affected Cortex-A CPUs Will Deacon
@ 2018-01-08 17:32 ` Will Deacon
  2018-01-12 17:58   ` Shanker Donthineni
  2018-01-08 17:32 ` [PATCH v3 13/13] arm64: cputype: Add MIDR values for Cavium ThunderX2 CPUs Will Deacon
  2018-01-08 18:53 ` [PATCH v3 00/13] arm64 kpti hardening and variant 2 workarounds Catalin Marinas
  13 siblings, 1 reply; 24+ messages in thread
From: Will Deacon @ 2018-01-08 17:32 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: catalin.marinas, ard.biesheuvel, marc.zyngier, lorenzo.pieralisi,
	christoffer.dall, linux-kernel, shankerd, jnair, Will Deacon

From: Shanker Donthineni <shankerd@codeaurora.org>

Falkor is susceptible to branch predictor aliasing and can
theoretically be attacked by malicious code. This patch
implements a mitigation for these attacks, preventing any
malicious entries from affecting other victim contexts.

Signed-off-by: Shanker Donthineni <shankerd@codeaurora.org>
[will: fix label name when !CONFIG_KVM]
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/cpucaps.h |  3 ++-
 arch/arm64/include/asm/kvm_asm.h |  2 ++
 arch/arm64/kernel/bpi.S          |  8 +++++++
 arch/arm64/kernel/cpu_errata.c   | 49 ++++++++++++++++++++++++++++++++++++++--
 arch/arm64/kvm/hyp/entry.S       | 12 ++++++++++
 arch/arm64/kvm/hyp/switch.c      | 10 ++++++++
 6 files changed, 81 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index 51616e77fe6b..7049b4802587 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -43,7 +43,8 @@
 #define ARM64_SVE				22
 #define ARM64_UNMAP_KERNEL_AT_EL0		23
 #define ARM64_HARDEN_BRANCH_PREDICTOR		24
+#define ARM64_HARDEN_BP_POST_GUEST_EXIT		25
 
-#define ARM64_NCAPS				25
+#define ARM64_NCAPS				26
 
 #endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index ab4d0a926043..24961b732e65 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -68,6 +68,8 @@ extern u32 __kvm_get_mdcr_el2(void);
 
 extern u32 __init_stage2_translation(void);
 
+extern void __qcom_hyp_sanitize_btac_predictors(void);
+
 #endif
 
 #endif /* __ARM_KVM_ASM_H__ */
diff --git a/arch/arm64/kernel/bpi.S b/arch/arm64/kernel/bpi.S
index dec95bd82e31..76225c2611ea 100644
--- a/arch/arm64/kernel/bpi.S
+++ b/arch/arm64/kernel/bpi.S
@@ -77,3 +77,11 @@ ENTRY(__psci_hyp_bp_inval_start)
 	ldp	x0, x1, [sp, #(16 * 8)]
 	add	sp, sp, #(8 * 18)
 ENTRY(__psci_hyp_bp_inval_end)
+
+ENTRY(__qcom_hyp_sanitize_link_stack_start)
+	stp     x29, x30, [sp, #-16]!
+	.rept	16
+	bl	. + 4
+	.endr
+	ldp	x29, x30, [sp], #16
+ENTRY(__qcom_hyp_sanitize_link_stack_end)
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index cb0fb3796bb8..7b4efde087fc 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -54,6 +54,8 @@ DEFINE_PER_CPU_READ_MOSTLY(struct bp_hardening_data, bp_hardening_data);
 
 #ifdef CONFIG_KVM
 extern char __psci_hyp_bp_inval_start[], __psci_hyp_bp_inval_end[];
+extern char __qcom_hyp_sanitize_link_stack_start[];
+extern char __qcom_hyp_sanitize_link_stack_end[];
 
 static void __copy_hyp_vect_bpi(int slot, const char *hyp_vecs_start,
 				const char *hyp_vecs_end)
@@ -96,8 +98,10 @@ static void __install_bp_hardening_cb(bp_hardening_cb_t fn,
 	spin_unlock(&bp_lock);
 }
 #else
-#define __psci_hyp_bp_inval_start	NULL
-#define __psci_hyp_bp_inval_end		NULL
+#define __psci_hyp_bp_inval_start		NULL
+#define __psci_hyp_bp_inval_end			NULL
+#define __qcom_hyp_sanitize_link_stack_start	NULL
+#define __qcom_hyp_sanitize_link_stack_end	NULL
 
 static void __install_bp_hardening_cb(bp_hardening_cb_t fn,
 				      const char *hyp_vecs_start,
@@ -138,6 +142,29 @@ static int enable_psci_bp_hardening(void *data)
 
 	return 0;
 }
+
+static void qcom_link_stack_sanitization(void)
+{
+	u64 tmp;
+
+	asm volatile("mov	%0, x30		\n"
+		     ".rept	16		\n"
+		     "bl	. + 4		\n"
+		     ".endr			\n"
+		     "mov	x30, %0		\n"
+		     : "=&r" (tmp));
+}
+
+static int qcom_enable_link_stack_sanitization(void *data)
+{
+	const struct arm64_cpu_capabilities *entry = data;
+
+	install_bp_hardening_cb(entry, qcom_link_stack_sanitization,
+				__qcom_hyp_sanitize_link_stack_start,
+				__qcom_hyp_sanitize_link_stack_end);
+
+	return 0;
+}
 #endif	/* CONFIG_HARDEN_BRANCH_PREDICTOR */
 
 #define MIDR_RANGE(model, min, max) \
@@ -302,6 +329,24 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 		MIDR_ALL_VERSIONS(MIDR_CORTEX_A75),
 		.enable = enable_psci_bp_hardening,
 	},
+	{
+		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
+		MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR_V1),
+		.enable = qcom_enable_link_stack_sanitization,
+	},
+	{
+		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
+		MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR),
+		.enable = qcom_enable_link_stack_sanitization,
+	},
+	{
+		.capability = ARM64_HARDEN_BP_POST_GUEST_EXIT,
+		MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR_V1),
+	},
+	{
+		.capability = ARM64_HARDEN_BP_POST_GUEST_EXIT,
+		MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR),
+	},
 #endif
 	{
 	}
diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
index 12ee62d6d410..9c45c6af1f58 100644
--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -196,3 +196,15 @@ alternative_endif
 
 	eret
 ENDPROC(__fpsimd_guest_restore)
+
+ENTRY(__qcom_hyp_sanitize_btac_predictors)
+	/**
+	 * Call SMC64 with Silicon provider serviceID 23<<8 (0xc2001700)
+	 * 0xC2000000-0xC200FFFF: assigned to SiP Service Calls
+	 * b15-b0: contains SiP functionID
+	 */
+	movz    x0, #0x1700
+	movk    x0, #0xc200, lsl #16
+	smc     #0
+	ret
+ENDPROC(__qcom_hyp_sanitize_btac_predictors)
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index 4d273f6d0e69..7e373791fad1 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -406,6 +406,16 @@ int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 		/* 0 falls through to be handled out of EL2 */
 	}
 
+	if (cpus_have_const_cap(ARM64_HARDEN_BP_POST_GUEST_EXIT)) {
+		u32 midr = read_cpuid_id();
+
+		/* Apply BTAC predictors mitigation to all Falkor chips */
+		if (((midr & MIDR_CPU_MODEL_MASK) == MIDR_QCOM_FALKOR) ||
+		    ((midr & MIDR_CPU_MODEL_MASK) == MIDR_QCOM_FALKOR_V1)) {
+			__qcom_hyp_sanitize_btac_predictors();
+		}
+	}
+
 	fp_enabled = __fpsimd_enabled();
 
 	__sysreg_save_guest_state(guest_ctxt);
-- 
2.1.4

* [PATCH v3 13/13] arm64: cputype: Add MIDR values for Cavium ThunderX2 CPUs
  2018-01-08 17:32 [PATCH v3 00/13] arm64 kpti hardening and variant 2 workarounds Will Deacon
                   ` (11 preceding siblings ...)
  2018-01-08 17:32 ` [PATCH v3 12/13] arm64: Implement branch predictor hardening for Falkor Will Deacon
@ 2018-01-08 17:32 ` Will Deacon
  2018-01-08 18:53 ` [PATCH v3 00/13] arm64 kpti hardening and variant 2 workarounds Catalin Marinas
  13 siblings, 0 replies; 24+ messages in thread
From: Will Deacon @ 2018-01-08 17:32 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: catalin.marinas, ard.biesheuvel, marc.zyngier, lorenzo.pieralisi,
	christoffer.dall, linux-kernel, shankerd, jnair, Will Deacon

From: Jayachandran C <jnair@caviumnetworks.com>

Add the older Broadcom ID as well as the new Cavium ID for ThunderX2
CPUs.

Signed-off-by: Jayachandran C <jnair@caviumnetworks.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/cputype.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h
index 84385b94e70b..cce5735a677c 100644
--- a/arch/arm64/include/asm/cputype.h
+++ b/arch/arm64/include/asm/cputype.h
@@ -89,6 +89,7 @@
 #define CAVIUM_CPU_PART_THUNDERX	0x0A1
 #define CAVIUM_CPU_PART_THUNDERX_81XX	0x0A2
 #define CAVIUM_CPU_PART_THUNDERX_83XX	0x0A3
+#define CAVIUM_CPU_PART_THUNDERX2	0x0AF
 
 #define BRCM_CPU_PART_VULCAN		0x516
 
@@ -102,6 +103,8 @@
 #define MIDR_THUNDERX	MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX)
 #define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX)
 #define MIDR_THUNDERX_83XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_83XX)
+#define MIDR_CAVIUM_THUNDERX2 MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX2)
+#define MIDR_BRCM_VULCAN MIDR_CPU_MODEL(ARM_CPU_IMP_BRCM, BRCM_CPU_PART_VULCAN)
 #define MIDR_QCOM_FALKOR_V1 MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_FALKOR_V1)
 
 #ifndef __ASSEMBLY__
-- 
2.1.4

* Re: [PATCH v3 00/13] arm64 kpti hardening and variant 2 workarounds
  2018-01-08 17:32 [PATCH v3 00/13] arm64 kpti hardening and variant 2 workarounds Will Deacon
                   ` (12 preceding siblings ...)
  2018-01-08 17:32 ` [PATCH v3 13/13] arm64: cputype: Add MIDR values for Cavium ThunderX2 CPUs Will Deacon
@ 2018-01-08 18:53 ` Catalin Marinas
  2018-01-09 14:07   ` Matthias Brugger
  13 siblings, 1 reply; 24+ messages in thread
From: Catalin Marinas @ 2018-01-08 18:53 UTC (permalink / raw)
  To: Will Deacon
  Cc: linux-arm-kernel, lorenzo.pieralisi, ard.biesheuvel,
	marc.zyngier, linux-kernel, shankerd, christoffer.dall, jnair

On Mon, Jan 08, 2018 at 05:32:25PM +0000, Will Deacon wrote:
> Jayachandran C (1):
>   arm64: cputype: Add MIDR values for Cavium ThunderX2 CPUs
> 
> Marc Zyngier (3):
>   arm64: Move post_ttbr_update_workaround to C code
>   arm64: KVM: Use per-CPU vector when BP hardening is enabled
>   arm64: KVM: Make PSCI_VERSION a fast path
> 
> Shanker Donthineni (1):
>   arm64: Implement branch predictor hardening for Falkor
> 
> Will Deacon (8):
>   arm64: use RET instruction for exiting the trampoline
>   arm64: Kconfig: Reword UNMAP_KERNEL_AT_EL0 kconfig entry
>   arm64: Take into account ID_AA64PFR0_EL1.CSV3
>   arm64: cpufeature: Pass capability structure to ->enable callback
>   drivers/firmware: Expose psci_get_version through psci_ops structure
>   arm64: Add skeleton to harden the branch predictor against aliasing
>     attacks
>   arm64: cputype: Add missing MIDR values for Cortex-A72 and Cortex-A75
>   arm64: Implement branch predictor hardening for affected Cortex-A CPUs

I'm queuing these into the arm64 for-next/core (after some overnight
testing). Any additional fixes should be done on top.

Thanks.

-- 
Catalin

* Re: [PATCH v3 07/13] arm64: Add skeleton to harden the branch predictor against aliasing attacks
  2018-01-08 17:32 ` [PATCH v3 07/13] arm64: Add skeleton to harden the branch predictor against aliasing attacks Will Deacon
@ 2018-01-09 12:55   ` Philippe Ombredanne
  0 siblings, 0 replies; 24+ messages in thread
From: Philippe Ombredanne @ 2018-01-09 12:55 UTC (permalink / raw)
  To: Will Deacon
  Cc: moderated list:ARM/FREESCALE IMX / MXC ARM ARCHITECTURE,
	Catalin Marinas, ard.biesheuvel, marc.zyngier, Lorenzo Pieralisi,
	Christoffer Dall, LKML, shankerd, jnair

Dear Will,

On Mon, Jan 8, 2018 at 6:32 PM, Will Deacon <will.deacon@arm.com> wrote:
> Aliasing attacks against CPU branch predictors can allow an attacker to
> redirect speculative control flow on some CPUs and potentially divulge
> information from one context to another.
>
> This patch adds initial skeleton code behind a new Kconfig option to
> enable implementation-specific mitigations against these attacks for
> CPUs that are affected.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> Signed-off-by: Will Deacon <will.deacon@arm.com>

<snip>


> --- /dev/null
> +++ b/arch/arm64/kernel/bpi.S
> @@ -0,0 +1,55 @@
> +/*
> + * Contains CPU specific branch predictor invalidation sequences
> + *
> + * Copyright (C) 2018 ARM Ltd.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program.  If not, see <http://www.gnu.org/licenses/>.
> + */

Could you consider using the new SPDX tags [1] instead of this long legalese?
Thanks!

[1] https://lkml.org/lkml/2017/12/28/323
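
For illustration, a sketch of what the top of bpi.S could then look like,
assuming the intent really is GPL-2.0-only as the notice above suggests
(the exact identifier is of course up to you):

/* SPDX-License-Identifier: GPL-2.0 */
/*
 * Contains CPU specific branch predictor invalidation sequences
 *
 * Copyright (C) 2018 ARM Ltd.
 */
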
-- 
Cordially
Philippe Ombredanne

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v3 00/13] arm64 kpti hardening and variant 2 workarounds
  2018-01-08 18:53 ` [PATCH v3 00/13] arm64 kpti hardening and variant 2 workarounds Catalin Marinas
@ 2018-01-09 14:07   ` Matthias Brugger
  2018-01-12 15:58     ` Catalin Marinas
  0 siblings, 1 reply; 24+ messages in thread
From: Matthias Brugger @ 2018-01-09 14:07 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon
  Cc: lorenzo.pieralisi, christoffer.dall, ard.biesheuvel,
	marc.zyngier, linux-kernel, linux-arm-kernel, shankerd, jnair,
	Yousaf Kaukab

Hi Catalin,

On 01/08/2018 07:53 PM, Catalin Marinas wrote:
> On Mon, Jan 08, 2018 at 05:32:25PM +0000, Will Deacon wrote:
>> Jayachandran C (1):
>>   arm64: cputype: Add MIDR values for Cavium ThunderX2 CPUs
>>
>> Marc Zyngier (3):
>>   arm64: Move post_ttbr_update_workaround to C code
>>   arm64: KVM: Use per-CPU vector when BP hardening is enabled
>>   arm64: KVM: Make PSCI_VERSION a fast path
>>
>> Shanker Donthineni (1):
>>   arm64: Implement branch predictor hardening for Falkor
>>
>> Will Deacon (8):
>>   arm64: use RET instruction for exiting the trampoline
>>   arm64: Kconfig: Reword UNMAP_KERNEL_AT_EL0 kconfig entry
>>   arm64: Take into account ID_AA64PFR0_EL1.CSV3
>>   arm64: cpufeature: Pass capability structure to ->enable callback
>>   drivers/firmware: Expose psci_get_version through psci_ops structure
>>   arm64: Add skeleton to harden the branch predictor against aliasing
>>     attacks
>>   arm64: cputype: Add missing MIDR values for Cortex-A72 and Cortex-A75
>>   arm64: Implement branch predictor hardening for affected Cortex-A CPUs
> 
> I'm queuing these into the arm64 for-next/core (after some overnight
> testing). Any additional fixes should be done on top.
> 

I see these patches are not yet pushed to:
git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git

Did you hit any problems in the overnight tests?

Regards,
Matthias

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re:  [PATCH v3 11/13] arm64: Implement branch predictor hardening for affected Cortex-A CPUs
  2018-01-08 17:32 ` [PATCH v3 11/13] arm64: Implement branch predictor hardening for affected Cortex-A CPUs Will Deacon
@ 2018-01-09 16:12   ` Suzuki K Poulose
  2018-01-15 11:51     ` Marc Zyngier
  2018-01-15 18:01     ` Catalin Marinas
  0 siblings, 2 replies; 24+ messages in thread
From: Suzuki K Poulose @ 2018-01-09 16:12 UTC (permalink / raw)
  To: will.deacon
  Cc: linux-arm-kernel, linux-kernel, marc.zyngier, lorenzo.pieralisi,
	catalin.marinas, mark.rutland, ard.biesheuvel, shankerd,
	christoffer.dall, jnair, suzuki.poulose

On 08/01/18 17:32, Will Deacon wrote:
> Cortex-A57, A72, A73 and A75 are susceptible to branch predictor aliasing
> and can theoretically be attacked by malicious code.
> 
> This patch implements a PSCI-based mitigation for these CPUs when available.
> The call into firmware will invalidate the branch predictor state, preventing
> any malicious entries from affecting other victim contexts.
> 
> Co-developed-by: Marc Zyngier <marc.zyngier@arm.com>
> Signed-off-by: Will Deacon <will.deacon@arm.com>

Will, Marc,

> +#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
> +	{
> +		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
> +		MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
> +		.enable = enable_psci_bp_hardening,
> +	},
> +	{
> +		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
> +		MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
> +		.enable = enable_psci_bp_hardening,
> +	},
> +	{
> +		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
> +		MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
> +		.enable = enable_psci_bp_hardening,
> +	},
> +	{
> +		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
> +		MIDR_ALL_VERSIONS(MIDR_CORTEX_A75),
> +		.enable = enable_psci_bp_hardening,
> +	},
> +#endif

The introduction of multiple entries for the same capability breaks
some assumptions in this_cpu_has_cap() and verify_local_cpu_features(),
as they all stop at the first entry whose "capability" field matches and
could return wrong results. We need something like the following to make
this work, should someone add a duplicate feature entry or use
this_cpu_has_cap() on one of the errata.
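
To make the failure mode concrete, here is a minimal userspace sketch
(illustrative only; the struct, match functions and capability number are
simplified stand-ins for the real arm64_cpu_capabilities machinery) showing
why returning the result of the first entry whose capability number matches
goes wrong once the same capability is listed twice with different matches()
implementations:

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-in for struct arm64_cpu_capabilities. */
struct cap_entry {
	int capability;
	bool (*matches)(void);
};

static bool never_matches(void)  { return false; }	/* e.g. wrong MIDR */
static bool always_matches(void) { return true; }	/* e.g. this CPU's MIDR */

#define EXAMPLE_CAP 24

/* Two entries for the same capability, as in the Cortex-A table above. */
static const struct cap_entry table[] = {
	{ EXAMPLE_CAP, never_matches },		/* first entry: no match */
	{ EXAMPLE_CAP, always_matches },	/* second entry: match */
	{ 0, NULL },				/* sentinel */
};

/* Old behaviour: trust the first entry with a matching capability number. */
static bool has_cap_first_entry_only(int cap)
{
	const struct cap_entry *e;

	for (e = table; e->matches; e++)
		if (e->capability == cap)
			return e->matches();	/* stops too early */
	return false;
}

/* Fixed behaviour: any entry for the capability may provide the match. */
static bool has_cap_any_entry(int cap)
{
	const struct cap_entry *e;

	for (e = table; e->matches; e++)
		if (e->capability == cap && e->matches())
			return true;
	return false;
}

int main(void)
{
	printf("first entry only: %d\n", has_cap_first_entry_only(EXAMPLE_CAP)); /* 0 */
	printf("any entry:        %d\n", has_cap_any_entry(EXAMPLE_CAP));        /* 1 */
	return 0;
}

The patch below makes __this_cpu_has_cap() behave like the second variant and
reuses it from verify_local_cpu_features().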

---8>---

arm64: capabilities: Handle duplicate entries for a capability

Sometimes a single capability could be listed multiple times with
differing matches(), e.g., CPU errata for different MIDR versions.
This breaks verify_local_cpu_features() and this_cpu_has_cap() as
we stop checking for a capability on a CPU with the first
entry in the given table, which is not sufficient. Make sure we
run the checks for all entries of the same capability. We do
this by fixing __this_cpu_has_cap() to run through all the
entries in the given table for a match and reuse it for
verify_local_cpu_features().

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
---
 arch/arm64/kernel/cpufeature.c | 44 ++++++++++++++++++++++--------------------
 1 file changed, 23 insertions(+), 21 deletions(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 862a417ca0e2..0c43447f7406 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1120,6 +1120,26 @@ static void __init setup_elf_hwcaps(const struct arm64_cpu_capabilities *hwcaps)
 			cap_set_elf_hwcap(hwcaps);
 }
 
+/*
+ * Check if the current CPU has a given feature capability.
+ * Should be called from non-preemptible context.
+ */
+static bool __this_cpu_has_cap(const struct arm64_cpu_capabilities *cap_array,
+			       unsigned int cap)
+{
+	const struct arm64_cpu_capabilities *caps;
+
+	if (WARN_ON(preemptible()))
+		return false;
+
+	for (caps = cap_array; caps->desc; caps++)
+		if (caps->capability == cap &&
+		    caps->matches &&
+		    caps->matches(caps, SCOPE_LOCAL_CPU))
+			return true;
+	return false;
+}
+
 void update_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
 			    const char *info)
 {
@@ -1183,8 +1203,9 @@ verify_local_elf_hwcaps(const struct arm64_cpu_capabilities *caps)
 }
 
 static void
-verify_local_cpu_features(const struct arm64_cpu_capabilities *caps)
+verify_local_cpu_features(const struct arm64_cpu_capabilities *caps_list)
 {
+	const struct arm64_cpu_capabilities *caps = caps_list;
 	for (; caps->matches; caps++) {
 		if (!cpus_have_cap(caps->capability))
 			continue;
@@ -1192,7 +1213,7 @@ verify_local_cpu_features(const struct arm64_cpu_capabilities *caps)
 		 * If the new CPU misses an advertised feature, we cannot proceed
 		 * further, park the cpu.
 		 */
-		if (!caps->matches(caps, SCOPE_LOCAL_CPU)) {
+		if (!__this_cpu_has_cap(caps_list, caps->capability)) {
 			pr_crit("CPU%d: missing feature: %s\n",
 					smp_processor_id(), caps->desc);
 			cpu_die_early();
@@ -1274,25 +1295,6 @@ static void __init mark_const_caps_ready(void)
 	static_branch_enable(&arm64_const_caps_ready);
 }
 
-/*
- * Check if the current CPU has a given feature capability.
- * Should be called from non-preemptible context.
- */
-static bool __this_cpu_has_cap(const struct arm64_cpu_capabilities *cap_array,
-			       unsigned int cap)
-{
-	const struct arm64_cpu_capabilities *caps;
-
-	if (WARN_ON(preemptible()))
-		return false;
-
-	for (caps = cap_array; caps->desc; caps++)
-		if (caps->capability == cap && caps->matches)
-			return caps->matches(caps, SCOPE_LOCAL_CPU);
-
-	return false;
-}
-
 extern const struct arm64_cpu_capabilities arm64_errata[];
 
 bool this_cpu_has_cap(unsigned int cap)
-- 
2.13.6

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* Re: [PATCH v3 02/13] arm64: Kconfig: Reword UNMAP_KERNEL_AT_EL0 kconfig entry
  2018-01-08 17:32 ` [PATCH v3 02/13] arm64: Kconfig: Reword UNMAP_KERNEL_AT_EL0 kconfig entry Will Deacon
@ 2018-01-09 17:17   ` Christoph Hellwig
  2018-01-10 19:26     ` Will Deacon
  0 siblings, 1 reply; 24+ messages in thread
From: Christoph Hellwig @ 2018-01-09 17:17 UTC (permalink / raw)
  To: Will Deacon
  Cc: linux-arm-kernel, catalin.marinas, ard.biesheuvel, marc.zyngier,
	lorenzo.pieralisi, christoffer.dall, linux-kernel, shankerd,
	jnair

On Mon, Jan 08, 2018 at 05:32:27PM +0000, Will Deacon wrote:
> Although CONFIG_UNMAP_KERNEL_AT_EL0 does make KASLR more robust, it's
> actually more useful as a mitigation against speculation attacks that
> can leak arbitrary kernel data to userspace through speculation.
> 
> Reword the Kconfig help message to reflect this, and make the option
> depend on EXPERT so that it is on by default for the majority of users.

I still haven't heard an answer on why this isn't using
CONFIG_PAGE_TABLE_ISOLATION but instead reinvents its own symbol.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v3 02/13] arm64: Kconfig: Reword UNMAP_KERNEL_AT_EL0 kconfig entry
  2018-01-09 17:17   ` Christoph Hellwig
@ 2018-01-10 19:26     ` Will Deacon
  0 siblings, 0 replies; 24+ messages in thread
From: Will Deacon @ 2018-01-10 19:26 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: linux-arm-kernel, catalin.marinas, ard.biesheuvel, marc.zyngier,
	lorenzo.pieralisi, christoffer.dall, linux-kernel, shankerd,
	jnair

On Tue, Jan 09, 2018 at 09:17:00AM -0800, Christoph Hellwig wrote:
> On Mon, Jan 08, 2018 at 05:32:27PM +0000, Will Deacon wrote:
> > Although CONFIG_UNMAP_KERNEL_AT_EL0 does make KASLR more robust, it's
> > actually more useful as a mitigation against speculation attacks that
> > can leak arbitrary kernel data to userspace through speculation.
> > 
> > Reword the Kconfig help message to reflect this, and make the option
> > depend on EXPERT so that it is on by default for the majority of users.
> 
> I still haven't heard an answer on why this isn't using
> CONFIG_PAGE_TABLE_ISOLATION but instead reinvents its own symbol.

Mainly because this code was written before CONFIG_PAGE_TABLE_ISOLATION had
been proposed and I wanted to avoid confusion with the ongoing backports
just to align on the naming for an arch-specific config option. We could
add CONFIG_PAGE_TABLE_ISOLATION and make it select CONFIG_UNMAP_KERNEL_AT_EL0
if you like, but it's worth noting that this is default 'y' anyway and depends
on EXPERT.

Will

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v3 00/13] arm64 kpti hardening and variant 2 workarounds
  2018-01-09 14:07   ` Matthias Brugger
@ 2018-01-12 15:58     ` Catalin Marinas
  0 siblings, 0 replies; 24+ messages in thread
From: Catalin Marinas @ 2018-01-12 15:58 UTC (permalink / raw)
  To: Matthias Brugger
  Cc: Will Deacon, lorenzo.pieralisi, ard.biesheuvel, marc.zyngier,
	linux-kernel, shankerd, christoffer.dall, Yousaf Kaukab,
	linux-arm-kernel, jnair

On Tue, Jan 09, 2018 at 03:07:28PM +0100, Matthias Brugger wrote:
> On 01/08/2018 07:53 PM, Catalin Marinas wrote:
> > On Mon, Jan 08, 2018 at 05:32:25PM +0000, Will Deacon wrote:
> >> Jayachandran C (1):
> >>   arm64: cputype: Add MIDR values for Cavium ThunderX2 CPUs
> >>
> >> Marc Zyngier (3):
> >>   arm64: Move post_ttbr_update_workaround to C code
> >>   arm64: KVM: Use per-CPU vector when BP hardening is enabled
> >>   arm64: KVM: Make PSCI_VERSION a fast path
> >>
> >> Shanker Donthineni (1):
> >>   arm64: Implement branch predictor hardening for Falkor
> >>
> >> Will Deacon (8):
> >>   arm64: use RET instruction for exiting the trampoline
> >>   arm64: Kconfig: Reword UNMAP_KERNEL_AT_EL0 kconfig entry
> >>   arm64: Take into account ID_AA64PFR0_EL1.CSV3
> >>   arm64: cpufeature: Pass capability structure to ->enable callback
> >>   drivers/firmware: Expose psci_get_version through psci_ops structure
> >>   arm64: Add skeleton to harden the branch predictor against aliasing
> >>     attacks
> >>   arm64: cputype: Add missing MIDR values for Cortex-A72 and Cortex-A75
> >>   arm64: Implement branch predictor hardening for affected Cortex-A CPUs
> > 
> > I'm queuing these into the arm64 for-next/core (after some overnight
> > testing). Any additional fixes should be done on top.
> 
> I see these patches are not yet pushed to:
> git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git
> 
> Did you hit any problems in the overnight tests?

Yes, I did, but they were not related to these patches but rather to the
original kpti. See:

https://marc.info/?l=linux-arm-kernel&m=151576029908809

They are pushed now.

-- 
Catalin

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v3 12/13] arm64: Implement branch predictor hardening for Falkor
  2018-01-08 17:32 ` [PATCH v3 12/13] arm64: Implement branch predictor hardening for Falkor Will Deacon
@ 2018-01-12 17:58   ` Shanker Donthineni
  0 siblings, 0 replies; 24+ messages in thread
From: Shanker Donthineni @ 2018-01-12 17:58 UTC (permalink / raw)
  To: Will Deacon, linux-arm-kernel
  Cc: lorenzo.pieralisi, ard.biesheuvel, marc.zyngier, catalin.marinas,
	linux-kernel, christoffer.dall, jnair


Hi Will,
 

This patch is the right one for variant 2: it checks the QDF2400 part numbers
QCOM_FALKOR and FALKOR_V1. Unfortunately it got modified and merged into the
linux-next branch, causing confusion. Please revert that and merge the [v2]
patch instead to fix the problem.
 
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/commit/arch/arm64?h=next-20180112&id=ec82b567a74fbdffdf418d4bb381d55f6a9096af

[v2] https://www.spinics.net/lists/arm-kernel/msg627364.html


Thanks,
Shanker

On 01/08/2018 11:32 AM, Will Deacon wrote:
> From: Shanker Donthineni <shankerd@codeaurora.org>
> 
> Falkor is susceptible to branch predictor aliasing and can
> theoretically be attacked by malicious code. This patch
> implements a mitigation for these attacks, preventing any
> malicious entries from affecting other victim contexts.
> 
> Signed-off-by: Shanker Donthineni <shankerd@codeaurora.org>
> [will: fix label name when !CONFIG_KVM]
> Signed-off-by: Will Deacon <will.deacon@arm.com>
> ---
>  arch/arm64/include/asm/cpucaps.h |  3 ++-
>  arch/arm64/include/asm/kvm_asm.h |  2 ++
>  arch/arm64/kernel/bpi.S          |  8 +++++++
>  arch/arm64/kernel/cpu_errata.c   | 49 ++++++++++++++++++++++++++++++++++++++--
>  arch/arm64/kvm/hyp/entry.S       | 12 ++++++++++
>  arch/arm64/kvm/hyp/switch.c      | 10 ++++++++
>  6 files changed, 81 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
> index 51616e77fe6b..7049b4802587 100644
> --- a/arch/arm64/include/asm/cpucaps.h
> +++ b/arch/arm64/include/asm/cpucaps.h
> @@ -43,7 +43,8 @@
>  #define ARM64_SVE				22
>  #define ARM64_UNMAP_KERNEL_AT_EL0		23
>  #define ARM64_HARDEN_BRANCH_PREDICTOR		24
> +#define ARM64_HARDEN_BP_POST_GUEST_EXIT		25
>  
> -#define ARM64_NCAPS				25
> +#define ARM64_NCAPS				26
>  
>  #endif /* __ASM_CPUCAPS_H */
> diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
> index ab4d0a926043..24961b732e65 100644
> --- a/arch/arm64/include/asm/kvm_asm.h
> +++ b/arch/arm64/include/asm/kvm_asm.h
> @@ -68,6 +68,8 @@ extern u32 __kvm_get_mdcr_el2(void);
>  
>  extern u32 __init_stage2_translation(void);
>  
> +extern void __qcom_hyp_sanitize_btac_predictors(void);
> +
>  #endif
>  
>  #endif /* __ARM_KVM_ASM_H__ */
> diff --git a/arch/arm64/kernel/bpi.S b/arch/arm64/kernel/bpi.S
> index dec95bd82e31..76225c2611ea 100644
> --- a/arch/arm64/kernel/bpi.S
> +++ b/arch/arm64/kernel/bpi.S
> @@ -77,3 +77,11 @@ ENTRY(__psci_hyp_bp_inval_start)
>  	ldp	x0, x1, [sp, #(16 * 8)]
>  	add	sp, sp, #(8 * 18)
>  ENTRY(__psci_hyp_bp_inval_end)
> +
> +ENTRY(__qcom_hyp_sanitize_link_stack_start)
> +	stp     x29, x30, [sp, #-16]!
> +	.rept	16
> +	bl	. + 4
> +	.endr
> +	ldp	x29, x30, [sp], #16
> +ENTRY(__qcom_hyp_sanitize_link_stack_end)
> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
> index cb0fb3796bb8..7b4efde087fc 100644
> --- a/arch/arm64/kernel/cpu_errata.c
> +++ b/arch/arm64/kernel/cpu_errata.c
> @@ -54,6 +54,8 @@ DEFINE_PER_CPU_READ_MOSTLY(struct bp_hardening_data, bp_hardening_data);
>  
>  #ifdef CONFIG_KVM
>  extern char __psci_hyp_bp_inval_start[], __psci_hyp_bp_inval_end[];
> +extern char __qcom_hyp_sanitize_link_stack_start[];
> +extern char __qcom_hyp_sanitize_link_stack_end[];
>  
>  static void __copy_hyp_vect_bpi(int slot, const char *hyp_vecs_start,
>  				const char *hyp_vecs_end)
> @@ -96,8 +98,10 @@ static void __install_bp_hardening_cb(bp_hardening_cb_t fn,
>  	spin_unlock(&bp_lock);
>  }
>  #else
> -#define __psci_hyp_bp_inval_start	NULL
> -#define __psci_hyp_bp_inval_end		NULL
> +#define __psci_hyp_bp_inval_start		NULL
> +#define __psci_hyp_bp_inval_end			NULL
> +#define __qcom_hyp_sanitize_link_stack_start	NULL
> +#define __qcom_hyp_sanitize_link_stack_end	NULL
>  
>  static void __install_bp_hardening_cb(bp_hardening_cb_t fn,
>  				      const char *hyp_vecs_start,
> @@ -138,6 +142,29 @@ static int enable_psci_bp_hardening(void *data)
>  
>  	return 0;
>  }
> +
> +static void qcom_link_stack_sanitization(void)
> +{
> +	u64 tmp;
> +
> +	asm volatile("mov	%0, x30		\n"
> +		     ".rept	16		\n"
> +		     "bl	. + 4		\n"
> +		     ".endr			\n"
> +		     "mov	x30, %0		\n"
> +		     : "=&r" (tmp));
> +}
> +
> +static int qcom_enable_link_stack_sanitization(void *data)
> +{
> +	const struct arm64_cpu_capabilities *entry = data;
> +
> +	install_bp_hardening_cb(entry, qcom_link_stack_sanitization,
> +				__qcom_hyp_sanitize_link_stack_start,
> +				__qcom_hyp_sanitize_link_stack_end);
> +
> +	return 0;
> +}
>  #endif	/* CONFIG_HARDEN_BRANCH_PREDICTOR */
>  
>  #define MIDR_RANGE(model, min, max) \
> @@ -302,6 +329,24 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
>  		MIDR_ALL_VERSIONS(MIDR_CORTEX_A75),
>  		.enable = enable_psci_bp_hardening,
>  	},
> +	{
> +		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
> +		MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR_V1),
> +		.enable = qcom_enable_link_stack_sanitization,
> +	},
> +	{
> +		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
> +		MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR),
> +		.enable = qcom_enable_link_stack_sanitization,
> +	},
> +	{
> +		.capability = ARM64_HARDEN_BP_POST_GUEST_EXIT,
> +		MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR_V1),
> +	},
> +	{
> +		.capability = ARM64_HARDEN_BP_POST_GUEST_EXIT,
> +		MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR),
> +	},
>  #endif
>  	{
>  	}
> diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
> index 12ee62d6d410..9c45c6af1f58 100644
> --- a/arch/arm64/kvm/hyp/entry.S
> +++ b/arch/arm64/kvm/hyp/entry.S
> @@ -196,3 +196,15 @@ alternative_endif
>  
>  	eret
>  ENDPROC(__fpsimd_guest_restore)
> +
> +ENTRY(__qcom_hyp_sanitize_btac_predictors)
> +	/**
> +	 * Call SMC64 with Silicon provider serviceID 23<<8 (0xc2001700)
> +	 * 0xC2000000-0xC200FFFF: assigned to SiP Service Calls
> +	 * b15-b0: contains SiP functionID
> +	 */
> +	movz    x0, #0x1700
> +	movk    x0, #0xc200, lsl #16
> +	smc     #0
> +	ret
> +ENDPROC(__qcom_hyp_sanitize_btac_predictors)
> diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
> index 4d273f6d0e69..7e373791fad1 100644
> --- a/arch/arm64/kvm/hyp/switch.c
> +++ b/arch/arm64/kvm/hyp/switch.c
> @@ -406,6 +406,16 @@ int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu)
>  		/* 0 falls through to be handled out of EL2 */
>  	}
>  
> +	if (cpus_have_const_cap(ARM64_HARDEN_BP_POST_GUEST_EXIT)) {
> +		u32 midr = read_cpuid_id();
> +
> +		/* Apply BTAC predictors mitigation to all Falkor chips */
> +		if (((midr & MIDR_CPU_MODEL_MASK) == MIDR_QCOM_FALKOR) ||
> +		    ((midr & MIDR_CPU_MODEL_MASK) == MIDR_QCOM_FALKOR_V1)) {
> +			__qcom_hyp_sanitize_btac_predictors();
> +		}
> +	}
> +
>  	fp_enabled = __fpsimd_enabled();
>  
>  	__sysreg_save_guest_state(guest_ctxt);
> 

-- 
Shanker Donthineni
Qualcomm Datacenter Technologies, Inc. as an affiliate of Qualcomm Technologies, Inc.
Qualcomm Technologies, Inc. is a member of the Code Aurora Forum, a Linux Foundation Collaborative Project.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v3 11/13] arm64: Implement branch predictor hardening for affected Cortex-A CPUs
  2018-01-09 16:12   ` Suzuki K Poulose
@ 2018-01-15 11:51     ` Marc Zyngier
  2018-01-15 18:01     ` Catalin Marinas
  1 sibling, 0 replies; 24+ messages in thread
From: Marc Zyngier @ 2018-01-15 11:51 UTC (permalink / raw)
  To: Suzuki K Poulose, will.deacon
  Cc: linux-arm-kernel, linux-kernel, lorenzo.pieralisi,
	catalin.marinas, mark.rutland, ard.biesheuvel, shankerd,
	christoffer.dall, jnair

Hi Suzuki,

On 09/01/18 16:12, Suzuki K Poulose wrote:
> On 08/01/18 17:32, Will Deacon wrote:
>> Cortex-A57, A72, A73 and A75 are susceptible to branch predictor aliasing
>> and can theoretically be attacked by malicious code.
>>
>> This patch implements a PSCI-based mitigation for these CPUs when available.
>> The call into firmware will invalidate the branch predictor state, preventing
>> any malicious entries from affecting other victim contexts.
>>
>> Co-developed-by: Marc Zyngier <marc.zyngier@arm.com>
>> Signed-off-by: Will Deacon <will.deacon@arm.com>
> 
> Will, Marc,
> 
>> +#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
>> +	{
>> +		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
>> +		MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
>> +		.enable = enable_psci_bp_hardening,
>> +	},
>> +	{
>> +		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
>> +		MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
>> +		.enable = enable_psci_bp_hardening,
>> +	},
>> +	{
>> +		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
>> +		MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
>> +		.enable = enable_psci_bp_hardening,
>> +	},
>> +	{
>> +		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
>> +		MIDR_ALL_VERSIONS(MIDR_CORTEX_A75),
>> +		.enable = enable_psci_bp_hardening,
>> +	},
>> +#endif
> 
> The introduction of multiple entries for the same capability breaks
> some assumptions in this_cpu_has_cap() and verify_local_cpu_features(),
> as they all stop at the first entry whose "capability" field matches and
> could return wrong results. We need something like the following to make
> this work, should someone add a duplicate feature entry or use
> this_cpu_has_cap() on one of the errata.
> 
> ---8>---
> 
> arm64: capabilities: Handle duplicate entries for a capability
> 
> Sometimes a single capability could be listed multiple times with
> differing matches(), e.g., CPU errata for different MIDR versions.
> This breaks verify_local_cpu_features() and this_cpu_has_cap() as
> we stop checking for a capability on a CPU with the first
> entry in the given table, which is not sufficient. Make sure we
> run the checks for all entries of the same capability. We do
> this by fixing __this_cpu_has_cap() to run through all the
> entries in the given table for a match and reuse it for
> verify_local_cpu_features().
> 
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> ---
>  arch/arm64/kernel/cpufeature.c | 44 ++++++++++++++++++++++--------------------
>  1 file changed, 23 insertions(+), 21 deletions(-)
> 
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 862a417ca0e2..0c43447f7406 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -1120,6 +1120,26 @@ static void __init setup_elf_hwcaps(const struct arm64_cpu_capabilities *hwcaps)
>  			cap_set_elf_hwcap(hwcaps);
>  }
>  
> +/*
> + * Check if the current CPU has a given feature capability.
> + * Should be called from non-preemptible context.
> + */
> +static bool __this_cpu_has_cap(const struct arm64_cpu_capabilities *cap_array,
> +			       unsigned int cap)
> +{
> +	const struct arm64_cpu_capabilities *caps;
> +
> +	if (WARN_ON(preemptible()))
> +		return false;
> +
> +	for (caps = cap_array; caps->desc; caps++)
> +		if (caps->capability == cap &&
> +		    caps->matches &&
> +		    caps->matches(caps, SCOPE_LOCAL_CPU))
> +			return true;
> +	return false;
> +}
> +
>  void update_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
>  			    const char *info)
>  {
> @@ -1183,8 +1203,9 @@ verify_local_elf_hwcaps(const struct arm64_cpu_capabilities *caps)
>  }
>  
>  static void
> -verify_local_cpu_features(const struct arm64_cpu_capabilities *caps)
> +verify_local_cpu_features(const struct arm64_cpu_capabilities *caps_list)
>  {
> +	const struct arm64_cpu_capabilities *caps = caps_list;
>  	for (; caps->matches; caps++) {
>  		if (!cpus_have_cap(caps->capability))
>  			continue;
> @@ -1192,7 +1213,7 @@ verify_local_cpu_features(const struct arm64_cpu_capabilities *caps)
>  		 * If the new CPU misses an advertised feature, we cannot proceed
>  		 * further, park the cpu.
>  		 */
> -		if (!caps->matches(caps, SCOPE_LOCAL_CPU)) {
> +		if (!__this_cpu_has_cap(caps_list, caps->capability)) {
>  			pr_crit("CPU%d: missing feature: %s\n",
>  					smp_processor_id(), caps->desc);
>  			cpu_die_early();
> @@ -1274,25 +1295,6 @@ static void __init mark_const_caps_ready(void)
>  	static_branch_enable(&arm64_const_caps_ready);
>  }
>  
> -/*
> - * Check if the current CPU has a given feature capability.
> - * Should be called from non-preemptible context.
> - */
> -static bool __this_cpu_has_cap(const struct arm64_cpu_capabilities *cap_array,
> -			       unsigned int cap)
> -{
> -	const struct arm64_cpu_capabilities *caps;
> -
> -	if (WARN_ON(preemptible()))
> -		return false;
> -
> -	for (caps = cap_array; caps->desc; caps++)
> -		if (caps->capability == cap && caps->matches)
> -			return caps->matches(caps, SCOPE_LOCAL_CPU);
> -
> -	return false;
> -}
> -
>  extern const struct arm64_cpu_capabilities arm64_errata[];
>  
>  bool this_cpu_has_cap(unsigned int cap)
> 

This looks sensible to me.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v3 11/13] arm64: Implement branch predictor hardening for affected Cortex-A CPUs
  2018-01-09 16:12   ` Suzuki K Poulose
  2018-01-15 11:51     ` Marc Zyngier
@ 2018-01-15 18:01     ` Catalin Marinas
  1 sibling, 0 replies; 24+ messages in thread
From: Catalin Marinas @ 2018-01-15 18:01 UTC (permalink / raw)
  To: Suzuki K Poulose
  Cc: will.deacon, mark.rutland, lorenzo.pieralisi, christoffer.dall,
	ard.biesheuvel, marc.zyngier, linux-kernel, shankerd,
	linux-arm-kernel, jnair

On Tue, Jan 09, 2018 at 04:12:18PM +0000, Suzuki K. Poulose wrote:
> arm64: capabilities: Handle duplicate entries for a capability
> 
> Sometimes a single capability could be listed multiple times with
> differing matches(), e.g., CPU errata for different MIDR versions.
> This breaks verify_local_cpu_features() and this_cpu_has_cap() as
> we stop checking for a capability on a CPU with the first
> entry in the given table, which is not sufficient. Make sure we
> run the checks for all entries of the same capability. We do
> this by fixing __this_cpu_has_cap() to run through all the
> entries in the given table for a match and reuse it for
> verify_local_cpu_features().
> 
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> ---
>  arch/arm64/kernel/cpufeature.c | 44 ++++++++++++++++++++++--------------------
>  1 file changed, 23 insertions(+), 21 deletions(-)

Applied. Thanks.

-- 
Catalin

^ permalink raw reply	[flat|nested] 24+ messages in thread

end of thread, other threads:[~2018-01-15 18:01 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-01-08 17:32 [PATCH v3 00/13] arm64 kpti hardening and variant 2 workarounds Will Deacon
2018-01-08 17:32 ` [PATCH v3 01/13] arm64: use RET instruction for exiting the trampoline Will Deacon
2018-01-08 17:32 ` [PATCH v3 02/13] arm64: Kconfig: Reword UNMAP_KERNEL_AT_EL0 kconfig entry Will Deacon
2018-01-09 17:17   ` Christoph Hellwig
2018-01-10 19:26     ` Will Deacon
2018-01-08 17:32 ` [PATCH v3 03/13] arm64: Take into account ID_AA64PFR0_EL1.CSV3 Will Deacon
2018-01-08 17:32 ` [PATCH v3 04/13] arm64: cpufeature: Pass capability structure to ->enable callback Will Deacon
2018-01-08 17:32 ` [PATCH v3 05/13] drivers/firmware: Expose psci_get_version through psci_ops structure Will Deacon
2018-01-08 17:32 ` [PATCH v3 06/13] arm64: Move post_ttbr_update_workaround to C code Will Deacon
2018-01-08 17:32 ` [PATCH v3 07/13] arm64: Add skeleton to harden the branch predictor against aliasing attacks Will Deacon
2018-01-09 12:55   ` Philippe Ombredanne
2018-01-08 17:32 ` [PATCH v3 08/13] arm64: KVM: Use per-CPU vector when BP hardening is enabled Will Deacon
2018-01-08 17:32 ` [PATCH v3 09/13] arm64: KVM: Make PSCI_VERSION a fast path Will Deacon
2018-01-08 17:32 ` [PATCH v3 10/13] arm64: cputype: Add missing MIDR values for Cortex-A72 and Cortex-A75 Will Deacon
2018-01-08 17:32 ` [PATCH v3 11/13] arm64: Implement branch predictor hardening for affected Cortex-A CPUs Will Deacon
2018-01-09 16:12   ` Suzuki K Poulose
2018-01-15 11:51     ` Marc Zyngier
2018-01-15 18:01     ` Catalin Marinas
2018-01-08 17:32 ` [PATCH v3 12/13] arm64: Implement branch predictor hardening for Falkor Will Deacon
2018-01-12 17:58   ` Shanker Donthineni
2018-01-08 17:32 ` [PATCH v3 13/13] arm64: cputype: Add MIDR values for Cavium ThunderX2 CPUs Will Deacon
2018-01-08 18:53 ` [PATCH v3 00/13] arm64 kpti hardening and variant 2 workarounds Catalin Marinas
2018-01-09 14:07   ` Matthias Brugger
2018-01-12 15:58     ` Catalin Marinas
