* [PATCH 4.9 00/24] V4.9 backport of 32-bit arm spectre patches
@ 2018-10-31 13:56 David Long
  2018-10-31 13:56 ` [PATCH 4.9 01/24] ARM: add more CPU part numbers for Cortex and Brahma B15 CPUs David Long
                   ` (25 more replies)
  0 siblings, 26 replies; 40+ messages in thread
From: David Long @ 2018-10-31 13:56 UTC (permalink / raw)
  To: stable, Russell King - ARM Linux, Florian Fainelli,
	Tony Lindgren, Marc Zyngier, Mark Rutland
  Cc: Greg KH, Mark Brown

From: "David A. Long" <dave.long@linaro.org>

V4.9 backport of the 32-bit ARM Spectre patches from Russell King's
spectre branch.  Patches not yet accepted upstream are excluded.

Marc Zyngier (2):
  ARM: KVM: invalidate BTB on guest exit for Cortex-A12/A17
  ARM: KVM: invalidate icache on guest exit for Cortex-A15

Russell King (22):
  ARM: add more CPU part numbers for Cortex and Brahma B15 CPUs
  ARM: bugs: prepare processor bug infrastructure
  ARM: bugs: hook processor bug checking into SMP and suspend paths
  ARM: bugs: add support for per-processor bug checking
  ARM: spectre: add Kconfig symbol for CPUs vulnerable to Spectre
  ARM: spectre-v2: harden branch predictor on context switches
  ARM: spectre-v2: add Cortex A8 and A15 validation of the IBE bit
  ARM: spectre-v2: harden user aborts in kernel space
  ARM: spectre-v2: add firmware based hardening
  ARM: spectre-v2: warn about incorrect context switching functions
  ARM: spectre-v2: KVM: invalidate icache on guest exit for Brahma B15
  ARM: KVM: Add SMCCC_ARCH_WORKAROUND_1 fast handling
  ARM: KVM: report support for SMCCC_ARCH_WORKAROUND_1
  ARM: spectre-v1: add speculation barrier (csdb) macros
  ARM: spectre-v1: add array_index_mask_nospec() implementation
  ARM: spectre-v1: fix syscall entry
  ARM: signal: copy registers using __copy_from_user()
  ARM: vfp: use __copy_from_user() when restoring VFP state
  ARM: oabi-compat: copy semops using __copy_from_user()
  ARM: use __inttype() in get_user()
  ARM: spectre-v1: use get_user() for __get_user()
  ARM: spectre-v1: mitigate user accesses

 arch/arm/include/asm/assembler.h   |  12 ++
 arch/arm/include/asm/barrier.h     |  32 ++++++
 arch/arm/include/asm/bugs.h        |   6 +-
 arch/arm/include/asm/cp15.h        |   3 +
 arch/arm/include/asm/cputype.h     |   8 ++
 arch/arm/include/asm/kvm_asm.h     |   2 -
 arch/arm/include/asm/kvm_host.h    |  14 ++-
 arch/arm/include/asm/kvm_mmu.h     |  23 +++-
 arch/arm/include/asm/proc-fns.h    |   4 +
 arch/arm/include/asm/system_misc.h |  15 +++
 arch/arm/include/asm/thread_info.h |   4 +-
 arch/arm/include/asm/uaccess.h     |  26 +++--
 arch/arm/kernel/Makefile           |   1 +
 arch/arm/kernel/bugs.c             |  18 +++
 arch/arm/kernel/entry-common.S     |  18 ++-
 arch/arm/kernel/entry-header.S     |  25 +++++
 arch/arm/kernel/signal.c           |  55 ++++-----
 arch/arm/kernel/smp.c              |   4 +
 arch/arm/kernel/suspend.c          |   2 +
 arch/arm/kernel/sys_oabi-compat.c  |   8 +-
 arch/arm/kvm/hyp/hyp-entry.S       | 110 +++++++++++++++++-
 arch/arm/lib/copy_from_user.S      |   9 ++
 arch/arm/mm/Kconfig                |  23 ++++
 arch/arm/mm/Makefile               |   2 +-
 arch/arm/mm/fault.c                |   3 +
 arch/arm/mm/proc-macros.S          |   3 +-
 arch/arm/mm/proc-v7-2level.S       |   6 -
 arch/arm/mm/proc-v7-bugs.c         | 174 +++++++++++++++++++++++++++++
 arch/arm/mm/proc-v7.S              | 154 +++++++++++++++++++------
 arch/arm/vfp/vfpmodule.c           |  17 ++-
 30 files changed, 674 insertions(+), 107 deletions(-)
 create mode 100644 arch/arm/kernel/bugs.c
 create mode 100644 arch/arm/mm/proc-v7-bugs.c

-- 
2.17.1

^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH 4.9 01/24] ARM: add more CPU part numbers for Cortex and Brahma B15 CPUs
  2018-10-31 13:56 [PATCH 4.9 00/24] V4.9 backport of 32-bit arm spectre patches David Long
@ 2018-10-31 13:56 ` David Long
  2018-10-31 13:56 ` [PATCH 4.9 02/24] ARM: bugs: prepare processor bug infrastructure David Long
                   ` (24 subsequent siblings)
  25 siblings, 0 replies; 40+ messages in thread
From: David Long @ 2018-10-31 13:56 UTC (permalink / raw)
  To: stable, Russell King - ARM Linux, Florian Fainelli,
	Tony Lindgren, Marc Zyngier, Mark Rutland
  Cc: Greg KH, Mark Brown

From: Russell King <rmk+kernel@armlinux.org.uk>

Commit f5683e76f35b4ec5891031b6a29036efe0a1ff84 upstream.

Add CPU part numbers for Cortex A53, A57, A72, A73, A75 and the
Broadcom Brahma B15 CPU.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Boot-tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Tony Lindgren <tony@atomide.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
---
 arch/arm/include/asm/cputype.h | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/arch/arm/include/asm/cputype.h b/arch/arm/include/asm/cputype.h
index b62eaeb147aa..c55db1e22f0c 100644
--- a/arch/arm/include/asm/cputype.h
+++ b/arch/arm/include/asm/cputype.h
@@ -76,8 +76,16 @@
 #define ARM_CPU_PART_CORTEX_A12		0x4100c0d0
 #define ARM_CPU_PART_CORTEX_A17		0x4100c0e0
 #define ARM_CPU_PART_CORTEX_A15		0x4100c0f0
+#define ARM_CPU_PART_CORTEX_A53		0x4100d030
+#define ARM_CPU_PART_CORTEX_A57		0x4100d070
+#define ARM_CPU_PART_CORTEX_A72		0x4100d080
+#define ARM_CPU_PART_CORTEX_A73		0x4100d090
+#define ARM_CPU_PART_CORTEX_A75		0x4100d0a0
 #define ARM_CPU_PART_MASK		0xff00fff0
 
+/* Broadcom cores */
+#define ARM_CPU_PART_BRAHMA_B15		0x420000f0
+
 /* DEC implemented cores */
 #define ARM_CPU_PART_SA1100		0x4400a110
 
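The part-number macros above pack the MIDR implementer byte and primary part number into one constant, and ARM_CPU_PART_MASK keeps exactly those fields. A minimal userspace sketch of how such a constant is matched against a raw MIDR value (the helper name here is hypothetical; in the kernel this masking is done by read_cpuid_part()):

```c
#include <assert.h>
#include <stdint.h>

/* Constants taken from the hunk above. */
#define ARM_CPU_PART_CORTEX_A72  0x4100d080
#define ARM_CPU_PART_BRAHMA_B15  0x420000f0
#define ARM_CPU_PART_MASK        0xff00fff0

/* Keep the implementer byte and primary part number, discarding the
 * variant and revision fields, as ARM_CPU_PART_MASK does. */
static uint32_t midr_part(uint32_t midr)
{
	return midr & ARM_CPU_PART_MASK;
}
```

Any revision of the same core (different variant/revision nibbles) then compares equal to the same part constant.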
-- 
2.17.1


* [PATCH 4.9 02/24] ARM: bugs: prepare processor bug infrastructure
  2018-10-31 13:56 [PATCH 4.9 00/24] V4.9 backport of 32-bit arm spectre patches David Long
  2018-10-31 13:56 ` [PATCH 4.9 01/24] ARM: add more CPU part numbers for Cortex and Brahma B15 CPUs David Long
@ 2018-10-31 13:56 ` David Long
  2018-10-31 13:56 ` [PATCH 4.9 03/24] ARM: bugs: hook processor bug checking into SMP and suspend paths David Long
                   ` (23 subsequent siblings)
  25 siblings, 0 replies; 40+ messages in thread
From: David Long @ 2018-10-31 13:56 UTC (permalink / raw)
  To: stable, Russell King - ARM Linux, Florian Fainelli,
	Tony Lindgren, Marc Zyngier, Mark Rutland
  Cc: Greg KH, Mark Brown

From: Russell King <rmk+kernel@armlinux.org.uk>

Commit a5b9177f69329314721aa7022b7e69dab23fa1f0 upstream.

Prepare the processor bug infrastructure so that it can be expanded to
check for per-processor bugs.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Boot-tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Tony Lindgren <tony@atomide.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
---
 arch/arm/include/asm/bugs.h | 4 ++--
 arch/arm/kernel/Makefile    | 1 +
 arch/arm/kernel/bugs.c      | 9 +++++++++
 3 files changed, 12 insertions(+), 2 deletions(-)
 create mode 100644 arch/arm/kernel/bugs.c

diff --git a/arch/arm/include/asm/bugs.h b/arch/arm/include/asm/bugs.h
index a97f1ea708d1..ed122d294f3f 100644
--- a/arch/arm/include/asm/bugs.h
+++ b/arch/arm/include/asm/bugs.h
@@ -10,10 +10,10 @@
 #ifndef __ASM_BUGS_H
 #define __ASM_BUGS_H
 
-#ifdef CONFIG_MMU
 extern void check_writebuffer_bugs(void);
 
-#define check_bugs() check_writebuffer_bugs()
+#ifdef CONFIG_MMU
+extern void check_bugs(void);
 #else
 #define check_bugs() do { } while (0)
 #endif
diff --git a/arch/arm/kernel/Makefile b/arch/arm/kernel/Makefile
index ad325a8c7e1e..adb9add28b6f 100644
--- a/arch/arm/kernel/Makefile
+++ b/arch/arm/kernel/Makefile
@@ -30,6 +30,7 @@ else
 obj-y		+= entry-armv.o
 endif
 
+obj-$(CONFIG_MMU)		+= bugs.o
 obj-$(CONFIG_CPU_IDLE)		+= cpuidle.o
 obj-$(CONFIG_ISA_DMA_API)	+= dma.o
 obj-$(CONFIG_FIQ)		+= fiq.o fiqasm.o
diff --git a/arch/arm/kernel/bugs.c b/arch/arm/kernel/bugs.c
new file mode 100644
index 000000000000..88024028bb70
--- /dev/null
+++ b/arch/arm/kernel/bugs.c
@@ -0,0 +1,9 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/init.h>
+#include <asm/bugs.h>
+#include <asm/proc-fns.h>
+
+void __init check_bugs(void)
+{
+	check_writebuffer_bugs();
+}
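The header change above keeps a no-op fallback for !MMU builds via the classic `do { } while (0)` stub idiom, so call sites compile unchanged whichever way CONFIG_MMU goes. A self-contained sketch of the same pattern, with CONFIG_MMU modeled as an ordinary preprocessor define:

```c
#include <assert.h>

#define CONFIG_MMU	/* comment this out to get the no-op stub */

static int writebuffer_checked;

static void check_writebuffer_bugs(void)
{
	writebuffer_checked = 1;
}

#ifdef CONFIG_MMU
static void check_bugs(void)
{
	check_writebuffer_bugs();
}
#else
/* Statement-like no-op: safe in any context a call would be. */
#define check_bugs() do { } while (0)
#endif
```

Turning check_bugs() from a macro into a real function, as the patch does, is what later allows it to grow per-processor checks without touching its callers.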
-- 
2.17.1


* [PATCH 4.9 03/24] ARM: bugs: hook processor bug checking into SMP and suspend paths
  2018-10-31 13:56 [PATCH 4.9 00/24] V4.9 backport of 32-bit arm spectre patches David Long
  2018-10-31 13:56 ` [PATCH 4.9 01/24] ARM: add more CPU part numbers for Cortex and Brahma B15 CPUs David Long
  2018-10-31 13:56 ` [PATCH 4.9 02/24] ARM: bugs: prepare processor bug infrastructure David Long
@ 2018-10-31 13:56 ` David Long
  2018-10-31 13:56 ` [PATCH 4.9 04/24] ARM: bugs: add support for per-processor bug checking David Long
                   ` (22 subsequent siblings)
  25 siblings, 0 replies; 40+ messages in thread
From: David Long @ 2018-10-31 13:56 UTC (permalink / raw)
  To: stable, Russell King - ARM Linux, Florian Fainelli,
	Tony Lindgren, Marc Zyngier, Mark Rutland
  Cc: Greg KH, Mark Brown

From: Russell King <rmk+kernel@armlinux.org.uk>

Commit 26602161b5ba795928a5a719fe1d5d9f2ab5c3ef upstream.

Check for CPU bugs when secondary processors are being brought online,
and also when CPUs are resuming from a low power mode.  This gives an
opportunity to check that processor-specific bug workarounds are
correctly enabled for all paths by which a CPU re-enters the kernel.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Boot-tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Tony Lindgren <tony@atomide.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
---
 arch/arm/include/asm/bugs.h | 2 ++
 arch/arm/kernel/bugs.c      | 5 +++++
 arch/arm/kernel/smp.c       | 4 ++++
 arch/arm/kernel/suspend.c   | 2 ++
 4 files changed, 13 insertions(+)

diff --git a/arch/arm/include/asm/bugs.h b/arch/arm/include/asm/bugs.h
index ed122d294f3f..73a99c72a930 100644
--- a/arch/arm/include/asm/bugs.h
+++ b/arch/arm/include/asm/bugs.h
@@ -14,8 +14,10 @@ extern void check_writebuffer_bugs(void);
 
 #ifdef CONFIG_MMU
 extern void check_bugs(void);
+extern void check_other_bugs(void);
 #else
 #define check_bugs() do { } while (0)
+#define check_other_bugs() do { } while (0)
 #endif
 
 #endif
diff --git a/arch/arm/kernel/bugs.c b/arch/arm/kernel/bugs.c
index 88024028bb70..16e7ba2a9cc4 100644
--- a/arch/arm/kernel/bugs.c
+++ b/arch/arm/kernel/bugs.c
@@ -3,7 +3,12 @@
 #include <asm/bugs.h>
 #include <asm/proc-fns.h>
 
+void check_other_bugs(void)
+{
+}
+
 void __init check_bugs(void)
 {
 	check_writebuffer_bugs();
+	check_other_bugs();
 }
diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c
index 7dd14e8395e6..d2ce37da87d8 100644
--- a/arch/arm/kernel/smp.c
+++ b/arch/arm/kernel/smp.c
@@ -29,6 +29,7 @@
 #include <linux/irq_work.h>
 
 #include <linux/atomic.h>
+#include <asm/bugs.h>
 #include <asm/smp.h>
 #include <asm/cacheflush.h>
 #include <asm/cpu.h>
@@ -400,6 +401,9 @@ asmlinkage void secondary_start_kernel(void)
 	 * before we continue - which happens after __cpu_up returns.
 	 */
 	set_cpu_online(cpu, true);
+
+	check_other_bugs();
+
 	complete(&cpu_running);
 
 	local_irq_enable();
diff --git a/arch/arm/kernel/suspend.c b/arch/arm/kernel/suspend.c
index 9a2f882a0a2d..134f0d432610 100644
--- a/arch/arm/kernel/suspend.c
+++ b/arch/arm/kernel/suspend.c
@@ -1,6 +1,7 @@
 #include <linux/init.h>
 #include <linux/slab.h>
 
+#include <asm/bugs.h>
 #include <asm/cacheflush.h>
 #include <asm/idmap.h>
 #include <asm/pgalloc.h>
@@ -34,6 +35,7 @@ int cpu_suspend(unsigned long arg, int (*fn)(unsigned long))
 		cpu_switch_mm(mm->pgd, mm);
 		local_flush_bp_all();
 		local_flush_tlb_all();
+		check_other_bugs();
 	}
 
 	return ret;
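A low-power state can power a core down and lose firmware-initialized control bits, which is why cpu_suspend() re-runs check_other_bugs() right after the MMU context is restored. A toy model of that invariant (all names here are hypothetical stand-ins, not kernel APIs):

```c
#include <assert.h>
#include <stdbool.h>

static bool ibe_bit;	/* stands in for a firmware-set control bit */
static bool warned;

static void check_other_bugs(void)
{
	if (!ibe_bit)
		warned = true;	/* workaround not active: flag it */
}

/* Resume path: hardware state may have been reset in low power,
 * so the bug check must run again on this kernel entry path. */
static void cpu_resume_path(bool firmware_restored_bit)
{
	ibe_bit = firmware_restored_bit;
	check_other_bugs();
}
```

Checking only at boot would miss the case where resume firmware forgets to re-set the bit; re-checking on every entry path catches it.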
-- 
2.17.1


* [PATCH 4.9 04/24] ARM: bugs: add support for per-processor bug checking
  2018-10-31 13:56 [PATCH 4.9 00/24] V4.9 backport of 32-bit arm spectre patches David Long
                   ` (2 preceding siblings ...)
  2018-10-31 13:56 ` [PATCH 4.9 03/24] ARM: bugs: hook processor bug checking into SMP and suspend paths David Long
@ 2018-10-31 13:56 ` David Long
  2018-10-31 13:56 ` [PATCH 4.9 05/24] ARM: spectre: add Kconfig symbol for CPUs vulnerable to Spectre David Long
                   ` (21 subsequent siblings)
  25 siblings, 0 replies; 40+ messages in thread
From: David Long @ 2018-10-31 13:56 UTC (permalink / raw)
  To: stable, Russell King - ARM Linux, Florian Fainelli,
	Tony Lindgren, Marc Zyngier, Mark Rutland
  Cc: Greg KH, Mark Brown

From: Russell King <rmk+kernel@armlinux.org.uk>

Commit 9d3a04925deeabb97c8e26d940b501a2873e8af3 upstream.

Add support for per-processor bug checking - each processor function
descriptor gains a function pointer for this check, which must not be
an __init function.  If non-NULL, this will be called whenever a CPU
enters the kernel via whichever path (boot CPU, secondary CPU startup,
CPU resuming, etc.)

This allows processor-specific bug checks to validate that workaround
bits are properly enabled by firmware via all entry paths to the kernel.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Boot-tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Tony Lindgren <tony@atomide.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
---
 arch/arm/include/asm/proc-fns.h | 4 ++++
 arch/arm/kernel/bugs.c          | 4 ++++
 arch/arm/mm/proc-macros.S       | 3 ++-
 3 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/arch/arm/include/asm/proc-fns.h b/arch/arm/include/asm/proc-fns.h
index 8877ad5ffe10..f379f5f849a9 100644
--- a/arch/arm/include/asm/proc-fns.h
+++ b/arch/arm/include/asm/proc-fns.h
@@ -36,6 +36,10 @@ extern struct processor {
 	 * Set up any processor specifics
 	 */
 	void (*_proc_init)(void);
+	/*
+	 * Check for processor bugs
+	 */
+	void (*check_bugs)(void);
 	/*
 	 * Disable any processor specifics
 	 */
diff --git a/arch/arm/kernel/bugs.c b/arch/arm/kernel/bugs.c
index 16e7ba2a9cc4..7be511310191 100644
--- a/arch/arm/kernel/bugs.c
+++ b/arch/arm/kernel/bugs.c
@@ -5,6 +5,10 @@
 
 void check_other_bugs(void)
 {
+#ifdef MULTI_CPU
+	if (processor.check_bugs)
+		processor.check_bugs();
+#endif
 }
 
 void __init check_bugs(void)
diff --git a/arch/arm/mm/proc-macros.S b/arch/arm/mm/proc-macros.S
index 0d40c285bd86..7d9176c4a21d 100644
--- a/arch/arm/mm/proc-macros.S
+++ b/arch/arm/mm/proc-macros.S
@@ -274,13 +274,14 @@
 	mcr	p15, 0, ip, c7, c10, 4		@ data write barrier
 	.endm
 
-.macro define_processor_functions name:req, dabort:req, pabort:req, nommu=0, suspend=0
+.macro define_processor_functions name:req, dabort:req, pabort:req, nommu=0, suspend=0, bugs=0
 	.type	\name\()_processor_functions, #object
 	.align 2
 ENTRY(\name\()_processor_functions)
 	.word	\dabort
 	.word	\pabort
 	.word	cpu_\name\()_proc_init
+	.word	\bugs
 	.word	cpu_\name\()_proc_fin
 	.word	cpu_\name\()_reset
 	.word	cpu_\name\()_do_idle
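The new slot in struct processor is an optional hook: check_other_bugs() calls it only when the descriptor provides one, so function tables built with `bugs=0` keep working. The same NULL-guarded dispatch in a self-contained sketch:

```c
#include <assert.h>
#include <stddef.h>

/* Reduced stand-in for the kernel's struct processor. */
struct processor {
	void (*check_bugs)(void);	/* may be NULL: no per-CPU check */
};

static int bug_checks_run;

static void ca8_check_bugs(void)
{
	bug_checks_run++;
}

static void check_other_bugs(const struct processor *proc)
{
	if (proc->check_bugs)
		proc->check_bugs();
}
```

The assembly side mirrors this: `define_processor_functions ... bugs=0` emits a NULL word at the new slot unless a check function is named.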
-- 
2.17.1


* [PATCH 4.9 05/24] ARM: spectre: add Kconfig symbol for CPUs vulnerable to Spectre
  2018-10-31 13:56 [PATCH 4.9 00/24] V4.9 backport of 32-bit arm spectre patches David Long
                   ` (3 preceding siblings ...)
  2018-10-31 13:56 ` [PATCH 4.9 04/24] ARM: bugs: add support for per-processor bug checking David Long
@ 2018-10-31 13:56 ` David Long
  2018-10-31 13:56 ` [PATCH 4.9 06/24] ARM: spectre-v2: harden branch predictor on context switches David Long
                   ` (20 subsequent siblings)
  25 siblings, 0 replies; 40+ messages in thread
From: David Long @ 2018-10-31 13:56 UTC (permalink / raw)
  To: stable, Russell King - ARM Linux, Florian Fainelli,
	Tony Lindgren, Marc Zyngier, Mark Rutland
  Cc: Greg KH, Mark Brown

From: Russell King <rmk+kernel@armlinux.org.uk>

Commit c58d237d0852a57fde9bc2c310972e8f4e3d155d upstream.

Add a Kconfig symbol for CPUs which are vulnerable to the Spectre
attacks.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Boot-tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Tony Lindgren <tony@atomide.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
---
 arch/arm/mm/Kconfig | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/arm/mm/Kconfig b/arch/arm/mm/Kconfig
index c1799dd1d0d9..d37af5e63411 100644
--- a/arch/arm/mm/Kconfig
+++ b/arch/arm/mm/Kconfig
@@ -396,6 +396,7 @@ config CPU_V7
 	select CPU_CP15_MPU if !MMU
 	select CPU_HAS_ASID if MMU
 	select CPU_PABRT_V7
+	select CPU_SPECTRE if MMU
 	select CPU_TLB_V7 if MMU
 
 # ARMv7M
@@ -800,6 +801,9 @@ config CPU_BPREDICT_DISABLE
 	help
 	  Say Y here to disable branch prediction.  If unsure, say N.
 
+config CPU_SPECTRE
+	bool
+
 config TLS_REG_EMUL
 	bool
 	select NEED_KUSER_HELPERS
-- 
2.17.1


* [PATCH 4.9 06/24] ARM: spectre-v2: harden branch predictor on context switches
  2018-10-31 13:56 [PATCH 4.9 00/24] V4.9 backport of 32-bit arm spectre patches David Long
                   ` (4 preceding siblings ...)
  2018-10-31 13:56 ` [PATCH 4.9 05/24] ARM: spectre: add Kconfig symbol for CPUs vulnerable to Spectre David Long
@ 2018-10-31 13:56 ` David Long
  2018-10-31 13:56 ` [PATCH 4.9 07/24] ARM: spectre-v2: add Cortex A8 and A15 validation of the IBE bit David Long
                   ` (19 subsequent siblings)
  25 siblings, 0 replies; 40+ messages in thread
From: David Long @ 2018-10-31 13:56 UTC (permalink / raw)
  To: stable, Russell King - ARM Linux, Florian Fainelli,
	Tony Lindgren, Marc Zyngier, Mark Rutland
  Cc: Greg KH, Mark Brown

From: Russell King <rmk+kernel@armlinux.org.uk>

Commit 06c23f5ffe7ad45b908d0fff604dae08a7e334b9 upstream.

Required manual merge of arch/arm/mm/proc-v7.S.

Harden the branch predictor against Spectre v2 attacks on context
switches for ARMv7 and later CPUs.  We do this by:

Cortex A9, A12, A17, A73, A75: invalidating the BTB.
Cortex A15, Brahma B15: invalidating the instruction cache.

Cortex A57 and Cortex A72 are not addressed in this patch.

Cortex R7 and Cortex R8 are also not addressed as we do not enforce
memory protection on these cores.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Boot-tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Tony Lindgren <tony@atomide.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
---
 arch/arm/mm/Kconfig          |  19 ++++++
 arch/arm/mm/proc-v7-2level.S |   6 --
 arch/arm/mm/proc-v7.S        | 125 +++++++++++++++++++++++++++--------
 3 files changed, 115 insertions(+), 35 deletions(-)

diff --git a/arch/arm/mm/Kconfig b/arch/arm/mm/Kconfig
index d37af5e63411..7f3760fa9c15 100644
--- a/arch/arm/mm/Kconfig
+++ b/arch/arm/mm/Kconfig
@@ -804,6 +804,25 @@ config CPU_BPREDICT_DISABLE
 config CPU_SPECTRE
 	bool
 
+config HARDEN_BRANCH_PREDICTOR
+	bool "Harden the branch predictor against aliasing attacks" if EXPERT
+	depends on CPU_SPECTRE
+	default y
+	help
+	   Speculation attacks against some high-performance processors rely
+	   on being able to manipulate the branch predictor for a victim
+	   context by executing aliasing branches in the attacker context.
+	   Such attacks can be partially mitigated against by clearing
+	   internal branch predictor state and limiting the prediction
+	   logic in some situations.
+
+	   This config option will take CPU-specific actions to harden
+	   the branch predictor against aliasing attacks and may rely on
+	   specific instruction sequences or control bits being set by
+	   the system firmware.
+
+	   If unsure, say Y.
+
 config TLS_REG_EMUL
 	bool
 	select NEED_KUSER_HELPERS
diff --git a/arch/arm/mm/proc-v7-2level.S b/arch/arm/mm/proc-v7-2level.S
index c6141a5435c3..f8d45ad2a515 100644
--- a/arch/arm/mm/proc-v7-2level.S
+++ b/arch/arm/mm/proc-v7-2level.S
@@ -41,11 +41,6 @@
  *	even on Cortex-A8 revisions not affected by 430973.
  *	If IBE is not set, the flush BTAC/BTB won't do anything.
  */
-ENTRY(cpu_ca8_switch_mm)
-#ifdef CONFIG_MMU
-	mov	r2, #0
-	mcr	p15, 0, r2, c7, c5, 6		@ flush BTAC/BTB
-#endif
 ENTRY(cpu_v7_switch_mm)
 #ifdef CONFIG_MMU
 	mmid	r1, r1				@ get mm->context.id
@@ -66,7 +61,6 @@ ENTRY(cpu_v7_switch_mm)
 #endif
 	bx	lr
 ENDPROC(cpu_v7_switch_mm)
-ENDPROC(cpu_ca8_switch_mm)
 
 /*
  *	cpu_v7_set_pte_ext(ptep, pte)
diff --git a/arch/arm/mm/proc-v7.S b/arch/arm/mm/proc-v7.S
index d00d52c9de3e..bf632d76d392 100644
--- a/arch/arm/mm/proc-v7.S
+++ b/arch/arm/mm/proc-v7.S
@@ -88,6 +88,17 @@ ENTRY(cpu_v7_dcache_clean_area)
 	ret	lr
 ENDPROC(cpu_v7_dcache_clean_area)
 
+ENTRY(cpu_v7_iciallu_switch_mm)
+	mov	r3, #0
+	mcr	p15, 0, r3, c7, c5, 0		@ ICIALLU
+	b	cpu_v7_switch_mm
+ENDPROC(cpu_v7_iciallu_switch_mm)
+ENTRY(cpu_v7_bpiall_switch_mm)
+	mov	r3, #0
+	mcr	p15, 0, r3, c7, c5, 6		@ flush BTAC/BTB
+	b	cpu_v7_switch_mm
+ENDPROC(cpu_v7_bpiall_switch_mm)
+
 	string	cpu_v7_name, "ARMv7 Processor"
 	.align
 
@@ -153,31 +164,6 @@ ENTRY(cpu_v7_do_resume)
 ENDPROC(cpu_v7_do_resume)
 #endif
 
-/*
- * Cortex-A8
- */
-	globl_equ	cpu_ca8_proc_init,	cpu_v7_proc_init
-	globl_equ	cpu_ca8_proc_fin,	cpu_v7_proc_fin
-	globl_equ	cpu_ca8_reset,		cpu_v7_reset
-	globl_equ	cpu_ca8_do_idle,	cpu_v7_do_idle
-	globl_equ	cpu_ca8_dcache_clean_area, cpu_v7_dcache_clean_area
-	globl_equ	cpu_ca8_set_pte_ext,	cpu_v7_set_pte_ext
-	globl_equ	cpu_ca8_suspend_size,	cpu_v7_suspend_size
-#ifdef CONFIG_ARM_CPU_SUSPEND
-	globl_equ	cpu_ca8_do_suspend,	cpu_v7_do_suspend
-	globl_equ	cpu_ca8_do_resume,	cpu_v7_do_resume
-#endif
-
-/*
- * Cortex-A9 processor functions
- */
-	globl_equ	cpu_ca9mp_proc_init,	cpu_v7_proc_init
-	globl_equ	cpu_ca9mp_proc_fin,	cpu_v7_proc_fin
-	globl_equ	cpu_ca9mp_reset,	cpu_v7_reset
-	globl_equ	cpu_ca9mp_do_idle,	cpu_v7_do_idle
-	globl_equ	cpu_ca9mp_dcache_clean_area, cpu_v7_dcache_clean_area
-	globl_equ	cpu_ca9mp_switch_mm,	cpu_v7_switch_mm
-	globl_equ	cpu_ca9mp_set_pte_ext,	cpu_v7_set_pte_ext
 .globl	cpu_ca9mp_suspend_size
 .equ	cpu_ca9mp_suspend_size, cpu_v7_suspend_size + 4 * 2
 #ifdef CONFIG_ARM_CPU_SUSPEND
@@ -543,10 +529,75 @@ __v7_setup_stack:
 
 	@ define struct processor (see <asm/proc-fns.h> and proc-macros.S)
 	define_processor_functions v7, dabort=v7_early_abort, pabort=v7_pabort, suspend=1
+
+#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+	@ generic v7 bpiall on context switch
+	globl_equ	cpu_v7_bpiall_proc_init,	cpu_v7_proc_init
+	globl_equ	cpu_v7_bpiall_proc_fin,		cpu_v7_proc_fin
+	globl_equ	cpu_v7_bpiall_reset,		cpu_v7_reset
+	globl_equ	cpu_v7_bpiall_do_idle,		cpu_v7_do_idle
+	globl_equ	cpu_v7_bpiall_dcache_clean_area, cpu_v7_dcache_clean_area
+	globl_equ	cpu_v7_bpiall_set_pte_ext,	cpu_v7_set_pte_ext
+	globl_equ	cpu_v7_bpiall_suspend_size,	cpu_v7_suspend_size
+#ifdef CONFIG_ARM_CPU_SUSPEND
+	globl_equ	cpu_v7_bpiall_do_suspend,	cpu_v7_do_suspend
+	globl_equ	cpu_v7_bpiall_do_resume,	cpu_v7_do_resume
+#endif
+	define_processor_functions v7_bpiall, dabort=v7_early_abort, pabort=v7_pabort, suspend=1
+
+#define HARDENED_BPIALL_PROCESSOR_FUNCTIONS v7_bpiall_processor_functions
+#else
+#define HARDENED_BPIALL_PROCESSOR_FUNCTIONS v7_processor_functions
+#endif
+
 #ifndef CONFIG_ARM_LPAE
+	@ Cortex-A8 - always needs bpiall switch_mm implementation
+	globl_equ	cpu_ca8_proc_init,	cpu_v7_proc_init
+	globl_equ	cpu_ca8_proc_fin,	cpu_v7_proc_fin
+	globl_equ	cpu_ca8_reset,		cpu_v7_reset
+	globl_equ	cpu_ca8_do_idle,	cpu_v7_do_idle
+	globl_equ	cpu_ca8_dcache_clean_area, cpu_v7_dcache_clean_area
+	globl_equ	cpu_ca8_set_pte_ext,	cpu_v7_set_pte_ext
+	globl_equ	cpu_ca8_switch_mm,	cpu_v7_bpiall_switch_mm
+	globl_equ	cpu_ca8_suspend_size,	cpu_v7_suspend_size
+#ifdef CONFIG_ARM_CPU_SUSPEND
+	globl_equ	cpu_ca8_do_suspend,	cpu_v7_do_suspend
+	globl_equ	cpu_ca8_do_resume,	cpu_v7_do_resume
+#endif
 	define_processor_functions ca8, dabort=v7_early_abort, pabort=v7_pabort, suspend=1
+
+	@ Cortex-A9 - needs more registers preserved across suspend/resume
+	@ and bpiall switch_mm for hardening
+	globl_equ	cpu_ca9mp_proc_init,	cpu_v7_proc_init
+	globl_equ	cpu_ca9mp_proc_fin,	cpu_v7_proc_fin
+	globl_equ	cpu_ca9mp_reset,	cpu_v7_reset
+	globl_equ	cpu_ca9mp_do_idle,	cpu_v7_do_idle
+	globl_equ	cpu_ca9mp_dcache_clean_area, cpu_v7_dcache_clean_area
+#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+	globl_equ	cpu_ca9mp_switch_mm,	cpu_v7_bpiall_switch_mm
+#else
+	globl_equ	cpu_ca9mp_switch_mm,	cpu_v7_switch_mm
+#endif
+	globl_equ	cpu_ca9mp_set_pte_ext,	cpu_v7_set_pte_ext
 	define_processor_functions ca9mp, dabort=v7_early_abort, pabort=v7_pabort, suspend=1
 #endif
+
+	@ Cortex-A15 - needs iciallu switch_mm for hardening
+	globl_equ	cpu_ca15_proc_init,	cpu_v7_proc_init
+	globl_equ	cpu_ca15_proc_fin,	cpu_v7_proc_fin
+	globl_equ	cpu_ca15_reset,		cpu_v7_reset
+	globl_equ	cpu_ca15_do_idle,	cpu_v7_do_idle
+	globl_equ	cpu_ca15_dcache_clean_area, cpu_v7_dcache_clean_area
+#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+	globl_equ	cpu_ca15_switch_mm,	cpu_v7_iciallu_switch_mm
+#else
+	globl_equ	cpu_ca15_switch_mm,	cpu_v7_switch_mm
+#endif
+	globl_equ	cpu_ca15_set_pte_ext,	cpu_v7_set_pte_ext
+	globl_equ	cpu_ca15_suspend_size,	cpu_v7_suspend_size
+	globl_equ	cpu_ca15_do_suspend,	cpu_v7_do_suspend
+	globl_equ	cpu_ca15_do_resume,	cpu_v7_do_resume
+	define_processor_functions ca15, dabort=v7_early_abort, pabort=v7_pabort, suspend=1
 #ifdef CONFIG_CPU_PJ4B
 	define_processor_functions pj4b, dabort=v7_early_abort, pabort=v7_pabort, suspend=1
 #endif
@@ -653,7 +704,7 @@ __v7_ca7mp_proc_info:
 __v7_ca12mp_proc_info:
 	.long	0x410fc0d0
 	.long	0xff0ffff0
-	__v7_proc __v7_ca12mp_proc_info, __v7_ca12mp_setup
+	__v7_proc __v7_ca12mp_proc_info, __v7_ca12mp_setup, proc_fns = HARDENED_BPIALL_PROCESSOR_FUNCTIONS
 	.size	__v7_ca12mp_proc_info, . - __v7_ca12mp_proc_info
 
 	/*
@@ -663,7 +714,7 @@ __v7_ca12mp_proc_info:
 __v7_ca15mp_proc_info:
 	.long	0x410fc0f0
 	.long	0xff0ffff0
-	__v7_proc __v7_ca15mp_proc_info, __v7_ca15mp_setup
+	__v7_proc __v7_ca15mp_proc_info, __v7_ca15mp_setup, proc_fns = ca15_processor_functions
 	.size	__v7_ca15mp_proc_info, . - __v7_ca15mp_proc_info
 
 	/*
@@ -673,7 +724,7 @@ __v7_ca15mp_proc_info:
 __v7_b15mp_proc_info:
 	.long	0x420f00f0
 	.long	0xff0ffff0
-	__v7_proc __v7_b15mp_proc_info, __v7_b15mp_setup
+	__v7_proc __v7_b15mp_proc_info, __v7_b15mp_setup, proc_fns = ca15_processor_functions
 	.size	__v7_b15mp_proc_info, . - __v7_b15mp_proc_info
 
 	/*
@@ -683,9 +734,25 @@ __v7_b15mp_proc_info:
 __v7_ca17mp_proc_info:
 	.long	0x410fc0e0
 	.long	0xff0ffff0
-	__v7_proc __v7_ca17mp_proc_info, __v7_ca17mp_setup
+	__v7_proc __v7_ca17mp_proc_info, __v7_ca17mp_setup, proc_fns = HARDENED_BPIALL_PROCESSOR_FUNCTIONS
 	.size	__v7_ca17mp_proc_info, . - __v7_ca17mp_proc_info
 
+	/* ARM Ltd. Cortex A73 processor */
+	.type	__v7_ca73_proc_info, #object
+__v7_ca73_proc_info:
+	.long	0x410fd090
+	.long	0xff0ffff0
+	__v7_proc __v7_ca73_proc_info, __v7_setup, proc_fns = HARDENED_BPIALL_PROCESSOR_FUNCTIONS
+	.size	__v7_ca73_proc_info, . - __v7_ca73_proc_info
+
+	/* ARM Ltd. Cortex A75 processor */
+	.type	__v7_ca75_proc_info, #object
+__v7_ca75_proc_info:
+	.long	0x410fd0a0
+	.long	0xff0ffff0
+	__v7_proc __v7_ca75_proc_info, __v7_setup, proc_fns = HARDENED_BPIALL_PROCESSOR_FUNCTIONS
+	.size	__v7_ca75_proc_info, . - __v7_ca75_proc_info
+
 	/*
 	 * Qualcomm Inc. Krait processors.
 	 */
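Each __v7_proc_info entry above pairs a MIDR value/mask with a processor-functions table, so affected parts are given the bpiall or iciallu switch_mm variant while unlisted cores keep the plain one. A hypothetical C rendering of that table lookup, using the MIDR values and masks from the hunks:

```c
#include <assert.h>
#include <stdint.h>

enum switch_mm_kind { SWITCH_PLAIN, SWITCH_BPIALL, SWITCH_ICIALLU };

struct proc_info {
	uint32_t val;			/* expected MIDR bits */
	uint32_t mask;			/* significant MIDR bits */
	enum switch_mm_kind fns;	/* stand-in for proc_fns */
};

/* MIDR values/masks from the proc_info entries; fns per the patch. */
static const struct proc_info table[] = {
	{ 0x410fc0d0, 0xff0ffff0, SWITCH_BPIALL  },	/* Cortex-A12 */
	{ 0x410fc0f0, 0xff0ffff0, SWITCH_ICIALLU },	/* Cortex-A15 */
	{ 0x420f00f0, 0xff0ffff0, SWITCH_ICIALLU },	/* Brahma B15 */
	{ 0x410fd090, 0xff0ffff0, SWITCH_BPIALL  },	/* Cortex-A73 */
};

static enum switch_mm_kind lookup_switch_mm(uint32_t midr)
{
	for (unsigned int i = 0; i < sizeof(table) / sizeof(table[0]); i++)
		if ((midr & table[i].mask) == table[i].val)
			return table[i].fns;
	return SWITCH_PLAIN;	/* core not listed: unhardened functions */
}
```

Doing the selection at proc_info match time keeps the hot switch_mm path free of runtime CPU-type checks.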
-- 
2.17.1


* [PATCH 4.9 07/24] ARM: spectre-v2: add Cortex A8 and A15 validation of the IBE bit
  2018-10-31 13:56 [PATCH 4.9 00/24] V4.9 backport of 32-bit arm spectre patches David Long
                   ` (5 preceding siblings ...)
  2018-10-31 13:56 ` [PATCH 4.9 06/24] ARM: spectre-v2: harden branch predictor on context switches David Long
@ 2018-10-31 13:56 ` David Long
  2018-10-31 13:56 ` [PATCH 4.9 08/24] ARM: spectre-v2: harden user aborts in kernel space David Long
                   ` (18 subsequent siblings)
  25 siblings, 0 replies; 40+ messages in thread
From: David Long @ 2018-10-31 13:56 UTC (permalink / raw)
  To: stable, Russell King - ARM Linux, Florian Fainelli,
	Tony Lindgren, Marc Zyngier, Mark Rutland
  Cc: Greg KH, Mark Brown

From: Russell King <rmk+kernel@armlinux.org.uk>

Commit e388b80288aade31135aca23d32eee93dd106795 upstream.

When the branch predictor hardening is enabled, firmware must have set
the IBE bit in the auxiliary control register.  If this bit has not
been set, the Spectre workarounds will not be functional.

Add validation that this bit is set, and print a warning at alert level
if this is not the case.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Boot-tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
---
 arch/arm/mm/Makefile       |  2 +-
 arch/arm/mm/proc-v7-bugs.c | 36 ++++++++++++++++++++++++++++++++++++
 arch/arm/mm/proc-v7.S      |  4 ++--
 3 files changed, 39 insertions(+), 3 deletions(-)
 create mode 100644 arch/arm/mm/proc-v7-bugs.c

diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile
index e8698241ece9..92d47c8cbbc3 100644
--- a/arch/arm/mm/Makefile
+++ b/arch/arm/mm/Makefile
@@ -94,7 +94,7 @@ obj-$(CONFIG_CPU_MOHAWK)	+= proc-mohawk.o
 obj-$(CONFIG_CPU_FEROCEON)	+= proc-feroceon.o
 obj-$(CONFIG_CPU_V6)		+= proc-v6.o
 obj-$(CONFIG_CPU_V6K)		+= proc-v6.o
-obj-$(CONFIG_CPU_V7)		+= proc-v7.o
+obj-$(CONFIG_CPU_V7)		+= proc-v7.o proc-v7-bugs.o
 obj-$(CONFIG_CPU_V7M)		+= proc-v7m.o
 
 AFLAGS_proc-v6.o	:=-Wa,-march=armv6
diff --git a/arch/arm/mm/proc-v7-bugs.c b/arch/arm/mm/proc-v7-bugs.c
new file mode 100644
index 000000000000..e46557db6446
--- /dev/null
+++ b/arch/arm/mm/proc-v7-bugs.c
@@ -0,0 +1,36 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/kernel.h>
+#include <linux/smp.h>
+
+static __maybe_unused void cpu_v7_check_auxcr_set(bool *warned,
+						  u32 mask, const char *msg)
+{
+	u32 aux_cr;
+
+	asm("mrc p15, 0, %0, c1, c0, 1" : "=r" (aux_cr));
+
+	if ((aux_cr & mask) != mask) {
+		if (!*warned)
+			pr_err("CPU%u: %s", smp_processor_id(), msg);
+		*warned = true;
+	}
+}
+
+static DEFINE_PER_CPU(bool, spectre_warned);
+
+static void check_spectre_auxcr(bool *warned, u32 bit)
+{
+	if (IS_ENABLED(CONFIG_HARDEN_BRANCH_PREDICTOR))
+		cpu_v7_check_auxcr_set(warned, bit,
+				       "Spectre v2: firmware did not set auxiliary control register IBE bit, system vulnerable\n");
+}
+
+void cpu_v7_ca8_ibe(void)
+{
+	check_spectre_auxcr(this_cpu_ptr(&spectre_warned), BIT(6));
+}
+
+void cpu_v7_ca15_ibe(void)
+{
+	check_spectre_auxcr(this_cpu_ptr(&spectre_warned), BIT(0));
+}
diff --git a/arch/arm/mm/proc-v7.S b/arch/arm/mm/proc-v7.S
index bf632d76d392..4e4f794f17ce 100644
--- a/arch/arm/mm/proc-v7.S
+++ b/arch/arm/mm/proc-v7.S
@@ -564,7 +564,7 @@ __v7_setup_stack:
 	globl_equ	cpu_ca8_do_suspend,	cpu_v7_do_suspend
 	globl_equ	cpu_ca8_do_resume,	cpu_v7_do_resume
 #endif
-	define_processor_functions ca8, dabort=v7_early_abort, pabort=v7_pabort, suspend=1
+	define_processor_functions ca8, dabort=v7_early_abort, pabort=v7_pabort, suspend=1, bugs=cpu_v7_ca8_ibe
 
 	@ Cortex-A9 - needs more registers preserved across suspend/resume
 	@ and bpiall switch_mm for hardening
@@ -597,7 +597,7 @@ __v7_setup_stack:
 	globl_equ	cpu_ca15_suspend_size,	cpu_v7_suspend_size
 	globl_equ	cpu_ca15_do_suspend,	cpu_v7_do_suspend
 	globl_equ	cpu_ca15_do_resume,	cpu_v7_do_resume
-	define_processor_functions ca15, dabort=v7_early_abort, pabort=v7_pabort, suspend=1
+	define_processor_functions ca15, dabort=v7_early_abort, pabort=v7_pabort, suspend=1, bugs=cpu_v7_ca15_ibe
 #ifdef CONFIG_CPU_PJ4B
 	define_processor_functions pj4b, dabort=v7_early_abort, pabort=v7_pabort, suspend=1
 #endif
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH 4.9 08/24] ARM: spectre-v2: harden user aborts in kernel space
From: David Long @ 2018-10-31 13:56 UTC (permalink / raw)
  To: stable, Russell King - ARM Linux, Florian Fainelli,
	Tony Lindgren, Marc Zyngier, Mark Rutland
  Cc: Greg KH, Mark Brown

From: Russell King <rmk+kernel@armlinux.org.uk>

Commit f5fe12b1eaee220ce62ff9afb8b90929c396595f upstream.

In order to prevent aliasing attacks on the branch predictor,
invalidate the BTB or instruction cache on CPUs that are known to be
affected when taking an abort on an address that is outside of a user
task limit:

Cortex A8, A9, A12, A17, A73, A75: flush BTB.
Cortex A15, Brahma B15: invalidate icache.

If the IBE bit is not set, then there is little point to enabling the
workaround.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Boot-tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
---
 arch/arm/include/asm/cp15.h        |  3 ++
 arch/arm/include/asm/system_misc.h | 15 ++++++
 arch/arm/mm/fault.c                |  3 ++
 arch/arm/mm/proc-v7-bugs.c         | 73 ++++++++++++++++++++++++++++--
 arch/arm/mm/proc-v7.S              |  8 ++--
 5 files changed, 94 insertions(+), 8 deletions(-)

diff --git a/arch/arm/include/asm/cp15.h b/arch/arm/include/asm/cp15.h
index dbdbce1b3a72..b74b174ac9fc 100644
--- a/arch/arm/include/asm/cp15.h
+++ b/arch/arm/include/asm/cp15.h
@@ -64,6 +64,9 @@
 #define __write_sysreg(v, r, w, c, t)	asm volatile(w " " c : : "r" ((t)(v)))
 #define write_sysreg(v, ...)		__write_sysreg(v, __VA_ARGS__)
 
+#define BPIALL				__ACCESS_CP15(c7, 0, c5, 6)
+#define ICIALLU				__ACCESS_CP15(c7, 0, c5, 0)
+
 extern unsigned long cr_alignment;	/* defined in entry-armv.S */
 
 static inline unsigned long get_cr(void)
diff --git a/arch/arm/include/asm/system_misc.h b/arch/arm/include/asm/system_misc.h
index a3d61ad984af..1fed41440af9 100644
--- a/arch/arm/include/asm/system_misc.h
+++ b/arch/arm/include/asm/system_misc.h
@@ -7,6 +7,7 @@
 #include <linux/linkage.h>
 #include <linux/irqflags.h>
 #include <linux/reboot.h>
+#include <linux/percpu.h>
 
 extern void cpu_init(void);
 
@@ -14,6 +15,20 @@ void soft_restart(unsigned long);
 extern void (*arm_pm_restart)(enum reboot_mode reboot_mode, const char *cmd);
 extern void (*arm_pm_idle)(void);
 
+#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+typedef void (*harden_branch_predictor_fn_t)(void);
+DECLARE_PER_CPU(harden_branch_predictor_fn_t, harden_branch_predictor_fn);
+static inline void harden_branch_predictor(void)
+{
+	harden_branch_predictor_fn_t fn = per_cpu(harden_branch_predictor_fn,
+						  smp_processor_id());
+	if (fn)
+		fn();
+}
+#else
+#define harden_branch_predictor() do { } while (0)
+#endif
+
 #define UDBG_UNDEFINED	(1 << 0)
 #define UDBG_SYSCALL	(1 << 1)
 #define UDBG_BADABORT	(1 << 2)
diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
index f7861dc83182..5ca207ada852 100644
--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -163,6 +163,9 @@ __do_user_fault(struct task_struct *tsk, unsigned long addr,
 {
 	struct siginfo si;
 
+	if (addr > TASK_SIZE)
+		harden_branch_predictor();
+
 #ifdef CONFIG_DEBUG_USER
 	if (((user_debug & UDBG_SEGV) && (sig == SIGSEGV)) ||
 	    ((user_debug & UDBG_BUS)  && (sig == SIGBUS))) {
diff --git a/arch/arm/mm/proc-v7-bugs.c b/arch/arm/mm/proc-v7-bugs.c
index e46557db6446..85a2e3d6263c 100644
--- a/arch/arm/mm/proc-v7-bugs.c
+++ b/arch/arm/mm/proc-v7-bugs.c
@@ -2,7 +2,61 @@
 #include <linux/kernel.h>
 #include <linux/smp.h>
 
-static __maybe_unused void cpu_v7_check_auxcr_set(bool *warned,
+#include <asm/cp15.h>
+#include <asm/cputype.h>
+#include <asm/system_misc.h>
+
+#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+DEFINE_PER_CPU(harden_branch_predictor_fn_t, harden_branch_predictor_fn);
+
+static void harden_branch_predictor_bpiall(void)
+{
+	write_sysreg(0, BPIALL);
+}
+
+static void harden_branch_predictor_iciallu(void)
+{
+	write_sysreg(0, ICIALLU);
+}
+
+static void cpu_v7_spectre_init(void)
+{
+	const char *spectre_v2_method = NULL;
+	int cpu = smp_processor_id();
+
+	if (per_cpu(harden_branch_predictor_fn, cpu))
+		return;
+
+	switch (read_cpuid_part()) {
+	case ARM_CPU_PART_CORTEX_A8:
+	case ARM_CPU_PART_CORTEX_A9:
+	case ARM_CPU_PART_CORTEX_A12:
+	case ARM_CPU_PART_CORTEX_A17:
+	case ARM_CPU_PART_CORTEX_A73:
+	case ARM_CPU_PART_CORTEX_A75:
+		per_cpu(harden_branch_predictor_fn, cpu) =
+			harden_branch_predictor_bpiall;
+		spectre_v2_method = "BPIALL";
+		break;
+
+	case ARM_CPU_PART_CORTEX_A15:
+	case ARM_CPU_PART_BRAHMA_B15:
+		per_cpu(harden_branch_predictor_fn, cpu) =
+			harden_branch_predictor_iciallu;
+		spectre_v2_method = "ICIALLU";
+		break;
+	}
+	if (spectre_v2_method)
+		pr_info("CPU%u: Spectre v2: using %s workaround\n",
+			smp_processor_id(), spectre_v2_method);
+}
+#else
+static void cpu_v7_spectre_init(void)
+{
+}
+#endif
+
+static __maybe_unused bool cpu_v7_check_auxcr_set(bool *warned,
 						  u32 mask, const char *msg)
 {
 	u32 aux_cr;
@@ -13,24 +67,33 @@ static __maybe_unused void cpu_v7_check_auxcr_set(bool *warned,
 		if (!*warned)
 			pr_err("CPU%u: %s", smp_processor_id(), msg);
 		*warned = true;
+		return false;
 	}
+	return true;
 }
 
 static DEFINE_PER_CPU(bool, spectre_warned);
 
-static void check_spectre_auxcr(bool *warned, u32 bit)
+static bool check_spectre_auxcr(bool *warned, u32 bit)
 {
-	if (IS_ENABLED(CONFIG_HARDEN_BRANCH_PREDICTOR))
+	return IS_ENABLED(CONFIG_HARDEN_BRANCH_PREDICTOR) &&
 		cpu_v7_check_auxcr_set(warned, bit,
 				       "Spectre v2: firmware did not set auxiliary control register IBE bit, system vulnerable\n");
 }
 
 void cpu_v7_ca8_ibe(void)
 {
-	check_spectre_auxcr(this_cpu_ptr(&spectre_warned), BIT(6));
+	if (check_spectre_auxcr(this_cpu_ptr(&spectre_warned), BIT(6)))
+		cpu_v7_spectre_init();
 }
 
 void cpu_v7_ca15_ibe(void)
 {
-	check_spectre_auxcr(this_cpu_ptr(&spectre_warned), BIT(0));
+	if (check_spectre_auxcr(this_cpu_ptr(&spectre_warned), BIT(0)))
+		cpu_v7_spectre_init();
+}
+
+void cpu_v7_bugs_init(void)
+{
+	cpu_v7_spectre_init();
 }
diff --git a/arch/arm/mm/proc-v7.S b/arch/arm/mm/proc-v7.S
index 4e4f794f17ce..2d2e5ae85816 100644
--- a/arch/arm/mm/proc-v7.S
+++ b/arch/arm/mm/proc-v7.S
@@ -527,8 +527,10 @@ __v7_setup_stack:
 
 	__INITDATA
 
+	.weak cpu_v7_bugs_init
+
 	@ define struct processor (see <asm/proc-fns.h> and proc-macros.S)
-	define_processor_functions v7, dabort=v7_early_abort, pabort=v7_pabort, suspend=1
+	define_processor_functions v7, dabort=v7_early_abort, pabort=v7_pabort, suspend=1, bugs=cpu_v7_bugs_init
 
 #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
 	@ generic v7 bpiall on context switch
@@ -543,7 +545,7 @@ __v7_setup_stack:
 	globl_equ	cpu_v7_bpiall_do_suspend,	cpu_v7_do_suspend
 	globl_equ	cpu_v7_bpiall_do_resume,	cpu_v7_do_resume
 #endif
-	define_processor_functions v7_bpiall, dabort=v7_early_abort, pabort=v7_pabort, suspend=1
+	define_processor_functions v7_bpiall, dabort=v7_early_abort, pabort=v7_pabort, suspend=1, bugs=cpu_v7_bugs_init
 
 #define HARDENED_BPIALL_PROCESSOR_FUNCTIONS v7_bpiall_processor_functions
 #else
@@ -579,7 +581,7 @@ __v7_setup_stack:
 	globl_equ	cpu_ca9mp_switch_mm,	cpu_v7_switch_mm
 #endif
 	globl_equ	cpu_ca9mp_set_pte_ext,	cpu_v7_set_pte_ext
-	define_processor_functions ca9mp, dabort=v7_early_abort, pabort=v7_pabort, suspend=1
+	define_processor_functions ca9mp, dabort=v7_early_abort, pabort=v7_pabort, suspend=1, bugs=cpu_v7_bugs_init
 #endif
 
 	@ Cortex-A15 - needs iciallu switch_mm for hardening
-- 
2.17.1


* [PATCH 4.9 09/24] ARM: spectre-v2: add firmware based hardening
From: David Long @ 2018-10-31 13:56 UTC (permalink / raw)
  To: stable, Russell King - ARM Linux, Florian Fainelli,
	Tony Lindgren, Marc Zyngier, Mark Rutland
  Cc: Greg KH, Mark Brown

From: Russell King <rmk+kernel@armlinux.org.uk>

Commit 10115105cb3aa17b5da1cb726ae8dd5f6854bd93 upstream.

Add firmware based hardening for cores that require more complex
handling in firmware.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Boot-tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
---
 arch/arm/mm/proc-v7-bugs.c | 60 ++++++++++++++++++++++++++++++++++++++
 arch/arm/mm/proc-v7.S      | 21 +++++++++++++
 2 files changed, 81 insertions(+)

diff --git a/arch/arm/mm/proc-v7-bugs.c b/arch/arm/mm/proc-v7-bugs.c
index 85a2e3d6263c..da25a38e1897 100644
--- a/arch/arm/mm/proc-v7-bugs.c
+++ b/arch/arm/mm/proc-v7-bugs.c
@@ -1,14 +1,20 @@
 // SPDX-License-Identifier: GPL-2.0
+#include <linux/arm-smccc.h>
 #include <linux/kernel.h>
+#include <linux/psci.h>
 #include <linux/smp.h>
 
 #include <asm/cp15.h>
 #include <asm/cputype.h>
+#include <asm/proc-fns.h>
 #include <asm/system_misc.h>
 
 #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
 DEFINE_PER_CPU(harden_branch_predictor_fn_t, harden_branch_predictor_fn);
 
+extern void cpu_v7_smc_switch_mm(phys_addr_t pgd_phys, struct mm_struct *mm);
+extern void cpu_v7_hvc_switch_mm(phys_addr_t pgd_phys, struct mm_struct *mm);
+
 static void harden_branch_predictor_bpiall(void)
 {
 	write_sysreg(0, BPIALL);
@@ -19,6 +25,16 @@ static void harden_branch_predictor_iciallu(void)
 	write_sysreg(0, ICIALLU);
 }
 
+static void __maybe_unused call_smc_arch_workaround_1(void)
+{
+	arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL);
+}
+
+static void __maybe_unused call_hvc_arch_workaround_1(void)
+{
+	arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL);
+}
+
 static void cpu_v7_spectre_init(void)
 {
 	const char *spectre_v2_method = NULL;
@@ -45,7 +61,51 @@ static void cpu_v7_spectre_init(void)
 			harden_branch_predictor_iciallu;
 		spectre_v2_method = "ICIALLU";
 		break;
+
+#ifdef CONFIG_ARM_PSCI
+	default:
+		/* Other ARM CPUs require no workaround */
+		if (read_cpuid_implementor() == ARM_CPU_IMP_ARM)
+			break;
+		/* fallthrough */
+		/* Cortex A57/A72 require firmware workaround */
+	case ARM_CPU_PART_CORTEX_A57:
+	case ARM_CPU_PART_CORTEX_A72: {
+		struct arm_smccc_res res;
+
+		if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
+			break;
+
+		switch (psci_ops.conduit) {
+		case PSCI_CONDUIT_HVC:
+			arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
+					  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
+			if ((int)res.a0 != 0)
+				break;
+			per_cpu(harden_branch_predictor_fn, cpu) =
+				call_hvc_arch_workaround_1;
+			processor.switch_mm = cpu_v7_hvc_switch_mm;
+			spectre_v2_method = "hypervisor";
+			break;
+
+		case PSCI_CONDUIT_SMC:
+			arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
+					  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
+			if ((int)res.a0 != 0)
+				break;
+			per_cpu(harden_branch_predictor_fn, cpu) =
+				call_smc_arch_workaround_1;
+			processor.switch_mm = cpu_v7_smc_switch_mm;
+			spectre_v2_method = "firmware";
+			break;
+
+		default:
+			break;
+		}
 	}
+#endif
+	}
+
 	if (spectre_v2_method)
 		pr_info("CPU%u: Spectre v2: using %s workaround\n",
 			smp_processor_id(), spectre_v2_method);
diff --git a/arch/arm/mm/proc-v7.S b/arch/arm/mm/proc-v7.S
index 2d2e5ae85816..8fde9edb4a48 100644
--- a/arch/arm/mm/proc-v7.S
+++ b/arch/arm/mm/proc-v7.S
@@ -9,6 +9,7 @@
  *
  *  This is the "shell" of the ARMv7 processor support.
  */
+#include <linux/arm-smccc.h>
 #include <linux/init.h>
 #include <linux/linkage.h>
 #include <asm/assembler.h>
@@ -88,6 +89,26 @@ ENTRY(cpu_v7_dcache_clean_area)
 	ret	lr
 ENDPROC(cpu_v7_dcache_clean_area)
 
+#ifdef CONFIG_ARM_PSCI
+	.arch_extension sec
+ENTRY(cpu_v7_smc_switch_mm)
+	stmfd	sp!, {r0 - r3}
+	movw	r0, #:lower16:ARM_SMCCC_ARCH_WORKAROUND_1
+	movt	r0, #:upper16:ARM_SMCCC_ARCH_WORKAROUND_1
+	smc	#0
+	ldmfd	sp!, {r0 - r3}
+	b	cpu_v7_switch_mm
+ENDPROC(cpu_v7_smc_switch_mm)
+	.arch_extension virt
+ENTRY(cpu_v7_hvc_switch_mm)
+	stmfd	sp!, {r0 - r3}
+	movw	r0, #:lower16:ARM_SMCCC_ARCH_WORKAROUND_1
+	movt	r0, #:upper16:ARM_SMCCC_ARCH_WORKAROUND_1
+	hvc	#0
+	ldmfd	sp!, {r0 - r3}
+	b	cpu_v7_switch_mm
+ENDPROC(cpu_v7_hvc_switch_mm)
+#endif
 ENTRY(cpu_v7_iciallu_switch_mm)
 	mov	r3, #0
 	mcr	p15, 0, r3, c7, c5, 0		@ ICIALLU
-- 
2.17.1


* [PATCH 4.9 10/24] ARM: spectre-v2: warn about incorrect context switching functions
From: David Long @ 2018-10-31 13:56 UTC (permalink / raw)
  To: stable, Russell King - ARM Linux, Florian Fainelli,
	Tony Lindgren, Marc Zyngier, Mark Rutland
  Cc: Greg KH, Mark Brown

From: Russell King <rmk+kernel@armlinux.org.uk>

Commit c44f366ea7c85e1be27d08f2f0880f4120698125 upstream.

Warn at error level if the context switching function is not what we
are expecting.  This can happen with big.LITTLE systems, which we
currently do not support.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Boot-tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Tony Lindgren <tony@atomide.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
---
 arch/arm/mm/proc-v7-bugs.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/arch/arm/mm/proc-v7-bugs.c b/arch/arm/mm/proc-v7-bugs.c
index da25a38e1897..5544b82a2e7a 100644
--- a/arch/arm/mm/proc-v7-bugs.c
+++ b/arch/arm/mm/proc-v7-bugs.c
@@ -12,6 +12,8 @@
 #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
 DEFINE_PER_CPU(harden_branch_predictor_fn_t, harden_branch_predictor_fn);
 
+extern void cpu_v7_iciallu_switch_mm(phys_addr_t pgd_phys, struct mm_struct *mm);
+extern void cpu_v7_bpiall_switch_mm(phys_addr_t pgd_phys, struct mm_struct *mm);
 extern void cpu_v7_smc_switch_mm(phys_addr_t pgd_phys, struct mm_struct *mm);
 extern void cpu_v7_hvc_switch_mm(phys_addr_t pgd_phys, struct mm_struct *mm);
 
@@ -50,6 +52,8 @@ static void cpu_v7_spectre_init(void)
 	case ARM_CPU_PART_CORTEX_A17:
 	case ARM_CPU_PART_CORTEX_A73:
 	case ARM_CPU_PART_CORTEX_A75:
+		if (processor.switch_mm != cpu_v7_bpiall_switch_mm)
+			goto bl_error;
 		per_cpu(harden_branch_predictor_fn, cpu) =
 			harden_branch_predictor_bpiall;
 		spectre_v2_method = "BPIALL";
@@ -57,6 +61,8 @@ static void cpu_v7_spectre_init(void)
 
 	case ARM_CPU_PART_CORTEX_A15:
 	case ARM_CPU_PART_BRAHMA_B15:
+		if (processor.switch_mm != cpu_v7_iciallu_switch_mm)
+			goto bl_error;
 		per_cpu(harden_branch_predictor_fn, cpu) =
 			harden_branch_predictor_iciallu;
 		spectre_v2_method = "ICIALLU";
@@ -82,6 +88,8 @@ static void cpu_v7_spectre_init(void)
 					  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
 			if ((int)res.a0 != 0)
 				break;
+			if (processor.switch_mm != cpu_v7_hvc_switch_mm && cpu)
+				goto bl_error;
 			per_cpu(harden_branch_predictor_fn, cpu) =
 				call_hvc_arch_workaround_1;
 			processor.switch_mm = cpu_v7_hvc_switch_mm;
@@ -93,6 +101,8 @@ static void cpu_v7_spectre_init(void)
 					  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
 			if ((int)res.a0 != 0)
 				break;
+			if (processor.switch_mm != cpu_v7_smc_switch_mm && cpu)
+				goto bl_error;
 			per_cpu(harden_branch_predictor_fn, cpu) =
 				call_smc_arch_workaround_1;
 			processor.switch_mm = cpu_v7_smc_switch_mm;
@@ -109,6 +119,11 @@ static void cpu_v7_spectre_init(void)
 	if (spectre_v2_method)
 		pr_info("CPU%u: Spectre v2: using %s workaround\n",
 			smp_processor_id(), spectre_v2_method);
+	return;
+
+bl_error:
+	pr_err("CPU%u: Spectre v2: incorrect context switching function, system vulnerable\n",
+		cpu);
 }
 #else
 static void cpu_v7_spectre_init(void)
-- 
2.17.1


* [PATCH 4.9 11/24] ARM: KVM: invalidate BTB on guest exit for Cortex-A12/A17
From: David Long @ 2018-10-31 13:57 UTC (permalink / raw)
  To: stable, Russell King - ARM Linux, Florian Fainelli,
	Tony Lindgren, Marc Zyngier, Mark Rutland
  Cc: Greg KH, Mark Brown

From: Marc Zyngier <marc.zyngier@arm.com>

Commit 3f7e8e2e1ebda787f156ce46e3f0a9ce2833fa4f upstream.

In order to avoid aliasing attacks against the branch predictor,
let's invalidate the BTB on guest exit. This is made complicated
by the fact that we cannot take a branch before invalidating the
BTB.

We only apply this to A12 and A17, which are the only two ARM
cores on which this is useful.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Boot-tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
---
 arch/arm/include/asm/kvm_asm.h |  2 -
 arch/arm/include/asm/kvm_mmu.h | 17 ++++++++-
 arch/arm/kvm/hyp/hyp-entry.S   | 69 ++++++++++++++++++++++++++++++++++
 3 files changed, 85 insertions(+), 3 deletions(-)

diff --git a/arch/arm/include/asm/kvm_asm.h b/arch/arm/include/asm/kvm_asm.h
index 8ef05381984b..24f3ec7c9fbe 100644
--- a/arch/arm/include/asm/kvm_asm.h
+++ b/arch/arm/include/asm/kvm_asm.h
@@ -61,8 +61,6 @@ struct kvm_vcpu;
 extern char __kvm_hyp_init[];
 extern char __kvm_hyp_init_end[];
 
-extern char __kvm_hyp_vector[];
-
 extern void __kvm_flush_vm_context(void);
 extern void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);
 extern void __kvm_tlb_flush_vmid(struct kvm *kvm);
diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index e2f05cedaf97..625edef2a54f 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -248,7 +248,22 @@ static inline int kvm_read_guest_lock(struct kvm *kvm,
 
 static inline void *kvm_get_hyp_vector(void)
 {
-	return kvm_ksym_ref(__kvm_hyp_vector);
+	switch(read_cpuid_part()) {
+#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+	case ARM_CPU_PART_CORTEX_A12:
+	case ARM_CPU_PART_CORTEX_A17:
+	{
+		extern char __kvm_hyp_vector_bp_inv[];
+		return kvm_ksym_ref(__kvm_hyp_vector_bp_inv);
+	}
+
+#endif
+	default:
+	{
+		extern char __kvm_hyp_vector[];
+		return kvm_ksym_ref(__kvm_hyp_vector);
+	}
+	}
 }
 
 static inline int kvm_map_vectors(void)
diff --git a/arch/arm/kvm/hyp/hyp-entry.S b/arch/arm/kvm/hyp/hyp-entry.S
index 96beb53934c9..de242d9598c6 100644
--- a/arch/arm/kvm/hyp/hyp-entry.S
+++ b/arch/arm/kvm/hyp/hyp-entry.S
@@ -71,6 +71,66 @@ __kvm_hyp_vector:
 	W(b)	hyp_irq
 	W(b)	hyp_fiq
 
+#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+	.align 5
+__kvm_hyp_vector_bp_inv:
+	.global __kvm_hyp_vector_bp_inv
+
+	/*
+	 * We encode the exception entry in the bottom 3 bits of
+	 * SP, and we have to guarantee to be 8 bytes aligned.
+	 */
+	W(add)	sp, sp, #1	/* Reset 	  7 */
+	W(add)	sp, sp, #1	/* Undef	  6 */
+	W(add)	sp, sp, #1	/* Syscall	  5 */
+	W(add)	sp, sp, #1	/* Prefetch abort 4 */
+	W(add)	sp, sp, #1	/* Data abort	  3 */
+	W(add)	sp, sp, #1	/* HVC		  2 */
+	W(add)	sp, sp, #1	/* IRQ		  1 */
+	W(nop)			/* FIQ		  0 */
+
+	mcr	p15, 0, r0, c7, c5, 6	/* BPIALL */
+	isb
+
+#ifdef CONFIG_THUMB2_KERNEL
+	/*
+	 * Yet another silly hack: Use VPIDR as a temp register.
+	 * Thumb2 is really a pain, as SP cannot be used with most
+	 * of the bitwise instructions. The vect_br macro ensures
+	 * things gets cleaned-up.
+	 */
+	mcr	p15, 4, r0, c0, c0, 0	/* VPIDR */
+	mov	r0, sp
+	and	r0, r0, #7
+	sub	sp, sp, r0
+	push	{r1, r2}
+	mov	r1, r0
+	mrc	p15, 4, r0, c0, c0, 0	/* VPIDR */
+	mrc	p15, 0, r2, c0, c0, 0	/* MIDR  */
+	mcr	p15, 4, r2, c0, c0, 0	/* VPIDR */
+#endif
+
+.macro vect_br val, targ
+ARM(	eor	sp, sp, #\val	)
+ARM(	tst	sp, #7		)
+ARM(	eorne	sp, sp, #\val	)
+
+THUMB(	cmp	r1, #\val	)
+THUMB(	popeq	{r1, r2}	)
+
+	beq	\targ
+.endm
+
+	vect_br	0, hyp_fiq
+	vect_br	1, hyp_irq
+	vect_br	2, hyp_hvc
+	vect_br	3, hyp_dabt
+	vect_br	4, hyp_pabt
+	vect_br	5, hyp_svc
+	vect_br	6, hyp_undef
+	vect_br	7, hyp_reset
+#endif
+
 .macro invalid_vector label, cause
 	.align
 \label:	mov	r0, #\cause
@@ -132,6 +192,14 @@ hyp_hvc:
 	beq	1f
 
 	push	{lr}
+	/*
+	 * Pushing r2 here is just a way of keeping the stack aligned to
+	 * 8 bytes on any path that can trigger a HYP exception. Here,
+	 * we may well be about to jump into the guest, and the guest
+	 * exit would otherwise be badly decoded by our fancy
+	 * "decode-exception-without-a-branch" code...
+	 */
+	push	{r2, lr}
 
 	mov	lr, r0
 	mov	r0, r1
@@ -142,6 +210,7 @@ THUMB(	orr	lr, #1)
 	blx	lr			@ Call the HYP function
 
 	pop	{lr}
+	pop	{r2, lr}
 1:	eret
 
 guest_trap:
-- 
2.17.1


* [PATCH 4.9 12/24] ARM: KVM: invalidate icache on guest exit for Cortex-A15
From: David Long @ 2018-10-31 13:57 UTC (permalink / raw)
  To: stable, Russell King - ARM Linux, Florian Fainelli,
	Tony Lindgren, Marc Zyngier, Mark Rutland
  Cc: Greg KH, Mark Brown

From: Marc Zyngier <marc.zyngier@arm.com>

Commit 0c47ac8cd157727e7a532d665d6fb1b5fd333977 upstream.

In order to avoid aliasing attacks against the branch predictor
on Cortex-A15, let's invalidate the BTB on guest exit, which can
only be done by invalidating the icache (with ACTLR[0] being set).

We use the same hack as for A12/A17 to perform the vector decoding.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Boot-tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
---
 arch/arm/include/asm/kvm_mmu.h |  5 +++++
 arch/arm/kvm/hyp/hyp-entry.S   | 24 ++++++++++++++++++++++++
 2 files changed, 29 insertions(+)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 625edef2a54f..3ad2c44f4137 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -257,6 +257,11 @@ static inline void *kvm_get_hyp_vector(void)
 		return kvm_ksym_ref(__kvm_hyp_vector_bp_inv);
 	}
 
+	case ARM_CPU_PART_CORTEX_A15:
+	{
+		extern char __kvm_hyp_vector_ic_inv[];
+		return kvm_ksym_ref(__kvm_hyp_vector_ic_inv);
+	}
 #endif
 	default:
 	{
diff --git a/arch/arm/kvm/hyp/hyp-entry.S b/arch/arm/kvm/hyp/hyp-entry.S
index de242d9598c6..582f50759d80 100644
--- a/arch/arm/kvm/hyp/hyp-entry.S
+++ b/arch/arm/kvm/hyp/hyp-entry.S
@@ -72,6 +72,28 @@ __kvm_hyp_vector:
 	W(b)	hyp_fiq
 
 #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+	.align 5
+__kvm_hyp_vector_ic_inv:
+	.global __kvm_hyp_vector_ic_inv
+
+	/*
+	 * We encode the exception entry in the bottom 3 bits of
+	 * SP, and we have to guarantee to be 8 bytes aligned.
+	 */
+	W(add)	sp, sp, #1	/* Reset 	  7 */
+	W(add)	sp, sp, #1	/* Undef	  6 */
+	W(add)	sp, sp, #1	/* Syscall	  5 */
+	W(add)	sp, sp, #1	/* Prefetch abort 4 */
+	W(add)	sp, sp, #1	/* Data abort	  3 */
+	W(add)	sp, sp, #1	/* HVC		  2 */
+	W(add)	sp, sp, #1	/* IRQ		  1 */
+	W(nop)			/* FIQ		  0 */
+
+	mcr	p15, 0, r0, c7, c5, 0	/* ICIALLU */
+	isb
+
+	b	decode_vectors
+
 	.align 5
 __kvm_hyp_vector_bp_inv:
 	.global __kvm_hyp_vector_bp_inv
@@ -92,6 +114,8 @@ __kvm_hyp_vector_bp_inv:
 	mcr	p15, 0, r0, c7, c5, 6	/* BPIALL */
 	isb
 
+decode_vectors:
+
 #ifdef CONFIG_THUMB2_KERNEL
 	/*
 	 * Yet another silly hack: Use VPIDR as a temp register.
-- 
2.17.1


* [PATCH 4.9 13/24] ARM: spectre-v2: KVM: invalidate icache on guest exit for Brahma B15
From: David Long @ 2018-10-31 13:57 UTC (permalink / raw)
  To: stable, Russell King - ARM Linux, Florian Fainelli,
	Tony Lindgren, Marc Zyngier, Mark Rutland
  Cc: Greg KH, Mark Brown

From: Russell King <rmk+kernel@armlinux.org.uk>

Commit 3c908e16396d130608e831b7fac4b167a2ede6ba upstream.

Include Brahma B15 in the Spectre v2 KVM workarounds.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Boot-tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Tony Lindgren <tony@atomide.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
---
 arch/arm/include/asm/kvm_mmu.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 3ad2c44f4137..d26395754b56 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -257,6 +257,7 @@ static inline void *kvm_get_hyp_vector(void)
 		return kvm_ksym_ref(__kvm_hyp_vector_bp_inv);
 	}
 
+	case ARM_CPU_PART_BRAHMA_B15:
 	case ARM_CPU_PART_CORTEX_A15:
 	{
 		extern char __kvm_hyp_vector_ic_inv[];
-- 
2.17.1


* [PATCH 4.9 14/24] ARM: KVM: Add SMCCC_ARCH_WORKAROUND_1 fast handling
From: David Long @ 2018-10-31 13:57 UTC (permalink / raw)
  To: stable, Russell King - ARM Linux, Florian Fainelli,
	Tony Lindgren, Marc Zyngier, Mark Rutland
  Cc: Greg KH, Mark Brown

From: Russell King <rmk+kernel@armlinux.org.uk>

Commit b800acfc70d9fb81fbd6df70f2cf5e20f70023d0 upstream.

We want SMCCC_ARCH_WORKAROUND_1 to be fast. As fast as possible.
So let's intercept it as early as we can by testing for the
function call number as soon as we've identified a HVC call
coming from the guest.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Boot-tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
---
 arch/arm/kvm/hyp/hyp-entry.S | 17 ++++++++++++++++-
 1 file changed, 16 insertions(+), 1 deletion(-)

diff --git a/arch/arm/kvm/hyp/hyp-entry.S b/arch/arm/kvm/hyp/hyp-entry.S
index 582f50759d80..a3c81bb7ce8b 100644
--- a/arch/arm/kvm/hyp/hyp-entry.S
+++ b/arch/arm/kvm/hyp/hyp-entry.S
@@ -16,6 +16,7 @@
  * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
  */
 
+#include <linux/arm-smccc.h>
 #include <linux/linkage.h>
 #include <asm/kvm_arm.h>
 #include <asm/kvm_asm.h>
@@ -202,7 +203,7 @@ hyp_hvc:
 	lsr     r2, r2, #16
 	and     r2, r2, #0xff
 	cmp     r2, #0
-	bne	guest_trap		@ Guest called HVC
+	bne	guest_hvc_trap		@ Guest called HVC
 
 	/*
 	 * Getting here means host called HVC, we shift parameters and branch
@@ -237,6 +238,20 @@ THUMB(	orr	lr, #1)
 	pop	{r2, lr}
 1:	eret
 
+guest_hvc_trap:
+	movw	r2, #:lower16:ARM_SMCCC_ARCH_WORKAROUND_1
+	movt	r2, #:upper16:ARM_SMCCC_ARCH_WORKAROUND_1
+	ldr	r0, [sp]		@ Guest's r0
+	teq	r0, r2
+	bne	guest_trap
+	add	sp, sp, #12
+	@ Returns:
+	@ r0 = 0
+	@ r1 = HSR value (perfectly predictable)
+	@ r2 = ARM_SMCCC_ARCH_WORKAROUND_1
+	mov	r0, #0
+	eret
+
 guest_trap:
 	load_vcpu r0			@ Load VCPU pointer to r0
 
-- 
2.17.1


* [PATCH 4.9 15/24] ARM: KVM: report support for SMCCC_ARCH_WORKAROUND_1
From: David Long @ 2018-10-31 13:57 UTC (permalink / raw)
  To: stable, Russell King - ARM Linux, Florian Fainelli,
	Tony Lindgren, Marc Zyngier, Mark Rutland
  Cc: Greg KH, Mark Brown

From: Russell King <rmk+kernel@armlinux.org.uk>

Commit add5609877c6785cc002c6ed7e008b1d61064439 upstream.

Report support for SMCCC_ARCH_WORKAROUND_1 to KVM guests for affected
CPUs.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Boot-tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
---
 arch/arm/include/asm/kvm_host.h | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 0833d8a1dbbb..2fda7e905754 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -21,6 +21,7 @@
 
 #include <linux/types.h>
 #include <linux/kvm_types.h>
+#include <asm/cputype.h>
 #include <asm/kvm.h>
 #include <asm/kvm_asm.h>
 #include <asm/kvm_mmio.h>
@@ -323,8 +324,17 @@ static inline int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
 
 static inline bool kvm_arm_harden_branch_predictor(void)
 {
-	/* No way to detect it yet, pretend it is not there. */
-	return false;
+	switch(read_cpuid_part()) {
+#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+	case ARM_CPU_PART_BRAHMA_B15:
+	case ARM_CPU_PART_CORTEX_A12:
+	case ARM_CPU_PART_CORTEX_A15:
+	case ARM_CPU_PART_CORTEX_A17:
+		return true;
+#endif
+	default:
+		return false;
+	}
 }
 
 #define KVM_SSBD_UNKNOWN		-1
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH 4.9 16/24] ARM: spectre-v1: add speculation barrier (csdb) macros
  2018-10-31 13:56 [PATCH 4.9 00/24] V4.9 backport of 32-bit arm spectre patches David Long
                   ` (14 preceding siblings ...)
  2018-10-31 13:57 ` [PATCH 4.9 15/24] ARM: KVM: report support for SMCCC_ARCH_WORKAROUND_1 David Long
@ 2018-10-31 13:57 ` David Long
  2018-10-31 13:57 ` [PATCH 4.9 17/24] ARM: spectre-v1: add array_index_mask_nospec() implementation David Long
                   ` (9 subsequent siblings)
  25 siblings, 0 replies; 40+ messages in thread
From: David Long @ 2018-10-31 13:57 UTC (permalink / raw)
  To: stable, Russell King - ARM Linux, Florian Fainelli,
	Tony Lindgren, Marc Zyngier, Mark Rutland
  Cc: Greg KH, Mark Brown

From: Russell King <rmk+kernel@armlinux.org.uk>

Commit a78d156587931a2c3b354534aa772febf6c9e855 upstream.

Add assembly and C macros for the new CSDB instruction.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Boot-tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
---
 arch/arm/include/asm/assembler.h |  8 ++++++++
 arch/arm/include/asm/barrier.h   | 13 +++++++++++++
 2 files changed, 21 insertions(+)

diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h
index 3aed4492c9a7..189f3b42baea 100644
--- a/arch/arm/include/asm/assembler.h
+++ b/arch/arm/include/asm/assembler.h
@@ -445,6 +445,14 @@ THUMB(	orr	\reg , \reg , #PSR_T_BIT	)
 	.size \name , . - \name
 	.endm
 
+	.macro	csdb
+#ifdef CONFIG_THUMB2_KERNEL
+	.inst.w	0xf3af8014
+#else
+	.inst	0xe320f014
+#endif
+	.endm
+
 	.macro check_uaccess, addr:req, size:req, limit:req, tmp:req, bad:req
 #ifndef CONFIG_CPU_USE_DOMAINS
 	adds	\tmp, \addr, #\size - 1
diff --git a/arch/arm/include/asm/barrier.h b/arch/arm/include/asm/barrier.h
index f5d698182d50..6f00dac6ad8e 100644
--- a/arch/arm/include/asm/barrier.h
+++ b/arch/arm/include/asm/barrier.h
@@ -16,6 +16,12 @@
 #define isb(option) __asm__ __volatile__ ("isb " #option : : : "memory")
 #define dsb(option) __asm__ __volatile__ ("dsb " #option : : : "memory")
 #define dmb(option) __asm__ __volatile__ ("dmb " #option : : : "memory")
+#ifdef CONFIG_THUMB2_KERNEL
+#define CSDB	".inst.w 0xf3af8014"
+#else
+#define CSDB	".inst	0xe320f014"
+#endif
+#define csdb() __asm__ __volatile__(CSDB : : : "memory")
 #elif defined(CONFIG_CPU_XSC3) || __LINUX_ARM_ARCH__ == 6
 #define isb(x) __asm__ __volatile__ ("mcr p15, 0, %0, c7, c5, 4" \
 				    : : "r" (0) : "memory")
@@ -36,6 +42,13 @@
 #define dmb(x) __asm__ __volatile__ ("" : : : "memory")
 #endif
 
+#ifndef CSDB
+#define CSDB
+#endif
+#ifndef csdb
+#define csdb()
+#endif
+
 #ifdef CONFIG_ARM_HEAVY_MB
 extern void (*soc_mb)(void);
 extern void arm_heavy_mb(void);
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH 4.9 17/24] ARM: spectre-v1: add array_index_mask_nospec() implementation
  2018-10-31 13:56 [PATCH 4.9 00/24] V4.9 backport of 32-bit arm spectre patches David Long
                   ` (15 preceding siblings ...)
  2018-10-31 13:57 ` [PATCH 4.9 16/24] ARM: spectre-v1: add speculation barrier (csdb) macros David Long
@ 2018-10-31 13:57 ` David Long
  2018-10-31 13:57 ` [PATCH 4.9 18/24] ARM: spectre-v1: fix syscall entry David Long
                   ` (8 subsequent siblings)
  25 siblings, 0 replies; 40+ messages in thread
From: David Long @ 2018-10-31 13:57 UTC (permalink / raw)
  To: stable, Russell King - ARM Linux, Florian Fainelli,
	Tony Lindgren, Marc Zyngier, Mark Rutland
  Cc: Greg KH, Mark Brown

From: Russell King <rmk+kernel@armlinux.org.uk>

Commit 1d4238c56f9816ce0f9c8dbe42d7f2ad81cb6613 upstream.

Add an implementation of the array_index_mask_nospec() function for
mitigating Spectre variant 1 throughout the kernel.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Boot-tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
---
 arch/arm/include/asm/barrier.h | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/arch/arm/include/asm/barrier.h b/arch/arm/include/asm/barrier.h
index 6f00dac6ad8e..513e03d138ea 100644
--- a/arch/arm/include/asm/barrier.h
+++ b/arch/arm/include/asm/barrier.h
@@ -75,6 +75,25 @@ extern void arm_heavy_mb(void);
 #define __smp_rmb()	__smp_mb()
 #define __smp_wmb()	dmb(ishst)
 
+#ifdef CONFIG_CPU_SPECTRE
+static inline unsigned long array_index_mask_nospec(unsigned long idx,
+						    unsigned long sz)
+{
+	unsigned long mask;
+
+	asm volatile(
+		"cmp	%1, %2\n"
+	"	sbc	%0, %1, %1\n"
+	CSDB
+	: "=r" (mask)
+	: "r" (idx), "Ir" (sz)
+	: "cc");
+
+	return mask;
+}
+#define array_index_mask_nospec array_index_mask_nospec
+#endif
+
 #include <asm-generic/barrier.h>
 
 #endif /* !__ASSEMBLY__ */
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH 4.9 18/24] ARM: spectre-v1: fix syscall entry
  2018-10-31 13:56 [PATCH 4.9 00/24] V4.9 backport of 32-bit arm spectre patches David Long
                   ` (16 preceding siblings ...)
  2018-10-31 13:57 ` [PATCH 4.9 17/24] ARM: spectre-v1: add array_index_mask_nospec() implementation David Long
@ 2018-10-31 13:57 ` David Long
  2018-10-31 13:57 ` [PATCH 4.9 19/24] ARM: signal: copy registers using __copy_from_user() David Long
                   ` (7 subsequent siblings)
  25 siblings, 0 replies; 40+ messages in thread
From: David Long @ 2018-10-31 13:57 UTC (permalink / raw)
  To: stable, Russell King - ARM Linux, Florian Fainelli,
	Tony Lindgren, Marc Zyngier, Mark Rutland
  Cc: Greg KH, Mark Brown

From: Russell King <rmk+kernel@armlinux.org.uk>

Commit 10573ae547c85b2c61417ff1a106cffbfceada35 upstream.

Prevent speculation at the syscall table decoding by clamping the index
used to zero on invalid system call numbers, and using the csdb
speculative barrier.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Boot-tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
---
 arch/arm/kernel/entry-common.S | 18 +++++++-----------
 arch/arm/kernel/entry-header.S | 25 +++++++++++++++++++++++++
 2 files changed, 32 insertions(+), 11 deletions(-)

diff --git a/arch/arm/kernel/entry-common.S b/arch/arm/kernel/entry-common.S
index 10c3283d6c19..56be67ecf0fa 100644
--- a/arch/arm/kernel/entry-common.S
+++ b/arch/arm/kernel/entry-common.S
@@ -223,9 +223,7 @@ local_restart:
 	tst	r10, #_TIF_SYSCALL_WORK		@ are we tracing syscalls?
 	bne	__sys_trace
 
-	cmp	scno, #NR_syscalls		@ check upper syscall limit
-	badr	lr, ret_fast_syscall		@ return address
-	ldrcc	pc, [tbl, scno, lsl #2]		@ call sys_* routine
+	invoke_syscall tbl, scno, r10, ret_fast_syscall
 
 	add	r1, sp, #S_OFF
 2:	cmp	scno, #(__ARM_NR_BASE - __NR_SYSCALL_BASE)
@@ -258,14 +256,8 @@ __sys_trace:
 	mov	r1, scno
 	add	r0, sp, #S_OFF
 	bl	syscall_trace_enter
-
-	badr	lr, __sys_trace_return		@ return address
-	mov	scno, r0			@ syscall number (possibly new)
-	add	r1, sp, #S_R0 + S_OFF		@ pointer to regs
-	cmp	scno, #NR_syscalls		@ check upper syscall limit
-	ldmccia	r1, {r0 - r6}			@ have to reload r0 - r6
-	stmccia	sp, {r4, r5}			@ and update the stack args
-	ldrcc	pc, [tbl, scno, lsl #2]		@ call sys_* routine
+	mov	scno, r0
+	invoke_syscall tbl, scno, r10, __sys_trace_return, reload=1
 	cmp	scno, #-1			@ skip the syscall?
 	bne	2b
 	add	sp, sp, #S_OFF			@ restore stack
@@ -317,6 +309,10 @@ sys_syscall:
 		bic	scno, r0, #__NR_OABI_SYSCALL_BASE
 		cmp	scno, #__NR_syscall - __NR_SYSCALL_BASE
 		cmpne	scno, #NR_syscalls	@ check range
+#ifdef CONFIG_CPU_SPECTRE
+		movhs	scno, #0
+		csdb
+#endif
 		stmloia	sp, {r5, r6}		@ shuffle args
 		movlo	r0, r1
 		movlo	r1, r2
diff --git a/arch/arm/kernel/entry-header.S b/arch/arm/kernel/entry-header.S
index e056c9a9aa9d..fa7c6e5c17e7 100644
--- a/arch/arm/kernel/entry-header.S
+++ b/arch/arm/kernel/entry-header.S
@@ -377,6 +377,31 @@
 #endif
 	.endm
 
+	.macro	invoke_syscall, table, nr, tmp, ret, reload=0
+#ifdef CONFIG_CPU_SPECTRE
+	mov	\tmp, \nr
+	cmp	\tmp, #NR_syscalls		@ check upper syscall limit
+	movcs	\tmp, #0
+	csdb
+	badr	lr, \ret			@ return address
+	.if	\reload
+	add	r1, sp, #S_R0 + S_OFF		@ pointer to regs
+	ldmccia	r1, {r0 - r6}			@ reload r0-r6
+	stmccia	sp, {r4, r5}			@ update stack arguments
+	.endif
+	ldrcc	pc, [\table, \tmp, lsl #2]	@ call sys_* routine
+#else
+	cmp	\nr, #NR_syscalls		@ check upper syscall limit
+	badr	lr, \ret			@ return address
+	.if	\reload
+	add	r1, sp, #S_R0 + S_OFF		@ pointer to regs
+	ldmccia	r1, {r0 - r6}			@ reload r0-r6
+	stmccia	sp, {r4, r5}			@ update stack arguments
+	.endif
+	ldrcc	pc, [\table, \nr, lsl #2]	@ call sys_* routine
+#endif
+	.endm
+
 /*
  * These are the registers used in the syscall handler, and allow us to
  * have in theory up to 7 arguments to a function - r0 to r6.
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH 4.9 19/24] ARM: signal: copy registers using __copy_from_user()
  2018-10-31 13:56 [PATCH 4.9 00/24] V4.9 backport of 32-bit arm spectre patches David Long
                   ` (17 preceding siblings ...)
  2018-10-31 13:57 ` [PATCH 4.9 18/24] ARM: spectre-v1: fix syscall entry David Long
@ 2018-10-31 13:57 ` David Long
  2018-10-31 13:57 ` [PATCH 4.9 20/24] ARM: vfp: use __copy_from_user() when restoring VFP state David Long
                   ` (6 subsequent siblings)
  25 siblings, 0 replies; 40+ messages in thread
From: David Long @ 2018-10-31 13:57 UTC (permalink / raw)
  To: stable, Russell King - ARM Linux, Florian Fainelli,
	Tony Lindgren, Marc Zyngier, Mark Rutland
  Cc: Greg KH, Mark Brown

From: Russell King <rmk+kernel@armlinux.org.uk>

Commit c32cd419d6650e42b9cdebb83c672ec945e6bd7e upstream.

__get_user_error() is used as a fast accessor to make copying structure
members in the signal handling path as efficient as possible.  However,
with software PAN and the recent Spectre variant 1, the efficiency is
reduced as these are no longer fast accessors.

In the case of software PAN, it has to switch the domain register around
each access, and with Spectre variant 1, it would have to repeat the
access_ok() check for each access.

It becomes much more efficient to use __copy_from_user() instead, so
let's use this for the ARM integer registers.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David A. Long <dave.long@linaro.org>
---
 arch/arm/kernel/signal.c | 38 +++++++++++++++++++++-----------------
 1 file changed, 21 insertions(+), 17 deletions(-)

diff --git a/arch/arm/kernel/signal.c b/arch/arm/kernel/signal.c
index 7b8f2141427b..a592bc0287f8 100644
--- a/arch/arm/kernel/signal.c
+++ b/arch/arm/kernel/signal.c
@@ -141,6 +141,7 @@ struct rt_sigframe {
 
 static int restore_sigframe(struct pt_regs *regs, struct sigframe __user *sf)
 {
+	struct sigcontext context;
 	struct aux_sigframe __user *aux;
 	sigset_t set;
 	int err;
@@ -149,23 +150,26 @@ static int restore_sigframe(struct pt_regs *regs, struct sigframe __user *sf)
 	if (err == 0)
 		set_current_blocked(&set);
 
-	__get_user_error(regs->ARM_r0, &sf->uc.uc_mcontext.arm_r0, err);
-	__get_user_error(regs->ARM_r1, &sf->uc.uc_mcontext.arm_r1, err);
-	__get_user_error(regs->ARM_r2, &sf->uc.uc_mcontext.arm_r2, err);
-	__get_user_error(regs->ARM_r3, &sf->uc.uc_mcontext.arm_r3, err);
-	__get_user_error(regs->ARM_r4, &sf->uc.uc_mcontext.arm_r4, err);
-	__get_user_error(regs->ARM_r5, &sf->uc.uc_mcontext.arm_r5, err);
-	__get_user_error(regs->ARM_r6, &sf->uc.uc_mcontext.arm_r6, err);
-	__get_user_error(regs->ARM_r7, &sf->uc.uc_mcontext.arm_r7, err);
-	__get_user_error(regs->ARM_r8, &sf->uc.uc_mcontext.arm_r8, err);
-	__get_user_error(regs->ARM_r9, &sf->uc.uc_mcontext.arm_r9, err);
-	__get_user_error(regs->ARM_r10, &sf->uc.uc_mcontext.arm_r10, err);
-	__get_user_error(regs->ARM_fp, &sf->uc.uc_mcontext.arm_fp, err);
-	__get_user_error(regs->ARM_ip, &sf->uc.uc_mcontext.arm_ip, err);
-	__get_user_error(regs->ARM_sp, &sf->uc.uc_mcontext.arm_sp, err);
-	__get_user_error(regs->ARM_lr, &sf->uc.uc_mcontext.arm_lr, err);
-	__get_user_error(regs->ARM_pc, &sf->uc.uc_mcontext.arm_pc, err);
-	__get_user_error(regs->ARM_cpsr, &sf->uc.uc_mcontext.arm_cpsr, err);
+	err |= __copy_from_user(&context, &sf->uc.uc_mcontext, sizeof(context));
+	if (err == 0) {
+		regs->ARM_r0 = context.arm_r0;
+		regs->ARM_r1 = context.arm_r1;
+		regs->ARM_r2 = context.arm_r2;
+		regs->ARM_r3 = context.arm_r3;
+		regs->ARM_r4 = context.arm_r4;
+		regs->ARM_r5 = context.arm_r5;
+		regs->ARM_r6 = context.arm_r6;
+		regs->ARM_r7 = context.arm_r7;
+		regs->ARM_r8 = context.arm_r8;
+		regs->ARM_r9 = context.arm_r9;
+		regs->ARM_r10 = context.arm_r10;
+		regs->ARM_fp = context.arm_fp;
+		regs->ARM_ip = context.arm_ip;
+		regs->ARM_sp = context.arm_sp;
+		regs->ARM_lr = context.arm_lr;
+		regs->ARM_pc = context.arm_pc;
+		regs->ARM_cpsr = context.arm_cpsr;
+	}
 
 	err |= !valid_user_regs(regs);
 
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH 4.9 20/24] ARM: vfp: use __copy_from_user() when restoring VFP state
  2018-10-31 13:56 [PATCH 4.9 00/24] V4.9 backport of 32-bit arm spectre patches David Long
                   ` (18 preceding siblings ...)
  2018-10-31 13:57 ` [PATCH 4.9 19/24] ARM: signal: copy registers using __copy_from_user() David Long
@ 2018-10-31 13:57 ` David Long
  2018-10-31 13:57 ` [PATCH 4.9 21/24] ARM: oabi-compat: copy semops using __copy_from_user() David Long
                   ` (5 subsequent siblings)
  25 siblings, 0 replies; 40+ messages in thread
From: David Long @ 2018-10-31 13:57 UTC (permalink / raw)
  To: stable, Russell King - ARM Linux, Florian Fainelli,
	Tony Lindgren, Marc Zyngier, Mark Rutland
  Cc: Greg KH, Mark Brown

From: Russell King <rmk+kernel@armlinux.org.uk>

Commit 42019fc50dfadb219f9e6ddf4c354f3837057d80 upstream.

__get_user_error() is used as a fast accessor to make copying structure
members in the signal handling path as efficient as possible.  However,
with software PAN and the recent Spectre variant 1, the efficiency is
reduced as these are no longer fast accessors.

In the case of software PAN, it has to switch the domain register around
each access, and with Spectre variant 1, it would have to repeat the
access_ok() check for each access.

Use __copy_from_user() rather than __get_user_error() for individual
members when restoring VFP state.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David A. Long <dave.long@linaro.org>
---
 arch/arm/include/asm/thread_info.h |  4 ++--
 arch/arm/kernel/signal.c           | 17 ++++++++---------
 arch/arm/vfp/vfpmodule.c           | 17 +++++++----------
 3 files changed, 17 insertions(+), 21 deletions(-)

diff --git a/arch/arm/include/asm/thread_info.h b/arch/arm/include/asm/thread_info.h
index 776757d1604a..57d2ad9c75ca 100644
--- a/arch/arm/include/asm/thread_info.h
+++ b/arch/arm/include/asm/thread_info.h
@@ -126,8 +126,8 @@ struct user_vfp_exc;
 
 extern int vfp_preserve_user_clear_hwstate(struct user_vfp __user *,
 					   struct user_vfp_exc __user *);
-extern int vfp_restore_user_hwstate(struct user_vfp __user *,
-				    struct user_vfp_exc __user *);
+extern int vfp_restore_user_hwstate(struct user_vfp *,
+				    struct user_vfp_exc *);
 #endif
 
 /*
diff --git a/arch/arm/kernel/signal.c b/arch/arm/kernel/signal.c
index a592bc0287f8..6bee5c9b1133 100644
--- a/arch/arm/kernel/signal.c
+++ b/arch/arm/kernel/signal.c
@@ -107,21 +107,20 @@ static int preserve_vfp_context(struct vfp_sigframe __user *frame)
 	return vfp_preserve_user_clear_hwstate(&frame->ufp, &frame->ufp_exc);
 }
 
-static int restore_vfp_context(struct vfp_sigframe __user *frame)
+static int restore_vfp_context(struct vfp_sigframe __user *auxp)
 {
-	unsigned long magic;
-	unsigned long size;
-	int err = 0;
+	struct vfp_sigframe frame;
+	int err;
 
-	__get_user_error(magic, &frame->magic, err);
-	__get_user_error(size, &frame->size, err);
+	err = __copy_from_user(&frame, (char __user *) auxp, sizeof(frame));
 
 	if (err)
-		return -EFAULT;
-	if (magic != VFP_MAGIC || size != VFP_STORAGE_SIZE)
+		return err;
+
+	if (frame.magic != VFP_MAGIC || frame.size != VFP_STORAGE_SIZE)
 		return -EINVAL;
 
-	return vfp_restore_user_hwstate(&frame->ufp, &frame->ufp_exc);
+	return vfp_restore_user_hwstate(&frame.ufp, &frame.ufp_exc);
 }
 
 #endif
diff --git a/arch/arm/vfp/vfpmodule.c b/arch/arm/vfp/vfpmodule.c
index 5629d7580973..8e5e97989fda 100644
--- a/arch/arm/vfp/vfpmodule.c
+++ b/arch/arm/vfp/vfpmodule.c
@@ -597,13 +597,11 @@ int vfp_preserve_user_clear_hwstate(struct user_vfp __user *ufp,
 }
 
 /* Sanitise and restore the current VFP state from the provided structures. */
-int vfp_restore_user_hwstate(struct user_vfp __user *ufp,
-			     struct user_vfp_exc __user *ufp_exc)
+int vfp_restore_user_hwstate(struct user_vfp *ufp, struct user_vfp_exc *ufp_exc)
 {
 	struct thread_info *thread = current_thread_info();
 	struct vfp_hard_struct *hwstate = &thread->vfpstate.hard;
 	unsigned long fpexc;
-	int err = 0;
 
 	/* Disable VFP to avoid corrupting the new thread state. */
 	vfp_flush_hwstate(thread);
@@ -612,17 +610,16 @@ int vfp_restore_user_hwstate(struct user_vfp __user *ufp,
 	 * Copy the floating point registers. There can be unused
 	 * registers see asm/hwcap.h for details.
 	 */
-	err |= __copy_from_user(&hwstate->fpregs, &ufp->fpregs,
-				sizeof(hwstate->fpregs));
+	memcpy(&hwstate->fpregs, &ufp->fpregs, sizeof(hwstate->fpregs));
 	/*
 	 * Copy the status and control register.
 	 */
-	__get_user_error(hwstate->fpscr, &ufp->fpscr, err);
+	hwstate->fpscr = ufp->fpscr;
 
 	/*
 	 * Sanitise and restore the exception registers.
 	 */
-	__get_user_error(fpexc, &ufp_exc->fpexc, err);
+	fpexc = ufp_exc->fpexc;
 
 	/* Ensure the VFP is enabled. */
 	fpexc |= FPEXC_EN;
@@ -631,10 +628,10 @@ int vfp_restore_user_hwstate(struct user_vfp __user *ufp,
 	fpexc &= ~(FPEXC_EX | FPEXC_FP2V);
 	hwstate->fpexc = fpexc;
 
-	__get_user_error(hwstate->fpinst, &ufp_exc->fpinst, err);
-	__get_user_error(hwstate->fpinst2, &ufp_exc->fpinst2, err);
+	hwstate->fpinst = ufp_exc->fpinst;
+	hwstate->fpinst2 = ufp_exc->fpinst2;
 
-	return err ? -EFAULT : 0;
+	return 0;
 }
 
 /*
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH 4.9 21/24] ARM: oabi-compat: copy semops using __copy_from_user()
  2018-10-31 13:56 [PATCH 4.9 00/24] V4.9 backport of 32-bit arm spectre patches David Long
                   ` (19 preceding siblings ...)
  2018-10-31 13:57 ` [PATCH 4.9 20/24] ARM: vfp: use __copy_from_user() when restoring VFP state David Long
@ 2018-10-31 13:57 ` David Long
  2018-10-31 13:57 ` [PATCH 4.9 22/24] ARM: use __inttype() in get_user() David Long
                   ` (4 subsequent siblings)
  25 siblings, 0 replies; 40+ messages in thread
From: David Long @ 2018-10-31 13:57 UTC (permalink / raw)
  To: stable, Russell King - ARM Linux, Florian Fainelli,
	Tony Lindgren, Marc Zyngier, Mark Rutland
  Cc: Greg KH, Mark Brown

From: Russell King <rmk+kernel@armlinux.org.uk>

Commit 8c8484a1c18e3231648f5ba7cc5ffb7fd70b3ca4 upstream.

__get_user_error() is used as a fast accessor to make copying structure
members as efficient as possible.  However, with software PAN and the
recent Spectre variant 1, the efficiency is reduced as these are no
longer fast accessors.

In the case of software PAN, it has to switch the domain register around
each access, and with Spectre variant 1, it would have to repeat the
access_ok() check for each access.

Rather than using __get_user_error() to copy each semops element member,
copy each semops element in full using __copy_from_user().

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David A. Long <dave.long@linaro.org>
---
 arch/arm/kernel/sys_oabi-compat.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/arch/arm/kernel/sys_oabi-compat.c b/arch/arm/kernel/sys_oabi-compat.c
index 5f221acd21ae..640748e27035 100644
--- a/arch/arm/kernel/sys_oabi-compat.c
+++ b/arch/arm/kernel/sys_oabi-compat.c
@@ -328,9 +328,11 @@ asmlinkage long sys_oabi_semtimedop(int semid,
 		return -ENOMEM;
 	err = 0;
 	for (i = 0; i < nsops; i++) {
-		__get_user_error(sops[i].sem_num, &tsops->sem_num, err);
-		__get_user_error(sops[i].sem_op,  &tsops->sem_op,  err);
-		__get_user_error(sops[i].sem_flg, &tsops->sem_flg, err);
+		struct oabi_sembuf osb;
+		err |= __copy_from_user(&osb, tsops, sizeof(osb));
+		sops[i].sem_num = osb.sem_num;
+		sops[i].sem_op = osb.sem_op;
+		sops[i].sem_flg = osb.sem_flg;
 		tsops++;
 	}
 	if (timeout) {
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH 4.9 22/24] ARM: use __inttype() in get_user()
  2018-10-31 13:56 [PATCH 4.9 00/24] V4.9 backport of 32-bit arm spectre patches David Long
                   ` (20 preceding siblings ...)
  2018-10-31 13:57 ` [PATCH 4.9 21/24] ARM: oabi-compat: copy semops using __copy_from_user() David Long
@ 2018-10-31 13:57 ` David Long
  2018-10-31 13:57 ` [PATCH 4.9 23/24] ARM: spectre-v1: use get_user() for __get_user() David Long
                   ` (3 subsequent siblings)
  25 siblings, 0 replies; 40+ messages in thread
From: David Long @ 2018-10-31 13:57 UTC (permalink / raw)
  To: stable, Russell King - ARM Linux, Florian Fainelli,
	Tony Lindgren, Marc Zyngier, Mark Rutland
  Cc: Greg KH, Mark Brown

From: Russell King <rmk+kernel@armlinux.org.uk>

Commit d09fbb327d670737ab40fd8bbb0765ae06b8b739 upstream.

Borrow the x86 implementation of __inttype() to use in get_user() to
select an integer type suitable to temporarily hold the result value.
This is necessary to avoid propagating the volatile nature of the
result argument, which can cause the following warning:

lib/iov_iter.c:413:5: warning: optimization may eliminate reads and/or writes to register variables [-Wvolatile-register-var]

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David A. Long <dave.long@linaro.org>
---
 arch/arm/include/asm/uaccess.h | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h
index b7e0125c0bbf..4a61f36c7397 100644
--- a/arch/arm/include/asm/uaccess.h
+++ b/arch/arm/include/asm/uaccess.h
@@ -114,6 +114,13 @@ static inline void set_fs(mm_segment_t fs)
 		: "cc"); \
 	flag; })
 
+/*
+ * This is a type: either unsigned long, if the argument fits into
+ * that type, or otherwise unsigned long long.
+ */
+#define __inttype(x) \
+	__typeof__(__builtin_choose_expr(sizeof(x) > sizeof(0UL), 0ULL, 0UL))
+
 /*
  * Single-value transfer routines.  They automatically use the right
  * size if we just have the right pointer type.  Note that the functions
@@ -183,7 +190,7 @@ extern int __get_user_64t_4(void *);
 	({								\
 		unsigned long __limit = current_thread_info()->addr_limit - 1; \
 		register const typeof(*(p)) __user *__p asm("r0") = (p);\
-		register typeof(x) __r2 asm("r2");			\
+		register __inttype(x) __r2 asm("r2");			\
 		register unsigned long __l asm("r1") = __limit;		\
 		register int __e asm("r0");				\
 		unsigned int __ua_flags = uaccess_save_and_enable();	\
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH 4.9 23/24] ARM: spectre-v1: use get_user() for __get_user()
  2018-10-31 13:56 [PATCH 4.9 00/24] V4.9 backport of 32-bit arm spectre patches David Long
                   ` (21 preceding siblings ...)
  2018-10-31 13:57 ` [PATCH 4.9 22/24] ARM: use __inttype() in get_user() David Long
@ 2018-10-31 13:57 ` David Long
  2018-10-31 13:57 ` [PATCH 4.9 24/24] ARM: spectre-v1: mitigate user accesses David Long
                   ` (2 subsequent siblings)
  25 siblings, 0 replies; 40+ messages in thread
From: David Long @ 2018-10-31 13:57 UTC (permalink / raw)
  To: stable, Russell King - ARM Linux, Florian Fainelli,
	Tony Lindgren, Marc Zyngier, Mark Rutland
  Cc: Greg KH, Mark Brown

From: Russell King <rmk+kernel@armlinux.org.uk>

Commit b1cd0a14806321721aae45f5446ed83a3647c914 upstream.

Fixing __get_user() for Spectre variant 1 is not sane: we would have to
add address space bounds checking in order to validate that the location
should be accessed, and then zero the address if found to be invalid.

Since __get_user() is supposed to avoid the bounds check, and this is
exactly what get_user() does, there's no point having two different
implementations that are doing the same thing.  So, when the Spectre
workarounds are required, make __get_user() an alias of get_user().

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David A. Long <dave.long@linaro.org>
---
 arch/arm/include/asm/uaccess.h | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h
index 4a61f36c7397..7b17460127fd 100644
--- a/arch/arm/include/asm/uaccess.h
+++ b/arch/arm/include/asm/uaccess.h
@@ -280,6 +280,16 @@ static inline void set_fs(mm_segment_t fs)
 #define user_addr_max() \
 	(segment_eq(get_fs(), KERNEL_DS) ? ~0UL : get_fs())
 
+#ifdef CONFIG_CPU_SPECTRE
+/*
+ * When mitigating Spectre variant 1, it is not worth fixing the non-
+ * verifying accessors, because we need to add verification of the
+ * address space there.  Force these to use the standard get_user()
+ * version instead.
+ */
+#define __get_user(x, ptr) get_user(x, ptr)
+#else
+
 /*
  * The "__xxx" versions of the user access functions do not verify the
  * address space - it must have been done previously with a separate
@@ -296,12 +306,6 @@ static inline void set_fs(mm_segment_t fs)
 	__gu_err;							\
 })
 
-#define __get_user_error(x, ptr, err)					\
-({									\
-	__get_user_err((x), (ptr), err);				\
-	(void) 0;							\
-})
-
 #define __get_user_err(x, ptr, err)					\
 do {									\
 	unsigned long __gu_addr = (unsigned long)(ptr);			\
@@ -361,6 +365,7 @@ do {									\
 
 #define __get_user_asm_word(x, addr, err)			\
 	__get_user_asm(x, addr, err, ldr)
+#endif
 
 
 #define __put_user_switch(x, ptr, __err, __fn)				\
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH 4.9 24/24] ARM: spectre-v1: mitigate user accesses
  2018-10-31 13:56 [PATCH 4.9 00/24] V4.9 backport of 32-bit arm spectre patches David Long
                   ` (22 preceding siblings ...)
  2018-10-31 13:57 ` [PATCH 4.9 23/24] ARM: spectre-v1: use get_user() for __get_user() David Long
@ 2018-10-31 13:57 ` David Long
  2018-10-31 21:23 ` [PATCH 4.9 00/24] V4.9 backport of 32-bit arm spectre patches Florian Fainelli
  2018-11-02  1:18 ` David Long
  25 siblings, 0 replies; 40+ messages in thread
From: David Long @ 2018-10-31 13:57 UTC (permalink / raw)
  To: stable, Russell King - ARM Linux, Florian Fainelli,
	Tony Lindgren, Marc Zyngier, Mark Rutland
  Cc: Greg KH, Mark Brown

From: Russell King <rmk+kernel@armlinux.org.uk>

Commit a3c0f84765bb429ba0fd23de1c57b5e1591c9389 upstream.

Spectre variant 1 attacks are about this sequence of pseudo-code:

	index = load(user-manipulated pointer);
	access(base + index * stride);

In order for the cache side-channel to work, the access() must be made
to memory for which userspace can detect whether cache lines have been
loaded.  On 32-bit ARM, this must be either user accessible memory, or
a kernel mapping of that same user accessible memory.

The problem occurs when the load() speculatively loads privileged data,
and the subsequent access() is made to user accessible memory.

Any load() which makes use of a user-manipulated pointer is a potential
problem if the data it has loaded is used in a subsequent access.  This
also applies to the access() if the data loaded by that access is used
by a subsequent access.

Harden the get_user() accessors against Spectre attacks by forcing out
of bounds addresses to a NULL pointer.  This prevents get_user() being
used as the load() step above.  As a side effect, put_user() will also
be affected even though it isn't implicated.

Also harden copy_from_user() by redoing the bounds check within the
arm_copy_from_user() code, and NULLing the pointer if out of bounds.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David A. Long <dave.long@linaro.org>
---
 arch/arm/include/asm/assembler.h | 4 ++++
 arch/arm/lib/copy_from_user.S    | 9 +++++++++
 2 files changed, 13 insertions(+)

diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h
index 189f3b42baea..e616f61f859d 100644
--- a/arch/arm/include/asm/assembler.h
+++ b/arch/arm/include/asm/assembler.h
@@ -458,6 +458,10 @@ THUMB(	orr	\reg , \reg , #PSR_T_BIT	)
 	adds	\tmp, \addr, #\size - 1
 	sbcccs	\tmp, \tmp, \limit
 	bcs	\bad
+#ifdef CONFIG_CPU_SPECTRE
+	movcs	\addr, #0
+	csdb
+#endif
 #endif
 	.endm
 
diff --git a/arch/arm/lib/copy_from_user.S b/arch/arm/lib/copy_from_user.S
index 7a4b06049001..a826df3d3814 100644
--- a/arch/arm/lib/copy_from_user.S
+++ b/arch/arm/lib/copy_from_user.S
@@ -90,6 +90,15 @@
 	.text
 
 ENTRY(arm_copy_from_user)
+#ifdef CONFIG_CPU_SPECTRE
+	get_thread_info r3
+	ldr	r3, [r3, #TI_ADDR_LIMIT]
+	adds	ip, r1, r2	@ ip=addr+size
+	sub	r3, r3, #1	@ addr_limit - 1
+	cmpcc	ip, r3		@ if (addr+size > addr_limit - 1)
+	movcs	r1, #0		@ addr = NULL
+	csdb
+#endif
 
 #include "copy_template.S"
 
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 40+ messages in thread

* Re: [PATCH 4.9 00/24] V4.9 backport of 32-bit arm spectre patches
  2018-10-31 13:56 [PATCH 4.9 00/24] V4.9 backport of 32-bit arm spectre patches David Long
                   ` (23 preceding siblings ...)
  2018-10-31 13:57 ` [PATCH 4.9 24/24] ARM: spectre-v1: mitigate user accesses David Long
@ 2018-10-31 21:23 ` Florian Fainelli
  2018-11-02  1:18 ` David Long
  25 siblings, 0 replies; 40+ messages in thread
From: Florian Fainelli @ 2018-10-31 21:23 UTC (permalink / raw)
  To: David Long, stable, Russell King - ARM Linux, Tony Lindgren,
	Marc Zyngier, Mark Rutland
  Cc: Greg KH, Mark Brown

Hi David,

On 10/31/18 6:56 AM, David Long wrote:
> From: "David A. Long" <dave.long@linaro.org>
> 
> V4.9 backport of spectre patches from Russell M. King's spectre branch.
> Patches not yet in upstream are excluded.

Thanks for submitting those patches!

Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>

Boot tested on a Brahma-B15 based platform and did not see any
regressions or issues, and saw about the same hackbench performance
before and after.

Test:
#!/bin/sh
for i in $(seq 0 9)
do
	hackbench 13 process 10000
done

before:

min: 140.800
max: 142.571
avg: 141.7233

after:

min: 140.004
max: 141.600
avg: 141.0242


> 
> Marc Zyngier (2):
>   ARM: KVM: invalidate BTB on guest exit for Cortex-A12/A17
>   ARM: KVM: invalidate icache on guest exit for Cortex-A15
> 
> Russell King (22):
>   ARM: add more CPU part numbers for Cortex and Brahma B15 CPUs
>   ARM: bugs: prepare processor bug infrastructure
>   ARM: bugs: hook processor bug checking into SMP and suspend paths
>   ARM: bugs: add support for per-processor bug checking
>   ARM: spectre: add Kconfig symbol for CPUs vulnerable to Spectre
>   ARM: spectre-v2: harden branch predictor on context switches
>   ARM: spectre-v2: add Cortex A8 and A15 validation of the IBE bit
>   ARM: spectre-v2: harden user aborts in kernel space
>   ARM: spectre-v2: add firmware based hardening
>   ARM: spectre-v2: warn about incorrect context switching functions
>   ARM: spectre-v2: KVM: invalidate icache on guest exit for Brahma B15
>   ARM: KVM: Add SMCCC_ARCH_WORKAROUND_1 fast handling
>   ARM: KVM: report support for SMCCC_ARCH_WORKAROUND_1
>   ARM: spectre-v1: add speculation barrier (csdb) macros
>   ARM: spectre-v1: add array_index_mask_nospec() implementation
>   ARM: spectre-v1: fix syscall entry
>   ARM: signal: copy registers using __copy_from_user()
>   ARM: vfp: use __copy_from_user() when restoring VFP state
>   ARM: oabi-compat: copy semops using __copy_from_user()
>   ARM: use __inttype() in get_user()
>   ARM: spectre-v1: use get_user() for __get_user()
>   ARM: spectre-v1: mitigate user accesses
> 
>  arch/arm/include/asm/assembler.h   |  12 ++
>  arch/arm/include/asm/barrier.h     |  32 ++++++
>  arch/arm/include/asm/bugs.h        |   6 +-
>  arch/arm/include/asm/cp15.h        |   3 +
>  arch/arm/include/asm/cputype.h     |   8 ++
>  arch/arm/include/asm/kvm_asm.h     |   2 -
>  arch/arm/include/asm/kvm_host.h    |  14 ++-
>  arch/arm/include/asm/kvm_mmu.h     |  23 +++-
>  arch/arm/include/asm/proc-fns.h    |   4 +
>  arch/arm/include/asm/system_misc.h |  15 +++
>  arch/arm/include/asm/thread_info.h |   4 +-
>  arch/arm/include/asm/uaccess.h     |  26 +++--
>  arch/arm/kernel/Makefile           |   1 +
>  arch/arm/kernel/bugs.c             |  18 +++
>  arch/arm/kernel/entry-common.S     |  18 ++-
>  arch/arm/kernel/entry-header.S     |  25 +++++
>  arch/arm/kernel/signal.c           |  55 ++++-----
>  arch/arm/kernel/smp.c              |   4 +
>  arch/arm/kernel/suspend.c          |   2 +
>  arch/arm/kernel/sys_oabi-compat.c  |   8 +-
>  arch/arm/kvm/hyp/hyp-entry.S       | 110 +++++++++++++++++-
>  arch/arm/lib/copy_from_user.S      |   9 ++
>  arch/arm/mm/Kconfig                |  23 ++++
>  arch/arm/mm/Makefile               |   2 +-
>  arch/arm/mm/fault.c                |   3 +
>  arch/arm/mm/proc-macros.S          |   3 +-
>  arch/arm/mm/proc-v7-2level.S       |   6 -
>  arch/arm/mm/proc-v7-bugs.c         | 174 +++++++++++++++++++++++++++++
>  arch/arm/mm/proc-v7.S              | 154 +++++++++++++++++++------
>  arch/arm/vfp/vfpmodule.c           |  17 ++-
>  30 files changed, 674 insertions(+), 107 deletions(-)
>  create mode 100644 arch/arm/kernel/bugs.c
>  create mode 100644 arch/arm/mm/proc-v7-bugs.c
> 


-- 
Florian

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 4.9 00/24] V4.9 backport of 32-bit arm spectre patches
  2018-10-31 13:56 [PATCH 4.9 00/24] V4.9 backport of 32-bit arm spectre patches David Long
                   ` (24 preceding siblings ...)
  2018-10-31 21:23 ` [PATCH 4.9 00/24] V4.9 backport of 32-bit arm spectre patches Florian Fainelli
@ 2018-11-02  1:18 ` David Long
  2018-11-02  8:54   ` Marc Zyngier
  2018-11-02 11:28   ` Russell King - ARM Linux
  25 siblings, 2 replies; 40+ messages in thread
From: David Long @ 2018-11-02  1:18 UTC (permalink / raw)
  To: stable, Russell King - ARM Linux, Florian Fainelli,
	Tony Lindgren, Marc Zyngier, Mark Rutland
  Cc: Greg KH, Mark Brown

On 10/31/18 9:56 AM, David Long wrote:
> From: "David A. Long" <dave.long@linaro.org>
> 
> V4.9 backport of spectre patches from Russell M. King's spectre branch.
> Patches not yet in upstream are excluded.
> 
> [...]
> 

Running kvm-unit-tests against this produces a hypervisor panic that
doesn't happen without the patches. That needs to be figured out before
the series is accepted into stable; it looks like a v2 will be needed.
Clearly kernelci testing alone is not sufficient when dealing with KVM
changes.

-dl

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 4.9 00/24] V4.9 backport of 32-bit arm spectre patches
  2018-11-02  1:18 ` David Long
@ 2018-11-02  8:54   ` Marc Zyngier
  2018-11-02 17:22     ` David Long
  2018-11-02 11:28   ` Russell King - ARM Linux
  1 sibling, 1 reply; 40+ messages in thread
From: Marc Zyngier @ 2018-11-02  8:54 UTC (permalink / raw)
  To: David Long, stable, Russell King - ARM Linux, Florian Fainelli,
	Tony Lindgren, Mark Rutland
  Cc: Greg KH, Mark Brown

On 02/11/18 01:18, David Long wrote:
> On 10/31/18 9:56 AM, David Long wrote:
>> From: "David A. Long" <dave.long@linaro.org>
>>
>> V4.9 backport of spectre patches from Russell M. King's spectre branch.
>> Patches not yet in upstream are excluded.
>>
>> [...]
>>
> 
> kvm-unit-test'ing of this results in a hypervisor panic that doesn't 
> happen without the patches. This needs to be figured out before it is 
> accepted into stable. Looks like a V2 will be needed. Clearly kernelci 
> testing alone is not sufficient when dealing with kvm changes.

How about posting the panic message, a description of what you were
doing when that happened, and details of the configuration (HW used,
Thumb-2 or not...)? If you cannot perform the analysis yourself, at
least give us enough information to help you.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 4.9 00/24] V4.9 backport of 32-bit arm spectre patches
  2018-11-02  1:18 ` David Long
  2018-11-02  8:54   ` Marc Zyngier
@ 2018-11-02 11:28   ` Russell King - ARM Linux
  1 sibling, 0 replies; 40+ messages in thread
From: Russell King - ARM Linux @ 2018-11-02 11:28 UTC (permalink / raw)
  To: David Long
  Cc: stable, Florian Fainelli, Tony Lindgren, Marc Zyngier,
	Mark Rutland, Greg KH, Mark Brown

On Thu, Nov 01, 2018 at 09:18:02PM -0400, David Long wrote:
> On 10/31/18 9:56 AM, David Long wrote:
> >From: "David A. Long" <dave.long@linaro.org>
> >
> >V4.9 backport of spectre patches from Russell M. King's spectre branch.
> >Patches not yet in upstream are excluded.
> >
> >[...]
> >
> 
> kvm-unit-test'ing of this results in a hypervisor panic that doesn't happen
> without the patches. This needs to be figured out before it is accepted into
> stable. Looks like a V2 will be needed. Clearly kernelci testing alone is
> not sufficient when dealing with kvm changes.

I've discovered in the last few days that kernelci boot testing is
next to useless - it bases its pass/fail result on whether the system
reaches a shell prompt, which can happen even if the kernel hits a
BUG() or warning along the way.

For example, see:

01:08:40.181846  [    9.309984] Unable to handle kernel paging request at virtual address e7fddef0

which is the kernel hitting a BUG() in:

https://storage.kernelci.org/rmk/to-build/v4.16-38-g9fa10446d304/arm/multi_v7_defconfig/lab-collabora/boot-exynos5800-peach-pi.html

and that results in a "pass" result for that boot test.

So, don't believe a "pass" result from kernelci at the moment, it's
meaningless in determining whether anything has broken.  The only
way around this is to manually read each and every boot log, which
is tedious, or run your own tests on local systems.
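
Until kernelci checks for this itself, a local scan of each boot log for the
usual error signatures is a workable stopgap — a minimal sketch (the pattern
list is illustrative, not exhaustive):

```python
import re

ERRORS = re.compile(r"Unable to handle kernel|BUG:|Oops|WARNING:|Call trace")

def boot_really_passed(log_text):
    """Reaching a shell prompt is not enough: report a pass only when no
    kernel error signature appears anywhere in the boot log."""
    return ERRORS.search(log_text) is None

log = ("[    9.309984] Unable to handle kernel paging request "
       "at virtual address e7fddef0\n/ # \n")
print(boot_really_passed(log))   # False: prompt reached, but still a failure
```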

I've reported this to info@kernelci.org earlier this week, and am
waiting for a response.

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line in suburbia: sync at 12.1Mbps down 622kbps up
According to speedtest.net: 11.9Mbps down 500kbps up

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 4.9 00/24] V4.9 backport of 32-bit arm spectre patches
  2018-11-02  8:54   ` Marc Zyngier
@ 2018-11-02 17:22     ` David Long
  0 siblings, 0 replies; 40+ messages in thread
From: David Long @ 2018-11-02 17:22 UTC (permalink / raw)
  To: Marc Zyngier, stable, Russell King - ARM Linux, Florian Fainelli,
	Tony Lindgren, Mark Rutland
  Cc: Greg KH, Mark Brown

On 11/2/18 4:54 AM, Marc Zyngier wrote:
> On 02/11/18 01:18, David Long wrote:
>> On 10/31/18 9:56 AM, David Long wrote:
>>> From: "David A. Long" <dave.long@linaro.org>
>>>
>>> V4.9 backport of spectre patches from Russell M. King's spectre branch.
>>> Patches not yet in upstream are excluded.
>>>
>>> [...]
>>>
>>
>> kvm-unit-test'ing of this results in a hypervisor panic that doesn't
>> happen without the patches. This needs to be figured out before it is
>> accepted into stable. Looks like a V2 will be needed. Clearly kernelci
>> testing alone is not sufficient when dealing with kvm changes.
> 
> How about posting the panic message, a description of what you were
> doing when that happened, and details of the configuration (HW used,
> Thumb-2 or not...)? If you cannot perform the analysis yourself, at
> least give us enough information to help you.
> 
> Thanks,
> 
> 	M.
> 

The goal of my email was to make sure this didn't end up going out as-is 
in the next v4.9-stable, not to beg for help debugging. But I can see how 
the mail might have been interpreted differently. The intent was that I 
would do the "figuring out" myself, and ask for help when and if I 
needed it.

If anyone is interested though: the test is kvm-unit-tests run on an 
Exynos Arndale 5250, built from the default config for that platform 
with most of the virtualization config options I could find enabled, 
and without enabling any Thumb-2. The problem goes away if I remove 
patches 12/24 and 13/24 (and probably just one of those). The kernel 
panic messages are below:


> [ 1388.419157] Kernel panic - not syncing: 
> [ 1388.419157] HYP panic: UNDEF PC:40000000 CPSR:000001d3
> [ 1388.426742] CPU: 0 PID: 1345 Comm: qemu-system-arm Not tainted 4.9.135-00024-g0cf93698e984 #1
> [ 1388.435242] Hardware name: SAMSUNG EXYNOS (Flattened Device Tree)
> [ 1388.441324] [<c0222bd4>] (unwind_backtrace) from [<c021f8d8>] (show_stack+0x10/0x14)
> [ 1388.449049] [<c021f8d8>] (show_stack) from [<c044a1b0>] (dump_stack+0x78/0x8c)
> [ 1388.456254] [<c044a1b0>] (dump_stack) from [<c02b02b4>] (panic+0xdc/0x258)
> [ 1388.463104] [<c02b02b4>] (panic) from [<c020fa98>] (kvm_arch_vcpu_ioctl_run+0xa4/0x468)
> [ 1388.471096] [<c020fa98>] (kvm_arch_vcpu_ioctl_run) from [<c0208dc4>] (kvm_vcpu_ioctl+0x374/0x6fc)
> [ 1388.479950] [<c0208dc4>] (kvm_vcpu_ioctl) from [<c03100ac>] (do_vfs_ioctl+0x9c/0x7e4)
> [ 1388.487760] [<c03100ac>] (do_vfs_ioctl) from [<c0310828>] (SyS_ioctl+0x34/0x58)
> [ 1388.495052] [<c0310828>] (SyS_ioctl) from [<c021c8c0>] (ret_fast_syscall+0x0/0x40)
> [ 1388.502604] CPU1: stopping
> [ 1388.505285] CPU: 1 PID: 0 Comm: swapper/1 Not tainted 4.9.135-00024-g0cf93698e984 #1
> [ 1388.513014] Hardware name: SAMSUNG EXYNOS (Flattened Device Tree)
> [ 1388.519091] [<c0222bd4>] (unwind_backtrace) from [<c021f8d8>] (show_stack+0x10/0x14)
> [ 1388.526820] [<c021f8d8>] (show_stack) from [<c044a1b0>] (dump_stack+0x78/0x8c)
> [ 1388.534025] [<c044a1b0>] (dump_stack) from [<c0221e98>] (handle_IPI+0x198/0x1ac)
> [ 1388.541401] [<c0221e98>] (handle_IPI) from [<c0201540>] (gic_handle_irq+0x94/0x98)
> [ 1388.548953] [<c0201540>] (gic_handle_irq) from [<c0220378>] (__irq_svc+0x58/0x8c)
> [ 1388.556415] Exception stack(0xee89bf50 to 0xee89bf98)
> [ 1388.561443] bf40:                                     00000000 c10308b4 00000001 2e193000
> [ 1388.569610] bf60: ffffe000 c1003bf4 00000000 00000143 00000000 eeffb148 49344906 c10308b4
> [ 1388.577768] bf80: fffffff5 ee89bfa0 c065b03c c065b12c 600d0013 ffffffff
> [ 1388.584365] [<c0220378>] (__irq_svc) from [<c065b12c>] (cpuidle_enter_state+0x264/0x320)
> [ 1388.592443] [<c065b12c>] (cpuidle_enter_state) from [<c026a1c4>] (cpu_startup_entry+0x168/0x228)
> [ 1388.601208] [<c026a1c4>] (cpu_startup_entry) from [<402016ec>] (0x402016ec)
> [ 1388.608161] ---[ end Kernel panic - not syncing: 
> [ 1388.608161] HYP panic: UNDEF PC:40000000 CPSR:000001d3
> [ 1388.619517] ------------[ cut here ]------------
> [ 1388.622669] WARNING: CPU: 0 PID: 1345 at kernel/workqueue.c:857 wq_worker_waking_up+0x78/0x80
> [ 1388.631175] Modules linked in: s5p_mfc videobuf2_dma_contig v4l2_common videobuf2_memops videobuf2_v4l2 videobuf2_core videodev media
> [ 1388.643146] CPU: 0 PID: 1345 Comm: qemu-system-arm Not tainted 4.9.135-00024-g0cf93698e984 #1
> [ 1388.651660] Hardware name: SAMSUNG EXYNOS (Flattened Device Tree)
> [ 1388.657735] [<c0222bd4>] (unwind_backtrace) from [<c021f8d8>] (show_stack+0x10/0x14)
> [ 1388.665467] [<c021f8d8>] (show_stack) from [<c044a1b0>] (dump_stack+0x78/0x8c)
> [ 1388.672670] [<c044a1b0>] (dump_stack) from [<c0230070>] (__warn+0xe8/0x100)
> [ 1388.679608] [<c0230070>] (__warn) from [<c0230138>] (warn_slowpath_null+0x20/0x28)
> [ 1388.687164] [<c0230138>] (warn_slowpath_null) from [<c0246b28>] (wq_worker_waking_up+0x78/0x80)
> [ 1388.695850] [<c0246b28>] (wq_worker_waking_up) from [<c0251cd0>] (ttwu_do_activate+0x58/0x70)
> [ 1388.704354] [<c0251cd0>] (ttwu_do_activate) from [<c0253898>] (try_to_wake_up+0x19c/0x290)
> [ 1388.712598] [<c0253898>] (try_to_wake_up) from [<c0269a84>] (autoremove_wake_function+0xc/0x34)
> [ 1388.721280] [<c0269a84>] (autoremove_wake_function) from [<c0269494>] (__wake_up_common+0x4c/0x80)
> [ 1388.730220] [<c0269494>] (__wake_up_common) from [<c0269500>] (__wake_up+0x38/0x4c)
> [ 1388.737858] [<c0269500>] (__wake_up) from [<c06325b0>] (i2c_s3c_irq_nextbyte+0x488/0x4bc)
> [ 1388.746016] [<c06325b0>] (i2c_s3c_irq_nextbyte) from [<c063332c>] (s3c24xx_i2c_irq+0x34/0x78)
> [ 1388.754523] [<c063332c>] (s3c24xx_i2c_irq) from [<c0278688>] (__handle_irq_event_percpu+0x50/0x11c)
> [ 1388.763550] [<c0278688>] (__handle_irq_event_percpu) from [<c0278770>] (handle_irq_event_percpu+0x1c/0x58)
> [ 1388.773184] [<c0278770>] (handle_irq_event_percpu) from [<c02787e4>] (handle_irq_event+0x38/0x5c)
> [ 1388.782039] [<c02787e4>] (handle_irq_event) from [<c027bb70>] (handle_fasteoi_irq+0xd0/0x1a0)
> [ 1388.790544] [<c027bb70>] (handle_fasteoi_irq) from [<c0277984>] (generic_handle_irq+0x24/0x34)
> [ 1388.799137] [<c0277984>] (generic_handle_irq) from [<c0277eac>] (__handle_domain_irq+0x7c/0xec)
> [ 1388.807816] [<c0277eac>] (__handle_domain_irq) from [<c0201500>] (gic_handle_irq+0x54/0x98)
> [ 1388.816149] [<c0201500>] (gic_handle_irq) from [<c0220378>] (__irq_svc+0x58/0x8c)
> [ 1388.823611] Exception stack(0xed63be00 to 0xed63be48)
> [ 1388.828641] be00: 00002bee 00000007 fac81000 c0680298 c10836d8 00005dbf c26f7ba3 000000c8
> [ 1388.836806] be20: c0c04390 00000063 199996c0 00000000 00000007 ed63be50 c0222558 c0447fec
> [ 1388.844963] be40: 80000153 ffffffff
> [ 1388.848432] [<c0220378>] (__irq_svc) from [<c0447fec>] (__timer_delay+0x44/0x58)
> [ 1388.855818] [<c0447fec>] (__timer_delay) from [<c02b0418>] (panic+0x240/0x258)
> [ 1388.863022] [<c02b0418>] (panic) from [<c020fa98>] (kvm_arch_vcpu_ioctl_run+0xa4/0x468)
> [ 1388.871009] [<c020fa98>] (kvm_arch_vcpu_ioctl_run) from [<c0208dc4>] (kvm_vcpu_ioctl+0x374/0x6fc)
> [ 1388.879862] [<c0208dc4>] (kvm_vcpu_ioctl) from [<c03100ac>] (do_vfs_ioctl+0x9c/0x7e4)
> [ 1388.887673] [<c03100ac>] (do_vfs_ioctl) from [<c0310828>] (SyS_ioctl+0x34/0x58)
> [ 1388.894965] [<c0310828>] (SyS_ioctl) from [<c021c8c0>] (ret_fast_syscall+0x0/0x40)
> [ 1388.902512] ---[ end trace 9f81df9f2aa3f954 ]---


Thanks,
-dl

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 4.9 11/24] ARM: KVM: invalidate BTB on guest exit for Cortex-A12/A17
  2018-10-31 13:57 ` [PATCH 4.9 11/24] ARM: KVM: invalidate BTB on guest exit for Cortex-A12/A17 David Long
@ 2018-11-05  9:13   ` Marc Zyngier
  2018-11-07  2:22     ` David Long
  2018-11-07  2:23     ` David Long
  0 siblings, 2 replies; 40+ messages in thread
From: Marc Zyngier @ 2018-11-05  9:13 UTC (permalink / raw)
  To: David Long, stable, Russell King - ARM Linux, Florian Fainelli,
	Tony Lindgren, Mark Rutland
  Cc: Greg KH, Mark Brown

David,

On 31/10/18 13:57, David Long wrote:
> From: Marc Zyngier <marc.zyngier@arm.com>
> 
> Commit 3f7e8e2e1ebda787f156ce46e3f0a9ce2833fa4f upstream.
> 
> In order to avoid aliasing attacks against the branch predictor,
> let's invalidate the BTB on guest exit. This is made complicated
> by the fact that we cannot take a branch before invalidating the
> BTB.
> 
> We only apply this to A12 and A17, which are the only two ARM
> cores on which this is useful.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
> Boot-tested-by: Tony Lindgren <tony@atomide.com>
> Reviewed-by: Tony Lindgren <tony@atomide.com>
> Signed-off-by: David A. Long <dave.long@linaro.org>
> ---
>  arch/arm/include/asm/kvm_asm.h |  2 -
>  arch/arm/include/asm/kvm_mmu.h | 17 ++++++++-
>  arch/arm/kvm/hyp/hyp-entry.S   | 69 ++++++++++++++++++++++++++++++++++
>  3 files changed, 85 insertions(+), 3 deletions(-)
> 

[...]

> diff --git a/arch/arm/kvm/hyp/hyp-entry.S b/arch/arm/kvm/hyp/hyp-entry.S
> index 96beb53934c9..de242d9598c6 100644
> --- a/arch/arm/kvm/hyp/hyp-entry.S
> +++ b/arch/arm/kvm/hyp/hyp-entry.S
> @@ -71,6 +71,66 @@ __kvm_hyp_vector:
>  	W(b)	hyp_irq
>  	W(b)	hyp_fiq
>  
> +#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
> +	.align 5
> +__kvm_hyp_vector_bp_inv:
> +	.global __kvm_hyp_vector_bp_inv
> +
> +	/*
> +	 * We encode the exception entry in the bottom 3 bits of
> +	 * SP, and we have to guarantee to be 8 bytes aligned.
> +	 */
> +	W(add)	sp, sp, #1	/* Reset 	  7 */
> +	W(add)	sp, sp, #1	/* Undef	  6 */
> +	W(add)	sp, sp, #1	/* Syscall	  5 */
> +	W(add)	sp, sp, #1	/* Prefetch abort 4 */
> +	W(add)	sp, sp, #1	/* Data abort	  3 */
> +	W(add)	sp, sp, #1	/* HVC		  2 */
> +	W(add)	sp, sp, #1	/* IRQ		  1 */
> +	W(nop)			/* FIQ		  0 */
> +
> +	mcr	p15, 0, r0, c7, c5, 6	/* BPIALL */
> +	isb
> +
> +#ifdef CONFIG_THUMB2_KERNEL
> +	/*
> +	 * Yet another silly hack: Use VPIDR as a temp register.
> +	 * Thumb2 is really a pain, as SP cannot be used with most
> +	 * of the bitwise instructions. The vect_br macro ensures
> +	 * things get cleaned up.
> +	 */
> +	mcr	p15, 4, r0, c0, c0, 0	/* VPIDR */
> +	mov	r0, sp
> +	and	r0, r0, #7
> +	sub	sp, sp, r0
> +	push	{r1, r2}
> +	mov	r1, r0
> +	mrc	p15, 4, r0, c0, c0, 0	/* VPIDR */
> +	mrc	p15, 0, r2, c0, c0, 0	/* MIDR  */
> +	mcr	p15, 4, r2, c0, c0, 0	/* VPIDR */
> +#endif
> +
> +.macro vect_br val, targ
> +ARM(	eor	sp, sp, #\val	)
> +ARM(	tst	sp, #7		)
> +ARM(	eorne	sp, sp, #\val	)
> +
> +THUMB(	cmp	r1, #\val	)
> +THUMB(	popeq	{r1, r2}	)
> +
> +	beq	\targ
> +.endm
> +
> +	vect_br	0, hyp_fiq
> +	vect_br	1, hyp_irq
> +	vect_br	2, hyp_hvc
> +	vect_br	3, hyp_dabt
> +	vect_br	4, hyp_pabt
> +	vect_br	5, hyp_svc
> +	vect_br	6, hyp_undef
> +	vect_br	7, hyp_reset
> +#endif
> +
>  .macro invalid_vector label, cause
>  	.align
>  \label:	mov	r0, #\cause
> @@ -132,6 +192,14 @@ hyp_hvc:
>  	beq	1f
>  
>  	push	{lr}
> +	/*
> +	 * Pushing r2 here is just a way of keeping the stack aligned to
> +	 * 8 bytes on any path that can trigger a HYP exception. Here,
> +	 * we may well be about to jump into the guest, and the guest
> +	 * exit would otherwise be badly decoded by our fancy
> +	 * "decode-exception-without-a-branch" code...
> +	 */
> +	push	{r2, lr}
>  
>  	mov	lr, r0
>  	mov	r0, r1
> @@ -142,6 +210,7 @@ THUMB(	orr	lr, #1)
>  	blx	lr			@ Call the HYP function
>  
>  	pop	{lr}
> +	pop	{r2, lr}


I don't see how this can work. This clearly isn't the right resolution
for merging 3f7e8e2e1ebda787f156ce46e3f0a9ce2833fa4f, as it contradicts
the very comment you are merging here.

I wouldn't be surprised if the crash you're observing is due to
this problem (unaligned stack, bad decoding of the vector, branch to the
wrong handler, HYP on fire).
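To make that failure mode concrete, the decode trick can be modelled like
this (a hedged Python sketch of the ARM-mode vect_br path only, with
made-up helper names, not the real assembly):

```python
# Sketch of __kvm_hyp_vector_bp_inv: each vector slot is a single
# "W(add) sp, sp, #1", so the number of adds executed before the common
# code leaves the exception number in SP's low 3 bits. This only works
# if SP is 8-byte aligned when the exception is taken.

def take_exception(sp, code):
    """Entering the vector slot for `code` executes `code` add instructions."""
    return sp + code

def vect_br_decode(sp):
    """Model of the ARM-mode vect_br sequence (eor / tst / eorne / beq)."""
    for val in range(8):        # vect_br 0..7
        sp ^= val               # eor   sp, sp, #val
        if sp & 7 == 0:         # tst   sp, #7
            return val, sp      # beq   \targ, with SP restored
        sp ^= val               # eorne sp, sp, #val (undo, try next slot)
    raise AssertionError("unreachable: some val always matches SP's low bits")
```

With an aligned sp of 0x1000, an HVC (code 2) round-trips to (2, 0x1000);
starting from the misaligned 0x1004 instead, the same HVC decodes as
exception 6, i.e. the Undef vector - a branch to the wrong handler.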

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 4.9 09/24] ARM: spectre-v2: add firmware based hardening
  2018-10-31 13:56 ` [PATCH 4.9 09/24] ARM: spectre-v2: add firmware based hardening David Long
@ 2018-11-06 10:40   ` Marc Zyngier
  2018-11-06 10:55     ` Russell King - ARM Linux
  2018-11-06 16:20     ` David Long
  0 siblings, 2 replies; 40+ messages in thread
From: Marc Zyngier @ 2018-11-06 10:40 UTC (permalink / raw)
  To: David Long
  Cc: stable, Russell King - ARM Linux, Florian Fainelli,
	Tony Lindgren, Mark Rutland, Greg KH, Mark Brown

On Wed, 31 Oct 2018 13:56:58 +0000,
David Long <dave.long@linaro.org> wrote:
> 
> From: Russell King <rmk+kernel@armlinux.org.uk>
> 
> Commit 10115105cb3aa17b5da1cb726ae8dd5f6854bd93 upstream.
> 
> Add firmware based hardening for cores that require more complex
> handling in firmware.
> 
> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
> Boot-tested-by: Tony Lindgren <tony@atomide.com>
> Reviewed-by: Tony Lindgren <tony@atomide.com>
> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
> Signed-off-by: David A. Long <dave.long@linaro.org>
> ---
>  arch/arm/mm/proc-v7-bugs.c | 60 ++++++++++++++++++++++++++++++++++++++
>  arch/arm/mm/proc-v7.S      | 21 +++++++++++++
>  2 files changed, 81 insertions(+)
> 

[...]

> diff --git a/arch/arm/mm/proc-v7.S b/arch/arm/mm/proc-v7.S
> index 2d2e5ae85816..8fde9edb4a48 100644
> --- a/arch/arm/mm/proc-v7.S
> +++ b/arch/arm/mm/proc-v7.S
> @@ -9,6 +9,7 @@
>   *
>   *  This is the "shell" of the ARMv7 processor support.
>   */
> +#include <linux/arm-smccc.h>
>  #include <linux/init.h>
>  #include <linux/linkage.h>
>  #include <asm/assembler.h>
> @@ -88,6 +89,26 @@ ENTRY(cpu_v7_dcache_clean_area)
>  	ret	lr
>  ENDPROC(cpu_v7_dcache_clean_area)
>  
> +#ifdef CONFIG_ARM_PSCI
> +	.arch_extension sec
> +ENTRY(cpu_v7_smc_switch_mm)
> +	stmfd	sp!, {r0 - r3}
> +	movw	r0, #:lower16:ARM_SMCCC_ARCH_WORKAROUND_1
> +	movt	r0, #:upper16:ARM_SMCCC_ARCH_WORKAROUND_1
> +	smc	#0
> +	ldmfd	sp!, {r0 - r3}
> +	b	cpu_v7_switch_mm
> +ENDPROC(cpu_v7_smc_switch_mm)
> +	.arch_extension virt
> +ENTRY(cpu_v7_hvc_switch_mm)
> +	stmfd	sp!, {r0 - r3}
> +	movw	r0, #:lower16:ARM_SMCCC_ARCH_WORKAROUND_1
> +	movt	r0, #:upper16:ARM_SMCCC_ARCH_WORKAROUND_1
> +	hvc	#0
> +	ldmfd	sp!, {r0 - r3}
> +	b	cpu_v7_switch_mm
> +ENDPROC(cpu_v7_smc_switch_mm)

As pointed out by Ard a while ago [1], this breaks Thumb-2 kernels.
Please keep this series on hold until this is fixed in mainline and
you can cherry-pick the corresponding patch.

Thanks,

	M.

[1] https://patchwork.kernel.org/patch/10475033/

-- 
Jazz is not dead, it just smells funny.


* Re: [PATCH 4.9 09/24] ARM: spectre-v2: add firmware based hardening
  2018-11-06 10:40   ` Marc Zyngier
@ 2018-11-06 10:55     ` Russell King - ARM Linux
  2018-11-06 16:19       ` Mark Brown
  2018-11-06 16:20     ` David Long
  1 sibling, 1 reply; 40+ messages in thread
From: Russell King - ARM Linux @ 2018-11-06 10:55 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: David Long, stable, Florian Fainelli, Tony Lindgren,
	Mark Rutland, Greg KH, Mark Brown

On Tue, Nov 06, 2018 at 10:40:33AM +0000, Marc Zyngier wrote:
> On Wed, 31 Oct 2018 13:56:58 +0000,
> David Long <dave.long@linaro.org> wrote:
> > 
> > From: Russell King <rmk+kernel@armlinux.org.uk>
> > 
> > Commit 10115105cb3aa17b5da1cb726ae8dd5f6854bd93 upstream.
> > 
> > Add firmware based hardening for cores that require more complex
> > handling in firmware.
> > 
> > Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
> > Boot-tested-by: Tony Lindgren <tony@atomide.com>
> > Reviewed-by: Tony Lindgren <tony@atomide.com>
> > Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
> > Signed-off-by: David A. Long <dave.long@linaro.org>
> > ---
> >  arch/arm/mm/proc-v7-bugs.c | 60 ++++++++++++++++++++++++++++++++++++++
> >  arch/arm/mm/proc-v7.S      | 21 +++++++++++++
> >  2 files changed, 81 insertions(+)
> > 
> 
> [...]
> 
> > diff --git a/arch/arm/mm/proc-v7.S b/arch/arm/mm/proc-v7.S
> > index 2d2e5ae85816..8fde9edb4a48 100644
> > --- a/arch/arm/mm/proc-v7.S
> > +++ b/arch/arm/mm/proc-v7.S
> > @@ -9,6 +9,7 @@
> >   *
> >   *  This is the "shell" of the ARMv7 processor support.
> >   */
> > +#include <linux/arm-smccc.h>
> >  #include <linux/init.h>
> >  #include <linux/linkage.h>
> >  #include <asm/assembler.h>
> > @@ -88,6 +89,26 @@ ENTRY(cpu_v7_dcache_clean_area)
> >  	ret	lr
> >  ENDPROC(cpu_v7_dcache_clean_area)
> >  
> > +#ifdef CONFIG_ARM_PSCI
> > +	.arch_extension sec
> > +ENTRY(cpu_v7_smc_switch_mm)
> > +	stmfd	sp!, {r0 - r3}
> > +	movw	r0, #:lower16:ARM_SMCCC_ARCH_WORKAROUND_1
> > +	movt	r0, #:upper16:ARM_SMCCC_ARCH_WORKAROUND_1
> > +	smc	#0
> > +	ldmfd	sp!, {r0 - r3}
> > +	b	cpu_v7_switch_mm
> > +ENDPROC(cpu_v7_smc_switch_mm)
> > +	.arch_extension virt
> > +ENTRY(cpu_v7_hvc_switch_mm)
> > +	stmfd	sp!, {r0 - r3}
> > +	movw	r0, #:lower16:ARM_SMCCC_ARCH_WORKAROUND_1
> > +	movt	r0, #:upper16:ARM_SMCCC_ARCH_WORKAROUND_1
> > +	hvc	#0
> > +	ldmfd	sp!, {r0 - r3}
> > +	b	cpu_v7_switch_mm
> > +ENDPROC(cpu_v7_smc_switch_mm)
> 
> As pointed out by Ard a while ago [1], this breaks Thumb-2 kernels.
> Please keep this series on hold until this is fixed in mainline and
> you can cherry-pick the corresponding patch.

You have to wonder at the effectiveness of the autobooters if stuff
like this is not caught.  There are way too many configuration
combinations and firmwares for individuals to be able to test every
code path, we need autobooters to have sufficient diversity (and to
pick up on failures better) to be able to exercise these in an
automated fashion and report decent, reliable results.

It's taken from May to November to find this, which is _way_ too long
a timeframe.

The Thumb annotations for functions are always going to be very
troublesome as there's no automated way to validate them except by
actually exercising the code.

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line in suburbia: sync at 12.1Mbps down 622kbps up
According to speedtest.net: 11.9Mbps down 500kbps up


* Re: [PATCH 4.9 09/24] ARM: spectre-v2: add firmware based hardening
  2018-11-06 10:55     ` Russell King - ARM Linux
@ 2018-11-06 16:19       ` Mark Brown
  2018-11-06 16:30         ` Russell King - ARM Linux
  0 siblings, 1 reply; 40+ messages in thread
From: Mark Brown @ 2018-11-06 16:19 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: Marc Zyngier, David Long, stable, Florian Fainelli,
	Tony Lindgren, Mark Rutland, Greg KH

On Tue, Nov 06, 2018 at 10:55:00AM +0000, Russell King - ARM Linux wrote:
> On Tue, Nov 06, 2018 at 10:40:33AM +0000, Marc Zyngier wrote:

> > As pointed out by Ard a while ago [1], this breaks Thumb-2 kernels.
> > Please keep this series on hold until this is fixed in mainline and
> > you can cherry-pick the corresponding patch.

> You have to wonder at the effectiveness of the autobooters if stuff
> like this is not caught.  There are way too many configuration
> combinations and firmwares for individuals to be able to test every
> code path, we need autobooters to have sufficient diversity (and to
> pick up on failures better) to be able to exercise these in an
> automated fashion and report decent, reliable results.

Right, and it depends on what people are willing to contribute hardware
wise.  However in the case of Thumb it's just a config option so we
should probably ensure that there's at least one config that's at least
getting booted, we could put something in the kernel source but I'm
thinking that the easiest thing would be to teach at least KernelCI to
just add a multi_v7+THUMB2 build (and then there's userspace too!).
I'll try to look into that after Plumbers; I've got some other stuff queued
up there anyway.

> It's taken from May to November to find this, which is _way_ too long
> a timeframe.

> The Thumb annotations for functions are always going to be very
> troublesome as there's no automated way to validate them except by
> actually exercising the code.

There's some work going on around adding runtime testing, which will help
a bit there as it'll improve coverage, but it's never going to exercise
everything.


* Re: [PATCH 4.9 09/24] ARM: spectre-v2: add firmware based hardening
  2018-11-06 10:40   ` Marc Zyngier
  2018-11-06 10:55     ` Russell King - ARM Linux
@ 2018-11-06 16:20     ` David Long
  2018-11-06 16:23       ` Russell King - ARM Linux
  1 sibling, 1 reply; 40+ messages in thread
From: David Long @ 2018-11-06 16:20 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: stable, Russell King - ARM Linux, Florian Fainelli,
	Tony Lindgren, Mark Rutland, Greg KH, Mark Brown

On 11/6/18 5:40 AM, Marc Zyngier wrote:
> On Wed, 31 Oct 2018 13:56:58 +0000,
> David Long <dave.long@linaro.org> wrote:
>>
>> From: Russell King <rmk+kernel@armlinux.org.uk>
>>
>> Commit 10115105cb3aa17b5da1cb726ae8dd5f6854bd93 upstream.
>>
>> Add firmware based hardening for cores that require more complex
>> handling in firmware.
>>
>> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
>> Boot-tested-by: Tony Lindgren <tony@atomide.com>
>> Reviewed-by: Tony Lindgren <tony@atomide.com>
>> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
>> Signed-off-by: David A. Long <dave.long@linaro.org>
>> ---
>>   arch/arm/mm/proc-v7-bugs.c | 60 ++++++++++++++++++++++++++++++++++++++
>>   arch/arm/mm/proc-v7.S      | 21 +++++++++++++
>>   2 files changed, 81 insertions(+)
>>
> 
> [...]
> 
>> diff --git a/arch/arm/mm/proc-v7.S b/arch/arm/mm/proc-v7.S
>> index 2d2e5ae85816..8fde9edb4a48 100644
>> --- a/arch/arm/mm/proc-v7.S
>> +++ b/arch/arm/mm/proc-v7.S
>> @@ -9,6 +9,7 @@
>>    *
>>    *  This is the "shell" of the ARMv7 processor support.
>>    */
>> +#include <linux/arm-smccc.h>
>>   #include <linux/init.h>
>>   #include <linux/linkage.h>
>>   #include <asm/assembler.h>
>> @@ -88,6 +89,26 @@ ENTRY(cpu_v7_dcache_clean_area)
>>   	ret	lr
>>   ENDPROC(cpu_v7_dcache_clean_area)
>>   
>> +#ifdef CONFIG_ARM_PSCI
>> +	.arch_extension sec
>> +ENTRY(cpu_v7_smc_switch_mm)
>> +	stmfd	sp!, {r0 - r3}
>> +	movw	r0, #:lower16:ARM_SMCCC_ARCH_WORKAROUND_1
>> +	movt	r0, #:upper16:ARM_SMCCC_ARCH_WORKAROUND_1
>> +	smc	#0
>> +	ldmfd	sp!, {r0 - r3}
>> +	b	cpu_v7_switch_mm
>> +ENDPROC(cpu_v7_smc_switch_mm)
>> +	.arch_extension virt
>> +ENTRY(cpu_v7_hvc_switch_mm)
>> +	stmfd	sp!, {r0 - r3}
>> +	movw	r0, #:lower16:ARM_SMCCC_ARCH_WORKAROUND_1
>> +	movt	r0, #:upper16:ARM_SMCCC_ARCH_WORKAROUND_1
>> +	hvc	#0
>> +	ldmfd	sp!, {r0 - r3}
>> +	b	cpu_v7_switch_mm
>> +ENDPROC(cpu_v7_smc_switch_mm)
> 
> As pointed out by Ard a while ago [1], this breaks Thumb-2 kernels.
> Please keep this series on hold until this is fixed in mainline and
> you can cherry-pick the corresponding patch.
> 
> Thanks,
> 
> 	M.
> 
> [1] https://patchwork.kernel.org/patch/10475033/
> 

Note that it looks like this problem is now in v4.14 stable too.

-dl


* Re: [PATCH 4.9 09/24] ARM: spectre-v2: add firmware based hardening
  2018-11-06 16:20     ` David Long
@ 2018-11-06 16:23       ` Russell King - ARM Linux
  0 siblings, 0 replies; 40+ messages in thread
From: Russell King - ARM Linux @ 2018-11-06 16:23 UTC (permalink / raw)
  To: David Long
  Cc: Marc Zyngier, stable, Florian Fainelli, Tony Lindgren,
	Mark Rutland, Greg KH, Mark Brown

On Tue, Nov 06, 2018 at 11:20:19AM -0500, David Long wrote:
> On 11/6/18 5:40 AM, Marc Zyngier wrote:
> >On Wed, 31 Oct 2018 13:56:58 +0000,
> >David Long <dave.long@linaro.org> wrote:
> >>
> >>From: Russell King <rmk+kernel@armlinux.org.uk>
> >>
> >>Commit 10115105cb3aa17b5da1cb726ae8dd5f6854bd93 upstream.
> >>
> >>Add firmware based hardening for cores that require more complex
> >>handling in firmware.
> >>
> >>Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
> >>Boot-tested-by: Tony Lindgren <tony@atomide.com>
> >>Reviewed-by: Tony Lindgren <tony@atomide.com>
> >>Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
> >>Signed-off-by: David A. Long <dave.long@linaro.org>
> >>---
> >>  arch/arm/mm/proc-v7-bugs.c | 60 ++++++++++++++++++++++++++++++++++++++
> >>  arch/arm/mm/proc-v7.S      | 21 +++++++++++++
> >>  2 files changed, 81 insertions(+)
> >>
> >
> >[...]
> >
> >>diff --git a/arch/arm/mm/proc-v7.S b/arch/arm/mm/proc-v7.S
> >>index 2d2e5ae85816..8fde9edb4a48 100644
> >>--- a/arch/arm/mm/proc-v7.S
> >>+++ b/arch/arm/mm/proc-v7.S
> >>@@ -9,6 +9,7 @@
> >>   *
> >>   *  This is the "shell" of the ARMv7 processor support.
> >>   */
> >>+#include <linux/arm-smccc.h>
> >>  #include <linux/init.h>
> >>  #include <linux/linkage.h>
> >>  #include <asm/assembler.h>
> >>@@ -88,6 +89,26 @@ ENTRY(cpu_v7_dcache_clean_area)
> >>  	ret	lr
> >>  ENDPROC(cpu_v7_dcache_clean_area)
> >>+#ifdef CONFIG_ARM_PSCI
> >>+	.arch_extension sec
> >>+ENTRY(cpu_v7_smc_switch_mm)
> >>+	stmfd	sp!, {r0 - r3}
> >>+	movw	r0, #:lower16:ARM_SMCCC_ARCH_WORKAROUND_1
> >>+	movt	r0, #:upper16:ARM_SMCCC_ARCH_WORKAROUND_1
> >>+	smc	#0
> >>+	ldmfd	sp!, {r0 - r3}
> >>+	b	cpu_v7_switch_mm
> >>+ENDPROC(cpu_v7_smc_switch_mm)
> >>+	.arch_extension virt
> >>+ENTRY(cpu_v7_hvc_switch_mm)
> >>+	stmfd	sp!, {r0 - r3}
> >>+	movw	r0, #:lower16:ARM_SMCCC_ARCH_WORKAROUND_1
> >>+	movt	r0, #:upper16:ARM_SMCCC_ARCH_WORKAROUND_1
> >>+	hvc	#0
> >>+	ldmfd	sp!, {r0 - r3}
> >>+	b	cpu_v7_switch_mm
> >>+ENDPROC(cpu_v7_smc_switch_mm)
> >
> >As pointed out by Ard a while ago [1], this breaks Thumb-2 kernels.
> >Please keep this series on hold until this is fixed in mainline and
> >you can cherry-pick the corresponding patch.
> >
> >Thanks,
> >
> >	M.
> >
> >[1] https://patchwork.kernel.org/patch/10475033/
> >
> 
> Note that it looks like this problem is now in v4.14 stable too.

The good news is that Linus has just pulled the fix into mainline, so
we can now poke Greg to pick it up for all stable kernels - but, as a
result, we're going to get into a bit of a mess because it's going to
require careful management of which stable kernels, and getting it
applied by indirect reference along with _these_ patches.

I'm not sure if we've just made things easier or harder.

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line in suburbia: sync at 12.1Mbps down 622kbps up
According to speedtest.net: 11.9Mbps down 500kbps up


* Re: [PATCH 4.9 09/24] ARM: spectre-v2: add firmware based hardening
  2018-11-06 16:19       ` Mark Brown
@ 2018-11-06 16:30         ` Russell King - ARM Linux
  2018-11-06 16:53           ` Mark Brown
  0 siblings, 1 reply; 40+ messages in thread
From: Russell King - ARM Linux @ 2018-11-06 16:30 UTC (permalink / raw)
  To: Mark Brown
  Cc: Marc Zyngier, David Long, stable, Florian Fainelli,
	Tony Lindgren, Mark Rutland, Greg KH

On Tue, Nov 06, 2018 at 04:19:32PM +0000, Mark Brown wrote:
> On Tue, Nov 06, 2018 at 10:55:00AM +0000, Russell King - ARM Linux wrote:
> > On Tue, Nov 06, 2018 at 10:40:33AM +0000, Marc Zyngier wrote:
> 
> > > As pointed out by Ard a while ago [1], this breaks Thumb-2 kernels.
> > > Please keep this series on hold until this is fixed in mainline and
> > > you can cherry-pick the corresponding patch.
> 
> > You have to wonder at the effectiveness of the autobooters if stuff
> > like this is not caught.  There are way too many configuration
> > combinations and firmwares for individuals to be able to test every
> > code path, we need autobooters to have sufficient diversity (and to
> > pick up on failures better) to be able to exercise these in an
> > automated fashion and report decent, reliable results.
> 
> Right, and it depends on what people are willing to contribute hardware
> wise.  However in the case of Thumb it's just a config option so we
> should probably ensure that there's at least one config that's at least
> getting booted, we could put something in the kernel source but I'm
> thinking that the easiest thing would be to teach at least KernelCI to
> just add a multi_v7+THUMB2 build (and then there's userspace too!).
> I'll try to look into that after Plumbers; I've got some other stuff queued
> up there anyway.

With any missing ENDPROC(), the only way to currently detect it is by
trying to run the code path - building alone does not flag any warnings
or errors.  That's because the assembler has no idea whether what is
being assembled is code or data, and the purpose of ENDPROC() is to
mark it as code for the rest of the toolchain, so that it can apply the
bit 0 "fixup" for Thumb2 code.

So, in this case, the only way the error is detectable is to have a
platform where we boot a kernel which makes use of the HVC fixup path.

If that's too much to ask, then we're just going to have to accept that
Thumb2 kernels are going to be more fragile than ARM kernels because
there's no way to be certain that we have the correct annotations
everywhere - so we're going to have to rely on users reporting these
bugs _after_ the changes have hit mainline.
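The bit 0 mechanics being described can be sketched as follows (a hedged
Python model; the function names are illustrative, not real kernel symbols
or toolchain APIs):

```python
# ENDPROC() emits ".type sym, %function", which is what tells the
# toolchain that the symbol is code. For a Thumb function the linker then
# sets bit 0 of the symbol's address; BX/BLX use that bit to select the
# instruction set. Without ENDPROC() nothing fails at build time - the
# branch just lands in ARM state on Thumb code.

def symbol_address(base, thumb_code, has_endproc):
    """Linker's view: the Thumb bit is only applied to known functions."""
    if thumb_code and has_endproc:
        return base | 1
    return base

def branch_target(addr):
    """BX/BLX interworking: bit 0 of the target selects the state."""
    return ("thumb", addr & ~1) if addr & 1 else ("arm", addr)
```

For a Thumb function missing its ENDPROC(), the branch target comes back
as ("arm", base): the CPU decodes the Thumb instructions as ARM ones,
which is only detectable by actually running that path.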

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line in suburbia: sync at 12.1Mbps down 622kbps up
According to speedtest.net: 11.9Mbps down 500kbps up


* Re: [PATCH 4.9 09/24] ARM: spectre-v2: add firmware based hardening
  2018-11-06 16:30         ` Russell King - ARM Linux
@ 2018-11-06 16:53           ` Mark Brown
  0 siblings, 0 replies; 40+ messages in thread
From: Mark Brown @ 2018-11-06 16:53 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: Marc Zyngier, David Long, stable, Florian Fainelli,
	Tony Lindgren, Mark Rutland, Greg KH

On Tue, Nov 06, 2018 at 04:30:44PM +0000, Russell King - ARM Linux wrote:

> With any missing ENDPROC(), the only way to currently detect it is by
> trying to run the code path - building alone does not flag any warnings
> or errors.  That's because the assembler has no idea whether what is
> being assembled is code or data, and the purpose of ENDPROC() is to
> mark it as code for the rest of the toolchain, so that it can apply the
> bit 0 "fixup" for Thumb2 code.

> So, in this case, the only way the error is detectable is to have a
> platform where we boot a kernel which makes use of the HVC fixup path.

> If that's too much to ask, then we're just going to have to accept that
> Thumb2 kernels are going to be more fragile than ARM kernels because
> there's no way to be certain that we have the correct annotations
> everywhere - so we're going to have to rely on users reporting these
> bugs _after_ the changes have hit mainline.

Oh, totally - but currently none of the automated stuff is even trying
to boot it, so if we can do literally anything we'll be better off in
terms of coverage, even though it doesn't address the fundamental
fragility.


* Re: [PATCH 4.9 11/24] ARM: KVM: invalidate BTB on guest exit for Cortex-A12/A17
  2018-11-05  9:13   ` Marc Zyngier
@ 2018-11-07  2:22     ` David Long
  2018-11-07  2:23     ` David Long
  1 sibling, 0 replies; 40+ messages in thread
From: David Long @ 2018-11-07  2:22 UTC (permalink / raw)
  To: Marc Zyngier, stable, Russell King - ARM Linux, Florian Fainelli,
	Tony Lindgren, Mark Rutland
  Cc: Greg KH, Mark Brown

On 11/5/18 4:13 AM, Marc Zyngier wrote:
> David,
> 
> On 31/10/18 13:57, David Long wrote:
>> From: Marc Zyngier <marc.zyngier@arm.com>
>>
>> Commit 3f7e8e2e1ebda787f156ce46e3f0a9ce2833fa4f upstream.
>>
>> In order to avoid aliasing attacks against the branch predictor,
>> let's invalidate the BTB on guest exit. This is made complicated
>> by the fact that we cannot take a branch before invalidating the
>> BTB.
>>
>> We only apply this to A12 and A17, which are the only two ARM
>> cores on which this is useful.
>>
>> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
>> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
>> Boot-tested-by: Tony Lindgren <tony@atomide.com>
>> Reviewed-by: Tony Lindgren <tony@atomide.com>
>> Signed-off-by: David A. Long <dave.long@linaro.org>
>> ---
>>   arch/arm/include/asm/kvm_asm.h |  2 -
>>   arch/arm/include/asm/kvm_mmu.h | 17 ++++++++-
>>   arch/arm/kvm/hyp/hyp-entry.S   | 69 ++++++++++++++++++++++++++++++++++
>>   3 files changed, 85 insertions(+), 3 deletions(-)
>>
> 
> [...]
> 
>> diff --git a/arch/arm/kvm/hyp/hyp-entry.S b/arch/arm/kvm/hyp/hyp-entry.S
>> index 96beb53934c9..de242d9598c6 100644
>> --- a/arch/arm/kvm/hyp/hyp-entry.S
>> +++ b/arch/arm/kvm/hyp/hyp-entry.S
>> @@ -71,6 +71,66 @@ __kvm_hyp_vector:
>>   	W(b)	hyp_irq
>>   	W(b)	hyp_fiq
>>   
>> +#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
>> +	.align 5
>> +__kvm_hyp_vector_bp_inv:
>> +	.global __kvm_hyp_vector_bp_inv
>> +
>> +	/*
>> +	 * We encode the exception entry in the bottom 3 bits of
>> +	 * SP, and we have to guarantee to be 8 bytes aligned.
>> +	 */
>> +	W(add)	sp, sp, #1	/* Reset 	  7 */
>> +	W(add)	sp, sp, #1	/* Undef	  6 */
>> +	W(add)	sp, sp, #1	/* Syscall	  5 */
>> +	W(add)	sp, sp, #1	/* Prefetch abort 4 */
>> +	W(add)	sp, sp, #1	/* Data abort	  3 */
>> +	W(add)	sp, sp, #1	/* HVC		  2 */
>> +	W(add)	sp, sp, #1	/* IRQ		  1 */
>> +	W(nop)			/* FIQ		  0 */
>> +
>> +	mcr	p15, 0, r0, c7, c5, 6	/* BPIALL */
>> +	isb
>> +
>> +#ifdef CONFIG_THUMB2_KERNEL
>> +	/*
>> +	 * Yet another silly hack: Use VPIDR as a temp register.
>> +	 * Thumb2 is really a pain, as SP cannot be used with most
>> +	 * of the bitwise instructions. The vect_br macro ensures
>> +	 * things gets cleaned-up.
>> +	 */
>> +	mcr	p15, 4, r0, c0, c0, 0	/* VPIDR */
>> +	mov	r0, sp
>> +	and	r0, r0, #7
>> +	sub	sp, sp, r0
>> +	push	{r1, r2}
>> +	mov	r1, r0
>> +	mrc	p15, 4, r0, c0, c0, 0	/* VPIDR */
>> +	mrc	p15, 0, r2, c0, c0, 0	/* MIDR  */
>> +	mcr	p15, 4, r2, c0, c0, 0	/* VPIDR */
>> +#endif
>> +
>> +.macro vect_br val, targ
>> +ARM(	eor	sp, sp, #\val	)
>> +ARM(	tst	sp, #7		)
>> +ARM(	eorne	sp, sp, #\val	)
>> +
>> +THUMB(	cmp	r1, #\val	)
>> +THUMB(	popeq	{r1, r2}	)
>> +
>> +	beq	\targ
>> +.endm
>> +
>> +	vect_br	0, hyp_fiq
>> +	vect_br	1, hyp_irq
>> +	vect_br	2, hyp_hvc
>> +	vect_br	3, hyp_dabt
>> +	vect_br	4, hyp_pabt
>> +	vect_br	5, hyp_svc
>> +	vect_br	6, hyp_undef
>> +	vect_br	7, hyp_reset
>> +#endif
>> +
>>   .macro invalid_vector label, cause
>>   	.align
>>   \label:	mov	r0, #\cause
>> @@ -132,6 +192,14 @@ hyp_hvc:
>>   	beq	1f
>>   
>>   	push	{lr}
>> +	/*
>> +	 * Pushing r2 here is just a way of keeping the stack aligned to
>> +	 * 8 bytes on any path that can trigger a HYP exception. Here,
>> +	 * we may well be about to jump into the guest, and the guest
>> +	 * exit would otherwise be badly decoded by our fancy
>> +	 * "decode-exception-without-a-branch" code...
>> +	 */
>> +	push	{r2, lr}
>>   
>>   	mov	lr, r0
>>   	mov	r0, r1
>> @@ -142,6 +210,7 @@ THUMB(	orr	lr, #1)
>>   	blx	lr			@ Call the HYP function
>>   
>>   	pop	{lr}
>> +	pop	{r2, lr}
> 
> 
> I don't see how this can work. This clearly isn't the right resolution
> for merging 3f7e8e2e1ebda787f156ce46e3f0a9ce2833fa4f, as it contradicts
> the very comment you are merging here.
> 
> I wouldn't be surprised if the crash you're observing is due to
> this problem (unaligned stack, bad decoding of the vector, branch to the
> wrong handler, HYP on fire).
> 
> 	M.
> 

Thanks, I see the problem now.  I removed the redundant (and 
asymmetrical) push/pop of r2 and it passes kvm-unit-tests without 
regressions.  I'll send out a v2 patch soon.

-dl


* Re: [PATCH 4.9 11/24] ARM: KVM: invalidate BTB on guest exit for Cortex-A12/A17
  2018-11-05  9:13   ` Marc Zyngier
  2018-11-07  2:22     ` David Long
@ 2018-11-07  2:23     ` David Long
  1 sibling, 0 replies; 40+ messages in thread
From: David Long @ 2018-11-07  2:23 UTC (permalink / raw)
  To: Marc Zyngier, stable, Russell King - ARM Linux, Florian Fainelli,
	Tony Lindgren, Mark Rutland
  Cc: Greg KH, Mark Brown

On 11/5/18 4:13 AM, Marc Zyngier wrote:
> David,
> 
> On 31/10/18 13:57, David Long wrote:
>> From: Marc Zyngier <marc.zyngier@arm.com>
>>
>> Commit 3f7e8e2e1ebda787f156ce46e3f0a9ce2833fa4f upstream.
>>
>> In order to avoid aliasing attacks against the branch predictor,
>> let's invalidate the BTB on guest exit. This is made complicated
>> by the fact that we cannot take a branch before invalidating the
>> BTB.
>>
>> We only apply this to A12 and A17, which are the only two ARM
>> cores on which this is useful.
>>
>> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
>> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
>> Boot-tested-by: Tony Lindgren <tony@atomide.com>
>> Reviewed-by: Tony Lindgren <tony@atomide.com>
>> Signed-off-by: David A. Long <dave.long@linaro.org>
>> ---
>>   arch/arm/include/asm/kvm_asm.h |  2 -
>>   arch/arm/include/asm/kvm_mmu.h | 17 ++++++++-
>>   arch/arm/kvm/hyp/hyp-entry.S   | 69 ++++++++++++++++++++++++++++++++++
>>   3 files changed, 85 insertions(+), 3 deletions(-)
>>
> 
> [...]
> 
>> diff --git a/arch/arm/kvm/hyp/hyp-entry.S b/arch/arm/kvm/hyp/hyp-entry.S
>> index 96beb53934c9..de242d9598c6 100644
>> --- a/arch/arm/kvm/hyp/hyp-entry.S
>> +++ b/arch/arm/kvm/hyp/hyp-entry.S
>> @@ -71,6 +71,66 @@ __kvm_hyp_vector:
>>   	W(b)	hyp_irq
>>   	W(b)	hyp_fiq
>>   
>> +#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
>> +	.align 5
>> +__kvm_hyp_vector_bp_inv:
>> +	.global __kvm_hyp_vector_bp_inv
>> +
>> +	/*
>> +	 * We encode the exception entry in the bottom 3 bits of
>> +	 * SP, and we have to guarantee to be 8 bytes aligned.
>> +	 */
>> +	W(add)	sp, sp, #1	/* Reset 	  7 */
>> +	W(add)	sp, sp, #1	/* Undef	  6 */
>> +	W(add)	sp, sp, #1	/* Syscall	  5 */
>> +	W(add)	sp, sp, #1	/* Prefetch abort 4 */
>> +	W(add)	sp, sp, #1	/* Data abort	  3 */
>> +	W(add)	sp, sp, #1	/* HVC		  2 */
>> +	W(add)	sp, sp, #1	/* IRQ		  1 */
>> +	W(nop)			/* FIQ		  0 */
>> +
>> +	mcr	p15, 0, r0, c7, c5, 6	/* BPIALL */
>> +	isb
>> +
>> +#ifdef CONFIG_THUMB2_KERNEL
>> +	/*
>> +	 * Yet another silly hack: Use VPIDR as a temp register.
>> +	 * Thumb2 is really a pain, as SP cannot be used with most
>> +	 * of the bitwise instructions. The vect_br macro ensures
>> +	 * things gets cleaned-up.
>> +	 */
>> +	mcr	p15, 4, r0, c0, c0, 0	/* VPIDR */
>> +	mov	r0, sp
>> +	and	r0, r0, #7
>> +	sub	sp, sp, r0
>> +	push	{r1, r2}
>> +	mov	r1, r0
>> +	mrc	p15, 4, r0, c0, c0, 0	/* VPIDR */
>> +	mrc	p15, 0, r2, c0, c0, 0	/* MIDR  */
>> +	mcr	p15, 4, r2, c0, c0, 0	/* VPIDR */
>> +#endif
>> +
>> +.macro vect_br val, targ
>> +ARM(	eor	sp, sp, #\val	)
>> +ARM(	tst	sp, #7		)
>> +ARM(	eorne	sp, sp, #\val	)
>> +
>> +THUMB(	cmp	r1, #\val	)
>> +THUMB(	popeq	{r1, r2}	)
>> +
>> +	beq	\targ
>> +.endm
>> +
>> +	vect_br	0, hyp_fiq
>> +	vect_br	1, hyp_irq
>> +	vect_br	2, hyp_hvc
>> +	vect_br	3, hyp_dabt
>> +	vect_br	4, hyp_pabt
>> +	vect_br	5, hyp_svc
>> +	vect_br	6, hyp_undef
>> +	vect_br	7, hyp_reset
>> +#endif
>> +
>>   .macro invalid_vector label, cause
>>   	.align
>>   \label:	mov	r0, #\cause
>> @@ -132,6 +192,14 @@ hyp_hvc:
>>   	beq	1f
>>   
>>   	push	{lr}
>> +	/*
>> +	 * Pushing r2 here is just a way of keeping the stack aligned to
>> +	 * 8 bytes on any path that can trigger a HYP exception. Here,
>> +	 * we may well be about to jump into the guest, and the guest
>> +	 * exit would otherwise be badly decoded by our fancy
>> +	 * "decode-exception-without-a-branch" code...
>> +	 */
>> +	push	{r2, lr}
>>   
>>   	mov	lr, r0
>>   	mov	r0, r1
>> @@ -142,6 +210,7 @@ THUMB(	orr	lr, #1)
>>   	blx	lr			@ Call the HYP function
>>   
>>   	pop	{lr}
>> +	pop	{r2, lr}
> 
> 
> I don't see how this can work. This clearly isn't the right resolution
> for merging 3f7e8e2e1ebda787f156ce46e3f0a9ce2833fa4f, as it contradicts
> the very comment you are merging here.
> 
> I wouldn't be surprised if the crash you're observing is due to
> this problem (unaligned stack, bad decoding of the vector, branch to the
> wrong handler, HYP on fire).
> 
> 	M.
> 

Sorry, I meant I removed the "lr" push/pop.

-dl
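For reference, the stack-alignment arithmetic behind this fix can be
modelled like so (a hedged sketch assuming 4-byte AArch32 registers and
the 8-byte alignment the decode trick relies on):

```python
# push {...} moves SP down 4 bytes per 32-bit register. The
# BTB-invalidation decode trick requires SP to be 8-byte aligned on any
# path that can take a HYP exception, hence pushing registers in pairs.

def push(sp, nregs):
    """Model of push: SP moves down 4 bytes per register pushed."""
    return sp - 4 * nregs

def aligned8(sp):
    return sp % 8 == 0
```

From an aligned SP, push {r2, lr} (two registers) keeps 8-byte alignment,
while push {lr} alone leaves SP only 4-byte aligned.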


Thread overview: 40+ messages
2018-10-31 13:56 [PATCH 4.9 00/24] V4.9 backport of 32-bit arm spectre patches David Long
2018-10-31 13:56 ` [PATCH 4.9 01/24] ARM: add more CPU part numbers for Cortex and Brahma B15 CPUs David Long
2018-10-31 13:56 ` [PATCH 4.9 02/24] ARM: bugs: prepare processor bug infrastructure David Long
2018-10-31 13:56 ` [PATCH 4.9 03/24] ARM: bugs: hook processor bug checking into SMP and suspend paths David Long
2018-10-31 13:56 ` [PATCH 4.9 04/24] ARM: bugs: add support for per-processor bug checking David Long
2018-10-31 13:56 ` [PATCH 4.9 05/24] ARM: spectre: add Kconfig symbol for CPUs vulnerable to Spectre David Long
2018-10-31 13:56 ` [PATCH 4.9 06/24] ARM: spectre-v2: harden branch predictor on context switches David Long
2018-10-31 13:56 ` [PATCH 4.9 07/24] ARM: spectre-v2: add Cortex A8 and A15 validation of the IBE bit David Long
2018-10-31 13:56 ` [PATCH 4.9 08/24] ARM: spectre-v2: harden user aborts in kernel space David Long
2018-10-31 13:56 ` [PATCH 4.9 09/24] ARM: spectre-v2: add firmware based hardening David Long
2018-11-06 10:40   ` Marc Zyngier
2018-11-06 10:55     ` Russell King - ARM Linux
2018-11-06 16:19       ` Mark Brown
2018-11-06 16:30         ` Russell King - ARM Linux
2018-11-06 16:53           ` Mark Brown
2018-11-06 16:20     ` David Long
2018-11-06 16:23       ` Russell King - ARM Linux
2018-10-31 13:56 ` [PATCH 4.9 10/24] ARM: spectre-v2: warn about incorrect context switching functions David Long
2018-10-31 13:57 ` [PATCH 4.9 11/24] ARM: KVM: invalidate BTB on guest exit for Cortex-A12/A17 David Long
2018-11-05  9:13   ` Marc Zyngier
2018-11-07  2:22     ` David Long
2018-11-07  2:23     ` David Long
2018-10-31 13:57 ` [PATCH 4.9 12/24] ARM: KVM: invalidate icache on guest exit for Cortex-A15 David Long
2018-10-31 13:57 ` [PATCH 4.9 13/24] ARM: spectre-v2: KVM: invalidate icache on guest exit for Brahma B15 David Long
2018-10-31 13:57 ` [PATCH 4.9 14/24] ARM: KVM: Add SMCCC_ARCH_WORKAROUND_1 fast handling David Long
2018-10-31 13:57 ` [PATCH 4.9 15/24] ARM: KVM: report support for SMCCC_ARCH_WORKAROUND_1 David Long
2018-10-31 13:57 ` [PATCH 4.9 16/24] ARM: spectre-v1: add speculation barrier (csdb) macros David Long
2018-10-31 13:57 ` [PATCH 4.9 17/24] ARM: spectre-v1: add array_index_mask_nospec() implementation David Long
2018-10-31 13:57 ` [PATCH 4.9 18/24] ARM: spectre-v1: fix syscall entry David Long
2018-10-31 13:57 ` [PATCH 4.9 19/24] ARM: signal: copy registers using __copy_from_user() David Long
2018-10-31 13:57 ` [PATCH 4.9 20/24] ARM: vfp: use __copy_from_user() when restoring VFP state David Long
2018-10-31 13:57 ` [PATCH 4.9 21/24] ARM: oabi-compat: copy semops using __copy_from_user() David Long
2018-10-31 13:57 ` [PATCH 4.9 22/24] ARM: use __inttype() in get_user() David Long
2018-10-31 13:57 ` [PATCH 4.9 23/24] ARM: spectre-v1: use get_user() for __get_user() David Long
2018-10-31 13:57 ` [PATCH 4.9 24/24] ARM: spectre-v1: mitigate user accesses David Long
2018-10-31 21:23 ` [PATCH 4.9 00/24] V4.9 backport of 32-bit arm spectre patches Florian Fainelli
2018-11-02  1:18 ` David Long
2018-11-02  8:54   ` Marc Zyngier
2018-11-02 17:22     ` David Long
2018-11-02 11:28   ` Russell King - ARM Linux
