* [PATCH v2 00/14] ARM Spectre variant 2 fixes
@ 2018-05-21 11:42 ` Russell King - ARM Linux
  0 siblings, 0 replies; 84+ messages in thread
From: Russell King - ARM Linux @ 2018-05-21 11:42 UTC (permalink / raw)
  To: linux-arm-kernel; +Cc: Marc Zyngier, Florian Fainelli, kvmarm, Christoffer Dall

This is the second posting - the original cover note is below.  Comments
from the previous series addressed:
- Drop R7 and R8 changes.
- Remove "PSCI" from the hypervisor version of the workaround.

 arch/arm/include/asm/bugs.h        |   6 +-
 arch/arm/include/asm/cp15.h        |   3 +
 arch/arm/include/asm/cputype.h     |   5 ++
 arch/arm/include/asm/kvm_asm.h     |   2 -
 arch/arm/include/asm/kvm_host.h    |  14 +++-
 arch/arm/include/asm/kvm_mmu.h     |  23 +++++-
 arch/arm/include/asm/proc-fns.h    |   4 +
 arch/arm/include/asm/system_misc.h |   8 ++
 arch/arm/kernel/Makefile           |   1 +
 arch/arm/kernel/bugs.c             |  18 +++++
 arch/arm/kernel/smp.c              |   4 +
 arch/arm/kernel/suspend.c          |   2 +
 arch/arm/kvm/hyp/hyp-entry.S       | 108 +++++++++++++++++++++++++-
 arch/arm/mm/Kconfig                |  23 ++++++
 arch/arm/mm/Makefile               |   2 +-
 arch/arm/mm/fault.c                |   3 +
 arch/arm/mm/proc-macros.S          |   3 +-
 arch/arm/mm/proc-v7-2level.S       |   6 --
 arch/arm/mm/proc-v7-bugs.c         | 130 +++++++++++++++++++++++++++++++
 arch/arm/mm/proc-v7.S              | 154 +++++++++++++++++++++++++++++--------
 20 files changed, 469 insertions(+), 50 deletions(-)
 create mode 100644 arch/arm/kernel/bugs.c
 create mode 100644 arch/arm/mm/proc-v7-bugs.c

On Wed, May 16, 2018 at 11:59:49AM +0100, Russell King - ARM Linux wrote:
> This series addresses the Spectre variant 2 issues on ARM Cortex and
> Broadcom Brahma B15 CPUs.  Due to the complexity of the bug, it is not
> possible to verify that this series fixes any of the bugs, since I
> have not been able to reproduce these exact scenarios using test
> programs.
> 
> I believe that this covers the entire extent of the Spectre variant 2
> issues, with the exception of Cortex A53 and Cortex A72 processors as
> these require a substantially more complex solution (except where the
> workaround is implemented in PSCI firmware.)
> 
> Spectre variant 1 is not covered by this series.
> 
> The patch series is based partly on Marc Zyngier's work from February -
> two of the KVM patches are from Marc's work.
> 
> The main differences are:
> - Inclusion of more processors as per current ARM Ltd security update
>   documentation.
> - Extension of the "bugs" infrastructure to detect Cortex A8 and Cortex A15
>   CPUs where the IBE bit has not been set, checked on (re-)entry to the
>   kernel through all paths.
> - Handle all suspect userspace-touching-kernelspace aborts irrespective
>   of mapping type.
> 
> The first patch will trivially conflict with the Broadcom Brahma
> updates already in arm-soc - it has been necessary to independently
> add the ID definitions for the B15 CPU.
> 
> Having worked through this series, I'm of the opinion that the
> define_processor_functions macro in proc-v7 is probably more hassle
> than it's worth - here, we don't need the global equivalent symbols,
> because we never refer to them from the kernel code for any V7
> processor (MULTI_CPU is always defined.)
> 
> This series is currently in my "spectre" branch (along with some
> Spectre variant 1 patches.)
> 
> Please carefully review.
> 
>  arch/arm/include/asm/bugs.h        |   6 +-
>  arch/arm/include/asm/cp15.h        |   3 +
>  arch/arm/include/asm/cputype.h     |   5 ++
>  arch/arm/include/asm/kvm_asm.h     |   2 -
>  arch/arm/include/asm/kvm_host.h    |  14 +++-
>  arch/arm/include/asm/kvm_mmu.h     |  23 +++++-
>  arch/arm/include/asm/proc-fns.h    |   4 +
>  arch/arm/include/asm/system_misc.h |   8 ++
>  arch/arm/kernel/Makefile           |   1 +
>  arch/arm/kernel/bugs.c             |  18 +++++
>  arch/arm/kernel/smp.c              |   4 +
>  arch/arm/kernel/suspend.c          |   2 +
>  arch/arm/kvm/hyp/hyp-entry.S       | 108 ++++++++++++++++++++++++-
>  arch/arm/mm/Kconfig                |  23 ++++++
>  arch/arm/mm/Makefile               |   2 +-
>  arch/arm/mm/fault.c                |   3 +
>  arch/arm/mm/proc-macros.S          |   3 +-
>  arch/arm/mm/proc-v7-2level.S       |   6 --
>  arch/arm/mm/proc-v7-bugs.c         | 130 ++++++++++++++++++++++++++++++
>  arch/arm/mm/proc-v7.S              | 158 +++++++++++++++++++++++++++++--------
>  20 files changed, 471 insertions(+), 52 deletions(-)
> 
> -- 
> RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
> FTTC broadband for 0.8mile line in suburbia: sync at 8.8Mbps down 630kbps up
> According to speedtest.net: 8.21Mbps down 510kbps up
> 
> _______________________________________________
> linux-arm-kernel mailing list
> linux-arm-kernel@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line in suburbia: sync at 8.8Mbps down 630kbps up
According to speedtest.net: 8.21Mbps down 510kbps up

^ permalink raw reply	[flat|nested] 84+ messages in thread

* [PATCH 01/14] ARM: add CPU part numbers for Cortex A73, A75 and Brahma B15
  2018-05-21 11:42 ` Russell King - ARM Linux
@ 2018-05-21 11:44   ` Russell King
  -1 siblings, 0 replies; 84+ messages in thread
From: Russell King @ 2018-05-21 11:44 UTC (permalink / raw)
  To: linux-arm-kernel; +Cc: Marc Zyngier, Florian Fainelli, kvmarm

Add CPU part numbers for the above-mentioned CPUs.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
---
 arch/arm/include/asm/cputype.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/arm/include/asm/cputype.h b/arch/arm/include/asm/cputype.h
index cb546425da8a..adc4a3eef815 100644
--- a/arch/arm/include/asm/cputype.h
+++ b/arch/arm/include/asm/cputype.h
@@ -77,8 +77,13 @@
 #define ARM_CPU_PART_CORTEX_A12		0x4100c0d0
 #define ARM_CPU_PART_CORTEX_A17		0x4100c0e0
 #define ARM_CPU_PART_CORTEX_A15		0x4100c0f0
+#define ARM_CPU_PART_CORTEX_A73		0x4100d090
+#define ARM_CPU_PART_CORTEX_A75		0x4100d0a0
 #define ARM_CPU_PART_MASK		0xff00fff0
 
+/* Broadcom cores */
+#define ARM_CPU_PART_BRAHMA_B15		0x420000f0
+
 /* DEC implemented cores */
 #define ARM_CPU_PART_SA1100		0x4400a110
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 84+ messages in thread

* [PATCH 02/14] ARM: bugs: prepare processor bug infrastructure
  2018-05-21 11:42 ` Russell King - ARM Linux
@ 2018-05-21 11:44   ` Russell King
  -1 siblings, 0 replies; 84+ messages in thread
From: Russell King @ 2018-05-21 11:44 UTC (permalink / raw)
  To: linux-arm-kernel; +Cc: Marc Zyngier, Florian Fainelli, Christoffer Dall, kvmarm

Prepare the processor bug infrastructure so that it can be expanded to
check for per-processor bugs.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
---
 arch/arm/include/asm/bugs.h | 4 ++--
 arch/arm/kernel/Makefile    | 1 +
 arch/arm/kernel/bugs.c      | 9 +++++++++
 3 files changed, 12 insertions(+), 2 deletions(-)
 create mode 100644 arch/arm/kernel/bugs.c

diff --git a/arch/arm/include/asm/bugs.h b/arch/arm/include/asm/bugs.h
index a97f1ea708d1..ed122d294f3f 100644
--- a/arch/arm/include/asm/bugs.h
+++ b/arch/arm/include/asm/bugs.h
@@ -10,10 +10,10 @@
 #ifndef __ASM_BUGS_H
 #define __ASM_BUGS_H
 
-#ifdef CONFIG_MMU
 extern void check_writebuffer_bugs(void);
 
-#define check_bugs() check_writebuffer_bugs()
+#ifdef CONFIG_MMU
+extern void check_bugs(void);
 #else
 #define check_bugs() do { } while (0)
 #endif
diff --git a/arch/arm/kernel/Makefile b/arch/arm/kernel/Makefile
index b59ac4bf82b8..8cad59465af3 100644
--- a/arch/arm/kernel/Makefile
+++ b/arch/arm/kernel/Makefile
@@ -31,6 +31,7 @@ else
 obj-y		+= entry-armv.o
 endif
 
+obj-$(CONFIG_MMU)		+= bugs.o
 obj-$(CONFIG_CPU_IDLE)		+= cpuidle.o
 obj-$(CONFIG_ISA_DMA_API)	+= dma.o
 obj-$(CONFIG_FIQ)		+= fiq.o fiqasm.o
diff --git a/arch/arm/kernel/bugs.c b/arch/arm/kernel/bugs.c
new file mode 100644
index 000000000000..88024028bb70
--- /dev/null
+++ b/arch/arm/kernel/bugs.c
@@ -0,0 +1,9 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/init.h>
+#include <asm/bugs.h>
+#include <asm/proc-fns.h>
+
+void __init check_bugs(void)
+{
+	check_writebuffer_bugs();
+}
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 84+ messages in thread

* [PATCH 03/14] ARM: bugs: hook processor bug checking into SMP and suspend paths
  2018-05-21 11:42 ` Russell King - ARM Linux
@ 2018-05-21 11:44   ` Russell King
  -1 siblings, 0 replies; 84+ messages in thread
From: Russell King @ 2018-05-21 11:44 UTC (permalink / raw)
  To: linux-arm-kernel; +Cc: Marc Zyngier, Florian Fainelli, Christoffer Dall, kvmarm

Check for CPU bugs when secondary processors are being brought online,
and also when CPUs are resuming from a low power mode.  This gives an
opportunity to check that processor specific bug workarounds are
correctly enabled for all paths by which a CPU re-enters the kernel.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
---
 arch/arm/include/asm/bugs.h | 2 ++
 arch/arm/kernel/bugs.c      | 5 +++++
 arch/arm/kernel/smp.c       | 4 ++++
 arch/arm/kernel/suspend.c   | 2 ++
 4 files changed, 13 insertions(+)

diff --git a/arch/arm/include/asm/bugs.h b/arch/arm/include/asm/bugs.h
index ed122d294f3f..73a99c72a930 100644
--- a/arch/arm/include/asm/bugs.h
+++ b/arch/arm/include/asm/bugs.h
@@ -14,8 +14,10 @@ extern void check_writebuffer_bugs(void);
 
 #ifdef CONFIG_MMU
 extern void check_bugs(void);
+extern void check_other_bugs(void);
 #else
 #define check_bugs() do { } while (0)
+#define check_other_bugs() do { } while (0)
 #endif
 
 #endif
diff --git a/arch/arm/kernel/bugs.c b/arch/arm/kernel/bugs.c
index 88024028bb70..16e7ba2a9cc4 100644
--- a/arch/arm/kernel/bugs.c
+++ b/arch/arm/kernel/bugs.c
@@ -3,7 +3,12 @@
 #include <asm/bugs.h>
 #include <asm/proc-fns.h>
 
+void check_other_bugs(void)
+{
+}
+
 void __init check_bugs(void)
 {
 	check_writebuffer_bugs();
+	check_other_bugs();
 }
diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c
index 2da087926ebe..5ad0b67b9e33 100644
--- a/arch/arm/kernel/smp.c
+++ b/arch/arm/kernel/smp.c
@@ -31,6 +31,7 @@
 #include <linux/irq_work.h>
 
 #include <linux/atomic.h>
+#include <asm/bugs.h>
 #include <asm/smp.h>
 #include <asm/cacheflush.h>
 #include <asm/cpu.h>
@@ -405,6 +406,9 @@ asmlinkage void secondary_start_kernel(void)
 	 * before we continue - which happens after __cpu_up returns.
 	 */
 	set_cpu_online(cpu, true);
+
+	check_other_bugs();
+
 	complete(&cpu_running);
 
 	local_irq_enable();
diff --git a/arch/arm/kernel/suspend.c b/arch/arm/kernel/suspend.c
index a40ebb7c0896..d08099269e35 100644
--- a/arch/arm/kernel/suspend.c
+++ b/arch/arm/kernel/suspend.c
@@ -3,6 +3,7 @@
 #include <linux/slab.h>
 #include <linux/mm_types.h>
 
+#include <asm/bugs.h>
 #include <asm/cacheflush.h>
 #include <asm/idmap.h>
 #include <asm/pgalloc.h>
@@ -36,6 +37,7 @@ int cpu_suspend(unsigned long arg, int (*fn)(unsigned long))
 		cpu_switch_mm(mm->pgd, mm);
 		local_flush_bp_all();
 		local_flush_tlb_all();
+		check_other_bugs();
 	}
 
 	return ret;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 84+ messages in thread

* [PATCH 04/14] ARM: bugs: add support for per-processor bug checking
  2018-05-21 11:42 ` Russell King - ARM Linux
@ 2018-05-21 11:44   ` Russell King
  -1 siblings, 0 replies; 84+ messages in thread
From: Russell King @ 2018-05-21 11:44 UTC (permalink / raw)
  To: linux-arm-kernel; +Cc: Marc Zyngier, Florian Fainelli, kvmarm

Add support for per-processor bug checking - each processor function
descriptor gains a function pointer for this check, which must not be
an __init function.  If non-NULL, this will be called whenever a CPU
enters the kernel via whichever path (boot CPU, secondary CPU startup,
CPU resuming, etc.)

This allows processor specific bug checks to validate that workaround
bits are properly enabled by firmware via all entry paths to the kernel.
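
(Illustrative sketch only, not part of the patch: a hypothetical hook for one
CPU, using the made-up name cpu_example_check_bugs.  A real implementation is
wired in via the new bugs= argument to define_processor_functions added below
and reached through processor.check_bugs from check_other_bugs().)

	/*
	 * Must not be __init: it also runs when a secondary CPU is brought
	 * up or a CPU resumes from a low power state, after init memory has
	 * been freed.
	 */
	void cpu_example_check_bugs(void)
	{
		/* e.g. warn if a firmware-owned workaround bit is not set */
	}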

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
---
 arch/arm/include/asm/proc-fns.h | 4 ++++
 arch/arm/kernel/bugs.c          | 4 ++++
 arch/arm/mm/proc-macros.S       | 3 ++-
 3 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/arch/arm/include/asm/proc-fns.h b/arch/arm/include/asm/proc-fns.h
index f2e1af45bd6f..e25f4392e1b2 100644
--- a/arch/arm/include/asm/proc-fns.h
+++ b/arch/arm/include/asm/proc-fns.h
@@ -37,6 +37,10 @@ extern struct processor {
 	 */
 	void (*_proc_init)(void);
 	/*
+	 * Check for processor bugs
+	 */
+	void (*check_bugs)(void);
+	/*
 	 * Disable any processor specifics
 	 */
 	void (*_proc_fin)(void);
diff --git a/arch/arm/kernel/bugs.c b/arch/arm/kernel/bugs.c
index 16e7ba2a9cc4..7be511310191 100644
--- a/arch/arm/kernel/bugs.c
+++ b/arch/arm/kernel/bugs.c
@@ -5,6 +5,10 @@
 
 void check_other_bugs(void)
 {
+#ifdef MULTI_CPU
+	if (processor.check_bugs)
+		processor.check_bugs();
+#endif
 }
 
 void __init check_bugs(void)
diff --git a/arch/arm/mm/proc-macros.S b/arch/arm/mm/proc-macros.S
index f10e31d0730a..81d0efb055c6 100644
--- a/arch/arm/mm/proc-macros.S
+++ b/arch/arm/mm/proc-macros.S
@@ -273,13 +273,14 @@
 	mcr	p15, 0, ip, c7, c10, 4		@ data write barrier
 	.endm
 
-.macro define_processor_functions name:req, dabort:req, pabort:req, nommu=0, suspend=0
+.macro define_processor_functions name:req, dabort:req, pabort:req, nommu=0, suspend=0, bugs=0
 	.type	\name\()_processor_functions, #object
 	.align 2
 ENTRY(\name\()_processor_functions)
 	.word	\dabort
 	.word	\pabort
 	.word	cpu_\name\()_proc_init
+	.word	\bugs
 	.word	cpu_\name\()_proc_fin
 	.word	cpu_\name\()_reset
 	.word	cpu_\name\()_do_idle
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 84+ messages in thread

* [PATCH 05/14] ARM: spectre: add Kconfig symbol for CPUs vulnerable to Spectre
  2018-05-21 11:42 ` Russell King - ARM Linux
@ 2018-05-21 11:44   ` Russell King
  -1 siblings, 0 replies; 84+ messages in thread
From: Russell King @ 2018-05-21 11:44 UTC (permalink / raw)
  To: linux-arm-kernel; +Cc: Marc Zyngier, Florian Fainelli, kvmarm

Add a Kconfig symbol for CPUs which are vulnerable to the Spectre
attacks.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
---
 arch/arm/mm/Kconfig | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/arm/mm/Kconfig b/arch/arm/mm/Kconfig
index 7f14acf67caf..6f3ef86b4cb7 100644
--- a/arch/arm/mm/Kconfig
+++ b/arch/arm/mm/Kconfig
@@ -415,6 +415,7 @@ config CPU_V7
 	select CPU_CP15_MPU if !MMU
 	select CPU_HAS_ASID if MMU
 	select CPU_PABRT_V7
+	select CPU_SPECTRE if MMU
 	select CPU_THUMB_CAPABLE
 	select CPU_TLB_V7 if MMU
 
@@ -826,6 +827,9 @@ config CPU_BPREDICT_DISABLE
 	help
 	  Say Y here to disable branch prediction.  If unsure, say N.
 
+config CPU_SPECTRE
+	bool
+
 config TLS_REG_EMUL
 	bool
 	select NEED_KUSER_HELPERS
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 84+ messages in thread

* [PATCH 06/14] ARM: spectre-v2: harden branch predictor on context switches
  2018-05-21 11:42 ` Russell King - ARM Linux
@ 2018-05-21 11:44   ` Russell King
  -1 siblings, 0 replies; 84+ messages in thread
From: Russell King @ 2018-05-21 11:44 UTC (permalink / raw)
  To: linux-arm-kernel; +Cc: Marc Zyngier, Florian Fainelli, kvmarm

Harden the branch predictor against Spectre v2 attacks on context
switches for ARMv7 and later CPUs.  We do this by:

Cortex A9, A12, A17, A73, A75: invalidating the BTB.
Cortex A15, Brahma B15: invalidating the instruction cache.

Cortex A57 and Cortex A72 are not addressed in this patch.

Cortex R7 and Cortex R8 are also not addressed as we do not enforce
memory protection on these cores.
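
(For readers less familiar with the CP15 encodings used in the assembly below,
the two maintenance operations correspond roughly to the following GCC inline
assembly - an illustrative sketch only, the patch implements them directly in
proc-v7.S:)

	/* BPIALL: invalidate all branch predictor (BTB) entries */
	asm volatile("mcr p15, 0, %0, c7, c5, 6" : : "r" (0));

	/*
	 * ICIALLU: invalidate the entire instruction cache to the PoU; on
	 * Cortex-A15 and Brahma-B15 this is the operation that also
	 * invalidates the branch predictor, provided firmware has enabled
	 * it (see the IBE checks added later in this series).
	 */
	asm volatile("mcr p15, 0, %0, c7, c5, 0" : : "r" (0));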

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
---
 arch/arm/mm/Kconfig          |  19 +++++++
 arch/arm/mm/proc-v7-2level.S |   6 ---
 arch/arm/mm/proc-v7.S        | 125 +++++++++++++++++++++++++++++++++----------
 3 files changed, 115 insertions(+), 35 deletions(-)

diff --git a/arch/arm/mm/Kconfig b/arch/arm/mm/Kconfig
index 6f3ef86b4cb7..9357ff52c221 100644
--- a/arch/arm/mm/Kconfig
+++ b/arch/arm/mm/Kconfig
@@ -830,6 +830,25 @@ config CPU_BPREDICT_DISABLE
 config CPU_SPECTRE
 	bool
 
+config HARDEN_BRANCH_PREDICTOR
+	bool "Harden the branch predictor against aliasing attacks" if EXPERT
+	depends on CPU_SPECTRE
+	default y
+	help
+	   Speculation attacks against some high-performance processors rely
+	   on being able to manipulate the branch predictor for a victim
+	   context by executing aliasing branches in the attacker context.
+	   Such attacks can be partially mitigated against by clearing
+	   internal branch predictor state and limiting the prediction
+	   logic in some situations.
+
+	   This config option will take CPU-specific actions to harden
+	   the branch predictor against aliasing attacks and may rely on
+	   specific instruction sequences or control bits being set by
+	   the system firmware.
+
+	   If unsure, say Y.
+
 config TLS_REG_EMUL
 	bool
 	select NEED_KUSER_HELPERS
diff --git a/arch/arm/mm/proc-v7-2level.S b/arch/arm/mm/proc-v7-2level.S
index c6141a5435c3..f8d45ad2a515 100644
--- a/arch/arm/mm/proc-v7-2level.S
+++ b/arch/arm/mm/proc-v7-2level.S
@@ -41,11 +41,6 @@
  *	even on Cortex-A8 revisions not affected by 430973.
  *	If IBE is not set, the flush BTAC/BTB won't do anything.
  */
-ENTRY(cpu_ca8_switch_mm)
-#ifdef CONFIG_MMU
-	mov	r2, #0
-	mcr	p15, 0, r2, c7, c5, 6		@ flush BTAC/BTB
-#endif
 ENTRY(cpu_v7_switch_mm)
 #ifdef CONFIG_MMU
 	mmid	r1, r1				@ get mm->context.id
@@ -66,7 +61,6 @@ ENTRY(cpu_v7_switch_mm)
 #endif
 	bx	lr
 ENDPROC(cpu_v7_switch_mm)
-ENDPROC(cpu_ca8_switch_mm)
 
 /*
  *	cpu_v7_set_pte_ext(ptep, pte)
diff --git a/arch/arm/mm/proc-v7.S b/arch/arm/mm/proc-v7.S
index d55d493f9a1e..a2d433d59848 100644
--- a/arch/arm/mm/proc-v7.S
+++ b/arch/arm/mm/proc-v7.S
@@ -93,6 +93,17 @@ ENTRY(cpu_v7_dcache_clean_area)
 	ret	lr
 ENDPROC(cpu_v7_dcache_clean_area)
 
+ENTRY(cpu_v7_iciallu_switch_mm)
+	mov	r3, #0
+	mcr	p15, 0, r3, c7, c5, 0		@ ICIALLU
+	b	cpu_v7_switch_mm
+ENDPROC(cpu_v7_iciallu_switch_mm)
+ENTRY(cpu_v7_bpiall_switch_mm)
+	mov	r3, #0
+	mcr	p15, 0, r3, c7, c5, 6		@ flush BTAC/BTB
+	b	cpu_v7_switch_mm
+ENDPROC(cpu_v7_bpiall_switch_mm)
+
 	string	cpu_v7_name, "ARMv7 Processor"
 	.align
 
@@ -158,31 +169,6 @@ ENTRY(cpu_v7_do_resume)
 ENDPROC(cpu_v7_do_resume)
 #endif
 
-/*
- * Cortex-A8
- */
-	globl_equ	cpu_ca8_proc_init,	cpu_v7_proc_init
-	globl_equ	cpu_ca8_proc_fin,	cpu_v7_proc_fin
-	globl_equ	cpu_ca8_reset,		cpu_v7_reset
-	globl_equ	cpu_ca8_do_idle,	cpu_v7_do_idle
-	globl_equ	cpu_ca8_dcache_clean_area, cpu_v7_dcache_clean_area
-	globl_equ	cpu_ca8_set_pte_ext,	cpu_v7_set_pte_ext
-	globl_equ	cpu_ca8_suspend_size,	cpu_v7_suspend_size
-#ifdef CONFIG_ARM_CPU_SUSPEND
-	globl_equ	cpu_ca8_do_suspend,	cpu_v7_do_suspend
-	globl_equ	cpu_ca8_do_resume,	cpu_v7_do_resume
-#endif
-
-/*
- * Cortex-A9 processor functions
- */
-	globl_equ	cpu_ca9mp_proc_init,	cpu_v7_proc_init
-	globl_equ	cpu_ca9mp_proc_fin,	cpu_v7_proc_fin
-	globl_equ	cpu_ca9mp_reset,	cpu_v7_reset
-	globl_equ	cpu_ca9mp_do_idle,	cpu_v7_do_idle
-	globl_equ	cpu_ca9mp_dcache_clean_area, cpu_v7_dcache_clean_area
-	globl_equ	cpu_ca9mp_switch_mm,	cpu_v7_switch_mm
-	globl_equ	cpu_ca9mp_set_pte_ext,	cpu_v7_set_pte_ext
 .globl	cpu_ca9mp_suspend_size
 .equ	cpu_ca9mp_suspend_size, cpu_v7_suspend_size + 4 * 2
 #ifdef CONFIG_ARM_CPU_SUSPEND
@@ -548,10 +534,75 @@ ENDPROC(__v7_setup)
 
 	@ define struct processor (see <asm/proc-fns.h> and proc-macros.S)
 	define_processor_functions v7, dabort=v7_early_abort, pabort=v7_pabort, suspend=1
+
+#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+	@ generic v7 bpiall on context switch
+	globl_equ	cpu_v7_bpiall_proc_init,	cpu_v7_proc_init
+	globl_equ	cpu_v7_bpiall_proc_fin,		cpu_v7_proc_fin
+	globl_equ	cpu_v7_bpiall_reset,		cpu_v7_reset
+	globl_equ	cpu_v7_bpiall_do_idle,		cpu_v7_do_idle
+	globl_equ	cpu_v7_bpiall_dcache_clean_area, cpu_v7_dcache_clean_area
+	globl_equ	cpu_v7_bpiall_set_pte_ext,	cpu_v7_set_pte_ext
+	globl_equ	cpu_v7_bpiall_suspend_size,	cpu_v7_suspend_size
+#ifdef CONFIG_ARM_CPU_SUSPEND
+	globl_equ	cpu_v7_bpiall_do_suspend,	cpu_v7_do_suspend
+	globl_equ	cpu_v7_bpiall_do_resume,	cpu_v7_do_resume
+#endif
+	define_processor_functions v7_bpiall, dabort=v7_early_abort, pabort=v7_pabort, suspend=1
+
+#define HARDENED_BPIALL_PROCESSOR_FUNCTIONS v7_bpiall_processor_functions
+#else
+#define HARDENED_BPIALL_PROCESSOR_FUNCTIONS v7_processor_functions
+#endif
+
 #ifndef CONFIG_ARM_LPAE
+	@ Cortex-A8 - always needs bpiall switch_mm implementation
+	globl_equ	cpu_ca8_proc_init,	cpu_v7_proc_init
+	globl_equ	cpu_ca8_proc_fin,	cpu_v7_proc_fin
+	globl_equ	cpu_ca8_reset,		cpu_v7_reset
+	globl_equ	cpu_ca8_do_idle,	cpu_v7_do_idle
+	globl_equ	cpu_ca8_dcache_clean_area, cpu_v7_dcache_clean_area
+	globl_equ	cpu_ca8_set_pte_ext,	cpu_v7_set_pte_ext
+	globl_equ	cpu_ca8_switch_mm,	cpu_v7_bpiall_switch_mm
+	globl_equ	cpu_ca8_suspend_size,	cpu_v7_suspend_size
+#ifdef CONFIG_ARM_CPU_SUSPEND
+	globl_equ	cpu_ca8_do_suspend,	cpu_v7_do_suspend
+	globl_equ	cpu_ca8_do_resume,	cpu_v7_do_resume
+#endif
 	define_processor_functions ca8, dabort=v7_early_abort, pabort=v7_pabort, suspend=1
+
+	@ Cortex-A9 - needs more registers preserved across suspend/resume
+	@ and bpiall switch_mm for hardening
+	globl_equ	cpu_ca9mp_proc_init,	cpu_v7_proc_init
+	globl_equ	cpu_ca9mp_proc_fin,	cpu_v7_proc_fin
+	globl_equ	cpu_ca9mp_reset,	cpu_v7_reset
+	globl_equ	cpu_ca9mp_do_idle,	cpu_v7_do_idle
+	globl_equ	cpu_ca9mp_dcache_clean_area, cpu_v7_dcache_clean_area
+#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+	globl_equ	cpu_ca9mp_switch_mm,	cpu_v7_bpiall_switch_mm
+#else
+	globl_equ	cpu_ca9mp_switch_mm,	cpu_v7_switch_mm
+#endif
+	globl_equ	cpu_ca9mp_set_pte_ext,	cpu_v7_set_pte_ext
 	define_processor_functions ca9mp, dabort=v7_early_abort, pabort=v7_pabort, suspend=1
 #endif
+
+	@ Cortex-A15 - needs iciallu switch_mm for hardening
+	globl_equ	cpu_ca15_proc_init,	cpu_v7_proc_init
+	globl_equ	cpu_ca15_proc_fin,	cpu_v7_proc_fin
+	globl_equ	cpu_ca15_reset,		cpu_v7_reset
+	globl_equ	cpu_ca15_do_idle,	cpu_v7_do_idle
+	globl_equ	cpu_ca15_dcache_clean_area, cpu_v7_dcache_clean_area
+#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+	globl_equ	cpu_ca15_switch_mm,	cpu_v7_iciallu_switch_mm
+#else
+	globl_equ	cpu_ca15_switch_mm,	cpu_v7_switch_mm
+#endif
+	globl_equ	cpu_ca15_set_pte_ext,	cpu_v7_set_pte_ext
+	globl_equ	cpu_ca15_suspend_size,	cpu_v7_suspend_size
+	globl_equ	cpu_ca15_do_suspend,	cpu_v7_do_suspend
+	globl_equ	cpu_ca15_do_resume,	cpu_v7_do_resume
+	define_processor_functions ca15, dabort=v7_early_abort, pabort=v7_pabort, suspend=1
 #ifdef CONFIG_CPU_PJ4B
 	define_processor_functions pj4b, dabort=v7_early_abort, pabort=v7_pabort, suspend=1
 #endif
@@ -658,7 +709,7 @@ ENDPROC(__v7_setup)
 __v7_ca12mp_proc_info:
 	.long	0x410fc0d0
 	.long	0xff0ffff0
-	__v7_proc __v7_ca12mp_proc_info, __v7_ca12mp_setup
+	__v7_proc __v7_ca12mp_proc_info, __v7_ca12mp_setup, proc_fns = HARDENED_BPIALL_PROCESSOR_FUNCTIONS
 	.size	__v7_ca12mp_proc_info, . - __v7_ca12mp_proc_info
 
 	/*
@@ -668,7 +719,7 @@ ENDPROC(__v7_setup)
 __v7_ca15mp_proc_info:
 	.long	0x410fc0f0
 	.long	0xff0ffff0
-	__v7_proc __v7_ca15mp_proc_info, __v7_ca15mp_setup
+	__v7_proc __v7_ca15mp_proc_info, __v7_ca15mp_setup, proc_fns = ca15_processor_functions
 	.size	__v7_ca15mp_proc_info, . - __v7_ca15mp_proc_info
 
 	/*
@@ -678,7 +729,7 @@ ENDPROC(__v7_setup)
 __v7_b15mp_proc_info:
 	.long	0x420f00f0
 	.long	0xff0ffff0
-	__v7_proc __v7_b15mp_proc_info, __v7_b15mp_setup, cache_fns = b15_cache_fns
+	__v7_proc __v7_b15mp_proc_info, __v7_b15mp_setup, proc_fns = ca15_processor_functions, cache_fns = b15_cache_fns
 	.size	__v7_b15mp_proc_info, . - __v7_b15mp_proc_info
 
 	/*
@@ -688,9 +739,25 @@ ENDPROC(__v7_setup)
 __v7_ca17mp_proc_info:
 	.long	0x410fc0e0
 	.long	0xff0ffff0
-	__v7_proc __v7_ca17mp_proc_info, __v7_ca17mp_setup
+	__v7_proc __v7_ca17mp_proc_info, __v7_ca17mp_setup, proc_fns = HARDENED_BPIALL_PROCESSOR_FUNCTIONS
 	.size	__v7_ca17mp_proc_info, . - __v7_ca17mp_proc_info
 
+	/* ARM Ltd. Cortex A73 processor */
+	.type	__v7_ca73_proc_info, #object
+__v7_ca73_proc_info:
+	.long	0x410fd090
+	.long	0xff0ffff0
+	__v7_proc __v7_ca73_proc_info, __v7_setup, proc_fns = HARDENED_BPIALL_PROCESSOR_FUNCTIONS
+	.size	__v7_ca73_proc_info, . - __v7_ca73_proc_info
+
+	/* ARM Ltd. Cortex A75 processor */
+	.type	__v7_ca75_proc_info, #object
+__v7_ca75_proc_info:
+	.long	0x410fd0a0
+	.long	0xff0ffff0
+	__v7_proc __v7_ca75_proc_info, __v7_setup, proc_fns = HARDENED_BPIALL_PROCESSOR_FUNCTIONS
+	.size	__v7_ca75_proc_info, . - __v7_ca75_proc_info
+
 	/*
 	 * Qualcomm Inc. Krait processors.
 	 */
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 84+ messages in thread

* [PATCH 07/14] ARM: spectre-v2: add Cortex A8 and A15 validation of the IBE bit
  2018-05-21 11:42 ` Russell King - ARM Linux
@ 2018-05-21 11:44   ` Russell King
  -1 siblings, 0 replies; 84+ messages in thread
From: Russell King @ 2018-05-21 11:44 UTC (permalink / raw)
  To: linux-arm-kernel; +Cc: Marc Zyngier, Florian Fainelli, kvmarm

When the branch predictor hardening is enabled, firmware must have set
the IBE bit in the auxiliary control register.  If this bit has not
been set, the Spectre workarounds will not be functional.

Add validation that this bit is set, and print a warning at alert level
if this is not the case.
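
(For context: Linux normally runs in the non-secure world, where the auxiliary
control register is typically not writable, so only boot firmware can set this
bit.  Roughly, the firmware side is expected to have done something like the
following - an illustrative sketch only, not code from this series:)

	/*
	 * Secure-world boot firmware, not Linux: set the "IBE" bit in the
	 * ACTLR so that BPIALL/ICIALLU actually affect the branch predictor.
	 * The bit is 6 on Cortex-A8 and 0 on Cortex-A15, matching the checks
	 * added below.
	 */
	unsigned int actlr;

	asm volatile("mrc p15, 0, %0, c1, c0, 1" : "=r" (actlr));
	actlr |= 1 << 6;			/* bit 0 on Cortex-A15 */
	asm volatile("mcr p15, 0, %0, c1, c0, 1" : : "r" (actlr));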

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
---
 arch/arm/mm/Makefile       |  2 +-
 arch/arm/mm/proc-v7-bugs.c | 29 +++++++++++++++++++++++++++++
 arch/arm/mm/proc-v7.S      |  4 ++--
 3 files changed, 32 insertions(+), 3 deletions(-)
 create mode 100644 arch/arm/mm/proc-v7-bugs.c

diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile
index 9dbb84923e12..a0c40610210c 100644
--- a/arch/arm/mm/Makefile
+++ b/arch/arm/mm/Makefile
@@ -97,7 +97,7 @@ obj-$(CONFIG_CPU_MOHAWK)	+= proc-mohawk.o
 obj-$(CONFIG_CPU_FEROCEON)	+= proc-feroceon.o
 obj-$(CONFIG_CPU_V6)		+= proc-v6.o
 obj-$(CONFIG_CPU_V6K)		+= proc-v6.o
-obj-$(CONFIG_CPU_V7)		+= proc-v7.o
+obj-$(CONFIG_CPU_V7)		+= proc-v7.o proc-v7-bugs.o
 obj-$(CONFIG_CPU_V7M)		+= proc-v7m.o
 
 AFLAGS_proc-v6.o	:=-Wa,-march=armv6
diff --git a/arch/arm/mm/proc-v7-bugs.c b/arch/arm/mm/proc-v7-bugs.c
new file mode 100644
index 000000000000..a32ce13479d9
--- /dev/null
+++ b/arch/arm/mm/proc-v7-bugs.c
@@ -0,0 +1,29 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/kernel.h>
+#include <linux/smp.h>
+
+static __maybe_unused void cpu_v7_check_auxcr_set(u32 mask, const char *msg)
+{
+	u32 aux_cr;
+
+	asm("mrc p15, 0, %0, c1, c0, 1" : "=r" (aux_cr));
+
+	if ((aux_cr & mask) != mask)
+		pr_err("CPU%u: %s", smp_processor_id(), msg);
+}
+
+static void check_spectre_auxcr(u32 bit)
+{
+	if (IS_ENABLED(CONFIG_HARDEN_BRANCH_PREDICTOR))
+		cpu_v7_check_auxcr_set(bit, "Spectre v2: firmware did not set auxiliary control register IBE bit, system vulnerable\n");
+}
+
+void cpu_v7_ca8_ibe(void)
+{
+	check_spectre_auxcr(BIT(6));
+}
+
+void cpu_v7_ca15_ibe(void)
+{
+	check_spectre_auxcr(BIT(0));
+}
diff --git a/arch/arm/mm/proc-v7.S b/arch/arm/mm/proc-v7.S
index a2d433d59848..fa9214036fb3 100644
--- a/arch/arm/mm/proc-v7.S
+++ b/arch/arm/mm/proc-v7.S
@@ -569,7 +569,7 @@ ENDPROC(__v7_setup)
 	globl_equ	cpu_ca8_do_suspend,	cpu_v7_do_suspend
 	globl_equ	cpu_ca8_do_resume,	cpu_v7_do_resume
 #endif
-	define_processor_functions ca8, dabort=v7_early_abort, pabort=v7_pabort, suspend=1
+	define_processor_functions ca8, dabort=v7_early_abort, pabort=v7_pabort, suspend=1, bugs=cpu_v7_ca8_ibe
 
 	@ Cortex-A9 - needs more registers preserved across suspend/resume
 	@ and bpiall switch_mm for hardening
@@ -602,7 +602,7 @@ ENDPROC(__v7_setup)
 	globl_equ	cpu_ca15_suspend_size,	cpu_v7_suspend_size
 	globl_equ	cpu_ca15_do_suspend,	cpu_v7_do_suspend
 	globl_equ	cpu_ca15_do_resume,	cpu_v7_do_resume
-	define_processor_functions ca15, dabort=v7_early_abort, pabort=v7_pabort, suspend=1
+	define_processor_functions ca15, dabort=v7_early_abort, pabort=v7_pabort, suspend=1, bugs=cpu_v7_ca15_ibe
 #ifdef CONFIG_CPU_PJ4B
 	define_processor_functions pj4b, dabort=v7_early_abort, pabort=v7_pabort, suspend=1
 #endif
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 84+ messages in thread

* [PATCH 08/14] ARM: spectre-v2: harden user aborts in kernel space
  2018-05-21 11:42 ` Russell King - ARM Linux
@ 2018-05-21 11:45   ` Russell King
  -1 siblings, 0 replies; 84+ messages in thread
From: Russell King @ 2018-05-21 11:45 UTC (permalink / raw)
  To: linux-arm-kernel; +Cc: Marc Zyngier, Florian Fainelli, kvmarm

In order to prevent aliasing attacks on the branch predictor,
invalidate the BTB or instruction cache on CPUs that are known to be
affected when taking an abort on an address that is outside of the user
task limit:

Cortex A8, A9, A12, A17, A73, A75: flush BTB.
Cortex A15, Brahma B15: invalidate icache.
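
As a rough C sketch of how the pieces below fit together (the names
harden_branch_predictor_fn and maybe_harden are illustrative only; the
real implementation is split across fault.c and proc-v7-bugs.c in the
diff that follows):

  #include <asm/cp15.h>		/* write_sysreg(), BPIALL, ICIALLU */
  #include <asm/memory.h>	/* TASK_SIZE */

  /* selected once per CPU part at boot */
  static void (*harden_branch_predictor_fn)(void);

  static void harden_bpiall(void)
  {
  	write_sysreg(0, BPIALL);	/* invalidate branch predictor */
  }

  static void harden_iciallu(void)
  {
  	write_sysreg(0, ICIALLU);	/* invalidate entire icache */
  }

  /* called from the user fault path before delivering the signal */
  static void maybe_harden(unsigned long addr)
  {
  	if (addr > TASK_SIZE && harden_branch_predictor_fn)
  		harden_branch_predictor_fn();
  }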

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
---
 arch/arm/include/asm/cp15.h        |  3 +++
 arch/arm/include/asm/system_misc.h |  8 ++++++
 arch/arm/mm/fault.c                |  3 +++
 arch/arm/mm/proc-v7-bugs.c         | 51 ++++++++++++++++++++++++++++++++++++++
 arch/arm/mm/proc-v7.S              |  8 +++---
 5 files changed, 70 insertions(+), 3 deletions(-)

diff --git a/arch/arm/include/asm/cp15.h b/arch/arm/include/asm/cp15.h
index 4c9fa72b59f5..07e27f212dc7 100644
--- a/arch/arm/include/asm/cp15.h
+++ b/arch/arm/include/asm/cp15.h
@@ -65,6 +65,9 @@
 #define __write_sysreg(v, r, w, c, t)	asm volatile(w " " c : : "r" ((t)(v)))
 #define write_sysreg(v, ...)		__write_sysreg(v, __VA_ARGS__)
 
+#define BPIALL				__ACCESS_CP15(c7, 0, c5, 6)
+#define ICIALLU				__ACCESS_CP15(c7, 0, c5, 0)
+
 extern unsigned long cr_alignment;	/* defined in entry-armv.S */
 
 static inline unsigned long get_cr(void)
diff --git a/arch/arm/include/asm/system_misc.h b/arch/arm/include/asm/system_misc.h
index 78f6db114faf..3cfe010c5734 100644
--- a/arch/arm/include/asm/system_misc.h
+++ b/arch/arm/include/asm/system_misc.h
@@ -15,6 +15,14 @@ void soft_restart(unsigned long);
 extern void (*arm_pm_restart)(enum reboot_mode reboot_mode, const char *cmd);
 extern void (*arm_pm_idle)(void);
 
+#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+extern void (*harden_branch_predictor)(void);
+#define harden_branch_predictor() \
+	do { if (harden_branch_predictor) harden_branch_predictor(); } while (0)
+#else
+#define harden_branch_predictor() do { } while (0)
+#endif
+
 #define UDBG_UNDEFINED	(1 << 0)
 #define UDBG_SYSCALL	(1 << 1)
 #define UDBG_BADABORT	(1 << 2)
diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
index b75eada23d0a..3b1ba003c4f9 100644
--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -163,6 +163,9 @@ __do_user_fault(struct task_struct *tsk, unsigned long addr,
 {
 	struct siginfo si;
 
+	if (addr > TASK_SIZE)
+		harden_branch_predictor();
+
 #ifdef CONFIG_DEBUG_USER
 	if (((user_debug & UDBG_SEGV) && (sig == SIGSEGV)) ||
 	    ((user_debug & UDBG_BUS)  && (sig == SIGBUS))) {
diff --git a/arch/arm/mm/proc-v7-bugs.c b/arch/arm/mm/proc-v7-bugs.c
index a32ce13479d9..65a9b8141f86 100644
--- a/arch/arm/mm/proc-v7-bugs.c
+++ b/arch/arm/mm/proc-v7-bugs.c
@@ -2,6 +2,12 @@
 #include <linux/kernel.h>
 #include <linux/smp.h>
 
+#include <asm/cp15.h>
+#include <asm/cputype.h>
+#include <asm/system_misc.h>
+
+void cpu_v7_bugs_init(void);
+
 static __maybe_unused void cpu_v7_check_auxcr_set(u32 mask, const char *msg)
 {
 	u32 aux_cr;
@@ -21,9 +27,54 @@ static void check_spectre_auxcr(u32 bit)
 void cpu_v7_ca8_ibe(void)
 {
 	check_spectre_auxcr(BIT(6));
+	cpu_v7_bugs_init();
 }
 
 void cpu_v7_ca15_ibe(void)
 {
 	check_spectre_auxcr(BIT(0));
+	cpu_v7_bugs_init();
+}
+
+#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+void (*harden_branch_predictor)(void);
+
+static void harden_branch_predictor_bpiall(void)
+{
+	write_sysreg(0, BPIALL);
+}
+
+static void harden_branch_predictor_iciallu(void)
+{
+	write_sysreg(0, ICIALLU);
+}
+
+void cpu_v7_bugs_init(void)
+{
+	const char *spectre_v2_method = NULL;
+
+	if (harden_branch_predictor)
+		return;
+
+	switch (read_cpuid_part()) {
+	case ARM_CPU_PART_CORTEX_A8:
+	case ARM_CPU_PART_CORTEX_A9:
+	case ARM_CPU_PART_CORTEX_A12:
+	case ARM_CPU_PART_CORTEX_A17:
+	case ARM_CPU_PART_CORTEX_A73:
+	case ARM_CPU_PART_CORTEX_A75:
+		harden_branch_predictor = harden_branch_predictor_bpiall;
+		spectre_v2_method = "BPIALL";
+		break;
+
+	case ARM_CPU_PART_CORTEX_A15:
+	case ARM_CPU_PART_BRAHMA_B15:
+		harden_branch_predictor = harden_branch_predictor_iciallu;
+		spectre_v2_method = "ICIALLU";
+		break;
+	}
+	if (spectre_v2_method)
+		pr_info("CPU: Spectre v2: using %s workaround\n",
+			spectre_v2_method);
 }
+#endif
diff --git a/arch/arm/mm/proc-v7.S b/arch/arm/mm/proc-v7.S
index fa9214036fb3..79510011e7eb 100644
--- a/arch/arm/mm/proc-v7.S
+++ b/arch/arm/mm/proc-v7.S
@@ -532,8 +532,10 @@ ENDPROC(__v7_setup)
 
 	__INITDATA
 
+	.weak cpu_v7_bugs_init
+
 	@ define struct processor (see <asm/proc-fns.h> and proc-macros.S)
-	define_processor_functions v7, dabort=v7_early_abort, pabort=v7_pabort, suspend=1
+	define_processor_functions v7, dabort=v7_early_abort, pabort=v7_pabort, suspend=1, bugs=cpu_v7_bugs_init
 
 #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
 	@ generic v7 bpiall on context switch
@@ -548,7 +550,7 @@ ENDPROC(__v7_setup)
 	globl_equ	cpu_v7_bpiall_do_suspend,	cpu_v7_do_suspend
 	globl_equ	cpu_v7_bpiall_do_resume,	cpu_v7_do_resume
 #endif
-	define_processor_functions v7_bpiall, dabort=v7_early_abort, pabort=v7_pabort, suspend=1
+	define_processor_functions v7_bpiall, dabort=v7_early_abort, pabort=v7_pabort, suspend=1, bugs=cpu_v7_bugs_init
 
 #define HARDENED_BPIALL_PROCESSOR_FUNCTIONS v7_bpiall_processor_functions
 #else
@@ -584,7 +586,7 @@ ENDPROC(__v7_setup)
 	globl_equ	cpu_ca9mp_switch_mm,	cpu_v7_switch_mm
 #endif
 	globl_equ	cpu_ca9mp_set_pte_ext,	cpu_v7_set_pte_ext
-	define_processor_functions ca9mp, dabort=v7_early_abort, pabort=v7_pabort, suspend=1
+	define_processor_functions ca9mp, dabort=v7_early_abort, pabort=v7_pabort, suspend=1, bugs=cpu_v7_bugs_init
 #endif
 
 	@ Cortex-A15 - needs iciallu switch_mm for hardening
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 84+ messages in thread

* [PATCH 09/14] ARM: spectre-v2: add PSCI based hardening
  2018-05-21 11:42 ` Russell King - ARM Linux
@ 2018-05-21 11:45   ` Russell King
  -1 siblings, 0 replies; 84+ messages in thread
From: Russell King @ 2018-05-21 11:45 UTC (permalink / raw)
  To: linux-arm-kernel; +Cc: Marc Zyngier, Florian Fainelli, Christoffer Dall, kvmarm

Add PSCI based hardening for cores that require more complex handling in
firmware.
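
A condensed C sketch of the detection (SMC conduit only; the HVC case in
the diff below is analogous, and the helper name probe_psci_workaround
is illustrative):

  static void probe_psci_workaround(void)
  {
  	struct arm_smccc_res res;

  	/* ask the firmware whether ARCH_WORKAROUND_1 is implemented */
  	arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
  			  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
  	if ((int)res.a0 < 0)
  		return;		/* not implemented: keep the CPU-local method */

  	/* every hardening event becomes a single firmware call ... */
  	harden_branch_predictor = call_smc_arch_workaround_1;
  	/* ... and each context switch invalidates the predictor in
  	 * firmware before the normal MM switch (assembly below) */
  	processor.switch_mm = cpu_v7_smc_switch_mm;
  }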

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm/mm/proc-v7-bugs.c | 50 ++++++++++++++++++++++++++++++++++++++++++++++
 arch/arm/mm/proc-v7.S      | 21 +++++++++++++++++++
 2 files changed, 71 insertions(+)

diff --git a/arch/arm/mm/proc-v7-bugs.c b/arch/arm/mm/proc-v7-bugs.c
index 65a9b8141f86..0c37e6a2830d 100644
--- a/arch/arm/mm/proc-v7-bugs.c
+++ b/arch/arm/mm/proc-v7-bugs.c
@@ -1,9 +1,12 @@
 // SPDX-License-Identifier: GPL-2.0
+#include <linux/arm-smccc.h>
 #include <linux/kernel.h>
+#include <linux/psci.h>
 #include <linux/smp.h>
 
 #include <asm/cp15.h>
 #include <asm/cputype.h>
+#include <asm/proc-fns.h>
 #include <asm/system_misc.h>
 
 void cpu_v7_bugs_init(void);
@@ -39,6 +42,9 @@ void cpu_v7_ca15_ibe(void)
 #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
 void (*harden_branch_predictor)(void);
 
+extern void cpu_v7_smc_switch_mm(phys_addr_t pgd_phys, struct mm_struct *mm);
+extern void cpu_v7_hvc_switch_mm(phys_addr_t pgd_phys, struct mm_struct *mm);
+
 static void harden_branch_predictor_bpiall(void)
 {
 	write_sysreg(0, BPIALL);
@@ -49,6 +55,18 @@ static void harden_branch_predictor_iciallu(void)
 	write_sysreg(0, ICIALLU);
 }
 
+#ifdef CONFIG_ARM_PSCI
+static void call_smc_arch_workaround_1(void)
+{
+	arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL);
+}
+
+static void call_hvc_arch_workaround_1(void)
+{
+	arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL);
+}
+#endif
+
 void cpu_v7_bugs_init(void)
 {
 	const char *spectre_v2_method = NULL;
@@ -73,6 +91,38 @@ void cpu_v7_bugs_init(void)
 		spectre_v2_method = "ICIALLU";
 		break;
 	}
+
+#ifdef CONFIG_ARM_PSCI
+	if (psci_ops.smccc_version != SMCCC_VERSION_1_0) {
+		struct arm_smccc_res res;
+
+		switch (psci_ops.conduit) {
+		case PSCI_CONDUIT_HVC:
+			arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
+					  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
+			if ((int)res.a0 < 0)
+				break;
+			harden_branch_predictor = call_hvc_arch_workaround_1;
+			processor.switch_mm = cpu_v7_hvc_switch_mm;
+			spectre_v2_method = "hypervisor";
+			break;
+
+		case PSCI_CONDUIT_SMC:
+			arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
+					  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
+			if ((int)res.a0 < 0)
+				break;
+			harden_branch_predictor = call_smc_arch_workaround_1;
+			processor.switch_mm = cpu_v7_smc_switch_mm;
+			spectre_v2_method = "firmware PSCI";
+			break;
+
+		default:
+			break;
+		}
+	}
+#endif
+
 	if (spectre_v2_method)
 		pr_info("CPU: Spectre v2: using %s workaround\n",
 			spectre_v2_method);
diff --git a/arch/arm/mm/proc-v7.S b/arch/arm/mm/proc-v7.S
index 79510011e7eb..b78d59a1cc05 100644
--- a/arch/arm/mm/proc-v7.S
+++ b/arch/arm/mm/proc-v7.S
@@ -9,6 +9,7 @@
  *
  *  This is the "shell" of the ARMv7 processor support.
  */
+#include <linux/arm-smccc.h>
 #include <linux/init.h>
 #include <linux/linkage.h>
 #include <asm/assembler.h>
@@ -93,6 +94,26 @@ ENTRY(cpu_v7_dcache_clean_area)
 	ret	lr
 ENDPROC(cpu_v7_dcache_clean_area)
 
+#ifdef CONFIG_ARM_PSCI
+	.arch_extension sec
+ENTRY(cpu_v7_smc_switch_mm)
+	stmfd	sp!, {r0 - r3}
+	movw	r0, #:lower16:ARM_SMCCC_ARCH_WORKAROUND_1
+	movt	r0, #:upper16:ARM_SMCCC_ARCH_WORKAROUND_1
+	smc	#0
+	ldmfd	sp!, {r0 - r3}
+	b	cpu_v7_switch_mm
+ENDPROC(cpu_v7_smc_switch_mm)
+	.arch_extension virt
+ENTRY(cpu_v7_hvc_switch_mm)
+	stmfd	sp!, {r0 - r3}
+	movw	r0, #:lower16:ARM_SMCCC_ARCH_WORKAROUND_1
+	movt	r0, #:upper16:ARM_SMCCC_ARCH_WORKAROUND_1
+	hvc	#0
+	ldmfd	sp!, {r0 - r3}
+	b	cpu_v7_switch_mm
+ENDPROC(cpu_v7_hvc_switch_mm)
+#endif
 ENTRY(cpu_v7_iciallu_switch_mm)
 	mov	r3, #0
 	mcr	p15, 0, r3, c7, c5, 0		@ ICIALLU
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 84+ messages in thread

* [PATCH 10/14] ARM: KVM: invalidate BTB on guest exit for Cortex-A12/A17
  2018-05-21 11:42 ` Russell King - ARM Linux
@ 2018-05-21 11:45   ` Russell King
  -1 siblings, 0 replies; 84+ messages in thread
From: Russell King @ 2018-05-21 11:45 UTC (permalink / raw)
  To: linux-arm-kernel; +Cc: Marc Zyngier, Florian Fainelli, kvmarm

From: Marc Zyngier <marc.zyngier@arm.com>

In order to avoid aliasing attacks against the branch predictor,
let's invalidate the BTB on guest exit. This is made complicated
by the fact that we cannot take a branch before invalidating the
BTB.

We only apply this to A12 and A17, which are the only two ARM
cores on which this is useful.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
---
 arch/arm/include/asm/kvm_asm.h |  2 --
 arch/arm/include/asm/kvm_mmu.h | 17 +++++++++-
 arch/arm/kvm/hyp/hyp-entry.S   | 71 ++++++++++++++++++++++++++++++++++++++++--
 3 files changed, 85 insertions(+), 5 deletions(-)

diff --git a/arch/arm/include/asm/kvm_asm.h b/arch/arm/include/asm/kvm_asm.h
index 36dd2962a42d..df24ed48977d 100644
--- a/arch/arm/include/asm/kvm_asm.h
+++ b/arch/arm/include/asm/kvm_asm.h
@@ -61,8 +61,6 @@ struct kvm_vcpu;
 extern char __kvm_hyp_init[];
 extern char __kvm_hyp_init_end[];
 
-extern char __kvm_hyp_vector[];
-
 extern void __kvm_flush_vm_context(void);
 extern void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);
 extern void __kvm_tlb_flush_vmid(struct kvm *kvm);
diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index de1b919404e4..d08ce9c41df4 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -297,7 +297,22 @@ static inline unsigned int kvm_get_vmid_bits(void)
 
 static inline void *kvm_get_hyp_vector(void)
 {
-	return kvm_ksym_ref(__kvm_hyp_vector);
+	switch(read_cpuid_part()) {
+#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+	case ARM_CPU_PART_CORTEX_A12:
+	case ARM_CPU_PART_CORTEX_A17:
+	{
+		extern char __kvm_hyp_vector_bp_inv[];
+		return kvm_ksym_ref(__kvm_hyp_vector_bp_inv);
+	}
+
+#endif
+	default:
+	{
+		extern char __kvm_hyp_vector[];
+		return kvm_ksym_ref(__kvm_hyp_vector);
+	}
+	}
 }
 
 static inline int kvm_map_vectors(void)
diff --git a/arch/arm/kvm/hyp/hyp-entry.S b/arch/arm/kvm/hyp/hyp-entry.S
index 95a2faefc070..e789f52a5129 100644
--- a/arch/arm/kvm/hyp/hyp-entry.S
+++ b/arch/arm/kvm/hyp/hyp-entry.S
@@ -71,6 +71,66 @@
 	W(b)	hyp_irq
 	W(b)	hyp_fiq
 
+#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+	.align 5
+__kvm_hyp_vector_bp_inv:
+	.global __kvm_hyp_vector_bp_inv
+
+	/*
+	 * We encode the exception entry in the bottom 3 bits of
+	 * SP, and we have to guarantee to be 8 bytes aligned.
+	 */
+	W(add)	sp, sp, #1	/* Reset 	  7 */
+	W(add)	sp, sp, #1	/* Undef	  6 */
+	W(add)	sp, sp, #1	/* Syscall	  5 */
+	W(add)	sp, sp, #1	/* Prefetch abort 4 */
+	W(add)	sp, sp, #1	/* Data abort	  3 */
+	W(add)	sp, sp, #1	/* HVC		  2 */
+	W(add)	sp, sp, #1	/* IRQ		  1 */
+	W(nop)			/* FIQ		  0 */
+
+	mcr	p15, 0, r0, c7, c5, 6	/* BPIALL */
+	isb
+
+#ifdef CONFIG_THUMB2_KERNEL
+	/*
+	 * Yet another silly hack: Use VPIDR as a temp register.
+	 * Thumb2 is really a pain, as SP cannot be used with most
+	 * of the bitwise instructions. The vect_br macro ensures
+	 * things gets cleaned-up.
+	 */
+	mcr	p15, 4, r0, c0, c0, 0	/* VPIDR */
+	mov	r0, sp
+	and	r0, r0, #7
+	sub	sp, sp, r0
+	push	{r1, r2}
+	mov	r1, r0
+	mrc	p15, 4, r0, c0, c0, 0	/* VPIDR */
+	mrc	p15, 0, r2, c0, c0, 0	/* MIDR  */
+	mcr	p15, 4, r2, c0, c0, 0	/* VPIDR */
+#endif
+
+.macro vect_br val, targ
+ARM(	eor	sp, sp, #\val	)
+ARM(	tst	sp, #7		)
+ARM(	eorne	sp, sp, #\val	)
+
+THUMB(	cmp	r1, #\val	)
+THUMB(	popeq	{r1, r2}	)
+
+	beq	\targ
+.endm
+
+	vect_br	0, hyp_fiq
+	vect_br	1, hyp_irq
+	vect_br	2, hyp_hvc
+	vect_br	3, hyp_dabt
+	vect_br	4, hyp_pabt
+	vect_br	5, hyp_svc
+	vect_br	6, hyp_undef
+	vect_br	7, hyp_reset
+#endif
+
 .macro invalid_vector label, cause
 	.align
 \label:	mov	r0, #\cause
@@ -149,7 +209,14 @@ ENDPROC(__hyp_do_panic)
 	bx	ip
 
 1:
-	push	{lr}
+	/*
+	 * Pushing r2 here is just a way of keeping the stack aligned to
+	 * 8 bytes on any path that can trigger a HYP exception. Here,
+	 * we may well be about to jump into the guest, and the guest
+	 * exit would otherwise be badly decoded by our fancy
+	 * "decode-exception-without-a-branch" code...
+	 */
+	push	{r2, lr}
 
 	mov	lr, r0
 	mov	r0, r1
@@ -159,7 +226,7 @@ ENDPROC(__hyp_do_panic)
 THUMB(	orr	lr, #1)
 	blx	lr			@ Call the HYP function
 
-	pop	{lr}
+	pop	{r2, lr}
 	eret
 
 guest_trap:
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 84+ messages in thread

* [PATCH 11/14] ARM: KVM: invalidate icache on guest exit for Cortex-A15
  2018-05-21 11:42 ` Russell King - ARM Linux
@ 2018-05-21 11:45   ` Russell King
  -1 siblings, 0 replies; 84+ messages in thread
From: Russell King @ 2018-05-21 11:45 UTC (permalink / raw)
  To: linux-arm-kernel; +Cc: Marc Zyngier, Florian Fainelli, kvmarm

From: Marc Zyngier <marc.zyngier@arm.com>

In order to avoid aliasing attacks against the branch predictor
on Cortex-A15, let's invalidate the BTB on guest exit, which can
only be done by invalidating the icache (with ACTLR[0] being set).

We use the same hack as for A12/A17 to perform the vector decoding.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
---
 arch/arm/include/asm/kvm_mmu.h |  5 +++++
 arch/arm/kvm/hyp/hyp-entry.S   | 24 ++++++++++++++++++++++++
 2 files changed, 29 insertions(+)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index d08ce9c41df4..48edb1f4ced4 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -306,6 +306,11 @@ static inline void *kvm_get_hyp_vector(void)
 		return kvm_ksym_ref(__kvm_hyp_vector_bp_inv);
 	}
 
+	case ARM_CPU_PART_CORTEX_A15:
+	{
+		extern char __kvm_hyp_vector_ic_inv[];
+		return kvm_ksym_ref(__kvm_hyp_vector_ic_inv);
+	}
 #endif
 	default:
 	{
diff --git a/arch/arm/kvm/hyp/hyp-entry.S b/arch/arm/kvm/hyp/hyp-entry.S
index e789f52a5129..918a05dd2d63 100644
--- a/arch/arm/kvm/hyp/hyp-entry.S
+++ b/arch/arm/kvm/hyp/hyp-entry.S
@@ -73,6 +73,28 @@
 
 #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
 	.align 5
+__kvm_hyp_vector_ic_inv:
+	.global __kvm_hyp_vector_ic_inv
+
+	/*
+	 * We encode the exception entry in the bottom 3 bits of
+	 * SP, and we have to guarantee to be 8 bytes aligned.
+	 */
+	W(add)	sp, sp, #1	/* Reset 	  7 */
+	W(add)	sp, sp, #1	/* Undef	  6 */
+	W(add)	sp, sp, #1	/* Syscall	  5 */
+	W(add)	sp, sp, #1	/* Prefetch abort 4 */
+	W(add)	sp, sp, #1	/* Data abort	  3 */
+	W(add)	sp, sp, #1	/* HVC		  2 */
+	W(add)	sp, sp, #1	/* IRQ		  1 */
+	W(nop)			/* FIQ		  0 */
+
+	mcr	p15, 0, r0, c7, c5, 0	/* ICIALLU */
+	isb
+
+	b	decode_vectors
+
+	.align 5
 __kvm_hyp_vector_bp_inv:
 	.global __kvm_hyp_vector_bp_inv
 
@@ -92,6 +114,8 @@
 	mcr	p15, 0, r0, c7, c5, 6	/* BPIALL */
 	isb
 
+decode_vectors:
+
 #ifdef CONFIG_THUMB2_KERNEL
 	/*
 	 * Yet another silly hack: Use VPIDR as a temp register.
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 84+ messages in thread

* [PATCH 12/14] ARM: spectre-v2: KVM: invalidate icache on guest exit for Brahma B15
  2018-05-21 11:42 ` Russell King - ARM Linux
@ 2018-05-21 11:45   ` Russell King
  -1 siblings, 0 replies; 84+ messages in thread
From: Russell King @ 2018-05-21 11:45 UTC (permalink / raw)
  To: linux-arm-kernel; +Cc: Marc Zyngier, Florian Fainelli, kvmarm

Include Brahma B15 in the Spectre v2 KVM workarounds.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
---
 arch/arm/include/asm/kvm_mmu.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 48edb1f4ced4..fea770f78144 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -306,6 +306,7 @@ static inline void *kvm_get_hyp_vector(void)
 		return kvm_ksym_ref(__kvm_hyp_vector_bp_inv);
 	}
 
+	case ARM_CPU_PART_BRAHMA_B15:
 	case ARM_CPU_PART_CORTEX_A15:
 	{
 		extern char __kvm_hyp_vector_ic_inv[];
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 84+ messages in thread

* [PATCH 13/14] ARM: KVM: Add SMCCC_ARCH_WORKAROUND_1 fast handling
  2018-05-21 11:42 ` Russell King - ARM Linux
@ 2018-05-21 11:45   ` Russell King
  -1 siblings, 0 replies; 84+ messages in thread
From: Russell King @ 2018-05-21 11:45 UTC (permalink / raw)
  To: linux-arm-kernel; +Cc: Marc Zyngier, Florian Fainelli, kvmarm

We want SMCCC_ARCH_WORKAROUND_1 to be fast. As fast as possible.
So let's intercept it as early as we can by testing for the
function call number as soon as we've identified an HVC call
coming from the guest.
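
In C terms the fast path below amounts to the following (a hypothetical
helper shown only to illustrate the assembly; the real code runs in HYP
mode before any guest registers have been saved):

  #include <linux/arm-smccc.h>

  /* returns true if the HVC was handled without a world switch */
  static bool handle_guest_hvc_fast(unsigned long *guest_r0)
  {
  	if (*guest_r0 != ARM_SMCCC_ARCH_WORKAROUND_1)
  		return false;		/* fall through to guest_trap */

  	*guest_r0 = 0;			/* SMCCC success */
  	return true;			/* eret straight back to the guest */
  }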

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
---
 arch/arm/kvm/hyp/hyp-entry.S | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/arch/arm/kvm/hyp/hyp-entry.S b/arch/arm/kvm/hyp/hyp-entry.S
index 918a05dd2d63..67de45685e29 100644
--- a/arch/arm/kvm/hyp/hyp-entry.S
+++ b/arch/arm/kvm/hyp/hyp-entry.S
@@ -16,6 +16,7 @@
  * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
  */
 
+#include <linux/arm-smccc.h>
 #include <linux/linkage.h>
 #include <asm/kvm_arm.h>
 #include <asm/kvm_asm.h>
@@ -202,7 +203,7 @@ ENDPROC(__hyp_do_panic)
 	lsr     r2, r2, #16
 	and     r2, r2, #0xff
 	cmp     r2, #0
-	bne	guest_trap		@ Guest called HVC
+	bne	guest_hvc_trap		@ Guest called HVC
 
 	/*
 	 * Getting here means host called HVC, we shift parameters and branch
@@ -253,6 +254,16 @@ THUMB(	orr	lr, #1)
 	pop	{r2, lr}
 	eret
 
+guest_hvc_trap:
+	movw	ip, #:lower16:ARM_SMCCC_ARCH_WORKAROUND_1
+	movt	ip, #:upper16:ARM_SMCCC_ARCH_WORKAROUND_1
+	ldr	r0, [sp]		@ Guest's r0
+	teq	r0, ip
+	bne	guest_trap
+	pop	{r0, r1, r2}
+	mov	r0, #0
+	eret
+
 guest_trap:
 	load_vcpu r0			@ Load VCPU pointer to r0
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 84+ messages in thread

* [PATCH 14/14] ARM: KVM: report support for SMCCC_ARCH_WORKAROUND_1
  2018-05-21 11:42 ` Russell King - ARM Linux
@ 2018-05-21 11:45   ` Russell King
  -1 siblings, 0 replies; 84+ messages in thread
From: Russell King @ 2018-05-21 11:45 UTC (permalink / raw)
  To: linux-arm-kernel; +Cc: Marc Zyngier, Florian Fainelli, kvmarm

Report support for SMCCC_ARCH_WORKAROUND_1 to KVM guests for affected
CPUs.
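
For reference, a guest kernel would discover and use this via the
standard SMCCC 1.1 sequence, roughly as follows (a sketch with an
illustrative helper name, not code from this series):

  static void guest_probe_bp_hardening(void)
  {
  	struct arm_smccc_res res;

  	/* does the hypervisor implement ARCH_WORKAROUND_1? */
  	arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
  			  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
  	if ((int)res.a0 < 0)
  		return;			/* hypervisor does not support it */

  	/* each hardening event is then a single HVC, which the fast
  	 * path added in patch 13 handles without a world switch */
  	arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL);
  }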

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
---
 arch/arm/include/asm/kvm_host.h | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 248b930563e5..11f91744ffb0 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -21,6 +21,7 @@
 
 #include <linux/types.h>
 #include <linux/kvm_types.h>
+#include <asm/cputype.h>
 #include <asm/kvm.h>
 #include <asm/kvm_asm.h>
 #include <asm/kvm_mmio.h>
@@ -311,8 +312,17 @@ static inline void kvm_arm_vhe_guest_exit(void) {}
 
 static inline bool kvm_arm_harden_branch_predictor(void)
 {
-	/* No way to detect it yet, pretend it is not there. */
-	return false;
+	switch(read_cpuid_part()) {
+#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+	case ARM_CPU_PART_BRAHMA_B15:
+	case ARM_CPU_PART_CORTEX_A12:
+	case ARM_CPU_PART_CORTEX_A15:
+	case ARM_CPU_PART_CORTEX_A17:
+		return true;
+#endif
+	default:
+		return false;
+	}
 }
 
 #endif /* __ARM_KVM_HOST_H__ */
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 84+ messages in thread

* Re: [PATCH 06/14] ARM: spectre-v2: harden branch predictor on context switches
  2018-05-21 11:44   ` Russell King
@ 2018-05-22  3:21     ` Florian Fainelli
  -1 siblings, 0 replies; 84+ messages in thread
From: Florian Fainelli @ 2018-05-22  3:21 UTC (permalink / raw)
  To: Russell King, linux-arm-kernel; +Cc: Marc Zyngier, kvmarm



On 05/21/2018 04:44 AM, Russell King wrote:
> Harden the branch predictor against Spectre v2 attacks on context
> switches for ARMv7 and later CPUs.  We do this by:
> 
> Cortex A9, A12, A17, A73, A75: invalidating the BTB.
> Cortex A15, Brahma B15: invalidating the instruction cache.
> 
> Cortex A57 and Cortex A72 are not addressed in this patch.
> 
> Cortex R7 and Cortex R8 are also not addressed as we do not enforce
> memory protection on these cores.
> 
> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>

Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
-- 
Florian

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 12/14] ARM: spectre-v2: KVM: invalidate icache on guest exit for Brahma B15
  2018-05-21 11:45   ` Russell King
@ 2018-05-22  3:22     ` Florian Fainelli
  -1 siblings, 0 replies; 84+ messages in thread
From: Florian Fainelli @ 2018-05-22  3:22 UTC (permalink / raw)
  To: Russell King, linux-arm-kernel; +Cc: Marc Zyngier, kvmarm



On 05/21/2018 04:45 AM, Russell King wrote:
> Include Brahma B15 in the Spectre v2 KVM workarounds.
> 
> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>

Acked-by: Florian Fainelli <f.fainelli@gmail.com>
-- 
Florian

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 06/14] ARM: spectre-v2: harden branch predictor on context switches
  2018-05-22  3:21     ` Florian Fainelli
@ 2018-05-22  9:55       ` Russell King - ARM Linux
  -1 siblings, 0 replies; 84+ messages in thread
From: Russell King - ARM Linux @ 2018-05-22  9:55 UTC (permalink / raw)
  To: Florian Fainelli; +Cc: Marc Zyngier, linux-arm-kernel, kvmarm

On Mon, May 21, 2018 at 08:21:58PM -0700, Florian Fainelli wrote:
> 
> 
> On 05/21/2018 04:44 AM, Russell King wrote:
> > Harden the branch predictor against Spectre v2 attacks on context
> > switches for ARMv7 and later CPUs.  We do this by:
> > 
> > Cortex A9, A12, A17, A73, A75: invalidating the BTB.
> > Cortex A15, Brahma B15: invalidating the instruction cache.
> > 
> > Cortex A57 and Cortex A72 are not addressed in this patch.
> > 
> > Cortex R7 and Cortex R8 are also not addressed as we do not enforce
> > memory protection on these cores.
> > 
> > Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
> 
> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>

It does need this additional patch to avoid some build errors - I'm
surprised that my autobuilder found it before the 0-day builder...

diff --git a/arch/arm/mm/proc-v7-bugs.c b/arch/arm/mm/proc-v7-bugs.c
index 0c37e6a2830d..526d07ab6b7a 100644
--- a/arch/arm/mm/proc-v7-bugs.c
+++ b/arch/arm/mm/proc-v7-bugs.c
@@ -127,4 +127,8 @@ void cpu_v7_bugs_init(void)
 		pr_info("CPU: Spectre v2: using %s workaround\n",
 			spectre_v2_method);
 }
+#else
+void cpu_v7_bugs_init(void)
+{
+}
 #endif


-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line in suburbia: sync at 8.8Mbps down 630kbps up
According to speedtest.net: 8.21Mbps down 510kbps up

^ permalink raw reply related	[flat|nested] 84+ messages in thread

* Re: [PATCH 08/14] ARM: spectre-v2: harden user aborts in kernel space
  2018-05-21 11:45   ` Russell King
@ 2018-05-22 17:15     ` Marc Zyngier
  -1 siblings, 0 replies; 84+ messages in thread
From: Marc Zyngier @ 2018-05-22 17:15 UTC (permalink / raw)
  To: Russell King, linux-arm-kernel; +Cc: Florian Fainelli, kvmarm

On 21/05/18 12:45, Russell King wrote:
> In order to prevent aliasing attacks on the branch predictor,
> invalidate the BTB or instruction cache on CPUs that are known to be
> affected when taking an abort on an address that is outside of the user
> task limit:
> 
> Cortex A8, A9, A12, A17, A73, A75: flush BTB.
> Cortex A15, Brahma B15: invalidate icache.
> 
> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
> ---
>  arch/arm/include/asm/cp15.h        |  3 +++
>  arch/arm/include/asm/system_misc.h |  8 ++++++
>  arch/arm/mm/fault.c                |  3 +++
>  arch/arm/mm/proc-v7-bugs.c         | 51 ++++++++++++++++++++++++++++++++++++++
>  arch/arm/mm/proc-v7.S              |  8 +++---
>  5 files changed, 70 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/arm/include/asm/cp15.h b/arch/arm/include/asm/cp15.h
> index 4c9fa72b59f5..07e27f212dc7 100644
> --- a/arch/arm/include/asm/cp15.h
> +++ b/arch/arm/include/asm/cp15.h
> @@ -65,6 +65,9 @@
>  #define __write_sysreg(v, r, w, c, t)	asm volatile(w " " c : : "r" ((t)(v)))
>  #define write_sysreg(v, ...)		__write_sysreg(v, __VA_ARGS__)
>  
> +#define BPIALL				__ACCESS_CP15(c7, 0, c5, 6)
> +#define ICIALLU				__ACCESS_CP15(c7, 0, c5, 0)
> +
>  extern unsigned long cr_alignment;	/* defined in entry-armv.S */
>  
>  static inline unsigned long get_cr(void)
> diff --git a/arch/arm/include/asm/system_misc.h b/arch/arm/include/asm/system_misc.h
> index 78f6db114faf..3cfe010c5734 100644
> --- a/arch/arm/include/asm/system_misc.h
> +++ b/arch/arm/include/asm/system_misc.h
> @@ -15,6 +15,14 @@ void soft_restart(unsigned long);
>  extern void (*arm_pm_restart)(enum reboot_mode reboot_mode, const char *cmd);
>  extern void (*arm_pm_idle)(void);
>  
> +#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
> +extern void (*harden_branch_predictor)(void);
> +#define harden_branch_predictor() \
> +	do { if (harden_branch_predictor) harden_branch_predictor(); } while (0)
> +#else
> +#define harden_branch_predictor() do { } while (0)
> +#endif
> +
>  #define UDBG_UNDEFINED	(1 << 0)
>  #define UDBG_SYSCALL	(1 << 1)
>  #define UDBG_BADABORT	(1 << 2)
> diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
> index b75eada23d0a..3b1ba003c4f9 100644
> --- a/arch/arm/mm/fault.c
> +++ b/arch/arm/mm/fault.c
> @@ -163,6 +163,9 @@ __do_user_fault(struct task_struct *tsk, unsigned long addr,
>  {
>  	struct siginfo si;
>  
> +	if (addr > TASK_SIZE)
> +		harden_branch_predictor();
> +
>  #ifdef CONFIG_DEBUG_USER
>  	if (((user_debug & UDBG_SEGV) && (sig == SIGSEGV)) ||
>  	    ((user_debug & UDBG_BUS)  && (sig == SIGBUS))) {
> diff --git a/arch/arm/mm/proc-v7-bugs.c b/arch/arm/mm/proc-v7-bugs.c
> index a32ce13479d9..65a9b8141f86 100644
> --- a/arch/arm/mm/proc-v7-bugs.c
> +++ b/arch/arm/mm/proc-v7-bugs.c
> @@ -2,6 +2,12 @@
>  #include <linux/kernel.h>
>  #include <linux/smp.h>
>  
> +#include <asm/cp15.h>
> +#include <asm/cputype.h>
> +#include <asm/system_misc.h>
> +
> +void cpu_v7_bugs_init(void);
> +
>  static __maybe_unused void cpu_v7_check_auxcr_set(u32 mask, const char *msg)
>  {
>  	u32 aux_cr;
> @@ -21,9 +27,54 @@ static void check_spectre_auxcr(u32 bit)
>  void cpu_v7_ca8_ibe(void)
>  {
>  	check_spectre_auxcr(BIT(6));
> +	cpu_v7_bugs_init();
>  }
>  
>  void cpu_v7_ca15_ibe(void)
>  {
>  	check_spectre_auxcr(BIT(0));
> +	cpu_v7_bugs_init();
> +}
> +
> +#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
> +void (*harden_branch_predictor)(void);
> +
> +static void harden_branch_predictor_bpiall(void)
> +{
> +	write_sysreg(0, BPIALL);
> +}
> +
> +static void harden_branch_predictor_iciallu(void)
> +{
> +	write_sysreg(0, ICIALLU);
> +}
> +
> +void cpu_v7_bugs_init(void)
> +{
> +	const char *spectre_v2_method = NULL;
> +
> +	if (harden_branch_predictor)
> +		return;

How does it work on big.LITTLE systems where two CPUs have diverging
mitigation methods? Let's say a hypothetical A15/A17 system? Or even a
more common A15/A7 system, where the small core doesn't require the
mitigation?

> +
> +	switch (read_cpuid_part()) {
> +	case ARM_CPU_PART_CORTEX_A8:
> +	case ARM_CPU_PART_CORTEX_A9:
> +	case ARM_CPU_PART_CORTEX_A12:
> +	case ARM_CPU_PART_CORTEX_A17:
> +	case ARM_CPU_PART_CORTEX_A73:
> +	case ARM_CPU_PART_CORTEX_A75:
> +		harden_branch_predictor = harden_branch_predictor_bpiall;
> +		spectre_v2_method = "BPIALL";
> +		break;

You don't seem to take into account the PFR0.CSV2 field which indicates
that the CPU has a branch predictor that is immune to Spectre-v2.

See for example the Cortex-A75 r3p0 TRM[1].
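
A minimal sketch of such a check (illustrative; CSV2 is taken here to be
ID_PFR0 bits [19:16], which should be double-checked against the ARM ARM
before relying on it):

  static bool cpu_has_csv2(void)
  {
  	u32 pfr0;

  	asm("mrc p15, 0, %0, c0, c1, 0" : "=r" (pfr0));	/* ID_PFR0 */
  	return ((pfr0 >> 16) & 0xf) != 0;	/* non-zero: hardened predictor */
  }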

> +
> +	case ARM_CPU_PART_CORTEX_A15:
> +	case ARM_CPU_PART_BRAHMA_B15:
> +		harden_branch_predictor = harden_branch_predictor_iciallu;
> +		spectre_v2_method = "ICIALLU";
> +		break;
> +	}
> +	if (spectre_v2_method)
> +		pr_info("CPU: Spectre v2: using %s workaround\n",
> +			spectre_v2_method);
>  }
> +#endif
> diff --git a/arch/arm/mm/proc-v7.S b/arch/arm/mm/proc-v7.S
> index fa9214036fb3..79510011e7eb 100644
> --- a/arch/arm/mm/proc-v7.S
> +++ b/arch/arm/mm/proc-v7.S
> @@ -532,8 +532,10 @@ ENDPROC(__v7_setup)
>  
>  	__INITDATA
>  
> +	.weak cpu_v7_bugs_init
> +
>  	@ define struct processor (see <asm/proc-fns.h> and proc-macros.S)
> -	define_processor_functions v7, dabort=v7_early_abort, pabort=v7_pabort, suspend=1
> +	define_processor_functions v7, dabort=v7_early_abort, pabort=v7_pabort, suspend=1, bugs=cpu_v7_bugs_init
>  
>  #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
>  	@ generic v7 bpiall on context switch
> @@ -548,7 +550,7 @@ ENDPROC(__v7_setup)
>  	globl_equ	cpu_v7_bpiall_do_suspend,	cpu_v7_do_suspend
>  	globl_equ	cpu_v7_bpiall_do_resume,	cpu_v7_do_resume
>  #endif
> -	define_processor_functions v7_bpiall, dabort=v7_early_abort, pabort=v7_pabort, suspend=1
> +	define_processor_functions v7_bpiall, dabort=v7_early_abort, pabort=v7_pabort, suspend=1, bugs=cpu_v7_bugs_init
>  
>  #define HARDENED_BPIALL_PROCESSOR_FUNCTIONS v7_bpiall_processor_functions
>  #else
> @@ -584,7 +586,7 @@ ENDPROC(__v7_setup)
>  	globl_equ	cpu_ca9mp_switch_mm,	cpu_v7_switch_mm
>  #endif
>  	globl_equ	cpu_ca9mp_set_pte_ext,	cpu_v7_set_pte_ext
> -	define_processor_functions ca9mp, dabort=v7_early_abort, pabort=v7_pabort, suspend=1
> +	define_processor_functions ca9mp, dabort=v7_early_abort, pabort=v7_pabort, suspend=1, bugs=cpu_v7_bugs_init
>  #endif
>  
>  	@ Cortex-A15 - needs iciallu switch_mm for hardening
> 

Thanks,

	M.

[1]
http://infocenter.arm.com/help/topic/com.arm.doc.100403_0300_00_en/axa1518783469631.html
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 09/14] ARM: spectre-v2: add PSCI based hardening
  2018-05-21 11:45   ` Russell King
@ 2018-05-22 17:24     ` Marc Zyngier
  -1 siblings, 0 replies; 84+ messages in thread
From: Marc Zyngier @ 2018-05-22 17:24 UTC (permalink / raw)
  To: Russell King, linux-arm-kernel; +Cc: Florian Fainelli, Christoffer Dall, kvmarm

On 21/05/18 12:45, Russell King wrote:
> Add PSCI based hardening for cores that require more complex handling in
> firmware.
> 
> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
> Acked-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
>  arch/arm/mm/proc-v7-bugs.c | 50 ++++++++++++++++++++++++++++++++++++++++++++++
>  arch/arm/mm/proc-v7.S      | 21 +++++++++++++++++++
>  2 files changed, 71 insertions(+)
> 
> diff --git a/arch/arm/mm/proc-v7-bugs.c b/arch/arm/mm/proc-v7-bugs.c
> index 65a9b8141f86..0c37e6a2830d 100644
> --- a/arch/arm/mm/proc-v7-bugs.c
> +++ b/arch/arm/mm/proc-v7-bugs.c
> @@ -1,9 +1,12 @@
>  // SPDX-License-Identifier: GPL-2.0
> +#include <linux/arm-smccc.h>
>  #include <linux/kernel.h>
> +#include <linux/psci.h>
>  #include <linux/smp.h>
>  
>  #include <asm/cp15.h>
>  #include <asm/cputype.h>
> +#include <asm/proc-fns.h>
>  #include <asm/system_misc.h>
>  
>  void cpu_v7_bugs_init(void);
> @@ -39,6 +42,9 @@ void cpu_v7_ca15_ibe(void)
>  #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
>  void (*harden_branch_predictor)(void);
>  
> +extern void cpu_v7_smc_switch_mm(phys_addr_t pgd_phys, struct mm_struct *mm);
> +extern void cpu_v7_hvc_switch_mm(phys_addr_t pgd_phys, struct mm_struct *mm);
> +
>  static void harden_branch_predictor_bpiall(void)
>  {
>  	write_sysreg(0, BPIALL);
> @@ -49,6 +55,18 @@ static void harden_branch_predictor_iciallu(void)
>  	write_sysreg(0, ICIALLU);
>  }
>  
> +#ifdef CONFIG_ARM_PSCI
> +static void call_smc_arch_workaround_1(void)
> +{
> +	arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL);
> +}
> +
> +static void call_hvc_arch_workaround_1(void)
> +{
> +	arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL);
> +}
> +#endif
> +
>  void cpu_v7_bugs_init(void)
>  {
>  	const char *spectre_v2_method = NULL;
> @@ -73,6 +91,38 @@ void cpu_v7_bugs_init(void)
>  		spectre_v2_method = "ICIALLU";
>  		break;
>  	}
> +
> +#ifdef CONFIG_ARM_PSCI
> +	if (psci_ops.smccc_version != SMCCC_VERSION_1_0) {
> +		struct arm_smccc_res res;
> +
> +		switch (psci_ops.conduit) {
> +		case PSCI_CONDUIT_HVC:
> +			arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
> +					  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
> +			if ((int)res.a0 < 0)
> +				break;

I just realised that there is a small, but significant difference
between this and the arm64 version: On arm64, we have a table of
vulnerable implementations, and we try the mitigation on a per-cpu
basis. Here, you entirely rely on the firmware to discover whether the
CPU needs mitigation or not. You then need to check for a return value
of 1, which indicates that although the mitigation is implemented, it is
not required on this particular CPU.

But that's probably moot if you don't support BL systems.
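
As a concrete (untested) sketch of that, the HVC leg could honour both
the negative "not implemented" result and the value 1 described above;
the calls and symbols are the ones already used in this hunk:

		case PSCI_CONDUIT_HVC:
			arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
					  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
			if ((int)res.a0 < 0)
				break;		/* no firmware mitigation */
			if (res.a0 == 1)
				break;		/* implemented, but not needed on this CPU */
			harden_branch_predictor = call_hvc_arch_workaround_1;
			processor.switch_mm = cpu_v7_hvc_switch_mm;
			spectre_v2_method = "hypervisor";
			break;

The SMC leg would need the same treatment.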

> +			harden_branch_predictor = call_hvc_arch_workaround_1;
> +			processor.switch_mm = cpu_v7_hvc_switch_mm;
> +			spectre_v2_method = "hypervisor";
> +			break;
> +
> +		case PSCI_CONDUIT_SMC:
> +			arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
> +					  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
> +			if ((int)res.a0 < 0)
> +				break;
> +			harden_branch_predictor = call_smc_arch_workaround_1;
> +			processor.switch_mm = cpu_v7_smc_switch_mm;
> +			spectre_v2_method = "firmware PSCI";

My previous remark still stands: this is not really PSCI.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 08/14] ARM: spectre-v2: harden user aborts in kernel space
  2018-05-22 17:15     ` Marc Zyngier
@ 2018-05-22 17:56       ` Russell King - ARM Linux
  -1 siblings, 0 replies; 84+ messages in thread
From: Russell King - ARM Linux @ 2018-05-22 17:56 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: Florian Fainelli, Christoffer Dall, linux-arm-kernel, kvmarm

On Tue, May 22, 2018 at 06:15:02PM +0100, Marc Zyngier wrote:
> On 21/05/18 12:45, Russell King wrote:
> > In order to prevent aliasing attacks on the branch predictor,
> > invalidate the BTB or instruction cache on CPUs that are known to be
> > affected when taking an abort on an address that is outside of a user
> > task limit:
> > 
> > Cortex A8, A9, A12, A17, A73, A75: flush BTB.
> > Cortex A15, Brahma B15: invalidate icache.
> > 
> > Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
> > Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
> > ---
> >  arch/arm/include/asm/cp15.h        |  3 +++
> >  arch/arm/include/asm/system_misc.h |  8 ++++++
> >  arch/arm/mm/fault.c                |  3 +++
> >  arch/arm/mm/proc-v7-bugs.c         | 51 ++++++++++++++++++++++++++++++++++++++
> >  arch/arm/mm/proc-v7.S              |  8 +++---
> >  5 files changed, 70 insertions(+), 3 deletions(-)
> > 
> > diff --git a/arch/arm/include/asm/cp15.h b/arch/arm/include/asm/cp15.h
> > index 4c9fa72b59f5..07e27f212dc7 100644
> > --- a/arch/arm/include/asm/cp15.h
> > +++ b/arch/arm/include/asm/cp15.h
> > @@ -65,6 +65,9 @@
> >  #define __write_sysreg(v, r, w, c, t)	asm volatile(w " " c : : "r" ((t)(v)))
> >  #define write_sysreg(v, ...)		__write_sysreg(v, __VA_ARGS__)
> >  
> > +#define BPIALL				__ACCESS_CP15(c7, 0, c5, 6)
> > +#define ICIALLU				__ACCESS_CP15(c7, 0, c5, 0)
> > +
> >  extern unsigned long cr_alignment;	/* defined in entry-armv.S */
> >  
> >  static inline unsigned long get_cr(void)
> > diff --git a/arch/arm/include/asm/system_misc.h b/arch/arm/include/asm/system_misc.h
> > index 78f6db114faf..3cfe010c5734 100644
> > --- a/arch/arm/include/asm/system_misc.h
> > +++ b/arch/arm/include/asm/system_misc.h
> > @@ -15,6 +15,14 @@ void soft_restart(unsigned long);
> >  extern void (*arm_pm_restart)(enum reboot_mode reboot_mode, const char *cmd);
> >  extern void (*arm_pm_idle)(void);
> >  
> > +#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
> > +extern void (*harden_branch_predictor)(void);
> > +#define harden_branch_predictor() \
> > +	do { if (harden_branch_predictor) harden_branch_predictor(); } while (0)
> > +#else
> > +#define harden_branch_predictor() do { } while (0)
> > +#endif
> > +
> >  #define UDBG_UNDEFINED	(1 << 0)
> >  #define UDBG_SYSCALL	(1 << 1)
> >  #define UDBG_BADABORT	(1 << 2)
> > diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
> > index b75eada23d0a..3b1ba003c4f9 100644
> > --- a/arch/arm/mm/fault.c
> > +++ b/arch/arm/mm/fault.c
> > @@ -163,6 +163,9 @@ __do_user_fault(struct task_struct *tsk, unsigned long addr,
> >  {
> >  	struct siginfo si;
> >  
> > +	if (addr > TASK_SIZE)
> > +		harden_branch_predictor();
> > +
> >  #ifdef CONFIG_DEBUG_USER
> >  	if (((user_debug & UDBG_SEGV) && (sig == SIGSEGV)) ||
> >  	    ((user_debug & UDBG_BUS)  && (sig == SIGBUS))) {
> > diff --git a/arch/arm/mm/proc-v7-bugs.c b/arch/arm/mm/proc-v7-bugs.c
> > index a32ce13479d9..65a9b8141f86 100644
> > --- a/arch/arm/mm/proc-v7-bugs.c
> > +++ b/arch/arm/mm/proc-v7-bugs.c
> > @@ -2,6 +2,12 @@
> >  #include <linux/kernel.h>
> >  #include <linux/smp.h>
> >  
> > +#include <asm/cp15.h>
> > +#include <asm/cputype.h>
> > +#include <asm/system_misc.h>
> > +
> > +void cpu_v7_bugs_init(void);
> > +
> >  static __maybe_unused void cpu_v7_check_auxcr_set(u32 mask, const char *msg)
> >  {
> >  	u32 aux_cr;
> > @@ -21,9 +27,54 @@ static void check_spectre_auxcr(u32 bit)
> >  void cpu_v7_ca8_ibe(void)
> >  {
> >  	check_spectre_auxcr(BIT(6));
> > +	cpu_v7_bugs_init();
> >  }
> >  
> >  void cpu_v7_ca15_ibe(void)
> >  {
> >  	check_spectre_auxcr(BIT(0));
> > +	cpu_v7_bugs_init();
> > +}
> > +
> > +#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
> > +void (*harden_branch_predictor)(void);
> > +
> > +static void harden_branch_predictor_bpiall(void)
> > +{
> > +	write_sysreg(0, BPIALL);
> > +}
> > +
> > +static void harden_branch_predictor_iciallu(void)
> > +{
> > +	write_sysreg(0, ICIALLU);
> > +}
> > +
> > +void cpu_v7_bugs_init(void)
> > +{
> > +	const char *spectre_v2_method = NULL;
> > +
> > +	if (harden_branch_predictor)
> > +		return;
> 
> How does it work on big-little systems where two CPUs have diverging
> mitigation methods? Let's say a hypothetical A15/A17 system? Or even a
> more common A15/A7 system, where the small core doesn't require the
> mitigation?

Hmm, I'd forgotten about those, because I don't have them.

We don't have the ability to mitigate this on such systems at all at
present; it would require a per-CPU cpu_switch_mm() implementation, and
the code has no structure to support that without a considerable
rewrite of the CPU glue support.

I'm not even sure it could be done without checking deeper - I think there
are some situations where we call this before we're sufficiently set up.

I'll drop this series from the for-next branch; I suspect it won't be
making this merge window as a result, sorry.

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line in suburbia: sync at 8.8Mbps down 630kbps up
According to speedtest.net: 8.21Mbps down 510kbps up

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 09/14] ARM: spectre-v2: add PSCI based hardening
  2018-05-22 17:24     ` Marc Zyngier
@ 2018-05-22 17:57       ` Russell King - ARM Linux
  -1 siblings, 0 replies; 84+ messages in thread
From: Russell King - ARM Linux @ 2018-05-22 17:57 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: Florian Fainelli, linux-arm-kernel, kvmarm

On Tue, May 22, 2018 at 06:24:13PM +0100, Marc Zyngier wrote:
> On 21/05/18 12:45, Russell King wrote:
> > Add PSCI based hardening for cores that require more complex handling in
> > firmware.
> > 
> > Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
> > Acked-by: Marc Zyngier <marc.zyngier@arm.com>
> > ---
> >  arch/arm/mm/proc-v7-bugs.c | 50 ++++++++++++++++++++++++++++++++++++++++++++++
> >  arch/arm/mm/proc-v7.S      | 21 +++++++++++++++++++
> >  2 files changed, 71 insertions(+)
> > 
> > diff --git a/arch/arm/mm/proc-v7-bugs.c b/arch/arm/mm/proc-v7-bugs.c
> > index 65a9b8141f86..0c37e6a2830d 100644
> > --- a/arch/arm/mm/proc-v7-bugs.c
> > +++ b/arch/arm/mm/proc-v7-bugs.c
> > @@ -1,9 +1,12 @@
> >  // SPDX-License-Identifier: GPL-2.0
> > +#include <linux/arm-smccc.h>
> >  #include <linux/kernel.h>
> > +#include <linux/psci.h>
> >  #include <linux/smp.h>
> >  
> >  #include <asm/cp15.h>
> >  #include <asm/cputype.h>
> > +#include <asm/proc-fns.h>
> >  #include <asm/system_misc.h>
> >  
> >  void cpu_v7_bugs_init(void);
> > @@ -39,6 +42,9 @@ void cpu_v7_ca15_ibe(void)
> >  #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
> >  void (*harden_branch_predictor)(void);
> >  
> > +extern void cpu_v7_smc_switch_mm(phys_addr_t pgd_phys, struct mm_struct *mm);
> > +extern void cpu_v7_hvc_switch_mm(phys_addr_t pgd_phys, struct mm_struct *mm);
> > +
> >  static void harden_branch_predictor_bpiall(void)
> >  {
> >  	write_sysreg(0, BPIALL);
> > @@ -49,6 +55,18 @@ static void harden_branch_predictor_iciallu(void)
> >  	write_sysreg(0, ICIALLU);
> >  }
> >  
> > +#ifdef CONFIG_ARM_PSCI
> > +static void call_smc_arch_workaround_1(void)
> > +{
> > +	arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL);
> > +}
> > +
> > +static void call_hvc_arch_workaround_1(void)
> > +{
> > +	arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL);
> > +}
> > +#endif
> > +
> >  void cpu_v7_bugs_init(void)
> >  {
> >  	const char *spectre_v2_method = NULL;
> > @@ -73,6 +91,38 @@ void cpu_v7_bugs_init(void)
> >  		spectre_v2_method = "ICIALLU";
> >  		break;
> >  	}
> > +
> > +#ifdef CONFIG_ARM_PSCI
> > +	if (psci_ops.smccc_version != SMCCC_VERSION_1_0) {
> > +		struct arm_smccc_res res;
> > +
> > +		switch (psci_ops.conduit) {
> > +		case PSCI_CONDUIT_HVC:
> > +			arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
> > +					  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
> > +			if ((int)res.a0 < 0)
> > +				break;
> 
> I just realised that there is a small, but significant difference
> between this and the arm64 version: On arm64, we have a table of
> vulnerable implementations, and we try the mitigation on a per-cpu
> basis. Here, you entirely rely on the firmware to discover whether the
> CPU needs mitigation or not. You then need to check for a return value
> of 1, which indicates that although the mitigation is implemented, it is
> not required on this particular CPU.
> 
> But that's probably moot if you don't support BL systems.
> 
> > +			harden_branch_predictor = call_hvc_arch_workaround_1;
> > +			processor.switch_mm = cpu_v7_hvc_switch_mm;
> > +			spectre_v2_method = "hypervisor";
> > +			break;
> > +
> > +		case PSCI_CONDUIT_SMC:
> > +			arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
> > +					  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
> > +			if ((int)res.a0 < 0)
> > +				break;
> > +			harden_branch_predictor = call_smc_arch_workaround_1;
> > +			processor.switch_mm = cpu_v7_smc_switch_mm;
> > +			spectre_v2_method = "firmware PSCI";
> 
> My previous remark still stands: this is not really PSCI.

Sorry, no.  Your comment was for the HVC call, not the SMC.  You said
nothing about this one.

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line in suburbia: sync at 8.8Mbps down 630kbps up
According to speedtest.net: 8.21Mbps down 510kbps up

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 08/14] ARM: spectre-v2: harden user aborts in kernel space
  2018-05-22 17:56       ` Russell King - ARM Linux
@ 2018-05-22 18:12         ` Russell King - ARM Linux
  -1 siblings, 0 replies; 84+ messages in thread
From: Russell King - ARM Linux @ 2018-05-22 18:12 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: Florian Fainelli, linux-arm-kernel, kvmarm

On Tue, May 22, 2018 at 06:56:03PM +0100, Russell King - ARM Linux wrote:
> On Tue, May 22, 2018 at 06:15:02PM +0100, Marc Zyngier wrote:
> > On 21/05/18 12:45, Russell King wrote:
> > > In order to prevent aliasing attacks on the branch predictor,
> > > invalidate the BTB or instruction cache on CPUs that are known to be
> > > affected when taking an abort on an address that is outside of a user
> > > task limit:
> > > 
> > > Cortex A8, A9, A12, A17, A73, A75: flush BTB.
> > > Cortex A15, Brahma B15: invalidate icache.
> > > 
> > > Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
> > > Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
> > > ---
> > >  arch/arm/include/asm/cp15.h        |  3 +++
> > >  arch/arm/include/asm/system_misc.h |  8 ++++++
> > >  arch/arm/mm/fault.c                |  3 +++
> > >  arch/arm/mm/proc-v7-bugs.c         | 51 ++++++++++++++++++++++++++++++++++++++
> > >  arch/arm/mm/proc-v7.S              |  8 +++---
> > >  5 files changed, 70 insertions(+), 3 deletions(-)
> > > 
> > > diff --git a/arch/arm/include/asm/cp15.h b/arch/arm/include/asm/cp15.h
> > > index 4c9fa72b59f5..07e27f212dc7 100644
> > > --- a/arch/arm/include/asm/cp15.h
> > > +++ b/arch/arm/include/asm/cp15.h
> > > @@ -65,6 +65,9 @@
> > >  #define __write_sysreg(v, r, w, c, t)	asm volatile(w " " c : : "r" ((t)(v)))
> > >  #define write_sysreg(v, ...)		__write_sysreg(v, __VA_ARGS__)
> > >  
> > > +#define BPIALL				__ACCESS_CP15(c7, 0, c5, 6)
> > > +#define ICIALLU				__ACCESS_CP15(c7, 0, c5, 0)
> > > +
> > >  extern unsigned long cr_alignment;	/* defined in entry-armv.S */
> > >  
> > >  static inline unsigned long get_cr(void)
> > > diff --git a/arch/arm/include/asm/system_misc.h b/arch/arm/include/asm/system_misc.h
> > > index 78f6db114faf..3cfe010c5734 100644
> > > --- a/arch/arm/include/asm/system_misc.h
> > > +++ b/arch/arm/include/asm/system_misc.h
> > > @@ -15,6 +15,14 @@ void soft_restart(unsigned long);
> > >  extern void (*arm_pm_restart)(enum reboot_mode reboot_mode, const char *cmd);
> > >  extern void (*arm_pm_idle)(void);
> > >  
> > > +#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
> > > +extern void (*harden_branch_predictor)(void);
> > > +#define harden_branch_predictor() \
> > > +	do { if (harden_branch_predictor) harden_branch_predictor(); } while (0)
> > > +#else
> > > +#define harden_branch_predictor() do { } while (0)
> > > +#endif
> > > +
> > >  #define UDBG_UNDEFINED	(1 << 0)
> > >  #define UDBG_SYSCALL	(1 << 1)
> > >  #define UDBG_BADABORT	(1 << 2)
> > > diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
> > > index b75eada23d0a..3b1ba003c4f9 100644
> > > --- a/arch/arm/mm/fault.c
> > > +++ b/arch/arm/mm/fault.c
> > > @@ -163,6 +163,9 @@ __do_user_fault(struct task_struct *tsk, unsigned long addr,
> > >  {
> > >  	struct siginfo si;
> > >  
> > > +	if (addr > TASK_SIZE)
> > > +		harden_branch_predictor();
> > > +
> > >  #ifdef CONFIG_DEBUG_USER
> > >  	if (((user_debug & UDBG_SEGV) && (sig == SIGSEGV)) ||
> > >  	    ((user_debug & UDBG_BUS)  && (sig == SIGBUS))) {
> > > diff --git a/arch/arm/mm/proc-v7-bugs.c b/arch/arm/mm/proc-v7-bugs.c
> > > index a32ce13479d9..65a9b8141f86 100644
> > > --- a/arch/arm/mm/proc-v7-bugs.c
> > > +++ b/arch/arm/mm/proc-v7-bugs.c
> > > @@ -2,6 +2,12 @@
> > >  #include <linux/kernel.h>
> > >  #include <linux/smp.h>
> > >  
> > > +#include <asm/cp15.h>
> > > +#include <asm/cputype.h>
> > > +#include <asm/system_misc.h>
> > > +
> > > +void cpu_v7_bugs_init(void);
> > > +
> > >  static __maybe_unused void cpu_v7_check_auxcr_set(u32 mask, const char *msg)
> > >  {
> > >  	u32 aux_cr;
> > > @@ -21,9 +27,54 @@ static void check_spectre_auxcr(u32 bit)
> > >  void cpu_v7_ca8_ibe(void)
> > >  {
> > >  	check_spectre_auxcr(BIT(6));
> > > +	cpu_v7_bugs_init();
> > >  }
> > >  
> > >  void cpu_v7_ca15_ibe(void)
> > >  {
> > >  	check_spectre_auxcr(BIT(0));
> > > +	cpu_v7_bugs_init();
> > > +}
> > > +
> > > +#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
> > > +void (*harden_branch_predictor)(void);
> > > +
> > > +static void harden_branch_predictor_bpiall(void)
> > > +{
> > > +	write_sysreg(0, BPIALL);
> > > +}
> > > +
> > > +static void harden_branch_predictor_iciallu(void)
> > > +{
> > > +	write_sysreg(0, ICIALLU);
> > > +}
> > > +
> > > +void cpu_v7_bugs_init(void)
> > > +{
> > > +	const char *spectre_v2_method = NULL;
> > > +
> > > +	if (harden_branch_predictor)
> > > +		return;
> > 
> > How does it work on big-little systems where two CPUs have diverging
> > mitigation methods? Let's say a hypothetical A15/A17 system? Or even a
> > more common A15/A7 system, where the small core doesn't require the
> > mitigation?
> 
> Hmm, I'd forgotten about those, because I don't have them.
> 
> We don't have the ability to mitigate this on such systems at all at
> present; it would require a per-CPU cpu_switch_mm() implementation, and
> the code has no structure to support that without a considerable
> rewrite of the CPU glue support.
>
> I'm not even sure it could be done without checking deeper - I think there
> are some situations where we call this before we're sufficiently set up.

Confirmed.  We can't access per_cpu variables via cpu_switch_mm()
because it is used prior to the per_cpu offset being initialised in
the CPU.  Eg,

secondary_start_kernel
{
...
	cpu_switch_mm(mm->pgd, mm);
...
        cpu_init(); /* <== per cpu setup */
}

However, we can change harden_branch_predictor() to be a per-cpu
function pointer to solve some of your concern, but it's still
insufficient.
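
A rough sketch of that per-cpu variant (naming assumed, untested) would
be something like:

	typedef void (*harden_branch_predictor_fn_t)(void);
	DEFINE_PER_CPU(harden_branch_predictor_fn_t, harden_branch_predictor_fn);

	/* Fault path helper: run whatever hardening this CPU selected. */
	static inline void harden_branch_predictor(void)
	{
		harden_branch_predictor_fn_t fn = per_cpu(harden_branch_predictor_fn,
							  smp_processor_id());

		if (fn)
			fn();
	}

with each CPU installing its own callback from cpu_v7_bugs_init(), but
as said above that still leaves the cpu_switch_mm() side unsolved.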

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line in suburbia: sync at 8.8Mbps down 630kbps up
According to speedtest.net: 8.21Mbps down 510kbps up

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 08/14] ARM: spectre-v2: harden user aborts in kernel space
  2018-05-22 18:12         ` Russell King - ARM Linux
@ 2018-05-22 18:19           ` Florian Fainelli
  -1 siblings, 0 replies; 84+ messages in thread
From: Florian Fainelli @ 2018-05-22 18:19 UTC (permalink / raw)
  To: Russell King - ARM Linux, Marc Zyngier
  Cc: Christoffer Dall, linux-arm-kernel, kvmarm

On 05/22/2018 11:12 AM, Russell King - ARM Linux wrote:
> On Tue, May 22, 2018 at 06:56:03PM +0100, Russell King - ARM Linux wrote:
>> On Tue, May 22, 2018 at 06:15:02PM +0100, Marc Zyngier wrote:
>>> On 21/05/18 12:45, Russell King wrote:
>>>> In order to prevent aliasing attacks on the branch predictor,
>>>> invalidate the BTB or instruction cache on CPUs that are known to be
>>>> affected when taking an abort on an address that is outside of a user
>>>> task limit:
>>>>
>>>> Cortex A8, A9, A12, A17, A73, A75: flush BTB.
>>>> Cortex A15, Brahma B15: invalidate icache.
>>>>
>>>> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
>>>> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
>>>> ---
>>>>  arch/arm/include/asm/cp15.h        |  3 +++
>>>>  arch/arm/include/asm/system_misc.h |  8 ++++++
>>>>  arch/arm/mm/fault.c                |  3 +++
>>>>  arch/arm/mm/proc-v7-bugs.c         | 51 ++++++++++++++++++++++++++++++++++++++
>>>>  arch/arm/mm/proc-v7.S              |  8 +++---
>>>>  5 files changed, 70 insertions(+), 3 deletions(-)
>>>>
>>>> diff --git a/arch/arm/include/asm/cp15.h b/arch/arm/include/asm/cp15.h
>>>> index 4c9fa72b59f5..07e27f212dc7 100644
>>>> --- a/arch/arm/include/asm/cp15.h
>>>> +++ b/arch/arm/include/asm/cp15.h
>>>> @@ -65,6 +65,9 @@
>>>>  #define __write_sysreg(v, r, w, c, t)	asm volatile(w " " c : : "r" ((t)(v)))
>>>>  #define write_sysreg(v, ...)		__write_sysreg(v, __VA_ARGS__)
>>>>  
>>>> +#define BPIALL				__ACCESS_CP15(c7, 0, c5, 6)
>>>> +#define ICIALLU				__ACCESS_CP15(c7, 0, c5, 0)
>>>> +
>>>>  extern unsigned long cr_alignment;	/* defined in entry-armv.S */
>>>>  
>>>>  static inline unsigned long get_cr(void)
>>>> diff --git a/arch/arm/include/asm/system_misc.h b/arch/arm/include/asm/system_misc.h
>>>> index 78f6db114faf..3cfe010c5734 100644
>>>> --- a/arch/arm/include/asm/system_misc.h
>>>> +++ b/arch/arm/include/asm/system_misc.h
>>>> @@ -15,6 +15,14 @@ void soft_restart(unsigned long);
>>>>  extern void (*arm_pm_restart)(enum reboot_mode reboot_mode, const char *cmd);
>>>>  extern void (*arm_pm_idle)(void);
>>>>  
>>>> +#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
>>>> +extern void (*harden_branch_predictor)(void);
>>>> +#define harden_branch_predictor() \
>>>> +	do { if (harden_branch_predictor) harden_branch_predictor(); } while (0)
>>>> +#else
>>>> +#define harden_branch_predictor() do { } while (0)
>>>> +#endif
>>>> +
>>>>  #define UDBG_UNDEFINED	(1 << 0)
>>>>  #define UDBG_SYSCALL	(1 << 1)
>>>>  #define UDBG_BADABORT	(1 << 2)
>>>> diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
>>>> index b75eada23d0a..3b1ba003c4f9 100644
>>>> --- a/arch/arm/mm/fault.c
>>>> +++ b/arch/arm/mm/fault.c
>>>> @@ -163,6 +163,9 @@ __do_user_fault(struct task_struct *tsk, unsigned long addr,
>>>>  {
>>>>  	struct siginfo si;
>>>>  
>>>> +	if (addr > TASK_SIZE)
>>>> +		harden_branch_predictor();
>>>> +
>>>>  #ifdef CONFIG_DEBUG_USER
>>>>  	if (((user_debug & UDBG_SEGV) && (sig == SIGSEGV)) ||
>>>>  	    ((user_debug & UDBG_BUS)  && (sig == SIGBUS))) {
>>>> diff --git a/arch/arm/mm/proc-v7-bugs.c b/arch/arm/mm/proc-v7-bugs.c
>>>> index a32ce13479d9..65a9b8141f86 100644
>>>> --- a/arch/arm/mm/proc-v7-bugs.c
>>>> +++ b/arch/arm/mm/proc-v7-bugs.c
>>>> @@ -2,6 +2,12 @@
>>>>  #include <linux/kernel.h>
>>>>  #include <linux/smp.h>
>>>>  
>>>> +#include <asm/cp15.h>
>>>> +#include <asm/cputype.h>
>>>> +#include <asm/system_misc.h>
>>>> +
>>>> +void cpu_v7_bugs_init(void);
>>>> +
>>>>  static __maybe_unused void cpu_v7_check_auxcr_set(u32 mask, const char *msg)
>>>>  {
>>>>  	u32 aux_cr;
>>>> @@ -21,9 +27,54 @@ static void check_spectre_auxcr(u32 bit)
>>>>  void cpu_v7_ca8_ibe(void)
>>>>  {
>>>>  	check_spectre_auxcr(BIT(6));
>>>> +	cpu_v7_bugs_init();
>>>>  }
>>>>  
>>>>  void cpu_v7_ca15_ibe(void)
>>>>  {
>>>>  	check_spectre_auxcr(BIT(0));
>>>> +	cpu_v7_bugs_init();
>>>> +}
>>>> +
>>>> +#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
>>>> +void (*harden_branch_predictor)(void);
>>>> +
>>>> +static void harden_branch_predictor_bpiall(void)
>>>> +{
>>>> +	write_sysreg(0, BPIALL);
>>>> +}
>>>> +
>>>> +static void harden_branch_predictor_iciallu(void)
>>>> +{
>>>> +	write_sysreg(0, ICIALLU);
>>>> +}
>>>> +
>>>> +void cpu_v7_bugs_init(void)
>>>> +{
>>>> +	const char *spectre_v2_method = NULL;
>>>> +
>>>> +	if (harden_branch_predictor)
>>>> +		return;
>>>
>>> How does it work on big-little systems where two CPUs have diverging
>>> mitigation methods? Let's say a hypothetical A15/A17 system? Or even a
>>> more common A15/A7 system, where the small core doesn't require the
>>> mitigation?
>>
>> Hmm, I'd forgotten about those, because I don't have them.
>>
>> We don't have the ability to mitigate this on such systems at all at
>> present; it would require a per-CPU cpu_switch_mm() implementation, and
>> the code has no structure to support that without a considerable
>> rewrite of the CPU glue support.
>>
>> I'm not even sure it could be done without checking deeper - I think there
>> are some situations where we call this before we're sufficiently set up.
> 
> Confirmed.  We can't access per_cpu variables via cpu_switch_mm()
> because it is used prior to the per_cpu offset being initialised in
> the CPU.  Eg,
> 
> secondary_start_kernel
> {
> ...
> 	cpu_switch_mm(mm->pgd, mm);
> ...
>         cpu_init(); /* <== per cpu setup */
> }
> 
> However, we can change harden_branch_predictor() to be a per-cpu
> function pointer to solve some of your concern, but it's still
> insufficient.

I hate to play that card, but we have all been waiting for these patches
to land in Linus' tree, so on one hand, waiting for the next merge
window is probably doable, since nothing is there. On the other hand,
does this absolutely need to be addressed right now?
-- 
Florian

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 06/14] ARM: spectre-v2: harden branch predictor on context switches
  2018-05-21 11:44   ` Russell King
@ 2018-05-22 18:27     ` Tony Lindgren
  -1 siblings, 0 replies; 84+ messages in thread
From: Tony Lindgren @ 2018-05-22 18:27 UTC (permalink / raw)
  To: Russell King
  Cc: Marc Zyngier, Florian Fainelli, Christoffer Dall,
	linux-arm-kernel, kvmarm

* Russell King <rmk+kernel@armlinux.org.uk> [180521 12:06]:
> Harden the branch predictor against Spectre v2 attacks on context
> switches for ARMv7 and later CPUs.  We do this by:
> 
> Cortex A9, A12, A17, A73, A75: invalidating the BTB.
> Cortex A15, Brahma B15: invalidating the instruction cache.
> 
> Cortex A57 and Cortex A72 are not addressed in this patch.
> 
> Cortex R7 and Cortex R8 are also not addressed as we do not enforce
> memory protection on these cores.

Not seeing regressions so:

Tested-by: Tony Lindgren <tony@atomide.com>

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 07/14] ARM: spectre-v2: add Cortex A8 and A15 validation of the IBE bit
  2018-05-21 11:44   ` Russell King
@ 2018-05-22 18:28     ` Tony Lindgren
  -1 siblings, 0 replies; 84+ messages in thread
From: Tony Lindgren @ 2018-05-22 18:28 UTC (permalink / raw)
  To: Russell King
  Cc: Marc Zyngier, Florian Fainelli, Christoffer Dall,
	linux-arm-kernel, kvmarm

* Russell King <rmk+kernel@armlinux.org.uk> [180521 12:09]:
> When the branch predictor hardening is enabled, firmware must have set
> the IBE bit in the auxiliary control register.  If this bit has not
> been set, the Spectre workarounds will not be functional.
> 
> Add validation that this bit is set, and print a warning at alert level
> if this is not the case.
> 
> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>

Yup the alert is working:

Tested-by: Tony Lindgren <tony@atomide.com>

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 08/14] ARM: spectre-v2: harden user aborts in kernel space
  2018-05-22 17:15     ` Marc Zyngier
@ 2018-05-22 23:25       ` Russell King - ARM Linux
  -1 siblings, 0 replies; 84+ messages in thread
From: Russell King - ARM Linux @ 2018-05-22 23:25 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: Florian Fainelli, Christoffer Dall, linux-arm-kernel, kvmarm

On Tue, May 22, 2018 at 06:15:02PM +0100, Marc Zyngier wrote:
> On 21/05/18 12:45, Russell King wrote:
> > +	switch (read_cpuid_part()) {
> > +	case ARM_CPU_PART_CORTEX_A8:
> > +	case ARM_CPU_PART_CORTEX_A9:
> > +	case ARM_CPU_PART_CORTEX_A12:
> > +	case ARM_CPU_PART_CORTEX_A17:
> > +	case ARM_CPU_PART_CORTEX_A73:
> > +	case ARM_CPU_PART_CORTEX_A75:
> > +		harden_branch_predictor = harden_branch_predictor_bpiall;
> > +		spectre_v2_method = "BPIALL";
> > +		break;
> 
> You don't seem to take into account the PFR0.CSV2 field which indicates
> that the CPU has a branch predictor that is immune to Spectre-v2.

That information is not covered in the description of the vulnerability
- the published information on the security-updates site states that
BPIALL is required without stating any conditions.

> See for example the Cortex-A75 r3p0 TRM[1].

So which cores should such a test be applied to?  As I mentioned,
the support site doesn't give this detail.  That brings up the
obvious question: what else does the web page miss out on?

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line in suburbia: sync at 8.8Mbps down 630kbps up
According to speedtest.net: 8.21Mbps down 510kbps up

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 09/14] ARM: spectre-v2: add PSCI based hardening
  2018-05-22 17:57       ` Russell King - ARM Linux
@ 2018-05-23  7:25         ` Marc Zyngier
  -1 siblings, 0 replies; 84+ messages in thread
From: Marc Zyngier @ 2018-05-23  7:25 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: Florian Fainelli, Christoffer Dall, linux-arm-kernel, kvmarm

On Tue, 22 May 2018 18:57:18 +0100,
Russell King wrote:
> 
> On Tue, May 22, 2018 at 06:24:13PM +0100, Marc Zyngier wrote:
> > On 21/05/18 12:45, Russell King wrote:
> > > Add PSCI based hardening for cores that require more complex handling in
> > > firmware.
> > > 
> > > Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
> > > Acked-by: Marc Zyngier <marc.zyngier@arm.com>
> > > ---
> > >  arch/arm/mm/proc-v7-bugs.c | 50 ++++++++++++++++++++++++++++++++++++++++++++++
> > >  arch/arm/mm/proc-v7.S      | 21 +++++++++++++++++++
> > >  2 files changed, 71 insertions(+)
> > > 
> > > diff --git a/arch/arm/mm/proc-v7-bugs.c b/arch/arm/mm/proc-v7-bugs.c
> > > index 65a9b8141f86..0c37e6a2830d 100644
> > > --- a/arch/arm/mm/proc-v7-bugs.c
> > > +++ b/arch/arm/mm/proc-v7-bugs.c
> > > @@ -1,9 +1,12 @@
> > >  // SPDX-License-Identifier: GPL-2.0
> > > +#include <linux/arm-smccc.h>
> > >  #include <linux/kernel.h>
> > > +#include <linux/psci.h>
> > >  #include <linux/smp.h>
> > >  
> > >  #include <asm/cp15.h>
> > >  #include <asm/cputype.h>
> > > +#include <asm/proc-fns.h>
> > >  #include <asm/system_misc.h>
> > >  
> > >  void cpu_v7_bugs_init(void);
> > > @@ -39,6 +42,9 @@ void cpu_v7_ca15_ibe(void)
> > >  #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
> > >  void (*harden_branch_predictor)(void);
> > >  
> > > +extern void cpu_v7_smc_switch_mm(phys_addr_t pgd_phys, struct mm_struct *mm);
> > > +extern void cpu_v7_hvc_switch_mm(phys_addr_t pgd_phys, struct mm_struct *mm);
> > > +
> > >  static void harden_branch_predictor_bpiall(void)
> > >  {
> > >  	write_sysreg(0, BPIALL);
> > > @@ -49,6 +55,18 @@ static void harden_branch_predictor_iciallu(void)
> > >  	write_sysreg(0, ICIALLU);
> > >  }
> > >  
> > > +#ifdef CONFIG_ARM_PSCI
> > > +static void call_smc_arch_workaround_1(void)
> > > +{
> > > +	arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL);
> > > +}
> > > +
> > > +static void call_hvc_arch_workaround_1(void)
> > > +{
> > > +	arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL);
> > > +}
> > > +#endif
> > > +
> > >  void cpu_v7_bugs_init(void)
> > >  {
> > >  	const char *spectre_v2_method = NULL;
> > > @@ -73,6 +91,38 @@ void cpu_v7_bugs_init(void)
> > >  		spectre_v2_method = "ICIALLU";
> > >  		break;
> > >  	}
> > > +
> > > +#ifdef CONFIG_ARM_PSCI
> > > +	if (psci_ops.smccc_version != SMCCC_VERSION_1_0) {
> > > +		struct arm_smccc_res res;
> > > +
> > > +		switch (psci_ops.conduit) {
> > > +		case PSCI_CONDUIT_HVC:
> > > +			arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
> > > +					  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
> > > +			if ((int)res.a0 < 0)
> > > +				break;
> > 
> > I just realised that there is a small, but significant difference
> > between this and the arm64 version: On arm64, we have a table of
> > vulnerable implementations, and we try the mitigation on a per-cpu
> > basis. Here, you entirely rely on the firmware to discover whether the
> > CPU needs mitigation or not. You then need to check for a return value
> > of 1, which indicates that although the mitigation is implemented, it is
> > not required on this particular CPU.
> > 
> > But that's probably moot if you don't support BL systems.
> > 
> > > +			harden_branch_predictor = call_hvc_arch_workaround_1;
> > > +			processor.switch_mm = cpu_v7_hvc_switch_mm;
> > > +			spectre_v2_method = "hypervisor";
> > > +			break;
> > > +
> > > +		case PSCI_CONDUIT_SMC:
> > > +			arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
> > > +					  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
> > > +			if ((int)res.a0 < 0)
> > > +				break;
> > > +			harden_branch_predictor = call_smc_arch_workaround_1;
> > > +			processor.switch_mm = cpu_v7_smc_switch_mm;
> > > +			spectre_v2_method = "firmware PSCI";
> > 
> > My previous remark still stands: this is not really PSCI.
> 
> Sorry, no.  Your comment was for the HVC call, not the SMC.  You said
> nothing about this one.

My bad then. For all intents and purposes, they are the same thing,
just serviced by a different exception level.

	M.

-- 
Jazz is not dead, it just smell funny.

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 13/14] ARM: KVM: Add SMCCC_ARCH_WORKAROUND_1 fast handling
  2018-05-21 11:45   ` Russell King
@ 2018-05-23 10:50     ` Marc Zyngier
  -1 siblings, 0 replies; 84+ messages in thread
From: Marc Zyngier @ 2018-05-23 10:50 UTC (permalink / raw)
  To: Russell King, linux-arm-kernel; +Cc: Florian Fainelli, kvmarm

On 21/05/18 12:45, Russell King wrote:
> We want SMCCC_ARCH_WORKAROUND_1 to be fast. As fast as possible.
> So let's intercept it as early as we can by testing for the
> function call number as soon as we've identified a HVC call
> coming from the guest.
> 
> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
> ---
>  arch/arm/kvm/hyp/hyp-entry.S | 13 ++++++++++++-
>  1 file changed, 12 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm/kvm/hyp/hyp-entry.S b/arch/arm/kvm/hyp/hyp-entry.S
> index 918a05dd2d63..67de45685e29 100644
> --- a/arch/arm/kvm/hyp/hyp-entry.S
> +++ b/arch/arm/kvm/hyp/hyp-entry.S
> @@ -16,6 +16,7 @@
>   * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
>   */
>  
> +#include <linux/arm-smccc.h>
>  #include <linux/linkage.h>
>  #include <asm/kvm_arm.h>
>  #include <asm/kvm_asm.h>
> @@ -202,7 +203,7 @@ ENDPROC(__hyp_do_panic)
>  	lsr     r2, r2, #16
>  	and     r2, r2, #0xff
>  	cmp     r2, #0
> -	bne	guest_trap		@ Guest called HVC
> +	bne	guest_hvc_trap		@ Guest called HVC
>  
>  	/*
>  	 * Getting here means host called HVC, we shift parameters and branch
> @@ -253,6 +254,16 @@ THUMB(	orr	lr, #1)
>  	pop	{r2, lr}
>  	eret
>  
> +guest_hvc_trap:
> +	movw	ip, #:lower16:ARM_SMCCC_ARCH_WORKAROUND_1
> +	movt	ip, #:upper16:ARM_SMCCC_ARCH_WORKAROUND_1

r12 is a live guest register, and only r0-r2 are saved at that stage.
You could additionally corrupt r3 though, once you've identified that
you're in the context of an SMCCC 1.1 call.

You should be able to replace r12 with r2.

> +	ldr	r0, [sp]		@ Guest's r0
> +	teq	r0, ip
> +	bne	guest_trap
> +	pop	{r0, r1, r2}

You could replace this with the slightly more efficient

	add	sp, sp, #12

since we don't need to restore those registers to the guest. r2 would be
left containing ARM_SMCCC_ARCH_WORKAROUND_1 (harmless), and r1 has the
HSR value (perfectly predictable).

> +	mov	r0, #0
> +	eret
> +
>  guest_trap:
>  	load_vcpu r0			@ Load VCPU pointer to r0
>  
> 

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 09/14] ARM: spectre-v2: add PSCI based hardening
  2018-05-22 17:24     ` Marc Zyngier
@ 2018-05-23 19:45       ` Russell King - ARM Linux
  -1 siblings, 0 replies; 84+ messages in thread
From: Russell King - ARM Linux @ 2018-05-23 19:45 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: Florian Fainelli, linux-arm-kernel, kvmarm

On Tue, May 22, 2018 at 06:24:13PM +0100, Marc Zyngier wrote:
> On 21/05/18 12:45, Russell King wrote:
> > +#ifdef CONFIG_ARM_PSCI
> > +	if (psci_ops.smccc_version != SMCCC_VERSION_1_0) {
> > +		struct arm_smccc_res res;
> > +
> > +		switch (psci_ops.conduit) {
> > +		case PSCI_CONDUIT_HVC:
> > +			arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
> > +					  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
> > +			if ((int)res.a0 < 0)
> > +				break;
> 
> I just realised that there is a small, but significant difference
> between this and the arm64 version: On arm64, we have a table of
> vulnerable implementations, and we try the mitigation on a per-cpu
> basis. Here, you entirely rely on the firmware to discover whether the
> CPU needs mitigation or not. You then need to check for a return value
> of 1, which indicates that although the mitigation is implemented, it is
> not required on this particular CPU.

Okay, so digging further into the documentation seems to suggest that we
only need to check the firmware for A72 and A57 CPUs, and given this
statement:

"Arm recommends that the caller only call this on PEs for which a
 firmware based mitigation of CVE-2017-5715 is required, or where
 a local workaround is infeasible."

it seems that the right answer is to ignore the PSCI based methods when
we have anything but these CPUs.  Do you agree?
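
IOW, something along these lines gating the whole firmware probe (an
untested sketch with a made-up helper, and assuming we grow the A57/A72
part IDs in cputype.h):

	static bool spectre_v2_needs_firmware(void)
	{
		switch (read_cpuid_part()) {
		case ARM_CPU_PART_CORTEX_A57:
		case ARM_CPU_PART_CORTEX_A72:
			/* no usable local workaround - ask the firmware */
			return true;
		default:
			/* handled locally, or not affected */
			return false;
		}
	}

and only doing the ARCH_FEATURES/ARCH_WORKAROUND_1 discovery when it
returns true.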

> But that's probably moot if you don't support BL systems.

Any bL systems with A72 or A57?

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line in suburbia: sync at 8.8Mbps down 630kbps up
According to speedtest.net: 8.21Mbps down 510kbps up

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 09/14] ARM: spectre-v2: add PSCI based hardening
  2018-05-23 19:45       ` Russell King - ARM Linux
@ 2018-05-24 12:03         ` Marc Zyngier
  -1 siblings, 0 replies; 84+ messages in thread
From: Marc Zyngier @ 2018-05-24 12:03 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: Florian Fainelli, Christoffer Dall, linux-arm-kernel, kvmarm

On 23/05/18 20:45, Russell King - ARM Linux wrote:
> On Tue, May 22, 2018 at 06:24:13PM +0100, Marc Zyngier wrote:
>> On 21/05/18 12:45, Russell King wrote:
>>> +#ifdef CONFIG_ARM_PSCI
>>> +	if (psci_ops.smccc_version != SMCCC_VERSION_1_0) {
>>> +		struct arm_smccc_res res;
>>> +
>>> +		switch (psci_ops.conduit) {
>>> +		case PSCI_CONDUIT_HVC:
>>> +			arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
>>> +					  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
>>> +			if ((int)res.a0 < 0)
>>> +				break;
>>
>> I just realised that there is a small, but significant difference
>> between this and the arm64 version: On arm64, we have a table of
>> vulnerable implementations, and we try the mitigation on a per-cpu
>> basis. Here, you entirely rely on the firmware to discover whether the
>> CPU needs mitigation or not. You then need to check for a return value
>> of 1, which indicates that although the mitigation is implemented, it is
>> not required on this particular CPU.
> 
> Okay, so digging further into the documentation seems to suggest that we
> only need to check the firmware for A72 and A57 CPUs, and given this
> statement:
> 
> "Arm recommends that the caller only call this on PEs for which a
>  firmware based mitigation of CVE-2017-5715 is required, or where
>  a local workaround is infeasible."
> 
> it seems that the right answer is to ignore the PSCI based methods when
> we have anything but these CPUs.  Do you agree?

For CPUs that are produced by ARM, I agree. I don't know about CPUs
produced by ARM licensees though, so I'd rather use the opposite logic:
Use the firmware unless the CPU is one of those that can be easily
mitigated at EL1 (or isn't affected).

>> But that's probably moot if you don't support BL systems.
> 
> Any bL systems with A72 or A57?

Juno is a canonical example of such a system (either 2xA57+4xA53, or
2xA72+4xA53), and there is plenty of partner silicon in the wild.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 09/14] ARM: spectre-v2: add PSCI based hardening
  2018-05-24 12:03         ` Marc Zyngier
@ 2018-05-24 12:30           ` Russell King - ARM Linux
  -1 siblings, 0 replies; 84+ messages in thread
From: Russell King - ARM Linux @ 2018-05-24 12:30 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: Florian Fainelli, Christoffer Dall, linux-arm-kernel, kvmarm

On Thu, May 24, 2018 at 01:03:50PM +0100, Marc Zyngier wrote:
> On 23/05/18 20:45, Russell King - ARM Linux wrote:
> > On Tue, May 22, 2018 at 06:24:13PM +0100, Marc Zyngier wrote:
> >> On 21/05/18 12:45, Russell King wrote:
> >>> +#ifdef CONFIG_ARM_PSCI
> >>> +	if (psci_ops.smccc_version != SMCCC_VERSION_1_0) {
> >>> +		struct arm_smccc_res res;
> >>> +
> >>> +		switch (psci_ops.conduit) {
> >>> +		case PSCI_CONDUIT_HVC:
> >>> +			arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
> >>> +					  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
> >>> +			if ((int)res.a0 < 0)
> >>> +				break;
> >>
> >> I just realised that there is a small, but significant difference
> >> between this and the arm64 version: On arm64, we have a table of
> >> vulnerable implementations, and we try the mitigation on a per-cpu
> >> basis. Here, you entirely rely on the firmware to discover whether the
> >> CPU needs mitigation or not. You then need to check for a return value
> >> of 1, which indicates that although the mitigation is implemented, it is
> >> not required on this particular CPU.
> > 
> > Okay, so digging further into the documentation seems to suggest that we
> > only need to check the firmware for A72 and A57 CPUs, and given this
> > statement:
> > 
> > "Arm recommends that the caller only call this on PEs for which a
> >  firmware based mitigation of CVE-2017-5715 is required, or where
> >  a local workaround is infeasible."
> > 
> > it seems that the right answer is to ignore the PSCI based methods when
> > we have anything but these CPUs.  Do you agree?
> 
> For CPUs that are produced by ARM, I agree. I don't know about CPUs
> produced by ARM licensees though, so I'd rather use the opposite logic:
> Use the firmware unless the CPU is one of those that can be easily
> mitigated at EL1 (or isn't affected).

The "or isn't affected" is the difficult bit - I guess we could match
on the CPU vendor field though, and just reject all ARM CPUs that
aren't explicitly listed as having a problem.

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line in suburbia: sync at 8.8Mbps down 630kbps up
According to speedtest.net: 8.21Mbps down 510kbps up

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 09/14] ARM: spectre-v2: add PSCI based hardening
  2018-05-24 12:30           ` Russell King - ARM Linux
@ 2018-05-24 12:49             ` Marc Zyngier
  -1 siblings, 0 replies; 84+ messages in thread
From: Marc Zyngier @ 2018-05-24 12:49 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: Florian Fainelli, Christoffer Dall, linux-arm-kernel, kvmarm

On 24/05/18 13:30, Russell King - ARM Linux wrote:
> On Thu, May 24, 2018 at 01:03:50PM +0100, Marc Zyngier wrote:
>> On 23/05/18 20:45, Russell King - ARM Linux wrote:
>>> On Tue, May 22, 2018 at 06:24:13PM +0100, Marc Zyngier wrote:
>>>> On 21/05/18 12:45, Russell King wrote:
>>>>> +#ifdef CONFIG_ARM_PSCI
>>>>> +	if (psci_ops.smccc_version != SMCCC_VERSION_1_0) {
>>>>> +		struct arm_smccc_res res;
>>>>> +
>>>>> +		switch (psci_ops.conduit) {
>>>>> +		case PSCI_CONDUIT_HVC:
>>>>> +			arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
>>>>> +					  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
>>>>> +			if ((int)res.a0 < 0)
>>>>> +				break;
>>>>
>>>> I just realised that there is a small, but significant difference
>>>> between this and the arm64 version: On arm64, we have a table of
>>>> vulnerable implementations, and we try the mitigation on a per-cpu
>>>> basis. Here, you entirely rely on the firmware to discover whether the
>>>> CPU needs mitigation or not. You then need to check for a return value
>>>> of 1, which indicates that although the mitigation is implemented, it is
>>>> not required on this particular CPU.
>>>
>>> Okay, so digging further into the documentation seems to suggest that we
>>> only need to check the firmware for A72 and A57 CPUs, and given this
>>> statement:
>>>
>>> "Arm recommends that the caller only call this on PEs for which a
>>>  firmware based mitigation of CVE-2017-5715 is required, or where
>>>  a local workaround is infeasible."
>>>
>>> it seems that the right answer is to ignore the PSCI based methods when
>>> we have anything but these CPUs.  Do you agree?
>>
>> For CPUs that are produced by ARM, I agree. I don't know about CPUs
>> produced by ARM licensees though, so I'd rather use the opposite logic:
>> Use the firmware unless the CPU is one of those that can be easily
>> mitigated at EL1 (or isn't affected).
> 
> The "or isn't affected" is the difficult bit - I guess we could match
> on the CPU vendor field though, and just reject all ARM CPUs that
> aren't explicitly listed as having a problem.

That seems sensible. ARM has published an exhaustive status for all its
cores, which we can trust. For architecture licensees, I'm not aware of
such a list, but I'd expect them to communicate one if they were affected.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 09/14] ARM: spectre-v2: add PSCI based hardening
  2018-05-24 12:49             ` Marc Zyngier
@ 2018-05-24 13:04               ` Russell King - ARM Linux
  -1 siblings, 0 replies; 84+ messages in thread
From: Russell King - ARM Linux @ 2018-05-24 13:04 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: Florian Fainelli, Christoffer Dall, linux-arm-kernel, kvmarm

On Thu, May 24, 2018 at 01:49:51PM +0100, Marc Zyngier wrote:
> On 24/05/18 13:30, Russell King - ARM Linux wrote:
> > On Thu, May 24, 2018 at 01:03:50PM +0100, Marc Zyngier wrote:
> >> On 23/05/18 20:45, Russell King - ARM Linux wrote:
> >>> On Tue, May 22, 2018 at 06:24:13PM +0100, Marc Zyngier wrote:
> >>>> On 21/05/18 12:45, Russell King wrote:
> >>>>> +#ifdef CONFIG_ARM_PSCI
> >>>>> +	if (psci_ops.smccc_version != SMCCC_VERSION_1_0) {
> >>>>> +		struct arm_smccc_res res;
> >>>>> +
> >>>>> +		switch (psci_ops.conduit) {
> >>>>> +		case PSCI_CONDUIT_HVC:
> >>>>> +			arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
> >>>>> +					  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
> >>>>> +			if ((int)res.a0 < 0)
> >>>>> +				break;
> >>>>
> >>>> I just realised that there is a small, but significant difference
> >>>> between this and the arm64 version: On arm64, we have a table of
> >>>> vulnerable implementations, and we try the mitigation on a per-cpu
> >>>> basis. Here, you entirely rely on the firmware to discover whether the
> >>>> CPU needs mitigation or not. You then need to check for a return value
> >>>> of 1, which indicates that although the mitigation is implemented, it is
> >>>> not required on this particular CPU.
> >>>
> >>> Okay, so digging further into the documentation seems to suggest that we
> >>> only need to check the firmware for A72 and A57 CPUs, and given this
> >>> statement:
> >>>
> >>> "Arm recommends that the caller only call this on PEs for which a
> >>>  firmware based mitigation of CVE-2017-5715 is required, or where
> >>>  a local workaround is infeasible."
> >>>
> >>> it seems that the right answer is to ignore the PSCI based methods when
> >>> we have anything but these CPUs.  Do you agree?
> >>
> >> For CPUs that are produced by ARM, I agree. I don't know about CPUs
> >> produced by ARM licensees though, so I'd rather use the opposite logic:
> >> Use the firmware unless the CPU is one of those that can be easily
> >> mitigated at EL1 (or isn't affected).
> > 
> > The "or isn't affected" is the difficult bit - I guess we could match
> > on the CPU vendor field though, and just reject all ARM CPUs that
> > aren't explicitly listed as having a problem.
> 
> That seems sensible. ARM has published an exhaustive status for all its
> cores, which we can trust. For architecture licensees, I'm not aware of
> such a list, but I'd expect them to communicate one if they were affected.

It's not that simple - there's an exhaustive list for those affected
cores, but it says that cores which aren't listed are unaffected.

If we want to explicitly list each core, we need a complete list of
both affected and unaffected cores to ensure that none are missed.
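
Roughly, on top of the existing part-number switch, the decision could
look like this (untested, "need_firmware" purely illustrative):

	bool need_firmware;

	switch (read_cpuid_part()) {
	case ARM_CPU_PART_CORTEX_A57:
	case ARM_CPU_PART_CORTEX_A72:
		need_firmware = true;
		break;
	default:
		/*
		 * ARM cores not listed as affected are treated as
		 * unaffected; anything from another implementer falls
		 * back to the firmware probe.
		 */
		need_firmware = read_cpuid_implementor() != ARM_CPU_IMP_ARM;
		break;
	}

which only needs the published list of affected ARM cores, not a
complete list of every core ever made.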

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line in suburbia: sync at 8.8Mbps down 630kbps up
According to speedtest.net: 8.21Mbps down 510kbps up

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH v2 00/14] ARM Spectre variant 2 fixes
  2018-05-21 11:42 ` Russell King - ARM Linux
@ 2018-05-24 23:18   ` Florian Fainelli
  -1 siblings, 0 replies; 84+ messages in thread
From: Florian Fainelli @ 2018-05-24 23:18 UTC (permalink / raw)
  To: Russell King - ARM Linux, linux-arm-kernel
  Cc: Marc Zyngier, tony, kvmarm, Christoffer Dall

On 05/21/2018 04:42 AM, Russell King - ARM Linux wrote:
> This is the second posting - the original cover note is below.  Comments
> from previous series addressed:
> - Drop R7 and R8 changes.
> - Remove "PSCI" from the hypervisor version of the workaround.
> 
>  arch/arm/include/asm/bugs.h        |   6 +-
>  arch/arm/include/asm/cp15.h        |   3 +
>  arch/arm/include/asm/cputype.h     |   5 ++
>  arch/arm/include/asm/kvm_asm.h     |   2 -
>  arch/arm/include/asm/kvm_host.h    |  14 +++-
>  arch/arm/include/asm/kvm_mmu.h     |  23 +++++-
>  arch/arm/include/asm/proc-fns.h    |   4 +
>  arch/arm/include/asm/system_misc.h |   8 ++
>  arch/arm/kernel/Makefile           |   1 +
>  arch/arm/kernel/bugs.c             |  18 +++++
>  arch/arm/kernel/smp.c              |   4 +
>  arch/arm/kernel/suspend.c          |   2 +
>  arch/arm/kvm/hyp/hyp-entry.S       | 108 +++++++++++++++++++++++++-
>  arch/arm/mm/Kconfig                |  23 ++++++
>  arch/arm/mm/Makefile               |   2 +-
>  arch/arm/mm/fault.c                |   3 +
>  arch/arm/mm/proc-macros.S          |   3 +-
>  arch/arm/mm/proc-v7-2level.S       |   6 --
>  arch/arm/mm/proc-v7-bugs.c         | 130 +++++++++++++++++++++++++++++++
>  arch/arm/mm/proc-v7.S              | 154 +++++++++++++++++++++++++++++--------
>  20 files changed, 469 insertions(+), 50 deletions(-)
>  create mode 100644 arch/arm/kernel/bugs.c
>  create mode 100644 arch/arm/mm/proc-v7-bugs.c

Since there appears to be more work needed in the PSCI/KVM changes
(patches 9 through 14), how about we go with the "bare-metal" parts:
patches 1-8 first and try to get those included ASAP?

The rationale being that a lot of affected people have Linux running on
ARMv7-A Cortex-A, typically A9, A15, Brahma-B15, and are in need of
those patches but do not necessarily require KVM fixes right now, and
even less so PSCI infrastructure to mitigate ARMv8-A running in AArch32.

In terms of backporting to -stable, and because the spectre variant 1
patches have not been submitted yet, it is not like we can lump
everything in one go anyway, so we are not making the lives of the
-stable maintainers any worse than they currently are?

Yay or nay?

> 
> On Wed, May 16, 2018 at 11:59:49AM +0100, Russell King - ARM Linux wrote:
>> This series addresses the Spectre variant 2 issues on ARM Cortex and
>> Broadcom Brahma B15 CPUs.  Due to the complexity of the bug, it is not
>> possible to verify that this series fixes any of the bugs, since it
>> has not been able to reproduce these exact scenarios using test
>> programs.
>>
>> I believe that this covers the entire extent of the Spectre variant 2
>> issues, with the exception of Cortex A53 and Cortex A72 processors as
>> these require a substantially more complex solution (except where the
>> workaround is implemented in PSCI firmware.)
>>
>> Spectre variant 1 is not covered by this series.
>>
>> The patch series is based partly on Marc Zyngier's work from February -
>> two of the KVM patches are from Marc's work.
>>
>> The main differences are:
>> - Inclusion of more processors as per current ARM Ltd security update
>>   documentation.
>> - Extension of "bugs" infrastructure to detect Cortex A8 and Cortex A15
>>   CPUs missing out on the IBE bit being set on (re-)entry to the kernel
>>   through all paths.
>> - Handle all suspect userspace-touching-kernelspace aborts irrespective
>>   of mapping type.
>>
>> The first patch will trivially conflict with the Broadcom Brahma
>> updates already in arm-soc - it has been necessary to independently
>> add the ID definitions for the B15 CPU.
>>
>> Having worked through this series, I'm of the opinion that the
>> define_processor_functions macro in proc-v7 are probably  more hassle
>> than they're worth - here, we don't need the global equivalent symbols,
>> because we never refer to them from the kernel code for any V7
>> processor (MULTI_CPU is always defined.)
>>
>> This series is currently in my "spectre" branch (along with some
>> Spectre variant 1 patches.)
>>
>> Please carefully review.
>>
>>  arch/arm/include/asm/bugs.h        |   6 +-
>>  arch/arm/include/asm/cp15.h        |   3 +
>>  arch/arm/include/asm/cputype.h     |   5 ++
>>  arch/arm/include/asm/kvm_asm.h     |   2 -
>>  arch/arm/include/asm/kvm_host.h    |  14 +++-
>>  arch/arm/include/asm/kvm_mmu.h     |  23 +++++-
>>  arch/arm/include/asm/proc-fns.h    |   4 +
>>  arch/arm/include/asm/system_misc.h |   8 ++
>>  arch/arm/kernel/Makefile           |   1 +
>>  arch/arm/kernel/bugs.c             |  18 +++++
>>  arch/arm/kernel/smp.c              |   4 +
>>  arch/arm/kernel/suspend.c          |   2 +
>>  arch/arm/kvm/hyp/hyp-entry.S       | 108 ++++++++++++++++++++++++-
>>  arch/arm/mm/Kconfig                |  23 ++++++
>>  arch/arm/mm/Makefile               |   2 +-
>>  arch/arm/mm/fault.c                |   3 +
>>  arch/arm/mm/proc-macros.S          |   3 +-
>>  arch/arm/mm/proc-v7-2level.S       |   6 --
>>  arch/arm/mm/proc-v7-bugs.c         | 130 ++++++++++++++++++++++++++++++
>>  arch/arm/mm/proc-v7.S              | 158 +++++++++++++++++++++++++++++--------
>>  20 files changed, 471 insertions(+), 52 deletions(-)
>>
>> -- 
>> RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
>> FTTC broadband for 0.8mile line in suburbia: sync at 8.8Mbps down 630kbps up
>> According to speedtest.net: 8.21Mbps down 510kbps up
>>
>> _______________________________________________
>> linux-arm-kernel mailing list
>> linux-arm-kernel@lists.infradead.org
>> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
> 


-- 
Florian

^ permalink raw reply	[flat|nested] 84+ messages in thread

* [PATCH v2 00/14] ARM Spectre variant 2 fixes
@ 2018-05-24 23:18   ` Florian Fainelli
  0 siblings, 0 replies; 84+ messages in thread
From: Florian Fainelli @ 2018-05-24 23:18 UTC (permalink / raw)
  To: linux-arm-kernel

On 05/21/2018 04:42 AM, Russell King - ARM Linux wrote:
> This is the second posting - the original cover note is below.  Comments
> from previous series addresesd:
> - Drop R7 and R8 changes.
> - Remove "PSCI" from the hypervisor version of the workaround.
> 
>  arch/arm/include/asm/bugs.h        |   6 +-
>  arch/arm/include/asm/cp15.h        |   3 +
>  arch/arm/include/asm/cputype.h     |   5 ++
>  arch/arm/include/asm/kvm_asm.h     |   2 -
>  arch/arm/include/asm/kvm_host.h    |  14 +++-
>  arch/arm/include/asm/kvm_mmu.h     |  23 +++++-
>  arch/arm/include/asm/proc-fns.h    |   4 +
>  arch/arm/include/asm/system_misc.h |   8 ++
>  arch/arm/kernel/Makefile           |   1 +
>  arch/arm/kernel/bugs.c             |  18 +++++
>  arch/arm/kernel/smp.c              |   4 +
>  arch/arm/kernel/suspend.c          |   2 +
>  arch/arm/kvm/hyp/hyp-entry.S       | 108 +++++++++++++++++++++++++-
>  arch/arm/mm/Kconfig                |  23 ++++++
>  arch/arm/mm/Makefile               |   2 +-
>  arch/arm/mm/fault.c                |   3 +
>  arch/arm/mm/proc-macros.S          |   3 +-
>  arch/arm/mm/proc-v7-2level.S       |   6 --
>  arch/arm/mm/proc-v7-bugs.c         | 130 +++++++++++++++++++++++++++++++
>  arch/arm/mm/proc-v7.S              | 154 +++++++++++++++++++++++++++++--------
>  20 files changed, 469 insertions(+), 50 deletions(-)
>  create mode 100644 arch/arm/kernel/bugs.c
>  create mode 100644 arch/arm/mm/proc-v7-bugs.c

Since there appears to be more work needed in the PSCI/KVM changes
(patches 9 through 14), how about we go with the "bare-metal" parts:
patches 1-8 first and try to get those included ASAP?

The rationale being that a lot of affected people have Linux running on
ARMv7-A Cortex-A, typically A9, A15, Brahma-B15, and are in need of
those patches but do not necessarily require KVM fixes right now, and
even less so PSCI infrastructure to mitigate ARMv8-A running in AArch32.

In terms of backporting to -stable, and because the spectre variant 1
patches have not been submitted yet, it is not like we can lump
everything in one go anyway, so we are not making the lives of the
-stable maintainers any worse than it currently is?

Yay or nay?

> 
> On Wed, May 16, 2018 at 11:59:49AM +0100, Russell King - ARM Linux wrote:
>> This series addresses the Spectre variant 2 issues on ARM Cortex and
>> Broadcom Brahma B15 CPUs.  Due to the complexity of the bug, it is not
>> possible to verify that this series fixes any of the bugs, since it
>> has not been able to reproduce these exact scenarios using test
>> programs.
>>
>> I believe that this covers the entire extent of the Spectre variant 2
>> issues, with the exception of Cortex A53 and Cortex A72 processors as
>> these require a substantially more complex solution (except where the
>> workaround is implemented in PSCI firmware.)
>>
>> Spectre variant 1 is not covered by this series.
>>
>> The patch series is based partly on Marc Zyngier's work from February -
>> two of the KVM patches are from Marc's work.
>>
>> The main differences are:
>> - Inclusion of more processors as per current ARM Ltd security update
>>   documentation.
>> - Extension of "bugs" infrastructure to detect Cortex A8 and Cortex A15
>>   CPUs missing out on the IBE bit being set on (re-)entry to the kernel
>>   through all paths.
>> - Handle all suspect userspace-touching-kernelspace aborts irrespective
>>   of mapping type.
>>
>> The first patch will trivially conflict with the Broadcom Brahma
>> updates already in arm-soc - it has been necessary to independently
>> add the ID definitions for the B15 CPU.
>>
>> Having worked through this series, I'm of the opinion that the
>> define_processor_functions macro in proc-v7 are probably  more hassle
>> than they're worth - here, we don't need the global equivalent symbols,
>> because we never refer to them from the kernel code for any V7
>> processor (MULTI_CPU is always defined.)
>>
>> This series is currently in my "spectre" branch (along with some
>> Spectre variant 1 patches.)
>>
>> Please carefully review.
>>
>>  arch/arm/include/asm/bugs.h        |   6 +-
>>  arch/arm/include/asm/cp15.h        |   3 +
>>  arch/arm/include/asm/cputype.h     |   5 ++
>>  arch/arm/include/asm/kvm_asm.h     |   2 -
>>  arch/arm/include/asm/kvm_host.h    |  14 +++-
>>  arch/arm/include/asm/kvm_mmu.h     |  23 +++++-
>>  arch/arm/include/asm/proc-fns.h    |   4 +
>>  arch/arm/include/asm/system_misc.h |   8 ++
>>  arch/arm/kernel/Makefile           |   1 +
>>  arch/arm/kernel/bugs.c             |  18 +++++
>>  arch/arm/kernel/smp.c              |   4 +
>>  arch/arm/kernel/suspend.c          |   2 +
>>  arch/arm/kvm/hyp/hyp-entry.S       | 108 ++++++++++++++++++++++++-
>>  arch/arm/mm/Kconfig                |  23 ++++++
>>  arch/arm/mm/Makefile               |   2 +-
>>  arch/arm/mm/fault.c                |   3 +
>>  arch/arm/mm/proc-macros.S          |   3 +-
>>  arch/arm/mm/proc-v7-2level.S       |   6 --
>>  arch/arm/mm/proc-v7-bugs.c         | 130 ++++++++++++++++++++++++++++++
>>  arch/arm/mm/proc-v7.S              | 158 +++++++++++++++++++++++++++++--------
>>  20 files changed, 471 insertions(+), 52 deletions(-)
>>
>> -- 
>> RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
>> FTTC broadband for 0.8mile line in suburbia: sync at 8.8Mbps down 630kbps up
>> According to speedtest.net: 8.21Mbps down 510kbps up
>>
>> _______________________________________________
>> linux-arm-kernel mailing list
>> linux-arm-kernel@lists.infradead.org
>> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
> 


-- 
Florian

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 03/14] ARM: bugs: hook processor bug checking into SMP and suspend paths
  2018-05-21 11:44   ` Russell King
@ 2018-05-24 23:30     ` Florian Fainelli
  -1 siblings, 0 replies; 84+ messages in thread
From: Florian Fainelli @ 2018-05-24 23:30 UTC (permalink / raw)
  To: Russell King, linux-arm-kernel; +Cc: Marc Zyngier, kvmarm

On 05/21/2018 04:44 AM, Russell King wrote:
> Check for CPU bugs when secondary processors are being brought online,
> and also when CPUs are resuming from a low power mode.  This gives an
> opportunity to check that processor specific bug workarounds are
> correctly enabled for all paths that a CPU re-enters the kernel.
> 
> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>

Something I missed: this correctly warns about, e.g., a missing IBE bit
for secondary cores, but the warning seems to be missing for the boot CPU:

[    0.001053] CPU: Testing write buffer coherency: ok
[    0.001086] CPU: Spectre v2: using ICIALLU workaround
[    0.001304] CPU0: update cpu_capacity 1024
[    0.001316] CPU0: thread -1, cpu 0, socket 0, mpidr 80000000
[    0.001693] Setting up static identity map for 0x200000 - 0x200060
[    0.001769] Hierarchical SRCU implementation.
[    0.003951] brcmstb: biuctrl: MCP: Write pairing already disabled
[    0.004224] smp: Bringing up secondary CPUs ...
[    0.004874] CPU1: update cpu_capacity 1024
[    0.004877] CPU1: thread -1, cpu 1, socket 0, mpidr 80000001
[    0.004881] CPU1: Spectre v2: firmware did not set auxiliary control
register IBE bit, system vulnerable
[    0.005604] CPU2: update cpu_capacity 1024
[    0.005607] CPU2: thread -1, cpu 2, socket 0, mpidr 80000002
[    0.005610] CPU2: Spectre v2: firmware did not set auxiliary control
register IBE bit, system vulnerable
[    0.006295] CPU3: update cpu_capacity 1024
[    0.006299] CPU3: thread -1, cpu 3, socket 0, mpidr 80000003
[    0.006302] CPU3: Spectre v2: firmware did not set auxiliary control
register IBE bit, system vulnerable
[    0.006377] smp: Brought up 1 node, 4 CPUs
[    0.006389] SMP: Total of 4 processors activated (216.00 BogoMIPS).
[    0.006398] CPU: All CPU(s) started in SVC mode.

Which could be confusing if you intentionally restricted an SMP system to
UP with maxcpus=1 or smp=off:

[    0.001043] CPU: Testing write buffer coherency: ok
[    0.001077] CPU: Spectre v2: using ICIALLU workaround
[    0.001291] CPU0: update cpu_capacity 1024
[    0.001302] CPU0: thread -1, cpu 0, socket 0, mpidr 80000000
[    0.001516] Setting up static identity map for 0x200000 - 0x200060
[    0.001593] Hierarchical SRCU implementation.
[    0.003829] brcmstb: biuctrl: MCP: Write pairing already disabled
[    0.004097] smp: Bringing up secondary CPUs ...
[    0.004108] smp: Brought up 1 node, 1 CPU
[    0.004117] SMP: Total of 1 processors activated (54.00 BogoMIPS).
[    0.004126] CPU: All CPU(s) started in SVC mode.



> ---
>  arch/arm/include/asm/bugs.h | 2 ++
>  arch/arm/kernel/bugs.c      | 5 +++++
>  arch/arm/kernel/smp.c       | 4 ++++
>  arch/arm/kernel/suspend.c   | 2 ++
>  4 files changed, 13 insertions(+)
> 
> diff --git a/arch/arm/include/asm/bugs.h b/arch/arm/include/asm/bugs.h
> index ed122d294f3f..73a99c72a930 100644
> --- a/arch/arm/include/asm/bugs.h
> +++ b/arch/arm/include/asm/bugs.h
> @@ -14,8 +14,10 @@ extern void check_writebuffer_bugs(void);
>  
>  #ifdef CONFIG_MMU
>  extern void check_bugs(void);
> +extern void check_other_bugs(void);
>  #else
>  #define check_bugs() do { } while (0)
> +#define check_other_bugs() do { } while (0)
>  #endif
>  
>  #endif
> diff --git a/arch/arm/kernel/bugs.c b/arch/arm/kernel/bugs.c
> index 88024028bb70..16e7ba2a9cc4 100644
> --- a/arch/arm/kernel/bugs.c
> +++ b/arch/arm/kernel/bugs.c
> @@ -3,7 +3,12 @@
>  #include <asm/bugs.h>
>  #include <asm/proc-fns.h>
>  
> +void check_other_bugs(void)
> +{
> +}
> +
>  void __init check_bugs(void)
>  {
>  	check_writebuffer_bugs();
> +	check_other_bugs();
>  }
> diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c
> index 2da087926ebe..5ad0b67b9e33 100644
> --- a/arch/arm/kernel/smp.c
> +++ b/arch/arm/kernel/smp.c
> @@ -31,6 +31,7 @@
>  #include <linux/irq_work.h>
>  
>  #include <linux/atomic.h>
> +#include <asm/bugs.h>
>  #include <asm/smp.h>
>  #include <asm/cacheflush.h>
>  #include <asm/cpu.h>
> @@ -405,6 +406,9 @@ asmlinkage void secondary_start_kernel(void)
>  	 * before we continue - which happens after __cpu_up returns.
>  	 */
>  	set_cpu_online(cpu, true);
> +
> +	check_other_bugs();
> +
>  	complete(&cpu_running);
>  
>  	local_irq_enable();
> diff --git a/arch/arm/kernel/suspend.c b/arch/arm/kernel/suspend.c
> index a40ebb7c0896..d08099269e35 100644
> --- a/arch/arm/kernel/suspend.c
> +++ b/arch/arm/kernel/suspend.c
> @@ -3,6 +3,7 @@
>  #include <linux/slab.h>
>  #include <linux/mm_types.h>
>  
> +#include <asm/bugs.h>
>  #include <asm/cacheflush.h>
>  #include <asm/idmap.h>
>  #include <asm/pgalloc.h>
> @@ -36,6 +37,7 @@ int cpu_suspend(unsigned long arg, int (*fn)(unsigned long))
>  		cpu_switch_mm(mm->pgd, mm);
>  		local_flush_bp_all();
>  		local_flush_tlb_all();
> +		check_other_bugs();
>  	}
>  
>  	return ret;
> 


-- 
Florian

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH v2 00/14] ARM Spectre variant 2 fixes
  2018-05-24 23:18   ` Florian Fainelli
@ 2018-05-25 10:00     ` Russell King - ARM Linux
  -1 siblings, 0 replies; 84+ messages in thread
From: Russell King - ARM Linux @ 2018-05-25 10:00 UTC (permalink / raw)
  To: Florian Fainelli; +Cc: Marc Zyngier, tony, kvmarm, linux-arm-kernel

On Thu, May 24, 2018 at 04:18:30PM -0700, Florian Fainelli wrote:
> On 05/21/2018 04:42 AM, Russell King - ARM Linux wrote:
> > This is the second posting - the original cover note is below.  Comments
> > from previous series addresesd:
> > - Drop R7 and R8 changes.
> > - Remove "PSCI" from the hypervisor version of the workaround.
> > 
> >  arch/arm/include/asm/bugs.h        |   6 +-
> >  arch/arm/include/asm/cp15.h        |   3 +
> >  arch/arm/include/asm/cputype.h     |   5 ++
> >  arch/arm/include/asm/kvm_asm.h     |   2 -
> >  arch/arm/include/asm/kvm_host.h    |  14 +++-
> >  arch/arm/include/asm/kvm_mmu.h     |  23 +++++-
> >  arch/arm/include/asm/proc-fns.h    |   4 +
> >  arch/arm/include/asm/system_misc.h |   8 ++
> >  arch/arm/kernel/Makefile           |   1 +
> >  arch/arm/kernel/bugs.c             |  18 +++++
> >  arch/arm/kernel/smp.c              |   4 +
> >  arch/arm/kernel/suspend.c          |   2 +
> >  arch/arm/kvm/hyp/hyp-entry.S       | 108 +++++++++++++++++++++++++-
> >  arch/arm/mm/Kconfig                |  23 ++++++
> >  arch/arm/mm/Makefile               |   2 +-
> >  arch/arm/mm/fault.c                |   3 +
> >  arch/arm/mm/proc-macros.S          |   3 +-
> >  arch/arm/mm/proc-v7-2level.S       |   6 --
> >  arch/arm/mm/proc-v7-bugs.c         | 130 +++++++++++++++++++++++++++++++
> >  arch/arm/mm/proc-v7.S              | 154 +++++++++++++++++++++++++++++--------
> >  20 files changed, 469 insertions(+), 50 deletions(-)
> >  create mode 100644 arch/arm/kernel/bugs.c
> >  create mode 100644 arch/arm/mm/proc-v7-bugs.c
> 
> Since there appears to be more work needed in the PSCI/KVM changes
> (patches 9 through 14), how about we go with the "bare-metal" parts:
> patches 1-8 first and try to get those included ASAP?
> 
> The rationale is that a lot of affected people have Linux running on
> ARMv7-A Cortex-A cores (typically A9, A15, Brahma-B15) and need those
> patches, but do not necessarily require the KVM fixes right now, and
> even less so the PSCI infrastructure to mitigate ARMv8-A running in
> AArch32.
> 
> In terms of backporting to -stable, and because the Spectre variant 1
> patches have not been submitted yet, it is not as if we can lump
> everything in one go anyway, so we are not making the lives of the
> -stable maintainers any worse than they currently are?

Patch 6 is implicated in the rework - it will cause big.LITTLE systems
to either have the workaround applied when it isn't required or not
applied when it is required, depending on which CPU is the boot CPU.

For example, on a big.LITTLE A7/A15 cluster: if the boot CPU is an A15,
then we will use the ICIALLU switch_mm method for both the A15 and A7.
If the boot CPU is an A7, then we will use the standard switch_mm
method that does not contain the workaround even for the A15, and the
A15 will then be vulnerable.

I don't think we have a way to identify whether we booted on a
big.LITTLE system, so the kernel messages could claim that the
workarounds are in place, but they won't be effective.

This is the hardest one to solve, because it means more invasive
changes to deal with the MM switching paths, which need to become
per-cpu.  The problem is, there are times that we call into that
path but the per-cpu variables are not accessible (because the CPU
isn't initialised sufficiently for them to work.)

It's trivial to solve the issues in the later patches by comparison.
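
For illustration only (this is not code from the series): one way the
per-CPU direction could look is a per-CPU function pointer that each
CPU's own bug check fills in for itself, so the choice of workaround no
longer depends on which CPU happened to boot first.  All symbol names
below are hypothetical, and the sketch deliberately ignores the early
bring-up window mentioned above where per-cpu variables are not yet
usable.

#include <linux/percpu.h>

/* Hypothetical per-CPU hook; NULL means no hardening selected. */
typedef void (*harden_fn_t)(void);
static DEFINE_PER_CPU(harden_fn_t, harden_branch_predictor_fn);

/*
 * Workaround for cores where, with the firmware-set IBE bit, an
 * ICIALLU also invalidates the branch predictor.
 */
static void harden_bp_iciallu(void)
{
	asm volatile("mcr p15, 0, %0, c7, c5, 0" : : "r" (0) : "memory");
}

/* Called from each CPU's own bug check, so an A15 and an A7 in the
 * same system can end up with different (or no) hardening functions. */
void cpu_v7_spectre_init(void)
{
	/* Real code would key the selection off this CPU's MIDR. */
	this_cpu_write(harden_branch_predictor_fn, harden_bp_iciallu);
}

/* Call site, e.g. on the user-to-kernel abort path or context switch. */
static inline void harden_branch_predictor(void)
{
	harden_fn_t fn = this_cpu_read(harden_branch_predictor_fn);

	if (fn)
		fn();
}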

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line in suburbia: sync at 8.8Mbps down 630kbps up
According to speedtest.net: 8.21Mbps down 510kbps up

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 03/14] ARM: bugs: hook processor bug checking into SMP and suspend paths
  2018-05-24 23:30     ` Florian Fainelli
@ 2018-05-25 10:03       ` Russell King - ARM Linux
  -1 siblings, 0 replies; 84+ messages in thread
From: Russell King - ARM Linux @ 2018-05-25 10:03 UTC (permalink / raw)
  To: Florian Fainelli; +Cc: Marc Zyngier, linux-arm-kernel, kvmarm

On Thu, May 24, 2018 at 04:30:40PM -0700, Florian Fainelli wrote:
> On 05/21/2018 04:44 AM, Russell King wrote:
> > Check for CPU bugs when secondary processors are being brought online,
> > and also when CPUs are resuming from a low power mode.  This gives an
> > opportunity to check that processor specific bug workarounds are
> > correctly enabled for all paths that a CPU re-enters the kernel.
> > 
> > Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
> > Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
> 
> Something I missed: this correctly warns about, e.g., a missing IBE bit
> for secondary cores, but the warning seems to be missing for the boot CPU:

Are you sure that the boot CPU has the IBE bit clear?

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line in suburbia: sync at 8.8Mbps down 630kbps up
According to speedtest.net: 8.21Mbps down 510kbps up

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 03/14] ARM: bugs: hook processor bug checking into SMP and suspend paths
  2018-05-25 10:03       ` Russell King - ARM Linux
@ 2018-05-25 11:31         ` Russell King - ARM Linux
  -1 siblings, 0 replies; 84+ messages in thread
From: Russell King - ARM Linux @ 2018-05-25 11:31 UTC (permalink / raw)
  To: Florian Fainelli; +Cc: Marc Zyngier, Christoffer Dall, linux-arm-kernel, kvmarm

On Fri, May 25, 2018 at 11:03:59AM +0100, Russell King - ARM Linux wrote:
> On Thu, May 24, 2018 at 04:30:40PM -0700, Florian Fainelli wrote:
> > On 05/21/2018 04:44 AM, Russell King wrote:
> > > Check for CPU bugs when secondary processors are being brought online,
> > > and also when CPUs are resuming from a low power mode.  This gives an
> > > opportunity to check that processor specific bug workarounds are
> > > correctly enabled for all paths that a CPU re-enters the kernel.
> > > 
> > > Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
> > > Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
> > 
> > Something I missed: this correctly warns about, e.g., a missing IBE bit
> > for secondary cores, but the warning seems to be missing for the boot CPU:
> 
> Are you sure that the boot CPU has the IBE bit clear?

Here's what I get on TI Keystone 2, which doesn't set the IBE bit for
any CPU:

CPU: Testing write buffer coherency: ok
CPU0: Spectre v2: firmware did not set auxiliary control register IBE bit, system vulnerable
CPU0: Spectre v2: using ICIALLU workaround
/cpus/cpu@0 missing clock-frequency property
/cpus/cpu@1 missing clock-frequency property
/cpus/cpu@2 missing clock-frequency property
/cpus/cpu@3 missing clock-frequency property
CPU0: thread -1, cpu 0, socket 0, mpidr 80000000
Setting up static identity map for 0x80008300 - 0x80008438
Hierarchical SRCU implementation.
smp: Bringing up secondary CPUs ...
CPU1: thread -1, cpu 1, socket 0, mpidr 80000001
CPU1: Spectre v2: firmware did not set auxiliary control register IBE bit, system vulnerable
CPU1: Spectre v2: using ICIALLU workaround
CPU2: thread -1, cpu 2, socket 0, mpidr 80000002
CPU2: Spectre v2: firmware did not set auxiliary control register IBE bit, system vulnerable
CPU2: Spectre v2: using ICIALLU workaround
CPU3: thread -1, cpu 3, socket 0, mpidr 80000003
CPU3: Spectre v2: firmware did not set auxiliary control register IBE bit, system vulnerable
CPU3: Spectre v2: using ICIALLU workaround

It should be noted that, if we implement a "bugs" bit exported to
userspace (as I think someone requested), booting on a system where
only the boot CPU initially comes up (with the IBE bit set) and then
bringing the secondary CPUs online after userspace has started (where
those CPUs don't have the IBE bit set) will result in the system
initially not being vulnerable, but then becoming vulnerable when
running on those other CPUs.

If the bugs bit had already been checked by userspace, then it would
think that there's no system level vulnerability.  Userspace would
need to check the status each time a CPU comes online.
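
As a purely illustrative userspace sketch (the sysfs path below is
hypothetical - no such status file exists for 32-bit ARM in this
series), the point is that a one-shot check at boot is not enough;
the status has to be re-read whenever the set of online CPUs changes:

#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Hypothetical status file, assumed for illustration only. */
#define VULN_PATH   "/sys/devices/system/cpu/vulnerabilities/spectre_v2"
#define ONLINE_PATH "/sys/devices/system/cpu/online"

static void read_line(const char *path, char *buf, size_t len)
{
	FILE *f = fopen(path, "r");

	buf[0] = '\0';
	if (f) {
		if (!fgets(buf, len, f))
			buf[0] = '\0';
		fclose(f);
	}
}

int main(void)
{
	char last[64] = "", cur[64], status[128];

	for (;;) {
		read_line(ONLINE_PATH, cur, sizeof(cur));
		if (strcmp(cur, last) != 0) {
			/* Online CPU set changed: re-evaluate the status. */
			read_line(VULN_PATH, status, sizeof(status));
			/* cur and status keep their trailing newlines */
			printf("online CPUs: %sstatus: %s", cur,
			       status[0] ? status : "(not available)\n");
			strcpy(last, cur);
		}
		sleep(1);
	}
	return 0;
}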

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line in suburbia: sync at 8.8Mbps down 630kbps up
According to speedtest.net: 8.21Mbps down 510kbps up

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 03/14] ARM: bugs: hook processor bug checking into SMP and suspend paths
  2018-05-16 16:23     ` Florian Fainelli
@ 2018-05-19 10:13       ` Russell King - ARM Linux
  -1 siblings, 0 replies; 84+ messages in thread
From: Russell King - ARM Linux @ 2018-05-19 10:13 UTC (permalink / raw)
  To: Florian Fainelli; +Cc: Marc Zyngier, Christoffer Dall, linux-arm-kernel, kvmarm

On Wed, May 16, 2018 at 09:23:01AM -0700, Florian Fainelli wrote:
> On 05/16/2018 04:00 AM, Russell King wrote:
> > diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c
> > index 2da087926ebe..5ad0b67b9e33 100644
> > --- a/arch/arm/kernel/smp.c
> > +++ b/arch/arm/kernel/smp.c
> > @@ -31,6 +31,7 @@
> >  #include <linux/irq_work.h>
> >  
> >  #include <linux/atomic.h>
> > +#include <asm/bugs.h>
> >  #include <asm/smp.h>
> >  #include <asm/cacheflush.h>
> >  #include <asm/cpu.h>
> > @@ -405,6 +406,9 @@ asmlinkage void secondary_start_kernel(void)
> >  	 * before we continue - which happens after __cpu_up returns.
> >  	 */
> >  	set_cpu_online(cpu, true);
> > +
> > +	check_other_bugs();
> 
> Given what is currently implemented, I don't think the location of
> check_other_bugs() matters too much, but we might have to move this
> after the local_irq_enable() at some point if we need to check for,
> e.g., a bogus local timer or whatever?

We could move it later if we need to.

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line in suburbia: sync at 8.8Mbps down 630kbps up
According to speedtest.net: 8.21Mbps down 510kbps up

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 03/14] ARM: bugs: hook processor bug checking into SMP and suspend paths
  2018-05-16 11:00   ` Russell King
@ 2018-05-16 16:23     ` Florian Fainelli
  -1 siblings, 0 replies; 84+ messages in thread
From: Florian Fainelli @ 2018-05-16 16:23 UTC (permalink / raw)
  To: Russell King, linux-arm-kernel; +Cc: Marc Zyngier, Christoffer Dall, kvmarm

On 05/16/2018 04:00 AM, Russell King wrote:
> Check for CPU bugs when secondary processors are being brought online,
> and also when CPUs are resuming from a low power mode.  This gives an
> opportunity to check that processor specific bug workarounds are
> correctly enabled for all paths that a CPU re-enters the kernel.
> 
> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>

Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>

> ---
>  arch/arm/include/asm/bugs.h | 2 ++
>  arch/arm/kernel/bugs.c      | 5 +++++
>  arch/arm/kernel/smp.c       | 4 ++++
>  arch/arm/kernel/suspend.c   | 2 ++
>  4 files changed, 13 insertions(+)
> 
> diff --git a/arch/arm/include/asm/bugs.h b/arch/arm/include/asm/bugs.h
> index ed122d294f3f..73a99c72a930 100644
> --- a/arch/arm/include/asm/bugs.h
> +++ b/arch/arm/include/asm/bugs.h
> @@ -14,8 +14,10 @@ extern void check_writebuffer_bugs(void);
>  
>  #ifdef CONFIG_MMU
>  extern void check_bugs(void);
> +extern void check_other_bugs(void);
>  #else
>  #define check_bugs() do { } while (0)
> +#define check_other_bugs() do { } while (0)
>  #endif
>  
>  #endif
> diff --git a/arch/arm/kernel/bugs.c b/arch/arm/kernel/bugs.c
> index 88024028bb70..16e7ba2a9cc4 100644
> --- a/arch/arm/kernel/bugs.c
> +++ b/arch/arm/kernel/bugs.c
> @@ -3,7 +3,12 @@
>  #include <asm/bugs.h>
>  #include <asm/proc-fns.h>
>  
> +void check_other_bugs(void)
> +{
> +}
> +
>  void __init check_bugs(void)
>  {
>  	check_writebuffer_bugs();
> +	check_other_bugs();
>  }
> diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c
> index 2da087926ebe..5ad0b67b9e33 100644
> --- a/arch/arm/kernel/smp.c
> +++ b/arch/arm/kernel/smp.c
> @@ -31,6 +31,7 @@
>  #include <linux/irq_work.h>
>  
>  #include <linux/atomic.h>
> +#include <asm/bugs.h>
>  #include <asm/smp.h>
>  #include <asm/cacheflush.h>
>  #include <asm/cpu.h>
> @@ -405,6 +406,9 @@ asmlinkage void secondary_start_kernel(void)
>  	 * before we continue - which happens after __cpu_up returns.
>  	 */
>  	set_cpu_online(cpu, true);
> +
> +	check_other_bugs();

Given what is currently implemented, I don't think the location of
check_other_bugs() matters too much, but we might have to move this
after the local_irq_enable() at some point if we need to check for,
e.g., a bogus local timer or whatever?


-- 
Florian

^ permalink raw reply	[flat|nested] 84+ messages in thread

* [PATCH 03/14] ARM: bugs: hook processor bug checking into SMP and suspend paths
  2018-05-16 10:59 [PATCH 0/14] " Russell King - ARM Linux
@ 2018-05-16 11:00   ` Russell King
  0 siblings, 0 replies; 84+ messages in thread
From: Russell King @ 2018-05-16 11:00 UTC (permalink / raw)
  To: linux-arm-kernel; +Cc: Marc Zyngier, Florian Fainelli, kvmarm, Christoffer Dall

Check for CPU bugs when secondary processors are being brought online,
and also when CPUs are resuming from a low power mode.  This gives an
opportunity to check that processor specific bug workarounds are
correctly enabled for all paths that a CPU re-enters the kernel.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
---
 arch/arm/include/asm/bugs.h | 2 ++
 arch/arm/kernel/bugs.c      | 5 +++++
 arch/arm/kernel/smp.c       | 4 ++++
 arch/arm/kernel/suspend.c   | 2 ++
 4 files changed, 13 insertions(+)

diff --git a/arch/arm/include/asm/bugs.h b/arch/arm/include/asm/bugs.h
index ed122d294f3f..73a99c72a930 100644
--- a/arch/arm/include/asm/bugs.h
+++ b/arch/arm/include/asm/bugs.h
@@ -14,8 +14,10 @@ extern void check_writebuffer_bugs(void);
 
 #ifdef CONFIG_MMU
 extern void check_bugs(void);
+extern void check_other_bugs(void);
 #else
 #define check_bugs() do { } while (0)
+#define check_other_bugs() do { } while (0)
 #endif
 
 #endif
diff --git a/arch/arm/kernel/bugs.c b/arch/arm/kernel/bugs.c
index 88024028bb70..16e7ba2a9cc4 100644
--- a/arch/arm/kernel/bugs.c
+++ b/arch/arm/kernel/bugs.c
@@ -3,7 +3,12 @@
 #include <asm/bugs.h>
 #include <asm/proc-fns.h>
 
+void check_other_bugs(void)
+{
+}
+
 void __init check_bugs(void)
 {
 	check_writebuffer_bugs();
+	check_other_bugs();
 }
diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c
index 2da087926ebe..5ad0b67b9e33 100644
--- a/arch/arm/kernel/smp.c
+++ b/arch/arm/kernel/smp.c
@@ -31,6 +31,7 @@
 #include <linux/irq_work.h>
 
 #include <linux/atomic.h>
+#include <asm/bugs.h>
 #include <asm/smp.h>
 #include <asm/cacheflush.h>
 #include <asm/cpu.h>
@@ -405,6 +406,9 @@ asmlinkage void secondary_start_kernel(void)
 	 * before we continue - which happens after __cpu_up returns.
 	 */
 	set_cpu_online(cpu, true);
+
+	check_other_bugs();
+
 	complete(&cpu_running);
 
 	local_irq_enable();
diff --git a/arch/arm/kernel/suspend.c b/arch/arm/kernel/suspend.c
index a40ebb7c0896..d08099269e35 100644
--- a/arch/arm/kernel/suspend.c
+++ b/arch/arm/kernel/suspend.c
@@ -3,6 +3,7 @@
 #include <linux/slab.h>
 #include <linux/mm_types.h>
 
+#include <asm/bugs.h>
 #include <asm/cacheflush.h>
 #include <asm/idmap.h>
 #include <asm/pgalloc.h>
@@ -36,6 +37,7 @@ int cpu_suspend(unsigned long arg, int (*fn)(unsigned long))
 		cpu_switch_mm(mm->pgd, mm);
 		local_flush_bp_all();
 		local_flush_tlb_all();
+		check_other_bugs();
 	}
 
 	return ret;
-- 
2.7.4
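
For context, a sketch of where this empty check_other_bugs() hook is
heading in the rest of the series (patch 04/14 adds per-processor bug
checking); the processor.check_bugs field shown here is an assumption
about that later patch, not code quoted from it:

void check_other_bugs(void)
{
#ifdef MULTI_CPU
	/* Run the CPU-specific check on every path hooked above:
	 * boot, secondary bring-up and resume from suspend. */
	if (processor.check_bugs)
		processor.check_bugs();
#endif
}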

^ permalink raw reply related	[flat|nested] 84+ messages in thread

end of thread, other threads:[~2018-05-25 11:31 UTC | newest]

Thread overview: 84+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-05-21 11:42 [PATCH v2 00/14] ARM Spectre variant 2 fixes Russell King - ARM Linux
2018-05-21 11:42 ` Russell King - ARM Linux
2018-05-21 11:44 ` [PATCH 01/14] ARM: add CPU part numbers for Cortex A73, A75 and Brahma B15 Russell King
2018-05-21 11:44   ` Russell King
2018-05-21 11:44 ` [PATCH 02/14] ARM: bugs: prepare processor bug infrastructure Russell King
2018-05-21 11:44   ` Russell King
2018-05-21 11:44 ` [PATCH 03/14] ARM: bugs: hook processor bug checking into SMP and suspend paths Russell King
2018-05-21 11:44   ` Russell King
2018-05-24 23:30   ` Florian Fainelli
2018-05-24 23:30     ` Florian Fainelli
2018-05-25 10:03     ` Russell King - ARM Linux
2018-05-25 10:03       ` Russell King - ARM Linux
2018-05-25 11:31       ` Russell King - ARM Linux
2018-05-25 11:31         ` Russell King - ARM Linux
2018-05-21 11:44 ` [PATCH 04/14] ARM: bugs: add support for per-processor bug checking Russell King
2018-05-21 11:44   ` Russell King
2018-05-21 11:44 ` [PATCH 05/14] ARM: spectre: add Kconfig symbol for CPUs vulnerable to Spectre Russell King
2018-05-21 11:44   ` Russell King
2018-05-21 11:44 ` [PATCH 06/14] ARM: spectre-v2: harden branch predictor on context switches Russell King
2018-05-21 11:44   ` Russell King
2018-05-22  3:21   ` Florian Fainelli
2018-05-22  3:21     ` Florian Fainelli
2018-05-22  9:55     ` Russell King - ARM Linux
2018-05-22  9:55       ` Russell King - ARM Linux
2018-05-22 18:27   ` Tony Lindgren
2018-05-22 18:27     ` Tony Lindgren
2018-05-21 11:44 ` [PATCH 07/14] ARM: spectre-v2: add Cortex A8 and A15 validation of the IBE bit Russell King
2018-05-21 11:44   ` Russell King
2018-05-22 18:28   ` Tony Lindgren
2018-05-22 18:28     ` Tony Lindgren
2018-05-21 11:45 ` [PATCH 08/14] ARM: spectre-v2: harden user aborts in kernel space Russell King
2018-05-21 11:45   ` Russell King
2018-05-22 17:15   ` Marc Zyngier
2018-05-22 17:15     ` Marc Zyngier
2018-05-22 17:56     ` Russell King - ARM Linux
2018-05-22 17:56       ` Russell King - ARM Linux
2018-05-22 18:12       ` Russell King - ARM Linux
2018-05-22 18:12         ` Russell King - ARM Linux
2018-05-22 18:19         ` Florian Fainelli
2018-05-22 18:19           ` Florian Fainelli
2018-05-22 23:25     ` Russell King - ARM Linux
2018-05-22 23:25       ` Russell King - ARM Linux
2018-05-21 11:45 ` [PATCH 09/14] ARM: spectre-v2: add PSCI based hardening Russell King
2018-05-21 11:45   ` Russell King
2018-05-22 17:24   ` Marc Zyngier
2018-05-22 17:24     ` Marc Zyngier
2018-05-22 17:57     ` Russell King - ARM Linux
2018-05-22 17:57       ` Russell King - ARM Linux
2018-05-23  7:25       ` Marc Zyngier
2018-05-23  7:25         ` Marc Zyngier
2018-05-23 19:45     ` Russell King - ARM Linux
2018-05-23 19:45       ` Russell King - ARM Linux
2018-05-24 12:03       ` Marc Zyngier
2018-05-24 12:03         ` Marc Zyngier
2018-05-24 12:30         ` Russell King - ARM Linux
2018-05-24 12:30           ` Russell King - ARM Linux
2018-05-24 12:49           ` Marc Zyngier
2018-05-24 12:49             ` Marc Zyngier
2018-05-24 13:04             ` Russell King - ARM Linux
2018-05-24 13:04               ` Russell King - ARM Linux
2018-05-21 11:45 ` [PATCH 10/14] ARM: KVM: invalidate BTB on guest exit for Cortex-A12/A17 Russell King
2018-05-21 11:45   ` Russell King
2018-05-21 11:45 ` [PATCH 11/14] ARM: KVM: invalidate icache on guest exit for Cortex-A15 Russell King
2018-05-21 11:45   ` Russell King
2018-05-21 11:45 ` [PATCH 12/14] ARM: spectre-v2: KVM: invalidate icache on guest exit for Brahma B15 Russell King
2018-05-21 11:45   ` Russell King
2018-05-22  3:22   ` Florian Fainelli
2018-05-22  3:22     ` Florian Fainelli
2018-05-21 11:45 ` [PATCH 13/14] ARM: KVM: Add SMCCC_ARCH_WORKAROUND_1 fast handling Russell King
2018-05-21 11:45   ` Russell King
2018-05-23 10:50   ` Marc Zyngier
2018-05-23 10:50     ` Marc Zyngier
2018-05-21 11:45 ` [PATCH 14/14] ARM: KVM: report support for SMCCC_ARCH_WORKAROUND_1 Russell King
2018-05-21 11:45   ` Russell King
2018-05-24 23:18 ` [PATCH v2 00/14] ARM Spectre variant 2 fixes Florian Fainelli
2018-05-24 23:18   ` Florian Fainelli
2018-05-25 10:00   ` Russell King - ARM Linux
2018-05-25 10:00     ` Russell King - ARM Linux
  -- strict thread matches above, loose matches on Subject: below --
2018-05-16 10:59 [PATCH 0/14] " Russell King - ARM Linux
2018-05-16 11:00 ` [PATCH 03/14] ARM: bugs: hook processor bug checking into SMP and suspend paths Russell King
2018-05-16 11:00   ` Russell King
2018-05-16 16:23   ` Florian Fainelli
2018-05-16 16:23     ` Florian Fainelli
2018-05-19 10:13     ` Russell King - ARM Linux
2018-05-19 10:13       ` Russell King - ARM Linux

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.