Subject: [PATCH stable v4.4 00/52] powerpc spectre backports for 4.4
From: Michael Ellerman @ 2019-04-21 14:19 UTC
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy


Hi Greg/Sasha,

Please queue up these powerpc patches for 4.4 if you have no objections.

cheers


Christophe Leroy (1):
  powerpc/fsl: Fix the flush of branch predictor.

Diana Craciun (10):
  powerpc/64: Disable the speculation barrier from the command line
  powerpc/64: Make stf barrier PPC_BOOK3S_64 specific.
  powerpc/64: Make meltdown reporting Book3S 64 specific
  powerpc/fsl: Add barrier_nospec implementation for NXP PowerPC Book3E
  powerpc/fsl: Add infrastructure to fixup branch predictor flush
  powerpc/fsl: Add macro to flush the branch predictor
  powerpc/fsl: Fix spectre_v2 mitigations reporting
  powerpc/fsl: Add nospectre_v2 command line argument
  powerpc/fsl: Flush the branch predictor at each kernel entry (64bit)
  powerpc/fsl: Update Spectre v2 reporting

Mauricio Faria de Oliveira (4):
  powerpc/rfi-flush: Differentiate enabled and patched flush types
  powerpc/pseries: Fix clearing of security feature flags
  powerpc: Move default security feature flags
  powerpc/pseries: Restore default security feature flags on setup

Michael Ellerman (29):
  powerpc/xmon: Add RFI flush related fields to paca dump
  powerpc/pseries: Support firmware disable of RFI flush
  powerpc/powernv: Support firmware disable of RFI flush
  powerpc/rfi-flush: Move the logic to avoid a redo into the debugfs
    code
  powerpc/rfi-flush: Make it possible to call setup_rfi_flush() again
  powerpc/rfi-flush: Always enable fallback flush on pseries
  powerpc/pseries: Add new H_GET_CPU_CHARACTERISTICS flags
  powerpc/rfi-flush: Call setup_rfi_flush() after LPM migration
  powerpc: Add security feature flags for Spectre/Meltdown
  powerpc/pseries: Set or clear security feature flags
  powerpc/powernv: Set or clear security feature flags
  powerpc/64s: Move cpu_show_meltdown()
  powerpc/64s: Enhance the information in cpu_show_meltdown()
  powerpc/powernv: Use the security flags in pnv_setup_rfi_flush()
  powerpc/pseries: Use the security flags in pseries_setup_rfi_flush()
  powerpc/64s: Wire up cpu_show_spectre_v1()
  powerpc/64s: Wire up cpu_show_spectre_v2()
  powerpc/64s: Fix section mismatch warnings from setup_rfi_flush()
  powerpc/64: Use barrier_nospec in syscall entry
  powerpc: Use barrier_nospec in copy_from_user()
  powerpc/64s: Show ori31 availability in spectre_v1 sysfs file not v2
  powerpc/64: Add CONFIG_PPC_BARRIER_NOSPEC
  powerpc/64: Call setup_barrier_nospec() from setup_arch()
  powerpc/asm: Add a patch_site macro & helpers for patching
    instructions
  powerpc/64s: Add new security feature flags for count cache flush
  powerpc/64s: Add support for software count cache flush
  powerpc/pseries: Query hypervisor for count cache flush settings
  powerpc/powernv: Query firmware for count cache flush settings
  powerpc/security: Fix spectre_v2 reporting

Michael Neuling (1):
  powerpc: Avoid code patching freed init sections

Michal Suchanek (5):
  powerpc/64s: Add barrier_nospec
  powerpc/64s: Add support for ori barrier_nospec patching
  powerpc/64s: Patch barrier_nospec in modules
  powerpc/64s: Enable barrier_nospec based on firmware settings
  powerpc/64s: Enhance the information in cpu_show_spectre_v1()

Nicholas Piggin (2):
  powerpc/64s: Improve RFI L1-D cache flush fallback
  powerpc/64s: Add support for a store forwarding barrier at kernel
    entry/exit

 arch/powerpc/Kconfig                         |   7 +-
 arch/powerpc/include/asm/asm-prototypes.h    |  21 +
 arch/powerpc/include/asm/barrier.h           |  21 +
 arch/powerpc/include/asm/code-patching-asm.h |  18 +
 arch/powerpc/include/asm/code-patching.h     |   2 +
 arch/powerpc/include/asm/exception-64s.h     |  35 ++
 arch/powerpc/include/asm/feature-fixups.h    |  40 ++
 arch/powerpc/include/asm/hvcall.h            |   5 +
 arch/powerpc/include/asm/paca.h              |   3 +-
 arch/powerpc/include/asm/ppc-opcode.h        |   1 +
 arch/powerpc/include/asm/ppc_asm.h           |  11 +
 arch/powerpc/include/asm/security_features.h |  92 ++++
 arch/powerpc/include/asm/setup.h             |  23 +-
 arch/powerpc/include/asm/uaccess.h           |  18 +-
 arch/powerpc/kernel/Makefile                 |   1 +
 arch/powerpc/kernel/asm-offsets.c            |   3 +-
 arch/powerpc/kernel/entry_64.S               |  69 +++
 arch/powerpc/kernel/exceptions-64e.S         |  27 +-
 arch/powerpc/kernel/exceptions-64s.S         |  98 +++--
 arch/powerpc/kernel/module.c                 |  10 +-
 arch/powerpc/kernel/security.c               | 433 +++++++++++++++++++
 arch/powerpc/kernel/setup_32.c               |   2 +
 arch/powerpc/kernel/setup_64.c               |  50 +--
 arch/powerpc/kernel/vmlinux.lds.S            |  33 +-
 arch/powerpc/lib/code-patching.c             |  29 ++
 arch/powerpc/lib/feature-fixups.c            | 218 +++++++++-
 arch/powerpc/mm/mem.c                        |   2 +
 arch/powerpc/mm/tlb_low_64e.S                |   7 +
 arch/powerpc/platforms/powernv/setup.c       |  99 +++--
 arch/powerpc/platforms/pseries/mobility.c    |   3 +
 arch/powerpc/platforms/pseries/pseries.h     |   2 +
 arch/powerpc/platforms/pseries/setup.c       |  88 +++-
 arch/powerpc/xmon/xmon.c                     |   2 +
 33 files changed, 1345 insertions(+), 128 deletions(-)
 create mode 100644 arch/powerpc/include/asm/asm-prototypes.h
 create mode 100644 arch/powerpc/include/asm/code-patching-asm.h
 create mode 100644 arch/powerpc/include/asm/security_features.h
 create mode 100644 arch/powerpc/kernel/security.c

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 58a1fa979655..01b6c00a7060 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -136,7 +136,7 @@ config PPC
 	select GENERIC_SMP_IDLE_THREAD
 	select GENERIC_CMOS_UPDATE
 	select GENERIC_TIME_VSYSCALL_OLD
-	select GENERIC_CPU_VULNERABILITIES	if PPC_BOOK3S_64
+	select GENERIC_CPU_VULNERABILITIES	if PPC_BARRIER_NOSPEC
 	select GENERIC_CLOCKEVENTS
 	select GENERIC_CLOCKEVENTS_BROADCAST if SMP
 	select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
@@ -162,6 +162,11 @@ config PPC
 	select ARCH_HAS_DMA_SET_COHERENT_MASK
 	select HAVE_ARCH_SECCOMP_FILTER
 
+config PPC_BARRIER_NOSPEC
+    bool
+    default y
+    depends on PPC_BOOK3S_64 || PPC_FSL_BOOK3E
+
 config GENERIC_CSUM
 	def_bool CPU_LITTLE_ENDIAN
 
diff --git a/arch/powerpc/include/asm/asm-prototypes.h b/arch/powerpc/include/asm/asm-prototypes.h
new file mode 100644
index 000000000000..8944c55591cf
--- /dev/null
+++ b/arch/powerpc/include/asm/asm-prototypes.h
@@ -0,0 +1,21 @@
+#ifndef _ASM_POWERPC_ASM_PROTOTYPES_H
+#define _ASM_POWERPC_ASM_PROTOTYPES_H
+/*
+ * This file is for prototypes of C functions that are only called
+ * from asm, and any associated variables.
+ *
+ * Copyright 2016, Daniel Axtens, IBM Corporation.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ */
+
+/* Patch sites */
+extern s32 patch__call_flush_count_cache;
+extern s32 patch__flush_count_cache_return;
+
+extern long flush_count_cache;
+
+#endif /* _ASM_POWERPC_ASM_PROTOTYPES_H */
diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
index b9e16855a037..e7cb72cdb2ba 100644
--- a/arch/powerpc/include/asm/barrier.h
+++ b/arch/powerpc/include/asm/barrier.h
@@ -92,4 +92,25 @@ do {									\
 #define smp_mb__after_atomic()      smp_mb()
 #define smp_mb__before_spinlock()   smp_mb()
 
+#ifdef CONFIG_PPC_BOOK3S_64
+#define NOSPEC_BARRIER_SLOT   nop
+#elif defined(CONFIG_PPC_FSL_BOOK3E)
+#define NOSPEC_BARRIER_SLOT   nop; nop
+#endif
+
+#ifdef CONFIG_PPC_BARRIER_NOSPEC
+/*
+ * Prevent execution of subsequent instructions until preceding branches have
+ * been fully resolved and are no longer executing speculatively.
+ */
+#define barrier_nospec_asm NOSPEC_BARRIER_FIXUP_SECTION; NOSPEC_BARRIER_SLOT
+
+// This also acts as a compiler barrier due to the memory clobber.
+#define barrier_nospec() asm (stringify_in_c(barrier_nospec_asm) ::: "memory")
+
+#else /* !CONFIG_PPC_BARRIER_NOSPEC */
+#define barrier_nospec_asm
+#define barrier_nospec()
+#endif /* CONFIG_PPC_BARRIER_NOSPEC */
+
 #endif /* _ASM_POWERPC_BARRIER_H */
diff --git a/arch/powerpc/include/asm/code-patching-asm.h b/arch/powerpc/include/asm/code-patching-asm.h
new file mode 100644
index 000000000000..ed7b1448493a
--- /dev/null
+++ b/arch/powerpc/include/asm/code-patching-asm.h
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/*
+ * Copyright 2018, Michael Ellerman, IBM Corporation.
+ */
+#ifndef _ASM_POWERPC_CODE_PATCHING_ASM_H
+#define _ASM_POWERPC_CODE_PATCHING_ASM_H
+
+/* Define a "site" that can be patched */
+.macro patch_site label name
+	.pushsection ".rodata"
+	.balign 4
+	.global \name
+\name:
+	.4byte	\label - .
+	.popsection
+.endm
+
+#endif /* _ASM_POWERPC_CODE_PATCHING_ASM_H */
diff --git a/arch/powerpc/include/asm/code-patching.h b/arch/powerpc/include/asm/code-patching.h
index 840a5509b3f1..a734b4b34d26 100644
--- a/arch/powerpc/include/asm/code-patching.h
+++ b/arch/powerpc/include/asm/code-patching.h
@@ -28,6 +28,8 @@ unsigned int create_cond_branch(const unsigned int *addr,
 				unsigned long target, int flags);
 int patch_branch(unsigned int *addr, unsigned long target, int flags);
 int patch_instruction(unsigned int *addr, unsigned int instr);
+int patch_instruction_site(s32 *addr, unsigned int instr);
+int patch_branch_site(s32 *site, unsigned long target, int flags);
 
 int instr_is_relative_branch(unsigned int instr);
 int instr_is_branch_to_addr(const unsigned int *instr, unsigned long addr);
diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
index 9bddbec441b8..3ed536bec462 100644
--- a/arch/powerpc/include/asm/exception-64s.h
+++ b/arch/powerpc/include/asm/exception-64s.h
@@ -50,6 +50,27 @@
 #define EX_PPR		88	/* SMT thread status register (priority) */
 #define EX_CTR		96
 
+#define STF_ENTRY_BARRIER_SLOT						\
+	STF_ENTRY_BARRIER_FIXUP_SECTION;				\
+	nop;								\
+	nop;								\
+	nop
+
+#define STF_EXIT_BARRIER_SLOT						\
+	STF_EXIT_BARRIER_FIXUP_SECTION;					\
+	nop;								\
+	nop;								\
+	nop;								\
+	nop;								\
+	nop;								\
+	nop
+
+/*
+ * r10 must be free to use, r13 must be paca
+ */
+#define INTERRUPT_TO_KERNEL						\
+	STF_ENTRY_BARRIER_SLOT
+
 /*
  * Macros for annotating the expected destination of (h)rfid
  *
@@ -66,16 +87,19 @@
 	rfid
 
 #define RFI_TO_USER							\
+	STF_EXIT_BARRIER_SLOT;						\
 	RFI_FLUSH_SLOT;							\
 	rfid;								\
 	b	rfi_flush_fallback
 
 #define RFI_TO_USER_OR_KERNEL						\
+	STF_EXIT_BARRIER_SLOT;						\
 	RFI_FLUSH_SLOT;							\
 	rfid;								\
 	b	rfi_flush_fallback
 
 #define RFI_TO_GUEST							\
+	STF_EXIT_BARRIER_SLOT;						\
 	RFI_FLUSH_SLOT;							\
 	rfid;								\
 	b	rfi_flush_fallback
@@ -84,21 +108,25 @@
 	hrfid
 
 #define HRFI_TO_USER							\
+	STF_EXIT_BARRIER_SLOT;						\
 	RFI_FLUSH_SLOT;							\
 	hrfid;								\
 	b	hrfi_flush_fallback
 
 #define HRFI_TO_USER_OR_KERNEL						\
+	STF_EXIT_BARRIER_SLOT;						\
 	RFI_FLUSH_SLOT;							\
 	hrfid;								\
 	b	hrfi_flush_fallback
 
 #define HRFI_TO_GUEST							\
+	STF_EXIT_BARRIER_SLOT;						\
 	RFI_FLUSH_SLOT;							\
 	hrfid;								\
 	b	hrfi_flush_fallback
 
 #define HRFI_TO_UNKNOWN							\
+	STF_EXIT_BARRIER_SLOT;						\
 	RFI_FLUSH_SLOT;							\
 	hrfid;								\
 	b	hrfi_flush_fallback
@@ -226,6 +254,7 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
 #define __EXCEPTION_PROLOG_1(area, extra, vec)				\
 	OPT_SAVE_REG_TO_PACA(area+EX_PPR, r9, CPU_FTR_HAS_PPR);		\
 	OPT_SAVE_REG_TO_PACA(area+EX_CFAR, r10, CPU_FTR_CFAR);		\
+	INTERRUPT_TO_KERNEL;						\
 	SAVE_CTR(r10, area);						\
 	mfcr	r9;							\
 	extra(vec);							\
@@ -512,6 +541,12 @@ label##_relon_hv:						\
 #define _MASKABLE_EXCEPTION_PSERIES(vec, label, h, extra)		\
 	__MASKABLE_EXCEPTION_PSERIES(vec, label, h, extra)
 
+#define MASKABLE_EXCEPTION_OOL(vec, label)				\
+	.globl label##_ool;						\
+label##_ool:								\
+	EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_TEST_PR, vec);		\
+	EXCEPTION_PROLOG_PSERIES_1(label##_common, EXC_STD);
+
 #define MASKABLE_EXCEPTION_PSERIES(loc, vec, label)			\
 	. = loc;							\
 	.globl label##_pSeries;						\
diff --git a/arch/powerpc/include/asm/feature-fixups.h b/arch/powerpc/include/asm/feature-fixups.h
index 7068bafbb2d6..145a37ab2d3e 100644
--- a/arch/powerpc/include/asm/feature-fixups.h
+++ b/arch/powerpc/include/asm/feature-fixups.h
@@ -184,6 +184,22 @@ label##3:					       	\
 	FTR_ENTRY_OFFSET label##1b-label##3b;		\
 	.popsection;
 
+#define STF_ENTRY_BARRIER_FIXUP_SECTION			\
+953:							\
+	.pushsection __stf_entry_barrier_fixup,"a";	\
+	.align 2;					\
+954:							\
+	FTR_ENTRY_OFFSET 953b-954b;			\
+	.popsection;
+
+#define STF_EXIT_BARRIER_FIXUP_SECTION			\
+955:							\
+	.pushsection __stf_exit_barrier_fixup,"a";	\
+	.align 2;					\
+956:							\
+	FTR_ENTRY_OFFSET 955b-956b;			\
+	.popsection;
+
 #define RFI_FLUSH_FIXUP_SECTION				\
 951:							\
 	.pushsection __rfi_flush_fixup,"a";		\
@@ -192,10 +208,34 @@ label##3:					       	\
 	FTR_ENTRY_OFFSET 951b-952b;			\
 	.popsection;
 
+#define NOSPEC_BARRIER_FIXUP_SECTION			\
+953:							\
+	.pushsection __barrier_nospec_fixup,"a";	\
+	.align 2;					\
+954:							\
+	FTR_ENTRY_OFFSET 953b-954b;			\
+	.popsection;
+
+#define START_BTB_FLUSH_SECTION			\
+955:							\
+
+#define END_BTB_FLUSH_SECTION			\
+956:							\
+	.pushsection __btb_flush_fixup,"a";	\
+	.align 2;							\
+957:						\
+	FTR_ENTRY_OFFSET 955b-957b;			\
+	FTR_ENTRY_OFFSET 956b-957b;			\
+	.popsection;
 
 #ifndef __ASSEMBLY__
 
+extern long stf_barrier_fallback;
+extern long __start___stf_entry_barrier_fixup, __stop___stf_entry_barrier_fixup;
+extern long __start___stf_exit_barrier_fixup, __stop___stf_exit_barrier_fixup;
 extern long __start___rfi_flush_fixup, __stop___rfi_flush_fixup;
+extern long __start___barrier_nospec_fixup, __stop___barrier_nospec_fixup;
+extern long __start__btb_flush_fixup, __stop__btb_flush_fixup;
 
 #endif
 
diff --git a/arch/powerpc/include/asm/hvcall.h b/arch/powerpc/include/asm/hvcall.h
index 449bbb87c257..b57db9d09db9 100644
--- a/arch/powerpc/include/asm/hvcall.h
+++ b/arch/powerpc/include/asm/hvcall.h
@@ -292,10 +292,15 @@
 #define H_CPU_CHAR_L1D_FLUSH_ORI30	(1ull << 61) // IBM bit 2
 #define H_CPU_CHAR_L1D_FLUSH_TRIG2	(1ull << 60) // IBM bit 3
 #define H_CPU_CHAR_L1D_THREAD_PRIV	(1ull << 59) // IBM bit 4
+#define H_CPU_CHAR_BRANCH_HINTS_HONORED	(1ull << 58) // IBM bit 5
+#define H_CPU_CHAR_THREAD_RECONFIG_CTRL	(1ull << 57) // IBM bit 6
+#define H_CPU_CHAR_COUNT_CACHE_DISABLED	(1ull << 56) // IBM bit 7
+#define H_CPU_CHAR_BCCTR_FLUSH_ASSIST	(1ull << 54) // IBM bit 9
 
 #define H_CPU_BEHAV_FAVOUR_SECURITY	(1ull << 63) // IBM bit 0
 #define H_CPU_BEHAV_L1D_FLUSH_PR	(1ull << 62) // IBM bit 1
 #define H_CPU_BEHAV_BNDS_CHK_SPEC_BAR	(1ull << 61) // IBM bit 2
+#define H_CPU_BEHAV_FLUSH_COUNT_CACHE	(1ull << 58) // IBM bit 5
 
 #ifndef __ASSEMBLY__
 #include <linux/types.h>
diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h
index 45e2aefece16..08e5df3395fa 100644
--- a/arch/powerpc/include/asm/paca.h
+++ b/arch/powerpc/include/asm/paca.h
@@ -199,8 +199,7 @@ struct paca_struct {
 	 */
 	u64 exrfi[13] __aligned(0x80);
 	void *rfi_flush_fallback_area;
-	u64 l1d_flush_congruence;
-	u64 l1d_flush_sets;
+	u64 l1d_flush_size;
 #endif
 };
 
diff --git a/arch/powerpc/include/asm/ppc-opcode.h b/arch/powerpc/include/asm/ppc-opcode.h
index 7ab04fc59e24..faf1bb045dee 100644
--- a/arch/powerpc/include/asm/ppc-opcode.h
+++ b/arch/powerpc/include/asm/ppc-opcode.h
@@ -147,6 +147,7 @@
 #define PPC_INST_LWSYNC			0x7c2004ac
 #define PPC_INST_SYNC			0x7c0004ac
 #define PPC_INST_SYNC_MASK		0xfc0007fe
+#define PPC_INST_ISYNC			0x4c00012c
 #define PPC_INST_LXVD2X			0x7c000698
 #define PPC_INST_MCRXR			0x7c000400
 #define PPC_INST_MCRXR_MASK		0xfc0007fe
diff --git a/arch/powerpc/include/asm/ppc_asm.h b/arch/powerpc/include/asm/ppc_asm.h
index 160bb2311bbb..d219816b3e19 100644
--- a/arch/powerpc/include/asm/ppc_asm.h
+++ b/arch/powerpc/include/asm/ppc_asm.h
@@ -821,4 +821,15 @@ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,945)
 	.long 0x2400004c  /* rfid				*/
 #endif /* !CONFIG_PPC_BOOK3E */
 #endif /*  __ASSEMBLY__ */
+
+#ifdef CONFIG_PPC_FSL_BOOK3E
+#define BTB_FLUSH(reg)			\
+	lis reg,BUCSR_INIT@h;		\
+	ori reg,reg,BUCSR_INIT@l;	\
+	mtspr SPRN_BUCSR,reg;		\
+	isync;
+#else
+#define BTB_FLUSH(reg)
+#endif /* CONFIG_PPC_FSL_BOOK3E */
+
 #endif /* _ASM_POWERPC_PPC_ASM_H */
diff --git a/arch/powerpc/include/asm/security_features.h b/arch/powerpc/include/asm/security_features.h
new file mode 100644
index 000000000000..759597bf0fd8
--- /dev/null
+++ b/arch/powerpc/include/asm/security_features.h
@@ -0,0 +1,92 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/*
+ * Security related feature bit definitions.
+ *
+ * Copyright 2018, Michael Ellerman, IBM Corporation.
+ */
+
+#ifndef _ASM_POWERPC_SECURITY_FEATURES_H
+#define _ASM_POWERPC_SECURITY_FEATURES_H
+
+
+extern unsigned long powerpc_security_features;
+extern bool rfi_flush;
+
+/* These are bit flags */
+enum stf_barrier_type {
+	STF_BARRIER_NONE	= 0x1,
+	STF_BARRIER_FALLBACK	= 0x2,
+	STF_BARRIER_EIEIO	= 0x4,
+	STF_BARRIER_SYNC_ORI	= 0x8,
+};
+
+void setup_stf_barrier(void);
+void do_stf_barrier_fixups(enum stf_barrier_type types);
+void setup_count_cache_flush(void);
+
+static inline void security_ftr_set(unsigned long feature)
+{
+	powerpc_security_features |= feature;
+}
+
+static inline void security_ftr_clear(unsigned long feature)
+{
+	powerpc_security_features &= ~feature;
+}
+
+static inline bool security_ftr_enabled(unsigned long feature)
+{
+	return !!(powerpc_security_features & feature);
+}
+
+
+// Features indicating support for Spectre/Meltdown mitigations
+
+// The L1-D cache can be flushed with ori r30,r30,0
+#define SEC_FTR_L1D_FLUSH_ORI30		0x0000000000000001ull
+
+// The L1-D cache can be flushed with mtspr 882,r0 (aka SPRN_TRIG2)
+#define SEC_FTR_L1D_FLUSH_TRIG2		0x0000000000000002ull
+
+// ori r31,r31,0 acts as a speculation barrier
+#define SEC_FTR_SPEC_BAR_ORI31		0x0000000000000004ull
+
+// Speculation past bctr is disabled
+#define SEC_FTR_BCCTRL_SERIALISED	0x0000000000000008ull
+
+// Entries in L1-D are private to a SMT thread
+#define SEC_FTR_L1D_THREAD_PRIV		0x0000000000000010ull
+
+// Indirect branch prediction cache disabled
+#define SEC_FTR_COUNT_CACHE_DISABLED	0x0000000000000020ull
+
+// bcctr 2,0,0 triggers a hardware assisted count cache flush
+#define SEC_FTR_BCCTR_FLUSH_ASSIST	0x0000000000000800ull
+
+
+// Features indicating need for Spectre/Meltdown mitigations
+
+// The L1-D cache should be flushed on MSR[HV] 1->0 transition (hypervisor to guest)
+#define SEC_FTR_L1D_FLUSH_HV		0x0000000000000040ull
+
+// The L1-D cache should be flushed on MSR[PR] 0->1 transition (kernel to userspace)
+#define SEC_FTR_L1D_FLUSH_PR		0x0000000000000080ull
+
+// A speculation barrier should be used for bounds checks (Spectre variant 1)
+#define SEC_FTR_BNDS_CHK_SPEC_BAR	0x0000000000000100ull
+
+// Firmware configuration indicates user favours security over performance
+#define SEC_FTR_FAVOUR_SECURITY		0x0000000000000200ull
+
+// Software required to flush count cache on context switch
+#define SEC_FTR_FLUSH_COUNT_CACHE	0x0000000000000400ull
+
+
+// Features enabled by default
+#define SEC_FTR_DEFAULT \
+	(SEC_FTR_L1D_FLUSH_HV | \
+	 SEC_FTR_L1D_FLUSH_PR | \
+	 SEC_FTR_BNDS_CHK_SPEC_BAR | \
+	 SEC_FTR_FAVOUR_SECURITY)
+
+#endif /* _ASM_POWERPC_SECURITY_FEATURES_H */
diff --git a/arch/powerpc/include/asm/setup.h b/arch/powerpc/include/asm/setup.h
index 7916b56f2e60..d299479c770b 100644
--- a/arch/powerpc/include/asm/setup.h
+++ b/arch/powerpc/include/asm/setup.h
@@ -8,6 +8,7 @@ extern void ppc_printk_progress(char *s, unsigned short hex);
 
 extern unsigned int rtas_data;
 extern unsigned long long memory_limit;
+extern bool init_mem_is_free;
 extern unsigned long klimit;
 extern void *zalloc_maybe_bootmem(size_t size, gfp_t mask);
 
@@ -36,8 +37,28 @@ enum l1d_flush_type {
 	L1D_FLUSH_MTTRIG	= 0x8,
 };
 
-void __init setup_rfi_flush(enum l1d_flush_type, bool enable);
+void setup_rfi_flush(enum l1d_flush_type, bool enable);
 void do_rfi_flush_fixups(enum l1d_flush_type types);
+#ifdef CONFIG_PPC_BARRIER_NOSPEC
+void setup_barrier_nospec(void);
+#else
+static inline void setup_barrier_nospec(void) { };
+#endif
+void do_barrier_nospec_fixups(bool enable);
+extern bool barrier_nospec_enabled;
+
+#ifdef CONFIG_PPC_BARRIER_NOSPEC
+void do_barrier_nospec_fixups_range(bool enable, void *start, void *end);
+#else
+static inline void do_barrier_nospec_fixups_range(bool enable, void *start, void *end) { };
+#endif
+
+#ifdef CONFIG_PPC_FSL_BOOK3E
+void setup_spectre_v2(void);
+#else
+static inline void setup_spectre_v2(void) {};
+#endif
+void do_btb_flush_fixups(void);
 
 #endif /* !__ASSEMBLY__ */
 
diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index 05f1389228d2..e51ce5a0e221 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -269,6 +269,7 @@ do {								\
 	__chk_user_ptr(ptr);					\
 	if (!is_kernel_addr((unsigned long)__gu_addr))		\
 		might_fault();					\
+	barrier_nospec();					\
 	__get_user_size(__gu_val, __gu_addr, (size), __gu_err);	\
 	(x) = (__typeof__(*(ptr)))__gu_val;			\
 	__gu_err;						\
@@ -283,6 +284,7 @@ do {								\
 	__chk_user_ptr(ptr);					\
 	if (!is_kernel_addr((unsigned long)__gu_addr))		\
 		might_fault();					\
+	barrier_nospec();					\
 	__get_user_size(__gu_val, __gu_addr, (size), __gu_err);	\
 	(x) = (__force __typeof__(*(ptr)))__gu_val;			\
 	__gu_err;						\
@@ -295,8 +297,10 @@ do {								\
 	unsigned long  __gu_val = 0;					\
 	__typeof__(*(ptr)) __user *__gu_addr = (ptr);		\
 	might_fault();							\
-	if (access_ok(VERIFY_READ, __gu_addr, (size)))			\
+	if (access_ok(VERIFY_READ, __gu_addr, (size))) {		\
+		barrier_nospec();					\
 		__get_user_size(__gu_val, __gu_addr, (size), __gu_err);	\
+	}								\
 	(x) = (__force __typeof__(*(ptr)))__gu_val;				\
 	__gu_err;							\
 })
@@ -307,6 +311,7 @@ do {								\
 	unsigned long __gu_val;					\
 	__typeof__(*(ptr)) __user *__gu_addr = (ptr);	\
 	__chk_user_ptr(ptr);					\
+	barrier_nospec();					\
 	__get_user_size(__gu_val, __gu_addr, (size), __gu_err);	\
 	(x) = (__force __typeof__(*(ptr)))__gu_val;			\
 	__gu_err;						\
@@ -323,8 +328,10 @@ extern unsigned long __copy_tofrom_user(void __user *to,
 static inline unsigned long copy_from_user(void *to,
 		const void __user *from, unsigned long n)
 {
-	if (likely(access_ok(VERIFY_READ, from, n)))
+	if (likely(access_ok(VERIFY_READ, from, n))) {
+		barrier_nospec();
 		return __copy_tofrom_user((__force void __user *)to, from, n);
+	}
 	memset(to, 0, n);
 	return n;
 }
@@ -359,21 +366,27 @@ static inline unsigned long __copy_from_user_inatomic(void *to,
 
 		switch (n) {
 		case 1:
+			barrier_nospec();
 			__get_user_size(*(u8 *)to, from, 1, ret);
 			break;
 		case 2:
+			barrier_nospec();
 			__get_user_size(*(u16 *)to, from, 2, ret);
 			break;
 		case 4:
+			barrier_nospec();
 			__get_user_size(*(u32 *)to, from, 4, ret);
 			break;
 		case 8:
+			barrier_nospec();
 			__get_user_size(*(u64 *)to, from, 8, ret);
 			break;
 		}
 		if (ret == 0)
 			return 0;
 	}
+
+	barrier_nospec();
 	return __copy_tofrom_user((__force void __user *)to, from, n);
 }
 
@@ -400,6 +413,7 @@ static inline unsigned long __copy_to_user_inatomic(void __user *to,
 		if (ret == 0)
 			return 0;
 	}
+
 	return __copy_tofrom_user(to, (__force const void __user *)from, n);
 }
 
diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index ba336930d448..22ed3c32fca8 100644
--- a/arch/powerpc/kernel/Makefile
+++ b/arch/powerpc/kernel/Makefile
@@ -44,6 +44,7 @@ obj-$(CONFIG_PPC_BOOK3S_64)	+= cpu_setup_power.o
 obj-$(CONFIG_PPC_BOOK3S_64)	+= mce.o mce_power.o
 obj64-$(CONFIG_RELOCATABLE)	+= reloc_64.o
 obj-$(CONFIG_PPC_BOOK3E_64)	+= exceptions-64e.o idle_book3e.o
+obj-$(CONFIG_PPC_BARRIER_NOSPEC) += security.o
 obj-$(CONFIG_PPC64)		+= vdso64/
 obj-$(CONFIG_ALTIVEC)		+= vecemu.o
 obj-$(CONFIG_PPC_970_NAP)	+= idle_power4.o
diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index d92705e3a0c1..de3c29c51503 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -245,8 +245,7 @@ int main(void)
 	DEFINE(PACA_IN_MCE, offsetof(struct paca_struct, in_mce));
 	DEFINE(PACA_RFI_FLUSH_FALLBACK_AREA, offsetof(struct paca_struct, rfi_flush_fallback_area));
 	DEFINE(PACA_EXRFI, offsetof(struct paca_struct, exrfi));
-	DEFINE(PACA_L1D_FLUSH_CONGRUENCE, offsetof(struct paca_struct, l1d_flush_congruence));
-	DEFINE(PACA_L1D_FLUSH_SETS, offsetof(struct paca_struct, l1d_flush_sets));
+	DEFINE(PACA_L1D_FLUSH_SIZE, offsetof(struct paca_struct, l1d_flush_size));
 #endif
 	DEFINE(PACAHWCPUID, offsetof(struct paca_struct, hw_cpu_id));
 	DEFINE(PACAKEXECSTATE, offsetof(struct paca_struct, kexec_state));
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index 59be96917369..6d36a4fb4acf 100644
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -25,6 +25,7 @@
 #include <asm/page.h>
 #include <asm/mmu.h>
 #include <asm/thread_info.h>
+#include <asm/code-patching-asm.h>
 #include <asm/ppc_asm.h>
 #include <asm/asm-offsets.h>
 #include <asm/cputable.h>
@@ -36,6 +37,7 @@
 #include <asm/hw_irq.h>
 #include <asm/context_tracking.h>
 #include <asm/tm.h>
+#include <asm/barrier.h>
 #ifdef CONFIG_PPC_BOOK3S
 #include <asm/exception-64s.h>
 #else
@@ -75,6 +77,11 @@ END_FTR_SECTION_IFSET(CPU_FTR_TM)
 	std	r0,GPR0(r1)
 	std	r10,GPR1(r1)
 	beq	2f			/* if from kernel mode */
+#ifdef CONFIG_PPC_FSL_BOOK3E
+START_BTB_FLUSH_SECTION
+	BTB_FLUSH(r10)
+END_BTB_FLUSH_SECTION
+#endif
 	ACCOUNT_CPU_USER_ENTRY(r10, r11)
 2:	std	r2,GPR2(r1)
 	std	r3,GPR3(r1)
@@ -177,6 +184,15 @@ system_call:			/* label this so stack traces look sane */
 	clrldi	r8,r8,32
 15:
 	slwi	r0,r0,4
+
+	barrier_nospec_asm
+	/*
+	 * Prevent the load of the handler below (based on the user-passed
+	 * system call number) being speculatively executed until the test
+	 * against NR_syscalls and branch to .Lsyscall_enosys above has
+	 * committed.
+	 */
+
 	ldx	r12,r11,r0	/* Fetch system call handler [ptr] */
 	mtctr   r12
 	bctrl			/* Call handler */
@@ -440,6 +456,57 @@ _GLOBAL(ret_from_kernel_thread)
 	li	r3,0
 	b	.Lsyscall_exit
 
+#ifdef CONFIG_PPC_BOOK3S_64
+
+#define FLUSH_COUNT_CACHE	\
+1:	nop;			\
+	patch_site 1b, patch__call_flush_count_cache
+
+
+#define BCCTR_FLUSH	.long 0x4c400420
+
+.macro nops number
+	.rept \number
+	nop
+	.endr
+.endm
+
+.balign 32
+.global flush_count_cache
+flush_count_cache:
+	/* Save LR into r9 */
+	mflr	r9
+
+	.rept 64
+	bl	.+4
+	.endr
+	b	1f
+	nops	6
+
+	.balign 32
+	/* Restore LR */
+1:	mtlr	r9
+	li	r9,0x7fff
+	mtctr	r9
+
+	BCCTR_FLUSH
+
+2:	nop
+	patch_site 2b patch__flush_count_cache_return
+
+	nops	3
+
+	.rept 278
+	.balign 32
+	BCCTR_FLUSH
+	nops	7
+	.endr
+
+	blr
+#else
+#define FLUSH_COUNT_CACHE
+#endif /* CONFIG_PPC_BOOK3S_64 */
+
 /*
  * This routine switches between two different tasks.  The process
  * state of one is saved on its kernel stack.  Then the state
@@ -503,6 +570,8 @@ BEGIN_FTR_SECTION
 END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
 #endif
 
+	FLUSH_COUNT_CACHE
+
 #ifdef CONFIG_SMP
 	/* We need a sync somewhere here to make sure that if the
 	 * previous task gets rescheduled on another CPU, it sees all
diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
index 5cc93f0b52ca..48ec841ea1bf 100644
--- a/arch/powerpc/kernel/exceptions-64e.S
+++ b/arch/powerpc/kernel/exceptions-64e.S
@@ -295,7 +295,8 @@ END_FTR_SECTION_IFSET(CPU_FTR_EMB_HV)
 	andi.	r10,r11,MSR_PR;		/* save stack pointer */	    \
 	beq	1f;			/* branch around if supervisor */   \
 	ld	r1,PACAKSAVE(r13);	/* get kernel stack coming from usr */\
-1:	cmpdi	cr1,r1,0;		/* check if SP makes sense */	    \
+1:	type##_BTB_FLUSH		\
+	cmpdi	cr1,r1,0;		/* check if SP makes sense */	    \
 	bge-	cr1,exc_##n##_bad_stack;/* bad stack (TODO: out of line) */ \
 	mfspr	r10,SPRN_##type##_SRR0;	/* read SRR0 before touching stack */
 
@@ -327,6 +328,30 @@ END_FTR_SECTION_IFSET(CPU_FTR_EMB_HV)
 #define SPRN_MC_SRR0	SPRN_MCSRR0
 #define SPRN_MC_SRR1	SPRN_MCSRR1
 
+#ifdef CONFIG_PPC_FSL_BOOK3E
+#define GEN_BTB_FLUSH			\
+	START_BTB_FLUSH_SECTION		\
+		beq 1f;			\
+		BTB_FLUSH(r10)			\
+		1:		\
+	END_BTB_FLUSH_SECTION
+
+#define CRIT_BTB_FLUSH			\
+	START_BTB_FLUSH_SECTION		\
+		BTB_FLUSH(r10)		\
+	END_BTB_FLUSH_SECTION
+
+#define DBG_BTB_FLUSH CRIT_BTB_FLUSH
+#define MC_BTB_FLUSH CRIT_BTB_FLUSH
+#define GDBELL_BTB_FLUSH GEN_BTB_FLUSH
+#else
+#define GEN_BTB_FLUSH
+#define CRIT_BTB_FLUSH
+#define DBG_BTB_FLUSH
+#define MC_BTB_FLUSH
+#define GDBELL_BTB_FLUSH
+#endif
+
 #define NORMAL_EXCEPTION_PROLOG(n, intnum, addition)			    \
 	EXCEPTION_PROLOG(n, intnum, GEN, addition##_GEN(n))
 
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index 938a30fef031..10e7cec9553d 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -36,6 +36,7 @@ BEGIN_FTR_SECTION						\
 END_FTR_SECTION_IFSET(CPU_FTR_REAL_LE)				\
 	mr	r9,r13 ;					\
 	GET_PACA(r13) ;						\
+	INTERRUPT_TO_KERNEL ;					\
 	mfspr	r11,SPRN_SRR0 ;					\
 0:
 
@@ -292,7 +293,9 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
 	. = 0x900
 	.globl decrementer_pSeries
 decrementer_pSeries:
-	_MASKABLE_EXCEPTION_PSERIES(0x900, decrementer, EXC_STD, SOFTEN_TEST_PR)
+	SET_SCRATCH0(r13)
+	EXCEPTION_PROLOG_0(PACA_EXGEN)
+	b	decrementer_ool
 
 	STD_EXCEPTION_HV(0x980, 0x982, hdecrementer)
 
@@ -319,6 +322,7 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
 	OPT_GET_SPR(r9, SPRN_PPR, CPU_FTR_HAS_PPR);
 	HMT_MEDIUM;
 	std	r10,PACA_EXGEN+EX_R10(r13)
+	INTERRUPT_TO_KERNEL
 	OPT_SAVE_REG_TO_PACA(PACA_EXGEN+EX_PPR, r9, CPU_FTR_HAS_PPR);
 	mfcr	r9
 	KVMTEST(0xc00)
@@ -607,6 +611,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
 
 	.align	7
 	/* moved from 0xe00 */
+	MASKABLE_EXCEPTION_OOL(0x900, decrementer)
 	STD_EXCEPTION_HV_OOL(0xe02, h_data_storage)
 	KVM_HANDLER_SKIP(PACA_EXGEN, EXC_HV, 0xe02)
 	STD_EXCEPTION_HV_OOL(0xe22, h_instr_storage)
@@ -1564,6 +1569,21 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
 	blr
 #endif
 
+	.balign 16
+	.globl stf_barrier_fallback
+stf_barrier_fallback:
+	std	r9,PACA_EXRFI+EX_R9(r13)
+	std	r10,PACA_EXRFI+EX_R10(r13)
+	sync
+	ld	r9,PACA_EXRFI+EX_R9(r13)
+	ld	r10,PACA_EXRFI+EX_R10(r13)
+	ori	31,31,0
+	.rept 14
+	b	1f
+1:
+	.endr
+	blr
+
 	.globl rfi_flush_fallback
 rfi_flush_fallback:
 	SET_SCRATCH0(r13);
@@ -1571,39 +1591,37 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
 	std	r9,PACA_EXRFI+EX_R9(r13)
 	std	r10,PACA_EXRFI+EX_R10(r13)
 	std	r11,PACA_EXRFI+EX_R11(r13)
-	std	r12,PACA_EXRFI+EX_R12(r13)
-	std	r8,PACA_EXRFI+EX_R13(r13)
 	mfctr	r9
 	ld	r10,PACA_RFI_FLUSH_FALLBACK_AREA(r13)
-	ld	r11,PACA_L1D_FLUSH_SETS(r13)
-	ld	r12,PACA_L1D_FLUSH_CONGRUENCE(r13)
-	/*
-	 * The load adresses are at staggered offsets within cachelines,
-	 * which suits some pipelines better (on others it should not
-	 * hurt).
-	 */
-	addi	r12,r12,8
+	ld	r11,PACA_L1D_FLUSH_SIZE(r13)
+	srdi	r11,r11,(7 + 3) /* 128 byte lines, unrolled 8x */
 	mtctr	r11
 	DCBT_STOP_ALL_STREAM_IDS(r11) /* Stop prefetch streams */
 
 	/* order ld/st prior to dcbt stop all streams with flushing */
 	sync
-1:	li	r8,0
-	.rept	8 /* 8-way set associative */
-	ldx	r11,r10,r8
-	add	r8,r8,r12
-	xor	r11,r11,r11	// Ensure r11 is 0 even if fallback area is not
-	add	r8,r8,r11	// Add 0, this creates a dependency on the ldx
-	.endr
-	addi	r10,r10,128 /* 128 byte cache line */
+
+	/*
+	 * The load adresses are at staggered offsets within cachelines,
+	 * which suits some pipelines better (on others it should not
+	 * hurt).
+	 */
+1:
+	ld	r11,(0x80 + 8)*0(r10)
+	ld	r11,(0x80 + 8)*1(r10)
+	ld	r11,(0x80 + 8)*2(r10)
+	ld	r11,(0x80 + 8)*3(r10)
+	ld	r11,(0x80 + 8)*4(r10)
+	ld	r11,(0x80 + 8)*5(r10)
+	ld	r11,(0x80 + 8)*6(r10)
+	ld	r11,(0x80 + 8)*7(r10)
+	addi	r10,r10,0x80*8
 	bdnz	1b
 
 	mtctr	r9
 	ld	r9,PACA_EXRFI+EX_R9(r13)
 	ld	r10,PACA_EXRFI+EX_R10(r13)
 	ld	r11,PACA_EXRFI+EX_R11(r13)
-	ld	r12,PACA_EXRFI+EX_R12(r13)
-	ld	r8,PACA_EXRFI+EX_R13(r13)
 	GET_SCRATCH0(r13);
 	rfid
 
@@ -1614,39 +1632,37 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
 	std	r9,PACA_EXRFI+EX_R9(r13)
 	std	r10,PACA_EXRFI+EX_R10(r13)
 	std	r11,PACA_EXRFI+EX_R11(r13)
-	std	r12,PACA_EXRFI+EX_R12(r13)
-	std	r8,PACA_EXRFI+EX_R13(r13)
 	mfctr	r9
 	ld	r10,PACA_RFI_FLUSH_FALLBACK_AREA(r13)
-	ld	r11,PACA_L1D_FLUSH_SETS(r13)
-	ld	r12,PACA_L1D_FLUSH_CONGRUENCE(r13)
-	/*
-	 * The load adresses are at staggered offsets within cachelines,
-	 * which suits some pipelines better (on others it should not
-	 * hurt).
-	 */
-	addi	r12,r12,8
+	ld	r11,PACA_L1D_FLUSH_SIZE(r13)
+	srdi	r11,r11,(7 + 3) /* 128 byte lines, unrolled 8x */
 	mtctr	r11
 	DCBT_STOP_ALL_STREAM_IDS(r11) /* Stop prefetch streams */
 
 	/* order ld/st prior to dcbt stop all streams with flushing */
 	sync
-1:	li	r8,0
-	.rept	8 /* 8-way set associative */
-	ldx	r11,r10,r8
-	add	r8,r8,r12
-	xor	r11,r11,r11	// Ensure r11 is 0 even if fallback area is not
-	add	r8,r8,r11	// Add 0, this creates a dependency on the ldx
-	.endr
-	addi	r10,r10,128 /* 128 byte cache line */
+
+	/*
+	 * The load adresses are at staggered offsets within cachelines,
+	 * which suits some pipelines better (on others it should not
+	 * hurt).
+	 */
+1:
+	ld	r11,(0x80 + 8)*0(r10)
+	ld	r11,(0x80 + 8)*1(r10)
+	ld	r11,(0x80 + 8)*2(r10)
+	ld	r11,(0x80 + 8)*3(r10)
+	ld	r11,(0x80 + 8)*4(r10)
+	ld	r11,(0x80 + 8)*5(r10)
+	ld	r11,(0x80 + 8)*6(r10)
+	ld	r11,(0x80 + 8)*7(r10)
+	addi	r10,r10,0x80*8
 	bdnz	1b
 
 	mtctr	r9
 	ld	r9,PACA_EXRFI+EX_R9(r13)
 	ld	r10,PACA_EXRFI+EX_R10(r13)
 	ld	r11,PACA_EXRFI+EX_R11(r13)
-	ld	r12,PACA_EXRFI+EX_R12(r13)
-	ld	r8,PACA_EXRFI+EX_R13(r13)
 	GET_SCRATCH0(r13);
 	hrfid
 
diff --git a/arch/powerpc/kernel/module.c b/arch/powerpc/kernel/module.c
index 9547381b631a..ff009be97a42 100644
--- a/arch/powerpc/kernel/module.c
+++ b/arch/powerpc/kernel/module.c
@@ -67,7 +67,15 @@ int module_finalize(const Elf_Ehdr *hdr,
 		do_feature_fixups(powerpc_firmware_features,
 				  (void *)sect->sh_addr,
 				  (void *)sect->sh_addr + sect->sh_size);
-#endif
+#endif /* CONFIG_PPC64 */
+
+#ifdef CONFIG_PPC_BARRIER_NOSPEC
+	sect = find_section(hdr, sechdrs, "__spec_barrier_fixup");
+	if (sect != NULL)
+		do_barrier_nospec_fixups_range(barrier_nospec_enabled,
+				  (void *)sect->sh_addr,
+				  (void *)sect->sh_addr + sect->sh_size);
+#endif /* CONFIG_PPC_BARRIER_NOSPEC */
 
 	sect = find_section(hdr, sechdrs, "__lwsync_fixup");
 	if (sect != NULL)
diff --git a/arch/powerpc/kernel/security.c b/arch/powerpc/kernel/security.c
new file mode 100644
index 000000000000..58f0602a92b9
--- /dev/null
+++ b/arch/powerpc/kernel/security.c
@@ -0,0 +1,433 @@
+// SPDX-License-Identifier: GPL-2.0+
+//
+// Security related flags and so on.
+//
+// Copyright 2018, Michael Ellerman, IBM Corporation.
+
+#include <linux/kernel.h>
+#include <linux/debugfs.h>
+#include <linux/device.h>
+#include <linux/seq_buf.h>
+
+#include <asm/debug.h>
+#include <asm/asm-prototypes.h>
+#include <asm/code-patching.h>
+#include <asm/security_features.h>
+#include <asm/setup.h>
+
+
+unsigned long powerpc_security_features __read_mostly = SEC_FTR_DEFAULT;
+
+enum count_cache_flush_type {
+	COUNT_CACHE_FLUSH_NONE	= 0x1,
+	COUNT_CACHE_FLUSH_SW	= 0x2,
+	COUNT_CACHE_FLUSH_HW	= 0x4,
+};
+static enum count_cache_flush_type count_cache_flush_type = COUNT_CACHE_FLUSH_NONE;
+
+bool barrier_nospec_enabled;
+static bool no_nospec;
+static bool btb_flush_enabled;
+#ifdef CONFIG_PPC_FSL_BOOK3E
+static bool no_spectrev2;
+#endif
+
+static void enable_barrier_nospec(bool enable)
+{
+	barrier_nospec_enabled = enable;
+	do_barrier_nospec_fixups(enable);
+}
+
+void setup_barrier_nospec(void)
+{
+	bool enable;
+
+	/*
+	 * It would make sense to check SEC_FTR_SPEC_BAR_ORI31 below as well.
+	 * But there's a good reason not to. The two flags we check below are
+	 * both enabled by default in the kernel, so if the hcall is not
+	 * functional they will be enabled.
+	 * On a system where the host firmware has been updated (so the ori
+	 * functions as a barrier), but on which the hypervisor (KVM/Qemu) has
+	 * not been updated, we would like to enable the barrier. Dropping the
+	 * check for SEC_FTR_SPEC_BAR_ORI31 achieves that. The only downside is
+	 * we potentially enable the barrier on systems where the host firmware
+	 * is not updated, but that's harmless as it's a no-op.
+	 */
+	enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) &&
+		 security_ftr_enabled(SEC_FTR_BNDS_CHK_SPEC_BAR);
+
+	if (!no_nospec)
+		enable_barrier_nospec(enable);
+}
+
+static int __init handle_nospectre_v1(char *p)
+{
+	no_nospec = true;
+
+	return 0;
+}
+early_param("nospectre_v1", handle_nospectre_v1);
+
+#ifdef CONFIG_DEBUG_FS
+static int barrier_nospec_set(void *data, u64 val)
+{
+	switch (val) {
+	case 0:
+	case 1:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	if (!!val == !!barrier_nospec_enabled)
+		return 0;
+
+	enable_barrier_nospec(!!val);
+
+	return 0;
+}
+
+static int barrier_nospec_get(void *data, u64 *val)
+{
+	*val = barrier_nospec_enabled ? 1 : 0;
+	return 0;
+}
+
+DEFINE_SIMPLE_ATTRIBUTE(fops_barrier_nospec,
+			barrier_nospec_get, barrier_nospec_set, "%llu\n");
+
+static __init int barrier_nospec_debugfs_init(void)
+{
+	debugfs_create_file("barrier_nospec", 0600, powerpc_debugfs_root, NULL,
+			    &fops_barrier_nospec);
+	return 0;
+}
+device_initcall(barrier_nospec_debugfs_init);
+#endif /* CONFIG_DEBUG_FS */
+
+#ifdef CONFIG_PPC_FSL_BOOK3E
+static int __init handle_nospectre_v2(char *p)
+{
+	no_spectrev2 = true;
+
+	return 0;
+}
+early_param("nospectre_v2", handle_nospectre_v2);
+void setup_spectre_v2(void)
+{
+	if (no_spectrev2)
+		do_btb_flush_fixups();
+	else
+		btb_flush_enabled = true;
+}
+#endif /* CONFIG_PPC_FSL_BOOK3E */
+
+#ifdef CONFIG_PPC_BOOK3S_64
+ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr, char *buf)
+{
+	bool thread_priv;
+
+	thread_priv = security_ftr_enabled(SEC_FTR_L1D_THREAD_PRIV);
+
+	if (rfi_flush || thread_priv) {
+		struct seq_buf s;
+		seq_buf_init(&s, buf, PAGE_SIZE - 1);
+
+		seq_buf_printf(&s, "Mitigation: ");
+
+		if (rfi_flush)
+			seq_buf_printf(&s, "RFI Flush");
+
+		if (rfi_flush && thread_priv)
+			seq_buf_printf(&s, ", ");
+
+		if (thread_priv)
+			seq_buf_printf(&s, "L1D private per thread");
+
+		seq_buf_printf(&s, "\n");
+
+		return s.len;
+	}
+
+	if (!security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV) &&
+	    !security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR))
+		return sprintf(buf, "Not affected\n");
+
+	return sprintf(buf, "Vulnerable\n");
+}
+#endif
+
+ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr, char *buf)
+{
+	struct seq_buf s;
+
+	seq_buf_init(&s, buf, PAGE_SIZE - 1);
+
+	if (security_ftr_enabled(SEC_FTR_BNDS_CHK_SPEC_BAR)) {
+		if (barrier_nospec_enabled)
+			seq_buf_printf(&s, "Mitigation: __user pointer sanitization");
+		else
+			seq_buf_printf(&s, "Vulnerable");
+
+		if (security_ftr_enabled(SEC_FTR_SPEC_BAR_ORI31))
+			seq_buf_printf(&s, ", ori31 speculation barrier enabled");
+
+		seq_buf_printf(&s, "\n");
+	} else
+		seq_buf_printf(&s, "Not affected\n");
+
+	return s.len;
+}
+
+ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr, char *buf)
+{
+	struct seq_buf s;
+	bool bcs, ccd;
+
+	seq_buf_init(&s, buf, PAGE_SIZE - 1);
+
+	bcs = security_ftr_enabled(SEC_FTR_BCCTRL_SERIALISED);
+	ccd = security_ftr_enabled(SEC_FTR_COUNT_CACHE_DISABLED);
+
+	if (bcs || ccd) {
+		seq_buf_printf(&s, "Mitigation: ");
+
+		if (bcs)
+			seq_buf_printf(&s, "Indirect branch serialisation (kernel only)");
+
+		if (bcs && ccd)
+			seq_buf_printf(&s, ", ");
+
+		if (ccd)
+			seq_buf_printf(&s, "Indirect branch cache disabled");
+	} else if (count_cache_flush_type != COUNT_CACHE_FLUSH_NONE) {
+		seq_buf_printf(&s, "Mitigation: Software count cache flush");
+
+		if (count_cache_flush_type == COUNT_CACHE_FLUSH_HW)
+			seq_buf_printf(&s, " (hardware accelerated)");
+	} else if (btb_flush_enabled) {
+		seq_buf_printf(&s, "Mitigation: Branch predictor state flush");
+	} else {
+		seq_buf_printf(&s, "Vulnerable");
+	}
+
+	seq_buf_printf(&s, "\n");
+
+	return s.len;
+}
+
+#ifdef CONFIG_PPC_BOOK3S_64
+/*
+ * Store-forwarding barrier support.
+ */
+
+static enum stf_barrier_type stf_enabled_flush_types;
+static bool no_stf_barrier;
+bool stf_barrier;
+
+static int __init handle_no_stf_barrier(char *p)
+{
+	pr_info("stf-barrier: disabled on command line.");
+	no_stf_barrier = true;
+	return 0;
+}
+
+early_param("no_stf_barrier", handle_no_stf_barrier);
+
+/* This is the generic flag used by other architectures */
+static int __init handle_ssbd(char *p)
+{
+	if (!p || strncmp(p, "auto", 5) == 0 || strncmp(p, "on", 2) == 0 ) {
+		/* Until firmware tells us, we have the barrier with auto */
+		return 0;
+	} else if (strncmp(p, "off", 3) == 0) {
+		handle_no_stf_barrier(NULL);
+		return 0;
+	} else
+		return 1;
+
+	return 0;
+}
+early_param("spec_store_bypass_disable", handle_ssbd);
+
+/* This is the generic flag used by other architectures */
+static int __init handle_no_ssbd(char *p)
+{
+	handle_no_stf_barrier(NULL);
+	return 0;
+}
+early_param("nospec_store_bypass_disable", handle_no_ssbd);
+
+static void stf_barrier_enable(bool enable)
+{
+	if (enable)
+		do_stf_barrier_fixups(stf_enabled_flush_types);
+	else
+		do_stf_barrier_fixups(STF_BARRIER_NONE);
+
+	stf_barrier = enable;
+}
+
+void setup_stf_barrier(void)
+{
+	enum stf_barrier_type type;
+	bool enable, hv;
+
+	hv = cpu_has_feature(CPU_FTR_HVMODE);
+
+	/* Default to fallback in case fw-features are not available */
+	if (cpu_has_feature(CPU_FTR_ARCH_207S))
+		type = STF_BARRIER_SYNC_ORI;
+	else if (cpu_has_feature(CPU_FTR_ARCH_206))
+		type = STF_BARRIER_FALLBACK;
+	else
+		type = STF_BARRIER_NONE;
+
+	enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) &&
+		(security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR) ||
+		 (security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV) && hv));
+
+	if (type == STF_BARRIER_FALLBACK) {
+		pr_info("stf-barrier: fallback barrier available\n");
+	} else if (type == STF_BARRIER_SYNC_ORI) {
+		pr_info("stf-barrier: hwsync barrier available\n");
+	} else if (type == STF_BARRIER_EIEIO) {
+		pr_info("stf-barrier: eieio barrier available\n");
+	}
+
+	stf_enabled_flush_types = type;
+
+	if (!no_stf_barrier)
+		stf_barrier_enable(enable);
+}
+
+ssize_t cpu_show_spec_store_bypass(struct device *dev, struct device_attribute *attr, char *buf)
+{
+	if (stf_barrier && stf_enabled_flush_types != STF_BARRIER_NONE) {
+		const char *type;
+		switch (stf_enabled_flush_types) {
+		case STF_BARRIER_EIEIO:
+			type = "eieio";
+			break;
+		case STF_BARRIER_SYNC_ORI:
+			type = "hwsync";
+			break;
+		case STF_BARRIER_FALLBACK:
+			type = "fallback";
+			break;
+		default:
+			type = "unknown";
+		}
+		return sprintf(buf, "Mitigation: Kernel entry/exit barrier (%s)\n", type);
+	}
+
+	if (!security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV) &&
+	    !security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR))
+		return sprintf(buf, "Not affected\n");
+
+	return sprintf(buf, "Vulnerable\n");
+}
+
+#ifdef CONFIG_DEBUG_FS
+static int stf_barrier_set(void *data, u64 val)
+{
+	bool enable;
+
+	if (val == 1)
+		enable = true;
+	else if (val == 0)
+		enable = false;
+	else
+		return -EINVAL;
+
+	/* Only do anything if we're changing state */
+	if (enable != stf_barrier)
+		stf_barrier_enable(enable);
+
+	return 0;
+}
+
+static int stf_barrier_get(void *data, u64 *val)
+{
+	*val = stf_barrier ? 1 : 0;
+	return 0;
+}
+
+DEFINE_SIMPLE_ATTRIBUTE(fops_stf_barrier, stf_barrier_get, stf_barrier_set, "%llu\n");
+
+static __init int stf_barrier_debugfs_init(void)
+{
+	debugfs_create_file("stf_barrier", 0600, powerpc_debugfs_root, NULL, &fops_stf_barrier);
+	return 0;
+}
+device_initcall(stf_barrier_debugfs_init);
+#endif /* CONFIG_DEBUG_FS */
+
+static void toggle_count_cache_flush(bool enable)
+{
+	if (!enable || !security_ftr_enabled(SEC_FTR_FLUSH_COUNT_CACHE)) {
+		patch_instruction_site(&patch__call_flush_count_cache, PPC_INST_NOP);
+		count_cache_flush_type = COUNT_CACHE_FLUSH_NONE;
+		pr_info("count-cache-flush: software flush disabled.\n");
+		return;
+	}
+
+	patch_branch_site(&patch__call_flush_count_cache,
+			  (u64)&flush_count_cache, BRANCH_SET_LINK);
+
+	if (!security_ftr_enabled(SEC_FTR_BCCTR_FLUSH_ASSIST)) {
+		count_cache_flush_type = COUNT_CACHE_FLUSH_SW;
+		pr_info("count-cache-flush: full software flush sequence enabled.\n");
+		return;
+	}
+
+	patch_instruction_site(&patch__flush_count_cache_return, PPC_INST_BLR);
+	count_cache_flush_type = COUNT_CACHE_FLUSH_HW;
+	pr_info("count-cache-flush: hardware assisted flush sequence enabled\n");
+}
+
+void setup_count_cache_flush(void)
+{
+	toggle_count_cache_flush(true);
+}
+
+#ifdef CONFIG_DEBUG_FS
+static int count_cache_flush_set(void *data, u64 val)
+{
+	bool enable;
+
+	if (val == 1)
+		enable = true;
+	else if (val == 0)
+		enable = false;
+	else
+		return -EINVAL;
+
+	toggle_count_cache_flush(enable);
+
+	return 0;
+}
+
+static int count_cache_flush_get(void *data, u64 *val)
+{
+	if (count_cache_flush_type == COUNT_CACHE_FLUSH_NONE)
+		*val = 0;
+	else
+		*val = 1;
+
+	return 0;
+}
+
+DEFINE_SIMPLE_ATTRIBUTE(fops_count_cache_flush, count_cache_flush_get,
+			count_cache_flush_set, "%llu\n");
+
+static __init int count_cache_flush_debugfs_init(void)
+{
+	debugfs_create_file("count_cache_flush", 0600, powerpc_debugfs_root,
+			    NULL, &fops_count_cache_flush);
+	return 0;
+}
+device_initcall(count_cache_flush_debugfs_init);
+#endif /* CONFIG_DEBUG_FS */
+#endif /* CONFIG_PPC_BOOK3S_64 */
diff --git a/arch/powerpc/kernel/setup_32.c b/arch/powerpc/kernel/setup_32.c
index ad8c9db61237..5a9f035bcd6b 100644
--- a/arch/powerpc/kernel/setup_32.c
+++ b/arch/powerpc/kernel/setup_32.c
@@ -322,6 +322,8 @@ void __init setup_arch(char **cmdline_p)
 		ppc_md.setup_arch();
 	if ( ppc_md.progress ) ppc_md.progress("arch: exit", 0x3eab);
 
+	setup_barrier_nospec();
+
 	paging_init();
 
 	/* Initialize the MMU context management stuff */
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index 9eb469bed22b..6bb731ababc6 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -736,6 +736,8 @@ void __init setup_arch(char **cmdline_p)
 	if (ppc_md.setup_arch)
 		ppc_md.setup_arch();
 
+	setup_barrier_nospec();
+
 	paging_init();
 
 	/* Initialize the MMU context management stuff */
@@ -873,9 +875,6 @@ static void do_nothing(void *unused)
 
 void rfi_flush_enable(bool enable)
 {
-	if (rfi_flush == enable)
-		return;
-
 	if (enable) {
 		do_rfi_flush_fixups(enabled_flush_types);
 		on_each_cpu(do_nothing, NULL, 1);
@@ -885,11 +884,15 @@ void rfi_flush_enable(bool enable)
 	rfi_flush = enable;
 }
 
-static void init_fallback_flush(void)
+static void __ref init_fallback_flush(void)
 {
 	u64 l1d_size, limit;
 	int cpu;
 
+	/* Only allocate the fallback flush area once (at boot time). */
+	if (l1d_flush_fallback_area)
+		return;
+
 	l1d_size = ppc64_caches.dsize;
 	limit = min(safe_stack_limit(), ppc64_rma_size);
 
@@ -902,34 +905,23 @@ static void init_fallback_flush(void)
 	memset(l1d_flush_fallback_area, 0, l1d_size * 2);
 
 	for_each_possible_cpu(cpu) {
-		/*
-		 * The fallback flush is currently coded for 8-way
-		 * associativity. Different associativity is possible, but it
-		 * will be treated as 8-way and may not evict the lines as
-		 * effectively.
-		 *
-		 * 128 byte lines are mandatory.
-		 */
-		u64 c = l1d_size / 8;
-
 		paca[cpu].rfi_flush_fallback_area = l1d_flush_fallback_area;
-		paca[cpu].l1d_flush_congruence = c;
-		paca[cpu].l1d_flush_sets = c / 128;
+		paca[cpu].l1d_flush_size = l1d_size;
 	}
 }
 
-void __init setup_rfi_flush(enum l1d_flush_type types, bool enable)
+void setup_rfi_flush(enum l1d_flush_type types, bool enable)
 {
 	if (types & L1D_FLUSH_FALLBACK) {
-		pr_info("rfi-flush: Using fallback displacement flush\n");
+		pr_info("rfi-flush: fallback displacement flush available\n");
 		init_fallback_flush();
 	}
 
 	if (types & L1D_FLUSH_ORI)
-		pr_info("rfi-flush: Using ori type flush\n");
+		pr_info("rfi-flush: ori type flush available\n");
 
 	if (types & L1D_FLUSH_MTTRIG)
-		pr_info("rfi-flush: Using mttrig type flush\n");
+		pr_info("rfi-flush: mttrig type flush available\n");
 
 	enabled_flush_types = types;
 
@@ -940,13 +932,19 @@ void __init setup_rfi_flush(enum l1d_flush_type types, bool enable)
 #ifdef CONFIG_DEBUG_FS
 static int rfi_flush_set(void *data, u64 val)
 {
+	bool enable;
+
 	if (val == 1)
-		rfi_flush_enable(true);
+		enable = true;
 	else if (val == 0)
-		rfi_flush_enable(false);
+		enable = false;
 	else
 		return -EINVAL;
 
+	/* Only do anything if we're changing state */
+	if (enable != rfi_flush)
+		rfi_flush_enable(enable);
+
 	return 0;
 }
 
@@ -965,12 +963,4 @@ static __init int rfi_flush_debugfs_init(void)
 }
 device_initcall(rfi_flush_debugfs_init);
 #endif
-
-ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr, char *buf)
-{
-	if (rfi_flush)
-		return sprintf(buf, "Mitigation: RFI Flush\n");
-
-	return sprintf(buf, "Vulnerable\n");
-}
 #endif /* CONFIG_PPC_BOOK3S_64 */
diff --git a/arch/powerpc/kernel/vmlinux.lds.S b/arch/powerpc/kernel/vmlinux.lds.S
index 072a23a17350..876ac9d52afc 100644
--- a/arch/powerpc/kernel/vmlinux.lds.S
+++ b/arch/powerpc/kernel/vmlinux.lds.S
@@ -73,14 +73,45 @@ SECTIONS
 	RODATA
 
 #ifdef CONFIG_PPC64
+	. = ALIGN(8);
+	__stf_entry_barrier_fixup : AT(ADDR(__stf_entry_barrier_fixup) - LOAD_OFFSET) {
+		__start___stf_entry_barrier_fixup = .;
+		*(__stf_entry_barrier_fixup)
+		__stop___stf_entry_barrier_fixup = .;
+	}
+
+	. = ALIGN(8);
+	__stf_exit_barrier_fixup : AT(ADDR(__stf_exit_barrier_fixup) - LOAD_OFFSET) {
+		__start___stf_exit_barrier_fixup = .;
+		*(__stf_exit_barrier_fixup)
+		__stop___stf_exit_barrier_fixup = .;
+	}
+
 	. = ALIGN(8);
 	__rfi_flush_fixup : AT(ADDR(__rfi_flush_fixup) - LOAD_OFFSET) {
 		__start___rfi_flush_fixup = .;
 		*(__rfi_flush_fixup)
 		__stop___rfi_flush_fixup = .;
 	}
-#endif
+#endif /* CONFIG_PPC64 */
 
+#ifdef CONFIG_PPC_BARRIER_NOSPEC
+	. = ALIGN(8);
+	__spec_barrier_fixup : AT(ADDR(__spec_barrier_fixup) - LOAD_OFFSET) {
+		__start___barrier_nospec_fixup = .;
+		*(__barrier_nospec_fixup)
+		__stop___barrier_nospec_fixup = .;
+	}
+#endif /* CONFIG_PPC_BARRIER_NOSPEC */
+
+#ifdef CONFIG_PPC_FSL_BOOK3E
+	. = ALIGN(8);
+	__spec_btb_flush_fixup : AT(ADDR(__spec_btb_flush_fixup) - LOAD_OFFSET) {
+		__start__btb_flush_fixup = .;
+		*(__btb_flush_fixup)
+		__stop__btb_flush_fixup = .;
+	}
+#endif
 	EXCEPTION_TABLE(0)
 
 	NOTES :kernel :notes
diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
index d5edbeb8eb82..570c06a00db6 100644
--- a/arch/powerpc/lib/code-patching.c
+++ b/arch/powerpc/lib/code-patching.c
@@ -14,12 +14,25 @@
 #include <asm/page.h>
 #include <asm/code-patching.h>
 #include <asm/uaccess.h>
+#include <asm/setup.h>
+#include <asm/sections.h>
 
 
+static inline bool is_init(unsigned int *addr)
+{
+	return addr >= (unsigned int *)__init_begin && addr < (unsigned int *)__init_end;
+}
+
 int patch_instruction(unsigned int *addr, unsigned int instr)
 {
 	int err;
 
+	/* Make sure we aren't patching a freed init section */
+	if (init_mem_is_free && is_init(addr)) {
+		pr_debug("Skipping init section patching addr: 0x%px\n", addr);
+		return 0;
+	}
+
 	__put_user_size(instr, addr, 4, err);
 	if (err)
 		return err;
@@ -32,6 +45,22 @@ int patch_branch(unsigned int *addr, unsigned long target, int flags)
 	return patch_instruction(addr, create_branch(addr, target, flags));
 }
 
+int patch_branch_site(s32 *site, unsigned long target, int flags)
+{
+	unsigned int *addr;
+
+	addr = (unsigned int *)((unsigned long)site + *site);
+	return patch_instruction(addr, create_branch(addr, target, flags));
+}
+
+int patch_instruction_site(s32 *site, unsigned int instr)
+{
+	unsigned int *addr;
+
+	addr = (unsigned int *)((unsigned long)site + *site);
+	return patch_instruction(addr, instr);
+}
+
 unsigned int create_branch(const unsigned int *addr,
 			   unsigned long target, int flags)
 {
diff --git a/arch/powerpc/lib/feature-fixups.c b/arch/powerpc/lib/feature-fixups.c
index 3af014684872..7bdfc19a491d 100644
--- a/arch/powerpc/lib/feature-fixups.c
+++ b/arch/powerpc/lib/feature-fixups.c
@@ -21,7 +21,7 @@
 #include <asm/page.h>
 #include <asm/sections.h>
 #include <asm/setup.h>
-
+#include <asm/security_features.h>
 
 struct fixup_entry {
 	unsigned long	mask;
@@ -115,6 +115,120 @@ void do_feature_fixups(unsigned long value, void *fixup_start, void *fixup_end)
 }
 
 #ifdef CONFIG_PPC_BOOK3S_64
+void do_stf_entry_barrier_fixups(enum stf_barrier_type types)
+{
+	unsigned int instrs[3], *dest;
+	long *start, *end;
+	int i;
+
+	start = PTRRELOC(&__start___stf_entry_barrier_fixup),
+	end = PTRRELOC(&__stop___stf_entry_barrier_fixup);
+
+	instrs[0] = 0x60000000; /* nop */
+	instrs[1] = 0x60000000; /* nop */
+	instrs[2] = 0x60000000; /* nop */
+
+	i = 0;
+	if (types & STF_BARRIER_FALLBACK) {
+		instrs[i++] = 0x7d4802a6; /* mflr r10		*/
+		instrs[i++] = 0x60000000; /* branch patched below */
+		instrs[i++] = 0x7d4803a6; /* mtlr r10		*/
+	} else if (types & STF_BARRIER_EIEIO) {
+		instrs[i++] = 0x7e0006ac; /* eieio + bit 6 hint */
+	} else if (types & STF_BARRIER_SYNC_ORI) {
+		instrs[i++] = 0x7c0004ac; /* hwsync		*/
+		instrs[i++] = 0xe94d0000; /* ld r10,0(r13)	*/
+		instrs[i++] = 0x63ff0000; /* ori 31,31,0 speculation barrier */
+	}
+
+	for (i = 0; start < end; start++, i++) {
+		dest = (void *)start + *start;
+
+		pr_devel("patching dest %lx\n", (unsigned long)dest);
+
+		patch_instruction(dest, instrs[0]);
+
+		if (types & STF_BARRIER_FALLBACK)
+			patch_branch(dest + 1, (unsigned long)&stf_barrier_fallback,
+				     BRANCH_SET_LINK);
+		else
+			patch_instruction(dest + 1, instrs[1]);
+
+		patch_instruction(dest + 2, instrs[2]);
+	}
+
+	printk(KERN_DEBUG "stf-barrier: patched %d entry locations (%s barrier)\n", i,
+		(types == STF_BARRIER_NONE)                  ? "no" :
+		(types == STF_BARRIER_FALLBACK)              ? "fallback" :
+		(types == STF_BARRIER_EIEIO)                 ? "eieio" :
+		(types == (STF_BARRIER_SYNC_ORI))            ? "hwsync"
+		                                           : "unknown");
+}
+
+void do_stf_exit_barrier_fixups(enum stf_barrier_type types)
+{
+	unsigned int instrs[6], *dest;
+	long *start, *end;
+	int i;
+
+	start = PTRRELOC(&__start___stf_exit_barrier_fixup),
+	end = PTRRELOC(&__stop___stf_exit_barrier_fixup);
+
+	instrs[0] = 0x60000000; /* nop */
+	instrs[1] = 0x60000000; /* nop */
+	instrs[2] = 0x60000000; /* nop */
+	instrs[3] = 0x60000000; /* nop */
+	instrs[4] = 0x60000000; /* nop */
+	instrs[5] = 0x60000000; /* nop */
+
+	i = 0;
+	if (types & STF_BARRIER_FALLBACK || types & STF_BARRIER_SYNC_ORI) {
+		if (cpu_has_feature(CPU_FTR_HVMODE)) {
+			instrs[i++] = 0x7db14ba6; /* mtspr 0x131, r13 (HSPRG1) */
+			instrs[i++] = 0x7db04aa6; /* mfspr r13, 0x130 (HSPRG0) */
+		} else {
+			instrs[i++] = 0x7db243a6; /* mtsprg 2,r13	*/
+			instrs[i++] = 0x7db142a6; /* mfsprg r13,1    */
+	        }
+		instrs[i++] = 0x7c0004ac; /* hwsync		*/
+		instrs[i++] = 0xe9ad0000; /* ld r13,0(r13)	*/
+		instrs[i++] = 0x63ff0000; /* ori 31,31,0 speculation barrier */
+		if (cpu_has_feature(CPU_FTR_HVMODE)) {
+			instrs[i++] = 0x7db14aa6; /* mfspr r13, 0x131 (HSPRG1) */
+		} else {
+			instrs[i++] = 0x7db242a6; /* mfsprg r13,2 */
+		}
+	} else if (types & STF_BARRIER_EIEIO) {
+		instrs[i++] = 0x7e0006ac; /* eieio + bit 6 hint */
+	}
+
+	for (i = 0; start < end; start++, i++) {
+		dest = (void *)start + *start;
+
+		pr_devel("patching dest %lx\n", (unsigned long)dest);
+
+		patch_instruction(dest, instrs[0]);
+		patch_instruction(dest + 1, instrs[1]);
+		patch_instruction(dest + 2, instrs[2]);
+		patch_instruction(dest + 3, instrs[3]);
+		patch_instruction(dest + 4, instrs[4]);
+		patch_instruction(dest + 5, instrs[5]);
+	}
+	printk(KERN_DEBUG "stf-barrier: patched %d exit locations (%s barrier)\n", i,
+		(types == STF_BARRIER_NONE)                  ? "no" :
+		(types == STF_BARRIER_FALLBACK)              ? "fallback" :
+		(types == STF_BARRIER_EIEIO)                 ? "eieio" :
+		(types == (STF_BARRIER_SYNC_ORI))            ? "hwsync"
+		                                           : "unknown");
+}
+
+
+void do_stf_barrier_fixups(enum stf_barrier_type types)
+{
+	do_stf_entry_barrier_fixups(types);
+	do_stf_exit_barrier_fixups(types);
+}
+
 void do_rfi_flush_fixups(enum l1d_flush_type types)
 {
 	unsigned int instrs[3], *dest;
@@ -151,10 +265,110 @@ void do_rfi_flush_fixups(enum l1d_flush_type types)
 		patch_instruction(dest + 2, instrs[2]);
 	}
 
-	printk(KERN_DEBUG "rfi-flush: patched %d locations\n", i);
+	printk(KERN_DEBUG "rfi-flush: patched %d locations (%s flush)\n", i,
+		(types == L1D_FLUSH_NONE)       ? "no" :
+		(types == L1D_FLUSH_FALLBACK)   ? "fallback displacement" :
+		(types &  L1D_FLUSH_ORI)        ? (types & L1D_FLUSH_MTTRIG)
+							? "ori+mttrig type"
+							: "ori type" :
+		(types &  L1D_FLUSH_MTTRIG)     ? "mttrig type"
+						: "unknown");
+}
+
+void do_barrier_nospec_fixups_range(bool enable, void *fixup_start, void *fixup_end)
+{
+	unsigned int instr, *dest;
+	long *start, *end;
+	int i;
+
+	start = fixup_start;
+	end = fixup_end;
+
+	instr = 0x60000000; /* nop */
+
+	if (enable) {
+		pr_info("barrier-nospec: using ORI speculation barrier\n");
+		instr = 0x63ff0000; /* ori 31,31,0 speculation barrier */
+	}
+
+	for (i = 0; start < end; start++, i++) {
+		dest = (void *)start + *start;
+
+		pr_devel("patching dest %lx\n", (unsigned long)dest);
+		patch_instruction(dest, instr);
+	}
+
+	printk(KERN_DEBUG "barrier-nospec: patched %d locations\n", i);
 }
+
 #endif /* CONFIG_PPC_BOOK3S_64 */
 
+#ifdef CONFIG_PPC_BARRIER_NOSPEC
+void do_barrier_nospec_fixups(bool enable)
+{
+	void *start, *end;
+
+	start = PTRRELOC(&__start___barrier_nospec_fixup),
+	end = PTRRELOC(&__stop___barrier_nospec_fixup);
+
+	do_barrier_nospec_fixups_range(enable, start, end);
+}
+#endif /* CONFIG_PPC_BARRIER_NOSPEC */
+
+#ifdef CONFIG_PPC_FSL_BOOK3E
+void do_barrier_nospec_fixups_range(bool enable, void *fixup_start, void *fixup_end)
+{
+	unsigned int instr[2], *dest;
+	long *start, *end;
+	int i;
+
+	start = fixup_start;
+	end = fixup_end;
+
+	instr[0] = PPC_INST_NOP;
+	instr[1] = PPC_INST_NOP;
+
+	if (enable) {
+		pr_info("barrier-nospec: using isync; sync as speculation barrier\n");
+		instr[0] = PPC_INST_ISYNC;
+		instr[1] = PPC_INST_SYNC;
+	}
+
+	for (i = 0; start < end; start++, i++) {
+		dest = (void *)start + *start;
+
+		pr_devel("patching dest %lx\n", (unsigned long)dest);
+		patch_instruction(dest, instr[0]);
+		patch_instruction(dest + 1, instr[1]);
+	}
+
+	printk(KERN_DEBUG "barrier-nospec: patched %d locations\n", i);
+}
+
+static void patch_btb_flush_section(long *curr)
+{
+	unsigned int *start, *end;
+
+	start = (void *)curr + *curr;
+	end = (void *)curr + *(curr + 1);
+	for (; start < end; start++) {
+		pr_devel("patching dest %lx\n", (unsigned long)start);
+		patch_instruction(start, PPC_INST_NOP);
+	}
+}
+
+void do_btb_flush_fixups(void)
+{
+	long *start, *end;
+
+	start = PTRRELOC(&__start__btb_flush_fixup);
+	end = PTRRELOC(&__stop__btb_flush_fixup);
+
+	for (; start < end; start += 2)
+		patch_btb_flush_section(start);
+}
+#endif /* CONFIG_PPC_FSL_BOOK3E */
+
 void do_lwsync_fixups(unsigned long value, void *fixup_start, void *fixup_end)
 {
 	long *start, *end;
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 22d94c3e6fc4..1efe5ca5c3bc 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -62,6 +62,7 @@
 #endif
 
 unsigned long long memory_limit;
+bool init_mem_is_free;
 
 #ifdef CONFIG_HIGHMEM
 pte_t *kmap_pte;
@@ -381,6 +382,7 @@ void __init mem_init(void)
 void free_initmem(void)
 {
 	ppc_md.progress = ppc_printk_progress;
+	init_mem_is_free = true;
 	free_initmem_default(POISON_FREE_INITMEM);
 }
 
diff --git a/arch/powerpc/mm/tlb_low_64e.S b/arch/powerpc/mm/tlb_low_64e.S
index 29d6987c37ba..5486d56da289 100644
--- a/arch/powerpc/mm/tlb_low_64e.S
+++ b/arch/powerpc/mm/tlb_low_64e.S
@@ -69,6 +69,13 @@ END_FTR_SECTION_IFSET(CPU_FTR_EMB_HV)
 	std	r15,EX_TLB_R15(r12)
 	std	r10,EX_TLB_CR(r12)
 #ifdef CONFIG_PPC_FSL_BOOK3E
+START_BTB_FLUSH_SECTION
+	mfspr r11, SPRN_SRR1
+	andi. r10,r11,MSR_PR
+	beq 1f
+	BTB_FLUSH(r10)
+1:
+END_BTB_FLUSH_SECTION
 	std	r7,EX_TLB_R7(r12)
 #endif
 	TLB_MISS_PROLOG_STATS
diff --git a/arch/powerpc/platforms/powernv/setup.c b/arch/powerpc/platforms/powernv/setup.c
index c57afc619b20..e14b52c7ebd8 100644
--- a/arch/powerpc/platforms/powernv/setup.c
+++ b/arch/powerpc/platforms/powernv/setup.c
@@ -37,53 +37,99 @@
 #include <asm/smp.h>
 #include <asm/tm.h>
 #include <asm/setup.h>
+#include <asm/security_features.h>
 
 #include "powernv.h"
 
+
+static bool fw_feature_is(const char *state, const char *name,
+			  struct device_node *fw_features)
+{
+	struct device_node *np;
+	bool rc = false;
+
+	np = of_get_child_by_name(fw_features, name);
+	if (np) {
+		rc = of_property_read_bool(np, state);
+		of_node_put(np);
+	}
+
+	return rc;
+}
+
+static void init_fw_feat_flags(struct device_node *np)
+{
+	if (fw_feature_is("enabled", "inst-spec-barrier-ori31,31,0", np))
+		security_ftr_set(SEC_FTR_SPEC_BAR_ORI31);
+
+	if (fw_feature_is("enabled", "fw-bcctrl-serialized", np))
+		security_ftr_set(SEC_FTR_BCCTRL_SERIALISED);
+
+	if (fw_feature_is("enabled", "inst-l1d-flush-ori30,30,0", np))
+		security_ftr_set(SEC_FTR_L1D_FLUSH_ORI30);
+
+	if (fw_feature_is("enabled", "inst-l1d-flush-trig2", np))
+		security_ftr_set(SEC_FTR_L1D_FLUSH_TRIG2);
+
+	if (fw_feature_is("enabled", "fw-l1d-thread-split", np))
+		security_ftr_set(SEC_FTR_L1D_THREAD_PRIV);
+
+	if (fw_feature_is("enabled", "fw-count-cache-disabled", np))
+		security_ftr_set(SEC_FTR_COUNT_CACHE_DISABLED);
+
+	if (fw_feature_is("enabled", "fw-count-cache-flush-bcctr2,0,0", np))
+		security_ftr_set(SEC_FTR_BCCTR_FLUSH_ASSIST);
+
+	if (fw_feature_is("enabled", "needs-count-cache-flush-on-context-switch", np))
+		security_ftr_set(SEC_FTR_FLUSH_COUNT_CACHE);
+
+	/*
+	 * The features below are enabled by default, so we instead look to see
+	 * if firmware has *disabled* them, and clear them if so.
+	 */
+	if (fw_feature_is("disabled", "speculation-policy-favor-security", np))
+		security_ftr_clear(SEC_FTR_FAVOUR_SECURITY);
+
+	if (fw_feature_is("disabled", "needs-l1d-flush-msr-pr-0-to-1", np))
+		security_ftr_clear(SEC_FTR_L1D_FLUSH_PR);
+
+	if (fw_feature_is("disabled", "needs-l1d-flush-msr-hv-1-to-0", np))
+		security_ftr_clear(SEC_FTR_L1D_FLUSH_HV);
+
+	if (fw_feature_is("disabled", "needs-spec-barrier-for-bound-checks", np))
+		security_ftr_clear(SEC_FTR_BNDS_CHK_SPEC_BAR);
+}
+
 static void pnv_setup_rfi_flush(void)
 {
 	struct device_node *np, *fw_features;
 	enum l1d_flush_type type;
-	int enable;
+	bool enable;
 
 	/* Default to fallback in case fw-features are not available */
 	type = L1D_FLUSH_FALLBACK;
-	enable = 1;
 
 	np = of_find_node_by_name(NULL, "ibm,opal");
 	fw_features = of_get_child_by_name(np, "fw-features");
 	of_node_put(np);
 
 	if (fw_features) {
- -		np = of_get_child_by_name(fw_features, "inst-l1d-flush-trig2");
- -		if (np && of_property_read_bool(np, "enabled"))
- -			type = L1D_FLUSH_MTTRIG;
+		init_fw_feat_flags(fw_features);
+		of_node_put(fw_features);
 
- -		of_node_put(np);
+		if (security_ftr_enabled(SEC_FTR_L1D_FLUSH_TRIG2))
+			type = L1D_FLUSH_MTTRIG;
 
- -		np = of_get_child_by_name(fw_features, "inst-l1d-flush-ori30,30,0");
- -		if (np && of_property_read_bool(np, "enabled"))
+		if (security_ftr_enabled(SEC_FTR_L1D_FLUSH_ORI30))
 			type = L1D_FLUSH_ORI;
- -
- -		of_node_put(np);
- -
- -		/* Enable unless firmware says NOT to */
- -		enable = 2;
- -		np = of_get_child_by_name(fw_features, "needs-l1d-flush-msr-hv-1-to-0");
- -		if (np && of_property_read_bool(np, "disabled"))
- -			enable--;
- -
- -		of_node_put(np);
- -
- -		np = of_get_child_by_name(fw_features, "needs-l1d-flush-msr-pr-0-to-1");
- -		if (np && of_property_read_bool(np, "disabled"))
- -			enable--;
- -
- -		of_node_put(np);
- -		of_node_put(fw_features);
 	}
 
- -	setup_rfi_flush(type, enable > 0);
+	enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) &&
+		 (security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR)   ||
+		  security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV));
+
+	setup_rfi_flush(type, enable);
+	setup_count_cache_flush();
 }
 
 static void __init pnv_setup_arch(void)
@@ -91,6 +137,7 @@ static void __init pnv_setup_arch(void)
 	set_arch_panic_timeout(10, ARCH_PANIC_TIMEOUT);
 
 	pnv_setup_rfi_flush();
+	setup_stf_barrier();
 
 	/* Initialize SMP */
 	pnv_smp_init();
diff --git a/arch/powerpc/platforms/pseries/mobility.c b/arch/powerpc/platforms/pseries/mobility.c
index 8dd0c8edefd6..c773396d0969 100644
- --- a/arch/powerpc/platforms/pseries/mobility.c
+++ b/arch/powerpc/platforms/pseries/mobility.c
@@ -314,6 +314,9 @@ void post_mobility_fixup(void)
 		printk(KERN_ERR "Post-mobility device tree update "
 			"failed: %d\n", rc);
 
+	/* Possibly switch to a new RFI flush type */
+	pseries_setup_rfi_flush();
+
 	return;
 }
 
diff --git a/arch/powerpc/platforms/pseries/pseries.h b/arch/powerpc/platforms/pseries/pseries.h
index 8411c27293e4..e7d80797384d 100644
- --- a/arch/powerpc/platforms/pseries/pseries.h
+++ b/arch/powerpc/platforms/pseries/pseries.h
@@ -81,4 +81,6 @@ extern struct pci_controller_ops pseries_pci_controller_ops;
 
 unsigned long pseries_memory_block_size(void);
 
+void pseries_setup_rfi_flush(void);
+
 #endif /* _PSERIES_PSERIES_H */
diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
index dd2545fc9947..9cc976ff7fec 100644
- --- a/arch/powerpc/platforms/pseries/setup.c
+++ b/arch/powerpc/platforms/pseries/setup.c
@@ -67,6 +67,7 @@
 #include <asm/eeh.h>
 #include <asm/reg.h>
 #include <asm/plpar_wrappers.h>
+#include <asm/security_features.h>
 
 #include "pseries.h"
 
@@ -499,37 +500,87 @@ static void __init find_and_init_phbs(void)
 	of_pci_check_probe_only();
 }
 
- -static void pseries_setup_rfi_flush(void)
+static void init_cpu_char_feature_flags(struct h_cpu_char_result *result)
+{
+	/*
+	 * The features below are disabled by default, so we instead look to see
+	 * if firmware has *enabled* them, and set them if so.
+	 */
+	if (result->character & H_CPU_CHAR_SPEC_BAR_ORI31)
+		security_ftr_set(SEC_FTR_SPEC_BAR_ORI31);
+
+	if (result->character & H_CPU_CHAR_BCCTRL_SERIALISED)
+		security_ftr_set(SEC_FTR_BCCTRL_SERIALISED);
+
+	if (result->character & H_CPU_CHAR_L1D_FLUSH_ORI30)
+		security_ftr_set(SEC_FTR_L1D_FLUSH_ORI30);
+
+	if (result->character & H_CPU_CHAR_L1D_FLUSH_TRIG2)
+		security_ftr_set(SEC_FTR_L1D_FLUSH_TRIG2);
+
+	if (result->character & H_CPU_CHAR_L1D_THREAD_PRIV)
+		security_ftr_set(SEC_FTR_L1D_THREAD_PRIV);
+
+	if (result->character & H_CPU_CHAR_COUNT_CACHE_DISABLED)
+		security_ftr_set(SEC_FTR_COUNT_CACHE_DISABLED);
+
+	if (result->character & H_CPU_CHAR_BCCTR_FLUSH_ASSIST)
+		security_ftr_set(SEC_FTR_BCCTR_FLUSH_ASSIST);
+
+	if (result->behaviour & H_CPU_BEHAV_FLUSH_COUNT_CACHE)
+		security_ftr_set(SEC_FTR_FLUSH_COUNT_CACHE);
+
+	/*
+	 * The features below are enabled by default, so we instead look to see
+	 * if firmware has *disabled* them, and clear them if so.
+	 */
+	if (!(result->behaviour & H_CPU_BEHAV_FAVOUR_SECURITY))
+		security_ftr_clear(SEC_FTR_FAVOUR_SECURITY);
+
+	if (!(result->behaviour & H_CPU_BEHAV_L1D_FLUSH_PR))
+		security_ftr_clear(SEC_FTR_L1D_FLUSH_PR);
+
+	if (!(result->behaviour & H_CPU_BEHAV_BNDS_CHK_SPEC_BAR))
+		security_ftr_clear(SEC_FTR_BNDS_CHK_SPEC_BAR);
+}
+
+void pseries_setup_rfi_flush(void)
 {
 	struct h_cpu_char_result result;
 	enum l1d_flush_type types;
 	bool enable;
 	long rc;
 
- -	/* Enable by default */
- -	enable = true;
+	/*
+	 * Set features to the defaults assumed by init_cpu_char_feature_flags()
+	 * so it can set/clear again any features that might have changed after
+	 * migration, and to cover the case where the hypercall fails and it
+	 * is never called.
+	 */
+	powerpc_security_features = SEC_FTR_DEFAULT;
 
 	rc = plpar_get_cpu_characteristics(&result);
- -	if (rc == H_SUCCESS) {
- -		types = L1D_FLUSH_NONE;
+	if (rc == H_SUCCESS)
+		init_cpu_char_feature_flags(&result);
 
- -		if (result.character & H_CPU_CHAR_L1D_FLUSH_TRIG2)
- -			types |= L1D_FLUSH_MTTRIG;
- -		if (result.character & H_CPU_CHAR_L1D_FLUSH_ORI30)
- -			types |= L1D_FLUSH_ORI;
+	/*
+	 * We're the guest, so this doesn't apply to us; clear it to
+	 * simplify handling of it elsewhere.
+	 */
+	security_ftr_clear(SEC_FTR_L1D_FLUSH_HV);
 
- -		/* Use fallback if nothing set in hcall */
- -		if (types == L1D_FLUSH_NONE)
- -			types = L1D_FLUSH_FALLBACK;
+	types = L1D_FLUSH_FALLBACK;
 
- -		if (!(result.behaviour & H_CPU_BEHAV_L1D_FLUSH_PR))
- -			enable = false;
- -	} else {
- -		/* Default to fallback if case hcall is not available */
- -		types = L1D_FLUSH_FALLBACK;
- -	}
+	if (security_ftr_enabled(SEC_FTR_L1D_FLUSH_TRIG2))
+		types |= L1D_FLUSH_MTTRIG;
+
+	if (security_ftr_enabled(SEC_FTR_L1D_FLUSH_ORI30))
+		types |= L1D_FLUSH_ORI;
+
+	enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) &&
+		 security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR);
 
 	setup_rfi_flush(types, enable);
+	setup_count_cache_flush();
 }
 
 static void __init pSeries_setup_arch(void)
@@ -549,6 +600,7 @@ static void __init pSeries_setup_arch(void)
 	fwnmi_init();
 
 	pseries_setup_rfi_flush();
+	setup_stf_barrier();
 
 	/* By default, only probe PCI (can be overridden by rtas_pci) */
 	pci_add_flags(PCI_PROBE_ONLY);
diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
index 786bf01691c9..83619ebede93 100644
- --- a/arch/powerpc/xmon/xmon.c
+++ b/arch/powerpc/xmon/xmon.c
@@ -2144,6 +2144,8 @@ static void dump_one_paca(int cpu)
 	DUMP(p, slb_cache_ptr, "x");
 	for (i = 0; i < SLB_CACHE_ENTRIES; i++)
 		printf(" slb_cache[%d]:        = 0x%016lx\n", i, p->slb_cache[i]);
+
+	DUMP(p, rfi_flush_fallback_area, "px");
 #endif
 	DUMP(p, dscr_default, "llx");
 #ifdef CONFIG_PPC_BOOK3E
- -- 
2.20.1

-----BEGIN PGP SIGNATURE-----

iQIcBAEBAgAGBQJcvHWhAAoJEFHr6jzI4aWA6nsP/0YskmAfLovcUmERQ7+bIjq6
IcS1T466dvy6MlqeBXU4x8pVgInWeHKEC9XJdkM1lOeib/SLW7Hbz4kgJeOGwFGY
lOTaexrxvsBqPm7f6GC0zbl9obEIIIIUs+TielFQANBgqm+q8Wio+XXPP9bpKeKY
agSpQ3nwL/PYixznbNmN/lP9py5p89LQ0IBcR7dDBGGWJtD/AXeZ9hslsZxPbPtI
nZJ0vdnjuoB2z+hCxfKWlYfLwH0VfoTpqP5x3ALCkvbBr67e8bf6EK8+trnvhyQ8
iLY4bp1pm2epAI0/3NfyEiDMsGjVJ6IFlkyhDkHJgJNu0BGcGOSX2GpyU3juviAK
c95FtBft/i8AwigOMCivg2mN5edYjsSiPoEItwT5KWqgByJsdr5i5mYVx8cUjMOz
iAxLZCdg+UHZYuCBCAO2ZI1G9bVXI1Pa3btMspiCOOOsYGjXGf0oFfKQ+7957hUO
ftYYJoGHlMHiHR1OPas6T3lk6YKF9uvfIDTE3OKw2obHbbRz3u82xoWMRGW503MN
7WpkpAP7oZ9RgqIWFVhatWy5f+7GFL0akEi4o2tsZHhYlPau7YWo+nToTd87itwt
GBaWJipzge4s13VkhAE+jWFO35Fvwi8uNZ7UgpuKMBECEjkGbtzBTq2MjSF5G8wc
yPEod5jby/Iqb7DkGPVG
=6DnF
-----END PGP SIGNATURE-----


* [PATCH stable v4.4 00/52] powerpc spectre backports for 4.4
@ 2019-04-21 14:19 ` Michael Ellerman
  0 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:19 UTC (permalink / raw)
  To: stable, gregkh; +Cc: diana.craciun, linuxppc-dev, msuchanek, npiggin

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Hi Greg/Sasha,

Please queue up these powerpc patches for 4.4 if you have no objections.

cheers


Christophe Leroy (1):
  powerpc/fsl: Fix the flush of branch predictor.

Diana Craciun (10):
  powerpc/64: Disable the speculation barrier from the command line
  powerpc/64: Make stf barrier PPC_BOOK3S_64 specific.
  powerpc/64: Make meltdown reporting Book3S 64 specific
  powerpc/fsl: Add barrier_nospec implementation for NXP PowerPC Book3E
  powerpc/fsl: Add infrastructure to fixup branch predictor flush
  powerpc/fsl: Add macro to flush the branch predictor
  powerpc/fsl: Fix spectre_v2 mitigations reporting
  powerpc/fsl: Add nospectre_v2 command line argument
  powerpc/fsl: Flush the branch predictor at each kernel entry (64bit)
  powerpc/fsl: Update Spectre v2 reporting

Mauricio Faria de Oliveira (4):
  powerpc/rfi-flush: Differentiate enabled and patched flush types
  powerpc/pseries: Fix clearing of security feature flags
  powerpc: Move default security feature flags
  powerpc/pseries: Restore default security feature flags on setup

Michael Ellerman (29):
  powerpc/xmon: Add RFI flush related fields to paca dump
  powerpc/pseries: Support firmware disable of RFI flush
  powerpc/powernv: Support firmware disable of RFI flush
  powerpc/rfi-flush: Move the logic to avoid a redo into the debugfs
    code
  powerpc/rfi-flush: Make it possible to call setup_rfi_flush() again
  powerpc/rfi-flush: Always enable fallback flush on pseries
  powerpc/pseries: Add new H_GET_CPU_CHARACTERISTICS flags
  powerpc/rfi-flush: Call setup_rfi_flush() after LPM migration
  powerpc: Add security feature flags for Spectre/Meltdown
  powerpc/pseries: Set or clear security feature flags
  powerpc/powernv: Set or clear security feature flags
  powerpc/64s: Move cpu_show_meltdown()
  powerpc/64s: Enhance the information in cpu_show_meltdown()
  powerpc/powernv: Use the security flags in pnv_setup_rfi_flush()
  powerpc/pseries: Use the security flags in pseries_setup_rfi_flush()
  powerpc/64s: Wire up cpu_show_spectre_v1()
  powerpc/64s: Wire up cpu_show_spectre_v2()
  powerpc/64s: Fix section mismatch warnings from setup_rfi_flush()
  powerpc/64: Use barrier_nospec in syscall entry
  powerpc: Use barrier_nospec in copy_from_user()
  powerpc64s: Show ori31 availability in spectre_v1 sysfs file not v2
  powerpc/64: Add CONFIG_PPC_BARRIER_NOSPEC
  powerpc/64: Call setup_barrier_nospec() from setup_arch()
  powerpc/asm: Add a patch_site macro & helpers for patching
    instructions
  powerpc/64s: Add new security feature flags for count cache flush
  powerpc/64s: Add support for software count cache flush
  powerpc/pseries: Query hypervisor for count cache flush settings
  powerpc/powernv: Query firmware for count cache flush settings
  powerpc/security: Fix spectre_v2 reporting

Michael Neuling (1):
  powerpc: Avoid code patching freed init sections

Michal Suchanek (5):
  powerpc/64s: Add barrier_nospec
  powerpc/64s: Add support for ori barrier_nospec patching
  powerpc/64s: Patch barrier_nospec in modules
  powerpc/64s: Enable barrier_nospec based on firmware settings
  powerpc/64s: Enhance the information in cpu_show_spectre_v1()

Nicholas Piggin (2):
  powerpc/64s: Improve RFI L1-D cache flush fallback
  powerpc/64s: Add support for a store forwarding barrier at kernel
    entry/exit

 arch/powerpc/Kconfig                         |   7 +-
 arch/powerpc/include/asm/asm-prototypes.h    |  21 +
 arch/powerpc/include/asm/barrier.h           |  21 +
 arch/powerpc/include/asm/code-patching-asm.h |  18 +
 arch/powerpc/include/asm/code-patching.h     |   2 +
 arch/powerpc/include/asm/exception-64s.h     |  35 ++
 arch/powerpc/include/asm/feature-fixups.h    |  40 ++
 arch/powerpc/include/asm/hvcall.h            |   5 +
 arch/powerpc/include/asm/paca.h              |   3 +-
 arch/powerpc/include/asm/ppc-opcode.h        |   1 +
 arch/powerpc/include/asm/ppc_asm.h           |  11 +
 arch/powerpc/include/asm/security_features.h |  92 ++++
 arch/powerpc/include/asm/setup.h             |  23 +-
 arch/powerpc/include/asm/uaccess.h           |  18 +-
 arch/powerpc/kernel/Makefile                 |   1 +
 arch/powerpc/kernel/asm-offsets.c            |   3 +-
 arch/powerpc/kernel/entry_64.S               |  69 +++
 arch/powerpc/kernel/exceptions-64e.S         |  27 +-
 arch/powerpc/kernel/exceptions-64s.S         |  98 +++--
 arch/powerpc/kernel/module.c                 |  10 +-
 arch/powerpc/kernel/security.c               | 433 +++++++++++++++++++
 arch/powerpc/kernel/setup_32.c               |   2 +
 arch/powerpc/kernel/setup_64.c               |  50 +--
 arch/powerpc/kernel/vmlinux.lds.S            |  33 +-
 arch/powerpc/lib/code-patching.c             |  29 ++
 arch/powerpc/lib/feature-fixups.c            | 218 +++++++++-
 arch/powerpc/mm/mem.c                        |   2 +
 arch/powerpc/mm/tlb_low_64e.S                |   7 +
 arch/powerpc/platforms/powernv/setup.c       |  99 +++--
 arch/powerpc/platforms/pseries/mobility.c    |   3 +
 arch/powerpc/platforms/pseries/pseries.h     |   2 +
 arch/powerpc/platforms/pseries/setup.c       |  88 +++-
 arch/powerpc/xmon/xmon.c                     |   2 +
 33 files changed, 1345 insertions(+), 128 deletions(-)
 create mode 100644 arch/powerpc/include/asm/asm-prototypes.h
 create mode 100644 arch/powerpc/include/asm/code-patching-asm.h
 create mode 100644 arch/powerpc/include/asm/security_features.h
 create mode 100644 arch/powerpc/kernel/security.c

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 58a1fa979655..01b6c00a7060 100644
- --- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -136,7 +136,7 @@ config PPC
 	select GENERIC_SMP_IDLE_THREAD
 	select GENERIC_CMOS_UPDATE
 	select GENERIC_TIME_VSYSCALL_OLD
- -	select GENERIC_CPU_VULNERABILITIES	if PPC_BOOK3S_64
+	select GENERIC_CPU_VULNERABILITIES	if PPC_BARRIER_NOSPEC
 	select GENERIC_CLOCKEVENTS
 	select GENERIC_CLOCKEVENTS_BROADCAST if SMP
 	select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
@@ -162,6 +162,11 @@ config PPC
 	select ARCH_HAS_DMA_SET_COHERENT_MASK
 	select HAVE_ARCH_SECCOMP_FILTER
 
+config PPC_BARRIER_NOSPEC
+	bool
+	default y
+	depends on PPC_BOOK3S_64 || PPC_FSL_BOOK3E
+
 config GENERIC_CSUM
 	def_bool CPU_LITTLE_ENDIAN
 
diff --git a/arch/powerpc/include/asm/asm-prototypes.h b/arch/powerpc/include/asm/asm-prototypes.h
new file mode 100644
index 000000000000..8944c55591cf
- --- /dev/null
+++ b/arch/powerpc/include/asm/asm-prototypes.h
@@ -0,0 +1,21 @@
+#ifndef _ASM_POWERPC_ASM_PROTOTYPES_H
+#define _ASM_POWERPC_ASM_PROTOTYPES_H
+/*
+ * This file is for prototypes of C functions that are only called
+ * from asm, and any associated variables.
+ *
+ * Copyright 2016, Daniel Axtens, IBM Corporation.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ */
+
+/* Patch sites */
+extern s32 patch__call_flush_count_cache;
+extern s32 patch__flush_count_cache_return;
+
+extern long flush_count_cache;
+
+#endif /* _ASM_POWERPC_ASM_PROTOTYPES_H */
diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
index b9e16855a037..e7cb72cdb2ba 100644
- --- a/arch/powerpc/include/asm/barrier.h
+++ b/arch/powerpc/include/asm/barrier.h
@@ -92,4 +92,25 @@ do {									\
 #define smp_mb__after_atomic()      smp_mb()
 #define smp_mb__before_spinlock()   smp_mb()
 
+#ifdef CONFIG_PPC_BOOK3S_64
+#define NOSPEC_BARRIER_SLOT   nop
+#elif defined(CONFIG_PPC_FSL_BOOK3E)
+#define NOSPEC_BARRIER_SLOT   nop; nop
+#endif
+
+#ifdef CONFIG_PPC_BARRIER_NOSPEC
+/*
+ * Prevent execution of subsequent instructions until preceding branches have
+ * been fully resolved and are no longer executing speculatively.
+ */
+#define barrier_nospec_asm NOSPEC_BARRIER_FIXUP_SECTION; NOSPEC_BARRIER_SLOT
+
+// This also acts as a compiler barrier due to the memory clobber.
+#define barrier_nospec() asm (stringify_in_c(barrier_nospec_asm) ::: "memory")
+
+#else /* !CONFIG_PPC_BARRIER_NOSPEC */
+#define barrier_nospec_asm
+#define barrier_nospec()
+#endif /* CONFIG_PPC_BARRIER_NOSPEC */
+
 #endif /* _ASM_POWERPC_BARRIER_H */
diff --git a/arch/powerpc/include/asm/code-patching-asm.h b/arch/powerpc/include/asm/code-patching-asm.h
new file mode 100644
index 000000000000..ed7b1448493a
- --- /dev/null
+++ b/arch/powerpc/include/asm/code-patching-asm.h
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/*
+ * Copyright 2018, Michael Ellerman, IBM Corporation.
+ */
+#ifndef _ASM_POWERPC_CODE_PATCHING_ASM_H
+#define _ASM_POWERPC_CODE_PATCHING_ASM_H
+
+/* Define a "site" that can be patched */
+.macro patch_site label name
+	.pushsection ".rodata"
+	.balign 4
+	.global \name
+\name:
+	.4byte	\label - .
+	.popsection
+.endm
+
+#endif /* _ASM_POWERPC_CODE_PATCHING_ASM_H */
diff --git a/arch/powerpc/include/asm/code-patching.h b/arch/powerpc/include/asm/code-patching.h
index 840a5509b3f1..a734b4b34d26 100644
- --- a/arch/powerpc/include/asm/code-patching.h
+++ b/arch/powerpc/include/asm/code-patching.h
@@ -28,6 +28,8 @@ unsigned int create_cond_branch(const unsigned int *addr,
 				unsigned long target, int flags);
 int patch_branch(unsigned int *addr, unsigned long target, int flags);
 int patch_instruction(unsigned int *addr, unsigned int instr);
+int patch_instruction_site(s32 *addr, unsigned int instr);
+int patch_branch_site(s32 *site, unsigned long target, int flags);
 
 int instr_is_relative_branch(unsigned int instr);
 int instr_is_branch_to_addr(const unsigned int *instr, unsigned long addr);
diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
index 9bddbec441b8..3ed536bec462 100644
- --- a/arch/powerpc/include/asm/exception-64s.h
+++ b/arch/powerpc/include/asm/exception-64s.h
@@ -50,6 +50,27 @@
 #define EX_PPR		88	/* SMT thread status register (priority) */
 #define EX_CTR		96
 
+#define STF_ENTRY_BARRIER_SLOT						\
+	STF_ENTRY_BARRIER_FIXUP_SECTION;				\
+	nop;								\
+	nop;								\
+	nop
+
+#define STF_EXIT_BARRIER_SLOT						\
+	STF_EXIT_BARRIER_FIXUP_SECTION;					\
+	nop;								\
+	nop;								\
+	nop;								\
+	nop;								\
+	nop;								\
+	nop
+
+/*
+ * r10 must be free to use, r13 must be paca
+ */
+#define INTERRUPT_TO_KERNEL						\
+	STF_ENTRY_BARRIER_SLOT
+
 /*
  * Macros for annotating the expected destination of (h)rfid
  *
@@ -66,16 +87,19 @@
 	rfid
 
 #define RFI_TO_USER							\
+	STF_EXIT_BARRIER_SLOT;						\
 	RFI_FLUSH_SLOT;							\
 	rfid;								\
 	b	rfi_flush_fallback
 
 #define RFI_TO_USER_OR_KERNEL						\
+	STF_EXIT_BARRIER_SLOT;						\
 	RFI_FLUSH_SLOT;							\
 	rfid;								\
 	b	rfi_flush_fallback
 
 #define RFI_TO_GUEST							\
+	STF_EXIT_BARRIER_SLOT;						\
 	RFI_FLUSH_SLOT;							\
 	rfid;								\
 	b	rfi_flush_fallback
@@ -84,21 +108,25 @@
 	hrfid
 
 #define HRFI_TO_USER							\
+	STF_EXIT_BARRIER_SLOT;						\
 	RFI_FLUSH_SLOT;							\
 	hrfid;								\
 	b	hrfi_flush_fallback
 
 #define HRFI_TO_USER_OR_KERNEL						\
+	STF_EXIT_BARRIER_SLOT;						\
 	RFI_FLUSH_SLOT;							\
 	hrfid;								\
 	b	hrfi_flush_fallback
 
 #define HRFI_TO_GUEST							\
+	STF_EXIT_BARRIER_SLOT;						\
 	RFI_FLUSH_SLOT;							\
 	hrfid;								\
 	b	hrfi_flush_fallback
 
 #define HRFI_TO_UNKNOWN							\
+	STF_EXIT_BARRIER_SLOT;						\
 	RFI_FLUSH_SLOT;							\
 	hrfid;								\
 	b	hrfi_flush_fallback
@@ -226,6 +254,7 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
 #define __EXCEPTION_PROLOG_1(area, extra, vec)				\
 	OPT_SAVE_REG_TO_PACA(area+EX_PPR, r9, CPU_FTR_HAS_PPR);		\
 	OPT_SAVE_REG_TO_PACA(area+EX_CFAR, r10, CPU_FTR_CFAR);		\
+	INTERRUPT_TO_KERNEL;						\
 	SAVE_CTR(r10, area);						\
 	mfcr	r9;							\
 	extra(vec);							\
@@ -512,6 +541,12 @@ label##_relon_hv:						\
 #define _MASKABLE_EXCEPTION_PSERIES(vec, label, h, extra)		\
 	__MASKABLE_EXCEPTION_PSERIES(vec, label, h, extra)
 
+#define MASKABLE_EXCEPTION_OOL(vec, label)				\
+	.globl label##_ool;						\
+label##_ool:								\
+	EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_TEST_PR, vec);		\
+	EXCEPTION_PROLOG_PSERIES_1(label##_common, EXC_STD);
+
 #define MASKABLE_EXCEPTION_PSERIES(loc, vec, label)			\
 	. = loc;							\
 	.globl label##_pSeries;						\
diff --git a/arch/powerpc/include/asm/feature-fixups.h b/arch/powerpc/include/asm/feature-fixups.h
index 7068bafbb2d6..145a37ab2d3e 100644
- --- a/arch/powerpc/include/asm/feature-fixups.h
+++ b/arch/powerpc/include/asm/feature-fixups.h
@@ -184,6 +184,22 @@ label##3:					       	\
 	FTR_ENTRY_OFFSET label##1b-label##3b;		\
 	.popsection;
 
+#define STF_ENTRY_BARRIER_FIXUP_SECTION			\
+953:							\
+	.pushsection __stf_entry_barrier_fixup,"a";	\
+	.align 2;					\
+954:							\
+	FTR_ENTRY_OFFSET 953b-954b;			\
+	.popsection;
+
+#define STF_EXIT_BARRIER_FIXUP_SECTION			\
+955:							\
+	.pushsection __stf_exit_barrier_fixup,"a";	\
+	.align 2;					\
+956:							\
+	FTR_ENTRY_OFFSET 955b-956b;			\
+	.popsection;
+
 #define RFI_FLUSH_FIXUP_SECTION				\
 951:							\
 	.pushsection __rfi_flush_fixup,"a";		\
@@ -192,10 +208,34 @@ label##3:					       	\
 	FTR_ENTRY_OFFSET 951b-952b;			\
 	.popsection;
 
+#define NOSPEC_BARRIER_FIXUP_SECTION			\
+953:							\
+	.pushsection __barrier_nospec_fixup,"a";	\
+	.align 2;					\
+954:							\
+	FTR_ENTRY_OFFSET 953b-954b;			\
+	.popsection;
+
+#define START_BTB_FLUSH_SECTION				\
+955:							\
+
+#define END_BTB_FLUSH_SECTION				\
+956:							\
+	.pushsection __btb_flush_fixup,"a";		\
+	.align 2;					\
+957:							\
+	FTR_ENTRY_OFFSET 955b-957b;			\
+	FTR_ENTRY_OFFSET 956b-957b;			\
+	.popsection;
 
 #ifndef __ASSEMBLY__
 
+extern long stf_barrier_fallback;
+extern long __start___stf_entry_barrier_fixup, __stop___stf_entry_barrier_fixup;
+extern long __start___stf_exit_barrier_fixup, __stop___stf_exit_barrier_fixup;
 extern long __start___rfi_flush_fixup, __stop___rfi_flush_fixup;
+extern long __start___barrier_nospec_fixup, __stop___barrier_nospec_fixup;
+extern long __start__btb_flush_fixup, __stop__btb_flush_fixup;
 
 #endif
 
diff --git a/arch/powerpc/include/asm/hvcall.h b/arch/powerpc/include/asm/hvcall.h
index 449bbb87c257..b57db9d09db9 100644
- --- a/arch/powerpc/include/asm/hvcall.h
+++ b/arch/powerpc/include/asm/hvcall.h
@@ -292,10 +292,15 @@
 #define H_CPU_CHAR_L1D_FLUSH_ORI30	(1ull << 61) // IBM bit 2
 #define H_CPU_CHAR_L1D_FLUSH_TRIG2	(1ull << 60) // IBM bit 3
 #define H_CPU_CHAR_L1D_THREAD_PRIV	(1ull << 59) // IBM bit 4
+#define H_CPU_CHAR_BRANCH_HINTS_HONORED	(1ull << 58) // IBM bit 5
+#define H_CPU_CHAR_THREAD_RECONFIG_CTRL	(1ull << 57) // IBM bit 6
+#define H_CPU_CHAR_COUNT_CACHE_DISABLED	(1ull << 56) // IBM bit 7
+#define H_CPU_CHAR_BCCTR_FLUSH_ASSIST	(1ull << 54) // IBM bit 9
 
 #define H_CPU_BEHAV_FAVOUR_SECURITY	(1ull << 63) // IBM bit 0
 #define H_CPU_BEHAV_L1D_FLUSH_PR	(1ull << 62) // IBM bit 1
 #define H_CPU_BEHAV_BNDS_CHK_SPEC_BAR	(1ull << 61) // IBM bit 2
+#define H_CPU_BEHAV_FLUSH_COUNT_CACHE	(1ull << 58) // IBM bit 5
 
 #ifndef __ASSEMBLY__
 #include <linux/types.h>
diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h
index 45e2aefece16..08e5df3395fa 100644
- --- a/arch/powerpc/include/asm/paca.h
+++ b/arch/powerpc/include/asm/paca.h
@@ -199,8 +199,7 @@ struct paca_struct {
 	 */
 	u64 exrfi[13] __aligned(0x80);
 	void *rfi_flush_fallback_area;
- -	u64 l1d_flush_congruence;
- -	u64 l1d_flush_sets;
+	u64 l1d_flush_size;
 #endif
 };
 
diff --git a/arch/powerpc/include/asm/ppc-opcode.h b/arch/powerpc/include/asm/ppc-opcode.h
index 7ab04fc59e24..faf1bb045dee 100644
- --- a/arch/powerpc/include/asm/ppc-opcode.h
+++ b/arch/powerpc/include/asm/ppc-opcode.h
@@ -147,6 +147,7 @@
 #define PPC_INST_LWSYNC			0x7c2004ac
 #define PPC_INST_SYNC			0x7c0004ac
 #define PPC_INST_SYNC_MASK		0xfc0007fe
+#define PPC_INST_ISYNC			0x4c00012c
 #define PPC_INST_LXVD2X			0x7c000698
 #define PPC_INST_MCRXR			0x7c000400
 #define PPC_INST_MCRXR_MASK		0xfc0007fe
diff --git a/arch/powerpc/include/asm/ppc_asm.h b/arch/powerpc/include/asm/ppc_asm.h
index 160bb2311bbb..d219816b3e19 100644
- --- a/arch/powerpc/include/asm/ppc_asm.h
+++ b/arch/powerpc/include/asm/ppc_asm.h
@@ -821,4 +821,15 @@ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,945)
 	.long 0x2400004c  /* rfid				*/
 #endif /* !CONFIG_PPC_BOOK3E */
 #endif /*  __ASSEMBLY__ */
+
+#ifdef CONFIG_PPC_FSL_BOOK3E
+#define BTB_FLUSH(reg)			\
+	lis reg,BUCSR_INIT@h;		\
+	ori reg,reg,BUCSR_INIT@l;	\
+	mtspr SPRN_BUCSR,reg;		\
+	isync;
+#else
+#define BTB_FLUSH(reg)
+#endif /* CONFIG_PPC_FSL_BOOK3E */
+
 #endif /* _ASM_POWERPC_PPC_ASM_H */
diff --git a/arch/powerpc/include/asm/security_features.h b/arch/powerpc/include/asm/security_features.h
new file mode 100644
index 000000000000..759597bf0fd8
- --- /dev/null
+++ b/arch/powerpc/include/asm/security_features.h
@@ -0,0 +1,92 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/*
+ * Security related feature bit definitions.
+ *
+ * Copyright 2018, Michael Ellerman, IBM Corporation.
+ */
+
+#ifndef _ASM_POWERPC_SECURITY_FEATURES_H
+#define _ASM_POWERPC_SECURITY_FEATURES_H
+
+
+extern unsigned long powerpc_security_features;
+extern bool rfi_flush;
+
+/* These are bit flags */
+enum stf_barrier_type {
+	STF_BARRIER_NONE	= 0x1,
+	STF_BARRIER_FALLBACK	= 0x2,
+	STF_BARRIER_EIEIO	= 0x4,
+	STF_BARRIER_SYNC_ORI	= 0x8,
+};
+
+void setup_stf_barrier(void);
+void do_stf_barrier_fixups(enum stf_barrier_type types);
+void setup_count_cache_flush(void);
+
+static inline void security_ftr_set(unsigned long feature)
+{
+	powerpc_security_features |= feature;
+}
+
+static inline void security_ftr_clear(unsigned long feature)
+{
+	powerpc_security_features &= ~feature;
+}
+
+static inline bool security_ftr_enabled(unsigned long feature)
+{
+	return !!(powerpc_security_features & feature);
+}
+
+
+// Features indicating support for Spectre/Meltdown mitigations
+
+// The L1-D cache can be flushed with ori r30,r30,0
+#define SEC_FTR_L1D_FLUSH_ORI30		0x0000000000000001ull
+
+// The L1-D cache can be flushed with mtspr 882,r0 (aka SPRN_TRIG2)
+#define SEC_FTR_L1D_FLUSH_TRIG2		0x0000000000000002ull
+
+// ori r31,r31,0 acts as a speculation barrier
+#define SEC_FTR_SPEC_BAR_ORI31		0x0000000000000004ull
+
+// Speculation past bctr is disabled
+#define SEC_FTR_BCCTRL_SERIALISED	0x0000000000000008ull
+
+// Entries in L1-D are private to a SMT thread
+#define SEC_FTR_L1D_THREAD_PRIV		0x0000000000000010ull
+
+// Indirect branch prediction cache disabled
+#define SEC_FTR_COUNT_CACHE_DISABLED	0x0000000000000020ull
+
+// bcctr 2,0,0 triggers a hardware assisted count cache flush
+#define SEC_FTR_BCCTR_FLUSH_ASSIST	0x0000000000000800ull
+
+
+// Features indicating need for Spectre/Meltdown mitigations
+
+// The L1-D cache should be flushed on MSR[HV] 1->0 transition (hypervisor to guest)
+#define SEC_FTR_L1D_FLUSH_HV		0x0000000000000040ull
+
+// The L1-D cache should be flushed on MSR[PR] 0->1 transition (kernel to userspace)
+#define SEC_FTR_L1D_FLUSH_PR		0x0000000000000080ull
+
+// A speculation barrier should be used for bounds checks (Spectre variant 1)
+#define SEC_FTR_BNDS_CHK_SPEC_BAR	0x0000000000000100ull
+
+// Firmware configuration indicates user favours security over performance
+#define SEC_FTR_FAVOUR_SECURITY		0x0000000000000200ull
+
+// Software required to flush count cache on context switch
+#define SEC_FTR_FLUSH_COUNT_CACHE	0x0000000000000400ull
+
+
+// Features enabled by default
+#define SEC_FTR_DEFAULT \
+	(SEC_FTR_L1D_FLUSH_HV | \
+	 SEC_FTR_L1D_FLUSH_PR | \
+	 SEC_FTR_BNDS_CHK_SPEC_BAR | \
+	 SEC_FTR_FAVOUR_SECURITY)
+
+#endif /* _ASM_POWERPC_SECURITY_FEATURES_H */
diff --git a/arch/powerpc/include/asm/setup.h b/arch/powerpc/include/asm/setup.h
index 7916b56f2e60..d299479c770b 100644
- --- a/arch/powerpc/include/asm/setup.h
+++ b/arch/powerpc/include/asm/setup.h
@@ -8,6 +8,7 @@ extern void ppc_printk_progress(char *s, unsigned short hex);
 
 extern unsigned int rtas_data;
 extern unsigned long long memory_limit;
+extern bool init_mem_is_free;
 extern unsigned long klimit;
 extern void *zalloc_maybe_bootmem(size_t size, gfp_t mask);
 
@@ -36,8 +37,28 @@ enum l1d_flush_type {
 	L1D_FLUSH_MTTRIG	= 0x8,
 };
 
- -void __init setup_rfi_flush(enum l1d_flush_type, bool enable);
+void setup_rfi_flush(enum l1d_flush_type, bool enable);
 void do_rfi_flush_fixups(enum l1d_flush_type types);
+#ifdef CONFIG_PPC_BARRIER_NOSPEC
+void setup_barrier_nospec(void);
+#else
+static inline void setup_barrier_nospec(void) { }
+#endif
+void do_barrier_nospec_fixups(bool enable);
+extern bool barrier_nospec_enabled;
+
+#ifdef CONFIG_PPC_BARRIER_NOSPEC
+void do_barrier_nospec_fixups_range(bool enable, void *start, void *end);
+#else
+static inline void do_barrier_nospec_fixups_range(bool enable, void *start, void *end) { }
+#endif
+
+#ifdef CONFIG_PPC_FSL_BOOK3E
+void setup_spectre_v2(void);
+#else
+static inline void setup_spectre_v2(void) { }
+#endif
+void do_btb_flush_fixups(void);
 
 #endif /* !__ASSEMBLY__ */
 
diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index 05f1389228d2..e51ce5a0e221 100644
- --- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -269,6 +269,7 @@ do {								\
 	__chk_user_ptr(ptr);					\
 	if (!is_kernel_addr((unsigned long)__gu_addr))		\
 		might_fault();					\
+	barrier_nospec();					\
 	__get_user_size(__gu_val, __gu_addr, (size), __gu_err);	\
 	(x) = (__typeof__(*(ptr)))__gu_val;			\
 	__gu_err;						\
@@ -283,6 +284,7 @@ do {								\
 	__chk_user_ptr(ptr);					\
 	if (!is_kernel_addr((unsigned long)__gu_addr))		\
 		might_fault();					\
+	barrier_nospec();					\
 	__get_user_size(__gu_val, __gu_addr, (size), __gu_err);	\
 	(x) = (__force __typeof__(*(ptr)))__gu_val;			\
 	__gu_err;						\
@@ -295,8 +297,10 @@ do {								\
 	unsigned long  __gu_val = 0;					\
 	__typeof__(*(ptr)) __user *__gu_addr = (ptr);		\
 	might_fault();							\
- -	if (access_ok(VERIFY_READ, __gu_addr, (size)))			\
+	if (access_ok(VERIFY_READ, __gu_addr, (size))) {		\
+		barrier_nospec();					\
 		__get_user_size(__gu_val, __gu_addr, (size), __gu_err);	\
+	}								\
 	(x) = (__force __typeof__(*(ptr)))__gu_val;				\
 	__gu_err;							\
 })
@@ -307,6 +311,7 @@ do {								\
 	unsigned long __gu_val;					\
 	__typeof__(*(ptr)) __user *__gu_addr = (ptr);	\
 	__chk_user_ptr(ptr);					\
+	barrier_nospec();					\
 	__get_user_size(__gu_val, __gu_addr, (size), __gu_err);	\
 	(x) = (__force __typeof__(*(ptr)))__gu_val;			\
 	__gu_err;						\
@@ -323,8 +328,10 @@ extern unsigned long __copy_tofrom_user(void __user *to,
 static inline unsigned long copy_from_user(void *to,
 		const void __user *from, unsigned long n)
 {
- -	if (likely(access_ok(VERIFY_READ, from, n)))
+	if (likely(access_ok(VERIFY_READ, from, n))) {
+		barrier_nospec();
 		return __copy_tofrom_user((__force void __user *)to, from, n);
+	}
 	memset(to, 0, n);
 	return n;
 }
@@ -359,21 +366,27 @@ static inline unsigned long __copy_from_user_inatomic(void *to,
 
 		switch (n) {
 		case 1:
+			barrier_nospec();
 			__get_user_size(*(u8 *)to, from, 1, ret);
 			break;
 		case 2:
+			barrier_nospec();
 			__get_user_size(*(u16 *)to, from, 2, ret);
 			break;
 		case 4:
+			barrier_nospec();
 			__get_user_size(*(u32 *)to, from, 4, ret);
 			break;
 		case 8:
+			barrier_nospec();
 			__get_user_size(*(u64 *)to, from, 8, ret);
 			break;
 		}
 		if (ret == 0)
 			return 0;
 	}
+
+	barrier_nospec();
 	return __copy_tofrom_user((__force void __user *)to, from, n);
 }
 
@@ -400,6 +413,7 @@ static inline unsigned long __copy_to_user_inatomic(void __user *to,
 		if (ret == 0)
 			return 0;
 	}
+
 	return __copy_tofrom_user(to, (__force const void __user *)from, n);
 }
 
diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index ba336930d448..22ed3c32fca8 100644
- --- a/arch/powerpc/kernel/Makefile
+++ b/arch/powerpc/kernel/Makefile
@@ -44,6 +44,7 @@ obj-$(CONFIG_PPC_BOOK3S_64)	+= cpu_setup_power.o
 obj-$(CONFIG_PPC_BOOK3S_64)	+= mce.o mce_power.o
 obj64-$(CONFIG_RELOCATABLE)	+= reloc_64.o
 obj-$(CONFIG_PPC_BOOK3E_64)	+= exceptions-64e.o idle_book3e.o
+obj-$(CONFIG_PPC_BARRIER_NOSPEC) += security.o
 obj-$(CONFIG_PPC64)		+= vdso64/
 obj-$(CONFIG_ALTIVEC)		+= vecemu.o
 obj-$(CONFIG_PPC_970_NAP)	+= idle_power4.o
diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index d92705e3a0c1..de3c29c51503 100644
- --- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -245,8 +245,7 @@ int main(void)
 	DEFINE(PACA_IN_MCE, offsetof(struct paca_struct, in_mce));
 	DEFINE(PACA_RFI_FLUSH_FALLBACK_AREA, offsetof(struct paca_struct, rfi_flush_fallback_area));
 	DEFINE(PACA_EXRFI, offsetof(struct paca_struct, exrfi));
- -	DEFINE(PACA_L1D_FLUSH_CONGRUENCE, offsetof(struct paca_struct, l1d_flush_congruence));
- -	DEFINE(PACA_L1D_FLUSH_SETS, offsetof(struct paca_struct, l1d_flush_sets));
+	DEFINE(PACA_L1D_FLUSH_SIZE, offsetof(struct paca_struct, l1d_flush_size));
 #endif
 	DEFINE(PACAHWCPUID, offsetof(struct paca_struct, hw_cpu_id));
 	DEFINE(PACAKEXECSTATE, offsetof(struct paca_struct, kexec_state));
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index 59be96917369..6d36a4fb4acf 100644
- --- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -25,6 +25,7 @@
 #include <asm/page.h>
 #include <asm/mmu.h>
 #include <asm/thread_info.h>
+#include <asm/code-patching-asm.h>
 #include <asm/ppc_asm.h>
 #include <asm/asm-offsets.h>
 #include <asm/cputable.h>
@@ -36,6 +37,7 @@
 #include <asm/hw_irq.h>
 #include <asm/context_tracking.h>
 #include <asm/tm.h>
+#include <asm/barrier.h>
 #ifdef CONFIG_PPC_BOOK3S
 #include <asm/exception-64s.h>
 #else
@@ -75,6 +77,11 @@ END_FTR_SECTION_IFSET(CPU_FTR_TM)
 	std	r0,GPR0(r1)
 	std	r10,GPR1(r1)
 	beq	2f			/* if from kernel mode */
+#ifdef CONFIG_PPC_FSL_BOOK3E
+START_BTB_FLUSH_SECTION
+	BTB_FLUSH(r10)
+END_BTB_FLUSH_SECTION
+#endif
 	ACCOUNT_CPU_USER_ENTRY(r10, r11)
 2:	std	r2,GPR2(r1)
 	std	r3,GPR3(r1)
@@ -177,6 +184,15 @@ system_call:			/* label this so stack traces look sane */
 	clrldi	r8,r8,32
 15:
 	slwi	r0,r0,4
+
+	barrier_nospec_asm
+	/*
+	 * Prevent the load of the handler below (based on the user-passed
+	 * system call number) being speculatively executed until the test
+	 * against NR_syscalls and branch to .Lsyscall_enosys above has
+	 * committed.
+	 */
+
 	ldx	r12,r11,r0	/* Fetch system call handler [ptr] */
 	mtctr   r12
 	bctrl			/* Call handler */
@@ -440,6 +456,57 @@ _GLOBAL(ret_from_kernel_thread)
 	li	r3,0
 	b	.Lsyscall_exit
 
+#ifdef CONFIG_PPC_BOOK3S_64
+
+#define FLUSH_COUNT_CACHE	\
+1:	nop;			\
+	patch_site 1b, patch__call_flush_count_cache
+
+
+#define BCCTR_FLUSH	.long 0x4c400420
+
+.macro nops number
+	.rept \number
+	nop
+	.endr
+.endm
+
+.balign 32
+.global flush_count_cache
+flush_count_cache:
+	/* Save LR into r9 */
+	mflr	r9
+
+	.rept 64
+	bl	.+4
+	.endr
+	b	1f
+	nops	6
+
+	.balign 32
+	/* Restore LR */
+1:	mtlr	r9
+	li	r9,0x7fff
+	mtctr	r9
+
+	BCCTR_FLUSH
+
+2:	nop
+	patch_site 2b, patch__flush_count_cache_return
+
+	nops	3
+
+	.rept 278
+	.balign 32
+	BCCTR_FLUSH
+	nops	7
+	.endr
+
+	blr
+#else
+#define FLUSH_COUNT_CACHE
+#endif /* CONFIG_PPC_BOOK3S_64 */
+
 /*
  * This routine switches between two different tasks.  The process
  * state of one is saved on its kernel stack.  Then the state
@@ -503,6 +570,8 @@ BEGIN_FTR_SECTION
 END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
 #endif
 
+	FLUSH_COUNT_CACHE
+
 #ifdef CONFIG_SMP
 	/* We need a sync somewhere here to make sure that if the
 	 * previous task gets rescheduled on another CPU, it sees all
diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
index 5cc93f0b52ca..48ec841ea1bf 100644
- --- a/arch/powerpc/kernel/exceptions-64e.S
+++ b/arch/powerpc/kernel/exceptions-64e.S
@@ -295,7 +295,8 @@ END_FTR_SECTION_IFSET(CPU_FTR_EMB_HV)
 	andi.	r10,r11,MSR_PR;		/* save stack pointer */	    \
 	beq	1f;			/* branch around if supervisor */   \
 	ld	r1,PACAKSAVE(r13);	/* get kernel stack coming from usr */\
- -1:	cmpdi	cr1,r1,0;		/* check if SP makes sense */	    \
+1:	type##_BTB_FLUSH		\
+	cmpdi	cr1,r1,0;		/* check if SP makes sense */	    \
 	bge-	cr1,exc_##n##_bad_stack;/* bad stack (TODO: out of line) */ \
 	mfspr	r10,SPRN_##type##_SRR0;	/* read SRR0 before touching stack */
 
@@ -327,6 +328,30 @@ END_FTR_SECTION_IFSET(CPU_FTR_EMB_HV)
 #define SPRN_MC_SRR0	SPRN_MCSRR0
 #define SPRN_MC_SRR1	SPRN_MCSRR1
 
+#ifdef CONFIG_PPC_FSL_BOOK3E
+#define GEN_BTB_FLUSH			\
+	START_BTB_FLUSH_SECTION		\
+		beq 1f;			\
+		BTB_FLUSH(r10)			\
+		1:		\
+	END_BTB_FLUSH_SECTION
+
+#define CRIT_BTB_FLUSH			\
+	START_BTB_FLUSH_SECTION		\
+		BTB_FLUSH(r10)		\
+	END_BTB_FLUSH_SECTION
+
+#define DBG_BTB_FLUSH CRIT_BTB_FLUSH
+#define MC_BTB_FLUSH CRIT_BTB_FLUSH
+#define GDBELL_BTB_FLUSH GEN_BTB_FLUSH
+#else
+#define GEN_BTB_FLUSH
+#define CRIT_BTB_FLUSH
+#define DBG_BTB_FLUSH
+#define MC_BTB_FLUSH
+#define GDBELL_BTB_FLUSH
+#endif
+
 #define NORMAL_EXCEPTION_PROLOG(n, intnum, addition)			    \
 	EXCEPTION_PROLOG(n, intnum, GEN, addition##_GEN(n))
 
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index 938a30fef031..10e7cec9553d 100644
- --- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -36,6 +36,7 @@ BEGIN_FTR_SECTION						\
 END_FTR_SECTION_IFSET(CPU_FTR_REAL_LE)				\
 	mr	r9,r13 ;					\
 	GET_PACA(r13) ;						\
+	INTERRUPT_TO_KERNEL ;					\
 	mfspr	r11,SPRN_SRR0 ;					\
 0:
 
@@ -292,7 +293,9 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
 	. = 0x900
 	.globl decrementer_pSeries
 decrementer_pSeries:
- -	_MASKABLE_EXCEPTION_PSERIES(0x900, decrementer, EXC_STD, SOFTEN_TEST_PR)
+	SET_SCRATCH0(r13)
+	EXCEPTION_PROLOG_0(PACA_EXGEN)
+	b	decrementer_ool
 
 	STD_EXCEPTION_HV(0x980, 0x982, hdecrementer)
 
@@ -319,6 +322,7 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
 	OPT_GET_SPR(r9, SPRN_PPR, CPU_FTR_HAS_PPR);
 	HMT_MEDIUM;
 	std	r10,PACA_EXGEN+EX_R10(r13)
+	INTERRUPT_TO_KERNEL
 	OPT_SAVE_REG_TO_PACA(PACA_EXGEN+EX_PPR, r9, CPU_FTR_HAS_PPR);
 	mfcr	r9
 	KVMTEST(0xc00)
@@ -607,6 +611,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
 
 	.align	7
 	/* moved from 0xe00 */
+	MASKABLE_EXCEPTION_OOL(0x900, decrementer)
 	STD_EXCEPTION_HV_OOL(0xe02, h_data_storage)
 	KVM_HANDLER_SKIP(PACA_EXGEN, EXC_HV, 0xe02)
 	STD_EXCEPTION_HV_OOL(0xe22, h_instr_storage)
@@ -1564,6 +1569,21 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
 	blr
 #endif
 
+	.balign 16
+	.globl stf_barrier_fallback
+stf_barrier_fallback:
+	std	r9,PACA_EXRFI+EX_R9(r13)
+	std	r10,PACA_EXRFI+EX_R10(r13)
+	sync
+	ld	r9,PACA_EXRFI+EX_R9(r13)
+	ld	r10,PACA_EXRFI+EX_R10(r13)
+	ori	31,31,0
+	.rept 14
+	b	1f
+1:
+	.endr
+	blr
+
 	.globl rfi_flush_fallback
 rfi_flush_fallback:
 	SET_SCRATCH0(r13);
@@ -1571,39 +1591,37 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
 	std	r9,PACA_EXRFI+EX_R9(r13)
 	std	r10,PACA_EXRFI+EX_R10(r13)
 	std	r11,PACA_EXRFI+EX_R11(r13)
- -	std	r12,PACA_EXRFI+EX_R12(r13)
- -	std	r8,PACA_EXRFI+EX_R13(r13)
 	mfctr	r9
 	ld	r10,PACA_RFI_FLUSH_FALLBACK_AREA(r13)
- -	ld	r11,PACA_L1D_FLUSH_SETS(r13)
- -	ld	r12,PACA_L1D_FLUSH_CONGRUENCE(r13)
- -	/*
- -	 * The load adresses are at staggered offsets within cachelines,
- -	 * which suits some pipelines better (on others it should not
- -	 * hurt).
- -	 */
- -	addi	r12,r12,8
+	ld	r11,PACA_L1D_FLUSH_SIZE(r13)
+	srdi	r11,r11,(7 + 3) /* 128 byte lines, unrolled 8x */
 	mtctr	r11
 	DCBT_STOP_ALL_STREAM_IDS(r11) /* Stop prefetch streams */
 
 	/* order ld/st prior to dcbt stop all streams with flushing */
 	sync
- -1:	li	r8,0
- -	.rept	8 /* 8-way set associative */
- -	ldx	r11,r10,r8
- -	add	r8,r8,r12
- -	xor	r11,r11,r11	// Ensure r11 is 0 even if fallback area is not
- -	add	r8,r8,r11	// Add 0, this creates a dependency on the ldx
- -	.endr
- -	addi	r10,r10,128 /* 128 byte cache line */
+
+	/*
+	 * The load addresses are at staggered offsets within cachelines,
+	 * which suits some pipelines better (on others it should not
+	 * hurt).
+	 */
+1:
+	ld	r11,(0x80 + 8)*0(r10)
+	ld	r11,(0x80 + 8)*1(r10)
+	ld	r11,(0x80 + 8)*2(r10)
+	ld	r11,(0x80 + 8)*3(r10)
+	ld	r11,(0x80 + 8)*4(r10)
+	ld	r11,(0x80 + 8)*5(r10)
+	ld	r11,(0x80 + 8)*6(r10)
+	ld	r11,(0x80 + 8)*7(r10)
+	addi	r10,r10,0x80*8
 	bdnz	1b
 
 	mtctr	r9
 	ld	r9,PACA_EXRFI+EX_R9(r13)
 	ld	r10,PACA_EXRFI+EX_R10(r13)
 	ld	r11,PACA_EXRFI+EX_R11(r13)
- -	ld	r12,PACA_EXRFI+EX_R12(r13)
- -	ld	r8,PACA_EXRFI+EX_R13(r13)
 	GET_SCRATCH0(r13);
 	rfid
 
@@ -1614,39 +1632,37 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
 	std	r9,PACA_EXRFI+EX_R9(r13)
 	std	r10,PACA_EXRFI+EX_R10(r13)
 	std	r11,PACA_EXRFI+EX_R11(r13)
- -	std	r12,PACA_EXRFI+EX_R12(r13)
- -	std	r8,PACA_EXRFI+EX_R13(r13)
 	mfctr	r9
 	ld	r10,PACA_RFI_FLUSH_FALLBACK_AREA(r13)
- -	ld	r11,PACA_L1D_FLUSH_SETS(r13)
- -	ld	r12,PACA_L1D_FLUSH_CONGRUENCE(r13)
- -	/*
- -	 * The load adresses are at staggered offsets within cachelines,
- -	 * which suits some pipelines better (on others it should not
- -	 * hurt).
- -	 */
- -	addi	r12,r12,8
+	ld	r11,PACA_L1D_FLUSH_SIZE(r13)
+	srdi	r11,r11,(7 + 3) /* 128 byte lines, unrolled 8x */
 	mtctr	r11
 	DCBT_STOP_ALL_STREAM_IDS(r11) /* Stop prefetch streams */
 
 	/* order ld/st prior to dcbt stop all streams with flushing */
 	sync
- -1:	li	r8,0
- -	.rept	8 /* 8-way set associative */
- -	ldx	r11,r10,r8
- -	add	r8,r8,r12
- -	xor	r11,r11,r11	// Ensure r11 is 0 even if fallback area is not
- -	add	r8,r8,r11	// Add 0, this creates a dependency on the ldx
- -	.endr
- -	addi	r10,r10,128 /* 128 byte cache line */
+
+	/*
+	 * The load addresses are at staggered offsets within cachelines,
+	 * which suits some pipelines better (on others it should not
+	 * hurt).
+	 */
+1:
+	ld	r11,(0x80 + 8)*0(r10)
+	ld	r11,(0x80 + 8)*1(r10)
+	ld	r11,(0x80 + 8)*2(r10)
+	ld	r11,(0x80 + 8)*3(r10)
+	ld	r11,(0x80 + 8)*4(r10)
+	ld	r11,(0x80 + 8)*5(r10)
+	ld	r11,(0x80 + 8)*6(r10)
+	ld	r11,(0x80 + 8)*7(r10)
+	addi	r10,r10,0x80*8
 	bdnz	1b
 
 	mtctr	r9
 	ld	r9,PACA_EXRFI+EX_R9(r13)
 	ld	r10,PACA_EXRFI+EX_R10(r13)
 	ld	r11,PACA_EXRFI+EX_R11(r13)
- -	ld	r12,PACA_EXRFI+EX_R12(r13)
- -	ld	r8,PACA_EXRFI+EX_R13(r13)
 	GET_SCRATCH0(r13);
 	hrfid
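The rewritten fallback flush above replaces the per-set/per-way loop with a flat walk: `srdi r11,r11,(7 + 3)` turns the flush size into a loop count (7 bits for 128-byte lines, 3 bits for the 8x unroll), and each iteration advances the pointer by `0x80*8` bytes. A small sketch of that arithmetic, under the assumption that `l1d_flush_size` is a multiple of 1024 as the asm requires (function names are mine, not kernel code):

```c
#include <assert.h>
#include <stdint.h>

/* Loop count used by the patched fallback flush:
 * l1d_flush_size >> (7 + 3), i.e. size / (128-byte line * 8x unroll). */
static uint64_t flush_iterations(uint64_t l1d_flush_size)
{
	return l1d_flush_size >> (7 + 3);
}

/* Each iteration does eight ld instructions and advances r10 by
 * 0x80*8 bytes, so the loop covers exactly l1d_flush_size bytes. */
static uint64_t bytes_walked(uint64_t l1d_flush_size)
{
	return flush_iterations(l1d_flush_size) * 0x80 * 8;
}
```

For a 32 KB L1D this gives 32 iterations covering 32768 bytes, matching the `l1d_flush_size` now stored per-paca in place of the old congruence/sets pair.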
 
diff --git a/arch/powerpc/kernel/module.c b/arch/powerpc/kernel/module.c
index 9547381b631a..ff009be97a42 100644
- --- a/arch/powerpc/kernel/module.c
+++ b/arch/powerpc/kernel/module.c
@@ -67,7 +67,15 @@ int module_finalize(const Elf_Ehdr *hdr,
 		do_feature_fixups(powerpc_firmware_features,
 				  (void *)sect->sh_addr,
 				  (void *)sect->sh_addr + sect->sh_size);
- -#endif
+#endif /* CONFIG_PPC64 */
+
+#ifdef CONFIG_PPC_BARRIER_NOSPEC
+	sect = find_section(hdr, sechdrs, "__spec_barrier_fixup");
+	if (sect != NULL)
+		do_barrier_nospec_fixups_range(barrier_nospec_enabled,
+				  (void *)sect->sh_addr,
+				  (void *)sect->sh_addr + sect->sh_size);
+#endif /* CONFIG_PPC_BARRIER_NOSPEC */
 
 	sect = find_section(hdr, sechdrs, "__lwsync_fixup");
 	if (sect != NULL)
diff --git a/arch/powerpc/kernel/security.c b/arch/powerpc/kernel/security.c
new file mode 100644
index 000000000000..58f0602a92b9
- --- /dev/null
+++ b/arch/powerpc/kernel/security.c
@@ -0,0 +1,433 @@
+// SPDX-License-Identifier: GPL-2.0+
+//
+// Security related flags and so on.
+//
+// Copyright 2018, Michael Ellerman, IBM Corporation.
+
+#include <linux/kernel.h>
+#include <linux/debugfs.h>
+#include <linux/device.h>
+#include <linux/seq_buf.h>
+
+#include <asm/debug.h>
+#include <asm/asm-prototypes.h>
+#include <asm/code-patching.h>
+#include <asm/security_features.h>
+#include <asm/setup.h>
+
+
+unsigned long powerpc_security_features __read_mostly = SEC_FTR_DEFAULT;
+
+enum count_cache_flush_type {
+	COUNT_CACHE_FLUSH_NONE	= 0x1,
+	COUNT_CACHE_FLUSH_SW	= 0x2,
+	COUNT_CACHE_FLUSH_HW	= 0x4,
+};
+static enum count_cache_flush_type count_cache_flush_type = COUNT_CACHE_FLUSH_NONE;
+
+bool barrier_nospec_enabled;
+static bool no_nospec;
+static bool btb_flush_enabled;
+#ifdef CONFIG_PPC_FSL_BOOK3E
+static bool no_spectrev2;
+#endif
+
+static void enable_barrier_nospec(bool enable)
+{
+	barrier_nospec_enabled = enable;
+	do_barrier_nospec_fixups(enable);
+}
+
+void setup_barrier_nospec(void)
+{
+	bool enable;
+
+	/*
+	 * It would make sense to check SEC_FTR_SPEC_BAR_ORI31 below as well.
+	 * But there's a good reason not to. The two flags we check below are
+	 * both enabled by default in the kernel, so if the hcall is not
+	 * functional they will be enabled.
+	 * On a system where the host firmware has been updated (so the ori
+	 * functions as a barrier), but on which the hypervisor (KVM/Qemu) has
+	 * not been updated, we would like to enable the barrier. Dropping the
+	 * check for SEC_FTR_SPEC_BAR_ORI31 achieves that. The only downside is
+	 * we potentially enable the barrier on systems where the host firmware
+	 * is not updated, but that's harmless as it's a no-op.
+	 */
+	enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) &&
+		 security_ftr_enabled(SEC_FTR_BNDS_CHK_SPEC_BAR);
+
+	if (!no_nospec)
+		enable_barrier_nospec(enable);
+}
+
+static int __init handle_nospectre_v1(char *p)
+{
+	no_nospec = true;
+
+	return 0;
+}
+early_param("nospectre_v1", handle_nospectre_v1);
+
+#ifdef CONFIG_DEBUG_FS
+static int barrier_nospec_set(void *data, u64 val)
+{
+	switch (val) {
+	case 0:
+	case 1:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	if (!!val == !!barrier_nospec_enabled)
+		return 0;
+
+	enable_barrier_nospec(!!val);
+
+	return 0;
+}
+
+static int barrier_nospec_get(void *data, u64 *val)
+{
+	*val = barrier_nospec_enabled ? 1 : 0;
+	return 0;
+}
+
+DEFINE_SIMPLE_ATTRIBUTE(fops_barrier_nospec,
+			barrier_nospec_get, barrier_nospec_set, "%llu\n");
+
+static __init int barrier_nospec_debugfs_init(void)
+{
+	debugfs_create_file("barrier_nospec", 0600, powerpc_debugfs_root, NULL,
+			    &fops_barrier_nospec);
+	return 0;
+}
+device_initcall(barrier_nospec_debugfs_init);
+#endif /* CONFIG_DEBUG_FS */
+
+#ifdef CONFIG_PPC_FSL_BOOK3E
+static int __init handle_nospectre_v2(char *p)
+{
+	no_spectrev2 = true;
+
+	return 0;
+}
+early_param("nospectre_v2", handle_nospectre_v2);
+void setup_spectre_v2(void)
+{
+	if (no_spectrev2)
+		do_btb_flush_fixups();
+	else
+		btb_flush_enabled = true;
+}
+#endif /* CONFIG_PPC_FSL_BOOK3E */
+
+#ifdef CONFIG_PPC_BOOK3S_64
+ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr, char *buf)
+{
+	bool thread_priv;
+
+	thread_priv = security_ftr_enabled(SEC_FTR_L1D_THREAD_PRIV);
+
+	if (rfi_flush || thread_priv) {
+		struct seq_buf s;
+		seq_buf_init(&s, buf, PAGE_SIZE - 1);
+
+		seq_buf_printf(&s, "Mitigation: ");
+
+		if (rfi_flush)
+			seq_buf_printf(&s, "RFI Flush");
+
+		if (rfi_flush && thread_priv)
+			seq_buf_printf(&s, ", ");
+
+		if (thread_priv)
+			seq_buf_printf(&s, "L1D private per thread");
+
+		seq_buf_printf(&s, "\n");
+
+		return s.len;
+	}
+
+	if (!security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV) &&
+	    !security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR))
+		return sprintf(buf, "Not affected\n");
+
+	return sprintf(buf, "Vulnerable\n");
+}
+#endif
+
+ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr, char *buf)
+{
+	struct seq_buf s;
+
+	seq_buf_init(&s, buf, PAGE_SIZE - 1);
+
+	if (security_ftr_enabled(SEC_FTR_BNDS_CHK_SPEC_BAR)) {
+		if (barrier_nospec_enabled)
+			seq_buf_printf(&s, "Mitigation: __user pointer sanitization");
+		else
+			seq_buf_printf(&s, "Vulnerable");
+
+		if (security_ftr_enabled(SEC_FTR_SPEC_BAR_ORI31))
+			seq_buf_printf(&s, ", ori31 speculation barrier enabled");
+
+		seq_buf_printf(&s, "\n");
+	} else
+		seq_buf_printf(&s, "Not affected\n");
+
+	return s.len;
+}
+
+ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr, char *buf)
+{
+	struct seq_buf s;
+	bool bcs, ccd;
+
+	seq_buf_init(&s, buf, PAGE_SIZE - 1);
+
+	bcs = security_ftr_enabled(SEC_FTR_BCCTRL_SERIALISED);
+	ccd = security_ftr_enabled(SEC_FTR_COUNT_CACHE_DISABLED);
+
+	if (bcs || ccd) {
+		seq_buf_printf(&s, "Mitigation: ");
+
+		if (bcs)
+			seq_buf_printf(&s, "Indirect branch serialisation (kernel only)");
+
+		if (bcs && ccd)
+			seq_buf_printf(&s, ", ");
+
+		if (ccd)
+			seq_buf_printf(&s, "Indirect branch cache disabled");
+	} else if (count_cache_flush_type != COUNT_CACHE_FLUSH_NONE) {
+		seq_buf_printf(&s, "Mitigation: Software count cache flush");
+
+		if (count_cache_flush_type == COUNT_CACHE_FLUSH_HW)
+			seq_buf_printf(&s, " (hardware accelerated)");
+	} else if (btb_flush_enabled) {
+		seq_buf_printf(&s, "Mitigation: Branch predictor state flush");
+	} else {
+		seq_buf_printf(&s, "Vulnerable");
+	}
+
+	seq_buf_printf(&s, "\n");
+
+	return s.len;
+}
+
+#ifdef CONFIG_PPC_BOOK3S_64
+/*
+ * Store-forwarding barrier support.
+ */
+
+static enum stf_barrier_type stf_enabled_flush_types;
+static bool no_stf_barrier;
+bool stf_barrier;
+
+static int __init handle_no_stf_barrier(char *p)
+{
+	pr_info("stf-barrier: disabled on command line.");
+	no_stf_barrier = true;
+	return 0;
+}
+
+early_param("no_stf_barrier", handle_no_stf_barrier);
+
+/* This is the generic flag used by other architectures */
+static int __init handle_ssbd(char *p)
+{
+	if (!p || strncmp(p, "auto", 5) == 0 || strncmp(p, "on", 2) == 0 ) {
+		/* Until firmware tells us, we have the barrier with auto */
+		return 0;
+	} else if (strncmp(p, "off", 3) == 0) {
+		handle_no_stf_barrier(NULL);
+		return 0;
+	} else
+		return 1;
+
+	return 0;
+}
+early_param("spec_store_bypass_disable", handle_ssbd);
+
+/* This is the generic flag used by other architectures */
+static int __init handle_no_ssbd(char *p)
+{
+	handle_no_stf_barrier(NULL);
+	return 0;
+}
+early_param("nospec_store_bypass_disable", handle_no_ssbd);
+
+static void stf_barrier_enable(bool enable)
+{
+	if (enable)
+		do_stf_barrier_fixups(stf_enabled_flush_types);
+	else
+		do_stf_barrier_fixups(STF_BARRIER_NONE);
+
+	stf_barrier = enable;
+}
+
+void setup_stf_barrier(void)
+{
+	enum stf_barrier_type type;
+	bool enable, hv;
+
+	hv = cpu_has_feature(CPU_FTR_HVMODE);
+
+	/* Default to fallback in case fw-features are not available */
+	if (cpu_has_feature(CPU_FTR_ARCH_207S))
+		type = STF_BARRIER_SYNC_ORI;
+	else if (cpu_has_feature(CPU_FTR_ARCH_206))
+		type = STF_BARRIER_FALLBACK;
+	else
+		type = STF_BARRIER_NONE;
+
+	enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) &&
+		(security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR) ||
+		 (security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV) && hv));
+
+	if (type == STF_BARRIER_FALLBACK) {
+		pr_info("stf-barrier: fallback barrier available\n");
+	} else if (type == STF_BARRIER_SYNC_ORI) {
+		pr_info("stf-barrier: hwsync barrier available\n");
+	} else if (type == STF_BARRIER_EIEIO) {
+		pr_info("stf-barrier: eieio barrier available\n");
+	}
+
+	stf_enabled_flush_types = type;
+
+	if (!no_stf_barrier)
+		stf_barrier_enable(enable);
+}
+
+ssize_t cpu_show_spec_store_bypass(struct device *dev, struct device_attribute *attr, char *buf)
+{
+	if (stf_barrier && stf_enabled_flush_types != STF_BARRIER_NONE) {
+		const char *type;
+		switch (stf_enabled_flush_types) {
+		case STF_BARRIER_EIEIO:
+			type = "eieio";
+			break;
+		case STF_BARRIER_SYNC_ORI:
+			type = "hwsync";
+			break;
+		case STF_BARRIER_FALLBACK:
+			type = "fallback";
+			break;
+		default:
+			type = "unknown";
+		}
+		return sprintf(buf, "Mitigation: Kernel entry/exit barrier (%s)\n", type);
+	}
+
+	if (!security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV) &&
+	    !security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR))
+		return sprintf(buf, "Not affected\n");
+
+	return sprintf(buf, "Vulnerable\n");
+}
+
+#ifdef CONFIG_DEBUG_FS
+static int stf_barrier_set(void *data, u64 val)
+{
+	bool enable;
+
+	if (val == 1)
+		enable = true;
+	else if (val == 0)
+		enable = false;
+	else
+		return -EINVAL;
+
+	/* Only do anything if we're changing state */
+	if (enable != stf_barrier)
+		stf_barrier_enable(enable);
+
+	return 0;
+}
+
+static int stf_barrier_get(void *data, u64 *val)
+{
+	*val = stf_barrier ? 1 : 0;
+	return 0;
+}
+
+DEFINE_SIMPLE_ATTRIBUTE(fops_stf_barrier, stf_barrier_get, stf_barrier_set, "%llu\n");
+
+static __init int stf_barrier_debugfs_init(void)
+{
+	debugfs_create_file("stf_barrier", 0600, powerpc_debugfs_root, NULL, &fops_stf_barrier);
+	return 0;
+}
+device_initcall(stf_barrier_debugfs_init);
+#endif /* CONFIG_DEBUG_FS */
+
+static void toggle_count_cache_flush(bool enable)
+{
+	if (!enable || !security_ftr_enabled(SEC_FTR_FLUSH_COUNT_CACHE)) {
+		patch_instruction_site(&patch__call_flush_count_cache, PPC_INST_NOP);
+		count_cache_flush_type = COUNT_CACHE_FLUSH_NONE;
+		pr_info("count-cache-flush: software flush disabled.\n");
+		return;
+	}
+
+	patch_branch_site(&patch__call_flush_count_cache,
+			  (u64)&flush_count_cache, BRANCH_SET_LINK);
+
+	if (!security_ftr_enabled(SEC_FTR_BCCTR_FLUSH_ASSIST)) {
+		count_cache_flush_type = COUNT_CACHE_FLUSH_SW;
+		pr_info("count-cache-flush: full software flush sequence enabled.\n");
+		return;
+	}
+
+	patch_instruction_site(&patch__flush_count_cache_return, PPC_INST_BLR);
+	count_cache_flush_type = COUNT_CACHE_FLUSH_HW;
+	pr_info("count-cache-flush: hardware assisted flush sequence enabled\n");
+}
+
+void setup_count_cache_flush(void)
+{
+	toggle_count_cache_flush(true);
+}
+
+#ifdef CONFIG_DEBUG_FS
+static int count_cache_flush_set(void *data, u64 val)
+{
+	bool enable;
+
+	if (val == 1)
+		enable = true;
+	else if (val == 0)
+		enable = false;
+	else
+		return -EINVAL;
+
+	toggle_count_cache_flush(enable);
+
+	return 0;
+}
+
+static int count_cache_flush_get(void *data, u64 *val)
+{
+	if (count_cache_flush_type == COUNT_CACHE_FLUSH_NONE)
+		*val = 0;
+	else
+		*val = 1;
+
+	return 0;
+}
+
+DEFINE_SIMPLE_ATTRIBUTE(fops_count_cache_flush, count_cache_flush_get,
+			count_cache_flush_set, "%llu\n");
+
+static __init int count_cache_flush_debugfs_init(void)
+{
+	debugfs_create_file("count_cache_flush", 0600, powerpc_debugfs_root,
+			    NULL, &fops_count_cache_flush);
+	return 0;
+}
+device_initcall(count_cache_flush_debugfs_init);
+#endif /* CONFIG_DEBUG_FS */
+#endif /* CONFIG_PPC_BOOK3S_64 */
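security.c wires each mitigation to a debugfs file via the same set/get pair: the setter accepts only 0 or 1, returns `-EINVAL` otherwise, and only re-runs the (expensive) patching when the state actually changes. A hedged stand-alone sketch of that toggle logic, with a plain int standing in for `stf_barrier` and the assignment standing in for `stf_barrier_enable()`:

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

static int stf_barrier_state;	/* stand-in for the stf_barrier flag */

/* Mirrors stf_barrier_set() as wired up through DEFINE_SIMPLE_ATTRIBUTE:
 * validate the value, then act only on a state change. */
static int toggle_set(uint64_t val)
{
	int enable;

	if (val == 1)
		enable = 1;
	else if (val == 0)
		enable = 0;
	else
		return -EINVAL;

	/* Only do anything if we're changing state */
	if (enable != stf_barrier_state)
		stf_barrier_state = enable;	/* stands in for stf_barrier_enable() */

	return 0;
}
```

The `barrier_nospec`, `rfi_flush`, and `count_cache_flush` files follow the same shape, which is why the rfi_flush_set() hunk later in this patch adds the identical "only on change" guard.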
diff --git a/arch/powerpc/kernel/setup_32.c b/arch/powerpc/kernel/setup_32.c
index ad8c9db61237..5a9f035bcd6b 100644
- --- a/arch/powerpc/kernel/setup_32.c
+++ b/arch/powerpc/kernel/setup_32.c
@@ -322,6 +322,8 @@ void __init setup_arch(char **cmdline_p)
 		ppc_md.setup_arch();
 	if ( ppc_md.progress ) ppc_md.progress("arch: exit", 0x3eab);
 
+	setup_barrier_nospec();
+
 	paging_init();
 
 	/* Initialize the MMU context management stuff */
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index 9eb469bed22b..6bb731ababc6 100644
- --- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -736,6 +736,8 @@ void __init setup_arch(char **cmdline_p)
 	if (ppc_md.setup_arch)
 		ppc_md.setup_arch();
 
+	setup_barrier_nospec();
+
 	paging_init();
 
 	/* Initialize the MMU context management stuff */
@@ -873,9 +875,6 @@ static void do_nothing(void *unused)
 
 void rfi_flush_enable(bool enable)
 {
- -	if (rfi_flush == enable)
- -		return;
- -
 	if (enable) {
 		do_rfi_flush_fixups(enabled_flush_types);
 		on_each_cpu(do_nothing, NULL, 1);
@@ -885,11 +884,15 @@ void rfi_flush_enable(bool enable)
 	rfi_flush = enable;
 }
 
- -static void init_fallback_flush(void)
+static void __ref init_fallback_flush(void)
 {
 	u64 l1d_size, limit;
 	int cpu;
 
+	/* Only allocate the fallback flush area once (at boot time). */
+	if (l1d_flush_fallback_area)
+		return;
+
 	l1d_size = ppc64_caches.dsize;
 	limit = min(safe_stack_limit(), ppc64_rma_size);
 
@@ -902,34 +905,23 @@ static void init_fallback_flush(void)
 	memset(l1d_flush_fallback_area, 0, l1d_size * 2);
 
 	for_each_possible_cpu(cpu) {
- -		/*
- -		 * The fallback flush is currently coded for 8-way
- -		 * associativity. Different associativity is possible, but it
- -		 * will be treated as 8-way and may not evict the lines as
- -		 * effectively.
- -		 *
- -		 * 128 byte lines are mandatory.
- -		 */
- -		u64 c = l1d_size / 8;
- -
 		paca[cpu].rfi_flush_fallback_area = l1d_flush_fallback_area;
- -		paca[cpu].l1d_flush_congruence = c;
- -		paca[cpu].l1d_flush_sets = c / 128;
+		paca[cpu].l1d_flush_size = l1d_size;
 	}
 }
 
- -void __init setup_rfi_flush(enum l1d_flush_type types, bool enable)
+void setup_rfi_flush(enum l1d_flush_type types, bool enable)
 {
 	if (types & L1D_FLUSH_FALLBACK) {
- -		pr_info("rfi-flush: Using fallback displacement flush\n");
+		pr_info("rfi-flush: fallback displacement flush available\n");
 		init_fallback_flush();
 	}
 
 	if (types & L1D_FLUSH_ORI)
- -		pr_info("rfi-flush: Using ori type flush\n");
+		pr_info("rfi-flush: ori type flush available\n");
 
 	if (types & L1D_FLUSH_MTTRIG)
- -		pr_info("rfi-flush: Using mttrig type flush\n");
+		pr_info("rfi-flush: mttrig type flush available\n");
 
 	enabled_flush_types = types;
 
@@ -940,13 +932,19 @@ void __init setup_rfi_flush(enum l1d_flush_type types, bool enable)
 #ifdef CONFIG_DEBUG_FS
 static int rfi_flush_set(void *data, u64 val)
 {
+	bool enable;
+
 	if (val == 1)
- -		rfi_flush_enable(true);
+		enable = true;
 	else if (val == 0)
- -		rfi_flush_enable(false);
+		enable = false;
 	else
 		return -EINVAL;
 
+	/* Only do anything if we're changing state */
+	if (enable != rfi_flush)
+		rfi_flush_enable(enable);
+
 	return 0;
 }
 
@@ -965,12 +963,4 @@ static __init int rfi_flush_debugfs_init(void)
 }
 device_initcall(rfi_flush_debugfs_init);
 #endif
- -
- -ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr, char *buf)
- -{
- -	if (rfi_flush)
- -		return sprintf(buf, "Mitigation: RFI Flush\n");
- -
- -	return sprintf(buf, "Vulnerable\n");
- -}
 #endif /* CONFIG_PPC_BOOK3S_64 */
diff --git a/arch/powerpc/kernel/vmlinux.lds.S b/arch/powerpc/kernel/vmlinux.lds.S
index 072a23a17350..876ac9d52afc 100644
- --- a/arch/powerpc/kernel/vmlinux.lds.S
+++ b/arch/powerpc/kernel/vmlinux.lds.S
@@ -73,14 +73,45 @@ SECTIONS
 	RODATA
 
 #ifdef CONFIG_PPC64
+	. = ALIGN(8);
+	__stf_entry_barrier_fixup : AT(ADDR(__stf_entry_barrier_fixup) - LOAD_OFFSET) {
+		__start___stf_entry_barrier_fixup = .;
+		*(__stf_entry_barrier_fixup)
+		__stop___stf_entry_barrier_fixup = .;
+	}
+
+	. = ALIGN(8);
+	__stf_exit_barrier_fixup : AT(ADDR(__stf_exit_barrier_fixup) - LOAD_OFFSET) {
+		__start___stf_exit_barrier_fixup = .;
+		*(__stf_exit_barrier_fixup)
+		__stop___stf_exit_barrier_fixup = .;
+	}
+
 	. = ALIGN(8);
 	__rfi_flush_fixup : AT(ADDR(__rfi_flush_fixup) - LOAD_OFFSET) {
 		__start___rfi_flush_fixup = .;
 		*(__rfi_flush_fixup)
 		__stop___rfi_flush_fixup = .;
 	}
- -#endif
+#endif /* CONFIG_PPC64 */
 
+#ifdef CONFIG_PPC_BARRIER_NOSPEC
+	. = ALIGN(8);
+	__spec_barrier_fixup : AT(ADDR(__spec_barrier_fixup) - LOAD_OFFSET) {
+		__start___barrier_nospec_fixup = .;
+		*(__barrier_nospec_fixup)
+		__stop___barrier_nospec_fixup = .;
+	}
+#endif /* CONFIG_PPC_BARRIER_NOSPEC */
+
+#ifdef CONFIG_PPC_FSL_BOOK3E
+	. = ALIGN(8);
+	__spec_btb_flush_fixup : AT(ADDR(__spec_btb_flush_fixup) - LOAD_OFFSET) {
+		__start__btb_flush_fixup = .;
+		*(__btb_flush_fixup)
+		__stop__btb_flush_fixup = .;
+	}
+#endif
 	EXCEPTION_TABLE(0)
 
 	NOTES :kernel :notes
diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
index d5edbeb8eb82..570c06a00db6 100644
- --- a/arch/powerpc/lib/code-patching.c
+++ b/arch/powerpc/lib/code-patching.c
@@ -14,12 +14,25 @@
 #include <asm/page.h>
 #include <asm/code-patching.h>
 #include <asm/uaccess.h>
+#include <asm/setup.h>
+#include <asm/sections.h>
 
 
+static inline bool is_init(unsigned int *addr)
+{
+	return addr >= (unsigned int *)__init_begin && addr < (unsigned int *)__init_end;
+}
+
 int patch_instruction(unsigned int *addr, unsigned int instr)
 {
 	int err;
 
+	/* Make sure we aren't patching a freed init section */
+	if (init_mem_is_free && is_init(addr)) {
+		pr_debug("Skipping init section patching addr: 0x%px\n", addr);
+		return 0;
+	}
+
 	__put_user_size(instr, addr, 4, err);
 	if (err)
 		return err;
@@ -32,6 +45,22 @@ int patch_branch(unsigned int *addr, unsigned long target, int flags)
 	return patch_instruction(addr, create_branch(addr, target, flags));
 }
 
+int patch_branch_site(s32 *site, unsigned long target, int flags)
+{
+	unsigned int *addr;
+
+	addr = (unsigned int *)((unsigned long)site + *site);
+	return patch_instruction(addr, create_branch(addr, target, flags));
+}
+
+int patch_instruction_site(s32 *site, unsigned int instr)
+{
+	unsigned int *addr;
+
+	addr = (unsigned int *)((unsigned long)site + *site);
+	return patch_instruction(addr, instr);
+}
+
 unsigned int create_branch(const unsigned int *addr,
 			   unsigned long target, int flags)
 {
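The new `patch_branch_site()`/`patch_instruction_site()` helpers above resolve a patch site the same way: the site is an `s32` holding a self-relative offset, and the instruction to patch lives at `(unsigned long)site + *site`. A minimal sketch of that resolution step (the struct layout in the test is an illustration of how `patch_site` entries relate to their targets, not the kernel's actual section layout):

```c
#include <stdint.h>

/* A patch site stores a self-relative 32-bit offset: the patched
 * instruction's address is the address of the site entry plus its
 * value, exactly as patch_instruction_site() computes it. */
static uint32_t *resolve_site(int32_t *site)
{
	return (uint32_t *)((uintptr_t)site + *site);
}
```

Storing relative offsets keeps the fixup entries position-independent, so they stay valid under a relocatable kernel without needing relocation processing of their own.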
diff --git a/arch/powerpc/lib/feature-fixups.c b/arch/powerpc/lib/feature-fixups.c
index 3af014684872..7bdfc19a491d 100644
- --- a/arch/powerpc/lib/feature-fixups.c
+++ b/arch/powerpc/lib/feature-fixups.c
@@ -21,7 +21,7 @@
 #include <asm/page.h>
 #include <asm/sections.h>
 #include <asm/setup.h>
- -
+#include <asm/security_features.h>
 
 struct fixup_entry {
 	unsigned long	mask;
@@ -115,6 +115,120 @@ void do_feature_fixups(unsigned long value, void *fixup_start, void *fixup_end)
 }
 
 #ifdef CONFIG_PPC_BOOK3S_64
+void do_stf_entry_barrier_fixups(enum stf_barrier_type types)
+{
+	unsigned int instrs[3], *dest;
+	long *start, *end;
+	int i;
+
+	start = PTRRELOC(&__start___stf_entry_barrier_fixup),
+	end = PTRRELOC(&__stop___stf_entry_barrier_fixup);
+
+	instrs[0] = 0x60000000; /* nop */
+	instrs[1] = 0x60000000; /* nop */
+	instrs[2] = 0x60000000; /* nop */
+
+	i = 0;
+	if (types & STF_BARRIER_FALLBACK) {
+		instrs[i++] = 0x7d4802a6; /* mflr r10		*/
+		instrs[i++] = 0x60000000; /* branch patched below */
+		instrs[i++] = 0x7d4803a6; /* mtlr r10		*/
+	} else if (types & STF_BARRIER_EIEIO) {
+		instrs[i++] = 0x7e0006ac; /* eieio + bit 6 hint */
+	} else if (types & STF_BARRIER_SYNC_ORI) {
+		instrs[i++] = 0x7c0004ac; /* hwsync		*/
+		instrs[i++] = 0xe94d0000; /* ld r10,0(r13)	*/
+		instrs[i++] = 0x63ff0000; /* ori 31,31,0 speculation barrier */
+	}
+
+	for (i = 0; start < end; start++, i++) {
+		dest = (void *)start + *start;
+
+		pr_devel("patching dest %lx\n", (unsigned long)dest);
+
+		patch_instruction(dest, instrs[0]);
+
+		if (types & STF_BARRIER_FALLBACK)
+			patch_branch(dest + 1, (unsigned long)&stf_barrier_fallback,
+				     BRANCH_SET_LINK);
+		else
+			patch_instruction(dest + 1, instrs[1]);
+
+		patch_instruction(dest + 2, instrs[2]);
+	}
+
+	printk(KERN_DEBUG "stf-barrier: patched %d entry locations (%s barrier)\n", i,
+		(types == STF_BARRIER_NONE)                  ? "no" :
+		(types == STF_BARRIER_FALLBACK)              ? "fallback" :
+		(types == STF_BARRIER_EIEIO)                 ? "eieio" :
+		(types == (STF_BARRIER_SYNC_ORI))            ? "hwsync"
+		                                           : "unknown");
+}
+
+void do_stf_exit_barrier_fixups(enum stf_barrier_type types)
+{
+	unsigned int instrs[6], *dest;
+	long *start, *end;
+	int i;
+
+	start = PTRRELOC(&__start___stf_exit_barrier_fixup),
+	end = PTRRELOC(&__stop___stf_exit_barrier_fixup);
+
+	instrs[0] = 0x60000000; /* nop */
+	instrs[1] = 0x60000000; /* nop */
+	instrs[2] = 0x60000000; /* nop */
+	instrs[3] = 0x60000000; /* nop */
+	instrs[4] = 0x60000000; /* nop */
+	instrs[5] = 0x60000000; /* nop */
+
+	i = 0;
+	if (types & STF_BARRIER_FALLBACK || types & STF_BARRIER_SYNC_ORI) {
+		if (cpu_has_feature(CPU_FTR_HVMODE)) {
+			instrs[i++] = 0x7db14ba6; /* mtspr 0x131, r13 (HSPRG1) */
+			instrs[i++] = 0x7db04aa6; /* mfspr r13, 0x130 (HSPRG0) */
+		} else {
+			instrs[i++] = 0x7db243a6; /* mtsprg 2,r13	*/
+			instrs[i++] = 0x7db142a6; /* mfsprg r13,1    */
+	        }
+		instrs[i++] = 0x7c0004ac; /* hwsync		*/
+		instrs[i++] = 0xe9ad0000; /* ld r13,0(r13)	*/
+		instrs[i++] = 0x63ff0000; /* ori 31,31,0 speculation barrier */
+		if (cpu_has_feature(CPU_FTR_HVMODE)) {
+			instrs[i++] = 0x7db14aa6; /* mfspr r13, 0x131 (HSPRG1) */
+		} else {
+			instrs[i++] = 0x7db242a6; /* mfsprg r13,2 */
+		}
+	} else if (types & STF_BARRIER_EIEIO) {
+		instrs[i++] = 0x7e0006ac; /* eieio + bit 6 hint */
+	}
+
+	for (i = 0; start < end; start++, i++) {
+		dest = (void *)start + *start;
+
+		pr_devel("patching dest %lx\n", (unsigned long)dest);
+
+		patch_instruction(dest, instrs[0]);
+		patch_instruction(dest + 1, instrs[1]);
+		patch_instruction(dest + 2, instrs[2]);
+		patch_instruction(dest + 3, instrs[3]);
+		patch_instruction(dest + 4, instrs[4]);
+		patch_instruction(dest + 5, instrs[5]);
+	}
+	printk(KERN_DEBUG "stf-barrier: patched %d exit locations (%s barrier)\n", i,
+		(types == STF_BARRIER_NONE)                  ? "no" :
+		(types == STF_BARRIER_FALLBACK)              ? "fallback" :
+		(types == STF_BARRIER_EIEIO)                 ? "eieio" :
+		(types == (STF_BARRIER_SYNC_ORI))            ? "hwsync"
+		                                           : "unknown");
+}
+
+
+void do_stf_barrier_fixups(enum stf_barrier_type types)
+{
+	do_stf_entry_barrier_fixups(types);
+	do_stf_exit_barrier_fixups(types);
+}
+
 void do_rfi_flush_fixups(enum l1d_flush_type types)
 {
 	unsigned int instrs[3], *dest;
@@ -151,10 +265,110 @@ void do_rfi_flush_fixups(enum l1d_flush_type types)
 		patch_instruction(dest + 2, instrs[2]);
 	}
 
-	printk(KERN_DEBUG "rfi-flush: patched %d locations\n", i);
+	printk(KERN_DEBUG "rfi-flush: patched %d locations (%s flush)\n", i,
+		(types == L1D_FLUSH_NONE)       ? "no" :
+		(types == L1D_FLUSH_FALLBACK)   ? "fallback displacement" :
+		(types &  L1D_FLUSH_ORI)        ? (types & L1D_FLUSH_MTTRIG)
+							? "ori+mttrig type"
+							: "ori type" :
+		(types &  L1D_FLUSH_MTTRIG)     ? "mttrig type"
+						: "unknown");
+}
+
+void do_barrier_nospec_fixups_range(bool enable, void *fixup_start, void *fixup_end)
+{
+	unsigned int instr, *dest;
+	long *start, *end;
+	int i;
+
+	start = fixup_start;
+	end = fixup_end;
+
+	instr = 0x60000000; /* nop */
+
+	if (enable) {
+		pr_info("barrier-nospec: using ORI speculation barrier\n");
+		instr = 0x63ff0000; /* ori 31,31,0 speculation barrier */
+	}
+
+	for (i = 0; start < end; start++, i++) {
+		dest = (void *)start + *start;
+
+		pr_devel("patching dest %lx\n", (unsigned long)dest);
+		patch_instruction(dest, instr);
+	}
+
+	printk(KERN_DEBUG "barrier-nospec: patched %d locations\n", i);
 }
+
 #endif /* CONFIG_PPC_BOOK3S_64 */
 
+#ifdef CONFIG_PPC_BARRIER_NOSPEC
+void do_barrier_nospec_fixups(bool enable)
+{
+	void *start, *end;
+
+	start = PTRRELOC(&__start___barrier_nospec_fixup),
+	end = PTRRELOC(&__stop___barrier_nospec_fixup);
+
+	do_barrier_nospec_fixups_range(enable, start, end);
+}
+#endif /* CONFIG_PPC_BARRIER_NOSPEC */
+
+#ifdef CONFIG_PPC_FSL_BOOK3E
+void do_barrier_nospec_fixups_range(bool enable, void *fixup_start, void *fixup_end)
+{
+	unsigned int instr[2], *dest;
+	long *start, *end;
+	int i;
+
+	start = fixup_start;
+	end = fixup_end;
+
+	instr[0] = PPC_INST_NOP;
+	instr[1] = PPC_INST_NOP;
+
+	if (enable) {
+		pr_info("barrier-nospec: using isync; sync as speculation barrier\n");
+		instr[0] = PPC_INST_ISYNC;
+		instr[1] = PPC_INST_SYNC;
+	}
+
+	for (i = 0; start < end; start++, i++) {
+		dest = (void *)start + *start;
+
+		pr_devel("patching dest %lx\n", (unsigned long)dest);
+		patch_instruction(dest, instr[0]);
+		patch_instruction(dest + 1, instr[1]);
+	}
+
+	printk(KERN_DEBUG "barrier-nospec: patched %d locations\n", i);
+}
+
+static void patch_btb_flush_section(long *curr)
+{
+	unsigned int *start, *end;
+
+	start = (void *)curr + *curr;
+	end = (void *)curr + *(curr + 1);
+	for (; start < end; start++) {
+		pr_devel("patching dest %lx\n", (unsigned long)start);
+		patch_instruction(start, PPC_INST_NOP);
+	}
+}
+
+void do_btb_flush_fixups(void)
+{
+	long *start, *end;
+
+	start = PTRRELOC(&__start__btb_flush_fixup);
+	end = PTRRELOC(&__stop__btb_flush_fixup);
+
+	for (; start < end; start += 2)
+		patch_btb_flush_section(start);
+}
+#endif /* CONFIG_PPC_FSL_BOOK3E */
+
 void do_lwsync_fixups(unsigned long value, void *fixup_start, void *fixup_end)
 {
 	long *start, *end;
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 22d94c3e6fc4..1efe5ca5c3bc 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -62,6 +62,7 @@
 #endif
 
 unsigned long long memory_limit;
+bool init_mem_is_free;
 
 #ifdef CONFIG_HIGHMEM
 pte_t *kmap_pte;
@@ -381,6 +382,7 @@ void __init mem_init(void)
 void free_initmem(void)
 {
 	ppc_md.progress = ppc_printk_progress;
+	init_mem_is_free = true;
 	free_initmem_default(POISON_FREE_INITMEM);
 }
 
diff --git a/arch/powerpc/mm/tlb_low_64e.S b/arch/powerpc/mm/tlb_low_64e.S
index 29d6987c37ba..5486d56da289 100644
--- a/arch/powerpc/mm/tlb_low_64e.S
+++ b/arch/powerpc/mm/tlb_low_64e.S
@@ -69,6 +69,13 @@ END_FTR_SECTION_IFSET(CPU_FTR_EMB_HV)
 	std	r15,EX_TLB_R15(r12)
 	std	r10,EX_TLB_CR(r12)
 #ifdef CONFIG_PPC_FSL_BOOK3E
+START_BTB_FLUSH_SECTION
+	mfspr r11, SPRN_SRR1
+	andi. r10,r11,MSR_PR
+	beq 1f
+	BTB_FLUSH(r10)
+1:
+END_BTB_FLUSH_SECTION
 	std	r7,EX_TLB_R7(r12)
 #endif
 	TLB_MISS_PROLOG_STATS
diff --git a/arch/powerpc/platforms/powernv/setup.c b/arch/powerpc/platforms/powernv/setup.c
index c57afc619b20..e14b52c7ebd8 100644
--- a/arch/powerpc/platforms/powernv/setup.c
+++ b/arch/powerpc/platforms/powernv/setup.c
@@ -37,53 +37,99 @@
 #include <asm/smp.h>
 #include <asm/tm.h>
 #include <asm/setup.h>
+#include <asm/security_features.h>
 
 #include "powernv.h"
 
+
+static bool fw_feature_is(const char *state, const char *name,
+			  struct device_node *fw_features)
+{
+	struct device_node *np;
+	bool rc = false;
+
+	np = of_get_child_by_name(fw_features, name);
+	if (np) {
+		rc = of_property_read_bool(np, state);
+		of_node_put(np);
+	}
+
+	return rc;
+}
+
+static void init_fw_feat_flags(struct device_node *np)
+{
+	if (fw_feature_is("enabled", "inst-spec-barrier-ori31,31,0", np))
+		security_ftr_set(SEC_FTR_SPEC_BAR_ORI31);
+
+	if (fw_feature_is("enabled", "fw-bcctrl-serialized", np))
+		security_ftr_set(SEC_FTR_BCCTRL_SERIALISED);
+
+	if (fw_feature_is("enabled", "inst-l1d-flush-ori30,30,0", np))
+		security_ftr_set(SEC_FTR_L1D_FLUSH_ORI30);
+
+	if (fw_feature_is("enabled", "inst-l1d-flush-trig2", np))
+		security_ftr_set(SEC_FTR_L1D_FLUSH_TRIG2);
+
+	if (fw_feature_is("enabled", "fw-l1d-thread-split", np))
+		security_ftr_set(SEC_FTR_L1D_THREAD_PRIV);
+
+	if (fw_feature_is("enabled", "fw-count-cache-disabled", np))
+		security_ftr_set(SEC_FTR_COUNT_CACHE_DISABLED);
+
+	if (fw_feature_is("enabled", "fw-count-cache-flush-bcctr2,0,0", np))
+		security_ftr_set(SEC_FTR_BCCTR_FLUSH_ASSIST);
+
+	if (fw_feature_is("enabled", "needs-count-cache-flush-on-context-switch", np))
+		security_ftr_set(SEC_FTR_FLUSH_COUNT_CACHE);
+
+	/*
+	 * The features below are enabled by default, so we instead look to see
+	 * if firmware has *disabled* them, and clear them if so.
+	 */
+	if (fw_feature_is("disabled", "speculation-policy-favor-security", np))
+		security_ftr_clear(SEC_FTR_FAVOUR_SECURITY);
+
+	if (fw_feature_is("disabled", "needs-l1d-flush-msr-pr-0-to-1", np))
+		security_ftr_clear(SEC_FTR_L1D_FLUSH_PR);
+
+	if (fw_feature_is("disabled", "needs-l1d-flush-msr-hv-1-to-0", np))
+		security_ftr_clear(SEC_FTR_L1D_FLUSH_HV);
+
+	if (fw_feature_is("disabled", "needs-spec-barrier-for-bound-checks", np))
+		security_ftr_clear(SEC_FTR_BNDS_CHK_SPEC_BAR);
+}
+
 static void pnv_setup_rfi_flush(void)
 {
 	struct device_node *np, *fw_features;
 	enum l1d_flush_type type;
-	int enable;
+	bool enable;
 
 	/* Default to fallback in case fw-features are not available */
 	type = L1D_FLUSH_FALLBACK;
-	enable = 1;
 
 	np = of_find_node_by_name(NULL, "ibm,opal");
 	fw_features = of_get_child_by_name(np, "fw-features");
 	of_node_put(np);
 
 	if (fw_features) {
-		np = of_get_child_by_name(fw_features, "inst-l1d-flush-trig2");
-		if (np && of_property_read_bool(np, "enabled"))
-			type = L1D_FLUSH_MTTRIG;
+		init_fw_feat_flags(fw_features);
+		of_node_put(fw_features);
 
-		of_node_put(np);
+		if (security_ftr_enabled(SEC_FTR_L1D_FLUSH_TRIG2))
+			type = L1D_FLUSH_MTTRIG;
 
-		np = of_get_child_by_name(fw_features, "inst-l1d-flush-ori30,30,0");
-		if (np && of_property_read_bool(np, "enabled"))
+		if (security_ftr_enabled(SEC_FTR_L1D_FLUSH_ORI30))
 			type = L1D_FLUSH_ORI;
-
-		of_node_put(np);
-
-		/* Enable unless firmware says NOT to */
-		enable = 2;
-		np = of_get_child_by_name(fw_features, "needs-l1d-flush-msr-hv-1-to-0");
-		if (np && of_property_read_bool(np, "disabled"))
-			enable--;
-
-		of_node_put(np);
-
-		np = of_get_child_by_name(fw_features, "needs-l1d-flush-msr-pr-0-to-1");
-		if (np && of_property_read_bool(np, "disabled"))
-			enable--;
-
-		of_node_put(np);
-		of_node_put(fw_features);
 	}
 
-	setup_rfi_flush(type, enable > 0);
+	enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) && \
+		 (security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR)   || \
+		  security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV));
+
+	setup_rfi_flush(type, enable);
+	setup_count_cache_flush();
 }
 
 static void __init pnv_setup_arch(void)
@@ -91,6 +137,7 @@ static void __init pnv_setup_arch(void)
 	set_arch_panic_timeout(10, ARCH_PANIC_TIMEOUT);
 
 	pnv_setup_rfi_flush();
+	setup_stf_barrier();
 
 	/* Initialize SMP */
 	pnv_smp_init();
diff --git a/arch/powerpc/platforms/pseries/mobility.c b/arch/powerpc/platforms/pseries/mobility.c
index 8dd0c8edefd6..c773396d0969 100644
--- a/arch/powerpc/platforms/pseries/mobility.c
+++ b/arch/powerpc/platforms/pseries/mobility.c
@@ -314,6 +314,9 @@ void post_mobility_fixup(void)
 		printk(KERN_ERR "Post-mobility device tree update "
 			"failed: %d\n", rc);
 
+	/* Possibly switch to a new RFI flush type */
+	pseries_setup_rfi_flush();
+
 	return;
 }
 
diff --git a/arch/powerpc/platforms/pseries/pseries.h b/arch/powerpc/platforms/pseries/pseries.h
index 8411c27293e4..e7d80797384d 100644
--- a/arch/powerpc/platforms/pseries/pseries.h
+++ b/arch/powerpc/platforms/pseries/pseries.h
@@ -81,4 +81,6 @@ extern struct pci_controller_ops pseries_pci_controller_ops;
 
 unsigned long pseries_memory_block_size(void);
 
+void pseries_setup_rfi_flush(void);
+
 #endif /* _PSERIES_PSERIES_H */
diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
index dd2545fc9947..9cc976ff7fec 100644
--- a/arch/powerpc/platforms/pseries/setup.c
+++ b/arch/powerpc/platforms/pseries/setup.c
@@ -67,6 +67,7 @@
 #include <asm/eeh.h>
 #include <asm/reg.h>
 #include <asm/plpar_wrappers.h>
+#include <asm/security_features.h>
 
 #include "pseries.h"
 
@@ -499,37 +500,87 @@ static void __init find_and_init_phbs(void)
 	of_pci_check_probe_only();
 }
 
-static void pseries_setup_rfi_flush(void)
+static void init_cpu_char_feature_flags(struct h_cpu_char_result *result)
+{
+	/*
+	 * The features below are disabled by default, so we instead look to see
+	 * if firmware has *enabled* them, and set them if so.
+	 */
+	if (result->character & H_CPU_CHAR_SPEC_BAR_ORI31)
+		security_ftr_set(SEC_FTR_SPEC_BAR_ORI31);
+
+	if (result->character & H_CPU_CHAR_BCCTRL_SERIALISED)
+		security_ftr_set(SEC_FTR_BCCTRL_SERIALISED);
+
+	if (result->character & H_CPU_CHAR_L1D_FLUSH_ORI30)
+		security_ftr_set(SEC_FTR_L1D_FLUSH_ORI30);
+
+	if (result->character & H_CPU_CHAR_L1D_FLUSH_TRIG2)
+		security_ftr_set(SEC_FTR_L1D_FLUSH_TRIG2);
+
+	if (result->character & H_CPU_CHAR_L1D_THREAD_PRIV)
+		security_ftr_set(SEC_FTR_L1D_THREAD_PRIV);
+
+	if (result->character & H_CPU_CHAR_COUNT_CACHE_DISABLED)
+		security_ftr_set(SEC_FTR_COUNT_CACHE_DISABLED);
+
+	if (result->character & H_CPU_CHAR_BCCTR_FLUSH_ASSIST)
+		security_ftr_set(SEC_FTR_BCCTR_FLUSH_ASSIST);
+
+	if (result->behaviour & H_CPU_BEHAV_FLUSH_COUNT_CACHE)
+		security_ftr_set(SEC_FTR_FLUSH_COUNT_CACHE);
+
+	/*
+	 * The features below are enabled by default, so we instead look to see
+	 * if firmware has *disabled* them, and clear them if so.
+	 */
+	if (!(result->behaviour & H_CPU_BEHAV_FAVOUR_SECURITY))
+		security_ftr_clear(SEC_FTR_FAVOUR_SECURITY);
+
+	if (!(result->behaviour & H_CPU_BEHAV_L1D_FLUSH_PR))
+		security_ftr_clear(SEC_FTR_L1D_FLUSH_PR);
+
+	if (!(result->behaviour & H_CPU_BEHAV_BNDS_CHK_SPEC_BAR))
+		security_ftr_clear(SEC_FTR_BNDS_CHK_SPEC_BAR);
+}
+
+void pseries_setup_rfi_flush(void)
 {
 	struct h_cpu_char_result result;
 	enum l1d_flush_type types;
 	bool enable;
 	long rc;
 
-	/* Enable by default */
-	enable = true;
+	/*
+	 * Set features to the defaults assumed by init_cpu_char_feature_flags()
+	 * so it can set/clear again any features that might have changed after
+	 * migration, and in case the hypercall fails and it is not even called.
+	 */
+	powerpc_security_features = SEC_FTR_DEFAULT;
 
 	rc = plpar_get_cpu_characteristics(&result);
-	if (rc == H_SUCCESS) {
-		types = L1D_FLUSH_NONE;
+	if (rc == H_SUCCESS)
+		init_cpu_char_feature_flags(&result);
 
-		if (result.character & H_CPU_CHAR_L1D_FLUSH_TRIG2)
-			types |= L1D_FLUSH_MTTRIG;
-		if (result.character & H_CPU_CHAR_L1D_FLUSH_ORI30)
-			types |= L1D_FLUSH_ORI;
+	/*
+	 * We're the guest so this doesn't apply to us, clear it to simplify
+	 * handling of it elsewhere.
+	 */
+	security_ftr_clear(SEC_FTR_L1D_FLUSH_HV);
 
-		/* Use fallback if nothing set in hcall */
-		if (types == L1D_FLUSH_NONE)
-			types = L1D_FLUSH_FALLBACK;
+	types = L1D_FLUSH_FALLBACK;
 
-		if (!(result.behaviour & H_CPU_BEHAV_L1D_FLUSH_PR))
-			enable = false;
-	} else {
-		/* Default to fallback if case hcall is not available */
-		types = L1D_FLUSH_FALLBACK;
-	}
+	if (security_ftr_enabled(SEC_FTR_L1D_FLUSH_TRIG2))
+		types |= L1D_FLUSH_MTTRIG;
+
+	if (security_ftr_enabled(SEC_FTR_L1D_FLUSH_ORI30))
+		types |= L1D_FLUSH_ORI;
+
+	enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) && \
+		 security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR);
 
 	setup_rfi_flush(types, enable);
+	setup_count_cache_flush();
 }
 
 static void __init pSeries_setup_arch(void)
@@ -549,6 +600,7 @@ static void __init pSeries_setup_arch(void)
 	fwnmi_init();
 
 	pseries_setup_rfi_flush();
+	setup_stf_barrier();
 
 	/* By default, only probe PCI (can be overridden by rtas_pci) */
 	pci_add_flags(PCI_PROBE_ONLY);
diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
index 786bf01691c9..83619ebede93 100644
--- a/arch/powerpc/xmon/xmon.c
+++ b/arch/powerpc/xmon/xmon.c
@@ -2144,6 +2144,8 @@ static void dump_one_paca(int cpu)
 	DUMP(p, slb_cache_ptr, "x");
 	for (i = 0; i < SLB_CACHE_ENTRIES; i++)
 		printf(" slb_cache[%d]:        = 0x%016lx\n", i, p->slb_cache[i]);
+
+	DUMP(p, rfi_flush_fallback_area, "px");
 #endif
 	DUMP(p, dscr_default, "llx");
 #ifdef CONFIG_PPC_BOOK3E
-- 
2.20.1

-----BEGIN PGP SIGNATURE-----

iQIcBAEBAgAGBQJcvHWhAAoJEFHr6jzI4aWA6nsP/0YskmAfLovcUmERQ7+bIjq6
IcS1T466dvy6MlqeBXU4x8pVgInWeHKEC9XJdkM1lOeib/SLW7Hbz4kgJeOGwFGY
lOTaexrxvsBqPm7f6GC0zbl9obEIIIIUs+TielFQANBgqm+q8Wio+XXPP9bpKeKY
agSpQ3nwL/PYixznbNmN/lP9py5p89LQ0IBcR7dDBGGWJtD/AXeZ9hslsZxPbPtI
nZJ0vdnjuoB2z+hCxfKWlYfLwH0VfoTpqP5x3ALCkvbBr67e8bf6EK8+trnvhyQ8
iLY4bp1pm2epAI0/3NfyEiDMsGjVJ6IFlkyhDkHJgJNu0BGcGOSX2GpyU3juviAK
c95FtBft/i8AwigOMCivg2mN5edYjsSiPoEItwT5KWqgByJsdr5i5mYVx8cUjMOz
iAxLZCdg+UHZYuCBCAO2ZI1G9bVXI1Pa3btMspiCOOOsYGjXGf0oFfKQ+7957hUO
ftYYJoGHlMHiHR1OPas6T3lk6YKF9uvfIDTE3OKw2obHbbRz3u82xoWMRGW503MN
7WpkpAP7oZ9RgqIWFVhatWy5f+7GFL0akEi4o2tsZHhYlPau7YWo+nToTd87itwt
GBaWJipzge4s13VkhAE+jWFO35Fvwi8uNZ7UgpuKMBECEjkGbtzBTq2MjSF5G8wc
yPEod5jby/Iqb7DkGPVG
=6DnF
-----END PGP SIGNATURE-----

^ permalink raw reply	[flat|nested] 180+ messages in thread

* [PATCH stable v4.4 01/52] powerpc/xmon: Add RFI flush related fields to paca dump
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:19   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:19 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

commit 274920a3ecd5f43af0cc380bc0a9ee73a52b9f8a upstream.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/xmon/xmon.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
index 786bf01691c9..5f0c17b043de 100644
--- a/arch/powerpc/xmon/xmon.c
+++ b/arch/powerpc/xmon/xmon.c
@@ -2144,6 +2144,10 @@ static void dump_one_paca(int cpu)
 	DUMP(p, slb_cache_ptr, "x");
 	for (i = 0; i < SLB_CACHE_ENTRIES; i++)
 		printf(" slb_cache[%d]:        = 0x%016lx\n", i, p->slb_cache[i]);
+
+	DUMP(p, rfi_flush_fallback_area, "px");
+	DUMP(p, l1d_flush_congruence, "llx");
+	DUMP(p, l1d_flush_sets, "llx");
 #endif
 	DUMP(p, dscr_default, "llx");
 #ifdef CONFIG_PPC_BOOK3E
-- 
2.20.1



* [PATCH stable v4.4 02/52] powerpc/64s: Improve RFI L1-D cache flush fallback
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:19   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:19 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

From: Nicholas Piggin <npiggin@gmail.com>

commit bdcb1aefc5b3f7d0f1dc8b02673602bca2ff7a4b upstream.

The fallback RFI flush is used when firmware does not provide a way
to flush the cache. It's a "displacement flush" that evicts useful
data by displacing it with an uninteresting buffer.

The flush has to take care to work with implementation-specific cache
replacement policies, so the recipe has been in flux. The initial
slow but conservative approach was to touch all lines of a congruence
class, with dependencies between each load. It has since been
determined that a linear pattern of loads without dependencies is
sufficient, and is significantly faster.

Measuring the speed of a null syscall with RFI fallback flush enabled
gives the relative improvement:

P8 - 1.83x
P9 - 1.75x

The flush also becomes simpler and more adaptable to different cache
geometries.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/include/asm/paca.h      |  3 +-
 arch/powerpc/kernel/asm-offsets.c    |  3 +-
 arch/powerpc/kernel/exceptions-64s.S | 76 +++++++++++++---------------
 arch/powerpc/kernel/setup_64.c       | 13 +----
 arch/powerpc/xmon/xmon.c             |  2 -
 5 files changed, 39 insertions(+), 58 deletions(-)

diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h
index 45e2aefece16..08e5df3395fa 100644
--- a/arch/powerpc/include/asm/paca.h
+++ b/arch/powerpc/include/asm/paca.h
@@ -199,8 +199,7 @@ struct paca_struct {
 	 */
 	u64 exrfi[13] __aligned(0x80);
 	void *rfi_flush_fallback_area;
-	u64 l1d_flush_congruence;
-	u64 l1d_flush_sets;
+	u64 l1d_flush_size;
 #endif
 };
 
diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index d92705e3a0c1..de3c29c51503 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -245,8 +245,7 @@ int main(void)
 	DEFINE(PACA_IN_MCE, offsetof(struct paca_struct, in_mce));
 	DEFINE(PACA_RFI_FLUSH_FALLBACK_AREA, offsetof(struct paca_struct, rfi_flush_fallback_area));
 	DEFINE(PACA_EXRFI, offsetof(struct paca_struct, exrfi));
-	DEFINE(PACA_L1D_FLUSH_CONGRUENCE, offsetof(struct paca_struct, l1d_flush_congruence));
-	DEFINE(PACA_L1D_FLUSH_SETS, offsetof(struct paca_struct, l1d_flush_sets));
+	DEFINE(PACA_L1D_FLUSH_SIZE, offsetof(struct paca_struct, l1d_flush_size));
 #endif
 	DEFINE(PACAHWCPUID, offsetof(struct paca_struct, hw_cpu_id));
 	DEFINE(PACAKEXECSTATE, offsetof(struct paca_struct, kexec_state));
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index 938a30fef031..d2ff233ddc53 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -1571,39 +1571,37 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
 	std	r9,PACA_EXRFI+EX_R9(r13)
 	std	r10,PACA_EXRFI+EX_R10(r13)
 	std	r11,PACA_EXRFI+EX_R11(r13)
-	std	r12,PACA_EXRFI+EX_R12(r13)
-	std	r8,PACA_EXRFI+EX_R13(r13)
 	mfctr	r9
 	ld	r10,PACA_RFI_FLUSH_FALLBACK_AREA(r13)
-	ld	r11,PACA_L1D_FLUSH_SETS(r13)
-	ld	r12,PACA_L1D_FLUSH_CONGRUENCE(r13)
-	/*
-	 * The load adresses are at staggered offsets within cachelines,
-	 * which suits some pipelines better (on others it should not
-	 * hurt).
-	 */
-	addi	r12,r12,8
+	ld	r11,PACA_L1D_FLUSH_SIZE(r13)
+	srdi	r11,r11,(7 + 3) /* 128 byte lines, unrolled 8x */
 	mtctr	r11
 	DCBT_STOP_ALL_STREAM_IDS(r11) /* Stop prefetch streams */
 
 	/* order ld/st prior to dcbt stop all streams with flushing */
 	sync
-1:	li	r8,0
-	.rept	8 /* 8-way set associative */
-	ldx	r11,r10,r8
-	add	r8,r8,r12
-	xor	r11,r11,r11	// Ensure r11 is 0 even if fallback area is not
-	add	r8,r8,r11	// Add 0, this creates a dependency on the ldx
-	.endr
-	addi	r10,r10,128 /* 128 byte cache line */
+
+	/*
+	 * The load adresses are at staggered offsets within cachelines,
+	 * which suits some pipelines better (on others it should not
+	 * hurt).
+	 */
+1:
+	ld	r11,(0x80 + 8)*0(r10)
+	ld	r11,(0x80 + 8)*1(r10)
+	ld	r11,(0x80 + 8)*2(r10)
+	ld	r11,(0x80 + 8)*3(r10)
+	ld	r11,(0x80 + 8)*4(r10)
+	ld	r11,(0x80 + 8)*5(r10)
+	ld	r11,(0x80 + 8)*6(r10)
+	ld	r11,(0x80 + 8)*7(r10)
+	addi	r10,r10,0x80*8
 	bdnz	1b
 
 	mtctr	r9
 	ld	r9,PACA_EXRFI+EX_R9(r13)
 	ld	r10,PACA_EXRFI+EX_R10(r13)
 	ld	r11,PACA_EXRFI+EX_R11(r13)
-	ld	r12,PACA_EXRFI+EX_R12(r13)
-	ld	r8,PACA_EXRFI+EX_R13(r13)
 	GET_SCRATCH0(r13);
 	rfid
 
@@ -1614,39 +1612,37 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
 	std	r9,PACA_EXRFI+EX_R9(r13)
 	std	r10,PACA_EXRFI+EX_R10(r13)
 	std	r11,PACA_EXRFI+EX_R11(r13)
-	std	r12,PACA_EXRFI+EX_R12(r13)
-	std	r8,PACA_EXRFI+EX_R13(r13)
 	mfctr	r9
 	ld	r10,PACA_RFI_FLUSH_FALLBACK_AREA(r13)
-	ld	r11,PACA_L1D_FLUSH_SETS(r13)
-	ld	r12,PACA_L1D_FLUSH_CONGRUENCE(r13)
-	/*
-	 * The load adresses are at staggered offsets within cachelines,
-	 * which suits some pipelines better (on others it should not
-	 * hurt).
-	 */
-	addi	r12,r12,8
+	ld	r11,PACA_L1D_FLUSH_SIZE(r13)
+	srdi	r11,r11,(7 + 3) /* 128 byte lines, unrolled 8x */
 	mtctr	r11
 	DCBT_STOP_ALL_STREAM_IDS(r11) /* Stop prefetch streams */
 
 	/* order ld/st prior to dcbt stop all streams with flushing */
 	sync
-1:	li	r8,0
-	.rept	8 /* 8-way set associative */
-	ldx	r11,r10,r8
-	add	r8,r8,r12
-	xor	r11,r11,r11	// Ensure r11 is 0 even if fallback area is not
-	add	r8,r8,r11	// Add 0, this creates a dependency on the ldx
-	.endr
-	addi	r10,r10,128 /* 128 byte cache line */
+
+	/*
+	 * The load adresses are at staggered offsets within cachelines,
+	 * which suits some pipelines better (on others it should not
+	 * hurt).
+	 */
+1:
+	ld	r11,(0x80 + 8)*0(r10)
+	ld	r11,(0x80 + 8)*1(r10)
+	ld	r11,(0x80 + 8)*2(r10)
+	ld	r11,(0x80 + 8)*3(r10)
+	ld	r11,(0x80 + 8)*4(r10)
+	ld	r11,(0x80 + 8)*5(r10)
+	ld	r11,(0x80 + 8)*6(r10)
+	ld	r11,(0x80 + 8)*7(r10)
+	addi	r10,r10,0x80*8
 	bdnz	1b
 
 	mtctr	r9
 	ld	r9,PACA_EXRFI+EX_R9(r13)
 	ld	r10,PACA_EXRFI+EX_R10(r13)
 	ld	r11,PACA_EXRFI+EX_R11(r13)
-	ld	r12,PACA_EXRFI+EX_R12(r13)
-	ld	r8,PACA_EXRFI+EX_R13(r13)
 	GET_SCRATCH0(r13);
 	hrfid
 
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index 9eb469bed22b..1d2712d628c3 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -902,19 +902,8 @@ static void init_fallback_flush(void)
 	memset(l1d_flush_fallback_area, 0, l1d_size * 2);
 
 	for_each_possible_cpu(cpu) {
-		/*
-		 * The fallback flush is currently coded for 8-way
-		 * associativity. Different associativity is possible, but it
-		 * will be treated as 8-way and may not evict the lines as
-		 * effectively.
-		 *
-		 * 128 byte lines are mandatory.
-		 */
-		u64 c = l1d_size / 8;
-
 		paca[cpu].rfi_flush_fallback_area = l1d_flush_fallback_area;
-		paca[cpu].l1d_flush_congruence = c;
-		paca[cpu].l1d_flush_sets = c / 128;
+		paca[cpu].l1d_flush_size = l1d_size;
 	}
 }
 
diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
index 5f0c17b043de..83619ebede93 100644
--- a/arch/powerpc/xmon/xmon.c
+++ b/arch/powerpc/xmon/xmon.c
@@ -2146,8 +2146,6 @@ static void dump_one_paca(int cpu)
 		printf(" slb_cache[%d]:        = 0x%016lx\n", i, p->slb_cache[i]);
 
 	DUMP(p, rfi_flush_fallback_area, "px");
-	DUMP(p, l1d_flush_congruence, "llx");
-	DUMP(p, l1d_flush_sets, "llx");
 #endif
 	DUMP(p, dscr_default, "llx");
 #ifdef CONFIG_PPC_BOOK3E
-- 
2.20.1



* [PATCH stable v4.4 03/52] powerpc/pseries: Support firmware disable of RFI flush
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:19   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:19 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

commit 582605a429e20ae68fd0b041b2e840af296edd08 upstream.

Some versions of firmware have a setting that can be configured
to disable the RFI flush; add support for it.

Fixes: 8989d56878a7 ("powerpc/pseries: Query hypervisor for RFI flush settings")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/platforms/pseries/setup.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
index dd2545fc9947..ec1d1768a799 100644
--- a/arch/powerpc/platforms/pseries/setup.c
+++ b/arch/powerpc/platforms/pseries/setup.c
@@ -522,7 +522,8 @@ static void pseries_setup_rfi_flush(void)
 		if (types == L1D_FLUSH_NONE)
 			types = L1D_FLUSH_FALLBACK;
 
-		if (!(result.behaviour & H_CPU_BEHAV_L1D_FLUSH_PR))
+		if ((!(result.behaviour & H_CPU_BEHAV_L1D_FLUSH_PR)) ||
+		    (!(result.behaviour & H_CPU_BEHAV_FAVOUR_SECURITY)))
 			enable = false;
 	} else {
 		/* Default to fallback if case hcall is not available */
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 180+ messages in thread

* [PATCH stable v4.4 04/52] powerpc/powernv: Support firmware disable of RFI flush
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:19   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:19 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

commit eb0a2d2620ae431c543963c8c7f08f597366fc60 upstream.

Some versions of firmware have a setting that can be configured
to disable the RFI flush; add support for it.

Fixes: 6e032b350cd1 ("powerpc/powernv: Check device-tree for RFI flush settings")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/platforms/powernv/setup.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/powerpc/platforms/powernv/setup.c b/arch/powerpc/platforms/powernv/setup.c
index c57afc619b20..fdc5f25a1b4a 100644
--- a/arch/powerpc/platforms/powernv/setup.c
+++ b/arch/powerpc/platforms/powernv/setup.c
@@ -79,6 +79,10 @@ static void pnv_setup_rfi_flush(void)
 		if (np && of_property_read_bool(np, "disabled"))
 			enable--;
 
+		np = of_get_child_by_name(fw_features, "speculation-policy-favor-security");
+		if (np && of_property_read_bool(np, "disabled"))
+			enable = 0;
+
 		of_node_put(np);
 		of_node_put(fw_features);
 	}
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 180+ messages in thread

* [PATCH stable v4.4 05/52] powerpc/rfi-flush: Move the logic to avoid a redo into the debugfs code
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:19   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:19 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

commit 1e2a9fc7496955faacbbed49461d611b704a7505 upstream.

rfi_flush_enable() includes a check to see if we're already
enabled (or disabled), and in that case does nothing.

But that means calling setup_rfi_flush() a 2nd time doesn't actually
work, which is a bit confusing.

Move that check into the debugfs code, where it really belongs.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/kernel/setup_64.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index 1d2712d628c3..5bb4a6811371 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -873,9 +873,6 @@ static void do_nothing(void *unused)
 
 void rfi_flush_enable(bool enable)
 {
-	if (rfi_flush == enable)
-		return;
-
 	if (enable) {
 		do_rfi_flush_fixups(enabled_flush_types);
 		on_each_cpu(do_nothing, NULL, 1);
@@ -929,13 +926,19 @@ void __init setup_rfi_flush(enum l1d_flush_type types, bool enable)
 #ifdef CONFIG_DEBUG_FS
 static int rfi_flush_set(void *data, u64 val)
 {
+	bool enable;
+
 	if (val == 1)
-		rfi_flush_enable(true);
+		enable = true;
 	else if (val == 0)
-		rfi_flush_enable(false);
+		enable = false;
 	else
 		return -EINVAL;
 
+	/* Only do anything if we're changing state */
+	if (enable != rfi_flush)
+		rfi_flush_enable(enable);
+
 	return 0;
 }
 
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 180+ messages in thread

* [PATCH stable v4.4 06/52] powerpc/rfi-flush: Make it possible to call setup_rfi_flush() again
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:19   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:19 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

commit abf110f3e1cea40f5ea15e85f5d67c39c14568a7 upstream.

For PowerVM migration we want to be able to call setup_rfi_flush()
again after we've migrated the partition.

To support that we need to check that we're not trying to allocate the
fallback flush area after memblock has gone away (i.e., boot-time only).

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/include/asm/setup.h | 2 +-
 arch/powerpc/kernel/setup_64.c   | 6 +++++-
 2 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/include/asm/setup.h b/arch/powerpc/include/asm/setup.h
index 7916b56f2e60..3733195be997 100644
--- a/arch/powerpc/include/asm/setup.h
+++ b/arch/powerpc/include/asm/setup.h
@@ -36,7 +36,7 @@ enum l1d_flush_type {
 	L1D_FLUSH_MTTRIG	= 0x8,
 };
 
-void __init setup_rfi_flush(enum l1d_flush_type, bool enable);
+void setup_rfi_flush(enum l1d_flush_type, bool enable);
 void do_rfi_flush_fixups(enum l1d_flush_type types);
 
 #endif /* !__ASSEMBLY__ */
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index 5bb4a6811371..6e9a4c1e8a4d 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -887,6 +887,10 @@ static void init_fallback_flush(void)
 	u64 l1d_size, limit;
 	int cpu;
 
+	/* Only allocate the fallback flush area once (at boot time). */
+	if (l1d_flush_fallback_area)
+		return;
+
 	l1d_size = ppc64_caches.dsize;
 	limit = min(safe_stack_limit(), ppc64_rma_size);
 
@@ -904,7 +908,7 @@ static void init_fallback_flush(void)
 	}
 }
 
-void __init setup_rfi_flush(enum l1d_flush_type types, bool enable)
+void setup_rfi_flush(enum l1d_flush_type types, bool enable)
 {
 	if (types & L1D_FLUSH_FALLBACK) {
 		pr_info("rfi-flush: Using fallback displacement flush\n");
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 180+ messages in thread

* [PATCH stable v4.4 07/52] powerpc/rfi-flush: Always enable fallback flush on pseries
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:19   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:19 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

commit 84749a58b6e382f109abf1e734bc4dd43c2c25bb upstream.

This ensures the fallback flush area is always allocated on pseries,
so that if an LPAR is migrated from a patched to an unpatched system,
it is possible to enable the fallback flush on the target system.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/platforms/pseries/setup.c | 10 +---------
 1 file changed, 1 insertion(+), 9 deletions(-)

diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
index ec1d1768a799..b831044d3b7d 100644
--- a/arch/powerpc/platforms/pseries/setup.c
+++ b/arch/powerpc/platforms/pseries/setup.c
@@ -508,26 +508,18 @@ static void pseries_setup_rfi_flush(void)
 
 	/* Enable by default */
 	enable = true;
+	types = L1D_FLUSH_FALLBACK;
 
 	rc = plpar_get_cpu_characteristics(&result);
 	if (rc == H_SUCCESS) {
-		types = L1D_FLUSH_NONE;
-
 		if (result.character & H_CPU_CHAR_L1D_FLUSH_TRIG2)
 			types |= L1D_FLUSH_MTTRIG;
 		if (result.character & H_CPU_CHAR_L1D_FLUSH_ORI30)
 			types |= L1D_FLUSH_ORI;
 
-		/* Use fallback if nothing set in hcall */
-		if (types == L1D_FLUSH_NONE)
-			types = L1D_FLUSH_FALLBACK;
-
 		if ((!(result.behaviour & H_CPU_BEHAV_L1D_FLUSH_PR)) ||
 		    (!(result.behaviour & H_CPU_BEHAV_FAVOUR_SECURITY)))
 			enable = false;
-	} else {
-		/* Default to fallback if case hcall is not available */
-		types = L1D_FLUSH_FALLBACK;
 	}
 
 	setup_rfi_flush(types, enable);
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 180+ messages in thread

* [PATCH stable v4.4 08/52] powerpc/rfi-flush: Differentiate enabled and patched flush types
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:19   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:19 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

From: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>

commit 0063d61ccfc011f379a31acaeba6de7c926fed2c upstream.

Currently the rfi-flush messages print 'Using <type> flush' for all
enabled_flush_types, but that is not necessarily true -- as now the
fallback flush is always enabled on pseries, but the fixup function
overwrites its nop/branch slot with other flush types, if available.

So, replace the 'Using <type> flush' messages with '<type> flush is
available'.

Also, print the patched flush types in the fixup function, so users
can know what is (not) being used (e.g., the slower, fallback flush,
or no flush type at all if flush is disabled via the debugfs switch).

Suggested-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/kernel/setup_64.c    | 6 +++---
 arch/powerpc/lib/feature-fixups.c | 9 ++++++++-
 2 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index 6e9a4c1e8a4d..e975f2533767 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -911,15 +911,15 @@ static void init_fallback_flush(void)
 void setup_rfi_flush(enum l1d_flush_type types, bool enable)
 {
 	if (types & L1D_FLUSH_FALLBACK) {
-		pr_info("rfi-flush: Using fallback displacement flush\n");
+		pr_info("rfi-flush: fallback displacement flush available\n");
 		init_fallback_flush();
 	}
 
 	if (types & L1D_FLUSH_ORI)
-		pr_info("rfi-flush: Using ori type flush\n");
+		pr_info("rfi-flush: ori type flush available\n");
 
 	if (types & L1D_FLUSH_MTTRIG)
-		pr_info("rfi-flush: Using mttrig type flush\n");
+		pr_info("rfi-flush: mttrig type flush available\n");
 
 	enabled_flush_types = types;
 
diff --git a/arch/powerpc/lib/feature-fixups.c b/arch/powerpc/lib/feature-fixups.c
index 3af014684872..b76b9b6b3a85 100644
--- a/arch/powerpc/lib/feature-fixups.c
+++ b/arch/powerpc/lib/feature-fixups.c
@@ -151,7 +151,14 @@ void do_rfi_flush_fixups(enum l1d_flush_type types)
 		patch_instruction(dest + 2, instrs[2]);
 	}
 
-	printk(KERN_DEBUG "rfi-flush: patched %d locations\n", i);
+	printk(KERN_DEBUG "rfi-flush: patched %d locations (%s flush)\n", i,
+		(types == L1D_FLUSH_NONE)       ? "no" :
+		(types == L1D_FLUSH_FALLBACK)   ? "fallback displacement" :
+		(types &  L1D_FLUSH_ORI)        ? (types & L1D_FLUSH_MTTRIG)
+							? "ori+mttrig type"
+							: "ori type" :
+		(types &  L1D_FLUSH_MTTRIG)     ? "mttrig type"
+						: "unknown");
 }
 #endif /* CONFIG_PPC_BOOK3S_64 */
 
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 180+ messages in thread

* [PATCH stable v4.4 09/52] powerpc/pseries: Add new H_GET_CPU_CHARACTERISTICS flags
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:19   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:19 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

commit c4bc36628d7f8b664657d8bd6ad1c44c177880b7 upstream.

Add some additional values which have been defined for the
H_GET_CPU_CHARACTERISTICS hypercall.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/include/asm/hvcall.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/powerpc/include/asm/hvcall.h b/arch/powerpc/include/asm/hvcall.h
index 449bbb87c257..6d7938deb624 100644
--- a/arch/powerpc/include/asm/hvcall.h
+++ b/arch/powerpc/include/asm/hvcall.h
@@ -292,6 +292,9 @@
 #define H_CPU_CHAR_L1D_FLUSH_ORI30	(1ull << 61) // IBM bit 2
 #define H_CPU_CHAR_L1D_FLUSH_TRIG2	(1ull << 60) // IBM bit 3
 #define H_CPU_CHAR_L1D_THREAD_PRIV	(1ull << 59) // IBM bit 4
+#define H_CPU_CHAR_BRANCH_HINTS_HONORED	(1ull << 58) // IBM bit 5
+#define H_CPU_CHAR_THREAD_RECONFIG_CTRL	(1ull << 57) // IBM bit 6
+#define H_CPU_CHAR_COUNT_CACHE_DISABLED	(1ull << 56) // IBM bit 7
 
 #define H_CPU_BEHAV_FAVOUR_SECURITY	(1ull << 63) // IBM bit 0
 #define H_CPU_BEHAV_L1D_FLUSH_PR	(1ull << 62) // IBM bit 1
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 180+ messages in thread

* [PATCH stable v4.4 10/52] powerpc/rfi-flush: Call setup_rfi_flush() after LPM migration
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:19   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:19 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

commit 921bc6cf807ceb2ab8005319cf39f33494d6b100 upstream.

We might have migrated to a machine that uses a different flush type,
or doesn't need flushing at all.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/platforms/pseries/mobility.c | 3 +++
 arch/powerpc/platforms/pseries/pseries.h  | 2 ++
 arch/powerpc/platforms/pseries/setup.c    | 2 +-
 3 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/platforms/pseries/mobility.c b/arch/powerpc/platforms/pseries/mobility.c
index 8dd0c8edefd6..c773396d0969 100644
--- a/arch/powerpc/platforms/pseries/mobility.c
+++ b/arch/powerpc/platforms/pseries/mobility.c
@@ -314,6 +314,9 @@ void post_mobility_fixup(void)
 		printk(KERN_ERR "Post-mobility device tree update "
 			"failed: %d\n", rc);
 
+	/* Possibly switch to a new RFI flush type */
+	pseries_setup_rfi_flush();
+
 	return;
 }
 
diff --git a/arch/powerpc/platforms/pseries/pseries.h b/arch/powerpc/platforms/pseries/pseries.h
index 8411c27293e4..e7d80797384d 100644
--- a/arch/powerpc/platforms/pseries/pseries.h
+++ b/arch/powerpc/platforms/pseries/pseries.h
@@ -81,4 +81,6 @@ extern struct pci_controller_ops pseries_pci_controller_ops;
 
 unsigned long pseries_memory_block_size(void);
 
+void pseries_setup_rfi_flush(void);
+
 #endif /* _PSERIES_PSERIES_H */
diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
index b831044d3b7d..ab8c4c8b46bd 100644
--- a/arch/powerpc/platforms/pseries/setup.c
+++ b/arch/powerpc/platforms/pseries/setup.c
@@ -499,7 +499,7 @@ static void __init find_and_init_phbs(void)
 	of_pci_check_probe_only();
 }
 
-static void pseries_setup_rfi_flush(void)
+void pseries_setup_rfi_flush(void)
 {
 	struct h_cpu_char_result result;
 	enum l1d_flush_type types;
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 180+ messages in thread

* [PATCH stable v4.4 11/52] powerpc: Add security feature flags for Spectre/Meltdown
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:19   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:19 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

commit 9a868f634349e62922c226834aa23e3d1329ae7f upstream.

This commit adds security feature flags to reflect the settings we
receive from firmware regarding Spectre/Meltdown mitigations.

The feature names reflect the names we are given by firmware on bare
metal machines. See the hostboot source for details.

Arguably these could be firmware features, but that would require them
to be read early in boot so they're available prior to asm feature
patching, and we don't actually want to use them for patching. We may
also want to dynamically update them in future, which would be
incompatible with the way firmware features work (at the moment at
least). So for now just make them separate flags.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/include/asm/security_features.h | 65 ++++++++++++++++++++
 arch/powerpc/kernel/Makefile                 |  2 +-
 arch/powerpc/kernel/security.c               | 15 +++++
 3 files changed, 81 insertions(+), 1 deletion(-)
 create mode 100644 arch/powerpc/include/asm/security_features.h
 create mode 100644 arch/powerpc/kernel/security.c

diff --git a/arch/powerpc/include/asm/security_features.h b/arch/powerpc/include/asm/security_features.h
new file mode 100644
index 000000000000..db00ad2c72c2
--- /dev/null
+++ b/arch/powerpc/include/asm/security_features.h
@@ -0,0 +1,65 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/*
+ * Security related feature bit definitions.
+ *
+ * Copyright 2018, Michael Ellerman, IBM Corporation.
+ */
+
+#ifndef _ASM_POWERPC_SECURITY_FEATURES_H
+#define _ASM_POWERPC_SECURITY_FEATURES_H
+
+
+extern unsigned long powerpc_security_features;
+
+static inline void security_ftr_set(unsigned long feature)
+{
+	powerpc_security_features |= feature;
+}
+
+static inline void security_ftr_clear(unsigned long feature)
+{
+	powerpc_security_features &= ~feature;
+}
+
+static inline bool security_ftr_enabled(unsigned long feature)
+{
+	return !!(powerpc_security_features & feature);
+}
+
+
+// Features indicating support for Spectre/Meltdown mitigations
+
+// The L1-D cache can be flushed with ori r30,r30,0
+#define SEC_FTR_L1D_FLUSH_ORI30		0x0000000000000001ull
+
+// The L1-D cache can be flushed with mtspr 882,r0 (aka SPRN_TRIG2)
+#define SEC_FTR_L1D_FLUSH_TRIG2		0x0000000000000002ull
+
+// ori r31,r31,0 acts as a speculation barrier
+#define SEC_FTR_SPEC_BAR_ORI31		0x0000000000000004ull
+
+// Speculation past bctr is disabled
+#define SEC_FTR_BCCTRL_SERIALISED	0x0000000000000008ull
+
+// Entries in L1-D are private to a SMT thread
+#define SEC_FTR_L1D_THREAD_PRIV		0x0000000000000010ull
+
+// Indirect branch prediction cache disabled
+#define SEC_FTR_COUNT_CACHE_DISABLED	0x0000000000000020ull
+
+
+// Features indicating need for Spectre/Meltdown mitigations
+
+// The L1-D cache should be flushed on MSR[HV] 1->0 transition (hypervisor to guest)
+#define SEC_FTR_L1D_FLUSH_HV		0x0000000000000040ull
+
+// The L1-D cache should be flushed on MSR[PR] 0->1 transition (kernel to userspace)
+#define SEC_FTR_L1D_FLUSH_PR		0x0000000000000080ull
+
+// A speculation barrier should be used for bounds checks (Spectre variant 1)
+#define SEC_FTR_BNDS_CHK_SPEC_BAR	0x0000000000000100ull
+
+// Firmware configuration indicates user favours security over performance
+#define SEC_FTR_FAVOUR_SECURITY		0x0000000000000200ull
+
+#endif /* _ASM_POWERPC_SECURITY_FEATURES_H */
diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index ba336930d448..e9b0962743b8 100644
--- a/arch/powerpc/kernel/Makefile
+++ b/arch/powerpc/kernel/Makefile
@@ -40,7 +40,7 @@ obj-$(CONFIG_PPC64)		+= setup_64.o sys_ppc32.o \
 obj-$(CONFIG_VDSO32)		+= vdso32/
 obj-$(CONFIG_HAVE_HW_BREAKPOINT)	+= hw_breakpoint.o
 obj-$(CONFIG_PPC_BOOK3S_64)	+= cpu_setup_ppc970.o cpu_setup_pa6t.o
-obj-$(CONFIG_PPC_BOOK3S_64)	+= cpu_setup_power.o
+obj-$(CONFIG_PPC_BOOK3S_64)	+= cpu_setup_power.o security.o
 obj-$(CONFIG_PPC_BOOK3S_64)	+= mce.o mce_power.o
 obj64-$(CONFIG_RELOCATABLE)	+= reloc_64.o
 obj-$(CONFIG_PPC_BOOK3E_64)	+= exceptions-64e.o idle_book3e.o
diff --git a/arch/powerpc/kernel/security.c b/arch/powerpc/kernel/security.c
new file mode 100644
index 000000000000..4ccba00d224c
--- /dev/null
+++ b/arch/powerpc/kernel/security.c
@@ -0,0 +1,15 @@
+// SPDX-License-Identifier: GPL-2.0+
+//
+// Security related flags and so on.
+//
+// Copyright 2018, Michael Ellerman, IBM Corporation.
+
+#include <linux/kernel.h>
+#include <asm/security_features.h>
+
+
+unsigned long powerpc_security_features __read_mostly = \
+	SEC_FTR_L1D_FLUSH_HV | \
+	SEC_FTR_L1D_FLUSH_PR | \
+	SEC_FTR_BNDS_CHK_SPEC_BAR | \
+	SEC_FTR_FAVOUR_SECURITY;
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 180+ messages in thread

* [PATCH stable v4.4 12/52] powerpc/pseries: Set or clear security feature flags
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:19   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:19 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

commit f636c14790ead6cc22cf62279b1f8d7e11a67116 upstream.

Now that we have feature flags for security related things, set or
clear them based on what we receive from the hypercall.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/platforms/pseries/setup.c | 43 ++++++++++++++++++++++++++
 1 file changed, 43 insertions(+)

diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
index ab8c4c8b46bd..7c7c95c00252 100644
--- a/arch/powerpc/platforms/pseries/setup.c
+++ b/arch/powerpc/platforms/pseries/setup.c
@@ -67,6 +67,7 @@
 #include <asm/eeh.h>
 #include <asm/reg.h>
 #include <asm/plpar_wrappers.h>
+#include <asm/security_features.h>
 
 #include "pseries.h"
 
@@ -499,6 +500,40 @@ static void __init find_and_init_phbs(void)
 	of_pci_check_probe_only();
 }
 
+static void init_cpu_char_feature_flags(struct h_cpu_char_result *result)
+{
+	if (result->character & H_CPU_CHAR_SPEC_BAR_ORI31)
+		security_ftr_set(SEC_FTR_SPEC_BAR_ORI31);
+
+	if (result->character & H_CPU_CHAR_BCCTRL_SERIALISED)
+		security_ftr_set(SEC_FTR_BCCTRL_SERIALISED);
+
+	if (result->character & H_CPU_CHAR_L1D_FLUSH_ORI30)
+		security_ftr_set(SEC_FTR_L1D_FLUSH_ORI30);
+
+	if (result->character & H_CPU_CHAR_L1D_FLUSH_TRIG2)
+		security_ftr_set(SEC_FTR_L1D_FLUSH_TRIG2);
+
+	if (result->character & H_CPU_CHAR_L1D_THREAD_PRIV)
+		security_ftr_set(SEC_FTR_L1D_THREAD_PRIV);
+
+	if (result->character & H_CPU_CHAR_COUNT_CACHE_DISABLED)
+		security_ftr_set(SEC_FTR_COUNT_CACHE_DISABLED);
+
+	/*
+	 * The features below are enabled by default, so we instead look to see
+	 * if firmware has *disabled* them, and clear them if so.
+	 */
+	if (!(result->character & H_CPU_BEHAV_FAVOUR_SECURITY))
+		security_ftr_clear(SEC_FTR_FAVOUR_SECURITY);
+
+	if (!(result->character & H_CPU_BEHAV_L1D_FLUSH_PR))
+		security_ftr_clear(SEC_FTR_L1D_FLUSH_PR);
+
+	if (!(result->character & H_CPU_BEHAV_BNDS_CHK_SPEC_BAR))
+		security_ftr_clear(SEC_FTR_BNDS_CHK_SPEC_BAR);
+}
+
 void pseries_setup_rfi_flush(void)
 {
 	struct h_cpu_char_result result;
@@ -512,6 +547,8 @@ void pseries_setup_rfi_flush(void)
 
 	rc = plpar_get_cpu_characteristics(&result);
 	if (rc == H_SUCCESS) {
+		init_cpu_char_feature_flags(&result);
+
 		if (result.character & H_CPU_CHAR_L1D_FLUSH_TRIG2)
 			types |= L1D_FLUSH_MTTRIG;
 		if (result.character & H_CPU_CHAR_L1D_FLUSH_ORI30)
@@ -522,6 +559,12 @@ void pseries_setup_rfi_flush(void)
 			enable = false;
 	}
 
+	/*
+	 * We're the guest so this doesn't apply to us, clear it to simplify
+	 * handling of it elsewhere.
+	 */
+	security_ftr_clear(SEC_FTR_L1D_FLUSH_HV);
+
 	setup_rfi_flush(types, enable);
 }
 
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 180+ messages in thread

* [PATCH stable v4.4 13/52] powerpc/powernv: Set or clear security feature flags
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:19   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:19 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

commit 77addf6e95c8689e478d607176b399a6242a777e upstream.

Now that we have feature flags for security related things, set or
clear them based on what we see in the device tree provided by
firmware.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/platforms/powernv/setup.c | 56 ++++++++++++++++++++++++++
 1 file changed, 56 insertions(+)

diff --git a/arch/powerpc/platforms/powernv/setup.c b/arch/powerpc/platforms/powernv/setup.c
index fdc5f25a1b4a..1edb4e05921c 100644
--- a/arch/powerpc/platforms/powernv/setup.c
+++ b/arch/powerpc/platforms/powernv/setup.c
@@ -37,9 +37,63 @@
 #include <asm/smp.h>
 #include <asm/tm.h>
 #include <asm/setup.h>
+#include <asm/security_features.h>
 
 #include "powernv.h"
 
+
+static bool fw_feature_is(const char *state, const char *name,
+			  struct device_node *fw_features)
+{
+	struct device_node *np;
+	bool rc = false;
+
+	np = of_get_child_by_name(fw_features, name);
+	if (np) {
+		rc = of_property_read_bool(np, state);
+		of_node_put(np);
+	}
+
+	return rc;
+}
+
+static void init_fw_feat_flags(struct device_node *np)
+{
+	if (fw_feature_is("enabled", "inst-spec-barrier-ori31,31,0", np))
+		security_ftr_set(SEC_FTR_SPEC_BAR_ORI31);
+
+	if (fw_feature_is("enabled", "fw-bcctrl-serialized", np))
+		security_ftr_set(SEC_FTR_BCCTRL_SERIALISED);
+
+	if (fw_feature_is("enabled", "inst-l1d-flush-ori30,30,0", np))
+		security_ftr_set(SEC_FTR_L1D_FLUSH_ORI30);
+
+	if (fw_feature_is("enabled", "inst-l1d-flush-trig2", np))
+		security_ftr_set(SEC_FTR_L1D_FLUSH_TRIG2);
+
+	if (fw_feature_is("enabled", "fw-l1d-thread-split", np))
+		security_ftr_set(SEC_FTR_L1D_THREAD_PRIV);
+
+	if (fw_feature_is("enabled", "fw-count-cache-disabled", np))
+		security_ftr_set(SEC_FTR_COUNT_CACHE_DISABLED);
+
+	/*
+	 * The features below are enabled by default, so we instead look to see
+	 * if firmware has *disabled* them, and clear them if so.
+	 */
+	if (fw_feature_is("disabled", "speculation-policy-favor-security", np))
+		security_ftr_clear(SEC_FTR_FAVOUR_SECURITY);
+
+	if (fw_feature_is("disabled", "needs-l1d-flush-msr-pr-0-to-1", np))
+		security_ftr_clear(SEC_FTR_L1D_FLUSH_PR);
+
+	if (fw_feature_is("disabled", "needs-l1d-flush-msr-hv-1-to-0", np))
+		security_ftr_clear(SEC_FTR_L1D_FLUSH_HV);
+
+	if (fw_feature_is("disabled", "needs-spec-barrier-for-bound-checks", np))
+		security_ftr_clear(SEC_FTR_BNDS_CHK_SPEC_BAR);
+}
+
 static void pnv_setup_rfi_flush(void)
 {
 	struct device_node *np, *fw_features;
@@ -55,6 +109,8 @@ static void pnv_setup_rfi_flush(void)
 	of_node_put(np);
 
 	if (fw_features) {
+		init_fw_feat_flags(fw_features);
+
 		np = of_get_child_by_name(fw_features, "inst-l1d-flush-trig2");
 		if (np && of_property_read_bool(np, "enabled"))
 			type = L1D_FLUSH_MTTRIG;
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 180+ messages in thread

* [PATCH stable v4.4 14/52] powerpc/64s: Move cpu_show_meltdown()
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:19   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:19 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

commit 8ad33041563a10b34988800c682ada14b2612533 upstream.

This landed in setup_64.c for no good reason other than we had nowhere
else to put it. Now that we have a security-related file, that is a
better place for it so move it.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/kernel/security.c | 11 +++++++++++
 arch/powerpc/kernel/setup_64.c |  8 --------
 2 files changed, 11 insertions(+), 8 deletions(-)

diff --git a/arch/powerpc/kernel/security.c b/arch/powerpc/kernel/security.c
index 4ccba00d224c..564e7f182a16 100644
--- a/arch/powerpc/kernel/security.c
+++ b/arch/powerpc/kernel/security.c
@@ -5,6 +5,8 @@
 // Copyright 2018, Michael Ellerman, IBM Corporation.
 
 #include <linux/kernel.h>
+#include <linux/device.h>
+
 #include <asm/security_features.h>
 
 
@@ -13,3 +15,12 @@ unsigned long powerpc_security_features __read_mostly = \
 	SEC_FTR_L1D_FLUSH_PR | \
 	SEC_FTR_BNDS_CHK_SPEC_BAR | \
 	SEC_FTR_FAVOUR_SECURITY;
+
+
+ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr, char *buf)
+{
+	if (rfi_flush)
+		return sprintf(buf, "Mitigation: RFI Flush\n");
+
+	return sprintf(buf, "Vulnerable\n");
+}
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index e975f2533767..41c537d8a688 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -961,12 +961,4 @@ static __init int rfi_flush_debugfs_init(void)
 }
 device_initcall(rfi_flush_debugfs_init);
 #endif
-
-ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr, char *buf)
-{
-	if (rfi_flush)
-		return sprintf(buf, "Mitigation: RFI Flush\n");
-
-	return sprintf(buf, "Vulnerable\n");
-}
 #endif /* CONFIG_PPC_BOOK3S_64 */
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 180+ messages in thread

* [PATCH stable v4.4 15/52] powerpc/64s: Enhance the information in cpu_show_meltdown()
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:20   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:20 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

commit ff348355e9c72493947be337bb4fae4fc1a41eba upstream.

Now that we have the security feature flags we can make the
information displayed in the "meltdown" file more informative.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/include/asm/security_features.h |  1 +
 arch/powerpc/kernel/security.c               | 30 ++++++++++++++++++--
 2 files changed, 29 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/include/asm/security_features.h b/arch/powerpc/include/asm/security_features.h
index db00ad2c72c2..400a9050e035 100644
--- a/arch/powerpc/include/asm/security_features.h
+++ b/arch/powerpc/include/asm/security_features.h
@@ -10,6 +10,7 @@
 
 
 extern unsigned long powerpc_security_features;
+extern bool rfi_flush;
 
 static inline void security_ftr_set(unsigned long feature)
 {
diff --git a/arch/powerpc/kernel/security.c b/arch/powerpc/kernel/security.c
index 564e7f182a16..865db6f8bcca 100644
--- a/arch/powerpc/kernel/security.c
+++ b/arch/powerpc/kernel/security.c
@@ -6,6 +6,7 @@
 
 #include <linux/kernel.h>
 #include <linux/device.h>
+#include <linux/seq_buf.h>
 
 #include <asm/security_features.h>
 
@@ -19,8 +20,33 @@ unsigned long powerpc_security_features __read_mostly = \
 
 ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr, char *buf)
 {
-	if (rfi_flush)
-		return sprintf(buf, "Mitigation: RFI Flush\n");
+	bool thread_priv;
+
+	thread_priv = security_ftr_enabled(SEC_FTR_L1D_THREAD_PRIV);
+
+	if (rfi_flush || thread_priv) {
+		struct seq_buf s;
+		seq_buf_init(&s, buf, PAGE_SIZE - 1);
+
+		seq_buf_printf(&s, "Mitigation: ");
+
+		if (rfi_flush)
+			seq_buf_printf(&s, "RFI Flush");
+
+		if (rfi_flush && thread_priv)
+			seq_buf_printf(&s, ", ");
+
+		if (thread_priv)
+			seq_buf_printf(&s, "L1D private per thread");
+
+		seq_buf_printf(&s, "\n");
+
+		return s.len;
+	}
+
+	if (!security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV) &&
+	    !security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR))
+		return sprintf(buf, "Not affected\n");
 
 	return sprintf(buf, "Vulnerable\n");
 }
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 180+ messages in thread

* [PATCH stable v4.4 16/52] powerpc/powernv: Use the security flags in pnv_setup_rfi_flush()
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:20   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:20 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

commit 37c0bdd00d3ae83369ab60a6712c28e11e6458d5 upstream.

Now that we have the security flags we can significantly simplify the
code in pnv_setup_rfi_flush(), because we can use the flags instead of
checking device tree properties and because the security flags have
pessimistic defaults.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/platforms/powernv/setup.c | 41 +++++++-------------------
 1 file changed, 10 insertions(+), 31 deletions(-)

diff --git a/arch/powerpc/platforms/powernv/setup.c b/arch/powerpc/platforms/powernv/setup.c
index 1edb4e05921c..a91330f79f66 100644
--- a/arch/powerpc/platforms/powernv/setup.c
+++ b/arch/powerpc/platforms/powernv/setup.c
@@ -65,7 +65,7 @@ static void init_fw_feat_flags(struct device_node *np)
 	if (fw_feature_is("enabled", "fw-bcctrl-serialized", np))
 		security_ftr_set(SEC_FTR_BCCTRL_SERIALISED);
 
-	if (fw_feature_is("enabled", "inst-spec-barrier-ori31,31,0", np))
+	if (fw_feature_is("enabled", "inst-l1d-flush-ori30,30,0", np))
 		security_ftr_set(SEC_FTR_L1D_FLUSH_ORI30);
 
 	if (fw_feature_is("enabled", "inst-l1d-flush-trig2", np))
@@ -98,11 +98,10 @@ static void pnv_setup_rfi_flush(void)
 {
 	struct device_node *np, *fw_features;
 	enum l1d_flush_type type;
-	int enable;
+	bool enable;
 
 	/* Default to fallback in case fw-features are not available */
 	type = L1D_FLUSH_FALLBACK;
-	enable = 1;
 
 	np = of_find_node_by_name(NULL, "ibm,opal");
 	fw_features = of_get_child_by_name(np, "fw-features");
@@ -110,40 +109,20 @@ static void pnv_setup_rfi_flush(void)
 
 	if (fw_features) {
 		init_fw_feat_flags(fw_features);
+		of_node_put(fw_features);
 
-		np = of_get_child_by_name(fw_features, "inst-l1d-flush-trig2");
-		if (np && of_property_read_bool(np, "enabled"))
+		if (security_ftr_enabled(SEC_FTR_L1D_FLUSH_TRIG2))
 			type = L1D_FLUSH_MTTRIG;
 
-		of_node_put(np);
-
-		np = of_get_child_by_name(fw_features, "inst-l1d-flush-ori30,30,0");
-		if (np && of_property_read_bool(np, "enabled"))
+		if (security_ftr_enabled(SEC_FTR_L1D_FLUSH_ORI30))
 			type = L1D_FLUSH_ORI;
-
-		of_node_put(np);
-
-		/* Enable unless firmware says NOT to */
-		enable = 2;
-		np = of_get_child_by_name(fw_features, "needs-l1d-flush-msr-hv-1-to-0");
-		if (np && of_property_read_bool(np, "disabled"))
-			enable--;
-
-		of_node_put(np);
-
-		np = of_get_child_by_name(fw_features, "needs-l1d-flush-msr-pr-0-to-1");
-		if (np && of_property_read_bool(np, "disabled"))
-			enable--;
-
-		np = of_get_child_by_name(fw_features, "speculation-policy-favor-security");
-		if (np && of_property_read_bool(np, "disabled"))
-			enable = 0;
-
-		of_node_put(np);
-		of_node_put(fw_features);
 	}
 
-	setup_rfi_flush(type, enable > 0);
+	enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) && \
+		 (security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR)   || \
+		  security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV));
+
+	setup_rfi_flush(type, enable);
 }
 
 static void __init pnv_setup_arch(void)
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 180+ messages in thread

* [PATCH stable v4.4 17/52] powerpc/pseries: Use the security flags in pseries_setup_rfi_flush()
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:20   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:20 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

commit 2e4a16161fcd324b1f9bf6cb6856529f7eaf0689 upstream.

Now that we have the security flags we can simplify the code in
pseries_setup_rfi_flush() because the security flags have pessimistic
defaults.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/platforms/pseries/setup.c | 27 ++++++++++++--------------
 1 file changed, 12 insertions(+), 15 deletions(-)

diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
index 7c7c95c00252..3462194ed329 100644
--- a/arch/powerpc/platforms/pseries/setup.c
+++ b/arch/powerpc/platforms/pseries/setup.c
@@ -541,30 +541,27 @@ void pseries_setup_rfi_flush(void)
 	bool enable;
 	long rc;
 
-	/* Enable by default */
-	enable = true;
-	types = L1D_FLUSH_FALLBACK;
-
 	rc = plpar_get_cpu_characteristics(&result);
-	if (rc == H_SUCCESS) {
+	if (rc == H_SUCCESS)
 		init_cpu_char_feature_flags(&result);
 
-		if (result.character & H_CPU_CHAR_L1D_FLUSH_TRIG2)
-			types |= L1D_FLUSH_MTTRIG;
-		if (result.character & H_CPU_CHAR_L1D_FLUSH_ORI30)
-			types |= L1D_FLUSH_ORI;
-
-		if ((!(result.behaviour & H_CPU_BEHAV_L1D_FLUSH_PR)) ||
-		    (!(result.behaviour & H_CPU_BEHAV_FAVOUR_SECURITY)))
-			enable = false;
-	}
-
 	/*
 	 * We're the guest so this doesn't apply to us, clear it to simplify
 	 * handling of it elsewhere.
 	 */
 	security_ftr_clear(SEC_FTR_L1D_FLUSH_HV);
 
+	types = L1D_FLUSH_FALLBACK;
+
+	if (security_ftr_enabled(SEC_FTR_L1D_FLUSH_TRIG2))
+		types |= L1D_FLUSH_MTTRIG;
+
+	if (security_ftr_enabled(SEC_FTR_L1D_FLUSH_ORI30))
+		types |= L1D_FLUSH_ORI;
+
+	enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) && \
+		 security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR);
+
 	setup_rfi_flush(types, enable);
 }
 
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 180+ messages in thread

* [PATCH stable v4.4 18/52] powerpc/64s: Wire up cpu_show_spectre_v1()
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:20   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:20 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

commit 56986016cb8cd9050e601831fe89f332b4e3c46e upstream.

Add a definition for cpu_show_spectre_v1() to override the generic
version. Currently this just prints "Not affected" or "Vulnerable"
based on the firmware flag.

Although the kernel does have array_index_nospec() in a few places, we
haven't yet audited all the powerpc code to see where it's necessary,
so for now we don't list that as a mitigation.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/kernel/security.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/arch/powerpc/kernel/security.c b/arch/powerpc/kernel/security.c
index 865db6f8bcca..0eace3cac818 100644
--- a/arch/powerpc/kernel/security.c
+++ b/arch/powerpc/kernel/security.c
@@ -50,3 +50,11 @@ ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr, cha
 
 	return sprintf(buf, "Vulnerable\n");
 }
+
+ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr, char *buf)
+{
+	if (!security_ftr_enabled(SEC_FTR_BNDS_CHK_SPEC_BAR))
+		return sprintf(buf, "Not affected\n");
+
+	return sprintf(buf, "Vulnerable\n");
+}
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 180+ messages in thread

* [PATCH stable v4.4 19/52] powerpc/64s: Wire up cpu_show_spectre_v2()
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:20   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:20 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

commit d6fbe1c55c55c6937cbea3531af7da84ab7473c3 upstream.

Add a definition for cpu_show_spectre_v2() to override the generic
version. The output has several permutations; though in practice some
may not occur, we cater for any combination.

The most verbose is:

  Mitigation: Indirect branch serialisation (kernel only), Indirect
  branch cache disabled, ori31 speculation barrier enabled

We don't treat the ori31 speculation barrier as a mitigation on its
own, because it has to be *used* by code in order to be a mitigation
and we don't know if userspace is doing that. So if that's all we see
we say:

  Vulnerable, ori31 speculation barrier enabled

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/kernel/security.c | 33 +++++++++++++++++++++++++++++++++
 1 file changed, 33 insertions(+)

diff --git a/arch/powerpc/kernel/security.c b/arch/powerpc/kernel/security.c
index 0eace3cac818..2cee3dcd231b 100644
--- a/arch/powerpc/kernel/security.c
+++ b/arch/powerpc/kernel/security.c
@@ -58,3 +58,36 @@ ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr, c
 
 	return sprintf(buf, "Vulnerable\n");
 }
+
+ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr, char *buf)
+{
+	bool bcs, ccd, ori;
+	struct seq_buf s;
+
+	seq_buf_init(&s, buf, PAGE_SIZE - 1);
+
+	bcs = security_ftr_enabled(SEC_FTR_BCCTRL_SERIALISED);
+	ccd = security_ftr_enabled(SEC_FTR_COUNT_CACHE_DISABLED);
+	ori = security_ftr_enabled(SEC_FTR_SPEC_BAR_ORI31);
+
+	if (bcs || ccd) {
+		seq_buf_printf(&s, "Mitigation: ");
+
+		if (bcs)
+			seq_buf_printf(&s, "Indirect branch serialisation (kernel only)");
+
+		if (bcs && ccd)
+			seq_buf_printf(&s, ", ");
+
+		if (ccd)
+			seq_buf_printf(&s, "Indirect branch cache disabled");
+	} else
+		seq_buf_printf(&s, "Vulnerable");
+
+	if (ori)
+		seq_buf_printf(&s, ", ori31 speculation barrier enabled");
+
+	seq_buf_printf(&s, "\n");
+
+	return s.len;
+}
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 180+ messages in thread

* [PATCH stable v4.4 20/52] powerpc/pseries: Fix clearing of security feature flags
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:20   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:20 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

From: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>

commit 0f9bdfe3c77091e8704d2e510eb7c2c2c6cde524 upstream.

The H_CPU_BEHAV_* flags should be checked for in the 'behaviour' field
of 'struct h_cpu_char_result' -- 'character' is for H_CPU_CHAR_*
flags.

Found by playing around with QEMU's implementation of the hypercall:

  H_CPU_CHAR=0xf000000000000000
  H_CPU_BEHAV=0x0000000000000000

  This clears H_CPU_BEHAV_FAVOUR_SECURITY and H_CPU_BEHAV_L1D_FLUSH_PR
  so pseries_setup_rfi_flush() disables 'rfi_flush'; and it also
  clears H_CPU_CHAR_L1D_THREAD_PRIV flag. So there is no RFI flush
  mitigation at all for cpu_show_meltdown() to report; but currently
  it does:

  Original kernel:

    # cat /sys/devices/system/cpu/vulnerabilities/meltdown
    Mitigation: RFI Flush

  Patched kernel:

    # cat /sys/devices/system/cpu/vulnerabilities/meltdown
    Not affected

  H_CPU_CHAR=0x0000000000000000
  H_CPU_BEHAV=0xf000000000000000

  This sets H_CPU_BEHAV_BNDS_CHK_SPEC_BAR so cpu_show_spectre_v1() should
  report vulnerable; but currently it doesn't:

  Original kernel:

    # cat /sys/devices/system/cpu/vulnerabilities/spectre_v1
    Not affected

  Patched kernel:

    # cat /sys/devices/system/cpu/vulnerabilities/spectre_v1
    Vulnerable

Brown-paper-bag-by: Michael Ellerman <mpe@ellerman.id.au>
Fixes: f636c14790ea ("powerpc/pseries: Set or clear security feature flags")
Signed-off-by: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/platforms/pseries/setup.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
index 3462194ed329..0a2ca848b8db 100644
--- a/arch/powerpc/platforms/pseries/setup.c
+++ b/arch/powerpc/platforms/pseries/setup.c
@@ -524,13 +524,13 @@ static void init_cpu_char_feature_flags(struct h_cpu_char_result *result)
 	 * The features below are enabled by default, so we instead look to see
 	 * if firmware has *disabled* them, and clear them if so.
 	 */
-	if (!(result->character & H_CPU_BEHAV_FAVOUR_SECURITY))
+	if (!(result->behaviour & H_CPU_BEHAV_FAVOUR_SECURITY))
 		security_ftr_clear(SEC_FTR_FAVOUR_SECURITY);
 
-	if (!(result->character & H_CPU_BEHAV_L1D_FLUSH_PR))
+	if (!(result->behaviour & H_CPU_BEHAV_L1D_FLUSH_PR))
 		security_ftr_clear(SEC_FTR_L1D_FLUSH_PR);
 
-	if (!(result->character & H_CPU_BEHAV_BNDS_CHK_SPEC_BAR))
+	if (!(result->behaviour & H_CPU_BEHAV_BNDS_CHK_SPEC_BAR))
 		security_ftr_clear(SEC_FTR_BNDS_CHK_SPEC_BAR);
 }
 
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 180+ messages in thread

* [PATCH stable v4.4 21/52] powerpc: Move default security feature flags
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:20   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:20 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

From: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>

commit e7347a86830f38dc3e40c8f7e28c04412b12a2e7 upstream.

This moves the definition of the default security feature flags
(i.e., enabled by default) closer to the security feature flags.

This can be used to restore current flags to the default flags.

Signed-off-by: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/include/asm/security_features.h | 8 ++++++++
 arch/powerpc/kernel/security.c               | 7 +------
 2 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/include/asm/security_features.h b/arch/powerpc/include/asm/security_features.h
index 400a9050e035..fa4d2e1cf772 100644
--- a/arch/powerpc/include/asm/security_features.h
+++ b/arch/powerpc/include/asm/security_features.h
@@ -63,4 +63,12 @@ static inline bool security_ftr_enabled(unsigned long feature)
 // Firmware configuration indicates user favours security over performance
 #define SEC_FTR_FAVOUR_SECURITY		0x0000000000000200ull
 
+
+// Features enabled by default
+#define SEC_FTR_DEFAULT \
+	(SEC_FTR_L1D_FLUSH_HV | \
+	 SEC_FTR_L1D_FLUSH_PR | \
+	 SEC_FTR_BNDS_CHK_SPEC_BAR | \
+	 SEC_FTR_FAVOUR_SECURITY)
+
 #endif /* _ASM_POWERPC_SECURITY_FEATURES_H */
diff --git a/arch/powerpc/kernel/security.c b/arch/powerpc/kernel/security.c
index 2cee3dcd231b..bab5a27ea805 100644
--- a/arch/powerpc/kernel/security.c
+++ b/arch/powerpc/kernel/security.c
@@ -11,12 +11,7 @@
 #include <asm/security_features.h>
 
 
-unsigned long powerpc_security_features __read_mostly = \
-	SEC_FTR_L1D_FLUSH_HV | \
-	SEC_FTR_L1D_FLUSH_PR | \
-	SEC_FTR_BNDS_CHK_SPEC_BAR | \
-	SEC_FTR_FAVOUR_SECURITY;
-
+unsigned long powerpc_security_features __read_mostly = SEC_FTR_DEFAULT;
 
 ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr, char *buf)
 {
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 180+ messages in thread

* [PATCH stable v4.4 22/52] powerpc/pseries: Restore default security feature flags on setup
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:20   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:20 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

From: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>

commit 6232774f1599028a15418179d17f7df47ede770a upstream.

After migration the security feature flags might have changed (e.g.,
destination system with unpatched firmware), but some flags are not
set/cleared again in init_cpu_char_feature_flags() because it assumes
the security flags to be the defaults.

Additionally, if the H_GET_CPU_CHARACTERISTICS hypercall fails then
init_cpu_char_feature_flags() does not run again, which potentially
might leave the system in an insecure or sub-optimal configuration.

So, just restore the security feature flags to the defaults assumed
by init_cpu_char_feature_flags() so it can set/clear them correctly,
and to ensure safe settings are in place in case the hypercall fails.

Fixes: f636c14790ea ("powerpc/pseries: Set or clear security feature flags")
Depends-on: 19887d6a28e2 ("powerpc: Move default security feature flags")
Signed-off-by: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/platforms/pseries/setup.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
index 0a2ca848b8db..9aa61b5e8568 100644
--- a/arch/powerpc/platforms/pseries/setup.c
+++ b/arch/powerpc/platforms/pseries/setup.c
@@ -502,6 +502,10 @@ static void __init find_and_init_phbs(void)
 
 static void init_cpu_char_feature_flags(struct h_cpu_char_result *result)
 {
+	/*
+	 * The features below are disabled by default, so we instead look to see
+	 * if firmware has *enabled* them, and set them if so.
+	 */
 	if (result->character & H_CPU_CHAR_SPEC_BAR_ORI31)
 		security_ftr_set(SEC_FTR_SPEC_BAR_ORI31);
 
@@ -541,6 +545,13 @@ void pseries_setup_rfi_flush(void)
 	bool enable;
 	long rc;
 
+	/*
+	 * Set features to the defaults assumed by init_cpu_char_feature_flags()
+	 * so it can set/clear again any features that might have changed after
+	 * migration, and in case the hypercall fails and it is not even called.
+	 */
+	powerpc_security_features = SEC_FTR_DEFAULT;
+
 	rc = plpar_get_cpu_characteristics(&result);
 	if (rc == H_SUCCESS)
 		init_cpu_char_feature_flags(&result);
-- 
2.20.1




* [PATCH stable v4.4 23/52] powerpc/64s: Fix section mismatch warnings from setup_rfi_flush()
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:20   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:20 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

commit 501a78cbc17c329fabf8e9750a1e9ab810c88a0e upstream.

The recent LPM changes to setup_rfi_flush() are causing some section
mismatch warnings because we removed the __init annotation on
setup_rfi_flush():

  The function setup_rfi_flush() references
  the function __init ppc64_bolted_size().
  the function __init memblock_alloc_base().

The references are actually in init_fallback_flush(), but that is
inlined into setup_rfi_flush().

These references are safe because:
 - only pseries calls setup_rfi_flush() at runtime
 - pseries always passes L1D_FLUSH_FALLBACK at boot
 - so the fallback flush area will always be allocated
 - so the check in init_fallback_flush() will always return early:
   /* Only allocate the fallback flush area once (at boot time). */
   if (l1d_flush_fallback_area)
   	return;

 - and therefore we won't actually call the freed init routines.
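
The allocate-once guard that makes the reference safe can be modelled in
a few lines of host-side C (hypothetical stand-ins for the kernel's
init-time allocators):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-ins for the kernel's fallback flush state. */
static void *l1d_flush_fallback_area;
static int alloc_count;

/*
 * Mirrors init_fallback_flush(): the early return means the init-time
 * allocator below is only ever reached on the first call, i.e. at boot,
 * before __init text/data is freed -- so later runtime calls never touch
 * the freed init routines.
 */
static void init_fallback_flush(void)
{
	if (l1d_flush_fallback_area)
		return;

	/* Stands in for memblock_alloc_base(), an __init function. */
	l1d_flush_fallback_area = malloc(4096);
	alloc_count++;
}
```

Calling it repeatedly allocates exactly once, which is the property the
`__ref` annotation relies on.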

We should rework the code to make it safer by default rather than
relying on the above, but for now as a quick-fix just add a __ref
annotation to squash the warning.

Fixes: abf110f3e1ce ("powerpc/rfi-flush: Make it possible to call setup_rfi_flush() again")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/kernel/setup_64.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index 41c537d8a688..64c1e76b5972 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -882,7 +882,7 @@ void rfi_flush_enable(bool enable)
 	rfi_flush = enable;
 }
 
-static void init_fallback_flush(void)
+static void __ref init_fallback_flush(void)
 {
 	u64 l1d_size, limit;
 	int cpu;
-- 
2.20.1




* [PATCH stable v4.4 24/52] powerpc/64s: Add support for a store forwarding barrier at kernel entry/exit
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:20   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:20 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

From: Nicholas Piggin <npiggin@gmail.com>

commit a048a07d7f4535baa4cbad6bc024f175317ab938 upstream.

On some CPUs we can prevent a vulnerability related to store-to-load
forwarding by preventing store forwarding between privilege domains,
by inserting a barrier in kernel entry and exit paths.

This is known to be the case on at least Power7, Power8 and Power9
powerpc CPUs.

Barriers must generally be inserted before the first load after moving
to a higher privilege, and after the last store before moving to a
lower privilege; both HV and PR privilege transitions must be protected.

Barriers are added as patch sections, with all kernel/hypervisor entry
points patched, and the exit points to lower privilege levels patched
similarly to the RFI flush patching.

Firmware advertisement is not implemented yet, so CPU flush types
are hard coded.
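
The patching scheme amounts to reserving nop slots at each entry point
and rewriting them at boot with the chosen barrier sequence. A
simplified host-side model of the SYNC_ORI case (not the kernel's
patch_instruction() machinery; opcodes as in the patch below):

```c
#include <assert.h>
#include <stdint.h>

#define PPC_NOP 0x60000000u /* nop (ori 0,0,0) */

/*
 * A patch slot as emitted by STF_ENTRY_BARRIER_SLOT: three nops that the
 * fixup pass overwrites at boot once the barrier type is known.
 */
static uint32_t slot[3] = { PPC_NOP, PPC_NOP, PPC_NOP };

/* Simplified do_stf_entry_barrier_fixups() for the SYNC_ORI flavour. */
static void patch_sync_ori(uint32_t *dest)
{
	dest[0] = 0x7c0004ac; /* hwsync */
	dest[1] = 0xe94d0000; /* ld r10,0(r13) */
	dest[2] = 0x63ff0000; /* ori 31,31,0 - speculation barrier */
}
```

Disabling the barrier is the inverse: write PPC_NOP back into all three
words, which is what the STF_BARRIER_NONE path does.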

Thanks to Michal Suchánek for bug fixes and review.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Michal Suchánek <msuchanek@suse.de>
[mpe: 4.4 doesn't have EXC_REAL_OOL_MASKABLE, so do it manually]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
---
 arch/powerpc/include/asm/exception-64s.h     |  35 +++++
 arch/powerpc/include/asm/feature-fixups.h    |  19 +++
 arch/powerpc/include/asm/security_features.h |  11 ++
 arch/powerpc/kernel/exceptions-64s.S         |  22 ++-
 arch/powerpc/kernel/security.c               | 148 +++++++++++++++++++
 arch/powerpc/kernel/vmlinux.lds.S            |  14 ++
 arch/powerpc/lib/feature-fixups.c            | 116 ++++++++++++++-
 arch/powerpc/platforms/powernv/setup.c       |   1 +
 arch/powerpc/platforms/pseries/setup.c       |   1 +
 9 files changed, 365 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
index 9bddbec441b8..3ed536bec462 100644
--- a/arch/powerpc/include/asm/exception-64s.h
+++ b/arch/powerpc/include/asm/exception-64s.h
@@ -50,6 +50,27 @@
 #define EX_PPR		88	/* SMT thread status register (priority) */
 #define EX_CTR		96
 
+#define STF_ENTRY_BARRIER_SLOT						\
+	STF_ENTRY_BARRIER_FIXUP_SECTION;				\
+	nop;								\
+	nop;								\
+	nop
+
+#define STF_EXIT_BARRIER_SLOT						\
+	STF_EXIT_BARRIER_FIXUP_SECTION;					\
+	nop;								\
+	nop;								\
+	nop;								\
+	nop;								\
+	nop;								\
+	nop
+
+/*
+ * r10 must be free to use, r13 must be paca
+ */
+#define INTERRUPT_TO_KERNEL						\
+	STF_ENTRY_BARRIER_SLOT
+
 /*
  * Macros for annotating the expected destination of (h)rfid
  *
@@ -66,16 +87,19 @@
 	rfid
 
 #define RFI_TO_USER							\
+	STF_EXIT_BARRIER_SLOT;						\
 	RFI_FLUSH_SLOT;							\
 	rfid;								\
 	b	rfi_flush_fallback
 
 #define RFI_TO_USER_OR_KERNEL						\
+	STF_EXIT_BARRIER_SLOT;						\
 	RFI_FLUSH_SLOT;							\
 	rfid;								\
 	b	rfi_flush_fallback
 
 #define RFI_TO_GUEST							\
+	STF_EXIT_BARRIER_SLOT;						\
 	RFI_FLUSH_SLOT;							\
 	rfid;								\
 	b	rfi_flush_fallback
@@ -84,21 +108,25 @@
 	hrfid
 
 #define HRFI_TO_USER							\
+	STF_EXIT_BARRIER_SLOT;						\
 	RFI_FLUSH_SLOT;							\
 	hrfid;								\
 	b	hrfi_flush_fallback
 
 #define HRFI_TO_USER_OR_KERNEL						\
+	STF_EXIT_BARRIER_SLOT;						\
 	RFI_FLUSH_SLOT;							\
 	hrfid;								\
 	b	hrfi_flush_fallback
 
 #define HRFI_TO_GUEST							\
+	STF_EXIT_BARRIER_SLOT;						\
 	RFI_FLUSH_SLOT;							\
 	hrfid;								\
 	b	hrfi_flush_fallback
 
 #define HRFI_TO_UNKNOWN							\
+	STF_EXIT_BARRIER_SLOT;						\
 	RFI_FLUSH_SLOT;							\
 	hrfid;								\
 	b	hrfi_flush_fallback
@@ -226,6 +254,7 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
 #define __EXCEPTION_PROLOG_1(area, extra, vec)				\
 	OPT_SAVE_REG_TO_PACA(area+EX_PPR, r9, CPU_FTR_HAS_PPR);		\
 	OPT_SAVE_REG_TO_PACA(area+EX_CFAR, r10, CPU_FTR_CFAR);		\
+	INTERRUPT_TO_KERNEL;						\
 	SAVE_CTR(r10, area);						\
 	mfcr	r9;							\
 	extra(vec);							\
@@ -512,6 +541,12 @@ label##_relon_hv:						\
 #define _MASKABLE_EXCEPTION_PSERIES(vec, label, h, extra)		\
 	__MASKABLE_EXCEPTION_PSERIES(vec, label, h, extra)
 
+#define MASKABLE_EXCEPTION_OOL(vec, label)				\
+	.globl label##_ool;						\
+label##_ool:								\
+	EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_TEST_PR, vec);		\
+	EXCEPTION_PROLOG_PSERIES_1(label##_common, EXC_STD);
+
 #define MASKABLE_EXCEPTION_PSERIES(loc, vec, label)			\
 	. = loc;							\
 	.globl label##_pSeries;						\
diff --git a/arch/powerpc/include/asm/feature-fixups.h b/arch/powerpc/include/asm/feature-fixups.h
index 7068bafbb2d6..350be873a941 100644
--- a/arch/powerpc/include/asm/feature-fixups.h
+++ b/arch/powerpc/include/asm/feature-fixups.h
@@ -184,6 +184,22 @@ label##3:					       	\
 	FTR_ENTRY_OFFSET label##1b-label##3b;		\
 	.popsection;
 
+#define STF_ENTRY_BARRIER_FIXUP_SECTION			\
+953:							\
+	.pushsection __stf_entry_barrier_fixup,"a";	\
+	.align 2;					\
+954:							\
+	FTR_ENTRY_OFFSET 953b-954b;			\
+	.popsection;
+
+#define STF_EXIT_BARRIER_FIXUP_SECTION			\
+955:							\
+	.pushsection __stf_exit_barrier_fixup,"a";	\
+	.align 2;					\
+956:							\
+	FTR_ENTRY_OFFSET 955b-956b;			\
+	.popsection;
+
 #define RFI_FLUSH_FIXUP_SECTION				\
 951:							\
 	.pushsection __rfi_flush_fixup,"a";		\
@@ -195,6 +211,9 @@ label##3:					       	\
 
 #ifndef __ASSEMBLY__
 
+extern long stf_barrier_fallback;
+extern long __start___stf_entry_barrier_fixup, __stop___stf_entry_barrier_fixup;
+extern long __start___stf_exit_barrier_fixup, __stop___stf_exit_barrier_fixup;
 extern long __start___rfi_flush_fixup, __stop___rfi_flush_fixup;
 
 #endif
diff --git a/arch/powerpc/include/asm/security_features.h b/arch/powerpc/include/asm/security_features.h
index fa4d2e1cf772..44989b22383c 100644
--- a/arch/powerpc/include/asm/security_features.h
+++ b/arch/powerpc/include/asm/security_features.h
@@ -12,6 +12,17 @@
 extern unsigned long powerpc_security_features;
 extern bool rfi_flush;
 
+/* These are bit flags */
+enum stf_barrier_type {
+	STF_BARRIER_NONE	= 0x1,
+	STF_BARRIER_FALLBACK	= 0x2,
+	STF_BARRIER_EIEIO	= 0x4,
+	STF_BARRIER_SYNC_ORI	= 0x8,
+};
+
+void setup_stf_barrier(void);
+void do_stf_barrier_fixups(enum stf_barrier_type types);
+
 static inline void security_ftr_set(unsigned long feature)
 {
 	powerpc_security_features |= feature;
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index d2ff233ddc53..10e7cec9553d 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -36,6 +36,7 @@ BEGIN_FTR_SECTION						\
 END_FTR_SECTION_IFSET(CPU_FTR_REAL_LE)				\
 	mr	r9,r13 ;					\
 	GET_PACA(r13) ;						\
+	INTERRUPT_TO_KERNEL ;					\
 	mfspr	r11,SPRN_SRR0 ;					\
 0:
 
@@ -292,7 +293,9 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
 	. = 0x900
 	.globl decrementer_pSeries
 decrementer_pSeries:
-	_MASKABLE_EXCEPTION_PSERIES(0x900, decrementer, EXC_STD, SOFTEN_TEST_PR)
+	SET_SCRATCH0(r13)
+	EXCEPTION_PROLOG_0(PACA_EXGEN)
+	b	decrementer_ool
 
 	STD_EXCEPTION_HV(0x980, 0x982, hdecrementer)
 
@@ -319,6 +322,7 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
 	OPT_GET_SPR(r9, SPRN_PPR, CPU_FTR_HAS_PPR);
 	HMT_MEDIUM;
 	std	r10,PACA_EXGEN+EX_R10(r13)
+	INTERRUPT_TO_KERNEL
 	OPT_SAVE_REG_TO_PACA(PACA_EXGEN+EX_PPR, r9, CPU_FTR_HAS_PPR);
 	mfcr	r9
 	KVMTEST(0xc00)
@@ -607,6 +611,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
 
 	.align	7
 	/* moved from 0xe00 */
+	MASKABLE_EXCEPTION_OOL(0x900, decrementer)
 	STD_EXCEPTION_HV_OOL(0xe02, h_data_storage)
 	KVM_HANDLER_SKIP(PACA_EXGEN, EXC_HV, 0xe02)
 	STD_EXCEPTION_HV_OOL(0xe22, h_instr_storage)
@@ -1564,6 +1569,21 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
 	blr
 #endif
 
+	.balign 16
+	.globl stf_barrier_fallback
+stf_barrier_fallback:
+	std	r9,PACA_EXRFI+EX_R9(r13)
+	std	r10,PACA_EXRFI+EX_R10(r13)
+	sync
+	ld	r9,PACA_EXRFI+EX_R9(r13)
+	ld	r10,PACA_EXRFI+EX_R10(r13)
+	ori	31,31,0
+	.rept 14
+	b	1f
+1:
+	.endr
+	blr
+
 	.globl rfi_flush_fallback
 rfi_flush_fallback:
 	SET_SCRATCH0(r13);
diff --git a/arch/powerpc/kernel/security.c b/arch/powerpc/kernel/security.c
index bab5a27ea805..e19216472ed7 100644
--- a/arch/powerpc/kernel/security.c
+++ b/arch/powerpc/kernel/security.c
@@ -5,9 +5,11 @@
 // Copyright 2018, Michael Ellerman, IBM Corporation.
 
 #include <linux/kernel.h>
+#include <linux/debugfs.h>
 #include <linux/device.h>
 #include <linux/seq_buf.h>
 
+#include <asm/debug.h>
 #include <asm/security_features.h>
 
 
@@ -86,3 +88,149 @@ ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr, c
 
 	return s.len;
 }
+
+/*
+ * Store-forwarding barrier support.
+ */
+
+static enum stf_barrier_type stf_enabled_flush_types;
+static bool no_stf_barrier;
+bool stf_barrier;
+
+static int __init handle_no_stf_barrier(char *p)
+{
+	pr_info("stf-barrier: disabled on command line.");
+	no_stf_barrier = true;
+	return 0;
+}
+
+early_param("no_stf_barrier", handle_no_stf_barrier);
+
+/* This is the generic flag used by other architectures */
+static int __init handle_ssbd(char *p)
+{
+	if (!p || strncmp(p, "auto", 5) == 0 || strncmp(p, "on", 2) == 0 ) {
+		/* Until firmware tells us, we have the barrier with auto */
+		return 0;
+	} else if (strncmp(p, "off", 3) == 0) {
+		handle_no_stf_barrier(NULL);
+		return 0;
+	} else
+		return 1;
+
+	return 0;
+}
+early_param("spec_store_bypass_disable", handle_ssbd);
+
+/* This is the generic flag used by other architectures */
+static int __init handle_no_ssbd(char *p)
+{
+	handle_no_stf_barrier(NULL);
+	return 0;
+}
+early_param("nospec_store_bypass_disable", handle_no_ssbd);
+
+static void stf_barrier_enable(bool enable)
+{
+	if (enable)
+		do_stf_barrier_fixups(stf_enabled_flush_types);
+	else
+		do_stf_barrier_fixups(STF_BARRIER_NONE);
+
+	stf_barrier = enable;
+}
+
+void setup_stf_barrier(void)
+{
+	enum stf_barrier_type type;
+	bool enable, hv;
+
+	hv = cpu_has_feature(CPU_FTR_HVMODE);
+
+	/* Default to fallback in case fw-features are not available */
+	if (cpu_has_feature(CPU_FTR_ARCH_207S))
+		type = STF_BARRIER_SYNC_ORI;
+	else if (cpu_has_feature(CPU_FTR_ARCH_206))
+		type = STF_BARRIER_FALLBACK;
+	else
+		type = STF_BARRIER_NONE;
+
+	enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) &&
+		(security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR) ||
+		 (security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV) && hv));
+
+	if (type == STF_BARRIER_FALLBACK) {
+		pr_info("stf-barrier: fallback barrier available\n");
+	} else if (type == STF_BARRIER_SYNC_ORI) {
+		pr_info("stf-barrier: hwsync barrier available\n");
+	} else if (type == STF_BARRIER_EIEIO) {
+		pr_info("stf-barrier: eieio barrier available\n");
+	}
+
+	stf_enabled_flush_types = type;
+
+	if (!no_stf_barrier)
+		stf_barrier_enable(enable);
+}
+
+ssize_t cpu_show_spec_store_bypass(struct device *dev, struct device_attribute *attr, char *buf)
+{
+	if (stf_barrier && stf_enabled_flush_types != STF_BARRIER_NONE) {
+		const char *type;
+		switch (stf_enabled_flush_types) {
+		case STF_BARRIER_EIEIO:
+			type = "eieio";
+			break;
+		case STF_BARRIER_SYNC_ORI:
+			type = "hwsync";
+			break;
+		case STF_BARRIER_FALLBACK:
+			type = "fallback";
+			break;
+		default:
+			type = "unknown";
+		}
+		return sprintf(buf, "Mitigation: Kernel entry/exit barrier (%s)\n", type);
+	}
+
+	if (!security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV) &&
+	    !security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR))
+		return sprintf(buf, "Not affected\n");
+
+	return sprintf(buf, "Vulnerable\n");
+}
+
+#ifdef CONFIG_DEBUG_FS
+static int stf_barrier_set(void *data, u64 val)
+{
+	bool enable;
+
+	if (val == 1)
+		enable = true;
+	else if (val == 0)
+		enable = false;
+	else
+		return -EINVAL;
+
+	/* Only do anything if we're changing state */
+	if (enable != stf_barrier)
+		stf_barrier_enable(enable);
+
+	return 0;
+}
+
+static int stf_barrier_get(void *data, u64 *val)
+{
+	*val = stf_barrier ? 1 : 0;
+	return 0;
+}
+
+DEFINE_SIMPLE_ATTRIBUTE(fops_stf_barrier, stf_barrier_get, stf_barrier_set, "%llu\n");
+
+static __init int stf_barrier_debugfs_init(void)
+{
+	debugfs_create_file("stf_barrier", 0600, powerpc_debugfs_root, NULL, &fops_stf_barrier);
+	return 0;
+}
+device_initcall(stf_barrier_debugfs_init);
+#endif /* CONFIG_DEBUG_FS */
diff --git a/arch/powerpc/kernel/vmlinux.lds.S b/arch/powerpc/kernel/vmlinux.lds.S
index 072a23a17350..b454e27d784d 100644
--- a/arch/powerpc/kernel/vmlinux.lds.S
+++ b/arch/powerpc/kernel/vmlinux.lds.S
@@ -73,6 +73,20 @@ SECTIONS
 	RODATA
 
 #ifdef CONFIG_PPC64
+	. = ALIGN(8);
+	__stf_entry_barrier_fixup : AT(ADDR(__stf_entry_barrier_fixup) - LOAD_OFFSET) {
+		__start___stf_entry_barrier_fixup = .;
+		*(__stf_entry_barrier_fixup)
+		__stop___stf_entry_barrier_fixup = .;
+	}
+
+	. = ALIGN(8);
+	__stf_exit_barrier_fixup : AT(ADDR(__stf_exit_barrier_fixup) - LOAD_OFFSET) {
+		__start___stf_exit_barrier_fixup = .;
+		*(__stf_exit_barrier_fixup)
+		__stop___stf_exit_barrier_fixup = .;
+	}
+
 	. = ALIGN(8);
 	__rfi_flush_fixup : AT(ADDR(__rfi_flush_fixup) - LOAD_OFFSET) {
 		__start___rfi_flush_fixup = .;
diff --git a/arch/powerpc/lib/feature-fixups.c b/arch/powerpc/lib/feature-fixups.c
index b76b9b6b3a85..a1865309b7fc 100644
--- a/arch/powerpc/lib/feature-fixups.c
+++ b/arch/powerpc/lib/feature-fixups.c
@@ -21,7 +21,7 @@
 #include <asm/page.h>
 #include <asm/sections.h>
 #include <asm/setup.h>
-
+#include <asm/security_features.h>
 
 struct fixup_entry {
 	unsigned long	mask;
@@ -115,6 +115,120 @@ void do_feature_fixups(unsigned long value, void *fixup_start, void *fixup_end)
 }
 
 #ifdef CONFIG_PPC_BOOK3S_64
+void do_stf_entry_barrier_fixups(enum stf_barrier_type types)
+{
+	unsigned int instrs[3], *dest;
+	long *start, *end;
+	int i;
+
+	start = PTRRELOC(&__start___stf_entry_barrier_fixup),
+	end = PTRRELOC(&__stop___stf_entry_barrier_fixup);
+
+	instrs[0] = 0x60000000; /* nop */
+	instrs[1] = 0x60000000; /* nop */
+	instrs[2] = 0x60000000; /* nop */
+
+	i = 0;
+	if (types & STF_BARRIER_FALLBACK) {
+		instrs[i++] = 0x7d4802a6; /* mflr r10		*/
+		instrs[i++] = 0x60000000; /* branch patched below */
+		instrs[i++] = 0x7d4803a6; /* mtlr r10		*/
+	} else if (types & STF_BARRIER_EIEIO) {
+		instrs[i++] = 0x7e0006ac; /* eieio + bit 6 hint */
+	} else if (types & STF_BARRIER_SYNC_ORI) {
+		instrs[i++] = 0x7c0004ac; /* hwsync		*/
+		instrs[i++] = 0xe94d0000; /* ld r10,0(r13)	*/
+		instrs[i++] = 0x63ff0000; /* ori 31,31,0 speculation barrier */
+	}
+
+	for (i = 0; start < end; start++, i++) {
+		dest = (void *)start + *start;
+
+		pr_devel("patching dest %lx\n", (unsigned long)dest);
+
+		patch_instruction(dest, instrs[0]);
+
+		if (types & STF_BARRIER_FALLBACK)
+			patch_branch(dest + 1, (unsigned long)&stf_barrier_fallback,
+				     BRANCH_SET_LINK);
+		else
+			patch_instruction(dest + 1, instrs[1]);
+
+		patch_instruction(dest + 2, instrs[2]);
+	}
+
+	printk(KERN_DEBUG "stf-barrier: patched %d entry locations (%s barrier)\n", i,
+		(types == STF_BARRIER_NONE)                  ? "no" :
+		(types == STF_BARRIER_FALLBACK)              ? "fallback" :
+		(types == STF_BARRIER_EIEIO)                 ? "eieio" :
+		(types == (STF_BARRIER_SYNC_ORI))            ? "hwsync"
+		                                           : "unknown");
+}
+
+void do_stf_exit_barrier_fixups(enum stf_barrier_type types)
+{
+	unsigned int instrs[6], *dest;
+	long *start, *end;
+	int i;
+
+	start = PTRRELOC(&__start___stf_exit_barrier_fixup),
+	end = PTRRELOC(&__stop___stf_exit_barrier_fixup);
+
+	instrs[0] = 0x60000000; /* nop */
+	instrs[1] = 0x60000000; /* nop */
+	instrs[2] = 0x60000000; /* nop */
+	instrs[3] = 0x60000000; /* nop */
+	instrs[4] = 0x60000000; /* nop */
+	instrs[5] = 0x60000000; /* nop */
+
+	i = 0;
+	if (types & STF_BARRIER_FALLBACK || types & STF_BARRIER_SYNC_ORI) {
+		if (cpu_has_feature(CPU_FTR_HVMODE)) {
+			instrs[i++] = 0x7db14ba6; /* mtspr 0x131, r13 (HSPRG1) */
+			instrs[i++] = 0x7db04aa6; /* mfspr r13, 0x130 (HSPRG0) */
+		} else {
+			instrs[i++] = 0x7db243a6; /* mtsprg 2,r13	*/
+			instrs[i++] = 0x7db142a6; /* mfsprg r13,1    */
+	        }
+		instrs[i++] = 0x7c0004ac; /* hwsync		*/
+		instrs[i++] = 0xe9ad0000; /* ld r13,0(r13)	*/
+		instrs[i++] = 0x63ff0000; /* ori 31,31,0 speculation barrier */
+		if (cpu_has_feature(CPU_FTR_HVMODE)) {
+			instrs[i++] = 0x7db14aa6; /* mfspr r13, 0x131 (HSPRG1) */
+		} else {
+			instrs[i++] = 0x7db242a6; /* mfsprg r13,2 */
+		}
+	} else if (types & STF_BARRIER_EIEIO) {
+		instrs[i++] = 0x7e0006ac; /* eieio + bit 6 hint */
+	}
+
+	for (i = 0; start < end; start++, i++) {
+		dest = (void *)start + *start;
+
+		pr_devel("patching dest %lx\n", (unsigned long)dest);
+
+		patch_instruction(dest, instrs[0]);
+		patch_instruction(dest + 1, instrs[1]);
+		patch_instruction(dest + 2, instrs[2]);
+		patch_instruction(dest + 3, instrs[3]);
+		patch_instruction(dest + 4, instrs[4]);
+		patch_instruction(dest + 5, instrs[5]);
+	}
+	printk(KERN_DEBUG "stf-barrier: patched %d exit locations (%s barrier)\n", i,
+		(types == STF_BARRIER_NONE)                  ? "no" :
+		(types == STF_BARRIER_FALLBACK)              ? "fallback" :
+		(types == STF_BARRIER_EIEIO)                 ? "eieio" :
+		(types == (STF_BARRIER_SYNC_ORI))            ? "hwsync"
+		                                           : "unknown");
+}
+
+
+void do_stf_barrier_fixups(enum stf_barrier_type types)
+{
+	do_stf_entry_barrier_fixups(types);
+	do_stf_exit_barrier_fixups(types);
+}
+
 void do_rfi_flush_fixups(enum l1d_flush_type types)
 {
 	unsigned int instrs[3], *dest;
diff --git a/arch/powerpc/platforms/powernv/setup.c b/arch/powerpc/platforms/powernv/setup.c
index a91330f79f66..c3df9e1ad135 100644
--- a/arch/powerpc/platforms/powernv/setup.c
+++ b/arch/powerpc/platforms/powernv/setup.c
@@ -130,6 +130,7 @@ static void __init pnv_setup_arch(void)
 	set_arch_panic_timeout(10, ARCH_PANIC_TIMEOUT);
 
 	pnv_setup_rfi_flush();
+	setup_stf_barrier();
 
 	/* Initialize SMP */
 	pnv_smp_init();
diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
index 9aa61b5e8568..1e2b61d178b8 100644
--- a/arch/powerpc/platforms/pseries/setup.c
+++ b/arch/powerpc/platforms/pseries/setup.c
@@ -593,6 +593,7 @@ static void __init pSeries_setup_arch(void)
 	fwnmi_init();
 
 	pseries_setup_rfi_flush();
+	setup_stf_barrier();
 
 	/* By default, only probe PCI (can be overridden by rtas_pci) */
 	pci_add_flags(PCI_PROBE_ONLY);
-- 
2.20.1



-	_MASKABLE_EXCEPTION_PSERIES(0x900, decrementer, EXC_STD, SOFTEN_TEST_PR)
+	SET_SCRATCH0(r13)
+	EXCEPTION_PROLOG_0(PACA_EXGEN)
+	b	decrementer_ool
 
 	STD_EXCEPTION_HV(0x980, 0x982, hdecrementer)
 
@@ -319,6 +322,7 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
 	OPT_GET_SPR(r9, SPRN_PPR, CPU_FTR_HAS_PPR);
 	HMT_MEDIUM;
 	std	r10,PACA_EXGEN+EX_R10(r13)
+	INTERRUPT_TO_KERNEL
 	OPT_SAVE_REG_TO_PACA(PACA_EXGEN+EX_PPR, r9, CPU_FTR_HAS_PPR);
 	mfcr	r9
 	KVMTEST(0xc00)
@@ -607,6 +611,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
 
 	.align	7
 	/* moved from 0xe00 */
+	MASKABLE_EXCEPTION_OOL(0x900, decrementer)
 	STD_EXCEPTION_HV_OOL(0xe02, h_data_storage)
 	KVM_HANDLER_SKIP(PACA_EXGEN, EXC_HV, 0xe02)
 	STD_EXCEPTION_HV_OOL(0xe22, h_instr_storage)
@@ -1564,6 +1569,21 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
 	blr
 #endif
 
+	.balign 16
+	.globl stf_barrier_fallback
+stf_barrier_fallback:
+	std	r9,PACA_EXRFI+EX_R9(r13)
+	std	r10,PACA_EXRFI+EX_R10(r13)
+	sync
+	ld	r9,PACA_EXRFI+EX_R9(r13)
+	ld	r10,PACA_EXRFI+EX_R10(r13)
+	ori	31,31,0
+	.rept 14
+	b	1f
+1:
+	.endr
+	blr
+
 	.globl rfi_flush_fallback
 rfi_flush_fallback:
 	SET_SCRATCH0(r13);
diff --git a/arch/powerpc/kernel/security.c b/arch/powerpc/kernel/security.c
index bab5a27ea805..e19216472ed7 100644
--- a/arch/powerpc/kernel/security.c
+++ b/arch/powerpc/kernel/security.c
@@ -5,9 +5,11 @@
 // Copyright 2018, Michael Ellerman, IBM Corporation.
 
 #include <linux/kernel.h>
+#include <linux/debugfs.h>
 #include <linux/device.h>
 #include <linux/seq_buf.h>
 
+#include <asm/debug.h>
 #include <asm/security_features.h>
 
 
@@ -86,3 +88,149 @@ ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr, c
 
 	return s.len;
 }
+
+/*
+ * Store-forwarding barrier support.
+ */
+
+static enum stf_barrier_type stf_enabled_flush_types;
+static bool no_stf_barrier;
+bool stf_barrier;
+
+static int __init handle_no_stf_barrier(char *p)
+{
+	pr_info("stf-barrier: disabled on command line.");
+	no_stf_barrier = true;
+	return 0;
+}
+
+early_param("no_stf_barrier", handle_no_stf_barrier);
+
+/* This is the generic flag used by other architectures */
+static int __init handle_ssbd(char *p)
+{
+	if (!p || strncmp(p, "auto", 5) == 0 || strncmp(p, "on", 2) == 0 ) {
+		/* Until firmware tells us, we have the barrier with auto */
+		return 0;
+	} else if (strncmp(p, "off", 3) == 0) {
+		handle_no_stf_barrier(NULL);
+		return 0;
+	} else
+		return 1;
+
+	return 0;
+}
+early_param("spec_store_bypass_disable", handle_ssbd);
+
+/* This is the generic flag used by other architectures */
+static int __init handle_no_ssbd(char *p)
+{
+	handle_no_stf_barrier(NULL);
+	return 0;
+}
+early_param("nospec_store_bypass_disable", handle_no_ssbd);
+
+static void stf_barrier_enable(bool enable)
+{
+	if (enable)
+		do_stf_barrier_fixups(stf_enabled_flush_types);
+	else
+		do_stf_barrier_fixups(STF_BARRIER_NONE);
+
+	stf_barrier = enable;
+}
+
+void setup_stf_barrier(void)
+{
+	enum stf_barrier_type type;
+	bool enable, hv;
+
+	hv = cpu_has_feature(CPU_FTR_HVMODE);
+
+	/* Default to fallback in case fw-features are not available */
+	if (cpu_has_feature(CPU_FTR_ARCH_207S))
+		type = STF_BARRIER_SYNC_ORI;
+	else if (cpu_has_feature(CPU_FTR_ARCH_206))
+		type = STF_BARRIER_FALLBACK;
+	else
+		type = STF_BARRIER_NONE;
+
+	enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) &&
+		(security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR) ||
+		 (security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV) && hv));
+
+	if (type == STF_BARRIER_FALLBACK) {
+		pr_info("stf-barrier: fallback barrier available\n");
+	} else if (type == STF_BARRIER_SYNC_ORI) {
+		pr_info("stf-barrier: hwsync barrier available\n");
+	} else if (type == STF_BARRIER_EIEIO) {
+		pr_info("stf-barrier: eieio barrier available\n");
+	}
+
+	stf_enabled_flush_types = type;
+
+	if (!no_stf_barrier)
+		stf_barrier_enable(enable);
+}
+
+ssize_t cpu_show_spec_store_bypass(struct device *dev, struct device_attribute *attr, char *buf)
+{
+	if (stf_barrier && stf_enabled_flush_types != STF_BARRIER_NONE) {
+		const char *type;
+		switch (stf_enabled_flush_types) {
+		case STF_BARRIER_EIEIO:
+			type = "eieio";
+			break;
+		case STF_BARRIER_SYNC_ORI:
+			type = "hwsync";
+			break;
+		case STF_BARRIER_FALLBACK:
+			type = "fallback";
+			break;
+		default:
+			type = "unknown";
+		}
+		return sprintf(buf, "Mitigation: Kernel entry/exit barrier (%s)\n", type);
+	}
+
+	if (!security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV) &&
+	    !security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR))
+		return sprintf(buf, "Not affected\n");
+
+	return sprintf(buf, "Vulnerable\n");
+}
+
+#ifdef CONFIG_DEBUG_FS
+static int stf_barrier_set(void *data, u64 val)
+{
+	bool enable;
+
+	if (val == 1)
+		enable = true;
+	else if (val == 0)
+		enable = false;
+	else
+		return -EINVAL;
+
+	/* Only do anything if we're changing state */
+	if (enable != stf_barrier)
+		stf_barrier_enable(enable);
+
+	return 0;
+}
+
+static int stf_barrier_get(void *data, u64 *val)
+{
+	*val = stf_barrier ? 1 : 0;
+	return 0;
+}
+
+DEFINE_SIMPLE_ATTRIBUTE(fops_stf_barrier, stf_barrier_get, stf_barrier_set, "%llu\n");
+
+static __init int stf_barrier_debugfs_init(void)
+{
+	debugfs_create_file("stf_barrier", 0600, powerpc_debugfs_root, NULL, &fops_stf_barrier);
+	return 0;
+}
+device_initcall(stf_barrier_debugfs_init);
+#endif /* CONFIG_DEBUG_FS */
diff --git a/arch/powerpc/kernel/vmlinux.lds.S b/arch/powerpc/kernel/vmlinux.lds.S
index 072a23a17350..b454e27d784d 100644
--- a/arch/powerpc/kernel/vmlinux.lds.S
+++ b/arch/powerpc/kernel/vmlinux.lds.S
@@ -73,6 +73,20 @@ SECTIONS
 	RODATA
 
 #ifdef CONFIG_PPC64
+	. = ALIGN(8);
+	__stf_entry_barrier_fixup : AT(ADDR(__stf_entry_barrier_fixup) - LOAD_OFFSET) {
+		__start___stf_entry_barrier_fixup = .;
+		*(__stf_entry_barrier_fixup)
+		__stop___stf_entry_barrier_fixup = .;
+	}
+
+	. = ALIGN(8);
+	__stf_exit_barrier_fixup : AT(ADDR(__stf_exit_barrier_fixup) - LOAD_OFFSET) {
+		__start___stf_exit_barrier_fixup = .;
+		*(__stf_exit_barrier_fixup)
+		__stop___stf_exit_barrier_fixup = .;
+	}
+
 	. = ALIGN(8);
 	__rfi_flush_fixup : AT(ADDR(__rfi_flush_fixup) - LOAD_OFFSET) {
 		__start___rfi_flush_fixup = .;
diff --git a/arch/powerpc/lib/feature-fixups.c b/arch/powerpc/lib/feature-fixups.c
index b76b9b6b3a85..a1865309b7fc 100644
--- a/arch/powerpc/lib/feature-fixups.c
+++ b/arch/powerpc/lib/feature-fixups.c
@@ -21,7 +21,7 @@
 #include <asm/page.h>
 #include <asm/sections.h>
 #include <asm/setup.h>
-
+#include <asm/security_features.h>
 
 struct fixup_entry {
 	unsigned long	mask;
@@ -115,6 +115,120 @@ void do_feature_fixups(unsigned long value, void *fixup_start, void *fixup_end)
 }
 
 #ifdef CONFIG_PPC_BOOK3S_64
+void do_stf_entry_barrier_fixups(enum stf_barrier_type types)
+{
+	unsigned int instrs[3], *dest;
+	long *start, *end;
+	int i;
+
+	start = PTRRELOC(&__start___stf_entry_barrier_fixup),
+	end = PTRRELOC(&__stop___stf_entry_barrier_fixup);
+
+	instrs[0] = 0x60000000; /* nop */
+	instrs[1] = 0x60000000; /* nop */
+	instrs[2] = 0x60000000; /* nop */
+
+	i = 0;
+	if (types & STF_BARRIER_FALLBACK) {
+		instrs[i++] = 0x7d4802a6; /* mflr r10		*/
+		instrs[i++] = 0x60000000; /* branch patched below */
+		instrs[i++] = 0x7d4803a6; /* mtlr r10		*/
+	} else if (types & STF_BARRIER_EIEIO) {
+		instrs[i++] = 0x7e0006ac; /* eieio + bit 6 hint */
+	} else if (types & STF_BARRIER_SYNC_ORI) {
+		instrs[i++] = 0x7c0004ac; /* hwsync		*/
+		instrs[i++] = 0xe94d0000; /* ld r10,0(r13)	*/
+		instrs[i++] = 0x63ff0000; /* ori 31,31,0 speculation barrier */
+	}
+
+	for (i = 0; start < end; start++, i++) {
+		dest = (void *)start + *start;
+
+		pr_devel("patching dest %lx\n", (unsigned long)dest);
+
+		patch_instruction(dest, instrs[0]);
+
+		if (types & STF_BARRIER_FALLBACK)
+			patch_branch(dest + 1, (unsigned long)&stf_barrier_fallback,
+				     BRANCH_SET_LINK);
+		else
+			patch_instruction(dest + 1, instrs[1]);
+
+		patch_instruction(dest + 2, instrs[2]);
+	}
+
+	printk(KERN_DEBUG "stf-barrier: patched %d entry locations (%s barrier)\n", i,
+		(types == STF_BARRIER_NONE)                  ? "no" :
+		(types == STF_BARRIER_FALLBACK)              ? "fallback" :
+		(types == STF_BARRIER_EIEIO)                 ? "eieio" :
+		(types == (STF_BARRIER_SYNC_ORI))            ? "hwsync"
+		                                           : "unknown");
+}
+
+void do_stf_exit_barrier_fixups(enum stf_barrier_type types)
+{
+	unsigned int instrs[6], *dest;
+	long *start, *end;
+	int i;
+
+	start = PTRRELOC(&__start___stf_exit_barrier_fixup),
+	end = PTRRELOC(&__stop___stf_exit_barrier_fixup);
+
+	instrs[0] = 0x60000000; /* nop */
+	instrs[1] = 0x60000000; /* nop */
+	instrs[2] = 0x60000000; /* nop */
+	instrs[3] = 0x60000000; /* nop */
+	instrs[4] = 0x60000000; /* nop */
+	instrs[5] = 0x60000000; /* nop */
+
+	i = 0;
+	if (types & STF_BARRIER_FALLBACK || types & STF_BARRIER_SYNC_ORI) {
+		if (cpu_has_feature(CPU_FTR_HVMODE)) {
+			instrs[i++] = 0x7db14ba6; /* mtspr 0x131, r13 (HSPRG1) */
+			instrs[i++] = 0x7db04aa6; /* mfspr r13, 0x130 (HSPRG0) */
+		} else {
+			instrs[i++] = 0x7db243a6; /* mtsprg 2,r13	*/
+			instrs[i++] = 0x7db142a6; /* mfsprg r13,1    */
+	        }
+		instrs[i++] = 0x7c0004ac; /* hwsync		*/
+		instrs[i++] = 0xe9ad0000; /* ld r13,0(r13)	*/
+		instrs[i++] = 0x63ff0000; /* ori 31,31,0 speculation barrier */
+		if (cpu_has_feature(CPU_FTR_HVMODE)) {
+			instrs[i++] = 0x7db14aa6; /* mfspr r13, 0x131 (HSPRG1) */
+		} else {
+			instrs[i++] = 0x7db242a6; /* mfsprg r13,2 */
+		}
+	} else if (types & STF_BARRIER_EIEIO) {
+		instrs[i++] = 0x7e0006ac; /* eieio + bit 6 hint */
+	}
+
+	for (i = 0; start < end; start++, i++) {
+		dest = (void *)start + *start;
+
+		pr_devel("patching dest %lx\n", (unsigned long)dest);
+
+		patch_instruction(dest, instrs[0]);
+		patch_instruction(dest + 1, instrs[1]);
+		patch_instruction(dest + 2, instrs[2]);
+		patch_instruction(dest + 3, instrs[3]);
+		patch_instruction(dest + 4, instrs[4]);
+		patch_instruction(dest + 5, instrs[5]);
+	}
+	printk(KERN_DEBUG "stf-barrier: patched %d exit locations (%s barrier)\n", i,
+		(types == STF_BARRIER_NONE)                  ? "no" :
+		(types == STF_BARRIER_FALLBACK)              ? "fallback" :
+		(types == STF_BARRIER_EIEIO)                 ? "eieio" :
+		(types == (STF_BARRIER_SYNC_ORI))            ? "hwsync"
+		                                           : "unknown");
+}
+
+
+void do_stf_barrier_fixups(enum stf_barrier_type types)
+{
+	do_stf_entry_barrier_fixups(types);
+	do_stf_exit_barrier_fixups(types);
+}
+
 void do_rfi_flush_fixups(enum l1d_flush_type types)
 {
 	unsigned int instrs[3], *dest;
diff --git a/arch/powerpc/platforms/powernv/setup.c b/arch/powerpc/platforms/powernv/setup.c
index a91330f79f66..c3df9e1ad135 100644
--- a/arch/powerpc/platforms/powernv/setup.c
+++ b/arch/powerpc/platforms/powernv/setup.c
@@ -130,6 +130,7 @@ static void __init pnv_setup_arch(void)
 	set_arch_panic_timeout(10, ARCH_PANIC_TIMEOUT);
 
 	pnv_setup_rfi_flush();
+	setup_stf_barrier();
 
 	/* Initialize SMP */
 	pnv_smp_init();
diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
index 9aa61b5e8568..1e2b61d178b8 100644
--- a/arch/powerpc/platforms/pseries/setup.c
+++ b/arch/powerpc/platforms/pseries/setup.c
@@ -593,6 +593,7 @@ static void __init pSeries_setup_arch(void)
 	fwnmi_init();
 
 	pseries_setup_rfi_flush();
+	setup_stf_barrier();
 
 	/* By default, only probe PCI (can be overridden by rtas_pci) */
 	pci_add_flags(PCI_PROBE_ONLY);
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 180+ messages in thread

* [PATCH stable v4.4 25/52] powerpc/64s: Add barrier_nospec
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:20   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:20 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

From: Michal Suchanek <msuchanek@suse.de>

commit a6b3964ad71a61bb7c61d80a60bea7d42187b2eb upstream.

A no-op form of ori (an OR-immediate of 0 into r31, with the result
stored back in r31) has been re-tasked as a speculation barrier. The
instruction only acts as a barrier on newer machines with appropriate
firmware support. On older CPUs it remains a harmless no-op.

Implement barrier_nospec using this instruction.

mpe: The semantics of the instruction are believed to be that it
prevents execution of subsequent instructions until preceding branches
have been fully resolved and are no longer executing speculatively.
There is no further documentation available at this time.

Signed-off-by: Michal Suchanek <msuchanek@suse.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/include/asm/barrier.h | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
index b9e16855a037..ef86063c662a 100644
--- a/arch/powerpc/include/asm/barrier.h
+++ b/arch/powerpc/include/asm/barrier.h
@@ -92,4 +92,19 @@ do {									\
 #define smp_mb__after_atomic()      smp_mb()
 #define smp_mb__before_spinlock()   smp_mb()
 
+#ifdef CONFIG_PPC_BOOK3S_64
+/*
+ * Prevent execution of subsequent instructions until preceding branches have
+ * been fully resolved and are no longer executing speculatively.
+ */
+#define barrier_nospec_asm ori 31,31,0
+
+// This also acts as a compiler barrier due to the memory clobber.
+#define barrier_nospec() asm (stringify_in_c(barrier_nospec_asm) ::: "memory")
+
+#else /* !CONFIG_PPC_BOOK3S_64 */
+#define barrier_nospec_asm
+#define barrier_nospec()
+#endif
+
 #endif /* _ASM_POWERPC_BARRIER_H */
-- 
2.20.1



* [PATCH stable v4.4 26/52] powerpc/64s: Add support for ori barrier_nospec patching
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:20   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:20 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

From: Michal Suchanek <msuchanek@suse.de>

commit 2eea7f067f495e33b8b116b35b5988ab2b8aec55 upstream.

Based on the RFI patching. This is required to be able to disable the
speculation barrier.

Only one barrier type is supported, and it does nothing when the
firmware does not enable it. Re-patching modules is also not supported,
so the only meaningful thing that can be done is patching out the
speculation barrier at boot when the user says it is not wanted.

Signed-off-by: Michal Suchanek <msuchanek@suse.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/include/asm/barrier.h        |  2 +-
 arch/powerpc/include/asm/feature-fixups.h |  9 ++++++++
 arch/powerpc/include/asm/setup.h          |  1 +
 arch/powerpc/kernel/security.c            |  9 ++++++++
 arch/powerpc/kernel/vmlinux.lds.S         |  7 ++++++
 arch/powerpc/lib/feature-fixups.c         | 27 +++++++++++++++++++++++
 6 files changed, 54 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
index ef86063c662a..8e7cbf0ea614 100644
--- a/arch/powerpc/include/asm/barrier.h
+++ b/arch/powerpc/include/asm/barrier.h
@@ -97,7 +97,7 @@ do {									\
  * Prevent execution of subsequent instructions until preceding branches have
  * been fully resolved and are no longer executing speculatively.
  */
-#define barrier_nospec_asm ori 31,31,0
+#define barrier_nospec_asm NOSPEC_BARRIER_FIXUP_SECTION; nop
 
 // This also acts as a compiler barrier due to the memory clobber.
 #define barrier_nospec() asm (stringify_in_c(barrier_nospec_asm) ::: "memory")
diff --git a/arch/powerpc/include/asm/feature-fixups.h b/arch/powerpc/include/asm/feature-fixups.h
index 350be873a941..7983390ff0a9 100644
--- a/arch/powerpc/include/asm/feature-fixups.h
+++ b/arch/powerpc/include/asm/feature-fixups.h
@@ -208,6 +208,14 @@ label##3:					       	\
 	FTR_ENTRY_OFFSET 951b-952b;			\
 	.popsection;
 
+#define NOSPEC_BARRIER_FIXUP_SECTION			\
+953:							\
+	.pushsection __barrier_nospec_fixup,"a";	\
+	.align 2;					\
+954:							\
+	FTR_ENTRY_OFFSET 953b-954b;			\
+	.popsection;
+
 
 #ifndef __ASSEMBLY__
 
@@ -215,6 +223,7 @@ extern long stf_barrier_fallback;
 extern long __start___stf_entry_barrier_fixup, __stop___stf_entry_barrier_fixup;
 extern long __start___stf_exit_barrier_fixup, __stop___stf_exit_barrier_fixup;
 extern long __start___rfi_flush_fixup, __stop___rfi_flush_fixup;
+extern long __start___barrier_nospec_fixup, __stop___barrier_nospec_fixup;
 
 #endif
 
diff --git a/arch/powerpc/include/asm/setup.h b/arch/powerpc/include/asm/setup.h
index 3733195be997..80c35275bbdb 100644
--- a/arch/powerpc/include/asm/setup.h
+++ b/arch/powerpc/include/asm/setup.h
@@ -38,6 +38,7 @@ enum l1d_flush_type {
 
 void setup_rfi_flush(enum l1d_flush_type, bool enable);
 void do_rfi_flush_fixups(enum l1d_flush_type types);
+void do_barrier_nospec_fixups(bool enable);
 
 #endif /* !__ASSEMBLY__ */
 
diff --git a/arch/powerpc/kernel/security.c b/arch/powerpc/kernel/security.c
index e19216472ed7..0b7f6471f8f5 100644
--- a/arch/powerpc/kernel/security.c
+++ b/arch/powerpc/kernel/security.c
@@ -11,10 +11,19 @@
 
 #include <asm/debug.h>
 #include <asm/security_features.h>
+#include <asm/setup.h>
 
 
 unsigned long powerpc_security_features __read_mostly = SEC_FTR_DEFAULT;
 
+static bool barrier_nospec_enabled;
+
+static void enable_barrier_nospec(bool enable)
+{
+	barrier_nospec_enabled = enable;
+	do_barrier_nospec_fixups(enable);
+}
+
 ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr, char *buf)
 {
 	bool thread_priv;
diff --git a/arch/powerpc/kernel/vmlinux.lds.S b/arch/powerpc/kernel/vmlinux.lds.S
index b454e27d784d..977e859b4d4c 100644
--- a/arch/powerpc/kernel/vmlinux.lds.S
+++ b/arch/powerpc/kernel/vmlinux.lds.S
@@ -93,6 +93,13 @@ SECTIONS
 		*(__rfi_flush_fixup)
 		__stop___rfi_flush_fixup = .;
 	}
+
+	. = ALIGN(8);
+	__spec_barrier_fixup : AT(ADDR(__spec_barrier_fixup) - LOAD_OFFSET) {
+		__start___barrier_nospec_fixup = .;
+		*(__barrier_nospec_fixup)
+		__stop___barrier_nospec_fixup = .;
+	}
 #endif
 
 	EXCEPTION_TABLE(0)
diff --git a/arch/powerpc/lib/feature-fixups.c b/arch/powerpc/lib/feature-fixups.c
index a1865309b7fc..17a3c2d5c80b 100644
--- a/arch/powerpc/lib/feature-fixups.c
+++ b/arch/powerpc/lib/feature-fixups.c
@@ -274,6 +274,33 @@ void do_rfi_flush_fixups(enum l1d_flush_type types)
 		(types &  L1D_FLUSH_MTTRIG)     ? "mttrig type"
 						: "unknown");
 }
+
+void do_barrier_nospec_fixups(bool enable)
+{
+	unsigned int instr, *dest;
+	long *start, *end;
+	int i;
+
+	start = PTRRELOC(&__start___barrier_nospec_fixup),
+	end = PTRRELOC(&__stop___barrier_nospec_fixup);
+
+	instr = 0x60000000; /* nop */
+
+	if (enable) {
+		pr_info("barrier-nospec: using ORI speculation barrier\n");
+		instr = 0x63ff0000; /* ori 31,31,0 speculation barrier */
+	}
+
+	for (i = 0; start < end; start++, i++) {
+		dest = (void *)start + *start;
+
+		pr_devel("patching dest %lx\n", (unsigned long)dest);
+		patch_instruction(dest, instr);
+	}
+
+	printk(KERN_DEBUG "barrier-nospec: patched %d locations\n", i);
+}
+
 #endif /* CONFIG_PPC_BOOK3S_64 */
 
 void do_lwsync_fixups(unsigned long value, void *fixup_start, void *fixup_end)
-- 
2.20.1



* [PATCH stable v4.4 27/52] powerpc/64s: Patch barrier_nospec in modules
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:20   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:20 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

From: Michal Suchanek <msuchanek@suse.de>

commit 815069ca57c142eb71d27439bc27f41a433a67b3 upstream.

Note that unlike RFI, which is patched only in the kernel, the nospec
state reflects the settings at the time the module was loaded.

Iterating all modules and re-patching every time the settings change
is not implemented.

Based on lwsync patching.

Signed-off-by: Michal Suchanek <msuchanek@suse.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/include/asm/setup.h  |  7 +++++++
 arch/powerpc/kernel/module.c      |  6 ++++++
 arch/powerpc/kernel/security.c    |  2 +-
 arch/powerpc/lib/feature-fixups.c | 16 +++++++++++++---
 4 files changed, 27 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/include/asm/setup.h b/arch/powerpc/include/asm/setup.h
index 80c35275bbdb..fed0b352e24f 100644
--- a/arch/powerpc/include/asm/setup.h
+++ b/arch/powerpc/include/asm/setup.h
@@ -39,6 +39,13 @@ enum l1d_flush_type {
 void setup_rfi_flush(enum l1d_flush_type, bool enable);
 void do_rfi_flush_fixups(enum l1d_flush_type types);
 void do_barrier_nospec_fixups(bool enable);
+extern bool barrier_nospec_enabled;
+
+#ifdef CONFIG_PPC_BOOK3S_64
+void do_barrier_nospec_fixups_range(bool enable, void *start, void *end);
+#else
+static inline void do_barrier_nospec_fixups_range(bool enable, void *start, void *end) { };
+#endif
 
 #endif /* !__ASSEMBLY__ */
 
diff --git a/arch/powerpc/kernel/module.c b/arch/powerpc/kernel/module.c
index 9547381b631a..340528d79233 100644
--- a/arch/powerpc/kernel/module.c
+++ b/arch/powerpc/kernel/module.c
@@ -67,6 +67,12 @@ int module_finalize(const Elf_Ehdr *hdr,
 		do_feature_fixups(powerpc_firmware_features,
 				  (void *)sect->sh_addr,
 				  (void *)sect->sh_addr + sect->sh_size);
+
+	sect = find_section(hdr, sechdrs, "__spec_barrier_fixup");
+	if (sect != NULL)
+		do_barrier_nospec_fixups_range(barrier_nospec_enabled,
+				  (void *)sect->sh_addr,
+				  (void *)sect->sh_addr + sect->sh_size);
 #endif
 
 	sect = find_section(hdr, sechdrs, "__lwsync_fixup");
diff --git a/arch/powerpc/kernel/security.c b/arch/powerpc/kernel/security.c
index 0b7f6471f8f5..a1ae02f9d03d 100644
--- a/arch/powerpc/kernel/security.c
+++ b/arch/powerpc/kernel/security.c
@@ -16,7 +16,7 @@
 
 unsigned long powerpc_security_features __read_mostly = SEC_FTR_DEFAULT;
 
-static bool barrier_nospec_enabled;
+bool barrier_nospec_enabled;
 
 static void enable_barrier_nospec(bool enable)
 {
diff --git a/arch/powerpc/lib/feature-fixups.c b/arch/powerpc/lib/feature-fixups.c
index 17a3c2d5c80b..e9373f41f0da 100644
--- a/arch/powerpc/lib/feature-fixups.c
+++ b/arch/powerpc/lib/feature-fixups.c
@@ -275,14 +275,14 @@ void do_rfi_flush_fixups(enum l1d_flush_type types)
 						: "unknown");
 }
 
-void do_barrier_nospec_fixups(bool enable)
+void do_barrier_nospec_fixups_range(bool enable, void *fixup_start, void *fixup_end)
 {
 	unsigned int instr, *dest;
 	long *start, *end;
 	int i;
 
-	start = PTRRELOC(&__start___barrier_nospec_fixup),
-	end = PTRRELOC(&__stop___barrier_nospec_fixup);
+	start = fixup_start;
+	end = fixup_end;
 
 	instr = 0x60000000; /* nop */
 
@@ -301,6 +301,16 @@ void do_barrier_nospec_fixups(bool enable)
 	printk(KERN_DEBUG "barrier-nospec: patched %d locations\n", i);
 }
 
+void do_barrier_nospec_fixups(bool enable)
+{
+	void *start, *end;
+
+	start = PTRRELOC(&__start___barrier_nospec_fixup),
+	end = PTRRELOC(&__stop___barrier_nospec_fixup);
+
+	do_barrier_nospec_fixups_range(enable, start, end);
+}
+
 #endif /* CONFIG_PPC_BOOK3S_64 */
 
 void do_lwsync_fixups(unsigned long value, void *fixup_start, void *fixup_end)
-- 
2.20.1
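The patching loop that do_barrier_nospec_fixups_range() performs can be sketched in plain userspace C. This is a simplified model, not kernel code: the fixup table here is an array of word offsets rather than relative addresses, and `patch_barrier_range`, `text`, and `fixups` are hypothetical names. The opcodes (nop = `ori 0,0,0`, barrier = `ori 31,31,0`) are the values used in the patch above.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define PPC_INST_NOP   0x60000000u /* nop: ori 0,0,0 */
#define PPC_INST_ORI31 0x63ff0000u /* speculation barrier: ori 31,31,0 */

/* Patch every instruction slot listed in the fixup table with either the
 * barrier or a nop, and return the number of locations patched (the kernel
 * reports this count via printk). */
static int patch_barrier_range(uint32_t *text, const size_t *fixups,
                               size_t nfixups, int enable)
{
    uint32_t instr = enable ? PPC_INST_ORI31 : PPC_INST_NOP;

    for (size_t i = 0; i < nfixups; i++)
        text[fixups[i]] = instr;

    return (int)nfixups;
}
```

For a loaded module, the same routine simply runs over that module's own `__spec_barrier_fixup` section instead of the global start/stop symbols.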



* [PATCH stable v4.4 28/52] powerpc/64s: Enable barrier_nospec based on firmware settings
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:20   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:20 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

From: Michal Suchanek <msuchanek@suse.de>

commit cb3d6759a93c6d0aea1c10deb6d00e111c29c19c upstream.

Check what firmware told us and enable/disable the barrier_nospec as
appropriate.

We err on the side of enabling the barrier, as it's a no-op on older
systems; see the comment for more detail.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/include/asm/setup.h       |  1 +
 arch/powerpc/kernel/security.c         | 59 ++++++++++++++++++++++++++
 arch/powerpc/platforms/powernv/setup.c |  1 +
 arch/powerpc/platforms/pseries/setup.c |  1 +
 4 files changed, 62 insertions(+)

diff --git a/arch/powerpc/include/asm/setup.h b/arch/powerpc/include/asm/setup.h
index fed0b352e24f..ac4002e5a09e 100644
--- a/arch/powerpc/include/asm/setup.h
+++ b/arch/powerpc/include/asm/setup.h
@@ -38,6 +38,7 @@ enum l1d_flush_type {
 
 void setup_rfi_flush(enum l1d_flush_type, bool enable);
 void do_rfi_flush_fixups(enum l1d_flush_type types);
+void setup_barrier_nospec(void);
 void do_barrier_nospec_fixups(bool enable);
 extern bool barrier_nospec_enabled;
 
diff --git a/arch/powerpc/kernel/security.c b/arch/powerpc/kernel/security.c
index a1ae02f9d03d..ae15f53b23d7 100644
--- a/arch/powerpc/kernel/security.c
+++ b/arch/powerpc/kernel/security.c
@@ -24,6 +24,65 @@ static void enable_barrier_nospec(bool enable)
 	do_barrier_nospec_fixups(enable);
 }
 
+void setup_barrier_nospec(void)
+{
+	bool enable;
+
+	/*
+	 * It would make sense to check SEC_FTR_SPEC_BAR_ORI31 below as well.
+	 * But there's a good reason not to. The two flags we check below are
+	 * both enabled by default in the kernel, so if the hcall is not
+	 * functional they will be enabled.
+	 * On a system where the host firmware has been updated (so the ori
+	 * functions as a barrier), but on which the hypervisor (KVM/Qemu) has
+	 * not been updated, we would like to enable the barrier. Dropping the
+	 * check for SEC_FTR_SPEC_BAR_ORI31 achieves that. The only downside is
+	 * we potentially enable the barrier on systems where the host firmware
+	 * is not updated, but that's harmless as it's a no-op.
+	 */
+	enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) &&
+		 security_ftr_enabled(SEC_FTR_BNDS_CHK_SPEC_BAR);
+
+	enable_barrier_nospec(enable);
+}
+
+#ifdef CONFIG_DEBUG_FS
+static int barrier_nospec_set(void *data, u64 val)
+{
+	switch (val) {
+	case 0:
+	case 1:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	if (!!val == !!barrier_nospec_enabled)
+		return 0;
+
+	enable_barrier_nospec(!!val);
+
+	return 0;
+}
+
+static int barrier_nospec_get(void *data, u64 *val)
+{
+	*val = barrier_nospec_enabled ? 1 : 0;
+	return 0;
+}
+
+DEFINE_SIMPLE_ATTRIBUTE(fops_barrier_nospec,
+			barrier_nospec_get, barrier_nospec_set, "%llu\n");
+
+static __init int barrier_nospec_debugfs_init(void)
+{
+	debugfs_create_file("barrier_nospec", 0600, powerpc_debugfs_root, NULL,
+			    &fops_barrier_nospec);
+	return 0;
+}
+device_initcall(barrier_nospec_debugfs_init);
+#endif /* CONFIG_DEBUG_FS */
+
 ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr, char *buf)
 {
 	bool thread_priv;
diff --git a/arch/powerpc/platforms/powernv/setup.c b/arch/powerpc/platforms/powernv/setup.c
index c3df9e1ad135..0fe70973e3a3 100644
--- a/arch/powerpc/platforms/powernv/setup.c
+++ b/arch/powerpc/platforms/powernv/setup.c
@@ -123,6 +123,7 @@ static void pnv_setup_rfi_flush(void)
 		  security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV));
 
 	setup_rfi_flush(type, enable);
+	setup_barrier_nospec();
 }
 
 static void __init pnv_setup_arch(void)
diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
index 1e2b61d178b8..0a6e091a6778 100644
--- a/arch/powerpc/platforms/pseries/setup.c
+++ b/arch/powerpc/platforms/pseries/setup.c
@@ -574,6 +574,7 @@ void pseries_setup_rfi_flush(void)
 		 security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR);
 
 	setup_rfi_flush(types, enable);
+	setup_barrier_nospec();
 }
 
 static void __init pSeries_setup_arch(void)
-- 
2.20.1



* [PATCH stable v4.4 29/52] powerpc/64: Use barrier_nospec in syscall entry
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:20   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:20 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

commit 51973a815c6b46d7b23b68d6af371ad1c9d503ca upstream.

Our syscall entry is done in assembly so patch in an explicit
barrier_nospec.

Based on a patch by Michal Suchanek.

Signed-off-by: Michal Suchanek <msuchanek@suse.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/kernel/entry_64.S | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index 59be96917369..9c2e58e0e55e 100644
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -36,6 +36,7 @@
 #include <asm/hw_irq.h>
 #include <asm/context_tracking.h>
 #include <asm/tm.h>
+#include <asm/barrier.h>
 #ifdef CONFIG_PPC_BOOK3S
 #include <asm/exception-64s.h>
 #else
@@ -177,6 +178,15 @@ system_call:			/* label this so stack traces look sane */
 	clrldi	r8,r8,32
 15:
 	slwi	r0,r0,4
+
+	barrier_nospec_asm
+	/*
+	 * Prevent the load of the handler below (based on the user-passed
+	 * system call number) being speculatively executed until the test
+	 * against NR_syscalls and branch to .Lsyscall_enosys above has
+	 * committed.
+	 */
+
 	ldx	r12,r11,r0	/* Fetch system call handler [ptr] */
 	mtctr   r12
 	bctrl			/* Call handler */
-- 
2.20.1
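The ordering requirement the comment in the patch describes — validate the user-supplied syscall number, then fence before the dependent table load — can be modelled in C. A compiler barrier stands in for the patched-in speculation barrier here, so this only illustrates the placement, not an actual hardware mitigation; all names are hypothetical:

```c
#include <assert.h>
#include <stddef.h>

/* Compiler barrier standing in for barrier_nospec; the kernel patches in
 * an architecture-specific speculation barrier at this point. */
#define barrier_nospec() __asm__ __volatile__("" ::: "memory")

#define NR_HANDLERS 3
static int h0(void) { return 10; }
static int h1(void) { return 11; }
static int h2(void) { return 12; }
static int (*const handlers[NR_HANDLERS])(void) = { h0, h1, h2 };

/* Dispatch pattern from the syscall entry path: bounds-check the
 * user-controlled index first, then fence before the dependent load of
 * the handler pointer. */
static int dispatch(size_t nr)
{
    if (nr >= NR_HANDLERS)
        return -1; /* ENOSYS-style failure */
    barrier_nospec();
    return handlers[nr]();
}
```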



* [PATCH stable v4.4 30/52] powerpc: Use barrier_nospec in copy_from_user()
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:20   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:20 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

commit ddf35cf3764b5a182b178105f57515b42e2634f8 upstream.

Based on the x86 commit doing the same.

See commit 304ec1b05031 ("x86/uaccess: Use __uaccess_begin_nospec()
and uaccess_try_nospec") and b3bbfb3fb5d2 ("x86: Introduce
__uaccess_begin_nospec() and uaccess_try_nospec") for more detail.

In all cases we are ordering the load from the potentially
user-controlled pointer vs a previous branch based on an access_ok()
check or similar.

Based on a patch from Michal Suchanek.

Signed-off-by: Michal Suchanek <msuchanek@suse.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/include/asm/uaccess.h | 18 ++++++++++++++++--
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index 05f1389228d2..e51ce5a0e221 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -269,6 +269,7 @@ do {								\
 	__chk_user_ptr(ptr);					\
 	if (!is_kernel_addr((unsigned long)__gu_addr))		\
 		might_fault();					\
+	barrier_nospec();					\
 	__get_user_size(__gu_val, __gu_addr, (size), __gu_err);	\
 	(x) = (__typeof__(*(ptr)))__gu_val;			\
 	__gu_err;						\
@@ -283,6 +284,7 @@ do {								\
 	__chk_user_ptr(ptr);					\
 	if (!is_kernel_addr((unsigned long)__gu_addr))		\
 		might_fault();					\
+	barrier_nospec();					\
 	__get_user_size(__gu_val, __gu_addr, (size), __gu_err);	\
 	(x) = (__force __typeof__(*(ptr)))__gu_val;			\
 	__gu_err;						\
@@ -295,8 +297,10 @@ do {								\
 	unsigned long  __gu_val = 0;					\
 	__typeof__(*(ptr)) __user *__gu_addr = (ptr);		\
 	might_fault();							\
-	if (access_ok(VERIFY_READ, __gu_addr, (size)))			\
+	if (access_ok(VERIFY_READ, __gu_addr, (size))) {		\
+		barrier_nospec();					\
 		__get_user_size(__gu_val, __gu_addr, (size), __gu_err);	\
+	}								\
 	(x) = (__force __typeof__(*(ptr)))__gu_val;				\
 	__gu_err;							\
 })
@@ -307,6 +311,7 @@ do {								\
 	unsigned long __gu_val;					\
 	__typeof__(*(ptr)) __user *__gu_addr = (ptr);	\
 	__chk_user_ptr(ptr);					\
+	barrier_nospec();					\
 	__get_user_size(__gu_val, __gu_addr, (size), __gu_err);	\
 	(x) = (__force __typeof__(*(ptr)))__gu_val;			\
 	__gu_err;						\
@@ -323,8 +328,10 @@ extern unsigned long __copy_tofrom_user(void __user *to,
 static inline unsigned long copy_from_user(void *to,
 		const void __user *from, unsigned long n)
 {
-	if (likely(access_ok(VERIFY_READ, from, n)))
+	if (likely(access_ok(VERIFY_READ, from, n))) {
+		barrier_nospec();
 		return __copy_tofrom_user((__force void __user *)to, from, n);
+	}
 	memset(to, 0, n);
 	return n;
 }
@@ -359,21 +366,27 @@ static inline unsigned long __copy_from_user_inatomic(void *to,
 
 		switch (n) {
 		case 1:
+			barrier_nospec();
 			__get_user_size(*(u8 *)to, from, 1, ret);
 			break;
 		case 2:
+			barrier_nospec();
 			__get_user_size(*(u16 *)to, from, 2, ret);
 			break;
 		case 4:
+			barrier_nospec();
 			__get_user_size(*(u32 *)to, from, 4, ret);
 			break;
 		case 8:
+			barrier_nospec();
 			__get_user_size(*(u64 *)to, from, 8, ret);
 			break;
 		}
 		if (ret == 0)
 			return 0;
 	}
+
+	barrier_nospec();
 	return __copy_tofrom_user((__force void __user *)to, from, n);
 }
 
@@ -400,6 +413,7 @@ static inline unsigned long __copy_to_user_inatomic(void __user *to,
 		if (ret == 0)
 			return 0;
 	}
+
 	return __copy_tofrom_user(to, (__force const void __user *)from, n);
 }
 
-- 
2.20.1
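The check-then-fence-then-copy shape of copy_from_user() can be simulated in userspace. `access_ok_sim` and `copy_from_user_sim` are hypothetical stand-ins for the kernel helpers, and the barrier is again only a compiler barrier; the point is where barrier_nospec() sits relative to the access_ok() branch:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define barrier_nospec() __asm__ __volatile__("" ::: "memory")

/* Fake "user" address window backed by an ordinary buffer. */
#define USER_BASE 0x1000u
#define USER_SIZE 0x1000u
static unsigned char fake_user_mem[USER_SIZE];

/* Stand-in for access_ok(): an overflow-safe range check. */
static int access_ok_sim(uintptr_t uaddr, size_t n)
{
    return uaddr >= USER_BASE && n <= USER_SIZE &&
           uaddr - USER_BASE <= USER_SIZE - n;
}

/* Pattern from the patch: check, fence, then copy; on failure zero the
 * destination and report all bytes as uncopied, like copy_from_user(). */
static size_t copy_from_user_sim(void *to, uintptr_t from, size_t n)
{
    if (access_ok_sim(from, n)) {
        barrier_nospec();
        memcpy(to, &fake_user_mem[from - USER_BASE], n);
        return 0;
    }
    memset(to, 0, n);
    return n;
}
```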



* [PATCH stable v4.4 31/52] powerpc/64s: Enhance the information in cpu_show_spectre_v1()
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:20   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:20 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

From: Michal Suchanek <msuchanek@suse.de>

commit a377514519b9a20fa1ea9adddbb4129573129cef upstream.

We now have barrier_nospec as mitigation so print it in
cpu_show_spectre_v1() when enabled.

Signed-off-by: Michal Suchanek <msuchanek@suse.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/kernel/security.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/powerpc/kernel/security.c b/arch/powerpc/kernel/security.c
index ae15f53b23d7..202083daebfb 100644
--- a/arch/powerpc/kernel/security.c
+++ b/arch/powerpc/kernel/security.c
@@ -121,6 +121,9 @@ ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr, c
 	if (!security_ftr_enabled(SEC_FTR_BNDS_CHK_SPEC_BAR))
 		return sprintf(buf, "Not affected\n");
 
+	if (barrier_nospec_enabled)
+		return sprintf(buf, "Mitigation: __user pointer sanitization\n");
+
 	return sprintf(buf, "Vulnerable\n");
 }
 
-- 
2.20.1



* [PATCH stable v4.4 32/52] powerpc64s: Show ori31 availability in spectre_v1 sysfs file not v2
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:20   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:20 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

commit 6d44acae1937b81cf8115ada8958e04f601f3f2e upstream.

When I added the spectre_v2 information in sysfs, I included the
availability of the ori31 speculation barrier.

Although the ori31 barrier can be used to mitigate v2, it's primarily
intended as a spectre v1 mitigation. Spectre v2 is mitigated by
hardware changes.

So rework the sysfs files to show the ori31 information in the
spectre_v1 file, rather than v2.

Currently we display, e.g.:

  $ grep . spectre_v*
  spectre_v1:Mitigation: __user pointer sanitization
  spectre_v2:Mitigation: Indirect branch cache disabled, ori31 speculation barrier enabled

After:

  $ grep . spectre_v*
  spectre_v1:Mitigation: __user pointer sanitization, ori31 speculation barrier enabled
  spectre_v2:Mitigation: Indirect branch cache disabled

Fixes: d6fbe1c55c55 ("powerpc/64s: Wire up cpu_show_spectre_v2()")
Cc: stable@vger.kernel.org # v4.17+
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/kernel/security.c | 27 +++++++++++++++++----------
 1 file changed, 17 insertions(+), 10 deletions(-)

diff --git a/arch/powerpc/kernel/security.c b/arch/powerpc/kernel/security.c
index 202083daebfb..e74057ba2e36 100644
--- a/arch/powerpc/kernel/security.c
+++ b/arch/powerpc/kernel/security.c
@@ -118,25 +118,35 @@ ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr, cha
 
 ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr, char *buf)
 {
-	if (!security_ftr_enabled(SEC_FTR_BNDS_CHK_SPEC_BAR))
-		return sprintf(buf, "Not affected\n");
+	struct seq_buf s;
+
+	seq_buf_init(&s, buf, PAGE_SIZE - 1);
 
-	if (barrier_nospec_enabled)
-		return sprintf(buf, "Mitigation: __user pointer sanitization\n");
+	if (security_ftr_enabled(SEC_FTR_BNDS_CHK_SPEC_BAR)) {
+		if (barrier_nospec_enabled)
+			seq_buf_printf(&s, "Mitigation: __user pointer sanitization");
+		else
+			seq_buf_printf(&s, "Vulnerable");
 
-	return sprintf(buf, "Vulnerable\n");
+		if (security_ftr_enabled(SEC_FTR_SPEC_BAR_ORI31))
+			seq_buf_printf(&s, ", ori31 speculation barrier enabled");
+
+		seq_buf_printf(&s, "\n");
+	} else
+		seq_buf_printf(&s, "Not affected\n");
+
+	return s.len;
 }
 
 ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr, char *buf)
 {
-	bool bcs, ccd, ori;
 	struct seq_buf s;
+	bool bcs, ccd;
 
 	seq_buf_init(&s, buf, PAGE_SIZE - 1);
 
 	bcs = security_ftr_enabled(SEC_FTR_BCCTRL_SERIALISED);
 	ccd = security_ftr_enabled(SEC_FTR_COUNT_CACHE_DISABLED);
-	ori = security_ftr_enabled(SEC_FTR_SPEC_BAR_ORI31);
 
 	if (bcs || ccd) {
 		seq_buf_printf(&s, "Mitigation: ");
@@ -152,9 +162,6 @@ ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr, c
 	} else
 		seq_buf_printf(&s, "Vulnerable");
 
-	if (ori)
-		seq_buf_printf(&s, ", ori31 speculation barrier enabled");
-
 	seq_buf_printf(&s, "\n");
 
 	return s.len;
-- 
2.20.1
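The reworked cpu_show_spectre_v1() logic can be sketched with snprintf in place of the kernel's seq_buf helpers. `show_spectre_v1` and its flag parameters are illustrative, not the kernel signature:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Build the spectre_v1 sysfs line the way the reworked code does:
 * "Not affected" when the bounds-check barrier feature is absent,
 * otherwise mitigation status plus an optional ori31 suffix. */
static int show_spectre_v1(char *buf, size_t len, int affected,
                           int barrier_on, int ori31)
{
    if (!affected)
        return snprintf(buf, len, "Not affected\n");

    return snprintf(buf, len, "%s%s\n",
                    barrier_on ? "Mitigation: __user pointer sanitization"
                               : "Vulnerable",
                    ori31 ? ", ori31 speculation barrier enabled" : "");
}
```

With all flags set this reproduces the "After:" example from the commit message.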



* [PATCH stable v4.4 33/52] powerpc/64: Disable the speculation barrier from the command line
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:20   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:20 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

From: Diana Craciun <diana.craciun@nxp.com>

commit cf175dc315f90185128fb061dc05b6fbb211aa2f upstream.

The speculation barrier can be disabled from the command line
with the parameter: "nospectre_v1".

Signed-off-by: Diana Craciun <diana.craciun@nxp.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/kernel/security.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/kernel/security.c b/arch/powerpc/kernel/security.c
index e74057ba2e36..acfb11853d29 100644
--- a/arch/powerpc/kernel/security.c
+++ b/arch/powerpc/kernel/security.c
@@ -17,6 +17,7 @@
 unsigned long powerpc_security_features __read_mostly = SEC_FTR_DEFAULT;
 
 bool barrier_nospec_enabled;
+static bool no_nospec;
 
 static void enable_barrier_nospec(bool enable)
 {
@@ -43,9 +44,18 @@ void setup_barrier_nospec(void)
 	enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) &&
 		 security_ftr_enabled(SEC_FTR_BNDS_CHK_SPEC_BAR);
 
-	enable_barrier_nospec(enable);
+	if (!no_nospec)
+		enable_barrier_nospec(enable);
 }
 
+static int __init handle_nospectre_v1(char *p)
+{
+	no_nospec = true;
+
+	return 0;
+}
+early_param("nospectre_v1", handle_nospectre_v1);
+
 #ifdef CONFIG_DEBUG_FS
 static int barrier_nospec_set(void *data, u64 val)
 {
-- 
2.20.1



* [PATCH stable v4.4 34/52] powerpc/64: Make stf barrier PPC_BOOK3S_64 specific.
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:20   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:20 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

From: Diana Craciun <diana.craciun@nxp.com>

commit 6453b532f2c8856a80381e6b9a1f5ea2f12294df upstream.

NXP Book3E platforms are not vulnerable to speculative store
bypass, so make the mitigations PPC_BOOK3S_64 specific.

Signed-off-by: Diana Craciun <diana.craciun@nxp.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/kernel/security.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/powerpc/kernel/security.c b/arch/powerpc/kernel/security.c
index acfb11853d29..b5b7f047fb19 100644
--- a/arch/powerpc/kernel/security.c
+++ b/arch/powerpc/kernel/security.c
@@ -177,6 +177,7 @@ ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr, c
 	return s.len;
 }
 
+#ifdef CONFIG_PPC_BOOK3S_64
 /*
  * Store-forwarding barrier support.
  */
@@ -322,3 +323,4 @@ static __init int stf_barrier_debugfs_init(void)
 }
 device_initcall(stf_barrier_debugfs_init);
 #endif /* CONFIG_DEBUG_FS */
+#endif /* CONFIG_PPC_BOOK3S_64 */
-- 
2.20.1



* [PATCH stable v4.4 35/52] powerpc/64: Add CONFIG_PPC_BARRIER_NOSPEC
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:20   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:20 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

commit 179ab1cbf883575c3a585bcfc0f2160f1d22a149 upstream.

Add a config symbol to encode which platforms support the
barrier_nospec speculation barrier. Currently this is just Book3S 64
but we will add Book3E in a future patch.

Signed-off-by: Diana Craciun <diana.craciun@nxp.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/Kconfig               | 7 ++++++-
 arch/powerpc/include/asm/barrier.h | 6 +++---
 arch/powerpc/include/asm/setup.h   | 2 +-
 arch/powerpc/kernel/Makefile       | 3 ++-
 arch/powerpc/kernel/module.c       | 4 +++-
 arch/powerpc/kernel/vmlinux.lds.S  | 4 +++-
 arch/powerpc/lib/feature-fixups.c  | 6 ++++--
 7 files changed, 22 insertions(+), 10 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 58a1fa979655..9d16459632bb 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -136,7 +136,7 @@ config PPC
 	select GENERIC_SMP_IDLE_THREAD
 	select GENERIC_CMOS_UPDATE
 	select GENERIC_TIME_VSYSCALL_OLD
-	select GENERIC_CPU_VULNERABILITIES	if PPC_BOOK3S_64
+	select GENERIC_CPU_VULNERABILITIES	if PPC_BARRIER_NOSPEC
 	select GENERIC_CLOCKEVENTS
 	select GENERIC_CLOCKEVENTS_BROADCAST if SMP
 	select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
@@ -162,6 +162,11 @@ config PPC
 	select ARCH_HAS_DMA_SET_COHERENT_MASK
 	select HAVE_ARCH_SECCOMP_FILTER
 
+config PPC_BARRIER_NOSPEC
+    bool
+    default y
+    depends on PPC_BOOK3S_64
+
 config GENERIC_CSUM
 	def_bool CPU_LITTLE_ENDIAN
 
diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
index 8e7cbf0ea614..a422e4a69c1a 100644
--- a/arch/powerpc/include/asm/barrier.h
+++ b/arch/powerpc/include/asm/barrier.h
@@ -92,7 +92,7 @@ do {									\
 #define smp_mb__after_atomic()      smp_mb()
 #define smp_mb__before_spinlock()   smp_mb()
 
-#ifdef CONFIG_PPC_BOOK3S_64
+#ifdef CONFIG_PPC_BARRIER_NOSPEC
 /*
  * Prevent execution of subsequent instructions until preceding branches have
  * been fully resolved and are no longer executing speculatively.
@@ -102,9 +102,9 @@ do {									\
 // This also acts as a compiler barrier due to the memory clobber.
 #define barrier_nospec() asm (stringify_in_c(barrier_nospec_asm) ::: "memory")
 
-#else /* !CONFIG_PPC_BOOK3S_64 */
+#else /* !CONFIG_PPC_BARRIER_NOSPEC */
 #define barrier_nospec_asm
 #define barrier_nospec()
-#endif
+#endif /* CONFIG_PPC_BARRIER_NOSPEC */
 
 #endif /* _ASM_POWERPC_BARRIER_H */
diff --git a/arch/powerpc/include/asm/setup.h b/arch/powerpc/include/asm/setup.h
index ac4002e5a09e..217a53ceecf3 100644
--- a/arch/powerpc/include/asm/setup.h
+++ b/arch/powerpc/include/asm/setup.h
@@ -42,7 +42,7 @@ void setup_barrier_nospec(void);
 void do_barrier_nospec_fixups(bool enable);
 extern bool barrier_nospec_enabled;
 
-#ifdef CONFIG_PPC_BOOK3S_64
+#ifdef CONFIG_PPC_BARRIER_NOSPEC
 void do_barrier_nospec_fixups_range(bool enable, void *start, void *end);
 #else
 static inline void do_barrier_nospec_fixups_range(bool enable, void *start, void *end) { };
diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index e9b0962743b8..22ed3c32fca8 100644
--- a/arch/powerpc/kernel/Makefile
+++ b/arch/powerpc/kernel/Makefile
@@ -40,10 +40,11 @@ obj-$(CONFIG_PPC64)		+= setup_64.o sys_ppc32.o \
 obj-$(CONFIG_VDSO32)		+= vdso32/
 obj-$(CONFIG_HAVE_HW_BREAKPOINT)	+= hw_breakpoint.o
 obj-$(CONFIG_PPC_BOOK3S_64)	+= cpu_setup_ppc970.o cpu_setup_pa6t.o
-obj-$(CONFIG_PPC_BOOK3S_64)	+= cpu_setup_power.o security.o
+obj-$(CONFIG_PPC_BOOK3S_64)	+= cpu_setup_power.o
 obj-$(CONFIG_PPC_BOOK3S_64)	+= mce.o mce_power.o
 obj64-$(CONFIG_RELOCATABLE)	+= reloc_64.o
 obj-$(CONFIG_PPC_BOOK3E_64)	+= exceptions-64e.o idle_book3e.o
+obj-$(CONFIG_PPC_BARRIER_NOSPEC) += security.o
 obj-$(CONFIG_PPC64)		+= vdso64/
 obj-$(CONFIG_ALTIVEC)		+= vecemu.o
 obj-$(CONFIG_PPC_970_NAP)	+= idle_power4.o
diff --git a/arch/powerpc/kernel/module.c b/arch/powerpc/kernel/module.c
index 340528d79233..ff009be97a42 100644
--- a/arch/powerpc/kernel/module.c
+++ b/arch/powerpc/kernel/module.c
@@ -67,13 +67,15 @@ int module_finalize(const Elf_Ehdr *hdr,
 		do_feature_fixups(powerpc_firmware_features,
 				  (void *)sect->sh_addr,
 				  (void *)sect->sh_addr + sect->sh_size);
+#endif /* CONFIG_PPC64 */
 
+#ifdef CONFIG_PPC_BARRIER_NOSPEC
 	sect = find_section(hdr, sechdrs, "__spec_barrier_fixup");
 	if (sect != NULL)
 		do_barrier_nospec_fixups_range(barrier_nospec_enabled,
 				  (void *)sect->sh_addr,
 				  (void *)sect->sh_addr + sect->sh_size);
-#endif
+#endif /* CONFIG_PPC_BARRIER_NOSPEC */
 
 	sect = find_section(hdr, sechdrs, "__lwsync_fixup");
 	if (sect != NULL)
diff --git a/arch/powerpc/kernel/vmlinux.lds.S b/arch/powerpc/kernel/vmlinux.lds.S
index 977e859b4d4c..4f9e7733e015 100644
--- a/arch/powerpc/kernel/vmlinux.lds.S
+++ b/arch/powerpc/kernel/vmlinux.lds.S
@@ -93,14 +93,16 @@ SECTIONS
 		*(__rfi_flush_fixup)
 		__stop___rfi_flush_fixup = .;
 	}
+#endif /* CONFIG_PPC64 */
 
+#ifdef CONFIG_PPC_BARRIER_NOSPEC
 	. = ALIGN(8);
 	__spec_barrier_fixup : AT(ADDR(__spec_barrier_fixup) - LOAD_OFFSET) {
 		__start___barrier_nospec_fixup = .;
 		*(__barrier_nospec_fixup)
 		__stop___barrier_nospec_fixup = .;
 	}
-#endif
+#endif /* CONFIG_PPC_BARRIER_NOSPEC */
 
 	EXCEPTION_TABLE(0)
 
diff --git a/arch/powerpc/lib/feature-fixups.c b/arch/powerpc/lib/feature-fixups.c
index e9373f41f0da..64f216af3c1f 100644
--- a/arch/powerpc/lib/feature-fixups.c
+++ b/arch/powerpc/lib/feature-fixups.c
@@ -301,6 +301,9 @@ void do_barrier_nospec_fixups_range(bool enable, void *fixup_start, void *fixup_
 	printk(KERN_DEBUG "barrier-nospec: patched %d locations\n", i);
 }
 
+#endif /* CONFIG_PPC_BOOK3S_64 */
+
+#ifdef CONFIG_PPC_BARRIER_NOSPEC
 void do_barrier_nospec_fixups(bool enable)
 {
 	void *start, *end;
@@ -310,8 +313,7 @@ void do_barrier_nospec_fixups(bool enable)
 
 	do_barrier_nospec_fixups_range(enable, start, end);
 }
-
-#endif /* CONFIG_PPC_BOOK3S_64 */
+#endif /* CONFIG_PPC_BARRIER_NOSPEC */
 
 void do_lwsync_fixups(unsigned long value, void *fixup_start, void *fixup_end)
 {
-- 
2.20.1



* [PATCH stable v4.4 36/52] powerpc/64: Call setup_barrier_nospec() from setup_arch()
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:20   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:20 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

commit af375eefbfb27cbb5b831984e66d724a40d26b5c upstream.

Currently we require platform code to call setup_barrier_nospec(). But
if we add an empty definition for the !CONFIG_PPC_BARRIER_NOSPEC case
then we can call it in setup_arch().

Signed-off-by: Diana Craciun <diana.craciun@nxp.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/include/asm/setup.h       | 4 ++++
 arch/powerpc/kernel/setup_32.c         | 2 ++
 arch/powerpc/kernel/setup_64.c         | 2 ++
 arch/powerpc/platforms/powernv/setup.c | 1 -
 arch/powerpc/platforms/pseries/setup.c | 1 -
 5 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/include/asm/setup.h b/arch/powerpc/include/asm/setup.h
index 217a53ceecf3..ca6f8713e7ad 100644
--- a/arch/powerpc/include/asm/setup.h
+++ b/arch/powerpc/include/asm/setup.h
@@ -38,7 +38,11 @@ enum l1d_flush_type {
 
 void setup_rfi_flush(enum l1d_flush_type, bool enable);
 void do_rfi_flush_fixups(enum l1d_flush_type types);
+#ifdef CONFIG_PPC_BARRIER_NOSPEC
 void setup_barrier_nospec(void);
+#else
+static inline void setup_barrier_nospec(void) { };
+#endif
 void do_barrier_nospec_fixups(bool enable);
 extern bool barrier_nospec_enabled;
 
diff --git a/arch/powerpc/kernel/setup_32.c b/arch/powerpc/kernel/setup_32.c
index ad8c9db61237..5a9f035bcd6b 100644
--- a/arch/powerpc/kernel/setup_32.c
+++ b/arch/powerpc/kernel/setup_32.c
@@ -322,6 +322,8 @@ void __init setup_arch(char **cmdline_p)
 		ppc_md.setup_arch();
 	if ( ppc_md.progress ) ppc_md.progress("arch: exit", 0x3eab);
 
+	setup_barrier_nospec();
+
 	paging_init();
 
 	/* Initialize the MMU context management stuff */
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index 64c1e76b5972..6bb731ababc6 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -736,6 +736,8 @@ void __init setup_arch(char **cmdline_p)
 	if (ppc_md.setup_arch)
 		ppc_md.setup_arch();
 
+	setup_barrier_nospec();
+
 	paging_init();
 
 	/* Initialize the MMU context management stuff */
diff --git a/arch/powerpc/platforms/powernv/setup.c b/arch/powerpc/platforms/powernv/setup.c
index 0fe70973e3a3..c3df9e1ad135 100644
--- a/arch/powerpc/platforms/powernv/setup.c
+++ b/arch/powerpc/platforms/powernv/setup.c
@@ -123,7 +123,6 @@ static void pnv_setup_rfi_flush(void)
 		  security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV));
 
 	setup_rfi_flush(type, enable);
-	setup_barrier_nospec();
 }
 
 static void __init pnv_setup_arch(void)
diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
index 0a6e091a6778..1e2b61d178b8 100644
--- a/arch/powerpc/platforms/pseries/setup.c
+++ b/arch/powerpc/platforms/pseries/setup.c
@@ -574,7 +574,6 @@ void pseries_setup_rfi_flush(void)
 		 security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR);
 
 	setup_rfi_flush(types, enable);
-	setup_barrier_nospec();
 }
 
 static void __init pSeries_setup_arch(void)
-- 
2.20.1



* [PATCH stable v4.4 37/52] powerpc/64: Make meltdown reporting Book3S 64 specific
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:20   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:20 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

From: Diana Craciun <diana.craciun@nxp.com>

commit 406d2b6ae3420f5bb2b3db6986dc6f0b6dbb637b upstream.

In a subsequent patch we will enable building security.c for Book3E.
However the NXP platforms are not vulnerable to Meltdown, so make the
Meltdown vulnerability reporting PPC_BOOK3S_64 specific.

Signed-off-by: Diana Craciun <diana.craciun@nxp.com>
[mpe: Split out of larger patch]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/kernel/security.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/powerpc/kernel/security.c b/arch/powerpc/kernel/security.c
index b5b7f047fb19..e04afd171b13 100644
--- a/arch/powerpc/kernel/security.c
+++ b/arch/powerpc/kernel/security.c
@@ -93,6 +93,7 @@ static __init int barrier_nospec_debugfs_init(void)
 device_initcall(barrier_nospec_debugfs_init);
 #endif /* CONFIG_DEBUG_FS */
 
+#ifdef CONFIG_PPC_BOOK3S_64
 ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr, char *buf)
 {
 	bool thread_priv;
@@ -125,6 +126,7 @@ ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr, cha
 
 	return sprintf(buf, "Vulnerable\n");
 }
+#endif
 
 ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr, char *buf)
 {
-- 
2.20.1



* [PATCH stable v4.4 38/52] powerpc/fsl: Add barrier_nospec implementation for NXP PowerPC Book3E
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:20   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:20 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

From: Diana Craciun <diana.craciun@nxp.com>

commit ebcd1bfc33c7a90df941df68a6e5d4018c022fba upstream.

Implement barrier_nospec as an isync; sync instruction sequence.
The implementation uses the infrastructure built for Book3S 64.

Signed-off-by: Diana Craciun <diana.craciun@nxp.com>
[mpe: Add PPC_INST_ISYNC for backport]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/Kconfig                  |  2 +-
 arch/powerpc/include/asm/barrier.h    |  8 ++++++-
 arch/powerpc/include/asm/ppc-opcode.h |  1 +
 arch/powerpc/lib/feature-fixups.c     | 31 +++++++++++++++++++++++++++
 4 files changed, 40 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 9d16459632bb..01b6c00a7060 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -165,7 +165,7 @@ config PPC
 config PPC_BARRIER_NOSPEC
     bool
     default y
-    depends on PPC_BOOK3S_64
+    depends on PPC_BOOK3S_64 || PPC_FSL_BOOK3E
 
 config GENERIC_CSUM
 	def_bool CPU_LITTLE_ENDIAN
diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
index a422e4a69c1a..e7cb72cdb2ba 100644
--- a/arch/powerpc/include/asm/barrier.h
+++ b/arch/powerpc/include/asm/barrier.h
@@ -92,12 +92,18 @@ do {									\
 #define smp_mb__after_atomic()      smp_mb()
 #define smp_mb__before_spinlock()   smp_mb()
 
+#ifdef CONFIG_PPC_BOOK3S_64
+#define NOSPEC_BARRIER_SLOT   nop
+#elif defined(CONFIG_PPC_FSL_BOOK3E)
+#define NOSPEC_BARRIER_SLOT   nop; nop
+#endif
+
 #ifdef CONFIG_PPC_BARRIER_NOSPEC
 /*
  * Prevent execution of subsequent instructions until preceding branches have
  * been fully resolved and are no longer executing speculatively.
  */
-#define barrier_nospec_asm NOSPEC_BARRIER_FIXUP_SECTION; nop
+#define barrier_nospec_asm NOSPEC_BARRIER_FIXUP_SECTION; NOSPEC_BARRIER_SLOT
 
 // This also acts as a compiler barrier due to the memory clobber.
 #define barrier_nospec() asm (stringify_in_c(barrier_nospec_asm) ::: "memory")
diff --git a/arch/powerpc/include/asm/ppc-opcode.h b/arch/powerpc/include/asm/ppc-opcode.h
index 7ab04fc59e24..faf1bb045dee 100644
--- a/arch/powerpc/include/asm/ppc-opcode.h
+++ b/arch/powerpc/include/asm/ppc-opcode.h
@@ -147,6 +147,7 @@
 #define PPC_INST_LWSYNC			0x7c2004ac
 #define PPC_INST_SYNC			0x7c0004ac
 #define PPC_INST_SYNC_MASK		0xfc0007fe
+#define PPC_INST_ISYNC			0x4c00012c
 #define PPC_INST_LXVD2X			0x7c000698
 #define PPC_INST_MCRXR			0x7c000400
 #define PPC_INST_MCRXR_MASK		0xfc0007fe
diff --git a/arch/powerpc/lib/feature-fixups.c b/arch/powerpc/lib/feature-fixups.c
index 64f216af3c1f..68e089a48b2f 100644
--- a/arch/powerpc/lib/feature-fixups.c
+++ b/arch/powerpc/lib/feature-fixups.c
@@ -315,6 +315,37 @@ void do_barrier_nospec_fixups(bool enable)
 }
 #endif /* CONFIG_PPC_BARRIER_NOSPEC */
 
+#ifdef CONFIG_PPC_FSL_BOOK3E
+void do_barrier_nospec_fixups_range(bool enable, void *fixup_start, void *fixup_end)
+{
+	unsigned int instr[2], *dest;
+	long *start, *end;
+	int i;
+
+	start = fixup_start;
+	end = fixup_end;
+
+	instr[0] = PPC_INST_NOP;
+	instr[1] = PPC_INST_NOP;
+
+	if (enable) {
+		pr_info("barrier-nospec: using isync; sync as speculation barrier\n");
+		instr[0] = PPC_INST_ISYNC;
+		instr[1] = PPC_INST_SYNC;
+	}
+
+	for (i = 0; start < end; start++, i++) {
+		dest = (void *)start + *start;
+
+		pr_devel("patching dest %lx\n", (unsigned long)dest);
+		patch_instruction(dest, instr[0]);
+		patch_instruction(dest + 1, instr[1]);
+	}
+
+	printk(KERN_DEBUG "barrier-nospec: patched %d locations\n", i);
+}
+#endif /* CONFIG_PPC_FSL_BOOK3E */
+
 void do_lwsync_fixups(unsigned long value, void *fixup_start, void *fixup_end)
 {
 	long *start, *end;
-- 
2.20.1




* [PATCH stable v4.4 39/52] powerpc/asm: Add a patch_site macro & helpers for patching instructions
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:20   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:20 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

commit 06d0bbc6d0f56dacac3a79900e9a9a0d5972d818 upstream.

Add a macro and some helper C functions for patching single asm
instructions.

The gas macro means we can do something like:

  1:	nop
  	patch_site 1b, patch__foo

This is less visually distracting than defining a GLOBAL symbol at 1,
and it also doesn't pollute the symbol table, which can confuse e.g. perf.

These are obviously similar to our existing feature sections, but are
not automatically patched based on CPU/MMU features, rather they are
designed to be manually patched by C code at some arbitrary point.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/include/asm/code-patching-asm.h | 18 ++++++++++++++++++
 arch/powerpc/include/asm/code-patching.h     |  2 ++
 arch/powerpc/lib/code-patching.c             | 16 ++++++++++++++++
 3 files changed, 36 insertions(+)
 create mode 100644 arch/powerpc/include/asm/code-patching-asm.h

diff --git a/arch/powerpc/include/asm/code-patching-asm.h b/arch/powerpc/include/asm/code-patching-asm.h
new file mode 100644
index 000000000000..ed7b1448493a
--- /dev/null
+++ b/arch/powerpc/include/asm/code-patching-asm.h
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/*
+ * Copyright 2018, Michael Ellerman, IBM Corporation.
+ */
+#ifndef _ASM_POWERPC_CODE_PATCHING_ASM_H
+#define _ASM_POWERPC_CODE_PATCHING_ASM_H
+
+/* Define a "site" that can be patched */
+.macro patch_site label name
+	.pushsection ".rodata"
+	.balign 4
+	.global \name
+\name:
+	.4byte	\label - .
+	.popsection
+.endm
+
+#endif /* _ASM_POWERPC_CODE_PATCHING_ASM_H */
diff --git a/arch/powerpc/include/asm/code-patching.h b/arch/powerpc/include/asm/code-patching.h
index 840a5509b3f1..a734b4b34d26 100644
--- a/arch/powerpc/include/asm/code-patching.h
+++ b/arch/powerpc/include/asm/code-patching.h
@@ -28,6 +28,8 @@ unsigned int create_cond_branch(const unsigned int *addr,
 				unsigned long target, int flags);
 int patch_branch(unsigned int *addr, unsigned long target, int flags);
 int patch_instruction(unsigned int *addr, unsigned int instr);
+int patch_instruction_site(s32 *addr, unsigned int instr);
+int patch_branch_site(s32 *site, unsigned long target, int flags);
 
 int instr_is_relative_branch(unsigned int instr);
 int instr_is_branch_to_addr(const unsigned int *instr, unsigned long addr);
diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
index d5edbeb8eb82..2ce6159d8983 100644
--- a/arch/powerpc/lib/code-patching.c
+++ b/arch/powerpc/lib/code-patching.c
@@ -32,6 +32,22 @@ int patch_branch(unsigned int *addr, unsigned long target, int flags)
 	return patch_instruction(addr, create_branch(addr, target, flags));
 }
 
+int patch_branch_site(s32 *site, unsigned long target, int flags)
+{
+	unsigned int *addr;
+
+	addr = (unsigned int *)((unsigned long)site + *site);
+	return patch_instruction(addr, create_branch(addr, target, flags));
+}
+
+int patch_instruction_site(s32 *site, unsigned int instr)
+{
+	unsigned int *addr;
+
+	addr = (unsigned int *)((unsigned long)site + *site);
+	return patch_instruction(addr, instr);
+}
+
 unsigned int create_branch(const unsigned int *addr,
 			   unsigned long target, int flags)
 {
-- 
2.20.1




* [PATCH stable v4.4 40/52] powerpc/64s: Add new security feature flags for count cache flush
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:20   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:20 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

commit dc8c6cce9a26a51fc19961accb978217a3ba8c75 upstream.

Add security feature flags to indicate the need for software to flush
the count cache on context switch, and for the presence of a hardware
assisted count cache flush.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/include/asm/security_features.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/powerpc/include/asm/security_features.h b/arch/powerpc/include/asm/security_features.h
index 44989b22383c..a0d47bc18a5c 100644
--- a/arch/powerpc/include/asm/security_features.h
+++ b/arch/powerpc/include/asm/security_features.h
@@ -59,6 +59,9 @@ static inline bool security_ftr_enabled(unsigned long feature)
 // Indirect branch prediction cache disabled
 #define SEC_FTR_COUNT_CACHE_DISABLED	0x0000000000000020ull
 
+// bcctr 2,0,0 triggers a hardware assisted count cache flush
+#define SEC_FTR_BCCTR_FLUSH_ASSIST	0x0000000000000800ull
+
 
 // Features indicating need for Spectre/Meltdown mitigations
 
@@ -74,6 +77,9 @@ static inline bool security_ftr_enabled(unsigned long feature)
 // Firmware configuration indicates user favours security over performance
 #define SEC_FTR_FAVOUR_SECURITY		0x0000000000000200ull
 
+// Software required to flush count cache on context switch
+#define SEC_FTR_FLUSH_COUNT_CACHE	0x0000000000000400ull
+
 
 // Features enabled by default
 #define SEC_FTR_DEFAULT \
-- 
2.20.1




* [PATCH stable v4.4 41/52] powerpc/64s: Add support for software count cache flush
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:20   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:20 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

commit ee13cb249fabdff8b90aaff61add347749280087 upstream.

Some CPU revisions support a mode where the count cache needs to be
flushed by software on context switch. Additionally some revisions may
have a hardware accelerated flush, in which case the software flush
sequence can be shortened.

If we detect the appropriate flag from firmware we patch a branch
into _switch() which takes us to a count cache flush sequence.

That sequence in turn may be patched to return early if we detect that
the CPU supports accelerating the flush sequence in hardware.

Add debugfs support for reporting the state of the flush, as well as
disabling it at runtime.

And modify the spectre_v2 sysfs file to report the state of the
software flush.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/include/asm/asm-prototypes.h    | 21 +++++
 arch/powerpc/include/asm/security_features.h |  1 +
 arch/powerpc/kernel/entry_64.S               | 54 +++++++++++
 arch/powerpc/kernel/security.c               | 98 +++++++++++++++++++-
 4 files changed, 169 insertions(+), 5 deletions(-)
 create mode 100644 arch/powerpc/include/asm/asm-prototypes.h

diff --git a/arch/powerpc/include/asm/asm-prototypes.h b/arch/powerpc/include/asm/asm-prototypes.h
new file mode 100644
index 000000000000..8944c55591cf
--- /dev/null
+++ b/arch/powerpc/include/asm/asm-prototypes.h
@@ -0,0 +1,21 @@
+#ifndef _ASM_POWERPC_ASM_PROTOTYPES_H
+#define _ASM_POWERPC_ASM_PROTOTYPES_H
+/*
+ * This file is for prototypes of C functions that are only called
+ * from asm, and any associated variables.
+ *
+ * Copyright 2016, Daniel Axtens, IBM Corporation.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ */
+
+/* Patch sites */
+extern s32 patch__call_flush_count_cache;
+extern s32 patch__flush_count_cache_return;
+
+extern long flush_count_cache;
+
+#endif /* _ASM_POWERPC_ASM_PROTOTYPES_H */
diff --git a/arch/powerpc/include/asm/security_features.h b/arch/powerpc/include/asm/security_features.h
index a0d47bc18a5c..759597bf0fd8 100644
--- a/arch/powerpc/include/asm/security_features.h
+++ b/arch/powerpc/include/asm/security_features.h
@@ -22,6 +22,7 @@ enum stf_barrier_type {
 
 void setup_stf_barrier(void);
 void do_stf_barrier_fixups(enum stf_barrier_type types);
+void setup_count_cache_flush(void);
 
 static inline void security_ftr_set(unsigned long feature)
 {
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index 9c2e58e0e55e..698bb51f5399 100644
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -25,6 +25,7 @@
 #include <asm/page.h>
 #include <asm/mmu.h>
 #include <asm/thread_info.h>
+#include <asm/code-patching-asm.h>
 #include <asm/ppc_asm.h>
 #include <asm/asm-offsets.h>
 #include <asm/cputable.h>
@@ -450,6 +451,57 @@ _GLOBAL(ret_from_kernel_thread)
 	li	r3,0
 	b	.Lsyscall_exit
 
+#ifdef CONFIG_PPC_BOOK3S_64
+
+#define FLUSH_COUNT_CACHE	\
+1:	nop;			\
+	patch_site 1b, patch__call_flush_count_cache
+
+
+#define BCCTR_FLUSH	.long 0x4c400420
+
+.macro nops number
+	.rept \number
+	nop
+	.endr
+.endm
+
+.balign 32
+.global flush_count_cache
+flush_count_cache:
+	/* Save LR into r9 */
+	mflr	r9
+
+	.rept 64
+	bl	.+4
+	.endr
+	b	1f
+	nops	6
+
+	.balign 32
+	/* Restore LR */
+1:	mtlr	r9
+	li	r9,0x7fff
+	mtctr	r9
+
+	BCCTR_FLUSH
+
+2:	nop
+	patch_site 2b patch__flush_count_cache_return
+
+	nops	3
+
+	.rept 278
+	.balign 32
+	BCCTR_FLUSH
+	nops	7
+	.endr
+
+	blr
+#else
+#define FLUSH_COUNT_CACHE
+#endif /* CONFIG_PPC_BOOK3S_64 */
+
 /*
  * This routine switches between two different tasks.  The process
  * state of one is saved on its kernel stack.  Then the state
@@ -513,6 +565,8 @@ BEGIN_FTR_SECTION
 END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
 #endif
 
+	FLUSH_COUNT_CACHE
+
 #ifdef CONFIG_SMP
 	/* We need a sync somewhere here to make sure that if the
 	 * previous task gets rescheduled on another CPU, it sees all
diff --git a/arch/powerpc/kernel/security.c b/arch/powerpc/kernel/security.c
index e04afd171b13..108d271a218d 100644
--- a/arch/powerpc/kernel/security.c
+++ b/arch/powerpc/kernel/security.c
@@ -10,12 +10,21 @@
 #include <linux/seq_buf.h>
 
 #include <asm/debug.h>
+#include <asm/asm-prototypes.h>
+#include <asm/code-patching.h>
 #include <asm/security_features.h>
 #include <asm/setup.h>
 
 
 unsigned long powerpc_security_features __read_mostly = SEC_FTR_DEFAULT;
 
+enum count_cache_flush_type {
+	COUNT_CACHE_FLUSH_NONE	= 0x1,
+	COUNT_CACHE_FLUSH_SW	= 0x2,
+	COUNT_CACHE_FLUSH_HW	= 0x4,
+};
+static enum count_cache_flush_type count_cache_flush_type;
+
 bool barrier_nospec_enabled;
 static bool no_nospec;
 
@@ -160,17 +169,29 @@ ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr, c
 	bcs = security_ftr_enabled(SEC_FTR_BCCTRL_SERIALISED);
 	ccd = security_ftr_enabled(SEC_FTR_COUNT_CACHE_DISABLED);
 
-	if (bcs || ccd) {
+	if (bcs || ccd || count_cache_flush_type != COUNT_CACHE_FLUSH_NONE) {
+		bool comma = false;
 		seq_buf_printf(&s, "Mitigation: ");
 
-		if (bcs)
+		if (bcs) {
 			seq_buf_printf(&s, "Indirect branch serialisation (kernel only)");
+			comma = true;
+		}
+
+		if (ccd) {
+			if (comma)
+				seq_buf_printf(&s, ", ");
+			seq_buf_printf(&s, "Indirect branch cache disabled");
+			comma = true;
+		}
 
-		if (bcs && ccd)
+		if (comma)
 			seq_buf_printf(&s, ", ");
 
-		if (ccd)
-			seq_buf_printf(&s, "Indirect branch cache disabled");
+		seq_buf_printf(&s, "Software count cache flush");
+
+		if (count_cache_flush_type == COUNT_CACHE_FLUSH_HW)
+			seq_buf_printf(&s, "(hardware accelerated)");
 	} else
 		seq_buf_printf(&s, "Vulnerable");
 
@@ -325,4 +346,71 @@ static __init int stf_barrier_debugfs_init(void)
 }
 device_initcall(stf_barrier_debugfs_init);
 #endif /* CONFIG_DEBUG_FS */
+
+static void toggle_count_cache_flush(bool enable)
+{
+	if (!enable || !security_ftr_enabled(SEC_FTR_FLUSH_COUNT_CACHE)) {
+		patch_instruction_site(&patch__call_flush_count_cache, PPC_INST_NOP);
+		count_cache_flush_type = COUNT_CACHE_FLUSH_NONE;
+		pr_info("count-cache-flush: software flush disabled.\n");
+		return;
+	}
+
+	patch_branch_site(&patch__call_flush_count_cache,
+			  (u64)&flush_count_cache, BRANCH_SET_LINK);
+
+	if (!security_ftr_enabled(SEC_FTR_BCCTR_FLUSH_ASSIST)) {
+		count_cache_flush_type = COUNT_CACHE_FLUSH_SW;
+		pr_info("count-cache-flush: full software flush sequence enabled.\n");
+		return;
+	}
+
+	patch_instruction_site(&patch__flush_count_cache_return, PPC_INST_BLR);
+	count_cache_flush_type = COUNT_CACHE_FLUSH_HW;
+	pr_info("count-cache-flush: hardware assisted flush sequence enabled\n");
+}
+
+void setup_count_cache_flush(void)
+{
+	toggle_count_cache_flush(true);
+}
+
+#ifdef CONFIG_DEBUG_FS
+static int count_cache_flush_set(void *data, u64 val)
+{
+	bool enable;
+
+	if (val == 1)
+		enable = true;
+	else if (val == 0)
+		enable = false;
+	else
+		return -EINVAL;
+
+	toggle_count_cache_flush(enable);
+
+	return 0;
+}
+
+static int count_cache_flush_get(void *data, u64 *val)
+{
+	if (count_cache_flush_type == COUNT_CACHE_FLUSH_NONE)
+		*val = 0;
+	else
+		*val = 1;
+
+	return 0;
+}
+
+DEFINE_SIMPLE_ATTRIBUTE(fops_count_cache_flush, count_cache_flush_get,
+			count_cache_flush_set, "%llu\n");
+
+static __init int count_cache_flush_debugfs_init(void)
+{
+	debugfs_create_file("count_cache_flush", 0600, powerpc_debugfs_root,
+			    NULL, &fops_count_cache_flush);
+	return 0;
+}
+device_initcall(count_cache_flush_debugfs_init);
+#endif /* CONFIG_DEBUG_FS */
 #endif /* CONFIG_PPC_BOOK3S_64 */
-- 
2.20.1



* [PATCH stable v4.4 41/52] powerpc/64s: Add support for software count cache flush
@ 2019-04-21 14:20   ` Michael Ellerman
  0 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:20 UTC (permalink / raw)
  To: stable, gregkh; +Cc: diana.craciun, linuxppc-dev, msuchanek, npiggin

commit ee13cb249fabdff8b90aaff61add347749280087 upstream.

Some CPU revisions support a mode where the count cache needs to be
flushed by software on context switch. Additionally some revisions may
have a hardware accelerated flush, in which case the software flush
sequence can be shortened.

If we detect the appropriate flag from firmware we patch a branch
into _switch() which takes us to a count cache flush sequence.

That sequence in turn may be patched to return early if we detect that
the CPU supports accelerating the flush sequence in hardware.

Add debugfs support for reporting the state of the flush, as well as
runtime disabling it.

And modify the spectre_v2 sysfs file to report the state of the
software flush.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/include/asm/asm-prototypes.h    | 21 +++++
 arch/powerpc/include/asm/security_features.h |  1 +
 arch/powerpc/kernel/entry_64.S               | 54 +++++++++++
 arch/powerpc/kernel/security.c               | 98 +++++++++++++++++++-
 4 files changed, 169 insertions(+), 5 deletions(-)
 create mode 100644 arch/powerpc/include/asm/asm-prototypes.h

diff --git a/arch/powerpc/include/asm/asm-prototypes.h b/arch/powerpc/include/asm/asm-prototypes.h
new file mode 100644
index 000000000000..8944c55591cf
--- /dev/null
+++ b/arch/powerpc/include/asm/asm-prototypes.h
@@ -0,0 +1,21 @@
+#ifndef _ASM_POWERPC_ASM_PROTOTYPES_H
+#define _ASM_POWERPC_ASM_PROTOTYPES_H
+/*
+ * This file is for prototypes of C functions that are only called
+ * from asm, and any associated variables.
+ *
+ * Copyright 2016, Daniel Axtens, IBM Corporation.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ */
+
+/* Patch sites */
+extern s32 patch__call_flush_count_cache;
+extern s32 patch__flush_count_cache_return;
+
+extern long flush_count_cache;
+
+#endif /* _ASM_POWERPC_ASM_PROTOTYPES_H */
diff --git a/arch/powerpc/include/asm/security_features.h b/arch/powerpc/include/asm/security_features.h
index a0d47bc18a5c..759597bf0fd8 100644
--- a/arch/powerpc/include/asm/security_features.h
+++ b/arch/powerpc/include/asm/security_features.h
@@ -22,6 +22,7 @@ enum stf_barrier_type {
 
 void setup_stf_barrier(void);
 void do_stf_barrier_fixups(enum stf_barrier_type types);
+void setup_count_cache_flush(void);
 
 static inline void security_ftr_set(unsigned long feature)
 {
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index 9c2e58e0e55e..698bb51f5399 100644
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -25,6 +25,7 @@
 #include <asm/page.h>
 #include <asm/mmu.h>
 #include <asm/thread_info.h>
+#include <asm/code-patching-asm.h>
 #include <asm/ppc_asm.h>
 #include <asm/asm-offsets.h>
 #include <asm/cputable.h>
@@ -450,6 +451,57 @@ _GLOBAL(ret_from_kernel_thread)
 	li	r3,0
 	b	.Lsyscall_exit
 
+#ifdef CONFIG_PPC_BOOK3S_64
+
+#define FLUSH_COUNT_CACHE	\
+1:	nop;			\
+	patch_site 1b, patch__call_flush_count_cache
+
+
+#define BCCTR_FLUSH	.long 0x4c400420
+
+.macro nops number
+	.rept \number
+	nop
+	.endr
+.endm
+
+.balign 32
+.global flush_count_cache
+flush_count_cache:
+	/* Save LR into r9 */
+	mflr	r9
+
+	.rept 64
+	bl	.+4
+	.endr
+	b	1f
+	nops	6
+
+	.balign 32
+	/* Restore LR */
+1:	mtlr	r9
+	li	r9,0x7fff
+	mtctr	r9
+
+	BCCTR_FLUSH
+
+2:	nop
+	patch_site 2b patch__flush_count_cache_return
+
+	nops	3
+
+	.rept 278
+	.balign 32
+	BCCTR_FLUSH
+	nops	7
+	.endr
+
+	blr
+#else
+#define FLUSH_COUNT_CACHE
+#endif /* CONFIG_PPC_BOOK3S_64 */
+
 /*
  * This routine switches between two different tasks.  The process
  * state of one is saved on its kernel stack.  Then the state
@@ -513,6 +565,8 @@ BEGIN_FTR_SECTION
 END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
 #endif
 
+	FLUSH_COUNT_CACHE
+
 #ifdef CONFIG_SMP
 	/* We need a sync somewhere here to make sure that if the
 	 * previous task gets rescheduled on another CPU, it sees all
diff --git a/arch/powerpc/kernel/security.c b/arch/powerpc/kernel/security.c
index e04afd171b13..108d271a218d 100644
--- a/arch/powerpc/kernel/security.c
+++ b/arch/powerpc/kernel/security.c
@@ -10,12 +10,21 @@
 #include <linux/seq_buf.h>
 
 #include <asm/debug.h>
+#include <asm/asm-prototypes.h>
+#include <asm/code-patching.h>
 #include <asm/security_features.h>
 #include <asm/setup.h>
 
 
 unsigned long powerpc_security_features __read_mostly = SEC_FTR_DEFAULT;
 
+enum count_cache_flush_type {
+	COUNT_CACHE_FLUSH_NONE	= 0x1,
+	COUNT_CACHE_FLUSH_SW	= 0x2,
+	COUNT_CACHE_FLUSH_HW	= 0x4,
+};
+static enum count_cache_flush_type count_cache_flush_type;
+
 bool barrier_nospec_enabled;
 static bool no_nospec;
 
@@ -160,17 +169,29 @@ ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr, c
 	bcs = security_ftr_enabled(SEC_FTR_BCCTRL_SERIALISED);
 	ccd = security_ftr_enabled(SEC_FTR_COUNT_CACHE_DISABLED);
 
-	if (bcs || ccd) {
+	if (bcs || ccd || count_cache_flush_type != COUNT_CACHE_FLUSH_NONE) {
+		bool comma = false;
 		seq_buf_printf(&s, "Mitigation: ");
 
-		if (bcs)
+		if (bcs) {
 			seq_buf_printf(&s, "Indirect branch serialisation (kernel only)");
+			comma = true;
+		}
+
+		if (ccd) {
+			if (comma)
+				seq_buf_printf(&s, ", ");
+			seq_buf_printf(&s, "Indirect branch cache disabled");
+			comma = true;
+		}
 
-		if (bcs && ccd)
+		if (comma)
 			seq_buf_printf(&s, ", ");
 
-		if (ccd)
-			seq_buf_printf(&s, "Indirect branch cache disabled");
+		seq_buf_printf(&s, "Software count cache flush");
+
+		if (count_cache_flush_type == COUNT_CACHE_FLUSH_HW)
+			seq_buf_printf(&s, " (hardware accelerated)");
 	} else
 		seq_buf_printf(&s, "Vulnerable");
 
@@ -325,4 +346,71 @@ static __init int stf_barrier_debugfs_init(void)
 }
 device_initcall(stf_barrier_debugfs_init);
 #endif /* CONFIG_DEBUG_FS */
+
+static void toggle_count_cache_flush(bool enable)
+{
+	if (!enable || !security_ftr_enabled(SEC_FTR_FLUSH_COUNT_CACHE)) {
+		patch_instruction_site(&patch__call_flush_count_cache, PPC_INST_NOP);
+		count_cache_flush_type = COUNT_CACHE_FLUSH_NONE;
+		pr_info("count-cache-flush: software flush disabled.\n");
+		return;
+	}
+
+	patch_branch_site(&patch__call_flush_count_cache,
+			  (u64)&flush_count_cache, BRANCH_SET_LINK);
+
+	if (!security_ftr_enabled(SEC_FTR_BCCTR_FLUSH_ASSIST)) {
+		count_cache_flush_type = COUNT_CACHE_FLUSH_SW;
+		pr_info("count-cache-flush: full software flush sequence enabled.\n");
+		return;
+	}
+
+	patch_instruction_site(&patch__flush_count_cache_return, PPC_INST_BLR);
+	count_cache_flush_type = COUNT_CACHE_FLUSH_HW;
+	pr_info("count-cache-flush: hardware assisted flush sequence enabled\n");
+}
+
+void setup_count_cache_flush(void)
+{
+	toggle_count_cache_flush(true);
+}
+
+#ifdef CONFIG_DEBUG_FS
+static int count_cache_flush_set(void *data, u64 val)
+{
+	bool enable;
+
+	if (val == 1)
+		enable = true;
+	else if (val == 0)
+		enable = false;
+	else
+		return -EINVAL;
+
+	toggle_count_cache_flush(enable);
+
+	return 0;
+}
+
+static int count_cache_flush_get(void *data, u64 *val)
+{
+	if (count_cache_flush_type == COUNT_CACHE_FLUSH_NONE)
+		*val = 0;
+	else
+		*val = 1;
+
+	return 0;
+}
+
+DEFINE_SIMPLE_ATTRIBUTE(fops_count_cache_flush, count_cache_flush_get,
+			count_cache_flush_set, "%llu\n");
+
+static __init int count_cache_flush_debugfs_init(void)
+{
+	debugfs_create_file("count_cache_flush", 0600, powerpc_debugfs_root,
+			    NULL, &fops_count_cache_flush);
+	return 0;
+}
+device_initcall(count_cache_flush_debugfs_init);
+#endif /* CONFIG_DEBUG_FS */
 #endif /* CONFIG_PPC_BOOK3S_64 */
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 180+ messages in thread

* [PATCH stable v4.4 42/52] powerpc/pseries: Query hypervisor for count cache flush settings
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:20   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:20 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

commit ba72dc171954b782a79d25e0f4b3ed91090c3b1e upstream.

Use the existing hypercall to determine the appropriate settings for
the count cache flush, and then call the generic powerpc code to set
it up based on the security feature flags.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/include/asm/hvcall.h      | 2 ++
 arch/powerpc/platforms/pseries/setup.c | 7 +++++++
 2 files changed, 9 insertions(+)

diff --git a/arch/powerpc/include/asm/hvcall.h b/arch/powerpc/include/asm/hvcall.h
index 6d7938deb624..b57db9d09db9 100644
--- a/arch/powerpc/include/asm/hvcall.h
+++ b/arch/powerpc/include/asm/hvcall.h
@@ -295,10 +295,12 @@
 #define H_CPU_CHAR_BRANCH_HINTS_HONORED	(1ull << 58) // IBM bit 5
 #define H_CPU_CHAR_THREAD_RECONFIG_CTRL	(1ull << 57) // IBM bit 6
 #define H_CPU_CHAR_COUNT_CACHE_DISABLED	(1ull << 56) // IBM bit 7
+#define H_CPU_CHAR_BCCTR_FLUSH_ASSIST	(1ull << 54) // IBM bit 9
 
 #define H_CPU_BEHAV_FAVOUR_SECURITY	(1ull << 63) // IBM bit 0
 #define H_CPU_BEHAV_L1D_FLUSH_PR	(1ull << 62) // IBM bit 1
 #define H_CPU_BEHAV_BNDS_CHK_SPEC_BAR	(1ull << 61) // IBM bit 2
+#define H_CPU_BEHAV_FLUSH_COUNT_CACHE	(1ull << 58) // IBM bit 5
 
 #ifndef __ASSEMBLY__
 #include <linux/types.h>
diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
index 1e2b61d178b8..9cc976ff7fec 100644
--- a/arch/powerpc/platforms/pseries/setup.c
+++ b/arch/powerpc/platforms/pseries/setup.c
@@ -524,6 +524,12 @@ static void init_cpu_char_feature_flags(struct h_cpu_char_result *result)
 	if (result->character & H_CPU_CHAR_COUNT_CACHE_DISABLED)
 		security_ftr_set(SEC_FTR_COUNT_CACHE_DISABLED);
 
+	if (result->character & H_CPU_CHAR_BCCTR_FLUSH_ASSIST)
+		security_ftr_set(SEC_FTR_BCCTR_FLUSH_ASSIST);
+
+	if (result->behaviour & H_CPU_BEHAV_FLUSH_COUNT_CACHE)
+		security_ftr_set(SEC_FTR_FLUSH_COUNT_CACHE);
+
 	/*
 	 * The features below are enabled by default, so we instead look to see
 	 * if firmware has *disabled* them, and clear them if so.
@@ -574,6 +580,7 @@ void pseries_setup_rfi_flush(void)
 		 security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR);
 
 	setup_rfi_flush(types, enable);
+	setup_count_cache_flush();
 }
 
 static void __init pSeries_setup_arch(void)
-- 
2.20.1



* [PATCH stable v4.4 43/52] powerpc/powernv: Query firmware for count cache flush settings
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:20   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:20 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

commit 99d54754d3d5f896a8f616b0b6520662bc99d66b upstream.

Look for fw-features properties to determine the appropriate settings
for the count cache flush, and then call the generic powerpc code to
set it up based on the security feature flags.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/platforms/powernv/setup.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/powerpc/platforms/powernv/setup.c b/arch/powerpc/platforms/powernv/setup.c
index c3df9e1ad135..e14b52c7ebd8 100644
--- a/arch/powerpc/platforms/powernv/setup.c
+++ b/arch/powerpc/platforms/powernv/setup.c
@@ -77,6 +77,12 @@ static void init_fw_feat_flags(struct device_node *np)
 	if (fw_feature_is("enabled", "fw-count-cache-disabled", np))
 		security_ftr_set(SEC_FTR_COUNT_CACHE_DISABLED);
 
+	if (fw_feature_is("enabled", "fw-count-cache-flush-bcctr2,0,0", np))
+		security_ftr_set(SEC_FTR_BCCTR_FLUSH_ASSIST);
+
+	if (fw_feature_is("enabled", "needs-count-cache-flush-on-context-switch", np))
+		security_ftr_set(SEC_FTR_FLUSH_COUNT_CACHE);
+
 	/*
 	 * The features below are enabled by default, so we instead look to see
 	 * if firmware has *disabled* them, and clear them if so.
@@ -123,6 +129,7 @@ static void pnv_setup_rfi_flush(void)
 		  security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV));
 
 	setup_rfi_flush(type, enable);
+	setup_count_cache_flush();
 }
 
 static void __init pnv_setup_arch(void)
-- 
2.20.1



* [PATCH stable v4.4 44/52] powerpc: Avoid code patching freed init sections
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:20   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:20 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

From: Michael Neuling <mikey@neuling.org>

commit 51c3c62b58b357e8d35e4cc32f7b4ec907426fe3 upstream.

This stops us from doing code patching in init sections after they've
been freed.

In this chain:
  kvm_guest_init() ->
    kvm_use_magic_page() ->
      fault_in_pages_readable() ->
	 __get_user() ->
	   __get_user_nocheck() ->
	     barrier_nospec();

We have a code patching location at barrier_nospec() and
kvm_guest_init() is an init function. This whole chain gets inlined,
so when we free the init section (hence kvm_guest_init()), this code
goes away and hence should no longer be patched.

We have seen this as userspace memory corruption when using a memory
checker while doing partition migration testing on PowerVM (this
starts the code patching post migration via
/sys/kernel/mobility/migration). In theory, it could also happen when
using /sys/kernel/debug/powerpc/barrier_nospec.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/include/asm/setup.h |  1 +
 arch/powerpc/lib/code-patching.c | 13 +++++++++++++
 arch/powerpc/mm/mem.c            |  2 ++
 3 files changed, 16 insertions(+)

diff --git a/arch/powerpc/include/asm/setup.h b/arch/powerpc/include/asm/setup.h
index ca6f8713e7ad..21daee862399 100644
--- a/arch/powerpc/include/asm/setup.h
+++ b/arch/powerpc/include/asm/setup.h
@@ -8,6 +8,7 @@ extern void ppc_printk_progress(char *s, unsigned short hex);
 
 extern unsigned int rtas_data;
 extern unsigned long long memory_limit;
+extern bool init_mem_is_free;
 extern unsigned long klimit;
 extern void *zalloc_maybe_bootmem(size_t size, gfp_t mask);
 
diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
index 2ce6159d8983..570c06a00db6 100644
--- a/arch/powerpc/lib/code-patching.c
+++ b/arch/powerpc/lib/code-patching.c
@@ -14,12 +14,25 @@
 #include <asm/page.h>
 #include <asm/code-patching.h>
 #include <asm/uaccess.h>
+#include <asm/setup.h>
+#include <asm/sections.h>
 
 
+static inline bool is_init(unsigned int *addr)
+{
+	return addr >= (unsigned int *)__init_begin && addr < (unsigned int *)__init_end;
+}
+
 int patch_instruction(unsigned int *addr, unsigned int instr)
 {
 	int err;
 
+	/* Make sure we aren't patching a freed init section */
+	if (init_mem_is_free && is_init(addr)) {
+		pr_debug("Skipping init section patching addr: 0x%px\n", addr);
+		return 0;
+	}
+
 	__put_user_size(instr, addr, 4, err);
 	if (err)
 		return err;
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 22d94c3e6fc4..1efe5ca5c3bc 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -62,6 +62,7 @@
 #endif
 
 unsigned long long memory_limit;
+bool init_mem_is_free;
 
 #ifdef CONFIG_HIGHMEM
 pte_t *kmap_pte;
@@ -381,6 +382,7 @@ void __init mem_init(void)
 void free_initmem(void)
 {
 	ppc_md.progress = ppc_printk_progress;
+	init_mem_is_free = true;
 	free_initmem_default(POISON_FREE_INITMEM);
 }
 
-- 
2.20.1



* [PATCH stable v4.4 45/52] powerpc/fsl: Add infrastructure to fixup branch predictor flush
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:20   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:20 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

From: Diana Craciun <diana.craciun@nxp.com>

commit 76a5eaa38b15dda92cd6964248c39b5a6f3a4e9d upstream.

In order to protect against speculation attacks (Spectre
variant 2) on NXP PowerPC platforms, the branch predictor
should be flushed when the privilege level is changed.
This patch adds the infrastructure to fix up at runtime
the code sections that perform the branch predictor flush,
depending on a boot argument which is added later in a
separate patch.

Signed-off-by: Diana Craciun <diana.craciun@nxp.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/include/asm/feature-fixups.h | 12 ++++++++++++
 arch/powerpc/include/asm/setup.h          |  2 ++
 arch/powerpc/kernel/vmlinux.lds.S         |  8 ++++++++
 arch/powerpc/lib/feature-fixups.c         | 23 +++++++++++++++++++++++
 4 files changed, 45 insertions(+)

diff --git a/arch/powerpc/include/asm/feature-fixups.h b/arch/powerpc/include/asm/feature-fixups.h
index 7983390ff0a9..145a37ab2d3e 100644
--- a/arch/powerpc/include/asm/feature-fixups.h
+++ b/arch/powerpc/include/asm/feature-fixups.h
@@ -216,6 +216,17 @@ label##3:					       	\
 	FTR_ENTRY_OFFSET 953b-954b;			\
 	.popsection;
 
+#define START_BTB_FLUSH_SECTION			\
+955:							\
+
+#define END_BTB_FLUSH_SECTION			\
+956:							\
+	.pushsection __btb_flush_fixup,"a";	\
+	.align 2;							\
+957:						\
+	FTR_ENTRY_OFFSET 955b-957b;			\
+	FTR_ENTRY_OFFSET 956b-957b;			\
+	.popsection;
 
 #ifndef __ASSEMBLY__
 
@@ -224,6 +235,7 @@ extern long __start___stf_entry_barrier_fixup, __stop___stf_entry_barrier_fixup;
 extern long __start___stf_exit_barrier_fixup, __stop___stf_exit_barrier_fixup;
 extern long __start___rfi_flush_fixup, __stop___rfi_flush_fixup;
 extern long __start___barrier_nospec_fixup, __stop___barrier_nospec_fixup;
+extern long __start__btb_flush_fixup, __stop__btb_flush_fixup;
 
 #endif
 
diff --git a/arch/powerpc/include/asm/setup.h b/arch/powerpc/include/asm/setup.h
index 21daee862399..4d6446408a1b 100644
--- a/arch/powerpc/include/asm/setup.h
+++ b/arch/powerpc/include/asm/setup.h
@@ -53,6 +53,8 @@ void do_barrier_nospec_fixups_range(bool enable, void *start, void *end);
 static inline void do_barrier_nospec_fixups_range(bool enable, void *start, void *end) { };
 #endif
 
+void do_btb_flush_fixups(void);
+
 #endif /* !__ASSEMBLY__ */
 
 #endif	/* _ASM_POWERPC_SETUP_H */
diff --git a/arch/powerpc/kernel/vmlinux.lds.S b/arch/powerpc/kernel/vmlinux.lds.S
index 4f9e7733e015..876ac9d52afc 100644
--- a/arch/powerpc/kernel/vmlinux.lds.S
+++ b/arch/powerpc/kernel/vmlinux.lds.S
@@ -104,6 +104,14 @@ SECTIONS
 	}
 #endif /* CONFIG_PPC_BARRIER_NOSPEC */
 
+#ifdef CONFIG_PPC_FSL_BOOK3E
+	. = ALIGN(8);
+	__spec_btb_flush_fixup : AT(ADDR(__spec_btb_flush_fixup) - LOAD_OFFSET) {
+		__start__btb_flush_fixup = .;
+		*(__btb_flush_fixup)
+		__stop__btb_flush_fixup = .;
+	}
+#endif
 	EXCEPTION_TABLE(0)
 
 	NOTES :kernel :notes
diff --git a/arch/powerpc/lib/feature-fixups.c b/arch/powerpc/lib/feature-fixups.c
index 68e089a48b2f..7bdfc19a491d 100644
--- a/arch/powerpc/lib/feature-fixups.c
+++ b/arch/powerpc/lib/feature-fixups.c
@@ -344,6 +344,29 @@ void do_barrier_nospec_fixups_range(bool enable, void *fixup_start, void *fixup_
 
 	printk(KERN_DEBUG "barrier-nospec: patched %d locations\n", i);
 }
+
+static void patch_btb_flush_section(long *curr)
+{
+	unsigned int *start, *end;
+
+	start = (void *)curr + *curr;
+	end = (void *)curr + *(curr + 1);
+	for (; start < end; start++) {
+		pr_devel("patching dest %lx\n", (unsigned long)start);
+		patch_instruction(start, PPC_INST_NOP);
+	}
+}
+
+void do_btb_flush_fixups(void)
+{
+	long *start, *end;
+
+	start = PTRRELOC(&__start__btb_flush_fixup);
+	end = PTRRELOC(&__stop__btb_flush_fixup);
+
+	for (; start < end; start += 2)
+		patch_btb_flush_section(start);
+}
 #endif /* CONFIG_PPC_FSL_BOOK3E */
 
 void do_lwsync_fixups(unsigned long value, void *fixup_start, void *fixup_end)
-- 
2.20.1



* [PATCH stable v4.4 46/52] powerpc/fsl: Add macro to flush the branch predictor
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:20   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:20 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

From: Diana Craciun <diana.craciun@nxp.com>

commit 1cbf8990d79ff69da8ad09e8a3df014e1494462b upstream.

The BUCSR register can be used to invalidate the entries in the
branch prediction mechanisms.

Signed-off-by: Diana Craciun <diana.craciun@nxp.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/include/asm/ppc_asm.h | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/arch/powerpc/include/asm/ppc_asm.h b/arch/powerpc/include/asm/ppc_asm.h
index 160bb2311bbb..d219816b3e19 100644
--- a/arch/powerpc/include/asm/ppc_asm.h
+++ b/arch/powerpc/include/asm/ppc_asm.h
@@ -821,4 +821,15 @@ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,945)
 	.long 0x2400004c  /* rfid				*/
 #endif /* !CONFIG_PPC_BOOK3E */
 #endif /*  __ASSEMBLY__ */
+
+#ifdef CONFIG_PPC_FSL_BOOK3E
+#define BTB_FLUSH(reg)			\
+	lis reg,BUCSR_INIT@h;		\
+	ori reg,reg,BUCSR_INIT@l;	\
+	mtspr SPRN_BUCSR,reg;		\
+	isync;
+#else
+#define BTB_FLUSH(reg)
+#endif /* CONFIG_PPC_FSL_BOOK3E */
+
 #endif /* _ASM_POWERPC_PPC_ASM_H */
-- 
2.20.1



* [PATCH stable v4.4 47/52] powerpc/fsl: Fix spectre_v2 mitigations reporting
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:20   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:20 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

From: Diana Craciun <diana.craciun@nxp.com>

commit 7d8bad99ba5a22892f0cad6881289fdc3875a930 upstream.

Currently for CONFIG_PPC_FSL_BOOK3E the spectre_v2 file is incorrect:

  $ cat /sys/devices/system/cpu/vulnerabilities/spectre_v2
  "Mitigation: Software count cache flush"

Which is wrong. Fix it to report vulnerable for now.

Fixes: ee13cb249fab ("powerpc/64s: Add support for software count cache flush")
Cc: stable@vger.kernel.org # v4.19+
Signed-off-by: Diana Craciun <diana.craciun@nxp.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/kernel/security.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/kernel/security.c b/arch/powerpc/kernel/security.c
index 108d271a218d..2d9c46f95e74 100644
--- a/arch/powerpc/kernel/security.c
+++ b/arch/powerpc/kernel/security.c
@@ -23,7 +23,7 @@ enum count_cache_flush_type {
 	COUNT_CACHE_FLUSH_SW	= 0x2,
 	COUNT_CACHE_FLUSH_HW	= 0x4,
 };
-static enum count_cache_flush_type count_cache_flush_type;
+static enum count_cache_flush_type count_cache_flush_type = COUNT_CACHE_FLUSH_NONE;
 
 bool barrier_nospec_enabled;
 static bool no_nospec;
-- 
2.20.1



* [PATCH stable v4.4 48/52] powerpc/fsl: Add nospectre_v2 command line argument
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:20   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:20 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

From: Diana Craciun <diana.craciun@nxp.com>

commit f633a8ad636efb5d4bba1a047d4a0f1ef719aa06 upstream.

When the command line argument is present, the Spectre variant 2
mitigations are disabled.

Signed-off-by: Diana Craciun <diana.craciun@nxp.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/include/asm/setup.h |  5 +++++
 arch/powerpc/kernel/security.c   | 21 +++++++++++++++++++++
 2 files changed, 26 insertions(+)

diff --git a/arch/powerpc/include/asm/setup.h b/arch/powerpc/include/asm/setup.h
index 4d6446408a1b..d299479c770b 100644
--- a/arch/powerpc/include/asm/setup.h
+++ b/arch/powerpc/include/asm/setup.h
@@ -53,6 +53,11 @@ void do_barrier_nospec_fixups_range(bool enable, void *start, void *end);
 static inline void do_barrier_nospec_fixups_range(bool enable, void *start, void *end) { };
 #endif
 
+#ifdef CONFIG_PPC_FSL_BOOK3E
+void setup_spectre_v2(void);
+#else
+static inline void setup_spectre_v2(void) {};
+#endif
 void do_btb_flush_fixups(void);
 
 #endif /* !__ASSEMBLY__ */
diff --git a/arch/powerpc/kernel/security.c b/arch/powerpc/kernel/security.c
index 2d9c46f95e74..1fd1aabc1193 100644
--- a/arch/powerpc/kernel/security.c
+++ b/arch/powerpc/kernel/security.c
@@ -27,6 +27,10 @@ static enum count_cache_flush_type count_cache_flush_type = COUNT_CACHE_FLUSH_NO
 
 bool barrier_nospec_enabled;
 static bool no_nospec;
+static bool btb_flush_enabled;
+#ifdef CONFIG_PPC_FSL_BOOK3E
+static bool no_spectrev2;
+#endif
 
 static void enable_barrier_nospec(bool enable)
 {
@@ -102,6 +106,23 @@ static __init int barrier_nospec_debugfs_init(void)
 device_initcall(barrier_nospec_debugfs_init);
 #endif /* CONFIG_DEBUG_FS */
 
+#ifdef CONFIG_PPC_FSL_BOOK3E
+static int __init handle_nospectre_v2(char *p)
+{
+	no_spectrev2 = true;
+
+	return 0;
+}
+early_param("nospectre_v2", handle_nospectre_v2);
+void setup_spectre_v2(void)
+{
+	if (no_spectrev2)
+		do_btb_flush_fixups();
+	else
+		btb_flush_enabled = true;
+}
+#endif /* CONFIG_PPC_FSL_BOOK3E */
+
 #ifdef CONFIG_PPC_BOOK3S_64
 ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr, char *buf)
 {
-- 
2.20.1



* [PATCH stable v4.4 49/52] powerpc/fsl: Flush the branch predictor at each kernel entry (64bit)
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:20   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:20 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

From: Diana Craciun <diana.craciun@nxp.com>

commit 10c5e83afd4a3f01712d97d3bb1ae34d5b74a185 upstream.

In order to protect against speculation attacks on
indirect branches, the branch predictor is flushed at
kernel entry to guard against the following situations:
- a userspace process attacking another userspace process
- a userspace process attacking the kernel
In short, whenever the privilege level changes (i.e. the
kernel is entered), the branch predictor state is flushed.

Signed-off-by: Diana Craciun <diana.craciun@nxp.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/kernel/entry_64.S       |  5 +++++
 arch/powerpc/kernel/exceptions-64e.S | 26 +++++++++++++++++++++++++-
 arch/powerpc/mm/tlb_low_64e.S        |  7 +++++++
 3 files changed, 37 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index 698bb51f5399..6d36a4fb4acf 100644
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -77,6 +77,11 @@ END_FTR_SECTION_IFSET(CPU_FTR_TM)
 	std	r0,GPR0(r1)
 	std	r10,GPR1(r1)
 	beq	2f			/* if from kernel mode */
+#ifdef CONFIG_PPC_FSL_BOOK3E
+START_BTB_FLUSH_SECTION
+	BTB_FLUSH(r10)
+END_BTB_FLUSH_SECTION
+#endif
 	ACCOUNT_CPU_USER_ENTRY(r10, r11)
 2:	std	r2,GPR2(r1)
 	std	r3,GPR3(r1)
diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
index 5cc93f0b52ca..b6d28b1e316c 100644
--- a/arch/powerpc/kernel/exceptions-64e.S
+++ b/arch/powerpc/kernel/exceptions-64e.S
@@ -295,7 +295,8 @@ END_FTR_SECTION_IFSET(CPU_FTR_EMB_HV)
 	andi.	r10,r11,MSR_PR;		/* save stack pointer */	    \
 	beq	1f;			/* branch around if supervisor */   \
 	ld	r1,PACAKSAVE(r13);	/* get kernel stack coming from usr */\
-1:	cmpdi	cr1,r1,0;		/* check if SP makes sense */	    \
+1:	type##_BTB_FLUSH		\
+	cmpdi	cr1,r1,0;		/* check if SP makes sense */	    \
 	bge-	cr1,exc_##n##_bad_stack;/* bad stack (TODO: out of line) */ \
 	mfspr	r10,SPRN_##type##_SRR0;	/* read SRR0 before touching stack */
 
@@ -327,6 +328,29 @@ END_FTR_SECTION_IFSET(CPU_FTR_EMB_HV)
 #define SPRN_MC_SRR0	SPRN_MCSRR0
 #define SPRN_MC_SRR1	SPRN_MCSRR1
 
+#ifdef CONFIG_PPC_FSL_BOOK3E
+#define GEN_BTB_FLUSH			\
+	START_BTB_FLUSH_SECTION		\
+		beq 1f;			\
+		BTB_FLUSH(r10)			\
+		1:		\
+	END_BTB_FLUSH_SECTION
+
+#define CRIT_BTB_FLUSH			\
+	START_BTB_FLUSH_SECTION		\
+		BTB_FLUSH(r10)		\
+	END_BTB_FLUSH_SECTION
+
+#define DBG_BTB_FLUSH CRIT_BTB_FLUSH
+#define MC_BTB_FLUSH CRIT_BTB_FLUSH
+#define GDBELL_BTB_FLUSH GEN_BTB_FLUSH
+#else
+#define GEN_BTB_FLUSH
+#define CRIT_BTB_FLUSH
+#define DBG_BTB_FLUSH
+#define GDBELL_BTB_FLUSH
+#endif
+
 #define NORMAL_EXCEPTION_PROLOG(n, intnum, addition)			    \
 	EXCEPTION_PROLOG(n, intnum, GEN, addition##_GEN(n))
 
diff --git a/arch/powerpc/mm/tlb_low_64e.S b/arch/powerpc/mm/tlb_low_64e.S
index 29d6987c37ba..5486d56da289 100644
--- a/arch/powerpc/mm/tlb_low_64e.S
+++ b/arch/powerpc/mm/tlb_low_64e.S
@@ -69,6 +69,13 @@ END_FTR_SECTION_IFSET(CPU_FTR_EMB_HV)
 	std	r15,EX_TLB_R15(r12)
 	std	r10,EX_TLB_CR(r12)
 #ifdef CONFIG_PPC_FSL_BOOK3E
+START_BTB_FLUSH_SECTION
+	mfspr r11, SPRN_SRR1
+	andi. r10,r11,MSR_PR
+	beq 1f
+	BTB_FLUSH(r10)
+1:
+END_BTB_FLUSH_SECTION
 	std	r7,EX_TLB_R7(r12)
 #endif
 	TLB_MISS_PROLOG_STATS
-- 
2.20.1



* [PATCH stable v4.4 50/52] powerpc/fsl: Update Spectre v2 reporting
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:20   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:20 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

From: Diana Craciun <diana.craciun@nxp.com>

commit dfa88658fb0583abb92e062c7a9cd5a5b94f2a46 upstream.

Report branch predictor state flush as a mitigation for
Spectre variant 2.

Signed-off-by: Diana Craciun <diana.craciun@nxp.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/kernel/security.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/kernel/security.c b/arch/powerpc/kernel/security.c
index 1fd1aabc1193..523466345d79 100644
--- a/arch/powerpc/kernel/security.c
+++ b/arch/powerpc/kernel/security.c
@@ -213,8 +213,11 @@ ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr, c
 
 		if (count_cache_flush_type == COUNT_CACHE_FLUSH_HW)
 			seq_buf_printf(&s, "(hardware accelerated)");
-	} else
+	} else if (btb_flush_enabled) {
+		seq_buf_printf(&s, "Mitigation: Branch predictor state flush");
+	} else {
 		seq_buf_printf(&s, "Vulnerable");
+	}
 
 	seq_buf_printf(&s, "\n");
 
-- 
2.20.1



* [PATCH stable v4.4 51/52] powerpc/security: Fix spectre_v2 reporting
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:20   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:20 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

commit 92edf8df0ff2ae86cc632eeca0e651fd8431d40d upstream.

When I updated the spectre_v2 reporting to handle software count cache
flush I got the logic wrong when there's no software count cache
enabled at all.

The result is that on systems with the software count cache flush
disabled we print:

  Mitigation: Indirect branch cache disabled, Software count cache flush

Which correctly indicates that the count cache is disabled, but
incorrectly says the software count cache flush is enabled.

The root of the problem is that we are trying to handle all
combinations of options. But we know now that we only expect to see
the software count cache flush enabled if the other options are false.

So split the two cases, which simplifies the logic and fixes the bug.
We were also missing a space before "(hardware accelerated)".

The result is we see one of:

  Mitigation: Indirect branch serialisation (kernel only)
  Mitigation: Indirect branch cache disabled
  Mitigation: Software count cache flush
  Mitigation: Software count cache flush (hardware accelerated)

Fixes: ee13cb249fab ("powerpc/64s: Add support for software count cache flush")
Cc: stable@vger.kernel.org # v4.19+
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Michael Neuling <mikey@neuling.org>
Reviewed-by: Diana Craciun <diana.craciun@nxp.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/kernel/security.c | 23 ++++++++---------------
 1 file changed, 8 insertions(+), 15 deletions(-)

diff --git a/arch/powerpc/kernel/security.c b/arch/powerpc/kernel/security.c
index 523466345d79..58f0602a92b9 100644
--- a/arch/powerpc/kernel/security.c
+++ b/arch/powerpc/kernel/security.c
@@ -190,29 +190,22 @@ ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr, c
 	bcs = security_ftr_enabled(SEC_FTR_BCCTRL_SERIALISED);
 	ccd = security_ftr_enabled(SEC_FTR_COUNT_CACHE_DISABLED);
 
-	if (bcs || ccd || count_cache_flush_type != COUNT_CACHE_FLUSH_NONE) {
-		bool comma = false;
+	if (bcs || ccd) {
 		seq_buf_printf(&s, "Mitigation: ");
 
-		if (bcs) {
+		if (bcs)
 			seq_buf_printf(&s, "Indirect branch serialisation (kernel only)");
-			comma = true;
-		}
 
-		if (ccd) {
-			if (comma)
-				seq_buf_printf(&s, ", ");
-			seq_buf_printf(&s, "Indirect branch cache disabled");
-			comma = true;
-		}
-
-		if (comma)
+		if (bcs && ccd)
 			seq_buf_printf(&s, ", ");
 
-		seq_buf_printf(&s, "Software count cache flush");
+		if (ccd)
+			seq_buf_printf(&s, "Indirect branch cache disabled");
+	} else if (count_cache_flush_type != COUNT_CACHE_FLUSH_NONE) {
+		seq_buf_printf(&s, "Mitigation: Software count cache flush");
 
 		if (count_cache_flush_type == COUNT_CACHE_FLUSH_HW)
-			seq_buf_printf(&s, "(hardware accelerated)");
+			seq_buf_printf(&s, " (hardware accelerated)");
 	} else if (btb_flush_enabled) {
 		seq_buf_printf(&s, "Mitigation: Branch predictor state flush");
 	} else {
-- 
2.20.1



* [PATCH stable v4.4 52/52] powerpc/fsl: Fix the flush of branch predictor.
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 14:20   ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-21 14:20 UTC (permalink / raw)
  To: stable, gregkh
  Cc: linuxppc-dev, diana.craciun, msuchanek, npiggin, christophe.leroy

From: Christophe Leroy <christophe.leroy@c-s.fr>

commit 27da80719ef132cf8c80eb406d5aeb37dddf78cc upstream.

The commit identified below adds the MC_BTB_FLUSH macro only when
CONFIG_PPC_FSL_BOOK3E is defined. This results in the following error
on some configs (seen several times with kisskb randconfig_defconfig):

arch/powerpc/kernel/exceptions-64e.S:576: Error: Unrecognized opcode: `mc_btb_flush'
make[3]: *** [scripts/Makefile.build:367: arch/powerpc/kernel/exceptions-64e.o] Error 1
make[2]: *** [scripts/Makefile.build:492: arch/powerpc/kernel] Error 2
make[1]: *** [Makefile:1043: arch/powerpc] Error 2
make: *** [Makefile:152: sub-make] Error 2

This patch adds a blank definition of MC_BTB_FLUSH for other cases.

Fixes: 10c5e83afd4a ("powerpc/fsl: Flush the branch predictor at each kernel entry (64bit)")
Cc: Diana Craciun <diana.craciun@nxp.com>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Reviewed-by: Daniel Axtens <dja@axtens.net>
Reviewed-by: Diana Craciun <diana.craciun@nxp.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/kernel/exceptions-64e.S | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
index b6d28b1e316c..48ec841ea1bf 100644
--- a/arch/powerpc/kernel/exceptions-64e.S
+++ b/arch/powerpc/kernel/exceptions-64e.S
@@ -348,6 +348,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_EMB_HV)
 #define GEN_BTB_FLUSH
 #define CRIT_BTB_FLUSH
 #define DBG_BTB_FLUSH
+#define MC_BTB_FLUSH
 #define GDBELL_BTB_FLUSH
 #endif
 
-- 
2.20.1



* Re: [PATCH stable v4.4 00/52] powerpc spectre backports for 4.4
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-21 16:34   ` Greg KH
  -1 siblings, 0 replies; 180+ messages in thread
From: Greg KH @ 2019-04-21 16:34 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: stable, linuxppc-dev, diana.craciun, msuchanek, npiggin,
	christophe.leroy

On Mon, Apr 22, 2019 at 12:19:45AM +1000, Michael Ellerman wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
> 
> Hi Greg/Sasha,
> 
> Please queue up these powerpc patches for 4.4 if you have no objections.

why?  Do you, or someone else, really care about spectre issues in 4.4?
Who is using ppc for 4.4 besides a specific enterprise distro (and they
don't seem to be pulling in my stable updates anyway...)?

I'll be glad to take these, just want to make sure that someone actually
will use them :)

thanks,

greg k-h


* Re: [PATCH stable v4.4 00/52] powerpc spectre backports for 4.4
  2019-04-21 16:34   ` Greg KH
@ 2019-04-22 15:27     ` Diana Madalina Craciun
  -1 siblings, 0 replies; 180+ messages in thread
From: Diana Madalina Craciun @ 2019-04-22 15:27 UTC (permalink / raw)
  To: Greg KH, Michael Ellerman
  Cc: stable, linuxppc-dev, msuchanek, npiggin, christophe.leroy

On 4/21/2019 7:34 PM, Greg KH wrote:
> On Mon, Apr 22, 2019 at 12:19:45AM +1000, Michael Ellerman wrote:
>> -----BEGIN PGP SIGNED MESSAGE-----
>> Hash: SHA1
>>
>> Hi Greg/Sasha,
>>
>> Please queue up these powerpc patches for 4.4 if you have no objections.
> why?  Do you, or someone else, really care about spectre issues in 4.4?
> Who is using ppc for 4.4 becides a specific enterprise distro (and they
> don't seem to be pulling in my stable updates anyway...)?

We (NXP) received questions from customers regarding Spectre mitigations
on kernel 4.4. Not sure if they really need them, as some systems are
closed embedded ones, but they asked for them.

Thanks,
Diana

> I'll be glad to take these, just want to make sure that someone actually
> will use them :)
>
> thanks,
>
> greg k-h
>



* Re: [PATCH stable v4.4 00/52] powerpc spectre backports for 4.4
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-22 15:32   ` Diana Madalina Craciun
  -1 siblings, 0 replies; 180+ messages in thread
From: Diana Madalina Craciun @ 2019-04-22 15:32 UTC (permalink / raw)
  To: Michael Ellerman, stable, gregkh
  Cc: linuxppc-dev, msuchanek, npiggin, christophe.leroy

Hi Michael,

There are some missing NXP Spectre v2 patches. I can send them
separately if the series is accepted. I have merged them, but I have
not tested them; I was sick today and unable to do so.

Thanks,
Diana


On 4/21/2019 5:21 PM, Michael Ellerman wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
> Hi Greg/Sasha,
>
> Please queue up these powerpc patches for 4.4 if you have no objections.
>
> cheers
>
>
> Christophe Leroy (1):
>   powerpc/fsl: Fix the flush of branch predictor.
>
> Diana Craciun (10):
>   powerpc/64: Disable the speculation barrier from the command line
>   powerpc/64: Make stf barrier PPC_BOOK3S_64 specific.
>   powerpc/64: Make meltdown reporting Book3S 64 specific
>   powerpc/fsl: Add barrier_nospec implementation for NXP PowerPC Book3E
>   powerpc/fsl: Add infrastructure to fixup branch predictor flush
>   powerpc/fsl: Add macro to flush the branch predictor
>   powerpc/fsl: Fix spectre_v2 mitigations reporting
>   powerpc/fsl: Add nospectre_v2 command line argument
>   powerpc/fsl: Flush the branch predictor at each kernel entry (64bit)
>   powerpc/fsl: Update Spectre v2 reporting
>
> Mauricio Faria de Oliveira (4):
>   powerpc/rfi-flush: Differentiate enabled and patched flush types
>   powerpc/pseries: Fix clearing of security feature flags
>   powerpc: Move default security feature flags
>   powerpc/pseries: Restore default security feature flags on setup
>
> Michael Ellerman (29):
>   powerpc/xmon: Add RFI flush related fields to paca dump
>   powerpc/pseries: Support firmware disable of RFI flush
>   powerpc/powernv: Support firmware disable of RFI flush
>   powerpc/rfi-flush: Move the logic to avoid a redo into the debugfs
>     code
>   powerpc/rfi-flush: Make it possible to call setup_rfi_flush() again
>   powerpc/rfi-flush: Always enable fallback flush on pseries
>   powerpc/pseries: Add new H_GET_CPU_CHARACTERISTICS flags
>   powerpc/rfi-flush: Call setup_rfi_flush() after LPM migration
>   powerpc: Add security feature flags for Spectre/Meltdown
>   powerpc/pseries: Set or clear security feature flags
>   powerpc/powernv: Set or clear security feature flags
>   powerpc/64s: Move cpu_show_meltdown()
>   powerpc/64s: Enhance the information in cpu_show_meltdown()
>   powerpc/powernv: Use the security flags in pnv_setup_rfi_flush()
>   powerpc/pseries: Use the security flags in pseries_setup_rfi_flush()
>   powerpc/64s: Wire up cpu_show_spectre_v1()
>   powerpc/64s: Wire up cpu_show_spectre_v2()
>   powerpc/64s: Fix section mismatch warnings from setup_rfi_flush()
>   powerpc/64: Use barrier_nospec in syscall entry
>   powerpc: Use barrier_nospec in copy_from_user()
>   powerpc64s: Show ori31 availability in spectre_v1 sysfs file not v2
>   powerpc/64: Add CONFIG_PPC_BARRIER_NOSPEC
>   powerpc/64: Call setup_barrier_nospec() from setup_arch()
>   powerpc/asm: Add a patch_site macro & helpers for patching
>     instructions
>   powerpc/64s: Add new security feature flags for count cache flush
>   powerpc/64s: Add support for software count cache flush
>   powerpc/pseries: Query hypervisor for count cache flush settings
>   powerpc/powernv: Query firmware for count cache flush settings
>   powerpc/security: Fix spectre_v2 reporting
>
> Michael Neuling (1):
>   powerpc: Avoid code patching freed init sections
>
> Michal Suchanek (5):
>   powerpc/64s: Add barrier_nospec
>   powerpc/64s: Add support for ori barrier_nospec patching
>   powerpc/64s: Patch barrier_nospec in modules
>   powerpc/64s: Enable barrier_nospec based on firmware settings
>   powerpc/64s: Enhance the information in cpu_show_spectre_v1()
>
> Nicholas Piggin (2):
>   powerpc/64s: Improve RFI L1-D cache flush fallback
>   powerpc/64s: Add support for a store forwarding barrier at kernel
>     entry/exit
>
>  arch/powerpc/Kconfig                         |   7 +-
>  arch/powerpc/include/asm/asm-prototypes.h    |  21 +
>  arch/powerpc/include/asm/barrier.h           |  21 +
>  arch/powerpc/include/asm/code-patching-asm.h |  18 +
>  arch/powerpc/include/asm/code-patching.h     |   2 +
>  arch/powerpc/include/asm/exception-64s.h     |  35 ++
>  arch/powerpc/include/asm/feature-fixups.h    |  40 ++
>  arch/powerpc/include/asm/hvcall.h            |   5 +
>  arch/powerpc/include/asm/paca.h              |   3 +-
>  arch/powerpc/include/asm/ppc-opcode.h        |   1 +
>  arch/powerpc/include/asm/ppc_asm.h           |  11 +
>  arch/powerpc/include/asm/security_features.h |  92 ++++
>  arch/powerpc/include/asm/setup.h             |  23 +-
>  arch/powerpc/include/asm/uaccess.h           |  18 +-
>  arch/powerpc/kernel/Makefile                 |   1 +
>  arch/powerpc/kernel/asm-offsets.c            |   3 +-
>  arch/powerpc/kernel/entry_64.S               |  69 +++
>  arch/powerpc/kernel/exceptions-64e.S         |  27 +-
>  arch/powerpc/kernel/exceptions-64s.S         |  98 +++--
>  arch/powerpc/kernel/module.c                 |  10 +-
>  arch/powerpc/kernel/security.c               | 433 +++++++++++++++++++
>  arch/powerpc/kernel/setup_32.c               |   2 +
>  arch/powerpc/kernel/setup_64.c               |  50 +--
>  arch/powerpc/kernel/vmlinux.lds.S            |  33 +-
>  arch/powerpc/lib/code-patching.c             |  29 ++
>  arch/powerpc/lib/feature-fixups.c            | 218 +++++++++-
>  arch/powerpc/mm/mem.c                        |   2 +
>  arch/powerpc/mm/tlb_low_64e.S                |   7 +
>  arch/powerpc/platforms/powernv/setup.c       |  99 +++--
>  arch/powerpc/platforms/pseries/mobility.c    |   3 +
>  arch/powerpc/platforms/pseries/pseries.h     |   2 +
>  arch/powerpc/platforms/pseries/setup.c       |  88 +++-
>  arch/powerpc/xmon/xmon.c                     |   2 +
>  33 files changed, 1345 insertions(+), 128 deletions(-)
>  create mode 100644 arch/powerpc/include/asm/asm-prototypes.h
>  create mode 100644 arch/powerpc/include/asm/code-patching-asm.h
>  create mode 100644 arch/powerpc/include/asm/security_features.h
>  create mode 100644 arch/powerpc/kernel/security.c
>
> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> index 58a1fa979655..01b6c00a7060 100644
> --- a/arch/powerpc/Kconfig
> +++ b/arch/powerpc/Kconfig
> @@ -136,7 +136,7 @@ config PPC
>  	select GENERIC_SMP_IDLE_THREAD
>  	select GENERIC_CMOS_UPDATE
>  	select GENERIC_TIME_VSYSCALL_OLD
> -	select GENERIC_CPU_VULNERABILITIES	if PPC_BOOK3S_64
> +	select GENERIC_CPU_VULNERABILITIES	if PPC_BARRIER_NOSPEC
>  	select GENERIC_CLOCKEVENTS
>  	select GENERIC_CLOCKEVENTS_BROADCAST if SMP
>  	select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
> @@ -162,6 +162,11 @@ config PPC
>  	select ARCH_HAS_DMA_SET_COHERENT_MASK
>  	select HAVE_ARCH_SECCOMP_FILTER
>  
> +config PPC_BARRIER_NOSPEC
> +    bool
> +    default y
> +    depends on PPC_BOOK3S_64 || PPC_FSL_BOOK3E
> +
>  config GENERIC_CSUM
>  	def_bool CPU_LITTLE_ENDIAN
>  
> diff --git a/arch/powerpc/include/asm/asm-prototypes.h b/arch/powerpc/include/asm/asm-prototypes.h
> new file mode 100644
> index 000000000000..8944c55591cf
> --- /dev/null
> +++ b/arch/powerpc/include/asm/asm-prototypes.h
> @@ -0,0 +1,21 @@
> +#ifndef _ASM_POWERPC_ASM_PROTOTYPES_H
> +#define _ASM_POWERPC_ASM_PROTOTYPES_H
> +/*
> + * This file is for prototypes of C functions that are only called
> + * from asm, and any associated variables.
> + *
> + * Copyright 2016, Daniel Axtens, IBM Corporation.
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public License
> + * as published by the Free Software Foundation; either version 2
> + * of the License, or (at your option) any later version.
> + */
> +
> +/* Patch sites */
> +extern s32 patch__call_flush_count_cache;
> +extern s32 patch__flush_count_cache_return;
> +
> +extern long flush_count_cache;
> +
> +#endif /* _ASM_POWERPC_ASM_PROTOTYPES_H */
> diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
> index b9e16855a037..e7cb72cdb2ba 100644
> --- a/arch/powerpc/include/asm/barrier.h
> +++ b/arch/powerpc/include/asm/barrier.h
> @@ -92,4 +92,25 @@ do {									\
>  #define smp_mb__after_atomic()      smp_mb()
>  #define smp_mb__before_spinlock()   smp_mb()
>  
> +#ifdef CONFIG_PPC_BOOK3S_64
> +#define NOSPEC_BARRIER_SLOT   nop
> +#elif defined(CONFIG_PPC_FSL_BOOK3E)
> +#define NOSPEC_BARRIER_SLOT   nop; nop
> +#endif
> +
> +#ifdef CONFIG_PPC_BARRIER_NOSPEC
> +/*
> + * Prevent execution of subsequent instructions until preceding branches have
> + * been fully resolved and are no longer executing speculatively.
> + */
> +#define barrier_nospec_asm NOSPEC_BARRIER_FIXUP_SECTION; NOSPEC_BARRIER_SLOT
> +
> +// This also acts as a compiler barrier due to the memory clobber.
> +#define barrier_nospec() asm (stringify_in_c(barrier_nospec_asm) ::: "memory")
> +
> +#else /* !CONFIG_PPC_BARRIER_NOSPEC */
> +#define barrier_nospec_asm
> +#define barrier_nospec()
> +#endif /* CONFIG_PPC_BARRIER_NOSPEC */
> +
>  #endif /* _ASM_POWERPC_BARRIER_H */
> diff --git a/arch/powerpc/include/asm/code-patching-asm.h b/arch/powerpc/include/asm/code-patching-asm.h
> new file mode 100644
> index 000000000000..ed7b1448493a
> --- /dev/null
> +++ b/arch/powerpc/include/asm/code-patching-asm.h
> @@ -0,0 +1,18 @@
> +/* SPDX-License-Identifier: GPL-2.0+ */
> +/*
> + * Copyright 2018, Michael Ellerman, IBM Corporation.
> + */
> +#ifndef _ASM_POWERPC_CODE_PATCHING_ASM_H
> +#define _ASM_POWERPC_CODE_PATCHING_ASM_H
> +
> +/* Define a "site" that can be patched */
> +.macro patch_site label name
> +	.pushsection ".rodata"
> +	.balign 4
> +	.global \name
> +\name:
> +	.4byte	\label - .
> +	.popsection
> +.endm
> +
> +#endif /* _ASM_POWERPC_CODE_PATCHING_ASM_H */
> diff --git a/arch/powerpc/include/asm/code-patching.h b/arch/powerpc/include/asm/code-patching.h
> index 840a5509b3f1..a734b4b34d26 100644
> --- a/arch/powerpc/include/asm/code-patching.h
> +++ b/arch/powerpc/include/asm/code-patching.h
> @@ -28,6 +28,8 @@ unsigned int create_cond_branch(const unsigned int *addr,
>  				unsigned long target, int flags);
>  int patch_branch(unsigned int *addr, unsigned long target, int flags);
>  int patch_instruction(unsigned int *addr, unsigned int instr);
> +int patch_instruction_site(s32 *addr, unsigned int instr);
> +int patch_branch_site(s32 *site, unsigned long target, int flags);
>  
>  int instr_is_relative_branch(unsigned int instr);
>  int instr_is_branch_to_addr(const unsigned int *instr, unsigned long addr);
> diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
> index 9bddbec441b8..3ed536bec462 100644
> --- a/arch/powerpc/include/asm/exception-64s.h
> +++ b/arch/powerpc/include/asm/exception-64s.h
> @@ -50,6 +50,27 @@
>  #define EX_PPR		88	/* SMT thread status register (priority) */
>  #define EX_CTR		96
>  
> +#define STF_ENTRY_BARRIER_SLOT						\
> +	STF_ENTRY_BARRIER_FIXUP_SECTION;				\
> +	nop;								\
> +	nop;								\
> +	nop
> +
> +#define STF_EXIT_BARRIER_SLOT						\
> +	STF_EXIT_BARRIER_FIXUP_SECTION;					\
> +	nop;								\
> +	nop;								\
> +	nop;								\
> +	nop;								\
> +	nop;								\
> +	nop
> +
> +/*
> + * r10 must be free to use, r13 must be paca
> + */
> +#define INTERRUPT_TO_KERNEL						\
> +	STF_ENTRY_BARRIER_SLOT
> +
>  /*
>   * Macros for annotating the expected destination of (h)rfid
>   *
> @@ -66,16 +87,19 @@
>  	rfid
>  
>  #define RFI_TO_USER							\
> +	STF_EXIT_BARRIER_SLOT;						\
>  	RFI_FLUSH_SLOT;							\
>  	rfid;								\
>  	b	rfi_flush_fallback
>  
>  #define RFI_TO_USER_OR_KERNEL						\
> +	STF_EXIT_BARRIER_SLOT;						\
>  	RFI_FLUSH_SLOT;							\
>  	rfid;								\
>  	b	rfi_flush_fallback
>  
>  #define RFI_TO_GUEST							\
> +	STF_EXIT_BARRIER_SLOT;						\
>  	RFI_FLUSH_SLOT;							\
>  	rfid;								\
>  	b	rfi_flush_fallback
> @@ -84,21 +108,25 @@
>  	hrfid
>  
>  #define HRFI_TO_USER							\
> +	STF_EXIT_BARRIER_SLOT;						\
>  	RFI_FLUSH_SLOT;							\
>  	hrfid;								\
>  	b	hrfi_flush_fallback
>  
>  #define HRFI_TO_USER_OR_KERNEL						\
> +	STF_EXIT_BARRIER_SLOT;						\
>  	RFI_FLUSH_SLOT;							\
>  	hrfid;								\
>  	b	hrfi_flush_fallback
>  
>  #define HRFI_TO_GUEST							\
> +	STF_EXIT_BARRIER_SLOT;						\
>  	RFI_FLUSH_SLOT;							\
>  	hrfid;								\
>  	b	hrfi_flush_fallback
>  
>  #define HRFI_TO_UNKNOWN							\
> +	STF_EXIT_BARRIER_SLOT;						\
>  	RFI_FLUSH_SLOT;							\
>  	hrfid;								\
>  	b	hrfi_flush_fallback
> @@ -226,6 +254,7 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
>  #define __EXCEPTION_PROLOG_1(area, extra, vec)				\
>  	OPT_SAVE_REG_TO_PACA(area+EX_PPR, r9, CPU_FTR_HAS_PPR);		\
>  	OPT_SAVE_REG_TO_PACA(area+EX_CFAR, r10, CPU_FTR_CFAR);		\
> +	INTERRUPT_TO_KERNEL;						\
>  	SAVE_CTR(r10, area);						\
>  	mfcr	r9;							\
>  	extra(vec);							\
> @@ -512,6 +541,12 @@ label##_relon_hv:						\
>  #define _MASKABLE_EXCEPTION_PSERIES(vec, label, h, extra)		\
>  	__MASKABLE_EXCEPTION_PSERIES(vec, label, h, extra)
>  
> +#define MASKABLE_EXCEPTION_OOL(vec, label)				\
> +	.globl label##_ool;						\
> +label##_ool:								\
> +	EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_TEST_PR, vec);		\
> +	EXCEPTION_PROLOG_PSERIES_1(label##_common, EXC_STD);
> +
>  #define MASKABLE_EXCEPTION_PSERIES(loc, vec, label)			\
>  	. = loc;							\
>  	.globl label##_pSeries;						\
> diff --git a/arch/powerpc/include/asm/feature-fixups.h b/arch/powerpc/include/asm/feature-fixups.h
> index 7068bafbb2d6..145a37ab2d3e 100644
> --- a/arch/powerpc/include/asm/feature-fixups.h
> +++ b/arch/powerpc/include/asm/feature-fixups.h
> @@ -184,6 +184,22 @@ label##3:					       	\
>  	FTR_ENTRY_OFFSET label##1b-label##3b;		\
>  	.popsection;
>  
> +#define STF_ENTRY_BARRIER_FIXUP_SECTION			\
> +953:							\
> +	.pushsection __stf_entry_barrier_fixup,"a";	\
> +	.align 2;					\
> +954:							\
> +	FTR_ENTRY_OFFSET 953b-954b;			\
> +	.popsection;
> +
> +#define STF_EXIT_BARRIER_FIXUP_SECTION			\
> +955:							\
> +	.pushsection __stf_exit_barrier_fixup,"a";	\
> +	.align 2;					\
> +956:							\
> +	FTR_ENTRY_OFFSET 955b-956b;			\
> +	.popsection;
> +
>  #define RFI_FLUSH_FIXUP_SECTION				\
>  951:							\
>  	.pushsection __rfi_flush_fixup,"a";		\
> @@ -192,10 +208,34 @@ label##3:					       	\
>  	FTR_ENTRY_OFFSET 951b-952b;			\
>  	.popsection;
>  
> +#define NOSPEC_BARRIER_FIXUP_SECTION			\
> +953:							\
> +	.pushsection __barrier_nospec_fixup,"a";	\
> +	.align 2;					\
> +954:							\
> +	FTR_ENTRY_OFFSET 953b-954b;			\
> +	.popsection;
> +
> +#define START_BTB_FLUSH_SECTION			\
> +955:							\
> +
> +#define END_BTB_FLUSH_SECTION			\
> +956:							\
> +	.pushsection __btb_flush_fixup,"a";	\
> +	.align 2;							\
> +957:						\
> +	FTR_ENTRY_OFFSET 955b-957b;			\
> +	FTR_ENTRY_OFFSET 956b-957b;			\
> +	.popsection;
>  
>  #ifndef __ASSEMBLY__
>  
> +extern long stf_barrier_fallback;
> +extern long __start___stf_entry_barrier_fixup, __stop___stf_entry_barrier_fixup;
> +extern long __start___stf_exit_barrier_fixup, __stop___stf_exit_barrier_fixup;
>  extern long __start___rfi_flush_fixup, __stop___rfi_flush_fixup;
> +extern long __start___barrier_nospec_fixup, __stop___barrier_nospec_fixup;
> +extern long __start__btb_flush_fixup, __stop__btb_flush_fixup;
>  
>  #endif
>  
> diff --git a/arch/powerpc/include/asm/hvcall.h b/arch/powerpc/include/asm/hvcall.h
> index 449bbb87c257..b57db9d09db9 100644
> --- a/arch/powerpc/include/asm/hvcall.h
> +++ b/arch/powerpc/include/asm/hvcall.h
> @@ -292,10 +292,15 @@
>  #define H_CPU_CHAR_L1D_FLUSH_ORI30	(1ull << 61) // IBM bit 2
>  #define H_CPU_CHAR_L1D_FLUSH_TRIG2	(1ull << 60) // IBM bit 3
>  #define H_CPU_CHAR_L1D_THREAD_PRIV	(1ull << 59) // IBM bit 4
> +#define H_CPU_CHAR_BRANCH_HINTS_HONORED	(1ull << 58) // IBM bit 5
> +#define H_CPU_CHAR_THREAD_RECONFIG_CTRL	(1ull << 57) // IBM bit 6
> +#define H_CPU_CHAR_COUNT_CACHE_DISABLED	(1ull << 56) // IBM bit 7
> +#define H_CPU_CHAR_BCCTR_FLUSH_ASSIST	(1ull << 54) // IBM bit 9
>  
>  #define H_CPU_BEHAV_FAVOUR_SECURITY	(1ull << 63) // IBM bit 0
>  #define H_CPU_BEHAV_L1D_FLUSH_PR	(1ull << 62) // IBM bit 1
>  #define H_CPU_BEHAV_BNDS_CHK_SPEC_BAR	(1ull << 61) // IBM bit 2
> +#define H_CPU_BEHAV_FLUSH_COUNT_CACHE	(1ull << 58) // IBM bit 5
>  
>  #ifndef __ASSEMBLY__
>  #include <linux/types.h>
> diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h
> index 45e2aefece16..08e5df3395fa 100644
> --- a/arch/powerpc/include/asm/paca.h
> +++ b/arch/powerpc/include/asm/paca.h
> @@ -199,8 +199,7 @@ struct paca_struct {
>  	 */
>  	u64 exrfi[13] __aligned(0x80);
>  	void *rfi_flush_fallback_area;
> -	u64 l1d_flush_congruence;
> -	u64 l1d_flush_sets;
> +	u64 l1d_flush_size;
>  #endif
>  };
>  
> diff --git a/arch/powerpc/include/asm/ppc-opcode.h b/arch/powerpc/include/asm/ppc-opcode.h
> index 7ab04fc59e24..faf1bb045dee 100644
> --- a/arch/powerpc/include/asm/ppc-opcode.h
> +++ b/arch/powerpc/include/asm/ppc-opcode.h
> @@ -147,6 +147,7 @@
>  #define PPC_INST_LWSYNC			0x7c2004ac
>  #define PPC_INST_SYNC			0x7c0004ac
>  #define PPC_INST_SYNC_MASK		0xfc0007fe
> +#define PPC_INST_ISYNC			0x4c00012c
>  #define PPC_INST_LXVD2X			0x7c000698
>  #define PPC_INST_MCRXR			0x7c000400
>  #define PPC_INST_MCRXR_MASK		0xfc0007fe
> diff --git a/arch/powerpc/include/asm/ppc_asm.h b/arch/powerpc/include/asm/ppc_asm.h
> index 160bb2311bbb..d219816b3e19 100644
> --- a/arch/powerpc/include/asm/ppc_asm.h
> +++ b/arch/powerpc/include/asm/ppc_asm.h
> @@ -821,4 +821,15 @@ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,945)
>  	.long 0x2400004c  /* rfid				*/
>  #endif /* !CONFIG_PPC_BOOK3E */
>  #endif /*  __ASSEMBLY__ */
> +
> +#ifdef CONFIG_PPC_FSL_BOOK3E
> +#define BTB_FLUSH(reg)			\
> +	lis reg,BUCSR_INIT@h;		\
> +	ori reg,reg,BUCSR_INIT@l;	\
> +	mtspr SPRN_BUCSR,reg;		\
> +	isync;
> +#else
> +#define BTB_FLUSH(reg)
> +#endif /* CONFIG_PPC_FSL_BOOK3E */
> +
>  #endif /* _ASM_POWERPC_PPC_ASM_H */
> diff --git a/arch/powerpc/include/asm/security_features.h b/arch/powerpc/include/asm/security_features.h
> new file mode 100644
> index 000000000000..759597bf0fd8
> --- /dev/null
> +++ b/arch/powerpc/include/asm/security_features.h
> @@ -0,0 +1,92 @@
> +/* SPDX-License-Identifier: GPL-2.0+ */
> +/*
> + * Security related feature bit definitions.
> + *
> + * Copyright 2018, Michael Ellerman, IBM Corporation.
> + */
> +
> +#ifndef _ASM_POWERPC_SECURITY_FEATURES_H
> +#define _ASM_POWERPC_SECURITY_FEATURES_H
> +
> +
> +extern unsigned long powerpc_security_features;
> +extern bool rfi_flush;
> +
> +/* These are bit flags */
> +enum stf_barrier_type {
> +	STF_BARRIER_NONE	= 0x1,
> +	STF_BARRIER_FALLBACK	= 0x2,
> +	STF_BARRIER_EIEIO	= 0x4,
> +	STF_BARRIER_SYNC_ORI	= 0x8,
> +};
> +
> +void setup_stf_barrier(void);
> +void do_stf_barrier_fixups(enum stf_barrier_type types);
> +void setup_count_cache_flush(void);
> +
> +static inline void security_ftr_set(unsigned long feature)
> +{
> +	powerpc_security_features |= feature;
> +}
> +
> +static inline void security_ftr_clear(unsigned long feature)
> +{
> +	powerpc_security_features &= ~feature;
> +}
> +
> +static inline bool security_ftr_enabled(unsigned long feature)
> +{
> +	return !!(powerpc_security_features & feature);
> +}
> +
> +
> +// Features indicating support for Spectre/Meltdown mitigations
> +
> +// The L1-D cache can be flushed with ori r30,r30,0
> +#define SEC_FTR_L1D_FLUSH_ORI30		0x0000000000000001ull
> +
> +// The L1-D cache can be flushed with mtspr 882,r0 (aka SPRN_TRIG2)
> +#define SEC_FTR_L1D_FLUSH_TRIG2		0x0000000000000002ull
> +
> +// ori r31,r31,0 acts as a speculation barrier
> +#define SEC_FTR_SPEC_BAR_ORI31		0x0000000000000004ull
> +
> +// Speculation past bctr is disabled
> +#define SEC_FTR_BCCTRL_SERIALISED	0x0000000000000008ull
> +
> +// Entries in L1-D are private to a SMT thread
> +#define SEC_FTR_L1D_THREAD_PRIV		0x0000000000000010ull
> +
> +// Indirect branch prediction cache disabled
> +#define SEC_FTR_COUNT_CACHE_DISABLED	0x0000000000000020ull
> +
> +// bcctr 2,0,0 triggers a hardware assisted count cache flush
> +#define SEC_FTR_BCCTR_FLUSH_ASSIST	0x0000000000000800ull
> +
> +
> +// Features indicating need for Spectre/Meltdown mitigations
> +
> +// The L1-D cache should be flushed on MSR[HV] 1->0 transition (hypervisor to guest)
> +#define SEC_FTR_L1D_FLUSH_HV		0x0000000000000040ull
> +
> +// The L1-D cache should be flushed on MSR[PR] 0->1 transition (kernel to userspace)
> +#define SEC_FTR_L1D_FLUSH_PR		0x0000000000000080ull
> +
> +// A speculation barrier should be used for bounds checks (Spectre variant 1)
> +#define SEC_FTR_BNDS_CHK_SPEC_BAR	0x0000000000000100ull
> +
> +// Firmware configuration indicates user favours security over performance
> +#define SEC_FTR_FAVOUR_SECURITY		0x0000000000000200ull
> +
> +// Software required to flush count cache on context switch
> +#define SEC_FTR_FLUSH_COUNT_CACHE	0x0000000000000400ull
> +
> +
> +// Features enabled by default
> +#define SEC_FTR_DEFAULT \
> +	(SEC_FTR_L1D_FLUSH_HV | \
> +	 SEC_FTR_L1D_FLUSH_PR | \
> +	 SEC_FTR_BNDS_CHK_SPEC_BAR | \
> +	 SEC_FTR_FAVOUR_SECURITY)
> +
> +#endif /* _ASM_POWERPC_SECURITY_FEATURES_H */
> diff --git a/arch/powerpc/include/asm/setup.h b/arch/powerpc/include/asm/setup.h
> index 7916b56f2e60..d299479c770b 100644
> --- a/arch/powerpc/include/asm/setup.h
> +++ b/arch/powerpc/include/asm/setup.h
> @@ -8,6 +8,7 @@ extern void ppc_printk_progress(char *s, unsigned short hex);
>  
>  extern unsigned int rtas_data;
>  extern unsigned long long memory_limit;
> +extern bool init_mem_is_free;
>  extern unsigned long klimit;
>  extern void *zalloc_maybe_bootmem(size_t size, gfp_t mask);
>  
> @@ -36,8 +37,28 @@ enum l1d_flush_type {
>  	L1D_FLUSH_MTTRIG	= 0x8,
>  };
>  
> -void __init setup_rfi_flush(enum l1d_flush_type, bool enable);
> +void setup_rfi_flush(enum l1d_flush_type, bool enable);
>  void do_rfi_flush_fixups(enum l1d_flush_type types);
> +#ifdef CONFIG_PPC_BARRIER_NOSPEC
> +void setup_barrier_nospec(void);
> +#else
> +static inline void setup_barrier_nospec(void) { };
> +#endif
> +void do_barrier_nospec_fixups(bool enable);
> +extern bool barrier_nospec_enabled;
> +
> +#ifdef CONFIG_PPC_BARRIER_NOSPEC
> +void do_barrier_nospec_fixups_range(bool enable, void *start, void *end);
> +#else
> +static inline void do_barrier_nospec_fixups_range(bool enable, void *start, void *end) { };
> +#endif
> +
> +#ifdef CONFIG_PPC_FSL_BOOK3E
> +void setup_spectre_v2(void);
> +#else
> +static inline void setup_spectre_v2(void) {};
> +#endif
> +void do_btb_flush_fixups(void);
>  
>  #endif /* !__ASSEMBLY__ */
>  
> diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
> index 05f1389228d2..e51ce5a0e221 100644
> --- a/arch/powerpc/include/asm/uaccess.h
> +++ b/arch/powerpc/include/asm/uaccess.h
> @@ -269,6 +269,7 @@ do {								\
>  	__chk_user_ptr(ptr);					\
>  	if (!is_kernel_addr((unsigned long)__gu_addr))		\
>  		might_fault();					\
> +	barrier_nospec();					\
>  	__get_user_size(__gu_val, __gu_addr, (size), __gu_err);	\
>  	(x) = (__typeof__(*(ptr)))__gu_val;			\
>  	__gu_err;						\
> @@ -283,6 +284,7 @@ do {								\
>  	__chk_user_ptr(ptr);					\
>  	if (!is_kernel_addr((unsigned long)__gu_addr))		\
>  		might_fault();					\
> +	barrier_nospec();					\
>  	__get_user_size(__gu_val, __gu_addr, (size), __gu_err);	\
>  	(x) = (__force __typeof__(*(ptr)))__gu_val;			\
>  	__gu_err;						\
> @@ -295,8 +297,10 @@ do {								\
>  	unsigned long  __gu_val = 0;					\
>  	__typeof__(*(ptr)) __user *__gu_addr = (ptr);		\
>  	might_fault();							\
> -	if (access_ok(VERIFY_READ, __gu_addr, (size)))			\
> +	if (access_ok(VERIFY_READ, __gu_addr, (size))) {		\
> +		barrier_nospec();					\
>  		__get_user_size(__gu_val, __gu_addr, (size), __gu_err);	\
> +	}								\
>  	(x) = (__force __typeof__(*(ptr)))__gu_val;				\
>  	__gu_err;							\
>  })
> @@ -307,6 +311,7 @@ do {								\
>  	unsigned long __gu_val;					\
>  	__typeof__(*(ptr)) __user *__gu_addr = (ptr);	\
>  	__chk_user_ptr(ptr);					\
> +	barrier_nospec();					\
>  	__get_user_size(__gu_val, __gu_addr, (size), __gu_err);	\
>  	(x) = (__force __typeof__(*(ptr)))__gu_val;			\
>  	__gu_err;						\
> @@ -323,8 +328,10 @@ extern unsigned long __copy_tofrom_user(void __user *to,
>  static inline unsigned long copy_from_user(void *to,
>  		const void __user *from, unsigned long n)
>  {
> -	if (likely(access_ok(VERIFY_READ, from, n)))
> +	if (likely(access_ok(VERIFY_READ, from, n))) {
> +		barrier_nospec();
>  		return __copy_tofrom_user((__force void __user *)to, from, n);
> +	}
>  	memset(to, 0, n);
>  	return n;
>  }
> @@ -359,21 +366,27 @@ static inline unsigned long __copy_from_user_inatomic(void *to,
>  
>  		switch (n) {
>  		case 1:
> +			barrier_nospec();
>  			__get_user_size(*(u8 *)to, from, 1, ret);
>  			break;
>  		case 2:
> +			barrier_nospec();
>  			__get_user_size(*(u16 *)to, from, 2, ret);
>  			break;
>  		case 4:
> +			barrier_nospec();
>  			__get_user_size(*(u32 *)to, from, 4, ret);
>  			break;
>  		case 8:
> +			barrier_nospec();
>  			__get_user_size(*(u64 *)to, from, 8, ret);
>  			break;
>  		}
>  		if (ret == 0)
>  			return 0;
>  	}
> +
> +	barrier_nospec();
>  	return __copy_tofrom_user((__force void __user *)to, from, n);
>  }
>  
> @@ -400,6 +413,7 @@ static inline unsigned long __copy_to_user_inatomic(void __user *to,
>  		if (ret == 0)
>  			return 0;
>  	}
> +
>  	return __copy_tofrom_user(to, (__force const void __user *)from, n);
>  }
>  
> diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
> index ba336930d448..22ed3c32fca8 100644
> --- a/arch/powerpc/kernel/Makefile
> +++ b/arch/powerpc/kernel/Makefile
> @@ -44,6 +44,7 @@ obj-$(CONFIG_PPC_BOOK3S_64)	+= cpu_setup_power.o
>  obj-$(CONFIG_PPC_BOOK3S_64)	+= mce.o mce_power.o
>  obj64-$(CONFIG_RELOCATABLE)	+= reloc_64.o
>  obj-$(CONFIG_PPC_BOOK3E_64)	+= exceptions-64e.o idle_book3e.o
> +obj-$(CONFIG_PPC_BARRIER_NOSPEC) += security.o
>  obj-$(CONFIG_PPC64)		+= vdso64/
>  obj-$(CONFIG_ALTIVEC)		+= vecemu.o
>  obj-$(CONFIG_PPC_970_NAP)	+= idle_power4.o
> diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
> index d92705e3a0c1..de3c29c51503 100644
> --- a/arch/powerpc/kernel/asm-offsets.c
> +++ b/arch/powerpc/kernel/asm-offsets.c
> @@ -245,8 +245,7 @@ int main(void)
>  	DEFINE(PACA_IN_MCE, offsetof(struct paca_struct, in_mce));
>  	DEFINE(PACA_RFI_FLUSH_FALLBACK_AREA, offsetof(struct paca_struct, rfi_flush_fallback_area));
>  	DEFINE(PACA_EXRFI, offsetof(struct paca_struct, exrfi));
> -	DEFINE(PACA_L1D_FLUSH_CONGRUENCE, offsetof(struct paca_struct, l1d_flush_congruence));
> -	DEFINE(PACA_L1D_FLUSH_SETS, offsetof(struct paca_struct, l1d_flush_sets));
> +	DEFINE(PACA_L1D_FLUSH_SIZE, offsetof(struct paca_struct, l1d_flush_size));
>  #endif
>  	DEFINE(PACAHWCPUID, offsetof(struct paca_struct, hw_cpu_id));
>  	DEFINE(PACAKEXECSTATE, offsetof(struct paca_struct, kexec_state));
> diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
> index 59be96917369..6d36a4fb4acf 100644
> --- a/arch/powerpc/kernel/entry_64.S
> +++ b/arch/powerpc/kernel/entry_64.S
> @@ -25,6 +25,7 @@
>  #include <asm/page.h>
>  #include <asm/mmu.h>
>  #include <asm/thread_info.h>
> +#include <asm/code-patching-asm.h>
>  #include <asm/ppc_asm.h>
>  #include <asm/asm-offsets.h>
>  #include <asm/cputable.h>
> @@ -36,6 +37,7 @@
>  #include <asm/hw_irq.h>
>  #include <asm/context_tracking.h>
>  #include <asm/tm.h>
> +#include <asm/barrier.h>
>  #ifdef CONFIG_PPC_BOOK3S
>  #include <asm/exception-64s.h>
>  #else
> @@ -75,6 +77,11 @@ END_FTR_SECTION_IFSET(CPU_FTR_TM)
>  	std	r0,GPR0(r1)
>  	std	r10,GPR1(r1)
>  	beq	2f			/* if from kernel mode */
> +#ifdef CONFIG_PPC_FSL_BOOK3E
> +START_BTB_FLUSH_SECTION
> +	BTB_FLUSH(r10)
> +END_BTB_FLUSH_SECTION
> +#endif
>  	ACCOUNT_CPU_USER_ENTRY(r10, r11)
>  2:	std	r2,GPR2(r1)
>  	std	r3,GPR3(r1)
> @@ -177,6 +184,15 @@ system_call:			/* label this so stack traces look sane */
>  	clrldi	r8,r8,32
>  15:
>  	slwi	r0,r0,4
> +
> +	barrier_nospec_asm
> +	/*
> +	 * Prevent the load of the handler below (based on the user-passed
> +	 * system call number) being speculatively executed until the test
> +	 * against NR_syscalls and branch to .Lsyscall_enosys above has
> +	 * committed.
> +	 */
> +
>  	ldx	r12,r11,r0	/* Fetch system call handler [ptr] */
>  	mtctr   r12
>  	bctrl			/* Call handler */
> @@ -440,6 +456,57 @@ _GLOBAL(ret_from_kernel_thread)
>  	li	r3,0
>  	b	.Lsyscall_exit
>  
> +#ifdef CONFIG_PPC_BOOK3S_64
> +
> +#define FLUSH_COUNT_CACHE	\
> +1:	nop;			\
> +	patch_site 1b, patch__call_flush_count_cache
> +
> +
> +#define BCCTR_FLUSH	.long 0x4c400420
> +
> +.macro nops number
> +	.rept \number
> +	nop
> +	.endr
> +.endm
> +
> +.balign 32
> +.global flush_count_cache
> +flush_count_cache:
> +	/* Save LR into r9 */
> +	mflr	r9
> +
> +	.rept 64
> +	bl	.+4
> +	.endr
> +	b	1f
> +	nops	6
> +
> +	.balign 32
> +	/* Restore LR */
> +1:	mtlr	r9
> +	li	r9,0x7fff
> +	mtctr	r9
> +
> +	BCCTR_FLUSH
> +
> +2:	nop
> +	patch_site 2b patch__flush_count_cache_return
> +
> +	nops	3
> +
> +	.rept 278
> +	.balign 32
> +	BCCTR_FLUSH
> +	nops	7
> +	.endr
> +
> +	blr
> +#else
> +#define FLUSH_COUNT_CACHE
> +#endif /* CONFIG_PPC_BOOK3S_64 */
> +
>  /*
>   * This routine switches between two different tasks.  The process
>   * state of one is saved on its kernel stack.  Then the state
> @@ -503,6 +570,8 @@ BEGIN_FTR_SECTION
>  END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
>  #endif
>  
> +	FLUSH_COUNT_CACHE
> +
>  #ifdef CONFIG_SMP
>  	/* We need a sync somewhere here to make sure that if the
>  	 * previous task gets rescheduled on another CPU, it sees all
> diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
> index 5cc93f0b52ca..48ec841ea1bf 100644
> --- a/arch/powerpc/kernel/exceptions-64e.S
> +++ b/arch/powerpc/kernel/exceptions-64e.S
> @@ -295,7 +295,8 @@ END_FTR_SECTION_IFSET(CPU_FTR_EMB_HV)
>  	andi.	r10,r11,MSR_PR;		/* save stack pointer */	    \
>  	beq	1f;			/* branch around if supervisor */   \
>  	ld	r1,PACAKSAVE(r13);	/* get kernel stack coming from usr */\
> -1:	cmpdi	cr1,r1,0;		/* check if SP makes sense */	    \
> +1:	type##_BTB_FLUSH		\
> +	cmpdi	cr1,r1,0;		/* check if SP makes sense */	    \
>  	bge-	cr1,exc_##n##_bad_stack;/* bad stack (TODO: out of line) */ \
>  	mfspr	r10,SPRN_##type##_SRR0;	/* read SRR0 before touching stack */
>  
> @@ -327,6 +328,30 @@ END_FTR_SECTION_IFSET(CPU_FTR_EMB_HV)
>  #define SPRN_MC_SRR0	SPRN_MCSRR0
>  #define SPRN_MC_SRR1	SPRN_MCSRR1
>  
> +#ifdef CONFIG_PPC_FSL_BOOK3E
> +#define GEN_BTB_FLUSH			\
> +	START_BTB_FLUSH_SECTION		\
> +		beq 1f;			\
> +		BTB_FLUSH(r10)			\
> +		1:		\
> +	END_BTB_FLUSH_SECTION
> +
> +#define CRIT_BTB_FLUSH			\
> +	START_BTB_FLUSH_SECTION		\
> +		BTB_FLUSH(r10)		\
> +	END_BTB_FLUSH_SECTION
> +
> +#define DBG_BTB_FLUSH CRIT_BTB_FLUSH
> +#define MC_BTB_FLUSH CRIT_BTB_FLUSH
> +#define GDBELL_BTB_FLUSH GEN_BTB_FLUSH
> +#else
> +#define GEN_BTB_FLUSH
> +#define CRIT_BTB_FLUSH
> +#define DBG_BTB_FLUSH
> +#define MC_BTB_FLUSH
> +#define GDBELL_BTB_FLUSH
> +#endif
> +
>  #define NORMAL_EXCEPTION_PROLOG(n, intnum, addition)			    \
>  	EXCEPTION_PROLOG(n, intnum, GEN, addition##_GEN(n))
>  
> diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
> index 938a30fef031..10e7cec9553d 100644
> --- a/arch/powerpc/kernel/exceptions-64s.S
> +++ b/arch/powerpc/kernel/exceptions-64s.S
> @@ -36,6 +36,7 @@ BEGIN_FTR_SECTION						\
>  END_FTR_SECTION_IFSET(CPU_FTR_REAL_LE)				\
>  	mr	r9,r13 ;					\
>  	GET_PACA(r13) ;						\
> +	INTERRUPT_TO_KERNEL ;					\
>  	mfspr	r11,SPRN_SRR0 ;					\
>  0:
>  
> @@ -292,7 +293,9 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
>  	. = 0x900
>  	.globl decrementer_pSeries
>  decrementer_pSeries:
> -	_MASKABLE_EXCEPTION_PSERIES(0x900, decrementer, EXC_STD, SOFTEN_TEST_PR)
> +	SET_SCRATCH0(r13)
> +	EXCEPTION_PROLOG_0(PACA_EXGEN)
> +	b	decrementer_ool
>  
>  	STD_EXCEPTION_HV(0x980, 0x982, hdecrementer)
>  
> @@ -319,6 +322,7 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
>  	OPT_GET_SPR(r9, SPRN_PPR, CPU_FTR_HAS_PPR);
>  	HMT_MEDIUM;
>  	std	r10,PACA_EXGEN+EX_R10(r13)
> +	INTERRUPT_TO_KERNEL
>  	OPT_SAVE_REG_TO_PACA(PACA_EXGEN+EX_PPR, r9, CPU_FTR_HAS_PPR);
>  	mfcr	r9
>  	KVMTEST(0xc00)
> @@ -607,6 +611,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
>  
>  	.align	7
>  	/* moved from 0xe00 */
> +	MASKABLE_EXCEPTION_OOL(0x900, decrementer)
>  	STD_EXCEPTION_HV_OOL(0xe02, h_data_storage)
>  	KVM_HANDLER_SKIP(PACA_EXGEN, EXC_HV, 0xe02)
>  	STD_EXCEPTION_HV_OOL(0xe22, h_instr_storage)
> @@ -1564,6 +1569,21 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
>  	blr
>  #endif
>  
> +	.balign 16
> +	.globl stf_barrier_fallback
> +stf_barrier_fallback:
> +	std	r9,PACA_EXRFI+EX_R9(r13)
> +	std	r10,PACA_EXRFI+EX_R10(r13)
> +	sync
> +	ld	r9,PACA_EXRFI+EX_R9(r13)
> +	ld	r10,PACA_EXRFI+EX_R10(r13)
> +	ori	31,31,0
> +	.rept 14
> +	b	1f
> +1:
> +	.endr
> +	blr
> +
>  	.globl rfi_flush_fallback
>  rfi_flush_fallback:
>  	SET_SCRATCH0(r13);
> @@ -1571,39 +1591,37 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
>  	std	r9,PACA_EXRFI+EX_R9(r13)
>  	std	r10,PACA_EXRFI+EX_R10(r13)
>  	std	r11,PACA_EXRFI+EX_R11(r13)
> -	std	r12,PACA_EXRFI+EX_R12(r13)
> -	std	r8,PACA_EXRFI+EX_R13(r13)
>  	mfctr	r9
>  	ld	r10,PACA_RFI_FLUSH_FALLBACK_AREA(r13)
> -	ld	r11,PACA_L1D_FLUSH_SETS(r13)
> -	ld	r12,PACA_L1D_FLUSH_CONGRUENCE(r13)
> -	/*
> -	 * The load adresses are at staggered offsets within cachelines,
> -	 * which suits some pipelines better (on others it should not
> -	 * hurt).
> -	 */
> -	addi	r12,r12,8
> +	ld	r11,PACA_L1D_FLUSH_SIZE(r13)
> +	srdi	r11,r11,(7 + 3) /* 128 byte lines, unrolled 8x */
>  	mtctr	r11
>  	DCBT_STOP_ALL_STREAM_IDS(r11) /* Stop prefetch streams */
>  
>  	/* order ld/st prior to dcbt stop all streams with flushing */
>  	sync
> -1:	li	r8,0
> -	.rept	8 /* 8-way set associative */
> -	ldx	r11,r10,r8
> -	add	r8,r8,r12
> -	xor	r11,r11,r11	// Ensure r11 is 0 even if fallback area is not
> -	add	r8,r8,r11	// Add 0, this creates a dependency on the ldx
> -	.endr
> -	addi	r10,r10,128 /* 128 byte cache line */
> +
> +	/*
> +	 * The load adresses are at staggered offsets within cachelines,
> +	 * which suits some pipelines better (on others it should not
> +	 * hurt).
> +	 */
> +1:
> +	ld	r11,(0x80 + 8)*0(r10)
> +	ld	r11,(0x80 + 8)*1(r10)
> +	ld	r11,(0x80 + 8)*2(r10)
> +	ld	r11,(0x80 + 8)*3(r10)
> +	ld	r11,(0x80 + 8)*4(r10)
> +	ld	r11,(0x80 + 8)*5(r10)
> +	ld	r11,(0x80 + 8)*6(r10)
> +	ld	r11,(0x80 + 8)*7(r10)
> +	addi	r10,r10,0x80*8
>  	bdnz	1b
>  
>  	mtctr	r9
>  	ld	r9,PACA_EXRFI+EX_R9(r13)
>  	ld	r10,PACA_EXRFI+EX_R10(r13)
>  	ld	r11,PACA_EXRFI+EX_R11(r13)
> -	ld	r12,PACA_EXRFI+EX_R12(r13)
> -	ld	r8,PACA_EXRFI+EX_R13(r13)
>  	GET_SCRATCH0(r13);
>  	rfid
>  
> @@ -1614,39 +1632,37 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
>  	std	r9,PACA_EXRFI+EX_R9(r13)
>  	std	r10,PACA_EXRFI+EX_R10(r13)
>  	std	r11,PACA_EXRFI+EX_R11(r13)
> -	std	r12,PACA_EXRFI+EX_R12(r13)
> -	std	r8,PACA_EXRFI+EX_R13(r13)
>  	mfctr	r9
>  	ld	r10,PACA_RFI_FLUSH_FALLBACK_AREA(r13)
> -	ld	r11,PACA_L1D_FLUSH_SETS(r13)
> -	ld	r12,PACA_L1D_FLUSH_CONGRUENCE(r13)
> -	/*
> -	 * The load adresses are at staggered offsets within cachelines,
> -	 * which suits some pipelines better (on others it should not
> -	 * hurt).
> -	 */
> -	addi	r12,r12,8
> +	ld	r11,PACA_L1D_FLUSH_SIZE(r13)
> +	srdi	r11,r11,(7 + 3) /* 128 byte lines, unrolled 8x */
>  	mtctr	r11
>  	DCBT_STOP_ALL_STREAM_IDS(r11) /* Stop prefetch streams */
>  
>  	/* order ld/st prior to dcbt stop all streams with flushing */
>  	sync
> -1:	li	r8,0
> -	.rept	8 /* 8-way set associative */
> -	ldx	r11,r10,r8
> -	add	r8,r8,r12
> -	xor	r11,r11,r11	// Ensure r11 is 0 even if fallback area is not
> -	add	r8,r8,r11	// Add 0, this creates a dependency on the ldx
> -	.endr
> -	addi	r10,r10,128 /* 128 byte cache line */
> +
> +	/*
> +	 * The load adresses are at staggered offsets within cachelines,
> +	 * which suits some pipelines better (on others it should not
> +	 * hurt).
> +	 */
> +1:
> +	ld	r11,(0x80 + 8)*0(r10)
> +	ld	r11,(0x80 + 8)*1(r10)
> +	ld	r11,(0x80 + 8)*2(r10)
> +	ld	r11,(0x80 + 8)*3(r10)
> +	ld	r11,(0x80 + 8)*4(r10)
> +	ld	r11,(0x80 + 8)*5(r10)
> +	ld	r11,(0x80 + 8)*6(r10)
> +	ld	r11,(0x80 + 8)*7(r10)
> +	addi	r10,r10,0x80*8
>  	bdnz	1b
>  
>  	mtctr	r9
>  	ld	r9,PACA_EXRFI+EX_R9(r13)
>  	ld	r10,PACA_EXRFI+EX_R10(r13)
>  	ld	r11,PACA_EXRFI+EX_R11(r13)
> -	ld	r12,PACA_EXRFI+EX_R12(r13)
> -	ld	r8,PACA_EXRFI+EX_R13(r13)
>  	GET_SCRATCH0(r13);
>  	hrfid
>  
> diff --git a/arch/powerpc/kernel/module.c b/arch/powerpc/kernel/module.c
> index 9547381b631a..ff009be97a42 100644
> --- a/arch/powerpc/kernel/module.c
> +++ b/arch/powerpc/kernel/module.c
> @@ -67,7 +67,15 @@ int module_finalize(const Elf_Ehdr *hdr,
>  		do_feature_fixups(powerpc_firmware_features,
>  				  (void *)sect->sh_addr,
>  				  (void *)sect->sh_addr + sect->sh_size);
> -#endif
> +#endif /* CONFIG_PPC64 */
> +
> +#ifdef CONFIG_PPC_BARRIER_NOSPEC
> +	sect = find_section(hdr, sechdrs, "__spec_barrier_fixup");
> +	if (sect != NULL)
> +		do_barrier_nospec_fixups_range(barrier_nospec_enabled,
> +				  (void *)sect->sh_addr,
> +				  (void *)sect->sh_addr + sect->sh_size);
> +#endif /* CONFIG_PPC_BARRIER_NOSPEC */
>  
>  	sect = find_section(hdr, sechdrs, "__lwsync_fixup");
>  	if (sect != NULL)
> diff --git a/arch/powerpc/kernel/security.c b/arch/powerpc/kernel/security.c
> new file mode 100644
> index 000000000000..58f0602a92b9
> --- /dev/null
> +++ b/arch/powerpc/kernel/security.c
> @@ -0,0 +1,433 @@
> +// SPDX-License-Identifier: GPL-2.0+
> +//
> +// Security related flags and so on.
> +//
> +// Copyright 2018, Michael Ellerman, IBM Corporation.
> +
> +#include <linux/kernel.h>
> +#include <linux/debugfs.h>
> +#include <linux/device.h>
> +#include <linux/seq_buf.h>
> +
> +#include <asm/debug.h>
> +#include <asm/asm-prototypes.h>
> +#include <asm/code-patching.h>
> +#include <asm/security_features.h>
> +#include <asm/setup.h>
> +
> +
> +unsigned long powerpc_security_features __read_mostly = SEC_FTR_DEFAULT;
> +
> +enum count_cache_flush_type {
> +	COUNT_CACHE_FLUSH_NONE	= 0x1,
> +	COUNT_CACHE_FLUSH_SW	= 0x2,
> +	COUNT_CACHE_FLUSH_HW	= 0x4,
> +};
> +static enum count_cache_flush_type count_cache_flush_type = COUNT_CACHE_FLUSH_NONE;
> +
> +bool barrier_nospec_enabled;
> +static bool no_nospec;
> +static bool btb_flush_enabled;
> +#ifdef CONFIG_PPC_FSL_BOOK3E
> +static bool no_spectrev2;
> +#endif
> +
> +static void enable_barrier_nospec(bool enable)
> +{
> +	barrier_nospec_enabled = enable;
> +	do_barrier_nospec_fixups(enable);
> +}
> +
> +void setup_barrier_nospec(void)
> +{
> +	bool enable;
> +
> +	/*
> +	 * It would make sense to check SEC_FTR_SPEC_BAR_ORI31 below as well.
> +	 * But there's a good reason not to. The two flags we check below are
> +	 * both enabled by default in the kernel, so if the hcall is not
> +	 * functional they will be enabled.
> +	 * On a system where the host firmware has been updated (so the ori
> +	 * functions as a barrier), but on which the hypervisor (KVM/Qemu) has
> +	 * not been updated, we would like to enable the barrier. Dropping the
> +	 * check for SEC_FTR_SPEC_BAR_ORI31 achieves that. The only downside is
> +	 * we potentially enable the barrier on systems where the host firmware
> +	 * is not updated, but that's harmless as it's a no-op.
> +	 */
> +	enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) &&
> +		 security_ftr_enabled(SEC_FTR_BNDS_CHK_SPEC_BAR);
> +
> +	if (!no_nospec)
> +		enable_barrier_nospec(enable);
> +}
> +
> +static int __init handle_nospectre_v1(char *p)
> +{
> +	no_nospec = true;
> +
> +	return 0;
> +}
> +early_param("nospectre_v1", handle_nospectre_v1);
> +
> +#ifdef CONFIG_DEBUG_FS
> +static int barrier_nospec_set(void *data, u64 val)
> +{
> +	switch (val) {
> +	case 0:
> +	case 1:
> +		break;
> +	default:
> +		return -EINVAL;
> +	}
> +
> +	if (!!val == !!barrier_nospec_enabled)
> +		return 0;
> +
> +	enable_barrier_nospec(!!val);
> +
> +	return 0;
> +}
> +
> +static int barrier_nospec_get(void *data, u64 *val)
> +{
> +	*val = barrier_nospec_enabled ? 1 : 0;
> +	return 0;
> +}
> +
> +DEFINE_SIMPLE_ATTRIBUTE(fops_barrier_nospec,
> +			barrier_nospec_get, barrier_nospec_set, "%llu\n");
> +
> +static __init int barrier_nospec_debugfs_init(void)
> +{
> +	debugfs_create_file("barrier_nospec", 0600, powerpc_debugfs_root, NULL,
> +			    &fops_barrier_nospec);
> +	return 0;
> +}
> +device_initcall(barrier_nospec_debugfs_init);
> +#endif /* CONFIG_DEBUG_FS */
> +
> +#ifdef CONFIG_PPC_FSL_BOOK3E
> +static int __init handle_nospectre_v2(char *p)
> +{
> +	no_spectrev2 = true;
> +
> +	return 0;
> +}
> +early_param("nospectre_v2", handle_nospectre_v2);
> +void setup_spectre_v2(void)
> +{
> +	if (no_spectrev2)
> +		do_btb_flush_fixups();
> +	else
> +		btb_flush_enabled = true;
> +}
> +#endif /* CONFIG_PPC_FSL_BOOK3E */
> +
> +#ifdef CONFIG_PPC_BOOK3S_64
> +ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr, char *buf)
> +{
> +	bool thread_priv;
> +
> +	thread_priv = security_ftr_enabled(SEC_FTR_L1D_THREAD_PRIV);
> +
> +	if (rfi_flush || thread_priv) {
> +		struct seq_buf s;
> +		seq_buf_init(&s, buf, PAGE_SIZE - 1);
> +
> +		seq_buf_printf(&s, "Mitigation: ");
> +
> +		if (rfi_flush)
> +			seq_buf_printf(&s, "RFI Flush");
> +
> +		if (rfi_flush && thread_priv)
> +			seq_buf_printf(&s, ", ");
> +
> +		if (thread_priv)
> +			seq_buf_printf(&s, "L1D private per thread");
> +
> +		seq_buf_printf(&s, "\n");
> +
> +		return s.len;
> +	}
> +
> +	if (!security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV) &&
> +	    !security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR))
> +		return sprintf(buf, "Not affected\n");
> +
> +	return sprintf(buf, "Vulnerable\n");
> +}
> +#endif
> +
> +ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr, char *buf)
> +{
> +	struct seq_buf s;
> +
> +	seq_buf_init(&s, buf, PAGE_SIZE - 1);
> +
> +	if (security_ftr_enabled(SEC_FTR_BNDS_CHK_SPEC_BAR)) {
> +		if (barrier_nospec_enabled)
> +			seq_buf_printf(&s, "Mitigation: __user pointer sanitization");
> +		else
> +			seq_buf_printf(&s, "Vulnerable");
> +
> +		if (security_ftr_enabled(SEC_FTR_SPEC_BAR_ORI31))
> +			seq_buf_printf(&s, ", ori31 speculation barrier enabled");
> +
> +		seq_buf_printf(&s, "\n");
> +	} else
> +		seq_buf_printf(&s, "Not affected\n");
> +
> +	return s.len;
> +}
> +
> +ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr, char *buf)
> +{
> +	struct seq_buf s;
> +	bool bcs, ccd;
> +
> +	seq_buf_init(&s, buf, PAGE_SIZE - 1);
> +
> +	bcs = security_ftr_enabled(SEC_FTR_BCCTRL_SERIALISED);
> +	ccd = security_ftr_enabled(SEC_FTR_COUNT_CACHE_DISABLED);
> +
> +	if (bcs || ccd) {
> +		seq_buf_printf(&s, "Mitigation: ");
> +
> +		if (bcs)
> +			seq_buf_printf(&s, "Indirect branch serialisation (kernel only)");
> +
> +		if (bcs && ccd)
> +			seq_buf_printf(&s, ", ");
> +
> +		if (ccd)
> +			seq_buf_printf(&s, "Indirect branch cache disabled");
> +	} else if (count_cache_flush_type != COUNT_CACHE_FLUSH_NONE) {
> +		seq_buf_printf(&s, "Mitigation: Software count cache flush");
> +
> +		if (count_cache_flush_type == COUNT_CACHE_FLUSH_HW)
> +			seq_buf_printf(&s, " (hardware accelerated)");
> +	} else if (btb_flush_enabled) {
> +		seq_buf_printf(&s, "Mitigation: Branch predictor state flush");
> +	} else {
> +		seq_buf_printf(&s, "Vulnerable");
> +	}
> +
> +	seq_buf_printf(&s, "\n");
> +
> +	return s.len;
> +}
> +
> +#ifdef CONFIG_PPC_BOOK3S_64
> +/*
> + * Store-forwarding barrier support.
> + */
> +
> +static enum stf_barrier_type stf_enabled_flush_types;
> +static bool no_stf_barrier;
> +bool stf_barrier;
> +
> +static int __init handle_no_stf_barrier(char *p)
> +{
> +	pr_info("stf-barrier: disabled on command line.");
> +	no_stf_barrier = true;
> +	return 0;
> +}
> +
> +early_param("no_stf_barrier", handle_no_stf_barrier);
> +
> +/* This is the generic flag used by other architectures */
> +static int __init handle_ssbd(char *p)
> +{
> +	if (!p || strncmp(p, "auto", 5) == 0 || strncmp(p, "on", 2) == 0 ) {
> +		/* Until firmware tells us, we have the barrier with auto */
> +		return 0;
> +	} else if (strncmp(p, "off", 3) == 0) {
> +		handle_no_stf_barrier(NULL);
> +		return 0;
> +	} else
> +		return 1;
> +
> +	return 0;
> +}
> +early_param("spec_store_bypass_disable", handle_ssbd);
> +
> +/* This is the generic flag used by other architectures */
> +static int __init handle_no_ssbd(char *p)
> +{
> +	handle_no_stf_barrier(NULL);
> +	return 0;
> +}
> +early_param("nospec_store_bypass_disable", handle_no_ssbd);
> +
> +static void stf_barrier_enable(bool enable)
> +{
> +	if (enable)
> +		do_stf_barrier_fixups(stf_enabled_flush_types);
> +	else
> +		do_stf_barrier_fixups(STF_BARRIER_NONE);
> +
> +	stf_barrier = enable;
> +}
> +
> +void setup_stf_barrier(void)
> +{
> +	enum stf_barrier_type type;
> +	bool enable, hv;
> +
> +	hv = cpu_has_feature(CPU_FTR_HVMODE);
> +
> +	/* Default to fallback in case fw-features are not available */
> +	if (cpu_has_feature(CPU_FTR_ARCH_207S))
> +		type = STF_BARRIER_SYNC_ORI;
> +	else if (cpu_has_feature(CPU_FTR_ARCH_206))
> +		type = STF_BARRIER_FALLBACK;
> +	else
> +		type = STF_BARRIER_NONE;
> +
> +	enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) &&
> +		(security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR) ||
> +		 (security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV) && hv));
> +
> +	if (type == STF_BARRIER_FALLBACK) {
> +		pr_info("stf-barrier: fallback barrier available\n");
> +	} else if (type == STF_BARRIER_SYNC_ORI) {
> +		pr_info("stf-barrier: hwsync barrier available\n");
> +	} else if (type == STF_BARRIER_EIEIO) {
> +		pr_info("stf-barrier: eieio barrier available\n");
> +	}
> +
> +	stf_enabled_flush_types = type;
> +
> +	if (!no_stf_barrier)
> +		stf_barrier_enable(enable);
> +}
> +
> +ssize_t cpu_show_spec_store_bypass(struct device *dev, struct device_attribute *attr, char *buf)
> +{
> +	if (stf_barrier && stf_enabled_flush_types != STF_BARRIER_NONE) {
> +		const char *type;
> +		switch (stf_enabled_flush_types) {
> +		case STF_BARRIER_EIEIO:
> +			type = "eieio";
> +			break;
> +		case STF_BARRIER_SYNC_ORI:
> +			type = "hwsync";
> +			break;
> +		case STF_BARRIER_FALLBACK:
> +			type = "fallback";
> +			break;
> +		default:
> +			type = "unknown";
> +		}
> +		return sprintf(buf, "Mitigation: Kernel entry/exit barrier (%s)\n", type);
> +	}
> +
> +	if (!security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV) &&
> +	    !security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR))
> +		return sprintf(buf, "Not affected\n");
> +
> +	return sprintf(buf, "Vulnerable\n");
> +}
> +
> +#ifdef CONFIG_DEBUG_FS
> +static int stf_barrier_set(void *data, u64 val)
> +{
> +	bool enable;
> +
> +	if (val == 1)
> +		enable = true;
> +	else if (val == 0)
> +		enable = false;
> +	else
> +		return -EINVAL;
> +
> +	/* Only do anything if we're changing state */
> +	if (enable != stf_barrier)
> +		stf_barrier_enable(enable);
> +
> +	return 0;
> +}
> +
> +static int stf_barrier_get(void *data, u64 *val)
> +{
> +	*val = stf_barrier ? 1 : 0;
> +	return 0;
> +}
> +
> +DEFINE_SIMPLE_ATTRIBUTE(fops_stf_barrier, stf_barrier_get, stf_barrier_set, "%llu\n");
> +
> +static __init int stf_barrier_debugfs_init(void)
> +{
> +	debugfs_create_file("stf_barrier", 0600, powerpc_debugfs_root, NULL, &fops_stf_barrier);
> +	return 0;
> +}
> +device_initcall(stf_barrier_debugfs_init);
> +#endif /* CONFIG_DEBUG_FS */
> +
> +static void toggle_count_cache_flush(bool enable)
> +{
> +	if (!enable || !security_ftr_enabled(SEC_FTR_FLUSH_COUNT_CACHE)) {
> +		patch_instruction_site(&patch__call_flush_count_cache, PPC_INST_NOP);
> +		count_cache_flush_type = COUNT_CACHE_FLUSH_NONE;
> +		pr_info("count-cache-flush: software flush disabled.\n");
> +		return;
> +	}
> +
> +	patch_branch_site(&patch__call_flush_count_cache,
> +			  (u64)&flush_count_cache, BRANCH_SET_LINK);
> +
> +	if (!security_ftr_enabled(SEC_FTR_BCCTR_FLUSH_ASSIST)) {
> +		count_cache_flush_type = COUNT_CACHE_FLUSH_SW;
> +		pr_info("count-cache-flush: full software flush sequence enabled.\n");
> +		return;
> +	}
> +
> +	patch_instruction_site(&patch__flush_count_cache_return, PPC_INST_BLR);
> +	count_cache_flush_type = COUNT_CACHE_FLUSH_HW;
> +	pr_info("count-cache-flush: hardware assisted flush sequence enabled\n");
> +}
> +
> +void setup_count_cache_flush(void)
> +{
> +	toggle_count_cache_flush(true);
> +}
> +
> +#ifdef CONFIG_DEBUG_FS
> +static int count_cache_flush_set(void *data, u64 val)
> +{
> +	bool enable;
> +
> +	if (val == 1)
> +		enable = true;
> +	else if (val == 0)
> +		enable = false;
> +	else
> +		return -EINVAL;
> +
> +	toggle_count_cache_flush(enable);
> +
> +	return 0;
> +}
> +
> +static int count_cache_flush_get(void *data, u64 *val)
> +{
> +	if (count_cache_flush_type == COUNT_CACHE_FLUSH_NONE)
> +		*val = 0;
> +	else
> +		*val = 1;
> +
> +	return 0;
> +}
> +
> +DEFINE_SIMPLE_ATTRIBUTE(fops_count_cache_flush, count_cache_flush_get,
> +			count_cache_flush_set, "%llu\n");
> +
> +static __init int count_cache_flush_debugfs_init(void)
> +{
> +	debugfs_create_file("count_cache_flush", 0600, powerpc_debugfs_root,
> +			    NULL, &fops_count_cache_flush);
> +	return 0;
> +}
> +device_initcall(count_cache_flush_debugfs_init);
> +#endif /* CONFIG_DEBUG_FS */
> +#endif /* CONFIG_PPC_BOOK3S_64 */
> diff --git a/arch/powerpc/kernel/setup_32.c b/arch/powerpc/kernel/setup_32.c
> index ad8c9db61237..5a9f035bcd6b 100644
> --- a/arch/powerpc/kernel/setup_32.c
> +++ b/arch/powerpc/kernel/setup_32.c
> @@ -322,6 +322,8 @@ void __init setup_arch(char **cmdline_p)
>  		ppc_md.setup_arch();
>  	if ( ppc_md.progress ) ppc_md.progress("arch: exit", 0x3eab);
>  
> +	setup_barrier_nospec();
> +
>  	paging_init();
>  
>  	/* Initialize the MMU context management stuff */
> diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
> index 9eb469bed22b..6bb731ababc6 100644
> --- a/arch/powerpc/kernel/setup_64.c
> +++ b/arch/powerpc/kernel/setup_64.c
> @@ -736,6 +736,8 @@ void __init setup_arch(char **cmdline_p)
>  	if (ppc_md.setup_arch)
>  		ppc_md.setup_arch();
>  
> +	setup_barrier_nospec();
> +
>  	paging_init();
>  
>  	/* Initialize the MMU context management stuff */
> @@ -873,9 +875,6 @@ static void do_nothing(void *unused)
>  
>  void rfi_flush_enable(bool enable)
>  {
> -	if (rfi_flush == enable)
> -		return;
> -
>  	if (enable) {
>  		do_rfi_flush_fixups(enabled_flush_types);
>  		on_each_cpu(do_nothing, NULL, 1);
> @@ -885,11 +884,15 @@ void rfi_flush_enable(bool enable)
>  	rfi_flush = enable;
>  }
>  
> -static void init_fallback_flush(void)
> +static void __ref init_fallback_flush(void)
>  {
>  	u64 l1d_size, limit;
>  	int cpu;
>  
> +	/* Only allocate the fallback flush area once (at boot time). */
> +	if (l1d_flush_fallback_area)
> +		return;
> +
>  	l1d_size = ppc64_caches.dsize;
>  	limit = min(safe_stack_limit(), ppc64_rma_size);
>  
> @@ -902,34 +905,23 @@ static void init_fallback_flush(void)
>  	memset(l1d_flush_fallback_area, 0, l1d_size * 2);
>  
>  	for_each_possible_cpu(cpu) {
> -		/*
> -		 * The fallback flush is currently coded for 8-way
> -		 * associativity. Different associativity is possible, but it
> -		 * will be treated as 8-way and may not evict the lines as
> -		 * effectively.
> -		 *
> -		 * 128 byte lines are mandatory.
> -		 */
> -		u64 c = l1d_size / 8;
> -
>  		paca[cpu].rfi_flush_fallback_area = l1d_flush_fallback_area;
> -		paca[cpu].l1d_flush_congruence = c;
> -		paca[cpu].l1d_flush_sets = c / 128;
> +		paca[cpu].l1d_flush_size = l1d_size;
>  	}
>  }
>  
> -void __init setup_rfi_flush(enum l1d_flush_type types, bool enable)
> +void setup_rfi_flush(enum l1d_flush_type types, bool enable)
>  {
>  	if (types & L1D_FLUSH_FALLBACK) {
> -		pr_info("rfi-flush: Using fallback displacement flush\n");
> +		pr_info("rfi-flush: fallback displacement flush available\n");
>  		init_fallback_flush();
>  	}
>  
>  	if (types & L1D_FLUSH_ORI)
> -		pr_info("rfi-flush: Using ori type flush\n");
> +		pr_info("rfi-flush: ori type flush available\n");
>  
>  	if (types & L1D_FLUSH_MTTRIG)
> -		pr_info("rfi-flush: Using mttrig type flush\n");
> +		pr_info("rfi-flush: mttrig type flush available\n");
>  
>  	enabled_flush_types = types;
>  
> @@ -940,13 +932,19 @@ void __init setup_rfi_flush(enum l1d_flush_type types, bool enable)
>  #ifdef CONFIG_DEBUG_FS
>  static int rfi_flush_set(void *data, u64 val)
>  {
> +	bool enable;
> +
>  	if (val == 1)
> -		rfi_flush_enable(true);
> +		enable = true;
>  	else if (val == 0)
> -		rfi_flush_enable(false);
> +		enable = false;
>  	else
>  		return -EINVAL;
>  
> +	/* Only do anything if we're changing state */
> +	if (enable != rfi_flush)
> +		rfi_flush_enable(enable);
> +
>  	return 0;
>  }
>  
> @@ -965,12 +963,4 @@ static __init int rfi_flush_debugfs_init(void)
>  }
>  device_initcall(rfi_flush_debugfs_init);
>  #endif
> -
> -ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr, char *buf)
> -{
> -	if (rfi_flush)
> -		return sprintf(buf, "Mitigation: RFI Flush\n");
> -
> -	return sprintf(buf, "Vulnerable\n");
> -}
>  #endif /* CONFIG_PPC_BOOK3S_64 */
> diff --git a/arch/powerpc/kernel/vmlinux.lds.S b/arch/powerpc/kernel/vmlinux.lds.S
> index 072a23a17350..876ac9d52afc 100644
> --- a/arch/powerpc/kernel/vmlinux.lds.S
> +++ b/arch/powerpc/kernel/vmlinux.lds.S
> @@ -73,14 +73,45 @@ SECTIONS
>  	RODATA
>  
>  #ifdef CONFIG_PPC64
> +	. = ALIGN(8);
> +	__stf_entry_barrier_fixup : AT(ADDR(__stf_entry_barrier_fixup) - LOAD_OFFSET) {
> +		__start___stf_entry_barrier_fixup = .;
> +		*(__stf_entry_barrier_fixup)
> +		__stop___stf_entry_barrier_fixup = .;
> +	}
> +
> +	. = ALIGN(8);
> +	__stf_exit_barrier_fixup : AT(ADDR(__stf_exit_barrier_fixup) - LOAD_OFFSET) {
> +		__start___stf_exit_barrier_fixup = .;
> +		*(__stf_exit_barrier_fixup)
> +		__stop___stf_exit_barrier_fixup = .;
> +	}
> +
>  	. = ALIGN(8);
>  	__rfi_flush_fixup : AT(ADDR(__rfi_flush_fixup) - LOAD_OFFSET) {
>  		__start___rfi_flush_fixup = .;
>  		*(__rfi_flush_fixup)
>  		__stop___rfi_flush_fixup = .;
>  	}
> -#endif
> +#endif /* CONFIG_PPC64 */
>  
> +#ifdef CONFIG_PPC_BARRIER_NOSPEC
> +	. = ALIGN(8);
> +	__spec_barrier_fixup : AT(ADDR(__spec_barrier_fixup) - LOAD_OFFSET) {
> +		__start___barrier_nospec_fixup = .;
> +		*(__barrier_nospec_fixup)
> +		__stop___barrier_nospec_fixup = .;
> +	}
> +#endif /* CONFIG_PPC_BARRIER_NOSPEC */
> +
> +#ifdef CONFIG_PPC_FSL_BOOK3E
> +	. = ALIGN(8);
> +	__spec_btb_flush_fixup : AT(ADDR(__spec_btb_flush_fixup) - LOAD_OFFSET) {
> +		__start__btb_flush_fixup = .;
> +		*(__btb_flush_fixup)
> +		__stop__btb_flush_fixup = .;
> +	}
> +#endif
>  	EXCEPTION_TABLE(0)
>  
>  	NOTES :kernel :notes
> diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
> index d5edbeb8eb82..570c06a00db6 100644
> --- a/arch/powerpc/lib/code-patching.c
> +++ b/arch/powerpc/lib/code-patching.c
> @@ -14,12 +14,25 @@
>  #include <asm/page.h>
>  #include <asm/code-patching.h>
>  #include <asm/uaccess.h>
> +#include <asm/setup.h>
> +#include <asm/sections.h>
>  
>  
> +static inline bool is_init(unsigned int *addr)
> +{
> +	return addr >= (unsigned int *)__init_begin && addr < (unsigned int *)__init_end;
> +}
> +
>  int patch_instruction(unsigned int *addr, unsigned int instr)
>  {
>  	int err;
>  
> +	/* Make sure we aren't patching a freed init section */
> +	if (init_mem_is_free && is_init(addr)) {
> +		pr_debug("Skipping init section patching addr: 0x%px\n", addr);
> +		return 0;
> +	}
> +
>  	__put_user_size(instr, addr, 4, err);
>  	if (err)
>  		return err;
> @@ -32,6 +45,22 @@ int patch_branch(unsigned int *addr, unsigned long target, int flags)
>  	return patch_instruction(addr, create_branch(addr, target, flags));
>  }
>  
> +int patch_branch_site(s32 *site, unsigned long target, int flags)
> +{
> +	unsigned int *addr;
> +
> +	addr = (unsigned int *)((unsigned long)site + *site);
> +	return patch_instruction(addr, create_branch(addr, target, flags));
> +}
> +
> +int patch_instruction_site(s32 *site, unsigned int instr)
> +{
> +	unsigned int *addr;
> +
> +	addr = (unsigned int *)((unsigned long)site + *site);
> +	return patch_instruction(addr, instr);
> +}
> +
>  unsigned int create_branch(const unsigned int *addr,
>  			   unsigned long target, int flags)
>  {
> diff --git a/arch/powerpc/lib/feature-fixups.c b/arch/powerpc/lib/feature-fixups.c
> index 3af014684872..7bdfc19a491d 100644
> --- a/arch/powerpc/lib/feature-fixups.c
> +++ b/arch/powerpc/lib/feature-fixups.c
> @@ -21,7 +21,7 @@
>  #include <asm/page.h>
>  #include <asm/sections.h>
>  #include <asm/setup.h>
> -
> +#include <asm/security_features.h>
>  
>  struct fixup_entry {
>  	unsigned long	mask;
> @@ -115,6 +115,120 @@ void do_feature_fixups(unsigned long value, void *fixup_start, void *fixup_end)
>  }
>  
>  #ifdef CONFIG_PPC_BOOK3S_64
> +void do_stf_entry_barrier_fixups(enum stf_barrier_type types)
> +{
> +	unsigned int instrs[3], *dest;
> +	long *start, *end;
> +	int i;
> +
> +	start = PTRRELOC(&__start___stf_entry_barrier_fixup),
> +	end = PTRRELOC(&__stop___stf_entry_barrier_fixup);
> +
> +	instrs[0] = 0x60000000; /* nop */
> +	instrs[1] = 0x60000000; /* nop */
> +	instrs[2] = 0x60000000; /* nop */
> +
> +	i = 0;
> +	if (types & STF_BARRIER_FALLBACK) {
> +		instrs[i++] = 0x7d4802a6; /* mflr r10		*/
> +		instrs[i++] = 0x60000000; /* branch patched below */
> +		instrs[i++] = 0x7d4803a6; /* mtlr r10		*/
> +	} else if (types & STF_BARRIER_EIEIO) {
> +		instrs[i++] = 0x7e0006ac; /* eieio + bit 6 hint */
> +	} else if (types & STF_BARRIER_SYNC_ORI) {
> +		instrs[i++] = 0x7c0004ac; /* hwsync		*/
> +		instrs[i++] = 0xe94d0000; /* ld r10,0(r13)	*/
> +		instrs[i++] = 0x63ff0000; /* ori 31,31,0 speculation barrier */
> +	}
> +
> +	for (i = 0; start < end; start++, i++) {
> +		dest = (void *)start + *start;
> +
> +		pr_devel("patching dest %lx\n", (unsigned long)dest);
> +
> +		patch_instruction(dest, instrs[0]);
> +
> +		if (types & STF_BARRIER_FALLBACK)
> +			patch_branch(dest + 1, (unsigned long)&stf_barrier_fallback,
> +				     BRANCH_SET_LINK);
> +		else
> +			patch_instruction(dest + 1, instrs[1]);
> +
> +		patch_instruction(dest + 2, instrs[2]);
> +	}
> +
> +	printk(KERN_DEBUG "stf-barrier: patched %d entry locations (%s barrier)\n", i,
> +		(types == STF_BARRIER_NONE)                  ? "no" :
> +		(types == STF_BARRIER_FALLBACK)              ? "fallback" :
> +		(types == STF_BARRIER_EIEIO)                 ? "eieio" :
> +		(types == (STF_BARRIER_SYNC_ORI))            ? "hwsync"
> +		                                           : "unknown");
> +}
> +
> +void do_stf_exit_barrier_fixups(enum stf_barrier_type types)
> +{
> +	unsigned int instrs[6], *dest;
> +	long *start, *end;
> +	int i;
> +
> +	start = PTRRELOC(&__start___stf_exit_barrier_fixup),
> +	end = PTRRELOC(&__stop___stf_exit_barrier_fixup);
> +
> +	instrs[0] = 0x60000000; /* nop */
> +	instrs[1] = 0x60000000; /* nop */
> +	instrs[2] = 0x60000000; /* nop */
> +	instrs[3] = 0x60000000; /* nop */
> +	instrs[4] = 0x60000000; /* nop */
> +	instrs[5] = 0x60000000; /* nop */
> +
> +	i = 0;
> +	if (types & STF_BARRIER_FALLBACK || types & STF_BARRIER_SYNC_ORI) {
> +		if (cpu_has_feature(CPU_FTR_HVMODE)) {
> +			instrs[i++] = 0x7db14ba6; /* mtspr 0x131, r13 (HSPRG1) */
> +			instrs[i++] = 0x7db04aa6; /* mfspr r13, 0x130 (HSPRG0) */
> +		} else {
> +			instrs[i++] = 0x7db243a6; /* mtsprg 2,r13	*/
> +			instrs[i++] = 0x7db142a6; /* mfsprg r13,1    */
> +	        }
> +		instrs[i++] = 0x7c0004ac; /* hwsync		*/
> +		instrs[i++] = 0xe9ad0000; /* ld r13,0(r13)	*/
> +		instrs[i++] = 0x63ff0000; /* ori 31,31,0 speculation barrier */
> +		if (cpu_has_feature(CPU_FTR_HVMODE)) {
> +			instrs[i++] = 0x7db14aa6; /* mfspr r13, 0x131 (HSPRG1) */
> +		} else {
> +			instrs[i++] = 0x7db242a6; /* mfsprg r13,2 */
> +		}
> +	} else if (types & STF_BARRIER_EIEIO) {
> +		instrs[i++] = 0x7e0006ac; /* eieio + bit 6 hint */
> +	}
> +
> +	for (i = 0; start < end; start++, i++) {
> +		dest = (void *)start + *start;
> +
> +		pr_devel("patching dest %lx\n", (unsigned long)dest);
> +
> +		patch_instruction(dest, instrs[0]);
> +		patch_instruction(dest + 1, instrs[1]);
> +		patch_instruction(dest + 2, instrs[2]);
> +		patch_instruction(dest + 3, instrs[3]);
> +		patch_instruction(dest + 4, instrs[4]);
> +		patch_instruction(dest + 5, instrs[5]);
> +	}
> +	printk(KERN_DEBUG "stf-barrier: patched %d exit locations (%s barrier)\n", i,
> +		(types == STF_BARRIER_NONE)                  ? "no" :
> +		(types == STF_BARRIER_FALLBACK)              ? "fallback" :
> +		(types == STF_BARRIER_EIEIO)                 ? "eieio" :
> +		(types == (STF_BARRIER_SYNC_ORI))            ? "hwsync"
> +		                                           : "unknown");
> +}
> +
> +
> +void do_stf_barrier_fixups(enum stf_barrier_type types)
> +{
> +	do_stf_entry_barrier_fixups(types);
> +	do_stf_exit_barrier_fixups(types);
> +}
> +
>  void do_rfi_flush_fixups(enum l1d_flush_type types)
>  {
>  	unsigned int instrs[3], *dest;
> @@ -151,10 +265,110 @@ void do_rfi_flush_fixups(enum l1d_flush_type types)
>  		patch_instruction(dest + 2, instrs[2]);
>  	}
>  
> -	printk(KERN_DEBUG "rfi-flush: patched %d locations\n", i);
> +	printk(KERN_DEBUG "rfi-flush: patched %d locations (%s flush)\n", i,
> +		(types == L1D_FLUSH_NONE)       ? "no" :
> +		(types == L1D_FLUSH_FALLBACK)   ? "fallback displacement" :
> +		(types &  L1D_FLUSH_ORI)        ? (types & L1D_FLUSH_MTTRIG)
> +							? "ori+mttrig type"
> +							: "ori type" :
> +		(types &  L1D_FLUSH_MTTRIG)     ? "mttrig type"
> +						: "unknown");
> +}
> +
> +void do_barrier_nospec_fixups_range(bool enable, void *fixup_start, void *fixup_end)
> +{
> +	unsigned int instr, *dest;
> +	long *start, *end;
> +	int i;
> +
> +	start = fixup_start;
> +	end = fixup_end;
> +
> +	instr = 0x60000000; /* nop */
> +
> +	if (enable) {
> +		pr_info("barrier-nospec: using ORI speculation barrier\n");
> +		instr = 0x63ff0000; /* ori 31,31,0 speculation barrier */
> +	}
> +
> +	for (i = 0; start < end; start++, i++) {
> +		dest = (void *)start + *start;
> +
> +		pr_devel("patching dest %lx\n", (unsigned long)dest);
> +		patch_instruction(dest, instr);
> +	}
> +
> +	printk(KERN_DEBUG "barrier-nospec: patched %d locations\n", i);
>  }
> +
>  #endif /* CONFIG_PPC_BOOK3S_64 */
>  
> +#ifdef CONFIG_PPC_BARRIER_NOSPEC
> +void do_barrier_nospec_fixups(bool enable)
> +{
> +	void *start, *end;
> +
> +	start = PTRRELOC(&__start___barrier_nospec_fixup),
> +	end = PTRRELOC(&__stop___barrier_nospec_fixup);
> +
> +	do_barrier_nospec_fixups_range(enable, start, end);
> +}
> +#endif /* CONFIG_PPC_BARRIER_NOSPEC */
> +
> +#ifdef CONFIG_PPC_FSL_BOOK3E
> +void do_barrier_nospec_fixups_range(bool enable, void *fixup_start, void *fixup_end)
> +{
> +	unsigned int instr[2], *dest;
> +	long *start, *end;
> +	int i;
> +
> +	start = fixup_start;
> +	end = fixup_end;
> +
> +	instr[0] = PPC_INST_NOP;
> +	instr[1] = PPC_INST_NOP;
> +
> +	if (enable) {
> +		pr_info("barrier-nospec: using isync; sync as speculation barrier\n");
> +		instr[0] = PPC_INST_ISYNC;
> +		instr[1] = PPC_INST_SYNC;
> +	}
> +
> +	for (i = 0; start < end; start++, i++) {
> +		dest = (void *)start + *start;
> +
> +		pr_devel("patching dest %lx\n", (unsigned long)dest);
> +		patch_instruction(dest, instr[0]);
> +		patch_instruction(dest + 1, instr[1]);
> +	}
> +
> +	printk(KERN_DEBUG "barrier-nospec: patched %d locations\n", i);
> +}
> +
> +static void patch_btb_flush_section(long *curr)
> +{
> +	unsigned int *start, *end;
> +
> +	start = (void *)curr + *curr;
> +	end = (void *)curr + *(curr + 1);
> +	for (; start < end; start++) {
> +		pr_devel("patching dest %lx\n", (unsigned long)start);
> +		patch_instruction(start, PPC_INST_NOP);
> +	}
> +}
> +
> +void do_btb_flush_fixups(void)
> +{
> +	long *start, *end;
> +
> +	start = PTRRELOC(&__start__btb_flush_fixup);
> +	end = PTRRELOC(&__stop__btb_flush_fixup);
> +
> +	for (; start < end; start += 2)
> +		patch_btb_flush_section(start);
> +}
> +#endif /* CONFIG_PPC_FSL_BOOK3E */
> +
>  void do_lwsync_fixups(unsigned long value, void *fixup_start, void *fixup_end)
>  {
>  	long *start, *end;
> diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
> index 22d94c3e6fc4..1efe5ca5c3bc 100644
> --- a/arch/powerpc/mm/mem.c
> +++ b/arch/powerpc/mm/mem.c
> @@ -62,6 +62,7 @@
>  #endif
>  
>  unsigned long long memory_limit;
> +bool init_mem_is_free;
>  
>  #ifdef CONFIG_HIGHMEM
>  pte_t *kmap_pte;
> @@ -381,6 +382,7 @@ void __init mem_init(void)
>  void free_initmem(void)
>  {
>  	ppc_md.progress = ppc_printk_progress;
> +	init_mem_is_free = true;
>  	free_initmem_default(POISON_FREE_INITMEM);
>  }
>  
> diff --git a/arch/powerpc/mm/tlb_low_64e.S b/arch/powerpc/mm/tlb_low_64e.S
> index 29d6987c37ba..5486d56da289 100644
> --- a/arch/powerpc/mm/tlb_low_64e.S
> +++ b/arch/powerpc/mm/tlb_low_64e.S
> @@ -69,6 +69,13 @@ END_FTR_SECTION_IFSET(CPU_FTR_EMB_HV)
>  	std	r15,EX_TLB_R15(r12)
>  	std	r10,EX_TLB_CR(r12)
>  #ifdef CONFIG_PPC_FSL_BOOK3E
> +START_BTB_FLUSH_SECTION
> +	mfspr r11, SPRN_SRR1
> +	andi. r10,r11,MSR_PR
> +	beq 1f
> +	BTB_FLUSH(r10)
> +1:
> +END_BTB_FLUSH_SECTION
>  	std	r7,EX_TLB_R7(r12)
>  #endif
>  	TLB_MISS_PROLOG_STATS
> diff --git a/arch/powerpc/platforms/powernv/setup.c b/arch/powerpc/platforms/powernv/setup.c
> index c57afc619b20..e14b52c7ebd8 100644
> --- a/arch/powerpc/platforms/powernv/setup.c
> +++ b/arch/powerpc/platforms/powernv/setup.c
> @@ -37,53 +37,99 @@
>  #include <asm/smp.h>
>  #include <asm/tm.h>
>  #include <asm/setup.h>
> +#include <asm/security_features.h>
>  
>  #include "powernv.h"
>  
> +
> +static bool fw_feature_is(const char *state, const char *name,
> +			  struct device_node *fw_features)
> +{
> +	struct device_node *np;
> +	bool rc = false;
> +
> +	np = of_get_child_by_name(fw_features, name);
> +	if (np) {
> +		rc = of_property_read_bool(np, state);
> +		of_node_put(np);
> +	}
> +
> +	return rc;
> +}
> +
> +static void init_fw_feat_flags(struct device_node *np)
> +{
> +	if (fw_feature_is("enabled", "inst-spec-barrier-ori31,31,0", np))
> +		security_ftr_set(SEC_FTR_SPEC_BAR_ORI31);
> +
> +	if (fw_feature_is("enabled", "fw-bcctrl-serialized", np))
> +		security_ftr_set(SEC_FTR_BCCTRL_SERIALISED);
> +
> +	if (fw_feature_is("enabled", "inst-l1d-flush-ori30,30,0", np))
> +		security_ftr_set(SEC_FTR_L1D_FLUSH_ORI30);
> +
> +	if (fw_feature_is("enabled", "inst-l1d-flush-trig2", np))
> +		security_ftr_set(SEC_FTR_L1D_FLUSH_TRIG2);
> +
> +	if (fw_feature_is("enabled", "fw-l1d-thread-split", np))
> +		security_ftr_set(SEC_FTR_L1D_THREAD_PRIV);
> +
> +	if (fw_feature_is("enabled", "fw-count-cache-disabled", np))
> +		security_ftr_set(SEC_FTR_COUNT_CACHE_DISABLED);
> +
> +	if (fw_feature_is("enabled", "fw-count-cache-flush-bcctr2,0,0", np))
> +		security_ftr_set(SEC_FTR_BCCTR_FLUSH_ASSIST);
> +
> +	if (fw_feature_is("enabled", "needs-count-cache-flush-on-context-switch", np))
> +		security_ftr_set(SEC_FTR_FLUSH_COUNT_CACHE);
> +
> +	/*
> +	 * The features below are enabled by default, so we instead look to see
> +	 * if firmware has *disabled* them, and clear them if so.
> +	 */
> +	if (fw_feature_is("disabled", "speculation-policy-favor-security", np))
> +		security_ftr_clear(SEC_FTR_FAVOUR_SECURITY);
> +
> +	if (fw_feature_is("disabled", "needs-l1d-flush-msr-pr-0-to-1", np))
> +		security_ftr_clear(SEC_FTR_L1D_FLUSH_PR);
> +
> +	if (fw_feature_is("disabled", "needs-l1d-flush-msr-hv-1-to-0", np))
> +		security_ftr_clear(SEC_FTR_L1D_FLUSH_HV);
> +
> +	if (fw_feature_is("disabled", "needs-spec-barrier-for-bound-checks", np))
> +		security_ftr_clear(SEC_FTR_BNDS_CHK_SPEC_BAR);
> +}
> +
>  static void pnv_setup_rfi_flush(void)
>  {
>  	struct device_node *np, *fw_features;
>  	enum l1d_flush_type type;
> -	int enable;
> +	bool enable;
>  
>  	/* Default to fallback in case fw-features are not available */
>  	type = L1D_FLUSH_FALLBACK;
> -	enable = 1;
>  
>  	np = of_find_node_by_name(NULL, "ibm,opal");
>  	fw_features = of_get_child_by_name(np, "fw-features");
>  	of_node_put(np);
>  
>  	if (fw_features) {
> -		np = of_get_child_by_name(fw_features, "inst-l1d-flush-trig2");
> -		if (np && of_property_read_bool(np, "enabled"))
> -			type = L1D_FLUSH_MTTRIG;
> +		init_fw_feat_flags(fw_features);
> +		of_node_put(fw_features);
>  
> -		of_node_put(np);
> +		if (security_ftr_enabled(SEC_FTR_L1D_FLUSH_TRIG2))
> +			type = L1D_FLUSH_MTTRIG;
>  
> -		np = of_get_child_by_name(fw_features, "inst-l1d-flush-ori30,30,0");
> -		if (np && of_property_read_bool(np, "enabled"))
> +		if (security_ftr_enabled(SEC_FTR_L1D_FLUSH_ORI30))
>  			type = L1D_FLUSH_ORI;
> -
> -		of_node_put(np);
> -
> -		/* Enable unless firmware says NOT to */
> -		enable = 2;
> -		np = of_get_child_by_name(fw_features, "needs-l1d-flush-msr-hv-1-to-0");
> -		if (np && of_property_read_bool(np, "disabled"))
> -			enable--;
> -
> -		of_node_put(np);
> -
> -		np = of_get_child_by_name(fw_features, "needs-l1d-flush-msr-pr-0-to-1");
> -		if (np && of_property_read_bool(np, "disabled"))
> -			enable--;
> -
> -		of_node_put(np);
> -		of_node_put(fw_features);
>  	}
>  
> -	setup_rfi_flush(type, enable > 0);
> +	enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) && \
> +		 (security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR)   || \
> +		  security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV));
> +
> +	setup_rfi_flush(type, enable);
> +	setup_count_cache_flush();
>  }
>  
>  static void __init pnv_setup_arch(void)
> @@ -91,6 +137,7 @@ static void __init pnv_setup_arch(void)
>  	set_arch_panic_timeout(10, ARCH_PANIC_TIMEOUT);
>  
>  	pnv_setup_rfi_flush();
> +	setup_stf_barrier();
>  
>  	/* Initialize SMP */
>  	pnv_smp_init();
> diff --git a/arch/powerpc/platforms/pseries/mobility.c b/arch/powerpc/platforms/pseries/mobility.c
> index 8dd0c8edefd6..c773396d0969 100644
> --- a/arch/powerpc/platforms/pseries/mobility.c
> +++ b/arch/powerpc/platforms/pseries/mobility.c
> @@ -314,6 +314,9 @@ void post_mobility_fixup(void)
>  		printk(KERN_ERR "Post-mobility device tree update "
>  			"failed: %d\n", rc);
>  
> +	/* Possibly switch to a new RFI flush type */
> +	pseries_setup_rfi_flush();
> +
>  	return;
>  }
>  
> diff --git a/arch/powerpc/platforms/pseries/pseries.h b/arch/powerpc/platforms/pseries/pseries.h
> index 8411c27293e4..e7d80797384d 100644
> --- a/arch/powerpc/platforms/pseries/pseries.h
> +++ b/arch/powerpc/platforms/pseries/pseries.h
> @@ -81,4 +81,6 @@ extern struct pci_controller_ops pseries_pci_controller_ops;
>  
>  unsigned long pseries_memory_block_size(void);
>  
> +void pseries_setup_rfi_flush(void);
> +
>  #endif /* _PSERIES_PSERIES_H */
> diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
> index dd2545fc9947..9cc976ff7fec 100644
> --- a/arch/powerpc/platforms/pseries/setup.c
> +++ b/arch/powerpc/platforms/pseries/setup.c
> @@ -67,6 +67,7 @@
>  #include <asm/eeh.h>
>  #include <asm/reg.h>
>  #include <asm/plpar_wrappers.h>
> +#include <asm/security_features.h>
>  
>  #include "pseries.h"
>  
> @@ -499,37 +500,87 @@ static void __init find_and_init_phbs(void)
>  	of_pci_check_probe_only();
>  }
>  
> -static void pseries_setup_rfi_flush(void)
> +static void init_cpu_char_feature_flags(struct h_cpu_char_result *result)
> +{
> +	/*
> +	 * The features below are disabled by default, so we instead look to see
> +	 * if firmware has *enabled* them, and set them if so.
> +	 */
> +	if (result->character & H_CPU_CHAR_SPEC_BAR_ORI31)
> +		security_ftr_set(SEC_FTR_SPEC_BAR_ORI31);
> +
> +	if (result->character & H_CPU_CHAR_BCCTRL_SERIALISED)
> +		security_ftr_set(SEC_FTR_BCCTRL_SERIALISED);
> +
> +	if (result->character & H_CPU_CHAR_L1D_FLUSH_ORI30)
> +		security_ftr_set(SEC_FTR_L1D_FLUSH_ORI30);
> +
> +	if (result->character & H_CPU_CHAR_L1D_FLUSH_TRIG2)
> +		security_ftr_set(SEC_FTR_L1D_FLUSH_TRIG2);
> +
> +	if (result->character & H_CPU_CHAR_L1D_THREAD_PRIV)
> +		security_ftr_set(SEC_FTR_L1D_THREAD_PRIV);
> +
> +	if (result->character & H_CPU_CHAR_COUNT_CACHE_DISABLED)
> +		security_ftr_set(SEC_FTR_COUNT_CACHE_DISABLED);
> +
> +	if (result->character & H_CPU_CHAR_BCCTR_FLUSH_ASSIST)
> +		security_ftr_set(SEC_FTR_BCCTR_FLUSH_ASSIST);
> +
> +	if (result->behaviour & H_CPU_BEHAV_FLUSH_COUNT_CACHE)
> +		security_ftr_set(SEC_FTR_FLUSH_COUNT_CACHE);
> +
> +	/*
> +	 * The features below are enabled by default, so we instead look to see
> +	 * if firmware has *disabled* them, and clear them if so.
> +	 */
> +	if (!(result->behaviour & H_CPU_BEHAV_FAVOUR_SECURITY))
> +		security_ftr_clear(SEC_FTR_FAVOUR_SECURITY);
> +
> +	if (!(result->behaviour & H_CPU_BEHAV_L1D_FLUSH_PR))
> +		security_ftr_clear(SEC_FTR_L1D_FLUSH_PR);
> +
> +	if (!(result->behaviour & H_CPU_BEHAV_BNDS_CHK_SPEC_BAR))
> +		security_ftr_clear(SEC_FTR_BNDS_CHK_SPEC_BAR);
> +}
> +
> +void pseries_setup_rfi_flush(void)
>  {
>  	struct h_cpu_char_result result;
>  	enum l1d_flush_type types;
>  	bool enable;
>  	long rc;
>  
> -	/* Enable by default */
> -	enable = true;
> +	/*
> +	 * Set features to the defaults assumed by init_cpu_char_feature_flags()
> +	 * so it can set/clear again any features that might have changed after
> +	 * migration, and in case the hypercall fails and it is not even called.
> +	 */
> +	powerpc_security_features = SEC_FTR_DEFAULT;
>  
>  	rc = plpar_get_cpu_characteristics(&result);
> -	if (rc == H_SUCCESS) {
> -		types = L1D_FLUSH_NONE;
> +	if (rc == H_SUCCESS)
> +		init_cpu_char_feature_flags(&result);
>  
> -		if (result.character & H_CPU_CHAR_L1D_FLUSH_TRIG2)
> -			types |= L1D_FLUSH_MTTRIG;
> -		if (result.character & H_CPU_CHAR_L1D_FLUSH_ORI30)
> -			types |= L1D_FLUSH_ORI;
> +	/*
> +	 * We're the guest so this doesn't apply to us, clear it to simplify
> +	 * handling of it elsewhere.
> +	 */
> +	security_ftr_clear(SEC_FTR_L1D_FLUSH_HV);
>  
> -		/* Use fallback if nothing set in hcall */
> -		if (types == L1D_FLUSH_NONE)
> -			types = L1D_FLUSH_FALLBACK;
> +	types = L1D_FLUSH_FALLBACK;
>  
> -		if (!(result.behaviour & H_CPU_BEHAV_L1D_FLUSH_PR))
> -			enable = false;
> -	} else {
> -		/* Default to fallback if case hcall is not available */
> -		types = L1D_FLUSH_FALLBACK;
> -	}
> +	if (security_ftr_enabled(SEC_FTR_L1D_FLUSH_TRIG2))
> +		types |= L1D_FLUSH_MTTRIG;
> +
> +	if (security_ftr_enabled(SEC_FTR_L1D_FLUSH_ORI30))
> +		types |= L1D_FLUSH_ORI;
> +
> +	enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) && \
> +		 security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR);
>  
>  	setup_rfi_flush(types, enable);
> +	setup_count_cache_flush();
>  }
>  
>  static void __init pSeries_setup_arch(void)
> @@ -549,6 +600,7 @@ static void __init pSeries_setup_arch(void)
>  	fwnmi_init();
>  
>  	pseries_setup_rfi_flush();
> +	setup_stf_barrier();
>  
>  	/* By default, only probe PCI (can be overridden by rtas_pci) */
>  	pci_add_flags(PCI_PROBE_ONLY);
> diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
> index 786bf01691c9..83619ebede93 100644
> --- a/arch/powerpc/xmon/xmon.c
> +++ b/arch/powerpc/xmon/xmon.c
> @@ -2144,6 +2144,8 @@ static void dump_one_paca(int cpu)
>  	DUMP(p, slb_cache_ptr, "x");
>  	for (i = 0; i < SLB_CACHE_ENTRIES; i++)
>  		printf(" slb_cache[%d]:        = 0x%016lx\n", i, p->slb_cache[i]);
> +
> +	DUMP(p, rfi_flush_fallback_area, "px");
>  #endif
>  	DUMP(p, dscr_default, "llx");
>  #ifdef CONFIG_PPC_BOOK3E
> -- 
> 2.20.1



* Re: [PATCH stable v4.4 00/52] powerpc spectre backports for 4.4
@ 2019-04-22 15:32   ` Diana Madalina Craciun
  0 siblings, 0 replies; 180+ messages in thread
From: Diana Madalina Craciun @ 2019-04-22 15:32 UTC (permalink / raw)
  To: Michael Ellerman, stable, gregkh; +Cc: linuxppc-dev, msuchanek, npiggin

Hi Michael,

There are some missing NXP Spectre v2 patches. I can send them
separately if the series is accepted. I have merged them, but I did not
test them; I was sick today and unable to do so.

Thanks,
Diana


On 4/21/2019 5:21 PM, Michael Ellerman wrote:
>
> Hi Greg/Sasha,
>
> Please queue up these powerpc patches for 4.4 if you have no objections.
>
> cheers
>
>
> Christophe Leroy (1):
>   powerpc/fsl: Fix the flush of branch predictor.
>
> Diana Craciun (10):
>   powerpc/64: Disable the speculation barrier from the command line
>   powerpc/64: Make stf barrier PPC_BOOK3S_64 specific.
>   powerpc/64: Make meltdown reporting Book3S 64 specific
>   powerpc/fsl: Add barrier_nospec implementation for NXP PowerPC Book3E
>   powerpc/fsl: Add infrastructure to fixup branch predictor flush
>   powerpc/fsl: Add macro to flush the branch predictor
>   powerpc/fsl: Fix spectre_v2 mitigations reporting
>   powerpc/fsl: Add nospectre_v2 command line argument
>   powerpc/fsl: Flush the branch predictor at each kernel entry (64bit)
>   powerpc/fsl: Update Spectre v2 reporting
>
> Mauricio Faria de Oliveira (4):
>   powerpc/rfi-flush: Differentiate enabled and patched flush types
>   powerpc/pseries: Fix clearing of security feature flags
>   powerpc: Move default security feature flags
>   powerpc/pseries: Restore default security feature flags on setup
>
> Michael Ellerman (29):
>   powerpc/xmon: Add RFI flush related fields to paca dump
>   powerpc/pseries: Support firmware disable of RFI flush
>   powerpc/powernv: Support firmware disable of RFI flush
>   powerpc/rfi-flush: Move the logic to avoid a redo into the debugfs
>     code
>   powerpc/rfi-flush: Make it possible to call setup_rfi_flush() again
>   powerpc/rfi-flush: Always enable fallback flush on pseries
>   powerpc/pseries: Add new H_GET_CPU_CHARACTERISTICS flags
>   powerpc/rfi-flush: Call setup_rfi_flush() after LPM migration
>   powerpc: Add security feature flags for Spectre/Meltdown
>   powerpc/pseries: Set or clear security feature flags
>   powerpc/powernv: Set or clear security feature flags
>   powerpc/64s: Move cpu_show_meltdown()
>   powerpc/64s: Enhance the information in cpu_show_meltdown()
>   powerpc/powernv: Use the security flags in pnv_setup_rfi_flush()
>   powerpc/pseries: Use the security flags in pseries_setup_rfi_flush()
>   powerpc/64s: Wire up cpu_show_spectre_v1()
>   powerpc/64s: Wire up cpu_show_spectre_v2()
>   powerpc/64s: Fix section mismatch warnings from setup_rfi_flush()
>   powerpc/64: Use barrier_nospec in syscall entry
>   powerpc: Use barrier_nospec in copy_from_user()
>   powerpc64s: Show ori31 availability in spectre_v1 sysfs file not v2
>   powerpc/64: Add CONFIG_PPC_BARRIER_NOSPEC
>   powerpc/64: Call setup_barrier_nospec() from setup_arch()
>   powerpc/asm: Add a patch_site macro & helpers for patching
>     instructions
>   powerpc/64s: Add new security feature flags for count cache flush
>   powerpc/64s: Add support for software count cache flush
>   powerpc/pseries: Query hypervisor for count cache flush settings
>   powerpc/powernv: Query firmware for count cache flush settings
>   powerpc/security: Fix spectre_v2 reporting
>
> Michael Neuling (1):
>   powerpc: Avoid code patching freed init sections
>
> Michal Suchanek (5):
>   powerpc/64s: Add barrier_nospec
>   powerpc/64s: Add support for ori barrier_nospec patching
>   powerpc/64s: Patch barrier_nospec in modules
>   powerpc/64s: Enable barrier_nospec based on firmware settings
>   powerpc/64s: Enhance the information in cpu_show_spectre_v1()
>
> Nicholas Piggin (2):
>   powerpc/64s: Improve RFI L1-D cache flush fallback
>   powerpc/64s: Add support for a store forwarding barrier at kernel
>     entry/exit
>
>  arch/powerpc/Kconfig                         |   7 +-
>  arch/powerpc/include/asm/asm-prototypes.h    |  21 +
>  arch/powerpc/include/asm/barrier.h           |  21 +
>  arch/powerpc/include/asm/code-patching-asm.h |  18 +
>  arch/powerpc/include/asm/code-patching.h     |   2 +
>  arch/powerpc/include/asm/exception-64s.h     |  35 ++
>  arch/powerpc/include/asm/feature-fixups.h    |  40 ++
>  arch/powerpc/include/asm/hvcall.h            |   5 +
>  arch/powerpc/include/asm/paca.h              |   3 +-
>  arch/powerpc/include/asm/ppc-opcode.h        |   1 +
>  arch/powerpc/include/asm/ppc_asm.h           |  11 +
>  arch/powerpc/include/asm/security_features.h |  92 ++++
>  arch/powerpc/include/asm/setup.h             |  23 +-
>  arch/powerpc/include/asm/uaccess.h           |  18 +-
>  arch/powerpc/kernel/Makefile                 |   1 +
>  arch/powerpc/kernel/asm-offsets.c            |   3 +-
>  arch/powerpc/kernel/entry_64.S               |  69 +++
>  arch/powerpc/kernel/exceptions-64e.S         |  27 +-
>  arch/powerpc/kernel/exceptions-64s.S         |  98 +++--
>  arch/powerpc/kernel/module.c                 |  10 +-
>  arch/powerpc/kernel/security.c               | 433 +++++++++++++++++++
>  arch/powerpc/kernel/setup_32.c               |   2 +
>  arch/powerpc/kernel/setup_64.c               |  50 +--
>  arch/powerpc/kernel/vmlinux.lds.S            |  33 +-
>  arch/powerpc/lib/code-patching.c             |  29 ++
>  arch/powerpc/lib/feature-fixups.c            | 218 +++++++++-
>  arch/powerpc/mm/mem.c                        |   2 +
>  arch/powerpc/mm/tlb_low_64e.S                |   7 +
>  arch/powerpc/platforms/powernv/setup.c       |  99 +++--
>  arch/powerpc/platforms/pseries/mobility.c    |   3 +
>  arch/powerpc/platforms/pseries/pseries.h     |   2 +
>  arch/powerpc/platforms/pseries/setup.c       |  88 +++-
>  arch/powerpc/xmon/xmon.c                     |   2 +
>  33 files changed, 1345 insertions(+), 128 deletions(-)
>  create mode 100644 arch/powerpc/include/asm/asm-prototypes.h
>  create mode 100644 arch/powerpc/include/asm/code-patching-asm.h
>  create mode 100644 arch/powerpc/include/asm/security_features.h
>  create mode 100644 arch/powerpc/kernel/security.c
>
> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> index 58a1fa979655..01b6c00a7060 100644
> --- a/arch/powerpc/Kconfig
> +++ b/arch/powerpc/Kconfig
> @@ -136,7 +136,7 @@ config PPC
>  	select GENERIC_SMP_IDLE_THREAD
>  	select GENERIC_CMOS_UPDATE
>  	select GENERIC_TIME_VSYSCALL_OLD
> -	select GENERIC_CPU_VULNERABILITIES	if PPC_BOOK3S_64
> +	select GENERIC_CPU_VULNERABILITIES	if PPC_BARRIER_NOSPEC
>  	select GENERIC_CLOCKEVENTS
>  	select GENERIC_CLOCKEVENTS_BROADCAST if SMP
>  	select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
> @@ -162,6 +162,11 @@ config PPC
>  	select ARCH_HAS_DMA_SET_COHERENT_MASK
>  	select HAVE_ARCH_SECCOMP_FILTER
>  
> +config PPC_BARRIER_NOSPEC
> +    bool
> +    default y
> +    depends on PPC_BOOK3S_64 || PPC_FSL_BOOK3E
> +
>  config GENERIC_CSUM
>  	def_bool CPU_LITTLE_ENDIAN
>  
> diff --git a/arch/powerpc/include/asm/asm-prototypes.h b/arch/powerpc/include/asm/asm-prototypes.h
> new file mode 100644
> index 000000000000..8944c55591cf
> --- /dev/null
> +++ b/arch/powerpc/include/asm/asm-prototypes.h
> @@ -0,0 +1,21 @@
> +#ifndef _ASM_POWERPC_ASM_PROTOTYPES_H
> +#define _ASM_POWERPC_ASM_PROTOTYPES_H
> +/*
> + * This file is for prototypes of C functions that are only called
> + * from asm, and any associated variables.
> + *
> + * Copyright 2016, Daniel Axtens, IBM Corporation.
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public License
> + * as published by the Free Software Foundation; either version 2
> + * of the License, or (at your option) any later version.
> + */
> +
> +/* Patch sites */
> +extern s32 patch__call_flush_count_cache;
> +extern s32 patch__flush_count_cache_return;
> +
> +extern long flush_count_cache;
> +
> +#endif /* _ASM_POWERPC_ASM_PROTOTYPES_H */
> diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
> index b9e16855a037..e7cb72cdb2ba 100644
> --- a/arch/powerpc/include/asm/barrier.h
> +++ b/arch/powerpc/include/asm/barrier.h
> @@ -92,4 +92,25 @@ do {									\
>  #define smp_mb__after_atomic()      smp_mb()
>  #define smp_mb__before_spinlock()   smp_mb()
>  
> +#ifdef CONFIG_PPC_BOOK3S_64
> +#define NOSPEC_BARRIER_SLOT   nop
> +#elif defined(CONFIG_PPC_FSL_BOOK3E)
> +#define NOSPEC_BARRIER_SLOT   nop; nop
> +#endif
> +
> +#ifdef CONFIG_PPC_BARRIER_NOSPEC
> +/*
> + * Prevent execution of subsequent instructions until preceding branches have
> + * been fully resolved and are no longer executing speculatively.
> + */
> +#define barrier_nospec_asm NOSPEC_BARRIER_FIXUP_SECTION; NOSPEC_BARRIER_SLOT
> +
> +// This also acts as a compiler barrier due to the memory clobber.
> +#define barrier_nospec() asm (stringify_in_c(barrier_nospec_asm) ::: "memory")
> +
> +#else /* !CONFIG_PPC_BARRIER_NOSPEC */
> +#define barrier_nospec_asm
> +#define barrier_nospec()
> +#endif /* CONFIG_PPC_BARRIER_NOSPEC */
> +
>  #endif /* _ASM_POWERPC_BARRIER_H */
> diff --git a/arch/powerpc/include/asm/code-patching-asm.h b/arch/powerpc/include/asm/code-patching-asm.h
> new file mode 100644
> index 000000000000..ed7b1448493a
> --- /dev/null
> +++ b/arch/powerpc/include/asm/code-patching-asm.h
> @@ -0,0 +1,18 @@
> +/* SPDX-License-Identifier: GPL-2.0+ */
> +/*
> + * Copyright 2018, Michael Ellerman, IBM Corporation.
> + */
> +#ifndef _ASM_POWERPC_CODE_PATCHING_ASM_H
> +#define _ASM_POWERPC_CODE_PATCHING_ASM_H
> +
> +/* Define a "site" that can be patched */
> +.macro patch_site label name
> +	.pushsection ".rodata"
> +	.balign 4
> +	.global \name
> +\name:
> +	.4byte	\label - .
> +	.popsection
> +.endm
> +
> +#endif /* _ASM_POWERPC_CODE_PATCHING_ASM_H */
> diff --git a/arch/powerpc/include/asm/code-patching.h b/arch/powerpc/include/asm/code-patching.h
> index 840a5509b3f1..a734b4b34d26 100644
> --- a/arch/powerpc/include/asm/code-patching.h
> +++ b/arch/powerpc/include/asm/code-patching.h
> @@ -28,6 +28,8 @@ unsigned int create_cond_branch(const unsigned int *addr,
>  				unsigned long target, int flags);
>  int patch_branch(unsigned int *addr, unsigned long target, int flags);
>  int patch_instruction(unsigned int *addr, unsigned int instr);
> +int patch_instruction_site(s32 *addr, unsigned int instr);
> +int patch_branch_site(s32 *site, unsigned long target, int flags);
>  
>  int instr_is_relative_branch(unsigned int instr);
>  int instr_is_branch_to_addr(const unsigned int *instr, unsigned long addr);
> diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
> index 9bddbec441b8..3ed536bec462 100644
> --- a/arch/powerpc/include/asm/exception-64s.h
> +++ b/arch/powerpc/include/asm/exception-64s.h
> @@ -50,6 +50,27 @@
>  #define EX_PPR		88	/* SMT thread status register (priority) */
>  #define EX_CTR		96
>  
> +#define STF_ENTRY_BARRIER_SLOT						\
> +	STF_ENTRY_BARRIER_FIXUP_SECTION;				\
> +	nop;								\
> +	nop;								\
> +	nop
> +
> +#define STF_EXIT_BARRIER_SLOT						\
> +	STF_EXIT_BARRIER_FIXUP_SECTION;					\
> +	nop;								\
> +	nop;								\
> +	nop;								\
> +	nop;								\
> +	nop;								\
> +	nop
> +
> +/*
> + * r10 must be free to use, r13 must be paca
> + */
> +#define INTERRUPT_TO_KERNEL						\
> +	STF_ENTRY_BARRIER_SLOT
> +
>  /*
>   * Macros for annotating the expected destination of (h)rfid
>   *
> @@ -66,16 +87,19 @@
>  	rfid
>  
>  #define RFI_TO_USER							\
> +	STF_EXIT_BARRIER_SLOT;						\
>  	RFI_FLUSH_SLOT;							\
>  	rfid;								\
>  	b	rfi_flush_fallback
>  
>  #define RFI_TO_USER_OR_KERNEL						\
> +	STF_EXIT_BARRIER_SLOT;						\
>  	RFI_FLUSH_SLOT;							\
>  	rfid;								\
>  	b	rfi_flush_fallback
>  
>  #define RFI_TO_GUEST							\
> +	STF_EXIT_BARRIER_SLOT;						\
>  	RFI_FLUSH_SLOT;							\
>  	rfid;								\
>  	b	rfi_flush_fallback
> @@ -84,21 +108,25 @@
>  	hrfid
>  
>  #define HRFI_TO_USER							\
> +	STF_EXIT_BARRIER_SLOT;						\
>  	RFI_FLUSH_SLOT;							\
>  	hrfid;								\
>  	b	hrfi_flush_fallback
>  
>  #define HRFI_TO_USER_OR_KERNEL						\
> +	STF_EXIT_BARRIER_SLOT;						\
>  	RFI_FLUSH_SLOT;							\
>  	hrfid;								\
>  	b	hrfi_flush_fallback
>  
>  #define HRFI_TO_GUEST							\
> +	STF_EXIT_BARRIER_SLOT;						\
>  	RFI_FLUSH_SLOT;							\
>  	hrfid;								\
>  	b	hrfi_flush_fallback
>  
>  #define HRFI_TO_UNKNOWN							\
> +	STF_EXIT_BARRIER_SLOT;						\
>  	RFI_FLUSH_SLOT;							\
>  	hrfid;								\
>  	b	hrfi_flush_fallback
> @@ -226,6 +254,7 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
>  #define __EXCEPTION_PROLOG_1(area, extra, vec)				\
>  	OPT_SAVE_REG_TO_PACA(area+EX_PPR, r9, CPU_FTR_HAS_PPR);		\
>  	OPT_SAVE_REG_TO_PACA(area+EX_CFAR, r10, CPU_FTR_CFAR);		\
> +	INTERRUPT_TO_KERNEL;						\
>  	SAVE_CTR(r10, area);						\
>  	mfcr	r9;							\
>  	extra(vec);							\
> @@ -512,6 +541,12 @@ label##_relon_hv:						\
>  #define _MASKABLE_EXCEPTION_PSERIES(vec, label, h, extra)		\
>  	__MASKABLE_EXCEPTION_PSERIES(vec, label, h, extra)
>  
> +#define MASKABLE_EXCEPTION_OOL(vec, label)				\
> +	.globl label##_ool;						\
> +label##_ool:								\
> +	EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_TEST_PR, vec);		\
> +	EXCEPTION_PROLOG_PSERIES_1(label##_common, EXC_STD);
> +
>  #define MASKABLE_EXCEPTION_PSERIES(loc, vec, label)			\
>  	. = loc;							\
>  	.globl label##_pSeries;						\
> diff --git a/arch/powerpc/include/asm/feature-fixups.h b/arch/powerpc/include/asm/feature-fixups.h
> index 7068bafbb2d6..145a37ab2d3e 100644
> --- a/arch/powerpc/include/asm/feature-fixups.h
> +++ b/arch/powerpc/include/asm/feature-fixups.h
> @@ -184,6 +184,22 @@ label##3:					       	\
>  	FTR_ENTRY_OFFSET label##1b-label##3b;		\
>  	.popsection;
>  
> +#define STF_ENTRY_BARRIER_FIXUP_SECTION			\
> +953:							\
> +	.pushsection __stf_entry_barrier_fixup,"a";	\
> +	.align 2;					\
> +954:							\
> +	FTR_ENTRY_OFFSET 953b-954b;			\
> +	.popsection;
> +
> +#define STF_EXIT_BARRIER_FIXUP_SECTION			\
> +955:							\
> +	.pushsection __stf_exit_barrier_fixup,"a";	\
> +	.align 2;					\
> +956:							\
> +	FTR_ENTRY_OFFSET 955b-956b;			\
> +	.popsection;
> +
>  #define RFI_FLUSH_FIXUP_SECTION				\
>  951:							\
>  	.pushsection __rfi_flush_fixup,"a";		\
> @@ -192,10 +208,34 @@ label##3:					       	\
>  	FTR_ENTRY_OFFSET 951b-952b;			\
>  	.popsection;
>  
> +#define NOSPEC_BARRIER_FIXUP_SECTION			\
> +953:							\
> +	.pushsection __barrier_nospec_fixup,"a";	\
> +	.align 2;					\
> +954:							\
> +	FTR_ENTRY_OFFSET 953b-954b;			\
> +	.popsection;
> +
> +#define START_BTB_FLUSH_SECTION			\
> +955:							\
> +
> +#define END_BTB_FLUSH_SECTION			\
> +956:							\
> +	.pushsection __btb_flush_fixup,"a";	\
> +	.align 2;							\
> +957:						\
> +	FTR_ENTRY_OFFSET 955b-957b;			\
> +	FTR_ENTRY_OFFSET 956b-957b;			\
> +	.popsection;
>  
>  #ifndef __ASSEMBLY__
>  
> +extern long stf_barrier_fallback;
> +extern long __start___stf_entry_barrier_fixup, __stop___stf_entry_barrier_fixup;
> +extern long __start___stf_exit_barrier_fixup, __stop___stf_exit_barrier_fixup;
>  extern long __start___rfi_flush_fixup, __stop___rfi_flush_fixup;
> +extern long __start___barrier_nospec_fixup, __stop___barrier_nospec_fixup;
> +extern long __start__btb_flush_fixup, __stop__btb_flush_fixup;
>  
>  #endif
>  
> diff --git a/arch/powerpc/include/asm/hvcall.h b/arch/powerpc/include/asm/hvcall.h
> index 449bbb87c257..b57db9d09db9 100644
> --- a/arch/powerpc/include/asm/hvcall.h
> +++ b/arch/powerpc/include/asm/hvcall.h
> @@ -292,10 +292,15 @@
>  #define H_CPU_CHAR_L1D_FLUSH_ORI30	(1ull << 61) // IBM bit 2
>  #define H_CPU_CHAR_L1D_FLUSH_TRIG2	(1ull << 60) // IBM bit 3
>  #define H_CPU_CHAR_L1D_THREAD_PRIV	(1ull << 59) // IBM bit 4
> +#define H_CPU_CHAR_BRANCH_HINTS_HONORED	(1ull << 58) // IBM bit 5
> +#define H_CPU_CHAR_THREAD_RECONFIG_CTRL	(1ull << 57) // IBM bit 6
> +#define H_CPU_CHAR_COUNT_CACHE_DISABLED	(1ull << 56) // IBM bit 7
> +#define H_CPU_CHAR_BCCTR_FLUSH_ASSIST	(1ull << 54) // IBM bit 9
>  
>  #define H_CPU_BEHAV_FAVOUR_SECURITY	(1ull << 63) // IBM bit 0
>  #define H_CPU_BEHAV_L1D_FLUSH_PR	(1ull << 62) // IBM bit 1
>  #define H_CPU_BEHAV_BNDS_CHK_SPEC_BAR	(1ull << 61) // IBM bit 2
> +#define H_CPU_BEHAV_FLUSH_COUNT_CACHE	(1ull << 58) // IBM bit 5
>  
>  #ifndef __ASSEMBLY__
>  #include <linux/types.h>
> diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h
> index 45e2aefece16..08e5df3395fa 100644
> --- a/arch/powerpc/include/asm/paca.h
> +++ b/arch/powerpc/include/asm/paca.h
> @@ -199,8 +199,7 @@ struct paca_struct {
>  	 */
>  	u64 exrfi[13] __aligned(0x80);
>  	void *rfi_flush_fallback_area;
> -	u64 l1d_flush_congruence;
> -	u64 l1d_flush_sets;
> +	u64 l1d_flush_size;
>  #endif
>  };
>  
> diff --git a/arch/powerpc/include/asm/ppc-opcode.h b/arch/powerpc/include/asm/ppc-opcode.h
> index 7ab04fc59e24..faf1bb045dee 100644
> --- a/arch/powerpc/include/asm/ppc-opcode.h
> +++ b/arch/powerpc/include/asm/ppc-opcode.h
> @@ -147,6 +147,7 @@
>  #define PPC_INST_LWSYNC			0x7c2004ac
>  #define PPC_INST_SYNC			0x7c0004ac
>  #define PPC_INST_SYNC_MASK		0xfc0007fe
> +#define PPC_INST_ISYNC			0x4c00012c
>  #define PPC_INST_LXVD2X			0x7c000698
>  #define PPC_INST_MCRXR			0x7c000400
>  #define PPC_INST_MCRXR_MASK		0xfc0007fe
> diff --git a/arch/powerpc/include/asm/ppc_asm.h b/arch/powerpc/include/asm/ppc_asm.h
> index 160bb2311bbb..d219816b3e19 100644
> --- a/arch/powerpc/include/asm/ppc_asm.h
> +++ b/arch/powerpc/include/asm/ppc_asm.h
> @@ -821,4 +821,15 @@ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,945)
>  	.long 0x2400004c  /* rfid				*/
>  #endif /* !CONFIG_PPC_BOOK3E */
>  #endif /*  __ASSEMBLY__ */
> +
> +#ifdef CONFIG_PPC_FSL_BOOK3E
> +#define BTB_FLUSH(reg)			\
> +	lis reg,BUCSR_INIT@h;		\
> +	ori reg,reg,BUCSR_INIT@l;	\
> +	mtspr SPRN_BUCSR,reg;		\
> +	isync;
> +#else
> +#define BTB_FLUSH(reg)
> +#endif /* CONFIG_PPC_FSL_BOOK3E */
> +
>  #endif /* _ASM_POWERPC_PPC_ASM_H */
> diff --git a/arch/powerpc/include/asm/security_features.h b/arch/powerpc/include/asm/security_features.h
> new file mode 100644
> index 000000000000..759597bf0fd8
> --- /dev/null
> +++ b/arch/powerpc/include/asm/security_features.h
> @@ -0,0 +1,92 @@
> +/* SPDX-License-Identifier: GPL-2.0+ */
> +/*
> + * Security related feature bit definitions.
> + *
> + * Copyright 2018, Michael Ellerman, IBM Corporation.
> + */
> +
> +#ifndef _ASM_POWERPC_SECURITY_FEATURES_H
> +#define _ASM_POWERPC_SECURITY_FEATURES_H
> +
> +
> +extern unsigned long powerpc_security_features;
> +extern bool rfi_flush;
> +
> +/* These are bit flags */
> +enum stf_barrier_type {
> +	STF_BARRIER_NONE	= 0x1,
> +	STF_BARRIER_FALLBACK	= 0x2,
> +	STF_BARRIER_EIEIO	= 0x4,
> +	STF_BARRIER_SYNC_ORI	= 0x8,
> +};
> +
> +void setup_stf_barrier(void);
> +void do_stf_barrier_fixups(enum stf_barrier_type types);
> +void setup_count_cache_flush(void);
> +
> +static inline void security_ftr_set(unsigned long feature)
> +{
> +	powerpc_security_features |= feature;
> +}
> +
> +static inline void security_ftr_clear(unsigned long feature)
> +{
> +	powerpc_security_features &= ~feature;
> +}
> +
> +static inline bool security_ftr_enabled(unsigned long feature)
> +{
> +	return !!(powerpc_security_features & feature);
> +}
> +
> +
> +// Features indicating support for Spectre/Meltdown mitigations
> +
> +// The L1-D cache can be flushed with ori r30,r30,0
> +#define SEC_FTR_L1D_FLUSH_ORI30		0x0000000000000001ull
> +
> +// The L1-D cache can be flushed with mtspr 882,r0 (aka SPRN_TRIG2)
> +#define SEC_FTR_L1D_FLUSH_TRIG2		0x0000000000000002ull
> +
> +// ori r31,r31,0 acts as a speculation barrier
> +#define SEC_FTR_SPEC_BAR_ORI31		0x0000000000000004ull
> +
> +// Speculation past bctr is disabled
> +#define SEC_FTR_BCCTRL_SERIALISED	0x0000000000000008ull
> +
> +// Entries in L1-D are private to a SMT thread
> +#define SEC_FTR_L1D_THREAD_PRIV		0x0000000000000010ull
> +
> +// Indirect branch prediction cache disabled
> +#define SEC_FTR_COUNT_CACHE_DISABLED	0x0000000000000020ull
> +
> +// bcctr 2,0,0 triggers a hardware assisted count cache flush
> +#define SEC_FTR_BCCTR_FLUSH_ASSIST	0x0000000000000800ull
> +
> +
> +// Features indicating need for Spectre/Meltdown mitigations
> +
> +// The L1-D cache should be flushed on MSR[HV] 1->0 transition (hypervisor to guest)
> +#define SEC_FTR_L1D_FLUSH_HV		0x0000000000000040ull
> +
> +// The L1-D cache should be flushed on MSR[PR] 0->1 transition (kernel to userspace)
> +#define SEC_FTR_L1D_FLUSH_PR		0x0000000000000080ull
> +
> +// A speculation barrier should be used for bounds checks (Spectre variant 1)
> +#define SEC_FTR_BNDS_CHK_SPEC_BAR	0x0000000000000100ull
> +
> +// Firmware configuration indicates user favours security over performance
> +#define SEC_FTR_FAVOUR_SECURITY		0x0000000000000200ull
> +
> +// Software required to flush count cache on context switch
> +#define SEC_FTR_FLUSH_COUNT_CACHE	0x0000000000000400ull
> +
> +
> +// Features enabled by default
> +#define SEC_FTR_DEFAULT \
> +	(SEC_FTR_L1D_FLUSH_HV | \
> +	 SEC_FTR_L1D_FLUSH_PR | \
> +	 SEC_FTR_BNDS_CHK_SPEC_BAR | \
> +	 SEC_FTR_FAVOUR_SECURITY)
> +
> +#endif /* _ASM_POWERPC_SECURITY_FEATURES_H */
> diff --git a/arch/powerpc/include/asm/setup.h b/arch/powerpc/include/asm/setup.h
> index 7916b56f2e60..d299479c770b 100644
> - --- a/arch/powerpc/include/asm/setup.h
> +++ b/arch/powerpc/include/asm/setup.h
> @@ -8,6 +8,7 @@ extern void ppc_printk_progress(char *s, unsigned short hex);
>  
>  extern unsigned int rtas_data;
>  extern unsigned long long memory_limit;
> +extern bool init_mem_is_free;
>  extern unsigned long klimit;
>  extern void *zalloc_maybe_bootmem(size_t size, gfp_t mask);
>  
> @@ -36,8 +37,28 @@ enum l1d_flush_type {
>  	L1D_FLUSH_MTTRIG	= 0x8,
>  };
>  
> - -void __init setup_rfi_flush(enum l1d_flush_type, bool enable);
> +void setup_rfi_flush(enum l1d_flush_type, bool enable);
>  void do_rfi_flush_fixups(enum l1d_flush_type types);
> +#ifdef CONFIG_PPC_BARRIER_NOSPEC
> +void setup_barrier_nospec(void);
> +#else
> +static inline void setup_barrier_nospec(void) { };
> +#endif
> +void do_barrier_nospec_fixups(bool enable);
> +extern bool barrier_nospec_enabled;
> +
> +#ifdef CONFIG_PPC_BARRIER_NOSPEC
> +void do_barrier_nospec_fixups_range(bool enable, void *start, void *end);
> +#else
> +static inline void do_barrier_nospec_fixups_range(bool enable, void *start, void *end) { };
> +#endif
> +
> +#ifdef CONFIG_PPC_FSL_BOOK3E
> +void setup_spectre_v2(void);
> +#else
> +static inline void setup_spectre_v2(void) {};
> +#endif
> +void do_btb_flush_fixups(void);
>  
>  #endif /* !__ASSEMBLY__ */
>  
> diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
> index 05f1389228d2..e51ce5a0e221 100644
> - --- a/arch/powerpc/include/asm/uaccess.h
> +++ b/arch/powerpc/include/asm/uaccess.h
> @@ -269,6 +269,7 @@ do {								\
>  	__chk_user_ptr(ptr);					\
>  	if (!is_kernel_addr((unsigned long)__gu_addr))		\
>  		might_fault();					\
> +	barrier_nospec();					\
>  	__get_user_size(__gu_val, __gu_addr, (size), __gu_err);	\
>  	(x) = (__typeof__(*(ptr)))__gu_val;			\
>  	__gu_err;						\
> @@ -283,6 +284,7 @@ do {								\
>  	__chk_user_ptr(ptr);					\
>  	if (!is_kernel_addr((unsigned long)__gu_addr))		\
>  		might_fault();					\
> +	barrier_nospec();					\
>  	__get_user_size(__gu_val, __gu_addr, (size), __gu_err);	\
>  	(x) = (__force __typeof__(*(ptr)))__gu_val;			\
>  	__gu_err;						\
> @@ -295,8 +297,10 @@ do {								\
>  	unsigned long  __gu_val = 0;					\
>  	__typeof__(*(ptr)) __user *__gu_addr = (ptr);		\
>  	might_fault();							\
> - -	if (access_ok(VERIFY_READ, __gu_addr, (size)))			\
> +	if (access_ok(VERIFY_READ, __gu_addr, (size))) {		\
> +		barrier_nospec();					\
>  		__get_user_size(__gu_val, __gu_addr, (size), __gu_err);	\
> +	}								\
>  	(x) = (__force __typeof__(*(ptr)))__gu_val;				\
>  	__gu_err;							\
>  })
> @@ -307,6 +311,7 @@ do {								\
>  	unsigned long __gu_val;					\
>  	__typeof__(*(ptr)) __user *__gu_addr = (ptr);	\
>  	__chk_user_ptr(ptr);					\
> +	barrier_nospec();					\
>  	__get_user_size(__gu_val, __gu_addr, (size), __gu_err);	\
>  	(x) = (__force __typeof__(*(ptr)))__gu_val;			\
>  	__gu_err;						\
> @@ -323,8 +328,10 @@ extern unsigned long __copy_tofrom_user(void __user *to,
>  static inline unsigned long copy_from_user(void *to,
>  		const void __user *from, unsigned long n)
>  {
> - -	if (likely(access_ok(VERIFY_READ, from, n)))
> +	if (likely(access_ok(VERIFY_READ, from, n))) {
> +		barrier_nospec();
>  		return __copy_tofrom_user((__force void __user *)to, from, n);
> +	}
>  	memset(to, 0, n);
>  	return n;
>  }
> @@ -359,21 +366,27 @@ static inline unsigned long __copy_from_user_inatomic(void *to,
>  
>  		switch (n) {
>  		case 1:
> +			barrier_nospec();
>  			__get_user_size(*(u8 *)to, from, 1, ret);
>  			break;
>  		case 2:
> +			barrier_nospec();
>  			__get_user_size(*(u16 *)to, from, 2, ret);
>  			break;
>  		case 4:
> +			barrier_nospec();
>  			__get_user_size(*(u32 *)to, from, 4, ret);
>  			break;
>  		case 8:
> +			barrier_nospec();
>  			__get_user_size(*(u64 *)to, from, 8, ret);
>  			break;
>  		}
>  		if (ret == 0)
>  			return 0;
>  	}
> +
> +	barrier_nospec();
>  	return __copy_tofrom_user((__force void __user *)to, from, n);
>  }
>  
> @@ -400,6 +413,7 @@ static inline unsigned long __copy_to_user_inatomic(void __user *to,
>  		if (ret == 0)
>  			return 0;
>  	}
> +
>  	return __copy_tofrom_user(to, (__force const void __user *)from, n);
>  }
>  
> diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
> index ba336930d448..22ed3c32fca8 100644
> - --- a/arch/powerpc/kernel/Makefile
> +++ b/arch/powerpc/kernel/Makefile
> @@ -44,6 +44,7 @@ obj-$(CONFIG_PPC_BOOK3S_64)	+= cpu_setup_power.o
>  obj-$(CONFIG_PPC_BOOK3S_64)	+= mce.o mce_power.o
>  obj64-$(CONFIG_RELOCATABLE)	+= reloc_64.o
>  obj-$(CONFIG_PPC_BOOK3E_64)	+= exceptions-64e.o idle_book3e.o
> +obj-$(CONFIG_PPC_BARRIER_NOSPEC) += security.o
>  obj-$(CONFIG_PPC64)		+= vdso64/
>  obj-$(CONFIG_ALTIVEC)		+= vecemu.o
>  obj-$(CONFIG_PPC_970_NAP)	+= idle_power4.o
> diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
> index d92705e3a0c1..de3c29c51503 100644
> - --- a/arch/powerpc/kernel/asm-offsets.c
> +++ b/arch/powerpc/kernel/asm-offsets.c
> @@ -245,8 +245,7 @@ int main(void)
>  	DEFINE(PACA_IN_MCE, offsetof(struct paca_struct, in_mce));
>  	DEFINE(PACA_RFI_FLUSH_FALLBACK_AREA, offsetof(struct paca_struct, rfi_flush_fallback_area));
>  	DEFINE(PACA_EXRFI, offsetof(struct paca_struct, exrfi));
> - -	DEFINE(PACA_L1D_FLUSH_CONGRUENCE, offsetof(struct paca_struct, l1d_flush_congruence));
> - -	DEFINE(PACA_L1D_FLUSH_SETS, offsetof(struct paca_struct, l1d_flush_sets));
> +	DEFINE(PACA_L1D_FLUSH_SIZE, offsetof(struct paca_struct, l1d_flush_size));
>  #endif
>  	DEFINE(PACAHWCPUID, offsetof(struct paca_struct, hw_cpu_id));
>  	DEFINE(PACAKEXECSTATE, offsetof(struct paca_struct, kexec_state));
> diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
> index 59be96917369..6d36a4fb4acf 100644
> - --- a/arch/powerpc/kernel/entry_64.S
> +++ b/arch/powerpc/kernel/entry_64.S
> @@ -25,6 +25,7 @@
>  #include <asm/page.h>
>  #include <asm/mmu.h>
>  #include <asm/thread_info.h>
> +#include <asm/code-patching-asm.h>
>  #include <asm/ppc_asm.h>
>  #include <asm/asm-offsets.h>
>  #include <asm/cputable.h>
> @@ -36,6 +37,7 @@
>  #include <asm/hw_irq.h>
>  #include <asm/context_tracking.h>
>  #include <asm/tm.h>
> +#include <asm/barrier.h>
>  #ifdef CONFIG_PPC_BOOK3S
>  #include <asm/exception-64s.h>
>  #else
> @@ -75,6 +77,11 @@ END_FTR_SECTION_IFSET(CPU_FTR_TM)
>  	std	r0,GPR0(r1)
>  	std	r10,GPR1(r1)
>  	beq	2f			/* if from kernel mode */
> +#ifdef CONFIG_PPC_FSL_BOOK3E
> +START_BTB_FLUSH_SECTION
> +	BTB_FLUSH(r10)
> +END_BTB_FLUSH_SECTION
> +#endif
>  	ACCOUNT_CPU_USER_ENTRY(r10, r11)
>  2:	std	r2,GPR2(r1)
>  	std	r3,GPR3(r1)
> @@ -177,6 +184,15 @@ system_call:			/* label this so stack traces look sane */
>  	clrldi	r8,r8,32
>  15:
>  	slwi	r0,r0,4
> +
> +	barrier_nospec_asm
> +	/*
> +	 * Prevent the load of the handler below (based on the user-passed
> +	 * system call number) being speculatively executed until the test
> +	 * against NR_syscalls and branch to .Lsyscall_enosys above has
> +	 * committed.
> +	 */
> +
>  	ldx	r12,r11,r0	/* Fetch system call handler [ptr] */
>  	mtctr   r12
>  	bctrl			/* Call handler */
> @@ -440,6 +456,57 @@ _GLOBAL(ret_from_kernel_thread)
>  	li	r3,0
>  	b	.Lsyscall_exit
>  
> +#ifdef CONFIG_PPC_BOOK3S_64
> +
> +#define FLUSH_COUNT_CACHE	\
> +1:	nop;			\
> +	patch_site 1b, patch__call_flush_count_cache
> +
> +
> +#define BCCTR_FLUSH	.long 0x4c400420
> +
> +.macro nops number
> +	.rept \number
> +	nop
> +	.endr
> +.endm
> +
> +.balign 32
> +.global flush_count_cache
> +flush_count_cache:
> +	/* Save LR into r9 */
> +	mflr	r9
> +
> +	.rept 64
> +	bl	.+4
> +	.endr
> +	b	1f
> +	nops	6
> +
> +	.balign 32
> +	/* Restore LR */
> +1:	mtlr	r9
> +	li	r9,0x7fff
> +	mtctr	r9
> +
> +	BCCTR_FLUSH
> +
> +2:	nop
> +	patch_site 2b, patch__flush_count_cache_return
> +
> +	nops	3
> +
> +	.rept 278
> +	.balign 32
> +	BCCTR_FLUSH
> +	nops	7
> +	.endr
> +
> +	blr
> +#else
> +#define FLUSH_COUNT_CACHE
> +#endif /* CONFIG_PPC_BOOK3S_64 */
> +
>  /*
>   * This routine switches between two different tasks.  The process
>   * state of one is saved on its kernel stack.  Then the state
> @@ -503,6 +570,8 @@ BEGIN_FTR_SECTION
>  END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
>  #endif
>  
> +	FLUSH_COUNT_CACHE
> +
>  #ifdef CONFIG_SMP
>  	/* We need a sync somewhere here to make sure that if the
>  	 * previous task gets rescheduled on another CPU, it sees all
> diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
> index 5cc93f0b52ca..48ec841ea1bf 100644
> - --- a/arch/powerpc/kernel/exceptions-64e.S
> +++ b/arch/powerpc/kernel/exceptions-64e.S
> @@ -295,7 +295,8 @@ END_FTR_SECTION_IFSET(CPU_FTR_EMB_HV)
>  	andi.	r10,r11,MSR_PR;		/* save stack pointer */	    \
>  	beq	1f;			/* branch around if supervisor */   \
>  	ld	r1,PACAKSAVE(r13);	/* get kernel stack coming from usr */\
> - -1:	cmpdi	cr1,r1,0;		/* check if SP makes sense */	    \
> +1:	type##_BTB_FLUSH		\
> +	cmpdi	cr1,r1,0;		/* check if SP makes sense */	    \
>  	bge-	cr1,exc_##n##_bad_stack;/* bad stack (TODO: out of line) */ \
>  	mfspr	r10,SPRN_##type##_SRR0;	/* read SRR0 before touching stack */
>  
> @@ -327,6 +328,30 @@ END_FTR_SECTION_IFSET(CPU_FTR_EMB_HV)
>  #define SPRN_MC_SRR0	SPRN_MCSRR0
>  #define SPRN_MC_SRR1	SPRN_MCSRR1
>  
> +#ifdef CONFIG_PPC_FSL_BOOK3E
> +#define GEN_BTB_FLUSH			\
> +	START_BTB_FLUSH_SECTION		\
> +		beq 1f;			\
> +		BTB_FLUSH(r10)			\
> +		1:		\
> +	END_BTB_FLUSH_SECTION
> +
> +#define CRIT_BTB_FLUSH			\
> +	START_BTB_FLUSH_SECTION		\
> +		BTB_FLUSH(r10)		\
> +	END_BTB_FLUSH_SECTION
> +
> +#define DBG_BTB_FLUSH CRIT_BTB_FLUSH
> +#define MC_BTB_FLUSH CRIT_BTB_FLUSH
> +#define GDBELL_BTB_FLUSH GEN_BTB_FLUSH
> +#else
> +#define GEN_BTB_FLUSH
> +#define CRIT_BTB_FLUSH
> +#define DBG_BTB_FLUSH
> +#define MC_BTB_FLUSH
> +#define GDBELL_BTB_FLUSH
> +#endif
> +
>  #define NORMAL_EXCEPTION_PROLOG(n, intnum, addition)			    \
>  	EXCEPTION_PROLOG(n, intnum, GEN, addition##_GEN(n))
>  
> diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
> index 938a30fef031..10e7cec9553d 100644
> - --- a/arch/powerpc/kernel/exceptions-64s.S
> +++ b/arch/powerpc/kernel/exceptions-64s.S
> @@ -36,6 +36,7 @@ BEGIN_FTR_SECTION						\
>  END_FTR_SECTION_IFSET(CPU_FTR_REAL_LE)				\
>  	mr	r9,r13 ;					\
>  	GET_PACA(r13) ;						\
> +	INTERRUPT_TO_KERNEL ;					\
>  	mfspr	r11,SPRN_SRR0 ;					\
>  0:
>  
> @@ -292,7 +293,9 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
>  	. = 0x900
>  	.globl decrementer_pSeries
>  decrementer_pSeries:
> - -	_MASKABLE_EXCEPTION_PSERIES(0x900, decrementer, EXC_STD, SOFTEN_TEST_PR)
> +	SET_SCRATCH0(r13)
> +	EXCEPTION_PROLOG_0(PACA_EXGEN)
> +	b	decrementer_ool
>  
>  	STD_EXCEPTION_HV(0x980, 0x982, hdecrementer)
>  
> @@ -319,6 +322,7 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
>  	OPT_GET_SPR(r9, SPRN_PPR, CPU_FTR_HAS_PPR);
>  	HMT_MEDIUM;
>  	std	r10,PACA_EXGEN+EX_R10(r13)
> +	INTERRUPT_TO_KERNEL
>  	OPT_SAVE_REG_TO_PACA(PACA_EXGEN+EX_PPR, r9, CPU_FTR_HAS_PPR);
>  	mfcr	r9
>  	KVMTEST(0xc00)
> @@ -607,6 +611,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
>  
>  	.align	7
>  	/* moved from 0xe00 */
> +	MASKABLE_EXCEPTION_OOL(0x900, decrementer)
>  	STD_EXCEPTION_HV_OOL(0xe02, h_data_storage)
>  	KVM_HANDLER_SKIP(PACA_EXGEN, EXC_HV, 0xe02)
>  	STD_EXCEPTION_HV_OOL(0xe22, h_instr_storage)
> @@ -1564,6 +1569,21 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
>  	blr
>  #endif
>  
> +	.balign 16
> +	.globl stf_barrier_fallback
> +stf_barrier_fallback:
> +	std	r9,PACA_EXRFI+EX_R9(r13)
> +	std	r10,PACA_EXRFI+EX_R10(r13)
> +	sync
> +	ld	r9,PACA_EXRFI+EX_R9(r13)
> +	ld	r10,PACA_EXRFI+EX_R10(r13)
> +	ori	31,31,0
> +	.rept 14
> +	b	1f
> +1:
> +	.endr
> +	blr
> +
>  	.globl rfi_flush_fallback
>  rfi_flush_fallback:
>  	SET_SCRATCH0(r13);
> @@ -1571,39 +1591,37 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
>  	std	r9,PACA_EXRFI+EX_R9(r13)
>  	std	r10,PACA_EXRFI+EX_R10(r13)
>  	std	r11,PACA_EXRFI+EX_R11(r13)
> - -	std	r12,PACA_EXRFI+EX_R12(r13)
> - -	std	r8,PACA_EXRFI+EX_R13(r13)
>  	mfctr	r9
>  	ld	r10,PACA_RFI_FLUSH_FALLBACK_AREA(r13)
> - -	ld	r11,PACA_L1D_FLUSH_SETS(r13)
> - -	ld	r12,PACA_L1D_FLUSH_CONGRUENCE(r13)
> - -	/*
> - -	 * The load adresses are at staggered offsets within cachelines,
> - -	 * which suits some pipelines better (on others it should not
> - -	 * hurt).
> - -	 */
> - -	addi	r12,r12,8
> +	ld	r11,PACA_L1D_FLUSH_SIZE(r13)
> +	srdi	r11,r11,(7 + 3) /* 128 byte lines, unrolled 8x */
>  	mtctr	r11
>  	DCBT_STOP_ALL_STREAM_IDS(r11) /* Stop prefetch streams */
>  
>  	/* order ld/st prior to dcbt stop all streams with flushing */
>  	sync
> - -1:	li	r8,0
> - -	.rept	8 /* 8-way set associative */
> - -	ldx	r11,r10,r8
> - -	add	r8,r8,r12
> - -	xor	r11,r11,r11	// Ensure r11 is 0 even if fallback area is not
> - -	add	r8,r8,r11	// Add 0, this creates a dependency on the ldx
> - -	.endr
> - -	addi	r10,r10,128 /* 128 byte cache line */
> +
> +	/*
> +	 * The load addresses are at staggered offsets within cachelines,
> +	 * which suits some pipelines better (on others it should not
> +	 * hurt).
> +	 */
> +1:
> +	ld	r11,(0x80 + 8)*0(r10)
> +	ld	r11,(0x80 + 8)*1(r10)
> +	ld	r11,(0x80 + 8)*2(r10)
> +	ld	r11,(0x80 + 8)*3(r10)
> +	ld	r11,(0x80 + 8)*4(r10)
> +	ld	r11,(0x80 + 8)*5(r10)
> +	ld	r11,(0x80 + 8)*6(r10)
> +	ld	r11,(0x80 + 8)*7(r10)
> +	addi	r10,r10,0x80*8
>  	bdnz	1b
>  
>  	mtctr	r9
>  	ld	r9,PACA_EXRFI+EX_R9(r13)
>  	ld	r10,PACA_EXRFI+EX_R10(r13)
>  	ld	r11,PACA_EXRFI+EX_R11(r13)
> - -	ld	r12,PACA_EXRFI+EX_R12(r13)
> - -	ld	r8,PACA_EXRFI+EX_R13(r13)
>  	GET_SCRATCH0(r13);
>  	rfid
>  
> @@ -1614,39 +1632,37 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
>  	std	r9,PACA_EXRFI+EX_R9(r13)
>  	std	r10,PACA_EXRFI+EX_R10(r13)
>  	std	r11,PACA_EXRFI+EX_R11(r13)
> - -	std	r12,PACA_EXRFI+EX_R12(r13)
> - -	std	r8,PACA_EXRFI+EX_R13(r13)
>  	mfctr	r9
>  	ld	r10,PACA_RFI_FLUSH_FALLBACK_AREA(r13)
> - -	ld	r11,PACA_L1D_FLUSH_SETS(r13)
> - -	ld	r12,PACA_L1D_FLUSH_CONGRUENCE(r13)
> - -	/*
> - -	 * The load adresses are at staggered offsets within cachelines,
> - -	 * which suits some pipelines better (on others it should not
> - -	 * hurt).
> - -	 */
> - -	addi	r12,r12,8
> +	ld	r11,PACA_L1D_FLUSH_SIZE(r13)
> +	srdi	r11,r11,(7 + 3) /* 128 byte lines, unrolled 8x */
>  	mtctr	r11
>  	DCBT_STOP_ALL_STREAM_IDS(r11) /* Stop prefetch streams */
>  
>  	/* order ld/st prior to dcbt stop all streams with flushing */
>  	sync
> - -1:	li	r8,0
> - -	.rept	8 /* 8-way set associative */
> - -	ldx	r11,r10,r8
> - -	add	r8,r8,r12
> - -	xor	r11,r11,r11	// Ensure r11 is 0 even if fallback area is not
> - -	add	r8,r8,r11	// Add 0, this creates a dependency on the ldx
> - -	.endr
> - -	addi	r10,r10,128 /* 128 byte cache line */
> +
> +	/*
> +	 * The load addresses are at staggered offsets within cachelines,
> +	 * which suits some pipelines better (on others it should not
> +	 * hurt).
> +	 */
> +1:
> +	ld	r11,(0x80 + 8)*0(r10)
> +	ld	r11,(0x80 + 8)*1(r10)
> +	ld	r11,(0x80 + 8)*2(r10)
> +	ld	r11,(0x80 + 8)*3(r10)
> +	ld	r11,(0x80 + 8)*4(r10)
> +	ld	r11,(0x80 + 8)*5(r10)
> +	ld	r11,(0x80 + 8)*6(r10)
> +	ld	r11,(0x80 + 8)*7(r10)
> +	addi	r10,r10,0x80*8
>  	bdnz	1b
>  
>  	mtctr	r9
>  	ld	r9,PACA_EXRFI+EX_R9(r13)
>  	ld	r10,PACA_EXRFI+EX_R10(r13)
>  	ld	r11,PACA_EXRFI+EX_R11(r13)
> - -	ld	r12,PACA_EXRFI+EX_R12(r13)
> - -	ld	r8,PACA_EXRFI+EX_R13(r13)
>  	GET_SCRATCH0(r13);
>  	hrfid
>  
> diff --git a/arch/powerpc/kernel/module.c b/arch/powerpc/kernel/module.c
> index 9547381b631a..ff009be97a42 100644
> - --- a/arch/powerpc/kernel/module.c
> +++ b/arch/powerpc/kernel/module.c
> @@ -67,7 +67,15 @@ int module_finalize(const Elf_Ehdr *hdr,
>  		do_feature_fixups(powerpc_firmware_features,
>  				  (void *)sect->sh_addr,
>  				  (void *)sect->sh_addr + sect->sh_size);
> - -#endif
> +#endif /* CONFIG_PPC64 */
> +
> +#ifdef CONFIG_PPC_BARRIER_NOSPEC
> +	sect = find_section(hdr, sechdrs, "__spec_barrier_fixup");
> +	if (sect != NULL)
> +		do_barrier_nospec_fixups_range(barrier_nospec_enabled,
> +				  (void *)sect->sh_addr,
> +				  (void *)sect->sh_addr + sect->sh_size);
> +#endif /* CONFIG_PPC_BARRIER_NOSPEC */
>  
>  	sect = find_section(hdr, sechdrs, "__lwsync_fixup");
>  	if (sect != NULL)
> diff --git a/arch/powerpc/kernel/security.c b/arch/powerpc/kernel/security.c
> new file mode 100644
> index 000000000000..58f0602a92b9
> - --- /dev/null
> +++ b/arch/powerpc/kernel/security.c
> @@ -0,0 +1,433 @@
> +// SPDX-License-Identifier: GPL-2.0+
> +//
> +// Security related flags and so on.
> +//
> +// Copyright 2018, Michael Ellerman, IBM Corporation.
> +
> +#include <linux/kernel.h>
> +#include <linux/debugfs.h>
> +#include <linux/device.h>
> +#include <linux/seq_buf.h>
> +
> +#include <asm/debug.h>
> +#include <asm/asm-prototypes.h>
> +#include <asm/code-patching.h>
> +#include <asm/security_features.h>
> +#include <asm/setup.h>
> +
> +
> +unsigned long powerpc_security_features __read_mostly = SEC_FTR_DEFAULT;
> +
> +enum count_cache_flush_type {
> +	COUNT_CACHE_FLUSH_NONE	= 0x1,
> +	COUNT_CACHE_FLUSH_SW	= 0x2,
> +	COUNT_CACHE_FLUSH_HW	= 0x4,
> +};
> +static enum count_cache_flush_type count_cache_flush_type = COUNT_CACHE_FLUSH_NONE;
> +
> +bool barrier_nospec_enabled;
> +static bool no_nospec;
> +static bool btb_flush_enabled;
> +#ifdef CONFIG_PPC_FSL_BOOK3E
> +static bool no_spectrev2;
> +#endif
> +
> +static void enable_barrier_nospec(bool enable)
> +{
> +	barrier_nospec_enabled = enable;
> +	do_barrier_nospec_fixups(enable);
> +}
> +
> +void setup_barrier_nospec(void)
> +{
> +	bool enable;
> +
> +	/*
> +	 * It would make sense to check SEC_FTR_SPEC_BAR_ORI31 below as well.
> +	 * But there's a good reason not to. The two flags we check below are
> +	 * both enabled by default in the kernel, so if the hcall is not
> +	 * functional they will be enabled.
> +	 * On a system where the host firmware has been updated (so the ori
> +	 * functions as a barrier), but on which the hypervisor (KVM/Qemu) has
> +	 * not been updated, we would like to enable the barrier. Dropping the
> +	 * check for SEC_FTR_SPEC_BAR_ORI31 achieves that. The only downside is
> +	 * we potentially enable the barrier on systems where the host firmware
> +	 * is not updated, but that's harmless as it's a no-op.
> +	 */
> +	enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) &&
> +		 security_ftr_enabled(SEC_FTR_BNDS_CHK_SPEC_BAR);
> +
> +	if (!no_nospec)
> +		enable_barrier_nospec(enable);
> +}
> +
> +static int __init handle_nospectre_v1(char *p)
> +{
> +	no_nospec = true;
> +
> +	return 0;
> +}
> +early_param("nospectre_v1", handle_nospectre_v1);
> +
> +#ifdef CONFIG_DEBUG_FS
> +static int barrier_nospec_set(void *data, u64 val)
> +{
> +	switch (val) {
> +	case 0:
> +	case 1:
> +		break;
> +	default:
> +		return -EINVAL;
> +	}
> +
> +	if (!!val == !!barrier_nospec_enabled)
> +		return 0;
> +
> +	enable_barrier_nospec(!!val);
> +
> +	return 0;
> +}
> +
> +static int barrier_nospec_get(void *data, u64 *val)
> +{
> +	*val = barrier_nospec_enabled ? 1 : 0;
> +	return 0;
> +}
> +
> +DEFINE_SIMPLE_ATTRIBUTE(fops_barrier_nospec,
> +			barrier_nospec_get, barrier_nospec_set, "%llu\n");
> +
> +static __init int barrier_nospec_debugfs_init(void)
> +{
> +	debugfs_create_file("barrier_nospec", 0600, powerpc_debugfs_root, NULL,
> +			    &fops_barrier_nospec);
> +	return 0;
> +}
> +device_initcall(barrier_nospec_debugfs_init);
> +#endif /* CONFIG_DEBUG_FS */
> +
> +#ifdef CONFIG_PPC_FSL_BOOK3E
> +static int __init handle_nospectre_v2(char *p)
> +{
> +	no_spectrev2 = true;
> +
> +	return 0;
> +}
> +early_param("nospectre_v2", handle_nospectre_v2);
> +void setup_spectre_v2(void)
> +{
> +	if (no_spectrev2)
> +		do_btb_flush_fixups();
> +	else
> +		btb_flush_enabled = true;
> +}
> +#endif /* CONFIG_PPC_FSL_BOOK3E */
> +
> +#ifdef CONFIG_PPC_BOOK3S_64
> +ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr, char *buf)
> +{
> +	bool thread_priv;
> +
> +	thread_priv = security_ftr_enabled(SEC_FTR_L1D_THREAD_PRIV);
> +
> +	if (rfi_flush || thread_priv) {
> +		struct seq_buf s;
> +		seq_buf_init(&s, buf, PAGE_SIZE - 1);
> +
> +		seq_buf_printf(&s, "Mitigation: ");
> +
> +		if (rfi_flush)
> +			seq_buf_printf(&s, "RFI Flush");
> +
> +		if (rfi_flush && thread_priv)
> +			seq_buf_printf(&s, ", ");
> +
> +		if (thread_priv)
> +			seq_buf_printf(&s, "L1D private per thread");
> +
> +		seq_buf_printf(&s, "\n");
> +
> +		return s.len;
> +	}
> +
> +	if (!security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV) &&
> +	    !security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR))
> +		return sprintf(buf, "Not affected\n");
> +
> +	return sprintf(buf, "Vulnerable\n");
> +}
> +#endif
> +
> +ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr, char *buf)
> +{
> +	struct seq_buf s;
> +
> +	seq_buf_init(&s, buf, PAGE_SIZE - 1);
> +
> +	if (security_ftr_enabled(SEC_FTR_BNDS_CHK_SPEC_BAR)) {
> +		if (barrier_nospec_enabled)
> +			seq_buf_printf(&s, "Mitigation: __user pointer sanitization");
> +		else
> +			seq_buf_printf(&s, "Vulnerable");
> +
> +		if (security_ftr_enabled(SEC_FTR_SPEC_BAR_ORI31))
> +			seq_buf_printf(&s, ", ori31 speculation barrier enabled");
> +
> +		seq_buf_printf(&s, "\n");
> +	} else
> +		seq_buf_printf(&s, "Not affected\n");
> +
> +	return s.len;
> +}
> +
> +ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr, char *buf)
> +{
> +	struct seq_buf s;
> +	bool bcs, ccd;
> +
> +	seq_buf_init(&s, buf, PAGE_SIZE - 1);
> +
> +	bcs = security_ftr_enabled(SEC_FTR_BCCTRL_SERIALISED);
> +	ccd = security_ftr_enabled(SEC_FTR_COUNT_CACHE_DISABLED);
> +
> +	if (bcs || ccd) {
> +		seq_buf_printf(&s, "Mitigation: ");
> +
> +		if (bcs)
> +			seq_buf_printf(&s, "Indirect branch serialisation (kernel only)");
> +
> +		if (bcs && ccd)
> +			seq_buf_printf(&s, ", ");
> +
> +		if (ccd)
> +			seq_buf_printf(&s, "Indirect branch cache disabled");
> +	} else if (count_cache_flush_type != COUNT_CACHE_FLUSH_NONE) {
> +		seq_buf_printf(&s, "Mitigation: Software count cache flush");
> +
> +		if (count_cache_flush_type == COUNT_CACHE_FLUSH_HW)
> +			seq_buf_printf(&s, " (hardware accelerated)");
> +	} else if (btb_flush_enabled) {
> +		seq_buf_printf(&s, "Mitigation: Branch predictor state flush");
> +	} else {
> +		seq_buf_printf(&s, "Vulnerable");
> +	}
> +
> +	seq_buf_printf(&s, "\n");
> +
> +	return s.len;
> +}
> +
> +#ifdef CONFIG_PPC_BOOK3S_64
> +/*
> + * Store-forwarding barrier support.
> + */
> +
> +static enum stf_barrier_type stf_enabled_flush_types;
> +static bool no_stf_barrier;
> +bool stf_barrier;
> +
> +static int __init handle_no_stf_barrier(char *p)
> +{
> +	pr_info("stf-barrier: disabled on command line.\n");
> +	no_stf_barrier = true;
> +	return 0;
> +}
> +
> +early_param("no_stf_barrier", handle_no_stf_barrier);
> +
> +/* This is the generic flag used by other architectures */
> +static int __init handle_ssbd(char *p)
> +{
> +	if (!p || strncmp(p, "auto", 5) == 0 || strncmp(p, "on", 2) == 0) {
> +		/* Until firmware tells us, we have the barrier with auto */
> +		return 0;
> +	} else if (strncmp(p, "off", 3) == 0) {
> +		handle_no_stf_barrier(NULL);
> +		return 0;
> +	} else
> +		return 1;
> +
> +	return 0;
> +}
> +early_param("spec_store_bypass_disable", handle_ssbd);
> +
> +/* This is the generic flag used by other architectures */
> +static int __init handle_no_ssbd(char *p)
> +{
> +	handle_no_stf_barrier(NULL);
> +	return 0;
> +}
> +early_param("nospec_store_bypass_disable", handle_no_ssbd);
> +
> +static void stf_barrier_enable(bool enable)
> +{
> +	if (enable)
> +		do_stf_barrier_fixups(stf_enabled_flush_types);
> +	else
> +		do_stf_barrier_fixups(STF_BARRIER_NONE);
> +
> +	stf_barrier = enable;
> +}
> +
> +void setup_stf_barrier(void)
> +{
> +	enum stf_barrier_type type;
> +	bool enable, hv;
> +
> +	hv = cpu_has_feature(CPU_FTR_HVMODE);
> +
> +	/* Default to fallback in case fw-features are not available */
> +	if (cpu_has_feature(CPU_FTR_ARCH_207S))
> +		type = STF_BARRIER_SYNC_ORI;
> +	else if (cpu_has_feature(CPU_FTR_ARCH_206))
> +		type = STF_BARRIER_FALLBACK;
> +	else
> +		type = STF_BARRIER_NONE;
> +
> +	enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) &&
> +		(security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR) ||
> +		 (security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV) && hv));
> +
> +	if (type == STF_BARRIER_FALLBACK) {
> +		pr_info("stf-barrier: fallback barrier available\n");
> +	} else if (type == STF_BARRIER_SYNC_ORI) {
> +		pr_info("stf-barrier: hwsync barrier available\n");
> +	} else if (type == STF_BARRIER_EIEIO) {
> +		pr_info("stf-barrier: eieio barrier available\n");
> +	}
> +
> +	stf_enabled_flush_types = type;
> +
> +	if (!no_stf_barrier)
> +		stf_barrier_enable(enable);
> +}
> +
> +ssize_t cpu_show_spec_store_bypass(struct device *dev, struct device_attribute *attr, char *buf)
> +{
> +	if (stf_barrier && stf_enabled_flush_types != STF_BARRIER_NONE) {
> +		const char *type;
> +		switch (stf_enabled_flush_types) {
> +		case STF_BARRIER_EIEIO:
> +			type = "eieio";
> +			break;
> +		case STF_BARRIER_SYNC_ORI:
> +			type = "hwsync";
> +			break;
> +		case STF_BARRIER_FALLBACK:
> +			type = "fallback";
> +			break;
> +		default:
> +			type = "unknown";
> +		}
> +		return sprintf(buf, "Mitigation: Kernel entry/exit barrier (%s)\n", type);
> +	}
> +
> +	if (!security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV) &&
> +	    !security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR))
> +		return sprintf(buf, "Not affected\n");
> +
> +	return sprintf(buf, "Vulnerable\n");
> +}
> +
> +#ifdef CONFIG_DEBUG_FS
> +static int stf_barrier_set(void *data, u64 val)
> +{
> +	bool enable;
> +
> +	if (val == 1)
> +		enable = true;
> +	else if (val == 0)
> +		enable = false;
> +	else
> +		return -EINVAL;
> +
> +	/* Only do anything if we're changing state */
> +	if (enable != stf_barrier)
> +		stf_barrier_enable(enable);
> +
> +	return 0;
> +}
> +
> +static int stf_barrier_get(void *data, u64 *val)
> +{
> +	*val = stf_barrier ? 1 : 0;
> +	return 0;
> +}
> +
> +DEFINE_SIMPLE_ATTRIBUTE(fops_stf_barrier, stf_barrier_get, stf_barrier_set, "%llu\n");
> +
> +static __init int stf_barrier_debugfs_init(void)
> +{
> +	debugfs_create_file("stf_barrier", 0600, powerpc_debugfs_root, NULL, &fops_stf_barrier);
> +	return 0;
> +}
> +device_initcall(stf_barrier_debugfs_init);
> +#endif /* CONFIG_DEBUG_FS */
> +
> +static void toggle_count_cache_flush(bool enable)
> +{
> +	if (!enable || !security_ftr_enabled(SEC_FTR_FLUSH_COUNT_CACHE)) {
> +		patch_instruction_site(&patch__call_flush_count_cache, PPC_INST_NOP);
> +		count_cache_flush_type = COUNT_CACHE_FLUSH_NONE;
> +		pr_info("count-cache-flush: software flush disabled.\n");
> +		return;
> +	}
> +
> +	patch_branch_site(&patch__call_flush_count_cache,
> +			  (u64)&flush_count_cache, BRANCH_SET_LINK);
> +
> +	if (!security_ftr_enabled(SEC_FTR_BCCTR_FLUSH_ASSIST)) {
> +		count_cache_flush_type = COUNT_CACHE_FLUSH_SW;
> +		pr_info("count-cache-flush: full software flush sequence enabled.\n");
> +		return;
> +	}
> +
> +	patch_instruction_site(&patch__flush_count_cache_return, PPC_INST_BLR);
> +	count_cache_flush_type = COUNT_CACHE_FLUSH_HW;
> +	pr_info("count-cache-flush: hardware assisted flush sequence enabled.\n");
> +}
> +
> +void setup_count_cache_flush(void)
> +{
> +	toggle_count_cache_flush(true);
> +}
> +
> +#ifdef CONFIG_DEBUG_FS
> +static int count_cache_flush_set(void *data, u64 val)
> +{
> +	bool enable;
> +
> +	if (val == 1)
> +		enable = true;
> +	else if (val == 0)
> +		enable = false;
> +	else
> +		return -EINVAL;
> +
> +	toggle_count_cache_flush(enable);
> +
> +	return 0;
> +}
> +
> +static int count_cache_flush_get(void *data, u64 *val)
> +{
> +	if (count_cache_flush_type == COUNT_CACHE_FLUSH_NONE)
> +		*val = 0;
> +	else
> +		*val = 1;
> +
> +	return 0;
> +}
> +
> +DEFINE_SIMPLE_ATTRIBUTE(fops_count_cache_flush, count_cache_flush_get,
> +			count_cache_flush_set, "%llu\n");
> +
> +static __init int count_cache_flush_debugfs_init(void)
> +{
> +	debugfs_create_file("count_cache_flush", 0600, powerpc_debugfs_root,
> +			    NULL, &fops_count_cache_flush);
> +	return 0;
> +}
> +device_initcall(count_cache_flush_debugfs_init);
> +#endif /* CONFIG_DEBUG_FS */
> +#endif /* CONFIG_PPC_BOOK3S_64 */
> diff --git a/arch/powerpc/kernel/setup_32.c b/arch/powerpc/kernel/setup_32.c
> index ad8c9db61237..5a9f035bcd6b 100644
> - --- a/arch/powerpc/kernel/setup_32.c
> +++ b/arch/powerpc/kernel/setup_32.c
> @@ -322,6 +322,8 @@ void __init setup_arch(char **cmdline_p)
>  		ppc_md.setup_arch();
>  	if ( ppc_md.progress ) ppc_md.progress("arch: exit", 0x3eab);
>  
> +	setup_barrier_nospec();
> +
>  	paging_init();
>  
>  	/* Initialize the MMU context management stuff */
> diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
> index 9eb469bed22b..6bb731ababc6 100644
> - --- a/arch/powerpc/kernel/setup_64.c
> +++ b/arch/powerpc/kernel/setup_64.c
> @@ -736,6 +736,8 @@ void __init setup_arch(char **cmdline_p)
>  	if (ppc_md.setup_arch)
>  		ppc_md.setup_arch();
>  
> +	setup_barrier_nospec();
> +
>  	paging_init();
>  
>  	/* Initialize the MMU context management stuff */
> @@ -873,9 +875,6 @@ static void do_nothing(void *unused)
>  
>  void rfi_flush_enable(bool enable)
>  {
> - -	if (rfi_flush == enable)
> - -		return;
> - -
>  	if (enable) {
>  		do_rfi_flush_fixups(enabled_flush_types);
>  		on_each_cpu(do_nothing, NULL, 1);
> @@ -885,11 +884,15 @@ void rfi_flush_enable(bool enable)
>  	rfi_flush = enable;
>  }
>  
> - -static void init_fallback_flush(void)
> +static void __ref init_fallback_flush(void)
>  {
>  	u64 l1d_size, limit;
>  	int cpu;
>  
> +	/* Only allocate the fallback flush area once (at boot time). */
> +	if (l1d_flush_fallback_area)
> +		return;
> +
>  	l1d_size = ppc64_caches.dsize;
>  	limit = min(safe_stack_limit(), ppc64_rma_size);
>  
> @@ -902,34 +905,23 @@ static void init_fallback_flush(void)
>  	memset(l1d_flush_fallback_area, 0, l1d_size * 2);
>  
>  	for_each_possible_cpu(cpu) {
> - -		/*
> - -		 * The fallback flush is currently coded for 8-way
> - -		 * associativity. Different associativity is possible, but it
> - -		 * will be treated as 8-way and may not evict the lines as
> - -		 * effectively.
> - -		 *
> - -		 * 128 byte lines are mandatory.
> - -		 */
> - -		u64 c = l1d_size / 8;
> - -
>  		paca[cpu].rfi_flush_fallback_area = l1d_flush_fallback_area;
> - -		paca[cpu].l1d_flush_congruence = c;
> - -		paca[cpu].l1d_flush_sets = c / 128;
> +		paca[cpu].l1d_flush_size = l1d_size;
>  	}
>  }
>  
> -void __init setup_rfi_flush(enum l1d_flush_type types, bool enable)
> +void setup_rfi_flush(enum l1d_flush_type types, bool enable)
>  {
>  	if (types & L1D_FLUSH_FALLBACK) {
> -		pr_info("rfi-flush: Using fallback displacement flush\n");
> +		pr_info("rfi-flush: fallback displacement flush available\n");
>  		init_fallback_flush();
>  	}
>  
>  	if (types & L1D_FLUSH_ORI)
> -		pr_info("rfi-flush: Using ori type flush\n");
> +		pr_info("rfi-flush: ori type flush available\n");
>  
>  	if (types & L1D_FLUSH_MTTRIG)
> -		pr_info("rfi-flush: Using mttrig type flush\n");
> +		pr_info("rfi-flush: mttrig type flush available\n");
>  
>  	enabled_flush_types = types;
>  
> @@ -940,13 +932,19 @@ void __init setup_rfi_flush(enum l1d_flush_type types, bool enable)
>  #ifdef CONFIG_DEBUG_FS
>  static int rfi_flush_set(void *data, u64 val)
>  {
> +	bool enable;
> +
>  	if (val == 1)
> -		rfi_flush_enable(true);
> +		enable = true;
>  	else if (val == 0)
> -		rfi_flush_enable(false);
> +		enable = false;
>  	else
>  		return -EINVAL;
>  
> +	/* Only do anything if we're changing state */
> +	if (enable != rfi_flush)
> +		rfi_flush_enable(enable);
> +
>  	return 0;
>  }
>  
> @@ -965,12 +963,4 @@ static __init int rfi_flush_debugfs_init(void)
>  }
>  device_initcall(rfi_flush_debugfs_init);
>  #endif
> -
> -ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr, char *buf)
> -{
> -	if (rfi_flush)
> -		return sprintf(buf, "Mitigation: RFI Flush\n");
> -
> -	return sprintf(buf, "Vulnerable\n");
> -}
>  #endif /* CONFIG_PPC_BOOK3S_64 */
> diff --git a/arch/powerpc/kernel/vmlinux.lds.S b/arch/powerpc/kernel/vmlinux.lds.S
> index 072a23a17350..876ac9d52afc 100644
> --- a/arch/powerpc/kernel/vmlinux.lds.S
> +++ b/arch/powerpc/kernel/vmlinux.lds.S
> @@ -73,14 +73,45 @@ SECTIONS
>  	RODATA
>  
>  #ifdef CONFIG_PPC64
> +	. = ALIGN(8);
> +	__stf_entry_barrier_fixup : AT(ADDR(__stf_entry_barrier_fixup) - LOAD_OFFSET) {
> +		__start___stf_entry_barrier_fixup = .;
> +		*(__stf_entry_barrier_fixup)
> +		__stop___stf_entry_barrier_fixup = .;
> +	}
> +
> +	. = ALIGN(8);
> +	__stf_exit_barrier_fixup : AT(ADDR(__stf_exit_barrier_fixup) - LOAD_OFFSET) {
> +		__start___stf_exit_barrier_fixup = .;
> +		*(__stf_exit_barrier_fixup)
> +		__stop___stf_exit_barrier_fixup = .;
> +	}
> +
>  	. = ALIGN(8);
>  	__rfi_flush_fixup : AT(ADDR(__rfi_flush_fixup) - LOAD_OFFSET) {
>  		__start___rfi_flush_fixup = .;
>  		*(__rfi_flush_fixup)
>  		__stop___rfi_flush_fixup = .;
>  	}
> -#endif
> +#endif /* CONFIG_PPC64 */
>  
> +#ifdef CONFIG_PPC_BARRIER_NOSPEC
> +	. = ALIGN(8);
> +	__spec_barrier_fixup : AT(ADDR(__spec_barrier_fixup) - LOAD_OFFSET) {
> +		__start___barrier_nospec_fixup = .;
> +		*(__barrier_nospec_fixup)
> +		__stop___barrier_nospec_fixup = .;
> +	}
> +#endif /* CONFIG_PPC_BARRIER_NOSPEC */
> +
> +#ifdef CONFIG_PPC_FSL_BOOK3E
> +	. = ALIGN(8);
> +	__spec_btb_flush_fixup : AT(ADDR(__spec_btb_flush_fixup) - LOAD_OFFSET) {
> +		__start__btb_flush_fixup = .;
> +		*(__btb_flush_fixup)
> +		__stop__btb_flush_fixup = .;
> +	}
> +#endif
>  	EXCEPTION_TABLE(0)
>  
>  	NOTES :kernel :notes
> diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
> index d5edbeb8eb82..570c06a00db6 100644
> --- a/arch/powerpc/lib/code-patching.c
> +++ b/arch/powerpc/lib/code-patching.c
> @@ -14,12 +14,25 @@
>  #include <asm/page.h>
>  #include <asm/code-patching.h>
>  #include <asm/uaccess.h>
> +#include <asm/setup.h>
> +#include <asm/sections.h>
>  
>  
> +static inline bool is_init(unsigned int *addr)
> +{
> +	return addr >= (unsigned int *)__init_begin && addr < (unsigned int *)__init_end;
> +}
> +
>  int patch_instruction(unsigned int *addr, unsigned int instr)
>  {
>  	int err;
>  
> +	/* Make sure we aren't patching a freed init section */
> +	if (init_mem_is_free && is_init(addr)) {
> +		pr_debug("Skipping init section patching addr: 0x%px\n", addr);
> +		return 0;
> +	}
> +
>  	__put_user_size(instr, addr, 4, err);
>  	if (err)
>  		return err;
> @@ -32,6 +45,22 @@ int patch_branch(unsigned int *addr, unsigned long target, int flags)
>  	return patch_instruction(addr, create_branch(addr, target, flags));
>  }
>  
> +int patch_branch_site(s32 *site, unsigned long target, int flags)
> +{
> +	unsigned int *addr;
> +
> +	addr = (unsigned int *)((unsigned long)site + *site);
> +	return patch_instruction(addr, create_branch(addr, target, flags));
> +}
> +
> +int patch_instruction_site(s32 *site, unsigned int instr)
> +{
> +	unsigned int *addr;
> +
> +	addr = (unsigned int *)((unsigned long)site + *site);
> +	return patch_instruction(addr, instr);
> +}
> +
>  unsigned int create_branch(const unsigned int *addr,
>  			   unsigned long target, int flags)
>  {
> diff --git a/arch/powerpc/lib/feature-fixups.c b/arch/powerpc/lib/feature-fixups.c
> index 3af014684872..7bdfc19a491d 100644
> --- a/arch/powerpc/lib/feature-fixups.c
> +++ b/arch/powerpc/lib/feature-fixups.c
> @@ -21,7 +21,7 @@
>  #include <asm/page.h>
>  #include <asm/sections.h>
>  #include <asm/setup.h>
> -
> +#include <asm/security_features.h>
>  
>  struct fixup_entry {
>  	unsigned long	mask;
> @@ -115,6 +115,120 @@ void do_feature_fixups(unsigned long value, void *fixup_start, void *fixup_end)
>  }
>  
>  #ifdef CONFIG_PPC_BOOK3S_64
> +void do_stf_entry_barrier_fixups(enum stf_barrier_type types)
> +{
> +	unsigned int instrs[3], *dest;
> +	long *start, *end;
> +	int i;
> +
> +	start = PTRRELOC(&__start___stf_entry_barrier_fixup),
> +	end = PTRRELOC(&__stop___stf_entry_barrier_fixup);
> +
> +	instrs[0] = 0x60000000; /* nop */
> +	instrs[1] = 0x60000000; /* nop */
> +	instrs[2] = 0x60000000; /* nop */
> +
> +	i = 0;
> +	if (types & STF_BARRIER_FALLBACK) {
> +		instrs[i++] = 0x7d4802a6; /* mflr r10		*/
> +		instrs[i++] = 0x60000000; /* branch patched below */
> +		instrs[i++] = 0x7d4803a6; /* mtlr r10		*/
> +	} else if (types & STF_BARRIER_EIEIO) {
> +		instrs[i++] = 0x7e0006ac; /* eieio + bit 6 hint */
> +	} else if (types & STF_BARRIER_SYNC_ORI) {
> +		instrs[i++] = 0x7c0004ac; /* hwsync		*/
> +		instrs[i++] = 0xe94d0000; /* ld r10,0(r13)	*/
> +		instrs[i++] = 0x63ff0000; /* ori 31,31,0 speculation barrier */
> +	}
> +
> +	for (i = 0; start < end; start++, i++) {
> +		dest = (void *)start + *start;
> +
> +		pr_devel("patching dest %lx\n", (unsigned long)dest);
> +
> +		patch_instruction(dest, instrs[0]);
> +
> +		if (types & STF_BARRIER_FALLBACK)
> +			patch_branch(dest + 1, (unsigned long)&stf_barrier_fallback,
> +				     BRANCH_SET_LINK);
> +		else
> +			patch_instruction(dest + 1, instrs[1]);
> +
> +		patch_instruction(dest + 2, instrs[2]);
> +	}
> +
> +	printk(KERN_DEBUG "stf-barrier: patched %d entry locations (%s barrier)\n", i,
> +		(types == STF_BARRIER_NONE)                  ? "no" :
> +		(types == STF_BARRIER_FALLBACK)              ? "fallback" :
> +		(types == STF_BARRIER_EIEIO)                 ? "eieio" :
> +		(types == (STF_BARRIER_SYNC_ORI))            ? "hwsync"
> +		                                           : "unknown");
> +}
> +
> +void do_stf_exit_barrier_fixups(enum stf_barrier_type types)
> +{
> +	unsigned int instrs[6], *dest;
> +	long *start, *end;
> +	int i;
> +
> +	start = PTRRELOC(&__start___stf_exit_barrier_fixup),
> +	end = PTRRELOC(&__stop___stf_exit_barrier_fixup);
> +
> +	instrs[0] = 0x60000000; /* nop */
> +	instrs[1] = 0x60000000; /* nop */
> +	instrs[2] = 0x60000000; /* nop */
> +	instrs[3] = 0x60000000; /* nop */
> +	instrs[4] = 0x60000000; /* nop */
> +	instrs[5] = 0x60000000; /* nop */
> +
> +	i = 0;
> +	if (types & STF_BARRIER_FALLBACK || types & STF_BARRIER_SYNC_ORI) {
> +		if (cpu_has_feature(CPU_FTR_HVMODE)) {
> +			instrs[i++] = 0x7db14ba6; /* mtspr 0x131, r13 (HSPRG1) */
> +			instrs[i++] = 0x7db04aa6; /* mfspr r13, 0x130 (HSPRG0) */
> +		} else {
> +			instrs[i++] = 0x7db243a6; /* mtsprg 2,r13	*/
> +			instrs[i++] = 0x7db142a6; /* mfsprg r13,1    */
> +	        }
> +		instrs[i++] = 0x7c0004ac; /* hwsync		*/
> +		instrs[i++] = 0xe9ad0000; /* ld r13,0(r13)	*/
> +		instrs[i++] = 0x63ff0000; /* ori 31,31,0 speculation barrier */
> +		if (cpu_has_feature(CPU_FTR_HVMODE)) {
> +			instrs[i++] = 0x7db14aa6; /* mfspr r13, 0x131 (HSPRG1) */
> +		} else {
> +			instrs[i++] = 0x7db242a6; /* mfsprg r13,2 */
> +		}
> +	} else if (types & STF_BARRIER_EIEIO) {
> +		instrs[i++] = 0x7e0006ac; /* eieio + bit 6 hint */
> +	}
> +
> +	for (i = 0; start < end; start++, i++) {
> +		dest = (void *)start + *start;
> +
> +		pr_devel("patching dest %lx\n", (unsigned long)dest);
> +
> +		patch_instruction(dest, instrs[0]);
> +		patch_instruction(dest + 1, instrs[1]);
> +		patch_instruction(dest + 2, instrs[2]);
> +		patch_instruction(dest + 3, instrs[3]);
> +		patch_instruction(dest + 4, instrs[4]);
> +		patch_instruction(dest + 5, instrs[5]);
> +	}
> +	printk(KERN_DEBUG "stf-barrier: patched %d exit locations (%s barrier)\n", i,
> +		(types == STF_BARRIER_NONE)                  ? "no" :
> +		(types == STF_BARRIER_FALLBACK)              ? "fallback" :
> +		(types == STF_BARRIER_EIEIO)                 ? "eieio" :
> +		(types == (STF_BARRIER_SYNC_ORI))            ? "hwsync"
> +		                                           : "unknown");
> +}
> +
> +
> +void do_stf_barrier_fixups(enum stf_barrier_type types)
> +{
> +	do_stf_entry_barrier_fixups(types);
> +	do_stf_exit_barrier_fixups(types);
> +}
> +
>  void do_rfi_flush_fixups(enum l1d_flush_type types)
>  {
>  	unsigned int instrs[3], *dest;
> @@ -151,10 +265,110 @@ void do_rfi_flush_fixups(enum l1d_flush_type types)
>  		patch_instruction(dest + 2, instrs[2]);
>  	}
>  
> -	printk(KERN_DEBUG "rfi-flush: patched %d locations\n", i);
> +	printk(KERN_DEBUG "rfi-flush: patched %d locations (%s flush)\n", i,
> +		(types == L1D_FLUSH_NONE)       ? "no" :
> +		(types == L1D_FLUSH_FALLBACK)   ? "fallback displacement" :
> +		(types &  L1D_FLUSH_ORI)        ? (types & L1D_FLUSH_MTTRIG)
> +							? "ori+mttrig type"
> +							: "ori type" :
> +		(types &  L1D_FLUSH_MTTRIG)     ? "mttrig type"
> +						: "unknown");
> +}
> +
> +void do_barrier_nospec_fixups_range(bool enable, void *fixup_start, void *fixup_end)
> +{
> +	unsigned int instr, *dest;
> +	long *start, *end;
> +	int i;
> +
> +	start = fixup_start;
> +	end = fixup_end;
> +
> +	instr = 0x60000000; /* nop */
> +
> +	if (enable) {
> +		pr_info("barrier-nospec: using ORI speculation barrier\n");
> +		instr = 0x63ff0000; /* ori 31,31,0 speculation barrier */
> +	}
> +
> +	for (i = 0; start < end; start++, i++) {
> +		dest = (void *)start + *start;
> +
> +		pr_devel("patching dest %lx\n", (unsigned long)dest);
> +		patch_instruction(dest, instr);
> +	}
> +
> +	printk(KERN_DEBUG "barrier-nospec: patched %d locations\n", i);
>  }
> +
>  #endif /* CONFIG_PPC_BOOK3S_64 */
>  
> +#ifdef CONFIG_PPC_BARRIER_NOSPEC
> +void do_barrier_nospec_fixups(bool enable)
> +{
> +	void *start, *end;
> +
> +	start = PTRRELOC(&__start___barrier_nospec_fixup),
> +	end = PTRRELOC(&__stop___barrier_nospec_fixup);
> +
> +	do_barrier_nospec_fixups_range(enable, start, end);
> +}
> +#endif /* CONFIG_PPC_BARRIER_NOSPEC */
> +
> +#ifdef CONFIG_PPC_FSL_BOOK3E
> +void do_barrier_nospec_fixups_range(bool enable, void *fixup_start, void *fixup_end)
> +{
> +	unsigned int instr[2], *dest;
> +	long *start, *end;
> +	int i;
> +
> +	start = fixup_start;
> +	end = fixup_end;
> +
> +	instr[0] = PPC_INST_NOP;
> +	instr[1] = PPC_INST_NOP;
> +
> +	if (enable) {
> +		pr_info("barrier-nospec: using isync; sync as speculation barrier\n");
> +		instr[0] = PPC_INST_ISYNC;
> +		instr[1] = PPC_INST_SYNC;
> +	}
> +
> +	for (i = 0; start < end; start++, i++) {
> +		dest = (void *)start + *start;
> +
> +		pr_devel("patching dest %lx\n", (unsigned long)dest);
> +		patch_instruction(dest, instr[0]);
> +		patch_instruction(dest + 1, instr[1]);
> +	}
> +
> +	printk(KERN_DEBUG "barrier-nospec: patched %d locations\n", i);
> +}
> +
> +static void patch_btb_flush_section(long *curr)
> +{
> +	unsigned int *start, *end;
> +
> +	start = (void *)curr + *curr;
> +	end = (void *)curr + *(curr + 1);
> +	for (; start < end; start++) {
> +		pr_devel("patching dest %lx\n", (unsigned long)start);
> +		patch_instruction(start, PPC_INST_NOP);
> +	}
> +}
> +
> +void do_btb_flush_fixups(void)
> +{
> +	long *start, *end;
> +
> +	start = PTRRELOC(&__start__btb_flush_fixup);
> +	end = PTRRELOC(&__stop__btb_flush_fixup);
> +
> +	for (; start < end; start += 2)
> +		patch_btb_flush_section(start);
> +}
> +#endif /* CONFIG_PPC_FSL_BOOK3E */
> +
>  void do_lwsync_fixups(unsigned long value, void *fixup_start, void *fixup_end)
>  {
>  	long *start, *end;
> diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
> index 22d94c3e6fc4..1efe5ca5c3bc 100644
> --- a/arch/powerpc/mm/mem.c
> +++ b/arch/powerpc/mm/mem.c
> @@ -62,6 +62,7 @@
>  #endif
>  
>  unsigned long long memory_limit;
> +bool init_mem_is_free;
>  
>  #ifdef CONFIG_HIGHMEM
>  pte_t *kmap_pte;
> @@ -381,6 +382,7 @@ void __init mem_init(void)
>  void free_initmem(void)
>  {
>  	ppc_md.progress = ppc_printk_progress;
> +	init_mem_is_free = true;
>  	free_initmem_default(POISON_FREE_INITMEM);
>  }
>  
> diff --git a/arch/powerpc/mm/tlb_low_64e.S b/arch/powerpc/mm/tlb_low_64e.S
> index 29d6987c37ba..5486d56da289 100644
> --- a/arch/powerpc/mm/tlb_low_64e.S
> +++ b/arch/powerpc/mm/tlb_low_64e.S
> @@ -69,6 +69,13 @@ END_FTR_SECTION_IFSET(CPU_FTR_EMB_HV)
>  	std	r15,EX_TLB_R15(r12)
>  	std	r10,EX_TLB_CR(r12)
>  #ifdef CONFIG_PPC_FSL_BOOK3E
> +START_BTB_FLUSH_SECTION
> +	mfspr r11, SPRN_SRR1
> +	andi. r10,r11,MSR_PR
> +	beq 1f
> +	BTB_FLUSH(r10)
> +1:
> +END_BTB_FLUSH_SECTION
>  	std	r7,EX_TLB_R7(r12)
>  #endif
>  	TLB_MISS_PROLOG_STATS
> diff --git a/arch/powerpc/platforms/powernv/setup.c b/arch/powerpc/platforms/powernv/setup.c
> index c57afc619b20..e14b52c7ebd8 100644
> --- a/arch/powerpc/platforms/powernv/setup.c
> +++ b/arch/powerpc/platforms/powernv/setup.c
> @@ -37,53 +37,99 @@
>  #include <asm/smp.h>
>  #include <asm/tm.h>
>  #include <asm/setup.h>
> +#include <asm/security_features.h>
>  
>  #include "powernv.h"
>  
> +
> +static bool fw_feature_is(const char *state, const char *name,
> +			  struct device_node *fw_features)
> +{
> +	struct device_node *np;
> +	bool rc = false;
> +
> +	np = of_get_child_by_name(fw_features, name);
> +	if (np) {
> +		rc = of_property_read_bool(np, state);
> +		of_node_put(np);
> +	}
> +
> +	return rc;
> +}
> +
> +static void init_fw_feat_flags(struct device_node *np)
> +{
> +	if (fw_feature_is("enabled", "inst-spec-barrier-ori31,31,0", np))
> +		security_ftr_set(SEC_FTR_SPEC_BAR_ORI31);
> +
> +	if (fw_feature_is("enabled", "fw-bcctrl-serialized", np))
> +		security_ftr_set(SEC_FTR_BCCTRL_SERIALISED);
> +
> +	if (fw_feature_is("enabled", "inst-l1d-flush-ori30,30,0", np))
> +		security_ftr_set(SEC_FTR_L1D_FLUSH_ORI30);
> +
> +	if (fw_feature_is("enabled", "inst-l1d-flush-trig2", np))
> +		security_ftr_set(SEC_FTR_L1D_FLUSH_TRIG2);
> +
> +	if (fw_feature_is("enabled", "fw-l1d-thread-split", np))
> +		security_ftr_set(SEC_FTR_L1D_THREAD_PRIV);
> +
> +	if (fw_feature_is("enabled", "fw-count-cache-disabled", np))
> +		security_ftr_set(SEC_FTR_COUNT_CACHE_DISABLED);
> +
> +	if (fw_feature_is("enabled", "fw-count-cache-flush-bcctr2,0,0", np))
> +		security_ftr_set(SEC_FTR_BCCTR_FLUSH_ASSIST);
> +
> +	if (fw_feature_is("enabled", "needs-count-cache-flush-on-context-switch", np))
> +		security_ftr_set(SEC_FTR_FLUSH_COUNT_CACHE);
> +
> +	/*
> +	 * The features below are enabled by default, so we instead look to see
> +	 * if firmware has *disabled* them, and clear them if so.
> +	 */
> +	if (fw_feature_is("disabled", "speculation-policy-favor-security", np))
> +		security_ftr_clear(SEC_FTR_FAVOUR_SECURITY);
> +
> +	if (fw_feature_is("disabled", "needs-l1d-flush-msr-pr-0-to-1", np))
> +		security_ftr_clear(SEC_FTR_L1D_FLUSH_PR);
> +
> +	if (fw_feature_is("disabled", "needs-l1d-flush-msr-hv-1-to-0", np))
> +		security_ftr_clear(SEC_FTR_L1D_FLUSH_HV);
> +
> +	if (fw_feature_is("disabled", "needs-spec-barrier-for-bound-checks", np))
> +		security_ftr_clear(SEC_FTR_BNDS_CHK_SPEC_BAR);
> +}
> +
>  static void pnv_setup_rfi_flush(void)
>  {
>  	struct device_node *np, *fw_features;
>  	enum l1d_flush_type type;
> -	int enable;
> +	bool enable;
>  
>  	/* Default to fallback in case fw-features are not available */
>  	type = L1D_FLUSH_FALLBACK;
> -	enable = 1;
>  
>  	np = of_find_node_by_name(NULL, "ibm,opal");
>  	fw_features = of_get_child_by_name(np, "fw-features");
>  	of_node_put(np);
>  
>  	if (fw_features) {
> -		np = of_get_child_by_name(fw_features, "inst-l1d-flush-trig2");
> -		if (np && of_property_read_bool(np, "enabled"))
> -			type = L1D_FLUSH_MTTRIG;
> +		init_fw_feat_flags(fw_features);
> +		of_node_put(fw_features);
>  
> -		of_node_put(np);
> +		if (security_ftr_enabled(SEC_FTR_L1D_FLUSH_TRIG2))
> +			type = L1D_FLUSH_MTTRIG;
>  
> -		np = of_get_child_by_name(fw_features, "inst-l1d-flush-ori30,30,0");
> -		if (np && of_property_read_bool(np, "enabled"))
> +		if (security_ftr_enabled(SEC_FTR_L1D_FLUSH_ORI30))
>  			type = L1D_FLUSH_ORI;
> -
> -		of_node_put(np);
> -
> -		/* Enable unless firmware says NOT to */
> -		enable = 2;
> -		np = of_get_child_by_name(fw_features, "needs-l1d-flush-msr-hv-1-to-0");
> -		if (np && of_property_read_bool(np, "disabled"))
> -			enable--;
> -
> -		of_node_put(np);
> -
> -		np = of_get_child_by_name(fw_features, "needs-l1d-flush-msr-pr-0-to-1");
> -		if (np && of_property_read_bool(np, "disabled"))
> -			enable--;
> -
> -		of_node_put(np);
> -		of_node_put(fw_features);
>  	}
>  
> -	setup_rfi_flush(type, enable > 0);
> +	enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) && \
> +		 (security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR)   || \
> +		  security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV));
> +
> +	setup_rfi_flush(type, enable);
> +	setup_count_cache_flush();
>  }
>  
>  static void __init pnv_setup_arch(void)
> @@ -91,6 +137,7 @@ static void __init pnv_setup_arch(void)
>  	set_arch_panic_timeout(10, ARCH_PANIC_TIMEOUT);
>  
>  	pnv_setup_rfi_flush();
> +	setup_stf_barrier();
>  
>  	/* Initialize SMP */
>  	pnv_smp_init();
> diff --git a/arch/powerpc/platforms/pseries/mobility.c b/arch/powerpc/platforms/pseries/mobility.c
> index 8dd0c8edefd6..c773396d0969 100644
> --- a/arch/powerpc/platforms/pseries/mobility.c
> +++ b/arch/powerpc/platforms/pseries/mobility.c
> @@ -314,6 +314,9 @@ void post_mobility_fixup(void)
>  		printk(KERN_ERR "Post-mobility device tree update "
>  			"failed: %d\n", rc);
>  
> +	/* Possibly switch to a new RFI flush type */
> +	pseries_setup_rfi_flush();
> +
>  	return;
>  }
>  
> diff --git a/arch/powerpc/platforms/pseries/pseries.h b/arch/powerpc/platforms/pseries/pseries.h
> index 8411c27293e4..e7d80797384d 100644
> --- a/arch/powerpc/platforms/pseries/pseries.h
> +++ b/arch/powerpc/platforms/pseries/pseries.h
> @@ -81,4 +81,6 @@ extern struct pci_controller_ops pseries_pci_controller_ops;
>  
>  unsigned long pseries_memory_block_size(void);
>  
> +void pseries_setup_rfi_flush(void);
> +
>  #endif /* _PSERIES_PSERIES_H */
> diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
> index dd2545fc9947..9cc976ff7fec 100644
> --- a/arch/powerpc/platforms/pseries/setup.c
> +++ b/arch/powerpc/platforms/pseries/setup.c
> @@ -67,6 +67,7 @@
>  #include <asm/eeh.h>
>  #include <asm/reg.h>
>  #include <asm/plpar_wrappers.h>
> +#include <asm/security_features.h>
>  
>  #include "pseries.h"
>  
> @@ -499,37 +500,87 @@ static void __init find_and_init_phbs(void)
>  	of_pci_check_probe_only();
>  }
>  
> -static void pseries_setup_rfi_flush(void)
> +static void init_cpu_char_feature_flags(struct h_cpu_char_result *result)
> +{
> +	/*
> +	 * The features below are disabled by default, so we instead look to see
> +	 * if firmware has *enabled* them, and set them if so.
> +	 */
> +	if (result->character & H_CPU_CHAR_SPEC_BAR_ORI31)
> +		security_ftr_set(SEC_FTR_SPEC_BAR_ORI31);
> +
> +	if (result->character & H_CPU_CHAR_BCCTRL_SERIALISED)
> +		security_ftr_set(SEC_FTR_BCCTRL_SERIALISED);
> +
> +	if (result->character & H_CPU_CHAR_L1D_FLUSH_ORI30)
> +		security_ftr_set(SEC_FTR_L1D_FLUSH_ORI30);
> +
> +	if (result->character & H_CPU_CHAR_L1D_FLUSH_TRIG2)
> +		security_ftr_set(SEC_FTR_L1D_FLUSH_TRIG2);
> +
> +	if (result->character & H_CPU_CHAR_L1D_THREAD_PRIV)
> +		security_ftr_set(SEC_FTR_L1D_THREAD_PRIV);
> +
> +	if (result->character & H_CPU_CHAR_COUNT_CACHE_DISABLED)
> +		security_ftr_set(SEC_FTR_COUNT_CACHE_DISABLED);
> +
> +	if (result->character & H_CPU_CHAR_BCCTR_FLUSH_ASSIST)
> +		security_ftr_set(SEC_FTR_BCCTR_FLUSH_ASSIST);
> +
> +	if (result->behaviour & H_CPU_BEHAV_FLUSH_COUNT_CACHE)
> +		security_ftr_set(SEC_FTR_FLUSH_COUNT_CACHE);
> +
> +	/*
> +	 * The features below are enabled by default, so we instead look to see
> +	 * if firmware has *disabled* them, and clear them if so.
> +	 */
> +	if (!(result->behaviour & H_CPU_BEHAV_FAVOUR_SECURITY))
> +		security_ftr_clear(SEC_FTR_FAVOUR_SECURITY);
> +
> +	if (!(result->behaviour & H_CPU_BEHAV_L1D_FLUSH_PR))
> +		security_ftr_clear(SEC_FTR_L1D_FLUSH_PR);
> +
> +	if (!(result->behaviour & H_CPU_BEHAV_BNDS_CHK_SPEC_BAR))
> +		security_ftr_clear(SEC_FTR_BNDS_CHK_SPEC_BAR);
> +}
> +
> +void pseries_setup_rfi_flush(void)
>  {
>  	struct h_cpu_char_result result;
>  	enum l1d_flush_type types;
>  	bool enable;
>  	long rc;
>  
> -	/* Enable by default */
> -	enable = true;
> +	/*
> +	 * Set features to the defaults assumed by init_cpu_char_feature_flags()
> +	 * so it can set/clear again any features that might have changed after
> +	 * migration, and in case the hypercall fails and it is not even called.
> +	 */
> +	powerpc_security_features = SEC_FTR_DEFAULT;
>  
>  	rc = plpar_get_cpu_characteristics(&result);
> -	if (rc == H_SUCCESS) {
> -		types = L1D_FLUSH_NONE;
> +	if (rc == H_SUCCESS)
> +		init_cpu_char_feature_flags(&result);
>  
> -		if (result.character & H_CPU_CHAR_L1D_FLUSH_TRIG2)
> -			types |= L1D_FLUSH_MTTRIG;
> -		if (result.character & H_CPU_CHAR_L1D_FLUSH_ORI30)
> -			types |= L1D_FLUSH_ORI;
> +	/*
> +	 * We're the guest so this doesn't apply to us, clear it to simplify
> +	 * handling of it elsewhere.
> +	 */
> +	security_ftr_clear(SEC_FTR_L1D_FLUSH_HV);
>  
> -		/* Use fallback if nothing set in hcall */
> -		if (types == L1D_FLUSH_NONE)
> -			types = L1D_FLUSH_FALLBACK;
> +	types = L1D_FLUSH_FALLBACK;
>  
> -		if (!(result.behaviour & H_CPU_BEHAV_L1D_FLUSH_PR))
> -			enable = false;
> -	} else {
> -		/* Default to fallback if case hcall is not available */
> -		types = L1D_FLUSH_FALLBACK;
> -	}
> +	if (security_ftr_enabled(SEC_FTR_L1D_FLUSH_TRIG2))
> +		types |= L1D_FLUSH_MTTRIG;
> +
> +	if (security_ftr_enabled(SEC_FTR_L1D_FLUSH_ORI30))
> +		types |= L1D_FLUSH_ORI;
> +
> +	enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) && \
> +		 security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR);
>  
>  	setup_rfi_flush(types, enable);
> +	setup_count_cache_flush();
>  }
>  
>  static void __init pSeries_setup_arch(void)
> @@ -549,6 +600,7 @@ static void __init pSeries_setup_arch(void)
>  	fwnmi_init();
>  
>  	pseries_setup_rfi_flush();
> +	setup_stf_barrier();
>  
>  	/* By default, only probe PCI (can be overridden by rtas_pci) */
>  	pci_add_flags(PCI_PROBE_ONLY);
> diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
> index 786bf01691c9..83619ebede93 100644
> --- a/arch/powerpc/xmon/xmon.c
> +++ b/arch/powerpc/xmon/xmon.c
> @@ -2144,6 +2144,8 @@ static void dump_one_paca(int cpu)
>  	DUMP(p, slb_cache_ptr, "x");
>  	for (i = 0; i < SLB_CACHE_ENTRIES; i++)
>  		printf(" slb_cache[%d]:        = 0x%016lx\n", i, p->slb_cache[i]);
> +
> +	DUMP(p, rfi_flush_fallback_area, "px");
>  #endif
>  	DUMP(p, dscr_default, "llx");
>  #ifdef CONFIG_PPC_BOOK3E
> -- 
> 2.20.1
>



* Re: [PATCH stable v4.4 00/52] powerpc spectre backports for 4.4
  2019-04-22 15:27     ` Diana Madalina Craciun
@ 2019-04-24 13:48       ` Greg KH
  -1 siblings, 0 replies; 180+ messages in thread
From: Greg KH @ 2019-04-24 13:48 UTC (permalink / raw)
  To: Diana Madalina Craciun
  Cc: Michael Ellerman, stable, linuxppc-dev, msuchanek, npiggin,
	christophe.leroy

On Mon, Apr 22, 2019 at 03:27:56PM +0000, Diana Madalina Craciun wrote:
> On 4/21/2019 7:34 PM, Greg KH wrote:
> > On Mon, Apr 22, 2019 at 12:19:45AM +1000, Michael Ellerman wrote:
> >> -----BEGIN PGP SIGNED MESSAGE-----
> >> Hash: SHA1
> >>
> >> Hi Greg/Sasha,
> >>
> >> Please queue up these powerpc patches for 4.4 if you have no objections.
> > why?  Do you, or someone else, really care about spectre issues in 4.4?
> > Who is using ppc for 4.4 besides a specific enterprise distro (and they
> > don't seem to be pulling in my stable updates anyway...)?
> 
> We (NXP) received questions from customers regarding Spectre mitigations
> on kernel 4.4. Not sure if they really need them as some systems are
> enclosed embedded ones, but they asked for them.

"Asking about", and "actually needing them" are two different things, as
you state.  It would be good to get confirmation from someone that these
are "actually needed".

thanks,

greg k-h


* Re: [PATCH stable v4.4 00/52] powerpc spectre backports for 4.4
  2019-04-21 16:34   ` Greg KH
@ 2019-04-28  6:17     ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-28  6:17 UTC (permalink / raw)
  To: Greg KH
  Cc: stable, linuxppc-dev, diana.craciun, msuchanek, npiggin,
	christophe.leroy

Greg KH <gregkh@linuxfoundation.org> writes:

> On Mon, Apr 22, 2019 at 12:19:45AM +1000, Michael Ellerman wrote:
>> -----BEGIN PGP SIGNED MESSAGE-----
>> Hash: SHA1
>> 
>> Hi Greg/Sasha,
>> 
>> Please queue up these powerpc patches for 4.4 if you have no objections.
>
> why?  Do you, or someone else, really care about spectre issues in 4.4?
> Who is using ppc for 4.4 besides a specific enterprise distro (and they
> don't seem to be pulling in my stable updates anyway...)?

Someone asked for it, but TBH I can't remember who it was. I can chase
it up if you like.

cheers


* Re: [PATCH stable v4.4 00/52] powerpc spectre backports for 4.4
  2019-04-22 15:32   ` Diana Madalina Craciun
@ 2019-04-28  6:20     ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-28  6:20 UTC (permalink / raw)
  To: Diana Madalina Craciun, stable, gregkh
  Cc: linuxppc-dev, msuchanek, npiggin, christophe.leroy

Diana Madalina Craciun <diana.craciun@nxp.com> writes:
> Hi Michael,
>
> There are some missing NXP Spectre v2 patches. I can send them
> separately if the series will be accepted. I have merged them, but I did
> not test them, I was sick today and incapable of doing that.

No worries, there's no rush :)

Sorry I missed them, I thought I had a list that included everything.
Which commits was it I missed?

I guess post them as a reply to this thread? That way whether the series
is merged by Greg or not, there's a record here of what the backports
look like.

cheers

> On 4/21/2019 5:21 PM, Michael Ellerman wrote:
>> -----BEGIN PGP SIGNED MESSAGE-----
>> Hash: SHA1
>>
>> Hi Greg/Sasha,
>>
>> Please queue up these powerpc patches for 4.4 if you have no objections.
>>
>> cheers
>>
>>
>> Christophe Leroy (1):
>>   powerpc/fsl: Fix the flush of branch predictor.
>>
>> Diana Craciun (10):
>>   powerpc/64: Disable the speculation barrier from the command line
>>   powerpc/64: Make stf barrier PPC_BOOK3S_64 specific.
>>   powerpc/64: Make meltdown reporting Book3S 64 specific
>>   powerpc/fsl: Add barrier_nospec implementation for NXP PowerPC Book3E
>>   powerpc/fsl: Add infrastructure to fixup branch predictor flush
>>   powerpc/fsl: Add macro to flush the branch predictor
>>   powerpc/fsl: Fix spectre_v2 mitigations reporting
>>   powerpc/fsl: Add nospectre_v2 command line argument
>>   powerpc/fsl: Flush the branch predictor at each kernel entry (64bit)
>>   powerpc/fsl: Update Spectre v2 reporting
>>
>> Mauricio Faria de Oliveira (4):
>>   powerpc/rfi-flush: Differentiate enabled and patched flush types
>>   powerpc/pseries: Fix clearing of security feature flags
>>   powerpc: Move default security feature flags
>>   powerpc/pseries: Restore default security feature flags on setup
>>
>> Michael Ellerman (29):
>>   powerpc/xmon: Add RFI flush related fields to paca dump
>>   powerpc/pseries: Support firmware disable of RFI flush
>>   powerpc/powernv: Support firmware disable of RFI flush
>>   powerpc/rfi-flush: Move the logic to avoid a redo into the debugfs
>>     code
>>   powerpc/rfi-flush: Make it possible to call setup_rfi_flush() again
>>   powerpc/rfi-flush: Always enable fallback flush on pseries
>>   powerpc/pseries: Add new H_GET_CPU_CHARACTERISTICS flags
>>   powerpc/rfi-flush: Call setup_rfi_flush() after LPM migration
>>   powerpc: Add security feature flags for Spectre/Meltdown
>>   powerpc/pseries: Set or clear security feature flags
>>   powerpc/powernv: Set or clear security feature flags
>>   powerpc/64s: Move cpu_show_meltdown()
>>   powerpc/64s: Enhance the information in cpu_show_meltdown()
>>   powerpc/powernv: Use the security flags in pnv_setup_rfi_flush()
>>   powerpc/pseries: Use the security flags in pseries_setup_rfi_flush()
>>   powerpc/64s: Wire up cpu_show_spectre_v1()
>>   powerpc/64s: Wire up cpu_show_spectre_v2()
>>   powerpc/64s: Fix section mismatch warnings from setup_rfi_flush()
>>   powerpc/64: Use barrier_nospec in syscall entry
>>   powerpc: Use barrier_nospec in copy_from_user()
>>   powerpc64s: Show ori31 availability in spectre_v1 sysfs file not v2
>>   powerpc/64: Add CONFIG_PPC_BARRIER_NOSPEC
>>   powerpc/64: Call setup_barrier_nospec() from setup_arch()
>>   powerpc/asm: Add a patch_site macro & helpers for patching
>>     instructions
>>   powerpc/64s: Add new security feature flags for count cache flush
>>   powerpc/64s: Add support for software count cache flush
>>   powerpc/pseries: Query hypervisor for count cache flush settings
>>   powerpc/powernv: Query firmware for count cache flush settings
>>   powerpc/security: Fix spectre_v2 reporting
>>
>> Michael Neuling (1):
>>   powerpc: Avoid code patching freed init sections
>>
>> Michal Suchanek (5):
>>   powerpc/64s: Add barrier_nospec
>>   powerpc/64s: Add support for ori barrier_nospec patching
>>   powerpc/64s: Patch barrier_nospec in modules
>>   powerpc/64s: Enable barrier_nospec based on firmware settings
>>   powerpc/64s: Enhance the information in cpu_show_spectre_v1()
>>
>> Nicholas Piggin (2):
>>   powerpc/64s: Improve RFI L1-D cache flush fallback
>>   powerpc/64s: Add support for a store forwarding barrier at kernel
>>     entry/exit
>>
>>  arch/powerpc/Kconfig                         |   7 +-
>>  arch/powerpc/include/asm/asm-prototypes.h    |  21 +
>>  arch/powerpc/include/asm/barrier.h           |  21 +
>>  arch/powerpc/include/asm/code-patching-asm.h |  18 +
>>  arch/powerpc/include/asm/code-patching.h     |   2 +
>>  arch/powerpc/include/asm/exception-64s.h     |  35 ++
>>  arch/powerpc/include/asm/feature-fixups.h    |  40 ++
>>  arch/powerpc/include/asm/hvcall.h            |   5 +
>>  arch/powerpc/include/asm/paca.h              |   3 +-
>>  arch/powerpc/include/asm/ppc-opcode.h        |   1 +
>>  arch/powerpc/include/asm/ppc_asm.h           |  11 +
>>  arch/powerpc/include/asm/security_features.h |  92 ++++
>>  arch/powerpc/include/asm/setup.h             |  23 +-
>>  arch/powerpc/include/asm/uaccess.h           |  18 +-
>>  arch/powerpc/kernel/Makefile                 |   1 +
>>  arch/powerpc/kernel/asm-offsets.c            |   3 +-
>>  arch/powerpc/kernel/entry_64.S               |  69 +++
>>  arch/powerpc/kernel/exceptions-64e.S         |  27 +-
>>  arch/powerpc/kernel/exceptions-64s.S         |  98 +++--
>>  arch/powerpc/kernel/module.c                 |  10 +-
>>  arch/powerpc/kernel/security.c               | 433 +++++++++++++++++++
>>  arch/powerpc/kernel/setup_32.c               |   2 +
>>  arch/powerpc/kernel/setup_64.c               |  50 +--
>>  arch/powerpc/kernel/vmlinux.lds.S            |  33 +-
>>  arch/powerpc/lib/code-patching.c             |  29 ++
>>  arch/powerpc/lib/feature-fixups.c            | 218 +++++++++-
>>  arch/powerpc/mm/mem.c                        |   2 +
>>  arch/powerpc/mm/tlb_low_64e.S                |   7 +
>>  arch/powerpc/platforms/powernv/setup.c       |  99 +++--
>>  arch/powerpc/platforms/pseries/mobility.c    |   3 +
>>  arch/powerpc/platforms/pseries/pseries.h     |   2 +
>>  arch/powerpc/platforms/pseries/setup.c       |  88 +++-
>>  arch/powerpc/xmon/xmon.c                     |   2 +
>>  33 files changed, 1345 insertions(+), 128 deletions(-)
>>  create mode 100644 arch/powerpc/include/asm/asm-prototypes.h
>>  create mode 100644 arch/powerpc/include/asm/code-patching-asm.h
>>  create mode 100644 arch/powerpc/include/asm/security_features.h
>>  create mode 100644 arch/powerpc/kernel/security.c
>>
>> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
>> index 58a1fa979655..01b6c00a7060 100644
>> --- a/arch/powerpc/Kconfig
>> +++ b/arch/powerpc/Kconfig
>> @@ -136,7 +136,7 @@ config PPC
>>  	select GENERIC_SMP_IDLE_THREAD
>>  	select GENERIC_CMOS_UPDATE
>>  	select GENERIC_TIME_VSYSCALL_OLD
>> -	select GENERIC_CPU_VULNERABILITIES	if PPC_BOOK3S_64
>> +	select GENERIC_CPU_VULNERABILITIES	if PPC_BARRIER_NOSPEC
>>  	select GENERIC_CLOCKEVENTS
>>  	select GENERIC_CLOCKEVENTS_BROADCAST if SMP
>>  	select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
>> @@ -162,6 +162,11 @@ config PPC
>>  	select ARCH_HAS_DMA_SET_COHERENT_MASK
>>  	select HAVE_ARCH_SECCOMP_FILTER
>>  
>> +config PPC_BARRIER_NOSPEC
>> +    bool
>> +    default y
>> +    depends on PPC_BOOK3S_64 || PPC_FSL_BOOK3E
>> +
>>  config GENERIC_CSUM
>>  	def_bool CPU_LITTLE_ENDIAN
>>  
>> diff --git a/arch/powerpc/include/asm/asm-prototypes.h b/arch/powerpc/include/asm/asm-prototypes.h
>> new file mode 100644
>> index 000000000000..8944c55591cf
>> --- /dev/null
>> +++ b/arch/powerpc/include/asm/asm-prototypes.h
>> @@ -0,0 +1,21 @@
>> +#ifndef _ASM_POWERPC_ASM_PROTOTYPES_H
>> +#define _ASM_POWERPC_ASM_PROTOTYPES_H
>> +/*
>> + * This file is for prototypes of C functions that are only called
>> + * from asm, and any associated variables.
>> + *
>> + * Copyright 2016, Daniel Axtens, IBM Corporation.
>> + *
>> + * This program is free software; you can redistribute it and/or
>> + * modify it under the terms of the GNU General Public License
>> + * as published by the Free Software Foundation; either version 2
>> + * of the License, or (at your option) any later version.
>> + */
>> +
>> +/* Patch sites */
>> +extern s32 patch__call_flush_count_cache;
>> +extern s32 patch__flush_count_cache_return;
>> +
>> +extern long flush_count_cache;
>> +
>> +#endif /* _ASM_POWERPC_ASM_PROTOTYPES_H */
>> diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
>> index b9e16855a037..e7cb72cdb2ba 100644
>> --- a/arch/powerpc/include/asm/barrier.h
>> +++ b/arch/powerpc/include/asm/barrier.h
>> @@ -92,4 +92,25 @@ do {									\
>>  #define smp_mb__after_atomic()      smp_mb()
>>  #define smp_mb__before_spinlock()   smp_mb()
>>  
>> +#ifdef CONFIG_PPC_BOOK3S_64
>> +#define NOSPEC_BARRIER_SLOT   nop
>> +#elif defined(CONFIG_PPC_FSL_BOOK3E)
>> +#define NOSPEC_BARRIER_SLOT   nop; nop
>> +#endif
>> +
>> +#ifdef CONFIG_PPC_BARRIER_NOSPEC
>> +/*
>> + * Prevent execution of subsequent instructions until preceding branches have
>> + * been fully resolved and are no longer executing speculatively.
>> + */
>> +#define barrier_nospec_asm NOSPEC_BARRIER_FIXUP_SECTION; NOSPEC_BARRIER_SLOT
>> +
>> +// This also acts as a compiler barrier due to the memory clobber.
>> +#define barrier_nospec() asm (stringify_in_c(barrier_nospec_asm) ::: "memory")
>> +
>> +#else /* !CONFIG_PPC_BARRIER_NOSPEC */
>> +#define barrier_nospec_asm
>> +#define barrier_nospec()
>> +#endif /* CONFIG_PPC_BARRIER_NOSPEC */
>> +
>>  #endif /* _ASM_POWERPC_BARRIER_H */
>> diff --git a/arch/powerpc/include/asm/code-patching-asm.h b/arch/powerpc/include/asm/code-patching-asm.h
>> new file mode 100644
>> index 000000000000..ed7b1448493a
>> --- /dev/null
>> +++ b/arch/powerpc/include/asm/code-patching-asm.h
>> @@ -0,0 +1,18 @@
>> +/* SPDX-License-Identifier: GPL-2.0+ */
>> +/*
>> + * Copyright 2018, Michael Ellerman, IBM Corporation.
>> + */
>> +#ifndef _ASM_POWERPC_CODE_PATCHING_ASM_H
>> +#define _ASM_POWERPC_CODE_PATCHING_ASM_H
>> +
>> +/* Define a "site" that can be patched */
>> +.macro patch_site label name
>> +	.pushsection ".rodata"
>> +	.balign 4
>> +	.global \name
>> +\name:
>> +	.4byte	\label - .
>> +	.popsection
>> +.endm
>> +
>> +#endif /* _ASM_POWERPC_CODE_PATCHING_ASM_H */
>> diff --git a/arch/powerpc/include/asm/code-patching.h b/arch/powerpc/include/asm/code-patching.h
>> index 840a5509b3f1..a734b4b34d26 100644
>> --- a/arch/powerpc/include/asm/code-patching.h
>> +++ b/arch/powerpc/include/asm/code-patching.h
>> @@ -28,6 +28,8 @@ unsigned int create_cond_branch(const unsigned int *addr,
>>  				unsigned long target, int flags);
>>  int patch_branch(unsigned int *addr, unsigned long target, int flags);
>>  int patch_instruction(unsigned int *addr, unsigned int instr);
>> +int patch_instruction_site(s32 *addr, unsigned int instr);
>> +int patch_branch_site(s32 *site, unsigned long target, int flags);
>>  
>>  int instr_is_relative_branch(unsigned int instr);
>>  int instr_is_branch_to_addr(const unsigned int *instr, unsigned long addr);
>> diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
>> index 9bddbec441b8..3ed536bec462 100644
>> --- a/arch/powerpc/include/asm/exception-64s.h
>> +++ b/arch/powerpc/include/asm/exception-64s.h
>> @@ -50,6 +50,27 @@
>>  #define EX_PPR		88	/* SMT thread status register (priority) */
>>  #define EX_CTR		96
>>  
>> +#define STF_ENTRY_BARRIER_SLOT						\
>> +	STF_ENTRY_BARRIER_FIXUP_SECTION;				\
>> +	nop;								\
>> +	nop;								\
>> +	nop
>> +
>> +#define STF_EXIT_BARRIER_SLOT						\
>> +	STF_EXIT_BARRIER_FIXUP_SECTION;					\
>> +	nop;								\
>> +	nop;								\
>> +	nop;								\
>> +	nop;								\
>> +	nop;								\
>> +	nop
>> +
>> +/*
>> + * r10 must be free to use, r13 must be paca
>> + */
>> +#define INTERRUPT_TO_KERNEL						\
>> +	STF_ENTRY_BARRIER_SLOT
>> +
>>  /*
>>   * Macros for annotating the expected destination of (h)rfid
>>   *
>> @@ -66,16 +87,19 @@
>>  	rfid
>>  
>>  #define RFI_TO_USER							\
>> +	STF_EXIT_BARRIER_SLOT;						\
>>  	RFI_FLUSH_SLOT;							\
>>  	rfid;								\
>>  	b	rfi_flush_fallback
>>  
>>  #define RFI_TO_USER_OR_KERNEL						\
>> +	STF_EXIT_BARRIER_SLOT;						\
>>  	RFI_FLUSH_SLOT;							\
>>  	rfid;								\
>>  	b	rfi_flush_fallback
>>  
>>  #define RFI_TO_GUEST							\
>> +	STF_EXIT_BARRIER_SLOT;						\
>>  	RFI_FLUSH_SLOT;							\
>>  	rfid;								\
>>  	b	rfi_flush_fallback
>> @@ -84,21 +108,25 @@
>>  	hrfid
>>  
>>  #define HRFI_TO_USER							\
>> +	STF_EXIT_BARRIER_SLOT;						\
>>  	RFI_FLUSH_SLOT;							\
>>  	hrfid;								\
>>  	b	hrfi_flush_fallback
>>  
>>  #define HRFI_TO_USER_OR_KERNEL						\
>> +	STF_EXIT_BARRIER_SLOT;						\
>>  	RFI_FLUSH_SLOT;							\
>>  	hrfid;								\
>>  	b	hrfi_flush_fallback
>>  
>>  #define HRFI_TO_GUEST							\
>> +	STF_EXIT_BARRIER_SLOT;						\
>>  	RFI_FLUSH_SLOT;							\
>>  	hrfid;								\
>>  	b	hrfi_flush_fallback
>>  
>>  #define HRFI_TO_UNKNOWN							\
>> +	STF_EXIT_BARRIER_SLOT;						\
>>  	RFI_FLUSH_SLOT;							\
>>  	hrfid;								\
>>  	b	hrfi_flush_fallback
>> @@ -226,6 +254,7 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
>>  #define __EXCEPTION_PROLOG_1(area, extra, vec)				\
>>  	OPT_SAVE_REG_TO_PACA(area+EX_PPR, r9, CPU_FTR_HAS_PPR);		\
>>  	OPT_SAVE_REG_TO_PACA(area+EX_CFAR, r10, CPU_FTR_CFAR);		\
>> +	INTERRUPT_TO_KERNEL;						\
>>  	SAVE_CTR(r10, area);						\
>>  	mfcr	r9;							\
>>  	extra(vec);							\
>> @@ -512,6 +541,12 @@ label##_relon_hv:						\
>>  #define _MASKABLE_EXCEPTION_PSERIES(vec, label, h, extra)		\
>>  	__MASKABLE_EXCEPTION_PSERIES(vec, label, h, extra)
>>  
>> +#define MASKABLE_EXCEPTION_OOL(vec, label)				\
>> +	.globl label##_ool;						\
>> +label##_ool:								\
>> +	EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_TEST_PR, vec);		\
>> +	EXCEPTION_PROLOG_PSERIES_1(label##_common, EXC_STD);
>> +
>>  #define MASKABLE_EXCEPTION_PSERIES(loc, vec, label)			\
>>  	. = loc;							\
>>  	.globl label##_pSeries;						\
>> diff --git a/arch/powerpc/include/asm/feature-fixups.h b/arch/powerpc/include/asm/feature-fixups.h
>> index 7068bafbb2d6..145a37ab2d3e 100644
>> --- a/arch/powerpc/include/asm/feature-fixups.h
>> +++ b/arch/powerpc/include/asm/feature-fixups.h
>> @@ -184,6 +184,22 @@ label##3:					       	\
>>  	FTR_ENTRY_OFFSET label##1b-label##3b;		\
>>  	.popsection;
>>  
>> +#define STF_ENTRY_BARRIER_FIXUP_SECTION			\
>> +953:							\
>> +	.pushsection __stf_entry_barrier_fixup,"a";	\
>> +	.align 2;					\
>> +954:							\
>> +	FTR_ENTRY_OFFSET 953b-954b;			\
>> +	.popsection;
>> +
>> +#define STF_EXIT_BARRIER_FIXUP_SECTION			\
>> +955:							\
>> +	.pushsection __stf_exit_barrier_fixup,"a";	\
>> +	.align 2;					\
>> +956:							\
>> +	FTR_ENTRY_OFFSET 955b-956b;			\
>> +	.popsection;
>> +
>>  #define RFI_FLUSH_FIXUP_SECTION				\
>>  951:							\
>>  	.pushsection __rfi_flush_fixup,"a";		\
>> @@ -192,10 +208,34 @@ label##3:					       	\
>>  	FTR_ENTRY_OFFSET 951b-952b;			\
>>  	.popsection;
>>  
>> +#define NOSPEC_BARRIER_FIXUP_SECTION			\
>> +953:							\
>> +	.pushsection __barrier_nospec_fixup,"a";	\
>> +	.align 2;					\
>> +954:							\
>> +	FTR_ENTRY_OFFSET 953b-954b;			\
>> +	.popsection;
>> +
>> +#define START_BTB_FLUSH_SECTION			\
>> +955:							\
>> +
>> +#define END_BTB_FLUSH_SECTION			\
>> +956:							\
>> +	.pushsection __btb_flush_fixup,"a";	\
>> +	.align 2;							\
>> +957:						\
>> +	FTR_ENTRY_OFFSET 955b-957b;			\
>> +	FTR_ENTRY_OFFSET 956b-957b;			\
>> +	.popsection;
>>  
>>  #ifndef __ASSEMBLY__
>>  
>> +extern long stf_barrier_fallback;
>> +extern long __start___stf_entry_barrier_fixup, __stop___stf_entry_barrier_fixup;
>> +extern long __start___stf_exit_barrier_fixup, __stop___stf_exit_barrier_fixup;
>>  extern long __start___rfi_flush_fixup, __stop___rfi_flush_fixup;
>> +extern long __start___barrier_nospec_fixup, __stop___barrier_nospec_fixup;
>> +extern long __start__btb_flush_fixup, __stop__btb_flush_fixup;
>>  
>>  #endif
>>  
>> diff --git a/arch/powerpc/include/asm/hvcall.h b/arch/powerpc/include/asm/hvcall.h
>> index 449bbb87c257..b57db9d09db9 100644
>> --- a/arch/powerpc/include/asm/hvcall.h
>> +++ b/arch/powerpc/include/asm/hvcall.h
>> @@ -292,10 +292,15 @@
>>  #define H_CPU_CHAR_L1D_FLUSH_ORI30	(1ull << 61) // IBM bit 2
>>  #define H_CPU_CHAR_L1D_FLUSH_TRIG2	(1ull << 60) // IBM bit 3
>>  #define H_CPU_CHAR_L1D_THREAD_PRIV	(1ull << 59) // IBM bit 4
>> +#define H_CPU_CHAR_BRANCH_HINTS_HONORED	(1ull << 58) // IBM bit 5
>> +#define H_CPU_CHAR_THREAD_RECONFIG_CTRL	(1ull << 57) // IBM bit 6
>> +#define H_CPU_CHAR_COUNT_CACHE_DISABLED	(1ull << 56) // IBM bit 7
>> +#define H_CPU_CHAR_BCCTR_FLUSH_ASSIST	(1ull << 54) // IBM bit 9
>>  
>>  #define H_CPU_BEHAV_FAVOUR_SECURITY	(1ull << 63) // IBM bit 0
>>  #define H_CPU_BEHAV_L1D_FLUSH_PR	(1ull << 62) // IBM bit 1
>>  #define H_CPU_BEHAV_BNDS_CHK_SPEC_BAR	(1ull << 61) // IBM bit 2
>> +#define H_CPU_BEHAV_FLUSH_COUNT_CACHE	(1ull << 58) // IBM bit 5
>>  
>>  #ifndef __ASSEMBLY__
>>  #include <linux/types.h>
>> diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h
>> index 45e2aefece16..08e5df3395fa 100644
>> --- a/arch/powerpc/include/asm/paca.h
>> +++ b/arch/powerpc/include/asm/paca.h
>> @@ -199,8 +199,7 @@ struct paca_struct {
>>  	 */
>>  	u64 exrfi[13] __aligned(0x80);
>>  	void *rfi_flush_fallback_area;
>> -	u64 l1d_flush_congruence;
>> -	u64 l1d_flush_sets;
>> +	u64 l1d_flush_size;
>>  #endif
>>  };
>>  
>> diff --git a/arch/powerpc/include/asm/ppc-opcode.h b/arch/powerpc/include/asm/ppc-opcode.h
>> index 7ab04fc59e24..faf1bb045dee 100644
>> --- a/arch/powerpc/include/asm/ppc-opcode.h
>> +++ b/arch/powerpc/include/asm/ppc-opcode.h
>> @@ -147,6 +147,7 @@
>>  #define PPC_INST_LWSYNC			0x7c2004ac
>>  #define PPC_INST_SYNC			0x7c0004ac
>>  #define PPC_INST_SYNC_MASK		0xfc0007fe
>> +#define PPC_INST_ISYNC			0x4c00012c
>>  #define PPC_INST_LXVD2X			0x7c000698
>>  #define PPC_INST_MCRXR			0x7c000400
>>  #define PPC_INST_MCRXR_MASK		0xfc0007fe
>> diff --git a/arch/powerpc/include/asm/ppc_asm.h b/arch/powerpc/include/asm/ppc_asm.h
>> index 160bb2311bbb..d219816b3e19 100644
>> --- a/arch/powerpc/include/asm/ppc_asm.h
>> +++ b/arch/powerpc/include/asm/ppc_asm.h
>> @@ -821,4 +821,15 @@ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,945)
>>  	.long 0x2400004c  /* rfid				*/
>>  #endif /* !CONFIG_PPC_BOOK3E */
>>  #endif /*  __ASSEMBLY__ */
>> +
>> +#ifdef CONFIG_PPC_FSL_BOOK3E
>> +#define BTB_FLUSH(reg)			\
>> +	lis reg,BUCSR_INIT@h;		\
>> +	ori reg,reg,BUCSR_INIT@l;	\
>> +	mtspr SPRN_BUCSR,reg;		\
>> +	isync;
>> +#else
>> +#define BTB_FLUSH(reg)
>> +#endif /* CONFIG_PPC_FSL_BOOK3E */
>> +
>>  #endif /* _ASM_POWERPC_PPC_ASM_H */
>> diff --git a/arch/powerpc/include/asm/security_features.h b/arch/powerpc/include/asm/security_features.h
>> new file mode 100644
>> index 000000000000..759597bf0fd8
>> --- /dev/null
>> +++ b/arch/powerpc/include/asm/security_features.h
>> @@ -0,0 +1,92 @@
>> +/* SPDX-License-Identifier: GPL-2.0+ */
>> +/*
>> + * Security related feature bit definitions.
>> + *
>> + * Copyright 2018, Michael Ellerman, IBM Corporation.
>> + */
>> +
>> +#ifndef _ASM_POWERPC_SECURITY_FEATURES_H
>> +#define _ASM_POWERPC_SECURITY_FEATURES_H
>> +
>> +
>> +extern unsigned long powerpc_security_features;
>> +extern bool rfi_flush;
>> +
>> +/* These are bit flags */
>> +enum stf_barrier_type {
>> +	STF_BARRIER_NONE	= 0x1,
>> +	STF_BARRIER_FALLBACK	= 0x2,
>> +	STF_BARRIER_EIEIO	= 0x4,
>> +	STF_BARRIER_SYNC_ORI	= 0x8,
>> +};
>> +
>> +void setup_stf_barrier(void);
>> +void do_stf_barrier_fixups(enum stf_barrier_type types);
>> +void setup_count_cache_flush(void);
>> +
>> +static inline void security_ftr_set(unsigned long feature)
>> +{
>> +	powerpc_security_features |= feature;
>> +}
>> +
>> +static inline void security_ftr_clear(unsigned long feature)
>> +{
>> +	powerpc_security_features &= ~feature;
>> +}
>> +
>> +static inline bool security_ftr_enabled(unsigned long feature)
>> +{
>> +	return !!(powerpc_security_features & feature);
>> +}
>> +
>> +
>> +// Features indicating support for Spectre/Meltdown mitigations
>> +
>> +// The L1-D cache can be flushed with ori r30,r30,0
>> +#define SEC_FTR_L1D_FLUSH_ORI30		0x0000000000000001ull
>> +
>> +// The L1-D cache can be flushed with mtspr 882,r0 (aka SPRN_TRIG2)
>> +#define SEC_FTR_L1D_FLUSH_TRIG2		0x0000000000000002ull
>> +
>> +// ori r31,r31,0 acts as a speculation barrier
>> +#define SEC_FTR_SPEC_BAR_ORI31		0x0000000000000004ull
>> +
>> +// Speculation past bctr is disabled
>> +#define SEC_FTR_BCCTRL_SERIALISED	0x0000000000000008ull
>> +
>> +// Entries in L1-D are private to a SMT thread
>> +#define SEC_FTR_L1D_THREAD_PRIV		0x0000000000000010ull
>> +
>> +// Indirect branch prediction cache disabled
>> +#define SEC_FTR_COUNT_CACHE_DISABLED	0x0000000000000020ull
>> +
>> +// bcctr 2,0,0 triggers a hardware assisted count cache flush
>> +#define SEC_FTR_BCCTR_FLUSH_ASSIST	0x0000000000000800ull
>> +
>> +
>> +// Features indicating need for Spectre/Meltdown mitigations
>> +
>> +// The L1-D cache should be flushed on MSR[HV] 1->0 transition (hypervisor to guest)
>> +#define SEC_FTR_L1D_FLUSH_HV		0x0000000000000040ull
>> +
>> +// The L1-D cache should be flushed on MSR[PR] 0->1 transition (kernel to userspace)
>> +#define SEC_FTR_L1D_FLUSH_PR		0x0000000000000080ull
>> +
>> +// A speculation barrier should be used for bounds checks (Spectre variant 1)
>> +#define SEC_FTR_BNDS_CHK_SPEC_BAR	0x0000000000000100ull
>> +
>> +// Firmware configuration indicates user favours security over performance
>> +#define SEC_FTR_FAVOUR_SECURITY		0x0000000000000200ull
>> +
>> +// Software required to flush count cache on context switch
>> +#define SEC_FTR_FLUSH_COUNT_CACHE	0x0000000000000400ull
>> +
>> +
>> +// Features enabled by default
>> +#define SEC_FTR_DEFAULT \
>> +	(SEC_FTR_L1D_FLUSH_HV | \
>> +	 SEC_FTR_L1D_FLUSH_PR | \
>> +	 SEC_FTR_BNDS_CHK_SPEC_BAR | \
>> +	 SEC_FTR_FAVOUR_SECURITY)
>> +
>> +#endif /* _ASM_POWERPC_SECURITY_FEATURES_H */
>> diff --git a/arch/powerpc/include/asm/setup.h b/arch/powerpc/include/asm/setup.h
>> index 7916b56f2e60..d299479c770b 100644
>> --- a/arch/powerpc/include/asm/setup.h
>> +++ b/arch/powerpc/include/asm/setup.h
>> @@ -8,6 +8,7 @@ extern void ppc_printk_progress(char *s, unsigned short hex);
>>  
>>  extern unsigned int rtas_data;
>>  extern unsigned long long memory_limit;
>> +extern bool init_mem_is_free;
>>  extern unsigned long klimit;
>>  extern void *zalloc_maybe_bootmem(size_t size, gfp_t mask);
>>  
>> @@ -36,8 +37,28 @@ enum l1d_flush_type {
>>  	L1D_FLUSH_MTTRIG	= 0x8,
>>  };
>>  
>> -void __init setup_rfi_flush(enum l1d_flush_type, bool enable);
>> +void setup_rfi_flush(enum l1d_flush_type, bool enable);
>>  void do_rfi_flush_fixups(enum l1d_flush_type types);
>> +#ifdef CONFIG_PPC_BARRIER_NOSPEC
>> +void setup_barrier_nospec(void);
>> +#else
>> +static inline void setup_barrier_nospec(void) { };
>> +#endif
>> +void do_barrier_nospec_fixups(bool enable);
>> +extern bool barrier_nospec_enabled;
>> +
>> +#ifdef CONFIG_PPC_BARRIER_NOSPEC
>> +void do_barrier_nospec_fixups_range(bool enable, void *start, void *end);
>> +#else
>> +static inline void do_barrier_nospec_fixups_range(bool enable, void *start, void *end) { };
>> +#endif
>> +
>> +#ifdef CONFIG_PPC_FSL_BOOK3E
>> +void setup_spectre_v2(void);
>> +#else
>> +static inline void setup_spectre_v2(void) {};
>> +#endif
>> +void do_btb_flush_fixups(void);
>>  
>>  #endif /* !__ASSEMBLY__ */
>>  
>> diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
>> index 05f1389228d2..e51ce5a0e221 100644
>> --- a/arch/powerpc/include/asm/uaccess.h
>> +++ b/arch/powerpc/include/asm/uaccess.h
>> @@ -269,6 +269,7 @@ do {								\
>>  	__chk_user_ptr(ptr);					\
>>  	if (!is_kernel_addr((unsigned long)__gu_addr))		\
>>  		might_fault();					\
>> +	barrier_nospec();					\
>>  	__get_user_size(__gu_val, __gu_addr, (size), __gu_err);	\
>>  	(x) = (__typeof__(*(ptr)))__gu_val;			\
>>  	__gu_err;						\
>> @@ -283,6 +284,7 @@ do {								\
>>  	__chk_user_ptr(ptr);					\
>>  	if (!is_kernel_addr((unsigned long)__gu_addr))		\
>>  		might_fault();					\
>> +	barrier_nospec();					\
>>  	__get_user_size(__gu_val, __gu_addr, (size), __gu_err);	\
>>  	(x) = (__force __typeof__(*(ptr)))__gu_val;			\
>>  	__gu_err;						\
>> @@ -295,8 +297,10 @@ do {								\
>>  	unsigned long  __gu_val = 0;					\
>>  	__typeof__(*(ptr)) __user *__gu_addr = (ptr);		\
>>  	might_fault();							\
>> -	if (access_ok(VERIFY_READ, __gu_addr, (size)))			\
>> +	if (access_ok(VERIFY_READ, __gu_addr, (size))) {		\
>> +		barrier_nospec();					\
>>  		__get_user_size(__gu_val, __gu_addr, (size), __gu_err);	\
>> +	}								\
>>  	(x) = (__force __typeof__(*(ptr)))__gu_val;				\
>>  	__gu_err;							\
>>  })
>> @@ -307,6 +311,7 @@ do {								\
>>  	unsigned long __gu_val;					\
>>  	__typeof__(*(ptr)) __user *__gu_addr = (ptr);	\
>>  	__chk_user_ptr(ptr);					\
>> +	barrier_nospec();					\
>>  	__get_user_size(__gu_val, __gu_addr, (size), __gu_err);	\
>>  	(x) = (__force __typeof__(*(ptr)))__gu_val;			\
>>  	__gu_err;						\
>> @@ -323,8 +328,10 @@ extern unsigned long __copy_tofrom_user(void __user *to,
>>  static inline unsigned long copy_from_user(void *to,
>>  		const void __user *from, unsigned long n)
>>  {
>> -	if (likely(access_ok(VERIFY_READ, from, n)))
>> +	if (likely(access_ok(VERIFY_READ, from, n))) {
>> +		barrier_nospec();
>>  		return __copy_tofrom_user((__force void __user *)to, from, n);
>> +	}
>>  	memset(to, 0, n);
>>  	return n;
>>  }
>> @@ -359,21 +366,27 @@ static inline unsigned long __copy_from_user_inatomic(void *to,
>>  
>>  		switch (n) {
>>  		case 1:
>> +			barrier_nospec();
>>  			__get_user_size(*(u8 *)to, from, 1, ret);
>>  			break;
>>  		case 2:
>> +			barrier_nospec();
>>  			__get_user_size(*(u16 *)to, from, 2, ret);
>>  			break;
>>  		case 4:
>> +			barrier_nospec();
>>  			__get_user_size(*(u32 *)to, from, 4, ret);
>>  			break;
>>  		case 8:
>> +			barrier_nospec();
>>  			__get_user_size(*(u64 *)to, from, 8, ret);
>>  			break;
>>  		}
>>  		if (ret == 0)
>>  			return 0;
>>  	}
>> +
>> +	barrier_nospec();
>>  	return __copy_tofrom_user((__force void __user *)to, from, n);
>>  }
>>  
>> @@ -400,6 +413,7 @@ static inline unsigned long __copy_to_user_inatomic(void __user *to,
>>  		if (ret == 0)
>>  			return 0;
>>  	}
>> +
>>  	return __copy_tofrom_user(to, (__force const void __user *)from, n);
>>  }
>>  
>> diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
>> index ba336930d448..22ed3c32fca8 100644
>> --- a/arch/powerpc/kernel/Makefile
>> +++ b/arch/powerpc/kernel/Makefile
>> @@ -44,6 +44,7 @@ obj-$(CONFIG_PPC_BOOK3S_64)	+= cpu_setup_power.o
>>  obj-$(CONFIG_PPC_BOOK3S_64)	+= mce.o mce_power.o
>>  obj64-$(CONFIG_RELOCATABLE)	+= reloc_64.o
>>  obj-$(CONFIG_PPC_BOOK3E_64)	+= exceptions-64e.o idle_book3e.o
>> +obj-$(CONFIG_PPC_BARRIER_NOSPEC) += security.o
>>  obj-$(CONFIG_PPC64)		+= vdso64/
>>  obj-$(CONFIG_ALTIVEC)		+= vecemu.o
>>  obj-$(CONFIG_PPC_970_NAP)	+= idle_power4.o
>> diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
>> index d92705e3a0c1..de3c29c51503 100644
>> --- a/arch/powerpc/kernel/asm-offsets.c
>> +++ b/arch/powerpc/kernel/asm-offsets.c
>> @@ -245,8 +245,7 @@ int main(void)
>>  	DEFINE(PACA_IN_MCE, offsetof(struct paca_struct, in_mce));
>>  	DEFINE(PACA_RFI_FLUSH_FALLBACK_AREA, offsetof(struct paca_struct, rfi_flush_fallback_area));
>>  	DEFINE(PACA_EXRFI, offsetof(struct paca_struct, exrfi));
>> -	DEFINE(PACA_L1D_FLUSH_CONGRUENCE, offsetof(struct paca_struct, l1d_flush_congruence));
>> -	DEFINE(PACA_L1D_FLUSH_SETS, offsetof(struct paca_struct, l1d_flush_sets));
>> +	DEFINE(PACA_L1D_FLUSH_SIZE, offsetof(struct paca_struct, l1d_flush_size));
>>  #endif
>>  	DEFINE(PACAHWCPUID, offsetof(struct paca_struct, hw_cpu_id));
>>  	DEFINE(PACAKEXECSTATE, offsetof(struct paca_struct, kexec_state));
>> diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
>> index 59be96917369..6d36a4fb4acf 100644
>> --- a/arch/powerpc/kernel/entry_64.S
>> +++ b/arch/powerpc/kernel/entry_64.S
>> @@ -25,6 +25,7 @@
>>  #include <asm/page.h>
>>  #include <asm/mmu.h>
>>  #include <asm/thread_info.h>
>> +#include <asm/code-patching-asm.h>
>>  #include <asm/ppc_asm.h>
>>  #include <asm/asm-offsets.h>
>>  #include <asm/cputable.h>
>> @@ -36,6 +37,7 @@
>>  #include <asm/hw_irq.h>
>>  #include <asm/context_tracking.h>
>>  #include <asm/tm.h>
>> +#include <asm/barrier.h>
>>  #ifdef CONFIG_PPC_BOOK3S
>>  #include <asm/exception-64s.h>
>>  #else
>> @@ -75,6 +77,11 @@ END_FTR_SECTION_IFSET(CPU_FTR_TM)
>>  	std	r0,GPR0(r1)
>>  	std	r10,GPR1(r1)
>>  	beq	2f			/* if from kernel mode */
>> +#ifdef CONFIG_PPC_FSL_BOOK3E
>> +START_BTB_FLUSH_SECTION
>> +	BTB_FLUSH(r10)
>> +END_BTB_FLUSH_SECTION
>> +#endif
>>  	ACCOUNT_CPU_USER_ENTRY(r10, r11)
>>  2:	std	r2,GPR2(r1)
>>  	std	r3,GPR3(r1)
>> @@ -177,6 +184,15 @@ system_call:			/* label this so stack traces look sane */
>>  	clrldi	r8,r8,32
>>  15:
>>  	slwi	r0,r0,4
>> +
>> +	barrier_nospec_asm
>> +	/*
>> +	 * Prevent the load of the handler below (based on the user-passed
>> +	 * system call number) being speculatively executed until the test
>> +	 * against NR_syscalls and branch to .Lsyscall_enosys above has
>> +	 * committed.
>> +	 */
>> +
>>  	ldx	r12,r11,r0	/* Fetch system call handler [ptr] */
>>  	mtctr   r12
>>  	bctrl			/* Call handler */
>> @@ -440,6 +456,57 @@ _GLOBAL(ret_from_kernel_thread)
>>  	li	r3,0
>>  	b	.Lsyscall_exit
>>  
>> +#ifdef CONFIG_PPC_BOOK3S_64
>> +
>> +#define FLUSH_COUNT_CACHE	\
>> +1:	nop;			\
>> +	patch_site 1b, patch__call_flush_count_cache
>> +
>> +
>> +#define BCCTR_FLUSH	.long 0x4c400420
>> +
>> +.macro nops number
>> +	.rept \number
>> +	nop
>> +	.endr
>> +.endm
>> +
>> +.balign 32
>> +.global flush_count_cache
>> +flush_count_cache:
>> +	/* Save LR into r9 */
>> +	mflr	r9
>> +
>> +	.rept 64
>> +	bl	.+4
>> +	.endr
>> +	b	1f
>> +	nops	6
>> +
>> +	.balign 32
>> +	/* Restore LR */
>> +1:	mtlr	r9
>> +	li	r9,0x7fff
>> +	mtctr	r9
>> +
>> +	BCCTR_FLUSH
>> +
>> +2:	nop
>> +	patch_site 2b patch__flush_count_cache_return
>> +
>> +	nops	3
>> +
>> +	.rept 278
>> +	.balign 32
>> +	BCCTR_FLUSH
>> +	nops	7
>> +	.endr
>> +
>> +	blr
>> +#else
>> +#define FLUSH_COUNT_CACHE
>> +#endif /* CONFIG_PPC_BOOK3S_64 */
>> +
>>  /*
>>   * This routine switches between two different tasks.  The process
>>   * state of one is saved on its kernel stack.  Then the state
>> @@ -503,6 +570,8 @@ BEGIN_FTR_SECTION
>>  END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
>>  #endif
>>  
>> +	FLUSH_COUNT_CACHE
>> +
>>  #ifdef CONFIG_SMP
>>  	/* We need a sync somewhere here to make sure that if the
>>  	 * previous task gets rescheduled on another CPU, it sees all
>> diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
>> index 5cc93f0b52ca..48ec841ea1bf 100644
>> --- a/arch/powerpc/kernel/exceptions-64e.S
>> +++ b/arch/powerpc/kernel/exceptions-64e.S
>> @@ -295,7 +295,8 @@ END_FTR_SECTION_IFSET(CPU_FTR_EMB_HV)
>>  	andi.	r10,r11,MSR_PR;		/* save stack pointer */	    \
>>  	beq	1f;			/* branch around if supervisor */   \
>>  	ld	r1,PACAKSAVE(r13);	/* get kernel stack coming from usr */\
>> -1:	cmpdi	cr1,r1,0;		/* check if SP makes sense */	    \
>> +1:	type##_BTB_FLUSH		\
>> +	cmpdi	cr1,r1,0;		/* check if SP makes sense */	    \
>>  	bge-	cr1,exc_##n##_bad_stack;/* bad stack (TODO: out of line) */ \
>>  	mfspr	r10,SPRN_##type##_SRR0;	/* read SRR0 before touching stack */
>>  
>> @@ -327,6 +328,30 @@ END_FTR_SECTION_IFSET(CPU_FTR_EMB_HV)
>>  #define SPRN_MC_SRR0	SPRN_MCSRR0
>>  #define SPRN_MC_SRR1	SPRN_MCSRR1
>>  
>> +#ifdef CONFIG_PPC_FSL_BOOK3E
>> +#define GEN_BTB_FLUSH			\
>> +	START_BTB_FLUSH_SECTION		\
>> +		beq 1f;			\
>> +		BTB_FLUSH(r10)			\
>> +		1:		\
>> +	END_BTB_FLUSH_SECTION
>> +
>> +#define CRIT_BTB_FLUSH			\
>> +	START_BTB_FLUSH_SECTION		\
>> +		BTB_FLUSH(r10)		\
>> +	END_BTB_FLUSH_SECTION
>> +
>> +#define DBG_BTB_FLUSH CRIT_BTB_FLUSH
>> +#define MC_BTB_FLUSH CRIT_BTB_FLUSH
>> +#define GDBELL_BTB_FLUSH GEN_BTB_FLUSH
>> +#else
>> +#define GEN_BTB_FLUSH
>> +#define CRIT_BTB_FLUSH
>> +#define DBG_BTB_FLUSH
>> +#define MC_BTB_FLUSH
>> +#define GDBELL_BTB_FLUSH
>> +#endif
>> +
>>  #define NORMAL_EXCEPTION_PROLOG(n, intnum, addition)			    \
>>  	EXCEPTION_PROLOG(n, intnum, GEN, addition##_GEN(n))
>>  
>> diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
>> index 938a30fef031..10e7cec9553d 100644
>> --- a/arch/powerpc/kernel/exceptions-64s.S
>> +++ b/arch/powerpc/kernel/exceptions-64s.S
>> @@ -36,6 +36,7 @@ BEGIN_FTR_SECTION						\
>>  END_FTR_SECTION_IFSET(CPU_FTR_REAL_LE)				\
>>  	mr	r9,r13 ;					\
>>  	GET_PACA(r13) ;						\
>> +	INTERRUPT_TO_KERNEL ;					\
>>  	mfspr	r11,SPRN_SRR0 ;					\
>>  0:
>>  
>> @@ -292,7 +293,9 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
>>  	. = 0x900
>>  	.globl decrementer_pSeries
>>  decrementer_pSeries:
>> -	_MASKABLE_EXCEPTION_PSERIES(0x900, decrementer, EXC_STD, SOFTEN_TEST_PR)
>> +	SET_SCRATCH0(r13)
>> +	EXCEPTION_PROLOG_0(PACA_EXGEN)
>> +	b	decrementer_ool
>>  
>>  	STD_EXCEPTION_HV(0x980, 0x982, hdecrementer)
>>  
>> @@ -319,6 +322,7 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
>>  	OPT_GET_SPR(r9, SPRN_PPR, CPU_FTR_HAS_PPR);
>>  	HMT_MEDIUM;
>>  	std	r10,PACA_EXGEN+EX_R10(r13)
>> +	INTERRUPT_TO_KERNEL
>>  	OPT_SAVE_REG_TO_PACA(PACA_EXGEN+EX_PPR, r9, CPU_FTR_HAS_PPR);
>>  	mfcr	r9
>>  	KVMTEST(0xc00)
>> @@ -607,6 +611,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
>>  
>>  	.align	7
>>  	/* moved from 0xe00 */
>> +	MASKABLE_EXCEPTION_OOL(0x900, decrementer)
>>  	STD_EXCEPTION_HV_OOL(0xe02, h_data_storage)
>>  	KVM_HANDLER_SKIP(PACA_EXGEN, EXC_HV, 0xe02)
>>  	STD_EXCEPTION_HV_OOL(0xe22, h_instr_storage)
>> @@ -1564,6 +1569,21 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
>>  	blr
>>  #endif
>>  
>> +	.balign 16
>> +	.globl stf_barrier_fallback
>> +stf_barrier_fallback:
>> +	std	r9,PACA_EXRFI+EX_R9(r13)
>> +	std	r10,PACA_EXRFI+EX_R10(r13)
>> +	sync
>> +	ld	r9,PACA_EXRFI+EX_R9(r13)
>> +	ld	r10,PACA_EXRFI+EX_R10(r13)
>> +	ori	31,31,0
>> +	.rept 14
>> +	b	1f
>> +1:
>> +	.endr
>> +	blr
>> +
>>  	.globl rfi_flush_fallback
>>  rfi_flush_fallback:
>>  	SET_SCRATCH0(r13);
>> @@ -1571,39 +1591,37 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
>>  	std	r9,PACA_EXRFI+EX_R9(r13)
>>  	std	r10,PACA_EXRFI+EX_R10(r13)
>>  	std	r11,PACA_EXRFI+EX_R11(r13)
>> -	std	r12,PACA_EXRFI+EX_R12(r13)
>> -	std	r8,PACA_EXRFI+EX_R13(r13)
>>  	mfctr	r9
>>  	ld	r10,PACA_RFI_FLUSH_FALLBACK_AREA(r13)
>> -	ld	r11,PACA_L1D_FLUSH_SETS(r13)
>> -	ld	r12,PACA_L1D_FLUSH_CONGRUENCE(r13)
>> -	/*
>> -	 * The load adresses are at staggered offsets within cachelines,
>> -	 * which suits some pipelines better (on others it should not
>> -	 * hurt).
>> -	 */
>> -	addi	r12,r12,8
>> +	ld	r11,PACA_L1D_FLUSH_SIZE(r13)
>> +	srdi	r11,r11,(7 + 3) /* 128 byte lines, unrolled 8x */
>>  	mtctr	r11
>>  	DCBT_STOP_ALL_STREAM_IDS(r11) /* Stop prefetch streams */
>>  
>>  	/* order ld/st prior to dcbt stop all streams with flushing */
>>  	sync
>> -1:	li	r8,0
>> -	.rept	8 /* 8-way set associative */
>> -	ldx	r11,r10,r8
>> -	add	r8,r8,r12
>> -	xor	r11,r11,r11	// Ensure r11 is 0 even if fallback area is not
>> -	add	r8,r8,r11	// Add 0, this creates a dependency on the ldx
>> -	.endr
>> -	addi	r10,r10,128 /* 128 byte cache line */
>> +
>> +	/*
>> +	 * The load adresses are at staggered offsets within cachelines,
>> +	 * which suits some pipelines better (on others it should not
>> +	 * hurt).
>> +	 */
>> +1:
>> +	ld	r11,(0x80 + 8)*0(r10)
>> +	ld	r11,(0x80 + 8)*1(r10)
>> +	ld	r11,(0x80 + 8)*2(r10)
>> +	ld	r11,(0x80 + 8)*3(r10)
>> +	ld	r11,(0x80 + 8)*4(r10)
>> +	ld	r11,(0x80 + 8)*5(r10)
>> +	ld	r11,(0x80 + 8)*6(r10)
>> +	ld	r11,(0x80 + 8)*7(r10)
>> +	addi	r10,r10,0x80*8
>>  	bdnz	1b
>>  
>>  	mtctr	r9
>>  	ld	r9,PACA_EXRFI+EX_R9(r13)
>>  	ld	r10,PACA_EXRFI+EX_R10(r13)
>>  	ld	r11,PACA_EXRFI+EX_R11(r13)
>> -	ld	r12,PACA_EXRFI+EX_R12(r13)
>> -	ld	r8,PACA_EXRFI+EX_R13(r13)
>>  	GET_SCRATCH0(r13);
>>  	rfid
>>  
>> @@ -1614,39 +1632,37 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
>>  	std	r9,PACA_EXRFI+EX_R9(r13)
>>  	std	r10,PACA_EXRFI+EX_R10(r13)
>>  	std	r11,PACA_EXRFI+EX_R11(r13)
>> -	std	r12,PACA_EXRFI+EX_R12(r13)
>> -	std	r8,PACA_EXRFI+EX_R13(r13)
>>  	mfctr	r9
>>  	ld	r10,PACA_RFI_FLUSH_FALLBACK_AREA(r13)
>> -	ld	r11,PACA_L1D_FLUSH_SETS(r13)
>> -	ld	r12,PACA_L1D_FLUSH_CONGRUENCE(r13)
>> -	/*
>> -	 * The load adresses are at staggered offsets within cachelines,
>> -	 * which suits some pipelines better (on others it should not
>> -	 * hurt).
>> -	 */
>> -	addi	r12,r12,8
>> +	ld	r11,PACA_L1D_FLUSH_SIZE(r13)
>> +	srdi	r11,r11,(7 + 3) /* 128 byte lines, unrolled 8x */
>>  	mtctr	r11
>>  	DCBT_STOP_ALL_STREAM_IDS(r11) /* Stop prefetch streams */
>>  
>>  	/* order ld/st prior to dcbt stop all streams with flushing */
>>  	sync
>> -1:	li	r8,0
>> -	.rept	8 /* 8-way set associative */
>> -	ldx	r11,r10,r8
>> -	add	r8,r8,r12
>> -	xor	r11,r11,r11	// Ensure r11 is 0 even if fallback area is not
>> -	add	r8,r8,r11	// Add 0, this creates a dependency on the ldx
>> -	.endr
>> -	addi	r10,r10,128 /* 128 byte cache line */
>> +
>> +	/*
>> +	 * The load adresses are at staggered offsets within cachelines,
>> +	 * which suits some pipelines better (on others it should not
>> +	 * hurt).
>> +	 */
>> +1:
>> +	ld	r11,(0x80 + 8)*0(r10)
>> +	ld	r11,(0x80 + 8)*1(r10)
>> +	ld	r11,(0x80 + 8)*2(r10)
>> +	ld	r11,(0x80 + 8)*3(r10)
>> +	ld	r11,(0x80 + 8)*4(r10)
>> +	ld	r11,(0x80 + 8)*5(r10)
>> +	ld	r11,(0x80 + 8)*6(r10)
>> +	ld	r11,(0x80 + 8)*7(r10)
>> +	addi	r10,r10,0x80*8
>>  	bdnz	1b
>>  
>>  	mtctr	r9
>>  	ld	r9,PACA_EXRFI+EX_R9(r13)
>>  	ld	r10,PACA_EXRFI+EX_R10(r13)
>>  	ld	r11,PACA_EXRFI+EX_R11(r13)
>> -	ld	r12,PACA_EXRFI+EX_R12(r13)
>> -	ld	r8,PACA_EXRFI+EX_R13(r13)
>>  	GET_SCRATCH0(r13);
>>  	hrfid
>>  
>> diff --git a/arch/powerpc/kernel/module.c b/arch/powerpc/kernel/module.c
>> index 9547381b631a..ff009be97a42 100644
>> --- a/arch/powerpc/kernel/module.c
>> +++ b/arch/powerpc/kernel/module.c
>> @@ -67,7 +67,15 @@ int module_finalize(const Elf_Ehdr *hdr,
>>  		do_feature_fixups(powerpc_firmware_features,
>>  				  (void *)sect->sh_addr,
>>  				  (void *)sect->sh_addr + sect->sh_size);
>> -#endif
>> +#endif /* CONFIG_PPC64 */
>> +
>> +#ifdef CONFIG_PPC_BARRIER_NOSPEC
>> +	sect = find_section(hdr, sechdrs, "__spec_barrier_fixup");
>> +	if (sect != NULL)
>> +		do_barrier_nospec_fixups_range(barrier_nospec_enabled,
>> +				  (void *)sect->sh_addr,
>> +				  (void *)sect->sh_addr + sect->sh_size);
>> +#endif /* CONFIG_PPC_BARRIER_NOSPEC */
>>  
>>  	sect = find_section(hdr, sechdrs, "__lwsync_fixup");
>>  	if (sect != NULL)
>> diff --git a/arch/powerpc/kernel/security.c b/arch/powerpc/kernel/security.c
>> new file mode 100644
>> index 000000000000..58f0602a92b9
>> --- /dev/null
>> +++ b/arch/powerpc/kernel/security.c
>> @@ -0,0 +1,433 @@
>> +// SPDX-License-Identifier: GPL-2.0+
>> +//
>> +// Security related flags and so on.
>> +//
>> +// Copyright 2018, Michael Ellerman, IBM Corporation.
>> +
>> +#include <linux/kernel.h>
>> +#include <linux/debugfs.h>
>> +#include <linux/device.h>
>> +#include <linux/seq_buf.h>
>> +
>> +#include <asm/debug.h>
>> +#include <asm/asm-prototypes.h>
>> +#include <asm/code-patching.h>
>> +#include <asm/security_features.h>
>> +#include <asm/setup.h>
>> +
>> +
>> +unsigned long powerpc_security_features __read_mostly = SEC_FTR_DEFAULT;
>> +
>> +enum count_cache_flush_type {
>> +	COUNT_CACHE_FLUSH_NONE	= 0x1,
>> +	COUNT_CACHE_FLUSH_SW	= 0x2,
>> +	COUNT_CACHE_FLUSH_HW	= 0x4,
>> +};
>> +static enum count_cache_flush_type count_cache_flush_type = COUNT_CACHE_FLUSH_NONE;
>> +
>> +bool barrier_nospec_enabled;
>> +static bool no_nospec;
>> +static bool btb_flush_enabled;
>> +#ifdef CONFIG_PPC_FSL_BOOK3E
>> +static bool no_spectrev2;
>> +#endif
>> +
>> +static void enable_barrier_nospec(bool enable)
>> +{
>> +	barrier_nospec_enabled = enable;
>> +	do_barrier_nospec_fixups(enable);
>> +}
>> +
>> +void setup_barrier_nospec(void)
>> +{
>> +	bool enable;
>> +
>> +	/*
>> +	 * It would make sense to check SEC_FTR_SPEC_BAR_ORI31 below as well.
>> +	 * But there's a good reason not to. The two flags we check below are
>> +	 * both are enabled by default in the kernel, so if the hcall is not
>> +	 * functional they will be enabled.
>> +	 * On a system where the host firmware has been updated (so the ori
>> +	 * functions as a barrier), but on which the hypervisor (KVM/Qemu) has
>> +	 * not been updated, we would like to enable the barrier. Dropping the
>> +	 * check for SEC_FTR_SPEC_BAR_ORI31 achieves that. The only downside is
>> +	 * we potentially enable the barrier on systems where the host firmware
>> +	 * is not updated, but that's harmless as it's a no-op.
>> +	 */
>> +	enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) &&
>> +		 security_ftr_enabled(SEC_FTR_BNDS_CHK_SPEC_BAR);
>> +
>> +	if (!no_nospec)
>> +		enable_barrier_nospec(enable);
>> +}
>> +
>> +static int __init handle_nospectre_v1(char *p)
>> +{
>> +	no_nospec = true;
>> +
>> +	return 0;
>> +}
>> +early_param("nospectre_v1", handle_nospectre_v1);
>> +
>> +#ifdef CONFIG_DEBUG_FS
>> +static int barrier_nospec_set(void *data, u64 val)
>> +{
>> +	switch (val) {
>> +	case 0:
>> +	case 1:
>> +		break;
>> +	default:
>> +		return -EINVAL;
>> +	}
>> +
>> +	if (!!val == !!barrier_nospec_enabled)
>> +		return 0;
>> +
>> +	enable_barrier_nospec(!!val);
>> +
>> +	return 0;
>> +}
>> +
>> +static int barrier_nospec_get(void *data, u64 *val)
>> +{
>> +	*val = barrier_nospec_enabled ? 1 : 0;
>> +	return 0;
>> +}
>> +
>> +DEFINE_SIMPLE_ATTRIBUTE(fops_barrier_nospec,
>> +			barrier_nospec_get, barrier_nospec_set, "%llu\n");
>> +
>> +static __init int barrier_nospec_debugfs_init(void)
>> +{
>> +	debugfs_create_file("barrier_nospec", 0600, powerpc_debugfs_root, NULL,
>> +			    &fops_barrier_nospec);
>> +	return 0;
>> +}
>> +device_initcall(barrier_nospec_debugfs_init);
>> +#endif /* CONFIG_DEBUG_FS */
>> +
>> +#ifdef CONFIG_PPC_FSL_BOOK3E
>> +static int __init handle_nospectre_v2(char *p)
>> +{
>> +	no_spectrev2 = true;
>> +
>> +	return 0;
>> +}
>> +early_param("nospectre_v2", handle_nospectre_v2);
>> +void setup_spectre_v2(void)
>> +{
>> +	if (no_spectrev2)
>> +		do_btb_flush_fixups();
>> +	else
>> +		btb_flush_enabled = true;
>> +}
>> +#endif /* CONFIG_PPC_FSL_BOOK3E */
>> +
>> +#ifdef CONFIG_PPC_BOOK3S_64
>> +ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr, char *buf)
>> +{
>> +	bool thread_priv;
>> +
>> +	thread_priv = security_ftr_enabled(SEC_FTR_L1D_THREAD_PRIV);
>> +
>> +	if (rfi_flush || thread_priv) {
>> +		struct seq_buf s;
>> +		seq_buf_init(&s, buf, PAGE_SIZE - 1);
>> +
>> +		seq_buf_printf(&s, "Mitigation: ");
>> +
>> +		if (rfi_flush)
>> +			seq_buf_printf(&s, "RFI Flush");
>> +
>> +		if (rfi_flush && thread_priv)
>> +			seq_buf_printf(&s, ", ");
>> +
>> +		if (thread_priv)
>> +			seq_buf_printf(&s, "L1D private per thread");
>> +
>> +		seq_buf_printf(&s, "\n");
>> +
>> +		return s.len;
>> +	}
>> +
>> +	if (!security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV) &&
>> +	    !security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR))
>> +		return sprintf(buf, "Not affected\n");
>> +
>> +	return sprintf(buf, "Vulnerable\n");
>> +}
>> +#endif
>> +
>> +ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr, char *buf)
>> +{
>> +	struct seq_buf s;
>> +
>> +	seq_buf_init(&s, buf, PAGE_SIZE - 1);
>> +
>> +	if (security_ftr_enabled(SEC_FTR_BNDS_CHK_SPEC_BAR)) {
>> +		if (barrier_nospec_enabled)
>> +			seq_buf_printf(&s, "Mitigation: __user pointer sanitization");
>> +		else
>> +			seq_buf_printf(&s, "Vulnerable");
>> +
>> +		if (security_ftr_enabled(SEC_FTR_SPEC_BAR_ORI31))
>> +			seq_buf_printf(&s, ", ori31 speculation barrier enabled");
>> +
>> +		seq_buf_printf(&s, "\n");
>> +	} else
>> +		seq_buf_printf(&s, "Not affected\n");
>> +
>> +	return s.len;
>> +}
>> +
>> +ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr, char *buf)
>> +{
>> +	struct seq_buf s;
>> +	bool bcs, ccd;
>> +
>> +	seq_buf_init(&s, buf, PAGE_SIZE - 1);
>> +
>> +	bcs = security_ftr_enabled(SEC_FTR_BCCTRL_SERIALISED);
>> +	ccd = security_ftr_enabled(SEC_FTR_COUNT_CACHE_DISABLED);
>> +
>> +	if (bcs || ccd) {
>> +		seq_buf_printf(&s, "Mitigation: ");
>> +
>> +		if (bcs)
>> +			seq_buf_printf(&s, "Indirect branch serialisation (kernel only)");
>> +
>> +		if (bcs && ccd)
>> +			seq_buf_printf(&s, ", ");
>> +
>> +		if (ccd)
>> +			seq_buf_printf(&s, "Indirect branch cache disabled");
>> +	} else if (count_cache_flush_type != COUNT_CACHE_FLUSH_NONE) {
>> +		seq_buf_printf(&s, "Mitigation: Software count cache flush");
>> +
>> +		if (count_cache_flush_type == COUNT_CACHE_FLUSH_HW)
>> +			seq_buf_printf(&s, " (hardware accelerated)");
>> +	} else if (btb_flush_enabled) {
>> +		seq_buf_printf(&s, "Mitigation: Branch predictor state flush");
>> +	} else {
>> +		seq_buf_printf(&s, "Vulnerable");
>> +	}
>> +
>> +	seq_buf_printf(&s, "\n");
>> +
>> +	return s.len;
>> +}
>> +
>> +#ifdef CONFIG_PPC_BOOK3S_64
>> +/*
>> + * Store-forwarding barrier support.
>> + */
>> +
>> +static enum stf_barrier_type stf_enabled_flush_types;
>> +static bool no_stf_barrier;
>> +bool stf_barrier;
>> +
>> +static int __init handle_no_stf_barrier(char *p)
>> +{
>> +	pr_info("stf-barrier: disabled on command line.");
>> +	no_stf_barrier = true;
>> +	return 0;
>> +}
>> +
>> +early_param("no_stf_barrier", handle_no_stf_barrier);
>> +
>> +/* This is the generic flag used by other architectures */
>> +static int __init handle_ssbd(char *p)
>> +{
>> +	if (!p || strncmp(p, "auto", 5) == 0 || strncmp(p, "on", 2) == 0 ) {
>> +		/* Until firmware tells us, we have the barrier with auto */
>> +		return 0;
>> +	} else if (strncmp(p, "off", 3) == 0) {
>> +		handle_no_stf_barrier(NULL);
>> +		return 0;
>> +	} else
>> +		return 1;
>> +
>> +	return 0;
>> +}
>> +early_param("spec_store_bypass_disable", handle_ssbd);
>> +
>> +/* This is the generic flag used by other architectures */
>> +static int __init handle_no_ssbd(char *p)
>> +{
>> +	handle_no_stf_barrier(NULL);
>> +	return 0;
>> +}
>> +early_param("nospec_store_bypass_disable", handle_no_ssbd);
>> +
>> +static void stf_barrier_enable(bool enable)
>> +{
>> +	if (enable)
>> +		do_stf_barrier_fixups(stf_enabled_flush_types);
>> +	else
>> +		do_stf_barrier_fixups(STF_BARRIER_NONE);
>> +
>> +	stf_barrier = enable;
>> +}
>> +
>> +void setup_stf_barrier(void)
>> +{
>> +	enum stf_barrier_type type;
>> +	bool enable, hv;
>> +
>> +	hv = cpu_has_feature(CPU_FTR_HVMODE);
>> +
>> +	/* Default to fallback in case fw-features are not available */
>> +	if (cpu_has_feature(CPU_FTR_ARCH_207S))
>> +		type = STF_BARRIER_SYNC_ORI;
>> +	else if (cpu_has_feature(CPU_FTR_ARCH_206))
>> +		type = STF_BARRIER_FALLBACK;
>> +	else
>> +		type = STF_BARRIER_NONE;
>> +
>> +	enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) &&
>> +		(security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR) ||
>> +		 (security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV) && hv));
>> +
>> +	if (type == STF_BARRIER_FALLBACK) {
>> +		pr_info("stf-barrier: fallback barrier available\n");
>> +	} else if (type == STF_BARRIER_SYNC_ORI) {
>> +		pr_info("stf-barrier: hwsync barrier available\n");
>> +	} else if (type == STF_BARRIER_EIEIO) {
>> +		pr_info("stf-barrier: eieio barrier available\n");
>> +	}
>> +
>> +	stf_enabled_flush_types = type;
>> +
>> +	if (!no_stf_barrier)
>> +		stf_barrier_enable(enable);
>> +}
>> +
>> +ssize_t cpu_show_spec_store_bypass(struct device *dev, struct device_attribute *attr, char *buf)
>> +{
>> +	if (stf_barrier && stf_enabled_flush_types != STF_BARRIER_NONE) {
>> +		const char *type;
>> +		switch (stf_enabled_flush_types) {
>> +		case STF_BARRIER_EIEIO:
>> +			type = "eieio";
>> +			break;
>> +		case STF_BARRIER_SYNC_ORI:
>> +			type = "hwsync";
>> +			break;
>> +		case STF_BARRIER_FALLBACK:
>> +			type = "fallback";
>> +			break;
>> +		default:
>> +			type = "unknown";
>> +		}
>> +		return sprintf(buf, "Mitigation: Kernel entry/exit barrier (%s)\n", type);
>> +	}
>> +
>> +	if (!security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV) &&
>> +	    !security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR))
>> +		return sprintf(buf, "Not affected\n");
>> +
>> +	return sprintf(buf, "Vulnerable\n");
>> +}
>> +
>> +#ifdef CONFIG_DEBUG_FS
>> +static int stf_barrier_set(void *data, u64 val)
>> +{
>> +	bool enable;
>> +
>> +	if (val == 1)
>> +		enable = true;
>> +	else if (val == 0)
>> +		enable = false;
>> +	else
>> +		return -EINVAL;
>> +
>> +	/* Only do anything if we're changing state */
>> +	if (enable != stf_barrier)
>> +		stf_barrier_enable(enable);
>> +
>> +	return 0;
>> +}
>> +
>> +static int stf_barrier_get(void *data, u64 *val)
>> +{
>> +	*val = stf_barrier ? 1 : 0;
>> +	return 0;
>> +}
>> +
>> +DEFINE_SIMPLE_ATTRIBUTE(fops_stf_barrier, stf_barrier_get, stf_barrier_set, "%llu\n");
>> +
>> +static __init int stf_barrier_debugfs_init(void)
>> +{
>> +	debugfs_create_file("stf_barrier", 0600, powerpc_debugfs_root, NULL, &fops_stf_barrier);
>> +	return 0;
>> +}
>> +device_initcall(stf_barrier_debugfs_init);
>> +#endif /* CONFIG_DEBUG_FS */
>> +
>> +static void toggle_count_cache_flush(bool enable)
>> +{
>> +	if (!enable || !security_ftr_enabled(SEC_FTR_FLUSH_COUNT_CACHE)) {
>> +		patch_instruction_site(&patch__call_flush_count_cache, PPC_INST_NOP);
>> +		count_cache_flush_type = COUNT_CACHE_FLUSH_NONE;
>> +		pr_info("count-cache-flush: software flush disabled.\n");
>> +		return;
>> +	}
>> +
>> +	patch_branch_site(&patch__call_flush_count_cache,
>> +			  (u64)&flush_count_cache, BRANCH_SET_LINK);
>> +
>> +	if (!security_ftr_enabled(SEC_FTR_BCCTR_FLUSH_ASSIST)) {
>> +		count_cache_flush_type = COUNT_CACHE_FLUSH_SW;
>> +		pr_info("count-cache-flush: full software flush sequence enabled.\n");
>> +		return;
>> +	}
>> +
>> +	patch_instruction_site(&patch__flush_count_cache_return, PPC_INST_BLR);
>> +	count_cache_flush_type = COUNT_CACHE_FLUSH_HW;
>> +	pr_info("count-cache-flush: hardware assisted flush sequence enabled\n");
>> +}
>> +
>> +void setup_count_cache_flush(void)
>> +{
>> +	toggle_count_cache_flush(true);
>> +}
>> +
>> +#ifdef CONFIG_DEBUG_FS
>> +static int count_cache_flush_set(void *data, u64 val)
>> +{
>> +	bool enable;
>> +
>> +	if (val == 1)
>> +		enable = true;
>> +	else if (val == 0)
>> +		enable = false;
>> +	else
>> +		return -EINVAL;
>> +
>> +	toggle_count_cache_flush(enable);
>> +
>> +	return 0;
>> +}
>> +
>> +static int count_cache_flush_get(void *data, u64 *val)
>> +{
>> +	if (count_cache_flush_type == COUNT_CACHE_FLUSH_NONE)
>> +		*val = 0;
>> +	else
>> +		*val = 1;
>> +
>> +	return 0;
>> +}
>> +
>> +DEFINE_SIMPLE_ATTRIBUTE(fops_count_cache_flush, count_cache_flush_get,
>> +			count_cache_flush_set, "%llu\n");
>> +
>> +static __init int count_cache_flush_debugfs_init(void)
>> +{
>> +	debugfs_create_file("count_cache_flush", 0600, powerpc_debugfs_root,
>> +			    NULL, &fops_count_cache_flush);
>> +	return 0;
>> +}
>> +device_initcall(count_cache_flush_debugfs_init);
>> +#endif /* CONFIG_DEBUG_FS */
>> +#endif /* CONFIG_PPC_BOOK3S_64 */
>> diff --git a/arch/powerpc/kernel/setup_32.c b/arch/powerpc/kernel/setup_32.c
>> index ad8c9db61237..5a9f035bcd6b 100644
>> --- a/arch/powerpc/kernel/setup_32.c
>> +++ b/arch/powerpc/kernel/setup_32.c
>> @@ -322,6 +322,8 @@ void __init setup_arch(char **cmdline_p)
>>  		ppc_md.setup_arch();
>>  	if ( ppc_md.progress ) ppc_md.progress("arch: exit", 0x3eab);
>>  
>> +	setup_barrier_nospec();
>> +
>>  	paging_init();
>>  
>>  	/* Initialize the MMU context management stuff */
>> diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
>> index 9eb469bed22b..6bb731ababc6 100644
>> --- a/arch/powerpc/kernel/setup_64.c
>> +++ b/arch/powerpc/kernel/setup_64.c
>> @@ -736,6 +736,8 @@ void __init setup_arch(char **cmdline_p)
>>  	if (ppc_md.setup_arch)
>>  		ppc_md.setup_arch();
>>  
>> +	setup_barrier_nospec();
>> +
>>  	paging_init();
>>  
>>  	/* Initialize the MMU context management stuff */
>> @@ -873,9 +875,6 @@ static void do_nothing(void *unused)
>>  
>>  void rfi_flush_enable(bool enable)
>>  {
>> -	if (rfi_flush == enable)
>> -		return;
>> -
>>  	if (enable) {
>>  		do_rfi_flush_fixups(enabled_flush_types);
>>  		on_each_cpu(do_nothing, NULL, 1);
>> @@ -885,11 +884,15 @@ void rfi_flush_enable(bool enable)
>>  	rfi_flush = enable;
>>  }
>>  
>> -static void init_fallback_flush(void)
>> +static void __ref init_fallback_flush(void)
>>  {
>>  	u64 l1d_size, limit;
>>  	int cpu;
>>  
>> +	/* Only allocate the fallback flush area once (at boot time). */
>> +	if (l1d_flush_fallback_area)
>> +		return;
>> +
>>  	l1d_size = ppc64_caches.dsize;
>>  	limit = min(safe_stack_limit(), ppc64_rma_size);
>>  
>> @@ -902,34 +905,23 @@ static void init_fallback_flush(void)
>>  	memset(l1d_flush_fallback_area, 0, l1d_size * 2);
>>  
>>  	for_each_possible_cpu(cpu) {
>> -		/*
>> -		 * The fallback flush is currently coded for 8-way
>> -		 * associativity. Different associativity is possible, but it
>> -		 * will be treated as 8-way and may not evict the lines as
>> -		 * effectively.
>> -		 *
>> -		 * 128 byte lines are mandatory.
>> -		 */
>> -		u64 c = l1d_size / 8;
>> -
>>  		paca[cpu].rfi_flush_fallback_area = l1d_flush_fallback_area;
>> -		paca[cpu].l1d_flush_congruence = c;
>> -		paca[cpu].l1d_flush_sets = c / 128;
>> +		paca[cpu].l1d_flush_size = l1d_size;
>>  	}
>>  }
>>  
>> -void __init setup_rfi_flush(enum l1d_flush_type types, bool enable)
>> +void setup_rfi_flush(enum l1d_flush_type types, bool enable)
>>  {
>>  	if (types & L1D_FLUSH_FALLBACK) {
>> -		pr_info("rfi-flush: Using fallback displacement flush\n");
>> +		pr_info("rfi-flush: fallback displacement flush available\n");
>>  		init_fallback_flush();
>>  	}
>>  
>>  	if (types & L1D_FLUSH_ORI)
>> -		pr_info("rfi-flush: Using ori type flush\n");
>> +		pr_info("rfi-flush: ori type flush available\n");
>>  
>>  	if (types & L1D_FLUSH_MTTRIG)
>> -		pr_info("rfi-flush: Using mttrig type flush\n");
>> +		pr_info("rfi-flush: mttrig type flush available\n");
>>  
>>  	enabled_flush_types = types;
>>  
>> @@ -940,13 +932,19 @@ void __init setup_rfi_flush(enum l1d_flush_type types, bool enable)
>>  #ifdef CONFIG_DEBUG_FS
>>  static int rfi_flush_set(void *data, u64 val)
>>  {
>> +	bool enable;
>> +
>>  	if (val == 1)
>> -		rfi_flush_enable(true);
>> +		enable = true;
>>  	else if (val == 0)
>> -		rfi_flush_enable(false);
>> +		enable = false;
>>  	else
>>  		return -EINVAL;
>>  
>> +	/* Only do anything if we're changing state */
>> +	if (enable != rfi_flush)
>> +		rfi_flush_enable(enable);
>> +
>>  	return 0;
>>  }
>>  
>> @@ -965,12 +963,4 @@ static __init int rfi_flush_debugfs_init(void)
>>  }
>>  device_initcall(rfi_flush_debugfs_init);
>>  #endif
>> -
>> -ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr, char *buf)
>> -{
>> -	if (rfi_flush)
>> -		return sprintf(buf, "Mitigation: RFI Flush\n");
>> -
>> -	return sprintf(buf, "Vulnerable\n");
>> -}
>>  #endif /* CONFIG_PPC_BOOK3S_64 */
>> diff --git a/arch/powerpc/kernel/vmlinux.lds.S b/arch/powerpc/kernel/vmlinux.lds.S
>> index 072a23a17350..876ac9d52afc 100644
>> --- a/arch/powerpc/kernel/vmlinux.lds.S
>> +++ b/arch/powerpc/kernel/vmlinux.lds.S
>> @@ -73,14 +73,45 @@ SECTIONS
>>  	RODATA
>>  
>>  #ifdef CONFIG_PPC64
>> +	. = ALIGN(8);
>> +	__stf_entry_barrier_fixup : AT(ADDR(__stf_entry_barrier_fixup) - LOAD_OFFSET) {
>> +		__start___stf_entry_barrier_fixup = .;
>> +		*(__stf_entry_barrier_fixup)
>> +		__stop___stf_entry_barrier_fixup = .;
>> +	}
>> +
>> +	. = ALIGN(8);
>> +	__stf_exit_barrier_fixup : AT(ADDR(__stf_exit_barrier_fixup) - LOAD_OFFSET) {
>> +		__start___stf_exit_barrier_fixup = .;
>> +		*(__stf_exit_barrier_fixup)
>> +		__stop___stf_exit_barrier_fixup = .;
>> +	}
>> +
>>  	. = ALIGN(8);
>>  	__rfi_flush_fixup : AT(ADDR(__rfi_flush_fixup) - LOAD_OFFSET) {
>>  		__start___rfi_flush_fixup = .;
>>  		*(__rfi_flush_fixup)
>>  		__stop___rfi_flush_fixup = .;
>>  	}
>> -#endif
>> +#endif /* CONFIG_PPC64 */
>>  
>> +#ifdef CONFIG_PPC_BARRIER_NOSPEC
>> +	. = ALIGN(8);
>> +	__spec_barrier_fixup : AT(ADDR(__spec_barrier_fixup) - LOAD_OFFSET) {
>> +		__start___barrier_nospec_fixup = .;
>> +		*(__barrier_nospec_fixup)
>> +		__stop___barrier_nospec_fixup = .;
>> +	}
>> +#endif /* CONFIG_PPC_BARRIER_NOSPEC */
>> +
>> +#ifdef CONFIG_PPC_FSL_BOOK3E
>> +	. = ALIGN(8);
>> +	__spec_btb_flush_fixup : AT(ADDR(__spec_btb_flush_fixup) - LOAD_OFFSET) {
>> +		__start__btb_flush_fixup = .;
>> +		*(__btb_flush_fixup)
>> +		__stop__btb_flush_fixup = .;
>> +	}
>> +#endif
>>  	EXCEPTION_TABLE(0)
>>  
>>  	NOTES :kernel :notes
>> diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
>> index d5edbeb8eb82..570c06a00db6 100644
>> --- a/arch/powerpc/lib/code-patching.c
>> +++ b/arch/powerpc/lib/code-patching.c
>> @@ -14,12 +14,25 @@
>>  #include <asm/page.h>
>>  #include <asm/code-patching.h>
>>  #include <asm/uaccess.h>
>> +#include <asm/setup.h>
>> +#include <asm/sections.h>
>>  
>>  
>> +static inline bool is_init(unsigned int *addr)
>> +{
>> +	return addr >= (unsigned int *)__init_begin && addr < (unsigned int *)__init_end;
>> +}
>> +
>>  int patch_instruction(unsigned int *addr, unsigned int instr)
>>  {
>>  	int err;
>>  
>> +	/* Make sure we aren't patching a freed init section */
>> +	if (init_mem_is_free && is_init(addr)) {
>> +		pr_debug("Skipping init section patching addr: 0x%px\n", addr);
>> +		return 0;
>> +	}
>> +
>>  	__put_user_size(instr, addr, 4, err);
>>  	if (err)
>>  		return err;
>> @@ -32,6 +45,22 @@ int patch_branch(unsigned int *addr, unsigned long target, int flags)
>>  	return patch_instruction(addr, create_branch(addr, target, flags));
>>  }
>>  
>> +int patch_branch_site(s32 *site, unsigned long target, int flags)
>> +{
>> +	unsigned int *addr;
>> +
>> +	addr = (unsigned int *)((unsigned long)site + *site);
>> +	return patch_instruction(addr, create_branch(addr, target, flags));
>> +}
>> +
>> +int patch_instruction_site(s32 *site, unsigned int instr)
>> +{
>> +	unsigned int *addr;
>> +
>> +	addr = (unsigned int *)((unsigned long)site + *site);
>> +	return patch_instruction(addr, instr);
>> +}
>> +
>>  unsigned int create_branch(const unsigned int *addr,
>>  			   unsigned long target, int flags)
>>  {
>> diff --git a/arch/powerpc/lib/feature-fixups.c b/arch/powerpc/lib/feature-fixups.c
>> index 3af014684872..7bdfc19a491d 100644
>> --- a/arch/powerpc/lib/feature-fixups.c
>> +++ b/arch/powerpc/lib/feature-fixups.c
>> @@ -21,7 +21,7 @@
>>  #include <asm/page.h>
>>  #include <asm/sections.h>
>>  #include <asm/setup.h>
>> -
>> +#include <asm/security_features.h>
>>  
>>  struct fixup_entry {
>>  	unsigned long	mask;
>> @@ -115,6 +115,120 @@ void do_feature_fixups(unsigned long value, void *fixup_start, void *fixup_end)
>>  }
>>  
>>  #ifdef CONFIG_PPC_BOOK3S_64
>> +void do_stf_entry_barrier_fixups(enum stf_barrier_type types)
>> +{
>> +	unsigned int instrs[3], *dest;
>> +	long *start, *end;
>> +	int i;
>> +
>> +	start = PTRRELOC(&__start___stf_entry_barrier_fixup),
>> +	end = PTRRELOC(&__stop___stf_entry_barrier_fixup);
>> +
>> +	instrs[0] = 0x60000000; /* nop */
>> +	instrs[1] = 0x60000000; /* nop */
>> +	instrs[2] = 0x60000000; /* nop */
>> +
>> +	i = 0;
>> +	if (types & STF_BARRIER_FALLBACK) {
>> +		instrs[i++] = 0x7d4802a6; /* mflr r10		*/
>> +		instrs[i++] = 0x60000000; /* branch patched below */
>> +		instrs[i++] = 0x7d4803a6; /* mtlr r10		*/
>> +	} else if (types & STF_BARRIER_EIEIO) {
>> +		instrs[i++] = 0x7e0006ac; /* eieio + bit 6 hint */
>> +	} else if (types & STF_BARRIER_SYNC_ORI) {
>> +		instrs[i++] = 0x7c0004ac; /* hwsync		*/
>> +		instrs[i++] = 0xe94d0000; /* ld r10,0(r13)	*/
>> +		instrs[i++] = 0x63ff0000; /* ori 31,31,0 speculation barrier */
>> +	}
>> +
>> +	for (i = 0; start < end; start++, i++) {
>> +		dest = (void *)start + *start;
>> +
>> +		pr_devel("patching dest %lx\n", (unsigned long)dest);
>> +
>> +		patch_instruction(dest, instrs[0]);
>> +
>> +		if (types & STF_BARRIER_FALLBACK)
>> +			patch_branch(dest + 1, (unsigned long)&stf_barrier_fallback,
>> +				     BRANCH_SET_LINK);
>> +		else
>> +			patch_instruction(dest + 1, instrs[1]);
>> +
>> +		patch_instruction(dest + 2, instrs[2]);
>> +	}
>> +
>> +	printk(KERN_DEBUG "stf-barrier: patched %d entry locations (%s barrier)\n", i,
>> +		(types == STF_BARRIER_NONE)                  ? "no" :
>> +		(types == STF_BARRIER_FALLBACK)              ? "fallback" :
>> +		(types == STF_BARRIER_EIEIO)                 ? "eieio" :
>> +		(types == (STF_BARRIER_SYNC_ORI))            ? "hwsync"
>> +		                                           : "unknown");
>> +}
>> +
>> +void do_stf_exit_barrier_fixups(enum stf_barrier_type types)
>> +{
>> +	unsigned int instrs[6], *dest;
>> +	long *start, *end;
>> +	int i;
>> +
>> +	start = PTRRELOC(&__start___stf_exit_barrier_fixup),
>> +	end = PTRRELOC(&__stop___stf_exit_barrier_fixup);
>> +
>> +	instrs[0] = 0x60000000; /* nop */
>> +	instrs[1] = 0x60000000; /* nop */
>> +	instrs[2] = 0x60000000; /* nop */
>> +	instrs[3] = 0x60000000; /* nop */
>> +	instrs[4] = 0x60000000; /* nop */
>> +	instrs[5] = 0x60000000; /* nop */
>> +
>> +	i = 0;
>> +	if (types & STF_BARRIER_FALLBACK || types & STF_BARRIER_SYNC_ORI) {
>> +		if (cpu_has_feature(CPU_FTR_HVMODE)) {
>> +			instrs[i++] = 0x7db14ba6; /* mtspr 0x131, r13 (HSPRG1) */
>> +			instrs[i++] = 0x7db04aa6; /* mfspr r13, 0x130 (HSPRG0) */
>> +		} else {
>> +			instrs[i++] = 0x7db243a6; /* mtsprg 2,r13	*/
>> +			instrs[i++] = 0x7db142a6; /* mfsprg r13,1    */
>> +	        }
>> +		instrs[i++] = 0x7c0004ac; /* hwsync		*/
>> +		instrs[i++] = 0xe9ad0000; /* ld r13,0(r13)	*/
>> +		instrs[i++] = 0x63ff0000; /* ori 31,31,0 speculation barrier */
>> +		if (cpu_has_feature(CPU_FTR_HVMODE)) {
>> +			instrs[i++] = 0x7db14aa6; /* mfspr r13, 0x131 (HSPRG1) */
>> +		} else {
>> +			instrs[i++] = 0x7db242a6; /* mfsprg r13,2 */
>> +		}
>> +	} else if (types & STF_BARRIER_EIEIO) {
>> +		instrs[i++] = 0x7e0006ac; /* eieio + bit 6 hint */
>> +	}
>> +
>> +	for (i = 0; start < end; start++, i++) {
>> +		dest = (void *)start + *start;
>> +
>> +		pr_devel("patching dest %lx\n", (unsigned long)dest);
>> +
>> +		patch_instruction(dest, instrs[0]);
>> +		patch_instruction(dest + 1, instrs[1]);
>> +		patch_instruction(dest + 2, instrs[2]);
>> +		patch_instruction(dest + 3, instrs[3]);
>> +		patch_instruction(dest + 4, instrs[4]);
>> +		patch_instruction(dest + 5, instrs[5]);
>> +	}
>> +	printk(KERN_DEBUG "stf-barrier: patched %d exit locations (%s barrier)\n", i,
>> +		(types == STF_BARRIER_NONE)                  ? "no" :
>> +		(types == STF_BARRIER_FALLBACK)              ? "fallback" :
>> +		(types == STF_BARRIER_EIEIO)                 ? "eieio" :
>> +		(types == (STF_BARRIER_SYNC_ORI))            ? "hwsync"
>> +		                                           : "unknown");
>> +}
>> +
>> +
>> +void do_stf_barrier_fixups(enum stf_barrier_type types)
>> +{
>> +	do_stf_entry_barrier_fixups(types);
>> +	do_stf_exit_barrier_fixups(types);
>> +}
>> +
>>  void do_rfi_flush_fixups(enum l1d_flush_type types)
>>  {
>>  	unsigned int instrs[3], *dest;
>> @@ -151,10 +265,110 @@ void do_rfi_flush_fixups(enum l1d_flush_type types)
>>  		patch_instruction(dest + 2, instrs[2]);
>>  	}
>>  
>> -	printk(KERN_DEBUG "rfi-flush: patched %d locations\n", i);
>> +	printk(KERN_DEBUG "rfi-flush: patched %d locations (%s flush)\n", i,
>> +		(types == L1D_FLUSH_NONE)       ? "no" :
>> +		(types == L1D_FLUSH_FALLBACK)   ? "fallback displacement" :
>> +		(types &  L1D_FLUSH_ORI)        ? (types & L1D_FLUSH_MTTRIG)
>> +							? "ori+mttrig type"
>> +							: "ori type" :
>> +		(types &  L1D_FLUSH_MTTRIG)     ? "mttrig type"
>> +						: "unknown");
>> +}
>> +
>> +void do_barrier_nospec_fixups_range(bool enable, void *fixup_start, void *fixup_end)
>> +{
>> +	unsigned int instr, *dest;
>> +	long *start, *end;
>> +	int i;
>> +
>> +	start = fixup_start;
>> +	end = fixup_end;
>> +
>> +	instr = 0x60000000; /* nop */
>> +
>> +	if (enable) {
>> +		pr_info("barrier-nospec: using ORI speculation barrier\n");
>> +		instr = 0x63ff0000; /* ori 31,31,0 speculation barrier */
>> +	}
>> +
>> +	for (i = 0; start < end; start++, i++) {
>> +		dest = (void *)start + *start;
>> +
>> +		pr_devel("patching dest %lx\n", (unsigned long)dest);
>> +		patch_instruction(dest, instr);
>> +	}
>> +
>> +	printk(KERN_DEBUG "barrier-nospec: patched %d locations\n", i);
>>  }
>> +
>>  #endif /* CONFIG_PPC_BOOK3S_64 */
>>  
>> +#ifdef CONFIG_PPC_BARRIER_NOSPEC
>> +void do_barrier_nospec_fixups(bool enable)
>> +{
>> +	void *start, *end;
>> +
>> +	start = PTRRELOC(&__start___barrier_nospec_fixup),
>> +	end = PTRRELOC(&__stop___barrier_nospec_fixup);
>> +
>> +	do_barrier_nospec_fixups_range(enable, start, end);
>> +}
>> +#endif /* CONFIG_PPC_BARRIER_NOSPEC */
>> +
>> +#ifdef CONFIG_PPC_FSL_BOOK3E
>> +void do_barrier_nospec_fixups_range(bool enable, void *fixup_start, void *fixup_end)
>> +{
>> +	unsigned int instr[2], *dest;
>> +	long *start, *end;
>> +	int i;
>> +
>> +	start = fixup_start;
>> +	end = fixup_end;
>> +
>> +	instr[0] = PPC_INST_NOP;
>> +	instr[1] = PPC_INST_NOP;
>> +
>> +	if (enable) {
>> +		pr_info("barrier-nospec: using isync; sync as speculation barrier\n");
>> +		instr[0] = PPC_INST_ISYNC;
>> +		instr[1] = PPC_INST_SYNC;
>> +	}
>> +
>> +	for (i = 0; start < end; start++, i++) {
>> +		dest = (void *)start + *start;
>> +
>> +		pr_devel("patching dest %lx\n", (unsigned long)dest);
>> +		patch_instruction(dest, instr[0]);
>> +		patch_instruction(dest + 1, instr[1]);
>> +	}
>> +
>> +	printk(KERN_DEBUG "barrier-nospec: patched %d locations\n", i);
>> +}
>> +
>> +static void patch_btb_flush_section(long *curr)
>> +{
>> +	unsigned int *start, *end;
>> +
>> +	start = (void *)curr + *curr;
>> +	end = (void *)curr + *(curr + 1);
>> +	for (; start < end; start++) {
>> +		pr_devel("patching dest %lx\n", (unsigned long)start);
>> +		patch_instruction(start, PPC_INST_NOP);
>> +	}
>> +}
>> +
>> +void do_btb_flush_fixups(void)
>> +{
>> +	long *start, *end;
>> +
>> +	start = PTRRELOC(&__start__btb_flush_fixup);
>> +	end = PTRRELOC(&__stop__btb_flush_fixup);
>> +
>> +	for (; start < end; start += 2)
>> +		patch_btb_flush_section(start);
>> +}
>> +#endif /* CONFIG_PPC_FSL_BOOK3E */
>> +
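[Editorial note: the fixup walk above resolves each table entry as a self-relative offset (patch site = entry address + stored value) and rewrites the two-instruction slot with either NOPs or the isync;sync barrier. A userspace model of that mechanic, for illustration only — this is not kernel code, and the helper names are invented here; the instruction encodings are the ones used by the patch:]

```c
#include <assert.h>
#include <stdint.h>

#define INST_NOP   0x60000000u	/* ori 0,0,0 */
#define INST_ISYNC 0x4c00012cu
#define INST_SYNC  0x7c0004acu

/* Resolve a self-relative fixup entry to its patch site. */
static uint32_t *resolve_site(long *entry)
{
	return (uint32_t *)((char *)entry + *entry);
}

/* Patch each two-instruction slot listed in [start, end); returns count. */
static int patch_slots(long *start, long *end, int enable)
{
	int i = 0;

	for (; start < end; start++, i++) {
		uint32_t *dest = resolve_site(start);

		dest[0] = enable ? INST_ISYNC : INST_NOP;
		dest[1] = enable ? INST_SYNC : INST_NOP;
	}
	return i;
}
```

Storing the offsets rather than absolute addresses keeps the table valid wherever the kernel is loaded, which is why the real code computes `dest = (void *)start + *start`.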
>>  void do_lwsync_fixups(unsigned long value, void *fixup_start, void *fixup_end)
>>  {
>>  	long *start, *end;
>> diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
>> index 22d94c3e6fc4..1efe5ca5c3bc 100644
>> - --- a/arch/powerpc/mm/mem.c
>> +++ b/arch/powerpc/mm/mem.c
>> @@ -62,6 +62,7 @@
>>  #endif
>>  
>>  unsigned long long memory_limit;
>> +bool init_mem_is_free;
>>  
>>  #ifdef CONFIG_HIGHMEM
>>  pte_t *kmap_pte;
>> @@ -381,6 +382,7 @@ void __init mem_init(void)
>>  void free_initmem(void)
>>  {
>>  	ppc_md.progress = ppc_printk_progress;
>> +	init_mem_is_free = true;
>>  	free_initmem_default(POISON_FREE_INITMEM);
>>  }
>>  
>> diff --git a/arch/powerpc/mm/tlb_low_64e.S b/arch/powerpc/mm/tlb_low_64e.S
>> index 29d6987c37ba..5486d56da289 100644
>> - --- a/arch/powerpc/mm/tlb_low_64e.S
>> +++ b/arch/powerpc/mm/tlb_low_64e.S
>> @@ -69,6 +69,13 @@ END_FTR_SECTION_IFSET(CPU_FTR_EMB_HV)
>>  	std	r15,EX_TLB_R15(r12)
>>  	std	r10,EX_TLB_CR(r12)
>>  #ifdef CONFIG_PPC_FSL_BOOK3E
>> +START_BTB_FLUSH_SECTION
>> +	mfspr r11, SPRN_SRR1
>> +	andi. r10,r11,MSR_PR
>> +	beq 1f
>> +	BTB_FLUSH(r10)
>> +1:
>> +END_BTB_FLUSH_SECTION
>>  	std	r7,EX_TLB_R7(r12)
>>  #endif
>>  	TLB_MISS_PROLOG_STATS
>> diff --git a/arch/powerpc/platforms/powernv/setup.c b/arch/powerpc/platforms/powernv/setup.c
>> index c57afc619b20..e14b52c7ebd8 100644
>> - --- a/arch/powerpc/platforms/powernv/setup.c
>> +++ b/arch/powerpc/platforms/powernv/setup.c
>> @@ -37,53 +37,99 @@
>>  #include <asm/smp.h>
>>  #include <asm/tm.h>
>>  #include <asm/setup.h>
>> +#include <asm/security_features.h>
>>  
>>  #include "powernv.h"
>>  
>> +
>> +static bool fw_feature_is(const char *state, const char *name,
>> +			  struct device_node *fw_features)
>> +{
>> +	struct device_node *np;
>> +	bool rc = false;
>> +
>> +	np = of_get_child_by_name(fw_features, name);
>> +	if (np) {
>> +		rc = of_property_read_bool(np, state);
>> +		of_node_put(np);
>> +	}
>> +
>> +	return rc;
>> +}
>> +
>> +static void init_fw_feat_flags(struct device_node *np)
>> +{
>> +	if (fw_feature_is("enabled", "inst-spec-barrier-ori31,31,0", np))
>> +		security_ftr_set(SEC_FTR_SPEC_BAR_ORI31);
>> +
>> +	if (fw_feature_is("enabled", "fw-bcctrl-serialized", np))
>> +		security_ftr_set(SEC_FTR_BCCTRL_SERIALISED);
>> +
>> +	if (fw_feature_is("enabled", "inst-l1d-flush-ori30,30,0", np))
>> +		security_ftr_set(SEC_FTR_L1D_FLUSH_ORI30);
>> +
>> +	if (fw_feature_is("enabled", "inst-l1d-flush-trig2", np))
>> +		security_ftr_set(SEC_FTR_L1D_FLUSH_TRIG2);
>> +
>> +	if (fw_feature_is("enabled", "fw-l1d-thread-split", np))
>> +		security_ftr_set(SEC_FTR_L1D_THREAD_PRIV);
>> +
>> +	if (fw_feature_is("enabled", "fw-count-cache-disabled", np))
>> +		security_ftr_set(SEC_FTR_COUNT_CACHE_DISABLED);
>> +
>> +	if (fw_feature_is("enabled", "fw-count-cache-flush-bcctr2,0,0", np))
>> +		security_ftr_set(SEC_FTR_BCCTR_FLUSH_ASSIST);
>> +
>> +	if (fw_feature_is("enabled", "needs-count-cache-flush-on-context-switch", np))
>> +		security_ftr_set(SEC_FTR_FLUSH_COUNT_CACHE);
>> +
>> +	/*
>> +	 * The features below are enabled by default, so we instead look to see
>> +	 * if firmware has *disabled* them, and clear them if so.
>> +	 */
>> +	if (fw_feature_is("disabled", "speculation-policy-favor-security", np))
>> +		security_ftr_clear(SEC_FTR_FAVOUR_SECURITY);
>> +
>> +	if (fw_feature_is("disabled", "needs-l1d-flush-msr-pr-0-to-1", np))
>> +		security_ftr_clear(SEC_FTR_L1D_FLUSH_PR);
>> +
>> +	if (fw_feature_is("disabled", "needs-l1d-flush-msr-hv-1-to-0", np))
>> +		security_ftr_clear(SEC_FTR_L1D_FLUSH_HV);
>> +
>> +	if (fw_feature_is("disabled", "needs-spec-barrier-for-bound-checks", np))
>> +		security_ftr_clear(SEC_FTR_BNDS_CHK_SPEC_BAR);
>> +}
>> +
>>  static void pnv_setup_rfi_flush(void)
>>  {
>>  	struct device_node *np, *fw_features;
>>  	enum l1d_flush_type type;
>> - -	int enable;
>> +	bool enable;
>>  
>>  	/* Default to fallback in case fw-features are not available */
>>  	type = L1D_FLUSH_FALLBACK;
>> - -	enable = 1;
>>  
>>  	np = of_find_node_by_name(NULL, "ibm,opal");
>>  	fw_features = of_get_child_by_name(np, "fw-features");
>>  	of_node_put(np);
>>  
>>  	if (fw_features) {
>> - -		np = of_get_child_by_name(fw_features, "inst-l1d-flush-trig2");
>> - -		if (np && of_property_read_bool(np, "enabled"))
>> - -			type = L1D_FLUSH_MTTRIG;
>> +		init_fw_feat_flags(fw_features);
>> +		of_node_put(fw_features);
>>  
>> - -		of_node_put(np);
>> +		if (security_ftr_enabled(SEC_FTR_L1D_FLUSH_TRIG2))
>> +			type = L1D_FLUSH_MTTRIG;
>>  
>> - -		np = of_get_child_by_name(fw_features, "inst-l1d-flush-ori30,30,0");
>> - -		if (np && of_property_read_bool(np, "enabled"))
>> +		if (security_ftr_enabled(SEC_FTR_L1D_FLUSH_ORI30))
>>  			type = L1D_FLUSH_ORI;
>> - -
>> - -		of_node_put(np);
>> - -
>> - -		/* Enable unless firmware says NOT to */
>> - -		enable = 2;
>> - -		np = of_get_child_by_name(fw_features, "needs-l1d-flush-msr-hv-1-to-0");
>> - -		if (np && of_property_read_bool(np, "disabled"))
>> - -			enable--;
>> - -
>> - -		of_node_put(np);
>> - -
>> - -		np = of_get_child_by_name(fw_features, "needs-l1d-flush-msr-pr-0-to-1");
>> - -		if (np && of_property_read_bool(np, "disabled"))
>> - -			enable--;
>> - -
>> - -		of_node_put(np);
>> - -		of_node_put(fw_features);
>>  	}
>>  
>> - -	setup_rfi_flush(type, enable > 0);
>> +	enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) && \
>> +		 (security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR)   || \
>> +		  security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV));
>> +
>> +	setup_rfi_flush(type, enable);
>> +	setup_count_cache_flush();
>>  }
>>  
>>  static void __init pnv_setup_arch(void)
>> @@ -91,6 +137,7 @@ static void __init pnv_setup_arch(void)
>>  	set_arch_panic_timeout(10, ARCH_PANIC_TIMEOUT);
>>  
>>  	pnv_setup_rfi_flush();
>> +	setup_stf_barrier();
>>  
>>  	/* Initialize SMP */
>>  	pnv_smp_init();
>> diff --git a/arch/powerpc/platforms/pseries/mobility.c b/arch/powerpc/platforms/pseries/mobility.c
>> index 8dd0c8edefd6..c773396d0969 100644
>> - --- a/arch/powerpc/platforms/pseries/mobility.c
>> +++ b/arch/powerpc/platforms/pseries/mobility.c
>> @@ -314,6 +314,9 @@ void post_mobility_fixup(void)
>>  		printk(KERN_ERR "Post-mobility device tree update "
>>  			"failed: %d\n", rc);
>>  
>> +	/* Possibly switch to a new RFI flush type */
>> +	pseries_setup_rfi_flush();
>> +
>>  	return;
>>  }
>>  
>> diff --git a/arch/powerpc/platforms/pseries/pseries.h b/arch/powerpc/platforms/pseries/pseries.h
>> index 8411c27293e4..e7d80797384d 100644
>> - --- a/arch/powerpc/platforms/pseries/pseries.h
>> +++ b/arch/powerpc/platforms/pseries/pseries.h
>> @@ -81,4 +81,6 @@ extern struct pci_controller_ops pseries_pci_controller_ops;
>>  
>>  unsigned long pseries_memory_block_size(void);
>>  
>> +void pseries_setup_rfi_flush(void);
>> +
>>  #endif /* _PSERIES_PSERIES_H */
>> diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
>> index dd2545fc9947..9cc976ff7fec 100644
>> - --- a/arch/powerpc/platforms/pseries/setup.c
>> +++ b/arch/powerpc/platforms/pseries/setup.c
>> @@ -67,6 +67,7 @@
>>  #include <asm/eeh.h>
>>  #include <asm/reg.h>
>>  #include <asm/plpar_wrappers.h>
>> +#include <asm/security_features.h>
>>  
>>  #include "pseries.h"
>>  
>> @@ -499,37 +500,87 @@ static void __init find_and_init_phbs(void)
>>  	of_pci_check_probe_only();
>>  }
>>  
>> - -static void pseries_setup_rfi_flush(void)
>> +static void init_cpu_char_feature_flags(struct h_cpu_char_result *result)
>> +{
>> +	/*
>> +	 * The features below are disabled by default, so we instead look to see
>> +	 * if firmware has *enabled* them, and set them if so.
>> +	 */
>> +	if (result->character & H_CPU_CHAR_SPEC_BAR_ORI31)
>> +		security_ftr_set(SEC_FTR_SPEC_BAR_ORI31);
>> +
>> +	if (result->character & H_CPU_CHAR_BCCTRL_SERIALISED)
>> +		security_ftr_set(SEC_FTR_BCCTRL_SERIALISED);
>> +
>> +	if (result->character & H_CPU_CHAR_L1D_FLUSH_ORI30)
>> +		security_ftr_set(SEC_FTR_L1D_FLUSH_ORI30);
>> +
>> +	if (result->character & H_CPU_CHAR_L1D_FLUSH_TRIG2)
>> +		security_ftr_set(SEC_FTR_L1D_FLUSH_TRIG2);
>> +
>> +	if (result->character & H_CPU_CHAR_L1D_THREAD_PRIV)
>> +		security_ftr_set(SEC_FTR_L1D_THREAD_PRIV);
>> +
>> +	if (result->character & H_CPU_CHAR_COUNT_CACHE_DISABLED)
>> +		security_ftr_set(SEC_FTR_COUNT_CACHE_DISABLED);
>> +
>> +	if (result->character & H_CPU_CHAR_BCCTR_FLUSH_ASSIST)
>> +		security_ftr_set(SEC_FTR_BCCTR_FLUSH_ASSIST);
>> +
>> +	if (result->behaviour & H_CPU_BEHAV_FLUSH_COUNT_CACHE)
>> +		security_ftr_set(SEC_FTR_FLUSH_COUNT_CACHE);
>> +
>> +	/*
>> +	 * The features below are enabled by default, so we instead look to see
>> +	 * if firmware has *disabled* them, and clear them if so.
>> +	 */
>> +	if (!(result->behaviour & H_CPU_BEHAV_FAVOUR_SECURITY))
>> +		security_ftr_clear(SEC_FTR_FAVOUR_SECURITY);
>> +
>> +	if (!(result->behaviour & H_CPU_BEHAV_L1D_FLUSH_PR))
>> +		security_ftr_clear(SEC_FTR_L1D_FLUSH_PR);
>> +
>> +	if (!(result->behaviour & H_CPU_BEHAV_BNDS_CHK_SPEC_BAR))
>> +		security_ftr_clear(SEC_FTR_BNDS_CHK_SPEC_BAR);
>> +}
>> +
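[Editorial note: init_cpu_char_feature_flags() above combines two conventions — features that default off are *set* when the hypercall reports them, while features that default on are *cleared* when it does not. A minimal standalone model of that pattern; the H_CPU_* bit positions match the hvcall.h hunk in this series, but the SEC_FTR_* values for FAVOUR_SECURITY and L1D_FLUSH_PR are illustrative, as their definitions are not shown here:]

```c
#include <assert.h>

#define H_CPU_CHAR_SPEC_BAR_ORI31	(1ull << 63)
#define H_CPU_BEHAV_FAVOUR_SECURITY	(1ull << 63)
#define H_CPU_BEHAV_L1D_FLUSH_PR	(1ull << 62)

#define SEC_FTR_SPEC_BAR_ORI31	0x04ull
#define SEC_FTR_FAVOUR_SECURITY	0x100ull	/* illustrative value */
#define SEC_FTR_L1D_FLUSH_PR	0x200ull	/* illustrative value */
#define SEC_FTR_DEFAULT		(SEC_FTR_FAVOUR_SECURITY | SEC_FTR_L1D_FLUSH_PR)

struct h_cpu_char_result {
	unsigned long long character;
	unsigned long long behaviour;
};

static unsigned long long init_flags(const struct h_cpu_char_result *r)
{
	unsigned long long ftrs = SEC_FTR_DEFAULT;

	/* Disabled by default: set only if firmware reports them. */
	if (r->character & H_CPU_CHAR_SPEC_BAR_ORI31)
		ftrs |= SEC_FTR_SPEC_BAR_ORI31;

	/* Enabled by default: clear if firmware says they are off. */
	if (!(r->behaviour & H_CPU_BEHAV_FAVOUR_SECURITY))
		ftrs &= ~SEC_FTR_FAVOUR_SECURITY;
	if (!(r->behaviour & H_CPU_BEHAV_L1D_FLUSH_PR))
		ftrs &= ~SEC_FTR_L1D_FLUSH_PR;

	return ftrs;
}
```

Resetting to SEC_FTR_DEFAULT before each call (as the patch does before the hypercall) is what makes this safe to run again after migration.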
>> +void pseries_setup_rfi_flush(void)
>>  {
>>  	struct h_cpu_char_result result;
>>  	enum l1d_flush_type types;
>>  	bool enable;
>>  	long rc;
>>  
>> - -	/* Enable by default */
>> - -	enable = true;
>> +	/*
>> +	 * Set features to the defaults assumed by init_cpu_char_feature_flags()
>> +	 * so it can set/clear again any features that might have changed after
>> +	 * migration, and in case the hypercall fails and it is not even called.
>> +	 */
>> +	powerpc_security_features = SEC_FTR_DEFAULT;
>>  
>>  	rc = plpar_get_cpu_characteristics(&result);
>> - -	if (rc == H_SUCCESS) {
>> - -		types = L1D_FLUSH_NONE;
>> +	if (rc == H_SUCCESS)
>> +		init_cpu_char_feature_flags(&result);
>>  
>> - -		if (result.character & H_CPU_CHAR_L1D_FLUSH_TRIG2)
>> - -			types |= L1D_FLUSH_MTTRIG;
>> - -		if (result.character & H_CPU_CHAR_L1D_FLUSH_ORI30)
>> - -			types |= L1D_FLUSH_ORI;
>> +	/*
>> +	 * We're the guest so this doesn't apply to us, clear it to simplify
>> +	 * handling of it elsewhere.
>> +	 */
>> +	security_ftr_clear(SEC_FTR_L1D_FLUSH_HV);
>>  
>> - -		/* Use fallback if nothing set in hcall */
>> - -		if (types == L1D_FLUSH_NONE)
>> - -			types = L1D_FLUSH_FALLBACK;
>> +	types = L1D_FLUSH_FALLBACK;
>>  
>> - -		if (!(result.behaviour & H_CPU_BEHAV_L1D_FLUSH_PR))
>> - -			enable = false;
>> - -	} else {
>> - -		/* Default to fallback if case hcall is not available */
>> - -		types = L1D_FLUSH_FALLBACK;
>> - -	}
>> +	if (security_ftr_enabled(SEC_FTR_L1D_FLUSH_TRIG2))
>> +		types |= L1D_FLUSH_MTTRIG;
>> +
>> +	if (security_ftr_enabled(SEC_FTR_L1D_FLUSH_ORI30))
>> +		types |= L1D_FLUSH_ORI;
>> +
>> +	enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) && \
>> +		 security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR);
>>  
>>  	setup_rfi_flush(types, enable);
>> +	setup_count_cache_flush();
>>  }
>>  
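[Editorial note: the rewritten pseries_setup_rfi_flush() always starts from the fallback flush, ORs in the faster types the firmware advertises, and enables the flush only when both FAVOUR_SECURITY and L1D_FLUSH_PR are set. A standalone sketch of that decision logic; the ORI30/TRIG2 bit values come from the security_features.h hunk in this series, while the PR/FAVOUR_SECURITY values are illustrative:]

```c
#include <assert.h>

#define SEC_FTR_L1D_FLUSH_ORI30	0x01ull
#define SEC_FTR_L1D_FLUSH_TRIG2	0x02ull
#define SEC_FTR_L1D_FLUSH_PR	0x10ull	/* illustrative value */
#define SEC_FTR_FAVOUR_SECURITY	0x20ull	/* illustrative value */

enum l1d_flush_type {
	L1D_FLUSH_NONE		= 0x1,
	L1D_FLUSH_FALLBACK	= 0x2,
	L1D_FLUSH_ORI		= 0x4,
	L1D_FLUSH_MTTRIG	= 0x8,
};

static int pick_flush_types(unsigned long long ftrs)
{
	int types = L1D_FLUSH_FALLBACK;	/* always usable */

	if (ftrs & SEC_FTR_L1D_FLUSH_TRIG2)
		types |= L1D_FLUSH_MTTRIG;
	if (ftrs & SEC_FTR_L1D_FLUSH_ORI30)
		types |= L1D_FLUSH_ORI;
	return types;
}

static int flush_enabled(unsigned long long ftrs)
{
	return (ftrs & SEC_FTR_FAVOUR_SECURITY) &&
	       (ftrs & SEC_FTR_L1D_FLUSH_PR);
}
```

Compared with the code being replaced, deriving both decisions from one feature word avoids the old enable-counter dance and keeps powernv and pseries on the same logic.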
>>  static void __init pSeries_setup_arch(void)
>> @@ -549,6 +600,7 @@ static void __init pSeries_setup_arch(void)
>>  	fwnmi_init();
>>  
>>  	pseries_setup_rfi_flush();
>> +	setup_stf_barrier();
>>  
>>  	/* By default, only probe PCI (can be overridden by rtas_pci) */
>>  	pci_add_flags(PCI_PROBE_ONLY);
>> diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
>> index 786bf01691c9..83619ebede93 100644
>> - --- a/arch/powerpc/xmon/xmon.c
>> +++ b/arch/powerpc/xmon/xmon.c
>> @@ -2144,6 +2144,8 @@ static void dump_one_paca(int cpu)
>>  	DUMP(p, slb_cache_ptr, "x");
>>  	for (i = 0; i < SLB_CACHE_ENTRIES; i++)
>>  		printf(" slb_cache[%d]:        = 0x%016lx\n", i, p->slb_cache[i]);
>> +
>> +	DUMP(p, rfi_flush_fallback_area, "px");
>>  #endif
>>  	DUMP(p, dscr_default, "llx");
>>  #ifdef CONFIG_PPC_BOOK3E
>> - -- 
>> 2.20.1
>>


* Re: [PATCH stable v4.4 00/52] powerpc spectre backports for 4.4
@ 2019-04-28  6:20     ` Michael Ellerman
  0 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-28  6:20 UTC (permalink / raw)
  To: Diana Madalina Craciun, stable, gregkh; +Cc: linuxppc-dev, msuchanek, npiggin

Diana Madalina Craciun <diana.craciun@nxp.com> writes:
> Hi Michael,
>
> There are some missing NXP Spectre v2 patches. I can send them
> separately if the series is accepted. I have merged them, but I did
> not test them; I was sick today and unable to do so.

No worries, there's no rush :)

Sorry I missed them; I thought I had a list that included everything.
Which commits did I miss?

I guess post them as a reply to this thread? That way whether the series
is merged by Greg or not, there's a record here of what the backports
look like.

cheers

> On 4/21/2019 5:21 PM, Michael Ellerman wrote:
>>
>> Hi Greg/Sasha,
>>
>> Please queue up these powerpc patches for 4.4 if you have no objections.
>>
>> cheers
>>
>>
>> Christophe Leroy (1):
>>   powerpc/fsl: Fix the flush of branch predictor.
>>
>> Diana Craciun (10):
>>   powerpc/64: Disable the speculation barrier from the command line
>>   powerpc/64: Make stf barrier PPC_BOOK3S_64 specific.
>>   powerpc/64: Make meltdown reporting Book3S 64 specific
>>   powerpc/fsl: Add barrier_nospec implementation for NXP PowerPC Book3E
>>   powerpc/fsl: Add infrastructure to fixup branch predictor flush
>>   powerpc/fsl: Add macro to flush the branch predictor
>>   powerpc/fsl: Fix spectre_v2 mitigations reporting
>>   powerpc/fsl: Add nospectre_v2 command line argument
>>   powerpc/fsl: Flush the branch predictor at each kernel entry (64bit)
>>   powerpc/fsl: Update Spectre v2 reporting
>>
>> Mauricio Faria de Oliveira (4):
>>   powerpc/rfi-flush: Differentiate enabled and patched flush types
>>   powerpc/pseries: Fix clearing of security feature flags
>>   powerpc: Move default security feature flags
>>   powerpc/pseries: Restore default security feature flags on setup
>>
>> Michael Ellerman (29):
>>   powerpc/xmon: Add RFI flush related fields to paca dump
>>   powerpc/pseries: Support firmware disable of RFI flush
>>   powerpc/powernv: Support firmware disable of RFI flush
>>   powerpc/rfi-flush: Move the logic to avoid a redo into the debugfs
>>     code
>>   powerpc/rfi-flush: Make it possible to call setup_rfi_flush() again
>>   powerpc/rfi-flush: Always enable fallback flush on pseries
>>   powerpc/pseries: Add new H_GET_CPU_CHARACTERISTICS flags
>>   powerpc/rfi-flush: Call setup_rfi_flush() after LPM migration
>>   powerpc: Add security feature flags for Spectre/Meltdown
>>   powerpc/pseries: Set or clear security feature flags
>>   powerpc/powernv: Set or clear security feature flags
>>   powerpc/64s: Move cpu_show_meltdown()
>>   powerpc/64s: Enhance the information in cpu_show_meltdown()
>>   powerpc/powernv: Use the security flags in pnv_setup_rfi_flush()
>>   powerpc/pseries: Use the security flags in pseries_setup_rfi_flush()
>>   powerpc/64s: Wire up cpu_show_spectre_v1()
>>   powerpc/64s: Wire up cpu_show_spectre_v2()
>>   powerpc/64s: Fix section mismatch warnings from setup_rfi_flush()
>>   powerpc/64: Use barrier_nospec in syscall entry
>>   powerpc: Use barrier_nospec in copy_from_user()
>>   powerpc64s: Show ori31 availability in spectre_v1 sysfs file not v2
>>   powerpc/64: Add CONFIG_PPC_BARRIER_NOSPEC
>>   powerpc/64: Call setup_barrier_nospec() from setup_arch()
>>   powerpc/asm: Add a patch_site macro & helpers for patching
>>     instructions
>>   powerpc/64s: Add new security feature flags for count cache flush
>>   powerpc/64s: Add support for software count cache flush
>>   powerpc/pseries: Query hypervisor for count cache flush settings
>>   powerpc/powernv: Query firmware for count cache flush settings
>>   powerpc/security: Fix spectre_v2 reporting
>>
>> Michael Neuling (1):
>>   powerpc: Avoid code patching freed init sections
>>
>> Michal Suchanek (5):
>>   powerpc/64s: Add barrier_nospec
>>   powerpc/64s: Add support for ori barrier_nospec patching
>>   powerpc/64s: Patch barrier_nospec in modules
>>   powerpc/64s: Enable barrier_nospec based on firmware settings
>>   powerpc/64s: Enhance the information in cpu_show_spectre_v1()
>>
>> Nicholas Piggin (2):
>>   powerpc/64s: Improve RFI L1-D cache flush fallback
>>   powerpc/64s: Add support for a store forwarding barrier at kernel
>>     entry/exit
>>
>>  arch/powerpc/Kconfig                         |   7 +-
>>  arch/powerpc/include/asm/asm-prototypes.h    |  21 +
>>  arch/powerpc/include/asm/barrier.h           |  21 +
>>  arch/powerpc/include/asm/code-patching-asm.h |  18 +
>>  arch/powerpc/include/asm/code-patching.h     |   2 +
>>  arch/powerpc/include/asm/exception-64s.h     |  35 ++
>>  arch/powerpc/include/asm/feature-fixups.h    |  40 ++
>>  arch/powerpc/include/asm/hvcall.h            |   5 +
>>  arch/powerpc/include/asm/paca.h              |   3 +-
>>  arch/powerpc/include/asm/ppc-opcode.h        |   1 +
>>  arch/powerpc/include/asm/ppc_asm.h           |  11 +
>>  arch/powerpc/include/asm/security_features.h |  92 ++++
>>  arch/powerpc/include/asm/setup.h             |  23 +-
>>  arch/powerpc/include/asm/uaccess.h           |  18 +-
>>  arch/powerpc/kernel/Makefile                 |   1 +
>>  arch/powerpc/kernel/asm-offsets.c            |   3 +-
>>  arch/powerpc/kernel/entry_64.S               |  69 +++
>>  arch/powerpc/kernel/exceptions-64e.S         |  27 +-
>>  arch/powerpc/kernel/exceptions-64s.S         |  98 +++--
>>  arch/powerpc/kernel/module.c                 |  10 +-
>>  arch/powerpc/kernel/security.c               | 433 +++++++++++++++++++
>>  arch/powerpc/kernel/setup_32.c               |   2 +
>>  arch/powerpc/kernel/setup_64.c               |  50 +--
>>  arch/powerpc/kernel/vmlinux.lds.S            |  33 +-
>>  arch/powerpc/lib/code-patching.c             |  29 ++
>>  arch/powerpc/lib/feature-fixups.c            | 218 +++++++++-
>>  arch/powerpc/mm/mem.c                        |   2 +
>>  arch/powerpc/mm/tlb_low_64e.S                |   7 +
>>  arch/powerpc/platforms/powernv/setup.c       |  99 +++--
>>  arch/powerpc/platforms/pseries/mobility.c    |   3 +
>>  arch/powerpc/platforms/pseries/pseries.h     |   2 +
>>  arch/powerpc/platforms/pseries/setup.c       |  88 +++-
>>  arch/powerpc/xmon/xmon.c                     |   2 +
>>  33 files changed, 1345 insertions(+), 128 deletions(-)
>>  create mode 100644 arch/powerpc/include/asm/asm-prototypes.h
>>  create mode 100644 arch/powerpc/include/asm/code-patching-asm.h
>>  create mode 100644 arch/powerpc/include/asm/security_features.h
>>  create mode 100644 arch/powerpc/kernel/security.c
>>
>> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
>> index 58a1fa979655..01b6c00a7060 100644
>> - --- a/arch/powerpc/Kconfig
>> +++ b/arch/powerpc/Kconfig
>> @@ -136,7 +136,7 @@ config PPC
>>  	select GENERIC_SMP_IDLE_THREAD
>>  	select GENERIC_CMOS_UPDATE
>>  	select GENERIC_TIME_VSYSCALL_OLD
>> - -	select GENERIC_CPU_VULNERABILITIES	if PPC_BOOK3S_64
>> +	select GENERIC_CPU_VULNERABILITIES	if PPC_BARRIER_NOSPEC
>>  	select GENERIC_CLOCKEVENTS
>>  	select GENERIC_CLOCKEVENTS_BROADCAST if SMP
>>  	select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
>> @@ -162,6 +162,11 @@ config PPC
>>  	select ARCH_HAS_DMA_SET_COHERENT_MASK
>>  	select HAVE_ARCH_SECCOMP_FILTER
>>  
>> +config PPC_BARRIER_NOSPEC
>> +    bool
>> +    default y
>> +    depends on PPC_BOOK3S_64 || PPC_FSL_BOOK3E
>> +
>>  config GENERIC_CSUM
>>  	def_bool CPU_LITTLE_ENDIAN
>>  
>> diff --git a/arch/powerpc/include/asm/asm-prototypes.h b/arch/powerpc/include/asm/asm-prototypes.h
>> new file mode 100644
>> index 000000000000..8944c55591cf
>> - --- /dev/null
>> +++ b/arch/powerpc/include/asm/asm-prototypes.h
>> @@ -0,0 +1,21 @@
>> +#ifndef _ASM_POWERPC_ASM_PROTOTYPES_H
>> +#define _ASM_POWERPC_ASM_PROTOTYPES_H
>> +/*
>> + * This file is for prototypes of C functions that are only called
>> + * from asm, and any associated variables.
>> + *
>> + * Copyright 2016, Daniel Axtens, IBM Corporation.
>> + *
>> + * This program is free software; you can redistribute it and/or
>> + * modify it under the terms of the GNU General Public License
>> + * as published by the Free Software Foundation; either version 2
>> + * of the License, or (at your option) any later version.
>> + */
>> +
>> +/* Patch sites */
>> +extern s32 patch__call_flush_count_cache;
>> +extern s32 patch__flush_count_cache_return;
>> +
>> +extern long flush_count_cache;
>> +
>> +#endif /* _ASM_POWERPC_ASM_PROTOTYPES_H */
>> diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
>> index b9e16855a037..e7cb72cdb2ba 100644
>> - --- a/arch/powerpc/include/asm/barrier.h
>> +++ b/arch/powerpc/include/asm/barrier.h
>> @@ -92,4 +92,25 @@ do {									\
>>  #define smp_mb__after_atomic()      smp_mb()
>>  #define smp_mb__before_spinlock()   smp_mb()
>>  
>> +#ifdef CONFIG_PPC_BOOK3S_64
>> +#define NOSPEC_BARRIER_SLOT   nop
>> +#elif defined(CONFIG_PPC_FSL_BOOK3E)
>> +#define NOSPEC_BARRIER_SLOT   nop; nop
>> +#endif
>> +
>> +#ifdef CONFIG_PPC_BARRIER_NOSPEC
>> +/*
>> + * Prevent execution of subsequent instructions until preceding branches have
>> + * been fully resolved and are no longer executing speculatively.
>> + */
>> +#define barrier_nospec_asm NOSPEC_BARRIER_FIXUP_SECTION; NOSPEC_BARRIER_SLOT
>> +
>> +// This also acts as a compiler barrier due to the memory clobber.
>> +#define barrier_nospec() asm (stringify_in_c(barrier_nospec_asm) ::: "memory")
>> +
>> +#else /* !CONFIG_PPC_BARRIER_NOSPEC */
>> +#define barrier_nospec_asm
>> +#define barrier_nospec()
>> +#endif /* CONFIG_PPC_BARRIER_NOSPEC */
>> +
>>  #endif /* _ASM_POWERPC_BARRIER_H */
>> diff --git a/arch/powerpc/include/asm/code-patching-asm.h b/arch/powerpc/include/asm/code-patching-asm.h
>> new file mode 100644
>> index 000000000000..ed7b1448493a
>> - --- /dev/null
>> +++ b/arch/powerpc/include/asm/code-patching-asm.h
>> @@ -0,0 +1,18 @@
>> +/* SPDX-License-Identifier: GPL-2.0+ */
>> +/*
>> + * Copyright 2018, Michael Ellerman, IBM Corporation.
>> + */
>> +#ifndef _ASM_POWERPC_CODE_PATCHING_ASM_H
>> +#define _ASM_POWERPC_CODE_PATCHING_ASM_H
>> +
>> +/* Define a "site" that can be patched */
>> +.macro patch_site label name
>> +	.pushsection ".rodata"
>> +	.balign 4
>> +	.global \name
>> +\name:
>> +	.4byte	\label - .
>> +	.popsection
>> +.endm
>> +
>> +#endif /* _ASM_POWERPC_CODE_PATCHING_ASM_H */
>> diff --git a/arch/powerpc/include/asm/code-patching.h b/arch/powerpc/include/asm/code-patching.h
>> index 840a5509b3f1..a734b4b34d26 100644
>> - --- a/arch/powerpc/include/asm/code-patching.h
>> +++ b/arch/powerpc/include/asm/code-patching.h
>> @@ -28,6 +28,8 @@ unsigned int create_cond_branch(const unsigned int *addr,
>>  				unsigned long target, int flags);
>>  int patch_branch(unsigned int *addr, unsigned long target, int flags);
>>  int patch_instruction(unsigned int *addr, unsigned int instr);
>> +int patch_instruction_site(s32 *addr, unsigned int instr);
>> +int patch_branch_site(s32 *site, unsigned long target, int flags);
>>  
>>  int instr_is_relative_branch(unsigned int instr);
>>  int instr_is_branch_to_addr(const unsigned int *instr, unsigned long addr);
>> diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
>> index 9bddbec441b8..3ed536bec462 100644
>> - --- a/arch/powerpc/include/asm/exception-64s.h
>> +++ b/arch/powerpc/include/asm/exception-64s.h
>> @@ -50,6 +50,27 @@
>>  #define EX_PPR		88	/* SMT thread status register (priority) */
>>  #define EX_CTR		96
>>  
>> +#define STF_ENTRY_BARRIER_SLOT						\
>> +	STF_ENTRY_BARRIER_FIXUP_SECTION;				\
>> +	nop;								\
>> +	nop;								\
>> +	nop
>> +
>> +#define STF_EXIT_BARRIER_SLOT						\
>> +	STF_EXIT_BARRIER_FIXUP_SECTION;					\
>> +	nop;								\
>> +	nop;								\
>> +	nop;								\
>> +	nop;								\
>> +	nop;								\
>> +	nop
>> +
>> +/*
>> + * r10 must be free to use, r13 must be paca
>> + */
>> +#define INTERRUPT_TO_KERNEL						\
>> +	STF_ENTRY_BARRIER_SLOT
>> +
>>  /*
>>   * Macros for annotating the expected destination of (h)rfid
>>   *
>> @@ -66,16 +87,19 @@
>>  	rfid
>>  
>>  #define RFI_TO_USER							\
>> +	STF_EXIT_BARRIER_SLOT;						\
>>  	RFI_FLUSH_SLOT;							\
>>  	rfid;								\
>>  	b	rfi_flush_fallback
>>  
>>  #define RFI_TO_USER_OR_KERNEL						\
>> +	STF_EXIT_BARRIER_SLOT;						\
>>  	RFI_FLUSH_SLOT;							\
>>  	rfid;								\
>>  	b	rfi_flush_fallback
>>  
>>  #define RFI_TO_GUEST							\
>> +	STF_EXIT_BARRIER_SLOT;						\
>>  	RFI_FLUSH_SLOT;							\
>>  	rfid;								\
>>  	b	rfi_flush_fallback
>> @@ -84,21 +108,25 @@
>>  	hrfid
>>  
>>  #define HRFI_TO_USER							\
>> +	STF_EXIT_BARRIER_SLOT;						\
>>  	RFI_FLUSH_SLOT;							\
>>  	hrfid;								\
>>  	b	hrfi_flush_fallback
>>  
>>  #define HRFI_TO_USER_OR_KERNEL						\
>> +	STF_EXIT_BARRIER_SLOT;						\
>>  	RFI_FLUSH_SLOT;							\
>>  	hrfid;								\
>>  	b	hrfi_flush_fallback
>>  
>>  #define HRFI_TO_GUEST							\
>> +	STF_EXIT_BARRIER_SLOT;						\
>>  	RFI_FLUSH_SLOT;							\
>>  	hrfid;								\
>>  	b	hrfi_flush_fallback
>>  
>>  #define HRFI_TO_UNKNOWN							\
>> +	STF_EXIT_BARRIER_SLOT;						\
>>  	RFI_FLUSH_SLOT;							\
>>  	hrfid;								\
>>  	b	hrfi_flush_fallback
>> @@ -226,6 +254,7 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
>>  #define __EXCEPTION_PROLOG_1(area, extra, vec)				\
>>  	OPT_SAVE_REG_TO_PACA(area+EX_PPR, r9, CPU_FTR_HAS_PPR);		\
>>  	OPT_SAVE_REG_TO_PACA(area+EX_CFAR, r10, CPU_FTR_CFAR);		\
>> +	INTERRUPT_TO_KERNEL;						\
>>  	SAVE_CTR(r10, area);						\
>>  	mfcr	r9;							\
>>  	extra(vec);							\
>> @@ -512,6 +541,12 @@ label##_relon_hv:						\
>>  #define _MASKABLE_EXCEPTION_PSERIES(vec, label, h, extra)		\
>>  	__MASKABLE_EXCEPTION_PSERIES(vec, label, h, extra)
>>  
>> +#define MASKABLE_EXCEPTION_OOL(vec, label)				\
>> +	.globl label##_ool;						\
>> +label##_ool:								\
>> +	EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_TEST_PR, vec);		\
>> +	EXCEPTION_PROLOG_PSERIES_1(label##_common, EXC_STD);
>> +
>>  #define MASKABLE_EXCEPTION_PSERIES(loc, vec, label)			\
>>  	. = loc;							\
>>  	.globl label##_pSeries;						\
>> diff --git a/arch/powerpc/include/asm/feature-fixups.h b/arch/powerpc/include/asm/feature-fixups.h
>> index 7068bafbb2d6..145a37ab2d3e 100644
>> - --- a/arch/powerpc/include/asm/feature-fixups.h
>> +++ b/arch/powerpc/include/asm/feature-fixups.h
>> @@ -184,6 +184,22 @@ label##3:					       	\
>>  	FTR_ENTRY_OFFSET label##1b-label##3b;		\
>>  	.popsection;
>>  
>> +#define STF_ENTRY_BARRIER_FIXUP_SECTION			\
>> +953:							\
>> +	.pushsection __stf_entry_barrier_fixup,"a";	\
>> +	.align 2;					\
>> +954:							\
>> +	FTR_ENTRY_OFFSET 953b-954b;			\
>> +	.popsection;
>> +
>> +#define STF_EXIT_BARRIER_FIXUP_SECTION			\
>> +955:							\
>> +	.pushsection __stf_exit_barrier_fixup,"a";	\
>> +	.align 2;					\
>> +956:							\
>> +	FTR_ENTRY_OFFSET 955b-956b;			\
>> +	.popsection;
>> +
>>  #define RFI_FLUSH_FIXUP_SECTION				\
>>  951:							\
>>  	.pushsection __rfi_flush_fixup,"a";		\
>> @@ -192,10 +208,34 @@ label##3:					       	\
>>  	FTR_ENTRY_OFFSET 951b-952b;			\
>>  	.popsection;
>>  
>> +#define NOSPEC_BARRIER_FIXUP_SECTION			\
>> +953:							\
>> +	.pushsection __barrier_nospec_fixup,"a";	\
>> +	.align 2;					\
>> +954:							\
>> +	FTR_ENTRY_OFFSET 953b-954b;			\
>> +	.popsection;
>> +
>> +#define START_BTB_FLUSH_SECTION			\
>> +955:							\
>> +
>> +#define END_BTB_FLUSH_SECTION			\
>> +956:							\
>> +	.pushsection __btb_flush_fixup,"a";	\
>> +	.align 2;							\
>> +957:						\
>> +	FTR_ENTRY_OFFSET 955b-957b;			\
>> +	FTR_ENTRY_OFFSET 956b-957b;			\
>> +	.popsection;
>>  
>>  #ifndef __ASSEMBLY__
>>  
>> +extern long stf_barrier_fallback;
>> +extern long __start___stf_entry_barrier_fixup, __stop___stf_entry_barrier_fixup;
>> +extern long __start___stf_exit_barrier_fixup, __stop___stf_exit_barrier_fixup;
>>  extern long __start___rfi_flush_fixup, __stop___rfi_flush_fixup;
>> +extern long __start___barrier_nospec_fixup, __stop___barrier_nospec_fixup;
>> +extern long __start__btb_flush_fixup, __stop__btb_flush_fixup;
>>  
>>  #endif
>>  
>> diff --git a/arch/powerpc/include/asm/hvcall.h b/arch/powerpc/include/asm/hvcall.h
>> index 449bbb87c257..b57db9d09db9 100644
>> - --- a/arch/powerpc/include/asm/hvcall.h
>> +++ b/arch/powerpc/include/asm/hvcall.h
>> @@ -292,10 +292,15 @@
>>  #define H_CPU_CHAR_L1D_FLUSH_ORI30	(1ull << 61) // IBM bit 2
>>  #define H_CPU_CHAR_L1D_FLUSH_TRIG2	(1ull << 60) // IBM bit 3
>>  #define H_CPU_CHAR_L1D_THREAD_PRIV	(1ull << 59) // IBM bit 4
>> +#define H_CPU_CHAR_BRANCH_HINTS_HONORED	(1ull << 58) // IBM bit 5
>> +#define H_CPU_CHAR_THREAD_RECONFIG_CTRL	(1ull << 57) // IBM bit 6
>> +#define H_CPU_CHAR_COUNT_CACHE_DISABLED	(1ull << 56) // IBM bit 7
>> +#define H_CPU_CHAR_BCCTR_FLUSH_ASSIST	(1ull << 54) // IBM bit 9
>>  
>>  #define H_CPU_BEHAV_FAVOUR_SECURITY	(1ull << 63) // IBM bit 0
>>  #define H_CPU_BEHAV_L1D_FLUSH_PR	(1ull << 62) // IBM bit 1
>>  #define H_CPU_BEHAV_BNDS_CHK_SPEC_BAR	(1ull << 61) // IBM bit 2
>> +#define H_CPU_BEHAV_FLUSH_COUNT_CACHE	(1ull << 58) // IBM bit 5
>>  
>>  #ifndef __ASSEMBLY__
>>  #include <linux/types.h>
>> diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h
>> index 45e2aefece16..08e5df3395fa 100644
>> - --- a/arch/powerpc/include/asm/paca.h
>> +++ b/arch/powerpc/include/asm/paca.h
>> @@ -199,8 +199,7 @@ struct paca_struct {
>>  	 */
>>  	u64 exrfi[13] __aligned(0x80);
>>  	void *rfi_flush_fallback_area;
>> - -	u64 l1d_flush_congruence;
>> - -	u64 l1d_flush_sets;
>> +	u64 l1d_flush_size;
>>  #endif
>>  };
>>  
>> diff --git a/arch/powerpc/include/asm/ppc-opcode.h b/arch/powerpc/include/asm/ppc-opcode.h
>> index 7ab04fc59e24..faf1bb045dee 100644
>> - --- a/arch/powerpc/include/asm/ppc-opcode.h
>> +++ b/arch/powerpc/include/asm/ppc-opcode.h
>> @@ -147,6 +147,7 @@
>>  #define PPC_INST_LWSYNC			0x7c2004ac
>>  #define PPC_INST_SYNC			0x7c0004ac
>>  #define PPC_INST_SYNC_MASK		0xfc0007fe
>> +#define PPC_INST_ISYNC			0x4c00012c
>>  #define PPC_INST_LXVD2X			0x7c000698
>>  #define PPC_INST_MCRXR			0x7c000400
>>  #define PPC_INST_MCRXR_MASK		0xfc0007fe
>> diff --git a/arch/powerpc/include/asm/ppc_asm.h b/arch/powerpc/include/asm/ppc_asm.h
>> index 160bb2311bbb..d219816b3e19 100644
>> - --- a/arch/powerpc/include/asm/ppc_asm.h
>> +++ b/arch/powerpc/include/asm/ppc_asm.h
>> @@ -821,4 +821,15 @@ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,945)
>>  	.long 0x2400004c  /* rfid				*/
>>  #endif /* !CONFIG_PPC_BOOK3E */
>>  #endif /*  __ASSEMBLY__ */
>> +
>> +#ifdef CONFIG_PPC_FSL_BOOK3E
>> +#define BTB_FLUSH(reg)			\
>> +	lis reg,BUCSR_INIT@h;		\
>> +	ori reg,reg,BUCSR_INIT@l;	\
>> +	mtspr SPRN_BUCSR,reg;		\
>> +	isync;
>> +#else
>> +#define BTB_FLUSH(reg)
>> +#endif /* CONFIG_PPC_FSL_BOOK3E */
>> +
>>  #endif /* _ASM_POWERPC_PPC_ASM_H */
>> diff --git a/arch/powerpc/include/asm/security_features.h b/arch/powerpc/include/asm/security_features.h
>> new file mode 100644
>> index 000000000000..759597bf0fd8
>> - --- /dev/null
>> +++ b/arch/powerpc/include/asm/security_features.h
>> @@ -0,0 +1,92 @@
>> +/* SPDX-License-Identifier: GPL-2.0+ */
>> +/*
>> + * Security related feature bit definitions.
>> + *
>> + * Copyright 2018, Michael Ellerman, IBM Corporation.
>> + */
>> +
>> +#ifndef _ASM_POWERPC_SECURITY_FEATURES_H
>> +#define _ASM_POWERPC_SECURITY_FEATURES_H
>> +
>> +
>> +extern unsigned long powerpc_security_features;
>> +extern bool rfi_flush;
>> +
>> +/* These are bit flags */
>> +enum stf_barrier_type {
>> +	STF_BARRIER_NONE	= 0x1,
>> +	STF_BARRIER_FALLBACK	= 0x2,
>> +	STF_BARRIER_EIEIO	= 0x4,
>> +	STF_BARRIER_SYNC_ORI	= 0x8,
>> +};
>> +
>> +void setup_stf_barrier(void);
>> +void do_stf_barrier_fixups(enum stf_barrier_type types);
>> +void setup_count_cache_flush(void);
>> +
>> +static inline void security_ftr_set(unsigned long feature)
>> +{
>> +	powerpc_security_features |= feature;
>> +}
>> +
>> +static inline void security_ftr_clear(unsigned long feature)
>> +{
>> +	powerpc_security_features &= ~feature;
>> +}
>> +
>> +static inline bool security_ftr_enabled(unsigned long feature)
>> +{
>> +	return !!(powerpc_security_features & feature);
>> +}
>> +
>> +
>> +// Features indicating support for Spectre/Meltdown mitigations
>> +
>> +// The L1-D cache can be flushed with ori r30,r30,0
>> +#define SEC_FTR_L1D_FLUSH_ORI30		0x0000000000000001ull
>> +
>> +// The L1-D cache can be flushed with mtspr 882,r0 (aka SPRN_TRIG2)
>> +#define SEC_FTR_L1D_FLUSH_TRIG2		0x0000000000000002ull
>> +
>> +// ori r31,r31,0 acts as a speculation barrier
>> +#define SEC_FTR_SPEC_BAR_ORI31		0x0000000000000004ull
>> +
>> +// Speculation past bctr is disabled
>> +#define SEC_FTR_BCCTRL_SERIALISED	0x0000000000000008ull
>> +
>> +// Entries in L1-D are private to a SMT thread
>> +#define SEC_FTR_L1D_THREAD_PRIV		0x0000000000000010ull
>> +
>> +// Indirect branch prediction cache disabled
>> +#define SEC_FTR_COUNT_CACHE_DISABLED	0x0000000000000020ull
>> +
>> +// bcctr 2,0,0 triggers a hardware assisted count cache flush
>> +#define SEC_FTR_BCCTR_FLUSH_ASSIST	0x0000000000000800ull
>> +
>> +
>> +// Features indicating need for Spectre/Meltdown mitigations
>> +
>> +// The L1-D cache should be flushed on MSR[HV] 1->0 transition (hypervisor to guest)
>> +#define SEC_FTR_L1D_FLUSH_HV		0x0000000000000040ull
>> +
>> +// The L1-D cache should be flushed on MSR[PR] 0->1 transition (kernel to userspace)
>> +#define SEC_FTR_L1D_FLUSH_PR		0x0000000000000080ull
>> +
>> +// A speculation barrier should be used for bounds checks (Spectre variant 1)
>> +#define SEC_FTR_BNDS_CHK_SPEC_BAR	0x0000000000000100ull
>> +
>> +// Firmware configuration indicates user favours security over performance
>> +#define SEC_FTR_FAVOUR_SECURITY		0x0000000000000200ull
>> +
>> +// Software required to flush count cache on context switch
>> +#define SEC_FTR_FLUSH_COUNT_CACHE	0x0000000000000400ull
>> +
>> +
>> +// Features enabled by default
>> +#define SEC_FTR_DEFAULT \
>> +	(SEC_FTR_L1D_FLUSH_HV | \
>> +	 SEC_FTR_L1D_FLUSH_PR | \
>> +	 SEC_FTR_BNDS_CHK_SPEC_BAR | \
>> +	 SEC_FTR_FAVOUR_SECURITY)
>> +
>> +#endif /* _ASM_POWERPC_SECURITY_FEATURES_H */
>> diff --git a/arch/powerpc/include/asm/setup.h b/arch/powerpc/include/asm/setup.h
>> index 7916b56f2e60..d299479c770b 100644
>> - --- a/arch/powerpc/include/asm/setup.h
>> +++ b/arch/powerpc/include/asm/setup.h
>> @@ -8,6 +8,7 @@ extern void ppc_printk_progress(char *s, unsigned short hex);
>>  
>>  extern unsigned int rtas_data;
>>  extern unsigned long long memory_limit;
>> +extern bool init_mem_is_free;
>>  extern unsigned long klimit;
>>  extern void *zalloc_maybe_bootmem(size_t size, gfp_t mask);
>>  
>> @@ -36,8 +37,28 @@ enum l1d_flush_type {
>>  	L1D_FLUSH_MTTRIG	= 0x8,
>>  };
>>  
>> - -void __init setup_rfi_flush(enum l1d_flush_type, bool enable);
>> +void setup_rfi_flush(enum l1d_flush_type, bool enable);
>>  void do_rfi_flush_fixups(enum l1d_flush_type types);
>> +#ifdef CONFIG_PPC_BARRIER_NOSPEC
>> +void setup_barrier_nospec(void);
>> +#else
>> +static inline void setup_barrier_nospec(void) { };
>> +#endif
>> +void do_barrier_nospec_fixups(bool enable);
>> +extern bool barrier_nospec_enabled;
>> +
>> +#ifdef CONFIG_PPC_BARRIER_NOSPEC
>> +void do_barrier_nospec_fixups_range(bool enable, void *start, void *end);
>> +#else
>> +static inline void do_barrier_nospec_fixups_range(bool enable, void *start, void *end) { };
>> +#endif
>> +
>> +#ifdef CONFIG_PPC_FSL_BOOK3E
>> +void setup_spectre_v2(void);
>> +#else
>> +static inline void setup_spectre_v2(void) {};
>> +#endif
>> +void do_btb_flush_fixups(void);
>>  
>>  #endif /* !__ASSEMBLY__ */
>>  
>> diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
>> index 05f1389228d2..e51ce5a0e221 100644
>> - --- a/arch/powerpc/include/asm/uaccess.h
>> +++ b/arch/powerpc/include/asm/uaccess.h
>> @@ -269,6 +269,7 @@ do {								\
>>  	__chk_user_ptr(ptr);					\
>>  	if (!is_kernel_addr((unsigned long)__gu_addr))		\
>>  		might_fault();					\
>> +	barrier_nospec();					\
>>  	__get_user_size(__gu_val, __gu_addr, (size), __gu_err);	\
>>  	(x) = (__typeof__(*(ptr)))__gu_val;			\
>>  	__gu_err;						\
>> @@ -283,6 +284,7 @@ do {								\
>>  	__chk_user_ptr(ptr);					\
>>  	if (!is_kernel_addr((unsigned long)__gu_addr))		\
>>  		might_fault();					\
>> +	barrier_nospec();					\
>>  	__get_user_size(__gu_val, __gu_addr, (size), __gu_err);	\
>>  	(x) = (__force __typeof__(*(ptr)))__gu_val;			\
>>  	__gu_err;						\
>> @@ -295,8 +297,10 @@ do {								\
>>  	unsigned long  __gu_val = 0;					\
>>  	__typeof__(*(ptr)) __user *__gu_addr = (ptr);		\
>>  	might_fault();							\
>> - -	if (access_ok(VERIFY_READ, __gu_addr, (size)))			\
>> +	if (access_ok(VERIFY_READ, __gu_addr, (size))) {		\
>> +		barrier_nospec();					\
>>  		__get_user_size(__gu_val, __gu_addr, (size), __gu_err);	\
>> +	}								\
>>  	(x) = (__force __typeof__(*(ptr)))__gu_val;				\
>>  	__gu_err;							\
>>  })
>> @@ -307,6 +311,7 @@ do {								\
>>  	unsigned long __gu_val;					\
>>  	__typeof__(*(ptr)) __user *__gu_addr = (ptr);	\
>>  	__chk_user_ptr(ptr);					\
>> +	barrier_nospec();					\
>>  	__get_user_size(__gu_val, __gu_addr, (size), __gu_err);	\
>>  	(x) = (__force __typeof__(*(ptr)))__gu_val;			\
>>  	__gu_err;						\
>> @@ -323,8 +328,10 @@ extern unsigned long __copy_tofrom_user(void __user *to,
>>  static inline unsigned long copy_from_user(void *to,
>>  		const void __user *from, unsigned long n)
>>  {
>> - -	if (likely(access_ok(VERIFY_READ, from, n)))
>> +	if (likely(access_ok(VERIFY_READ, from, n))) {
>> +		barrier_nospec();
>>  		return __copy_tofrom_user((__force void __user *)to, from, n);
>> +	}
>>  	memset(to, 0, n);
>>  	return n;
>>  }
>> @@ -359,21 +366,27 @@ static inline unsigned long __copy_from_user_inatomic(void *to,
>>  
>>  		switch (n) {
>>  		case 1:
>> +			barrier_nospec();
>>  			__get_user_size(*(u8 *)to, from, 1, ret);
>>  			break;
>>  		case 2:
>> +			barrier_nospec();
>>  			__get_user_size(*(u16 *)to, from, 2, ret);
>>  			break;
>>  		case 4:
>> +			barrier_nospec();
>>  			__get_user_size(*(u32 *)to, from, 4, ret);
>>  			break;
>>  		case 8:
>> +			barrier_nospec();
>>  			__get_user_size(*(u64 *)to, from, 8, ret);
>>  			break;
>>  		}
>>  		if (ret == 0)
>>  			return 0;
>>  	}
>> +
>> +	barrier_nospec();
>>  	return __copy_tofrom_user((__force void __user *)to, from, n);
>>  }
>>  
>> @@ -400,6 +413,7 @@ static inline unsigned long __copy_to_user_inatomic(void __user *to,
>>  		if (ret == 0)
>>  			return 0;
>>  	}
>> +
>>  	return __copy_tofrom_user(to, (__force const void __user *)from, n);
>>  }
>>  
>> diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
>> index ba336930d448..22ed3c32fca8 100644
>> - --- a/arch/powerpc/kernel/Makefile
>> +++ b/arch/powerpc/kernel/Makefile
>> @@ -44,6 +44,7 @@ obj-$(CONFIG_PPC_BOOK3S_64)	+= cpu_setup_power.o
>>  obj-$(CONFIG_PPC_BOOK3S_64)	+= mce.o mce_power.o
>>  obj64-$(CONFIG_RELOCATABLE)	+= reloc_64.o
>>  obj-$(CONFIG_PPC_BOOK3E_64)	+= exceptions-64e.o idle_book3e.o
>> +obj-$(CONFIG_PPC_BARRIER_NOSPEC) += security.o
>>  obj-$(CONFIG_PPC64)		+= vdso64/
>>  obj-$(CONFIG_ALTIVEC)		+= vecemu.o
>>  obj-$(CONFIG_PPC_970_NAP)	+= idle_power4.o
>> diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
>> index d92705e3a0c1..de3c29c51503 100644
>> - --- a/arch/powerpc/kernel/asm-offsets.c
>> +++ b/arch/powerpc/kernel/asm-offsets.c
>> @@ -245,8 +245,7 @@ int main(void)
>>  	DEFINE(PACA_IN_MCE, offsetof(struct paca_struct, in_mce));
>>  	DEFINE(PACA_RFI_FLUSH_FALLBACK_AREA, offsetof(struct paca_struct, rfi_flush_fallback_area));
>>  	DEFINE(PACA_EXRFI, offsetof(struct paca_struct, exrfi));
>> - -	DEFINE(PACA_L1D_FLUSH_CONGRUENCE, offsetof(struct paca_struct, l1d_flush_congruence));
>> - -	DEFINE(PACA_L1D_FLUSH_SETS, offsetof(struct paca_struct, l1d_flush_sets));
>> +	DEFINE(PACA_L1D_FLUSH_SIZE, offsetof(struct paca_struct, l1d_flush_size));
>>  #endif
>>  	DEFINE(PACAHWCPUID, offsetof(struct paca_struct, hw_cpu_id));
>>  	DEFINE(PACAKEXECSTATE, offsetof(struct paca_struct, kexec_state));
>> diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
>> index 59be96917369..6d36a4fb4acf 100644
>> - --- a/arch/powerpc/kernel/entry_64.S
>> +++ b/arch/powerpc/kernel/entry_64.S
>> @@ -25,6 +25,7 @@
>>  #include <asm/page.h>
>>  #include <asm/mmu.h>
>>  #include <asm/thread_info.h>
>> +#include <asm/code-patching-asm.h>
>>  #include <asm/ppc_asm.h>
>>  #include <asm/asm-offsets.h>
>>  #include <asm/cputable.h>
>> @@ -36,6 +37,7 @@
>>  #include <asm/hw_irq.h>
>>  #include <asm/context_tracking.h>
>>  #include <asm/tm.h>
>> +#include <asm/barrier.h>
>>  #ifdef CONFIG_PPC_BOOK3S
>>  #include <asm/exception-64s.h>
>>  #else
>> @@ -75,6 +77,11 @@ END_FTR_SECTION_IFSET(CPU_FTR_TM)
>>  	std	r0,GPR0(r1)
>>  	std	r10,GPR1(r1)
>>  	beq	2f			/* if from kernel mode */
>> +#ifdef CONFIG_PPC_FSL_BOOK3E
>> +START_BTB_FLUSH_SECTION
>> +	BTB_FLUSH(r10)
>> +END_BTB_FLUSH_SECTION
>> +#endif
>>  	ACCOUNT_CPU_USER_ENTRY(r10, r11)
>>  2:	std	r2,GPR2(r1)
>>  	std	r3,GPR3(r1)
>> @@ -177,6 +184,15 @@ system_call:			/* label this so stack traces look sane */
>>  	clrldi	r8,r8,32
>>  15:
>>  	slwi	r0,r0,4
>> +
>> +	barrier_nospec_asm
>> +	/*
>> +	 * Prevent the load of the handler below (based on the user-passed
>> +	 * system call number) being speculatively executed until the test
>> +	 * against NR_syscalls and branch to .Lsyscall_enosys above has
>> +	 * committed.
>> +	 */
>> +
>>  	ldx	r12,r11,r0	/* Fetch system call handler [ptr] */
>>  	mtctr   r12
>>  	bctrl			/* Call handler */
>> @@ -440,6 +456,57 @@ _GLOBAL(ret_from_kernel_thread)
>>  	li	r3,0
>>  	b	.Lsyscall_exit
>>  
>> +#ifdef CONFIG_PPC_BOOK3S_64
>> +
>> +#define FLUSH_COUNT_CACHE	\
>> +1:	nop;			\
>> +	patch_site 1b, patch__call_flush_count_cache
>> +
>> +
>> +#define BCCTR_FLUSH	.long 0x4c400420
>> +
>> +.macro nops number
>> +	.rept \number
>> +	nop
>> +	.endr
>> +.endm
>> +
>> +.balign 32
>> +.global flush_count_cache
>> +flush_count_cache:
>> +	/* Save LR into r9 */
>> +	mflr	r9
>> +
>> +	.rept 64
>> +	bl	.+4
>> +	.endr
>> +	b	1f
>> +	nops	6
>> +
>> +	.balign 32
>> +	/* Restore LR */
>> +1:	mtlr	r9
>> +	li	r9,0x7fff
>> +	mtctr	r9
>> +
>> +	BCCTR_FLUSH
>> +
>> +2:	nop
>> +	patch_site 2b, patch__flush_count_cache_return
>> +
>> +	nops	3
>> +
>> +	.rept 278
>> +	.balign 32
>> +	BCCTR_FLUSH
>> +	nops	7
>> +	.endr
>> +
>> +	blr
>> +#else
>> +#define FLUSH_COUNT_CACHE
>> +#endif /* CONFIG_PPC_BOOK3S_64 */
>> +
>>  /*
>>   * This routine switches between two different tasks.  The process
>>   * state of one is saved on its kernel stack.  Then the state
>> @@ -503,6 +570,8 @@ BEGIN_FTR_SECTION
>>  END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
>>  #endif
>>  
>> +	FLUSH_COUNT_CACHE
>> +
>>  #ifdef CONFIG_SMP
>>  	/* We need a sync somewhere here to make sure that if the
>>  	 * previous task gets rescheduled on another CPU, it sees all
>> diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
>> index 5cc93f0b52ca..48ec841ea1bf 100644
>> - --- a/arch/powerpc/kernel/exceptions-64e.S
>> +++ b/arch/powerpc/kernel/exceptions-64e.S
>> @@ -295,7 +295,8 @@ END_FTR_SECTION_IFSET(CPU_FTR_EMB_HV)
>>  	andi.	r10,r11,MSR_PR;		/* save stack pointer */	    \
>>  	beq	1f;			/* branch around if supervisor */   \
>>  	ld	r1,PACAKSAVE(r13);	/* get kernel stack coming from usr */\
>> - -1:	cmpdi	cr1,r1,0;		/* check if SP makes sense */	    \
>> +1:	type##_BTB_FLUSH		\
>> +	cmpdi	cr1,r1,0;		/* check if SP makes sense */	    \
>>  	bge-	cr1,exc_##n##_bad_stack;/* bad stack (TODO: out of line) */ \
>>  	mfspr	r10,SPRN_##type##_SRR0;	/* read SRR0 before touching stack */
>>  
>> @@ -327,6 +328,30 @@ END_FTR_SECTION_IFSET(CPU_FTR_EMB_HV)
>>  #define SPRN_MC_SRR0	SPRN_MCSRR0
>>  #define SPRN_MC_SRR1	SPRN_MCSRR1
>>  
>> +#ifdef CONFIG_PPC_FSL_BOOK3E
>> +#define GEN_BTB_FLUSH			\
>> +	START_BTB_FLUSH_SECTION		\
>> +		beq 1f;			\
>> +		BTB_FLUSH(r10)			\
>> +		1:		\
>> +	END_BTB_FLUSH_SECTION
>> +
>> +#define CRIT_BTB_FLUSH			\
>> +	START_BTB_FLUSH_SECTION		\
>> +		BTB_FLUSH(r10)		\
>> +	END_BTB_FLUSH_SECTION
>> +
>> +#define DBG_BTB_FLUSH CRIT_BTB_FLUSH
>> +#define MC_BTB_FLUSH CRIT_BTB_FLUSH
>> +#define GDBELL_BTB_FLUSH GEN_BTB_FLUSH
>> +#else
>> +#define GEN_BTB_FLUSH
>> +#define CRIT_BTB_FLUSH
>> +#define DBG_BTB_FLUSH
>> +#define MC_BTB_FLUSH
>> +#define GDBELL_BTB_FLUSH
>> +#endif
>> +
>>  #define NORMAL_EXCEPTION_PROLOG(n, intnum, addition)			    \
>>  	EXCEPTION_PROLOG(n, intnum, GEN, addition##_GEN(n))
>>  
>> diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
>> index 938a30fef031..10e7cec9553d 100644
>> - --- a/arch/powerpc/kernel/exceptions-64s.S
>> +++ b/arch/powerpc/kernel/exceptions-64s.S
>> @@ -36,6 +36,7 @@ BEGIN_FTR_SECTION						\
>>  END_FTR_SECTION_IFSET(CPU_FTR_REAL_LE)				\
>>  	mr	r9,r13 ;					\
>>  	GET_PACA(r13) ;						\
>> +	INTERRUPT_TO_KERNEL ;					\
>>  	mfspr	r11,SPRN_SRR0 ;					\
>>  0:
>>  
>> @@ -292,7 +293,9 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
>>  	. = 0x900
>>  	.globl decrementer_pSeries
>>  decrementer_pSeries:
>> - -	_MASKABLE_EXCEPTION_PSERIES(0x900, decrementer, EXC_STD, SOFTEN_TEST_PR)
>> +	SET_SCRATCH0(r13)
>> +	EXCEPTION_PROLOG_0(PACA_EXGEN)
>> +	b	decrementer_ool
>>  
>>  	STD_EXCEPTION_HV(0x980, 0x982, hdecrementer)
>>  
>> @@ -319,6 +322,7 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
>>  	OPT_GET_SPR(r9, SPRN_PPR, CPU_FTR_HAS_PPR);
>>  	HMT_MEDIUM;
>>  	std	r10,PACA_EXGEN+EX_R10(r13)
>> +	INTERRUPT_TO_KERNEL
>>  	OPT_SAVE_REG_TO_PACA(PACA_EXGEN+EX_PPR, r9, CPU_FTR_HAS_PPR);
>>  	mfcr	r9
>>  	KVMTEST(0xc00)
>> @@ -607,6 +611,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
>>  
>>  	.align	7
>>  	/* moved from 0xe00 */
>> +	MASKABLE_EXCEPTION_OOL(0x900, decrementer)
>>  	STD_EXCEPTION_HV_OOL(0xe02, h_data_storage)
>>  	KVM_HANDLER_SKIP(PACA_EXGEN, EXC_HV, 0xe02)
>>  	STD_EXCEPTION_HV_OOL(0xe22, h_instr_storage)
>> @@ -1564,6 +1569,21 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
>>  	blr
>>  #endif
>>  
>> +	.balign 16
>> +	.globl stf_barrier_fallback
>> +stf_barrier_fallback:
>> +	std	r9,PACA_EXRFI+EX_R9(r13)
>> +	std	r10,PACA_EXRFI+EX_R10(r13)
>> +	sync
>> +	ld	r9,PACA_EXRFI+EX_R9(r13)
>> +	ld	r10,PACA_EXRFI+EX_R10(r13)
>> +	ori	31,31,0
>> +	.rept 14
>> +	b	1f
>> +1:
>> +	.endr
>> +	blr
>> +
>>  	.globl rfi_flush_fallback
>>  rfi_flush_fallback:
>>  	SET_SCRATCH0(r13);
>> @@ -1571,39 +1591,37 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
>>  	std	r9,PACA_EXRFI+EX_R9(r13)
>>  	std	r10,PACA_EXRFI+EX_R10(r13)
>>  	std	r11,PACA_EXRFI+EX_R11(r13)
>> - -	std	r12,PACA_EXRFI+EX_R12(r13)
>> - -	std	r8,PACA_EXRFI+EX_R13(r13)
>>  	mfctr	r9
>>  	ld	r10,PACA_RFI_FLUSH_FALLBACK_AREA(r13)
>> - -	ld	r11,PACA_L1D_FLUSH_SETS(r13)
>> - -	ld	r12,PACA_L1D_FLUSH_CONGRUENCE(r13)
>> - -	/*
>> - -	 * The load adresses are at staggered offsets within cachelines,
>> - -	 * which suits some pipelines better (on others it should not
>> - -	 * hurt).
>> - -	 */
>> - -	addi	r12,r12,8
>> +	ld	r11,PACA_L1D_FLUSH_SIZE(r13)
>> +	srdi	r11,r11,(7 + 3) /* 128 byte lines, unrolled 8x */
>>  	mtctr	r11
>>  	DCBT_STOP_ALL_STREAM_IDS(r11) /* Stop prefetch streams */
>>  
>>  	/* order ld/st prior to dcbt stop all streams with flushing */
>>  	sync
>> - -1:	li	r8,0
>> - -	.rept	8 /* 8-way set associative */
>> - -	ldx	r11,r10,r8
>> - -	add	r8,r8,r12
>> - -	xor	r11,r11,r11	// Ensure r11 is 0 even if fallback area is not
>> - -	add	r8,r8,r11	// Add 0, this creates a dependency on the ldx
>> - -	.endr
>> - -	addi	r10,r10,128 /* 128 byte cache line */
>> +
>> +	/*
>> +	 * The load addresses are at staggered offsets within cachelines,
>> +	 * which suits some pipelines better (on others it should not
>> +	 * hurt).
>> +	 */
>> +1:
>> +	ld	r11,(0x80 + 8)*0(r10)
>> +	ld	r11,(0x80 + 8)*1(r10)
>> +	ld	r11,(0x80 + 8)*2(r10)
>> +	ld	r11,(0x80 + 8)*3(r10)
>> +	ld	r11,(0x80 + 8)*4(r10)
>> +	ld	r11,(0x80 + 8)*5(r10)
>> +	ld	r11,(0x80 + 8)*6(r10)
>> +	ld	r11,(0x80 + 8)*7(r10)
>> +	addi	r10,r10,0x80*8
>>  	bdnz	1b
>>  
>>  	mtctr	r9
>>  	ld	r9,PACA_EXRFI+EX_R9(r13)
>>  	ld	r10,PACA_EXRFI+EX_R10(r13)
>>  	ld	r11,PACA_EXRFI+EX_R11(r13)
>> - -	ld	r12,PACA_EXRFI+EX_R12(r13)
>> - -	ld	r8,PACA_EXRFI+EX_R13(r13)
>>  	GET_SCRATCH0(r13);
>>  	rfid
>>  
>> @@ -1614,39 +1632,37 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
>>  	std	r9,PACA_EXRFI+EX_R9(r13)
>>  	std	r10,PACA_EXRFI+EX_R10(r13)
>>  	std	r11,PACA_EXRFI+EX_R11(r13)
>> - -	std	r12,PACA_EXRFI+EX_R12(r13)
>> - -	std	r8,PACA_EXRFI+EX_R13(r13)
>>  	mfctr	r9
>>  	ld	r10,PACA_RFI_FLUSH_FALLBACK_AREA(r13)
>> - -	ld	r11,PACA_L1D_FLUSH_SETS(r13)
>> - -	ld	r12,PACA_L1D_FLUSH_CONGRUENCE(r13)
>> - -	/*
>> - -	 * The load adresses are at staggered offsets within cachelines,
>> - -	 * which suits some pipelines better (on others it should not
>> - -	 * hurt).
>> - -	 */
>> - -	addi	r12,r12,8
>> +	ld	r11,PACA_L1D_FLUSH_SIZE(r13)
>> +	srdi	r11,r11,(7 + 3) /* 128 byte lines, unrolled 8x */
>>  	mtctr	r11
>>  	DCBT_STOP_ALL_STREAM_IDS(r11) /* Stop prefetch streams */
>>  
>>  	/* order ld/st prior to dcbt stop all streams with flushing */
>>  	sync
>> - -1:	li	r8,0
>> - -	.rept	8 /* 8-way set associative */
>> - -	ldx	r11,r10,r8
>> - -	add	r8,r8,r12
>> - -	xor	r11,r11,r11	// Ensure r11 is 0 even if fallback area is not
>> - -	add	r8,r8,r11	// Add 0, this creates a dependency on the ldx
>> - -	.endr
>> - -	addi	r10,r10,128 /* 128 byte cache line */
>> +
>> +	/*
>> +	 * The load addresses are at staggered offsets within cachelines,
>> +	 * which suits some pipelines better (on others it should not
>> +	 * hurt).
>> +	 */
>> +1:
>> +	ld	r11,(0x80 + 8)*0(r10)
>> +	ld	r11,(0x80 + 8)*1(r10)
>> +	ld	r11,(0x80 + 8)*2(r10)
>> +	ld	r11,(0x80 + 8)*3(r10)
>> +	ld	r11,(0x80 + 8)*4(r10)
>> +	ld	r11,(0x80 + 8)*5(r10)
>> +	ld	r11,(0x80 + 8)*6(r10)
>> +	ld	r11,(0x80 + 8)*7(r10)
>> +	addi	r10,r10,0x80*8
>>  	bdnz	1b
>>  
>>  	mtctr	r9
>>  	ld	r9,PACA_EXRFI+EX_R9(r13)
>>  	ld	r10,PACA_EXRFI+EX_R10(r13)
>>  	ld	r11,PACA_EXRFI+EX_R11(r13)
>> - -	ld	r12,PACA_EXRFI+EX_R12(r13)
>> - -	ld	r8,PACA_EXRFI+EX_R13(r13)
>>  	GET_SCRATCH0(r13);
>>  	hrfid
>>  
>> diff --git a/arch/powerpc/kernel/module.c b/arch/powerpc/kernel/module.c
>> index 9547381b631a..ff009be97a42 100644
>> - --- a/arch/powerpc/kernel/module.c
>> +++ b/arch/powerpc/kernel/module.c
>> @@ -67,7 +67,15 @@ int module_finalize(const Elf_Ehdr *hdr,
>>  		do_feature_fixups(powerpc_firmware_features,
>>  				  (void *)sect->sh_addr,
>>  				  (void *)sect->sh_addr + sect->sh_size);
>> - -#endif
>> +#endif /* CONFIG_PPC64 */
>> +
>> +#ifdef CONFIG_PPC_BARRIER_NOSPEC
>> +	sect = find_section(hdr, sechdrs, "__spec_barrier_fixup");
>> +	if (sect != NULL)
>> +		do_barrier_nospec_fixups_range(barrier_nospec_enabled,
>> +				  (void *)sect->sh_addr,
>> +				  (void *)sect->sh_addr + sect->sh_size);
>> +#endif /* CONFIG_PPC_BARRIER_NOSPEC */
>>  
>>  	sect = find_section(hdr, sechdrs, "__lwsync_fixup");
>>  	if (sect != NULL)
>> diff --git a/arch/powerpc/kernel/security.c b/arch/powerpc/kernel/security.c
>> new file mode 100644
>> index 000000000000..58f0602a92b9
>> - --- /dev/null
>> +++ b/arch/powerpc/kernel/security.c
>> @@ -0,0 +1,433 @@
>> +// SPDX-License-Identifier: GPL-2.0+
>> +//
>> +// Security related flags and so on.
>> +//
>> +// Copyright 2018, Michael Ellerman, IBM Corporation.
>> +
>> +#include <linux/kernel.h>
>> +#include <linux/debugfs.h>
>> +#include <linux/device.h>
>> +#include <linux/seq_buf.h>
>> +
>> +#include <asm/debug.h>
>> +#include <asm/asm-prototypes.h>
>> +#include <asm/code-patching.h>
>> +#include <asm/security_features.h>
>> +#include <asm/setup.h>
>> +
>> +
>> +unsigned long powerpc_security_features __read_mostly = SEC_FTR_DEFAULT;
>> +
>> +enum count_cache_flush_type {
>> +	COUNT_CACHE_FLUSH_NONE	= 0x1,
>> +	COUNT_CACHE_FLUSH_SW	= 0x2,
>> +	COUNT_CACHE_FLUSH_HW	= 0x4,
>> +};
>> +static enum count_cache_flush_type count_cache_flush_type = COUNT_CACHE_FLUSH_NONE;
>> +
>> +bool barrier_nospec_enabled;
>> +static bool no_nospec;
>> +static bool btb_flush_enabled;
>> +#ifdef CONFIG_PPC_FSL_BOOK3E
>> +static bool no_spectrev2;
>> +#endif
>> +
>> +static void enable_barrier_nospec(bool enable)
>> +{
>> +	barrier_nospec_enabled = enable;
>> +	do_barrier_nospec_fixups(enable);
>> +}
>> +
>> +void setup_barrier_nospec(void)
>> +{
>> +	bool enable;
>> +
>> +	/*
>> +	 * It would make sense to check SEC_FTR_SPEC_BAR_ORI31 below as well.
>> +	 * But there's a good reason not to. The two flags we check below are
>> +	 * both enabled by default in the kernel, so if the hcall is not
>> +	 * functional they will be enabled.
>> +	 * On a system where the host firmware has been updated (so the ori
>> +	 * functions as a barrier), but on which the hypervisor (KVM/Qemu) has
>> +	 * not been updated, we would like to enable the barrier. Dropping the
>> +	 * check for SEC_FTR_SPEC_BAR_ORI31 achieves that. The only downside is
>> +	 * we potentially enable the barrier on systems where the host firmware
>> +	 * is not updated, but that's harmless as it's a no-op.
>> +	 */
>> +	enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) &&
>> +		 security_ftr_enabled(SEC_FTR_BNDS_CHK_SPEC_BAR);
>> +
>> +	if (!no_nospec)
>> +		enable_barrier_nospec(enable);
>> +}
>> +
>> +static int __init handle_nospectre_v1(char *p)
>> +{
>> +	no_nospec = true;
>> +
>> +	return 0;
>> +}
>> +early_param("nospectre_v1", handle_nospectre_v1);
>> +
>> +#ifdef CONFIG_DEBUG_FS
>> +static int barrier_nospec_set(void *data, u64 val)
>> +{
>> +	switch (val) {
>> +	case 0:
>> +	case 1:
>> +		break;
>> +	default:
>> +		return -EINVAL;
>> +	}
>> +
>> +	if (!!val == !!barrier_nospec_enabled)
>> +		return 0;
>> +
>> +	enable_barrier_nospec(!!val);
>> +
>> +	return 0;
>> +}
>> +
>> +static int barrier_nospec_get(void *data, u64 *val)
>> +{
>> +	*val = barrier_nospec_enabled ? 1 : 0;
>> +	return 0;
>> +}
>> +
>> +DEFINE_SIMPLE_ATTRIBUTE(fops_barrier_nospec,
>> +			barrier_nospec_get, barrier_nospec_set, "%llu\n");
>> +
>> +static __init int barrier_nospec_debugfs_init(void)
>> +{
>> +	debugfs_create_file("barrier_nospec", 0600, powerpc_debugfs_root, NULL,
>> +			    &fops_barrier_nospec);
>> +	return 0;
>> +}
>> +device_initcall(barrier_nospec_debugfs_init);
>> +#endif /* CONFIG_DEBUG_FS */
>> +
>> +#ifdef CONFIG_PPC_FSL_BOOK3E
>> +static int __init handle_nospectre_v2(char *p)
>> +{
>> +	no_spectrev2 = true;
>> +
>> +	return 0;
>> +}
>> +early_param("nospectre_v2", handle_nospectre_v2);
>> +void setup_spectre_v2(void)
>> +{
>> +	if (no_spectrev2)
>> +		do_btb_flush_fixups();
>> +	else
>> +		btb_flush_enabled = true;
>> +}
>> +#endif /* CONFIG_PPC_FSL_BOOK3E */
>> +
>> +#ifdef CONFIG_PPC_BOOK3S_64
>> +ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr, char *buf)
>> +{
>> +	bool thread_priv;
>> +
>> +	thread_priv = security_ftr_enabled(SEC_FTR_L1D_THREAD_PRIV);
>> +
>> +	if (rfi_flush || thread_priv) {
>> +		struct seq_buf s;
>> +		seq_buf_init(&s, buf, PAGE_SIZE - 1);
>> +
>> +		seq_buf_printf(&s, "Mitigation: ");
>> +
>> +		if (rfi_flush)
>> +			seq_buf_printf(&s, "RFI Flush");
>> +
>> +		if (rfi_flush && thread_priv)
>> +			seq_buf_printf(&s, ", ");
>> +
>> +		if (thread_priv)
>> +			seq_buf_printf(&s, "L1D private per thread");
>> +
>> +		seq_buf_printf(&s, "\n");
>> +
>> +		return s.len;
>> +	}
>> +
>> +	if (!security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV) &&
>> +	    !security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR))
>> +		return sprintf(buf, "Not affected\n");
>> +
>> +	return sprintf(buf, "Vulnerable\n");
>> +}
>> +#endif
>> +
>> +ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr, char *buf)
>> +{
>> +	struct seq_buf s;
>> +
>> +	seq_buf_init(&s, buf, PAGE_SIZE - 1);
>> +
>> +	if (security_ftr_enabled(SEC_FTR_BNDS_CHK_SPEC_BAR)) {
>> +		if (barrier_nospec_enabled)
>> +			seq_buf_printf(&s, "Mitigation: __user pointer sanitization");
>> +		else
>> +			seq_buf_printf(&s, "Vulnerable");
>> +
>> +		if (security_ftr_enabled(SEC_FTR_SPEC_BAR_ORI31))
>> +			seq_buf_printf(&s, ", ori31 speculation barrier enabled");
>> +
>> +		seq_buf_printf(&s, "\n");
>> +	} else
>> +		seq_buf_printf(&s, "Not affected\n");
>> +
>> +	return s.len;
>> +}
>> +
>> +ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr, char *buf)
>> +{
>> +	struct seq_buf s;
>> +	bool bcs, ccd;
>> +
>> +	seq_buf_init(&s, buf, PAGE_SIZE - 1);
>> +
>> +	bcs = security_ftr_enabled(SEC_FTR_BCCTRL_SERIALISED);
>> +	ccd = security_ftr_enabled(SEC_FTR_COUNT_CACHE_DISABLED);
>> +
>> +	if (bcs || ccd) {
>> +		seq_buf_printf(&s, "Mitigation: ");
>> +
>> +		if (bcs)
>> +			seq_buf_printf(&s, "Indirect branch serialisation (kernel only)");
>> +
>> +		if (bcs && ccd)
>> +			seq_buf_printf(&s, ", ");
>> +
>> +		if (ccd)
>> +			seq_buf_printf(&s, "Indirect branch cache disabled");
>> +	} else if (count_cache_flush_type != COUNT_CACHE_FLUSH_NONE) {
>> +		seq_buf_printf(&s, "Mitigation: Software count cache flush");
>> +
>> +		if (count_cache_flush_type == COUNT_CACHE_FLUSH_HW)
>> +			seq_buf_printf(&s, " (hardware accelerated)");
>> +	} else if (btb_flush_enabled) {
>> +		seq_buf_printf(&s, "Mitigation: Branch predictor state flush");
>> +	} else {
>> +		seq_buf_printf(&s, "Vulnerable");
>> +	}
>> +
>> +	seq_buf_printf(&s, "\n");
>> +
>> +	return s.len;
>> +}
>> +
>> +#ifdef CONFIG_PPC_BOOK3S_64
>> +/*
>> + * Store-forwarding barrier support.
>> + */
>> +
>> +static enum stf_barrier_type stf_enabled_flush_types;
>> +static bool no_stf_barrier;
>> +bool stf_barrier;
>> +
>> +static int __init handle_no_stf_barrier(char *p)
>> +{
>> +	pr_info("stf-barrier: disabled on command line.");
>> +	no_stf_barrier = true;
>> +	return 0;
>> +}
>> +
>> +early_param("no_stf_barrier", handle_no_stf_barrier);
>> +
>> +/* This is the generic flag used by other architectures */
>> +static int __init handle_ssbd(char *p)
>> +{
>> +	if (!p || strncmp(p, "auto", 5) == 0 || strncmp(p, "on", 2) == 0 ) {
>> +		/* Until firmware tells us, we have the barrier with auto */
>> +		return 0;
>> +	} else if (strncmp(p, "off", 3) == 0) {
>> +		handle_no_stf_barrier(NULL);
>> +		return 0;
>> +	} else
>> +		return 1;
>> +
>> +	return 0;
>> +}
>> +early_param("spec_store_bypass_disable", handle_ssbd);
>> +
>> +/* This is the generic flag used by other architectures */
>> +static int __init handle_no_ssbd(char *p)
>> +{
>> +	handle_no_stf_barrier(NULL);
>> +	return 0;
>> +}
>> +early_param("nospec_store_bypass_disable", handle_no_ssbd);
>> +
>> +static void stf_barrier_enable(bool enable)
>> +{
>> +	if (enable)
>> +		do_stf_barrier_fixups(stf_enabled_flush_types);
>> +	else
>> +		do_stf_barrier_fixups(STF_BARRIER_NONE);
>> +
>> +	stf_barrier = enable;
>> +}
>> +
>> +void setup_stf_barrier(void)
>> +{
>> +	enum stf_barrier_type type;
>> +	bool enable, hv;
>> +
>> +	hv = cpu_has_feature(CPU_FTR_HVMODE);
>> +
>> +	/* Default to fallback in case fw-features are not available */
>> +	if (cpu_has_feature(CPU_FTR_ARCH_207S))
>> +		type = STF_BARRIER_SYNC_ORI;
>> +	else if (cpu_has_feature(CPU_FTR_ARCH_206))
>> +		type = STF_BARRIER_FALLBACK;
>> +	else
>> +		type = STF_BARRIER_NONE;
>> +
>> +	enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) &&
>> +		(security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR) ||
>> +		 (security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV) && hv));
>> +
>> +	if (type == STF_BARRIER_FALLBACK) {
>> +		pr_info("stf-barrier: fallback barrier available\n");
>> +	} else if (type == STF_BARRIER_SYNC_ORI) {
>> +		pr_info("stf-barrier: hwsync barrier available\n");
>> +	} else if (type == STF_BARRIER_EIEIO) {
>> +		pr_info("stf-barrier: eieio barrier available\n");
>> +	}
>> +
>> +	stf_enabled_flush_types = type;
>> +
>> +	if (!no_stf_barrier)
>> +		stf_barrier_enable(enable);
>> +}
>> +
>> +ssize_t cpu_show_spec_store_bypass(struct device *dev, struct device_attribute *attr, char *buf)
>> +{
>> +	if (stf_barrier && stf_enabled_flush_types != STF_BARRIER_NONE) {
>> +		const char *type;
>> +		switch (stf_enabled_flush_types) {
>> +		case STF_BARRIER_EIEIO:
>> +			type = "eieio";
>> +			break;
>> +		case STF_BARRIER_SYNC_ORI:
>> +			type = "hwsync";
>> +			break;
>> +		case STF_BARRIER_FALLBACK:
>> +			type = "fallback";
>> +			break;
>> +		default:
>> +			type = "unknown";
>> +		}
>> +		return sprintf(buf, "Mitigation: Kernel entry/exit barrier (%s)\n", type);
>> +	}
>> +
>> +	if (!security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV) &&
>> +	    !security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR))
>> +		return sprintf(buf, "Not affected\n");
>> +
>> +	return sprintf(buf, "Vulnerable\n");
>> +}
>> +
>> +#ifdef CONFIG_DEBUG_FS
>> +static int stf_barrier_set(void *data, u64 val)
>> +{
>> +	bool enable;
>> +
>> +	if (val == 1)
>> +		enable = true;
>> +	else if (val == 0)
>> +		enable = false;
>> +	else
>> +		return -EINVAL;
>> +
>> +	/* Only do anything if we're changing state */
>> +	if (enable != stf_barrier)
>> +		stf_barrier_enable(enable);
>> +
>> +	return 0;
>> +}
>> +
>> +static int stf_barrier_get(void *data, u64 *val)
>> +{
>> +	*val = stf_barrier ? 1 : 0;
>> +	return 0;
>> +}
>> +
>> +DEFINE_SIMPLE_ATTRIBUTE(fops_stf_barrier, stf_barrier_get, stf_barrier_set, "%llu\n");
>> +
>> +static __init int stf_barrier_debugfs_init(void)
>> +{
>> +	debugfs_create_file("stf_barrier", 0600, powerpc_debugfs_root, NULL, &fops_stf_barrier);
>> +	return 0;
>> +}
>> +device_initcall(stf_barrier_debugfs_init);
>> +#endif /* CONFIG_DEBUG_FS */
>> +
>> +static void toggle_count_cache_flush(bool enable)
>> +{
>> +	if (!enable || !security_ftr_enabled(SEC_FTR_FLUSH_COUNT_CACHE)) {
>> +		patch_instruction_site(&patch__call_flush_count_cache, PPC_INST_NOP);
>> +		count_cache_flush_type = COUNT_CACHE_FLUSH_NONE;
>> +		pr_info("count-cache-flush: software flush disabled.\n");
>> +		return;
>> +	}
>> +
>> +	patch_branch_site(&patch__call_flush_count_cache,
>> +			  (u64)&flush_count_cache, BRANCH_SET_LINK);
>> +
>> +	if (!security_ftr_enabled(SEC_FTR_BCCTR_FLUSH_ASSIST)) {
>> +		count_cache_flush_type = COUNT_CACHE_FLUSH_SW;
>> +		pr_info("count-cache-flush: full software flush sequence enabled.\n");
>> +		return;
>> +	}
>> +
>> +	patch_instruction_site(&patch__flush_count_cache_return, PPC_INST_BLR);
>> +	count_cache_flush_type = COUNT_CACHE_FLUSH_HW;
>> +	pr_info("count-cache-flush: hardware assisted flush sequence enabled\n");
>> +}
>> +
>> +void setup_count_cache_flush(void)
>> +{
>> +	toggle_count_cache_flush(true);
>> +}
>> +
>> +#ifdef CONFIG_DEBUG_FS
>> +static int count_cache_flush_set(void *data, u64 val)
>> +{
>> +	bool enable;
>> +
>> +	if (val == 1)
>> +		enable = true;
>> +	else if (val == 0)
>> +		enable = false;
>> +	else
>> +		return -EINVAL;
>> +
>> +	toggle_count_cache_flush(enable);
>> +
>> +	return 0;
>> +}
>> +
>> +static int count_cache_flush_get(void *data, u64 *val)
>> +{
>> +	if (count_cache_flush_type == COUNT_CACHE_FLUSH_NONE)
>> +		*val = 0;
>> +	else
>> +		*val = 1;
>> +
>> +	return 0;
>> +}
>> +
>> +DEFINE_SIMPLE_ATTRIBUTE(fops_count_cache_flush, count_cache_flush_get,
>> +			count_cache_flush_set, "%llu\n");
>> +
>> +static __init int count_cache_flush_debugfs_init(void)
>> +{
>> +	debugfs_create_file("count_cache_flush", 0600, powerpc_debugfs_root,
>> +			    NULL, &fops_count_cache_flush);
>> +	return 0;
>> +}
>> +device_initcall(count_cache_flush_debugfs_init);
>> +#endif /* CONFIG_DEBUG_FS */
>> +#endif /* CONFIG_PPC_BOOK3S_64 */
>> diff --git a/arch/powerpc/kernel/setup_32.c b/arch/powerpc/kernel/setup_32.c
>> index ad8c9db61237..5a9f035bcd6b 100644
>> --- a/arch/powerpc/kernel/setup_32.c
>> +++ b/arch/powerpc/kernel/setup_32.c
>> @@ -322,6 +322,8 @@ void __init setup_arch(char **cmdline_p)
>>  		ppc_md.setup_arch();
>>  	if ( ppc_md.progress ) ppc_md.progress("arch: exit", 0x3eab);
>>  
>> +	setup_barrier_nospec();
>> +
>>  	paging_init();
>>  
>>  	/* Initialize the MMU context management stuff */
>> diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
>> index 9eb469bed22b..6bb731ababc6 100644
>> --- a/arch/powerpc/kernel/setup_64.c
>> +++ b/arch/powerpc/kernel/setup_64.c
>> @@ -736,6 +736,8 @@ void __init setup_arch(char **cmdline_p)
>>  	if (ppc_md.setup_arch)
>>  		ppc_md.setup_arch();
>>  
>> +	setup_barrier_nospec();
>> +
>>  	paging_init();
>>  
>>  	/* Initialize the MMU context management stuff */
>> @@ -873,9 +875,6 @@ static void do_nothing(void *unused)
>>  
>>  void rfi_flush_enable(bool enable)
>>  {
>> -	if (rfi_flush == enable)
>> -		return;
>> -
>>  	if (enable) {
>>  		do_rfi_flush_fixups(enabled_flush_types);
>>  		on_each_cpu(do_nothing, NULL, 1);
>> @@ -885,11 +884,15 @@ void rfi_flush_enable(bool enable)
>>  	rfi_flush = enable;
>>  }
>>  
>> -static void init_fallback_flush(void)
>> +static void __ref init_fallback_flush(void)
>>  {
>>  	u64 l1d_size, limit;
>>  	int cpu;
>>  
>> +	/* Only allocate the fallback flush area once (at boot time). */
>> +	if (l1d_flush_fallback_area)
>> +		return;
>> +
>>  	l1d_size = ppc64_caches.dsize;
>>  	limit = min(safe_stack_limit(), ppc64_rma_size);
>>  
>> @@ -902,34 +905,23 @@ static void init_fallback_flush(void)
>>  	memset(l1d_flush_fallback_area, 0, l1d_size * 2);
>>  
>>  	for_each_possible_cpu(cpu) {
>> -		/*
>> -		 * The fallback flush is currently coded for 8-way
>> -		 * associativity. Different associativity is possible, but it
>> -		 * will be treated as 8-way and may not evict the lines as
>> -		 * effectively.
>> -		 *
>> -		 * 128 byte lines are mandatory.
>> -		 */
>> -		u64 c = l1d_size / 8;
>> -
>>  		paca[cpu].rfi_flush_fallback_area = l1d_flush_fallback_area;
>> -		paca[cpu].l1d_flush_congruence = c;
>> -		paca[cpu].l1d_flush_sets = c / 128;
>> +		paca[cpu].l1d_flush_size = l1d_size;
>>  	}
>>  }
>>  
>> -void __init setup_rfi_flush(enum l1d_flush_type types, bool enable)
>> +void setup_rfi_flush(enum l1d_flush_type types, bool enable)
>>  {
>>  	if (types & L1D_FLUSH_FALLBACK) {
>> -		pr_info("rfi-flush: Using fallback displacement flush\n");
>> +		pr_info("rfi-flush: fallback displacement flush available\n");
>>  		init_fallback_flush();
>>  	}
>>  
>>  	if (types & L1D_FLUSH_ORI)
>> -		pr_info("rfi-flush: Using ori type flush\n");
>> +		pr_info("rfi-flush: ori type flush available\n");
>>  
>>  	if (types & L1D_FLUSH_MTTRIG)
>> -		pr_info("rfi-flush: Using mttrig type flush\n");
>> +		pr_info("rfi-flush: mttrig type flush available\n");
>>  
>>  	enabled_flush_types = types;
>>  
>> @@ -940,13 +932,19 @@ void __init setup_rfi_flush(enum l1d_flush_type types, bool enable)
>>  #ifdef CONFIG_DEBUG_FS
>>  static int rfi_flush_set(void *data, u64 val)
>>  {
>> +	bool enable;
>> +
>>  	if (val == 1)
>> -		rfi_flush_enable(true);
>> +		enable = true;
>>  	else if (val == 0)
>> -		rfi_flush_enable(false);
>> +		enable = false;
>>  	else
>>  		return -EINVAL;
>>  
>> +	/* Only do anything if we're changing state */
>> +	if (enable != rfi_flush)
>> +		rfi_flush_enable(enable);
>> +
>>  	return 0;
>>  }
>>  
>> @@ -965,12 +963,4 @@ static __init int rfi_flush_debugfs_init(void)
>>  }
>>  device_initcall(rfi_flush_debugfs_init);
>>  #endif
>> -
>> -ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr, char *buf)
>> -{
>> -	if (rfi_flush)
>> -		return sprintf(buf, "Mitigation: RFI Flush\n");
>> -
>> -	return sprintf(buf, "Vulnerable\n");
>> -}
>>  #endif /* CONFIG_PPC_BOOK3S_64 */
>> diff --git a/arch/powerpc/kernel/vmlinux.lds.S b/arch/powerpc/kernel/vmlinux.lds.S
>> index 072a23a17350..876ac9d52afc 100644
>> --- a/arch/powerpc/kernel/vmlinux.lds.S
>> +++ b/arch/powerpc/kernel/vmlinux.lds.S
>> @@ -73,14 +73,45 @@ SECTIONS
>>  	RODATA
>>  
>>  #ifdef CONFIG_PPC64
>> +	. = ALIGN(8);
>> +	__stf_entry_barrier_fixup : AT(ADDR(__stf_entry_barrier_fixup) - LOAD_OFFSET) {
>> +		__start___stf_entry_barrier_fixup = .;
>> +		*(__stf_entry_barrier_fixup)
>> +		__stop___stf_entry_barrier_fixup = .;
>> +	}
>> +
>> +	. = ALIGN(8);
>> +	__stf_exit_barrier_fixup : AT(ADDR(__stf_exit_barrier_fixup) - LOAD_OFFSET) {
>> +		__start___stf_exit_barrier_fixup = .;
>> +		*(__stf_exit_barrier_fixup)
>> +		__stop___stf_exit_barrier_fixup = .;
>> +	}
>> +
>>  	. = ALIGN(8);
>>  	__rfi_flush_fixup : AT(ADDR(__rfi_flush_fixup) - LOAD_OFFSET) {
>>  		__start___rfi_flush_fixup = .;
>>  		*(__rfi_flush_fixup)
>>  		__stop___rfi_flush_fixup = .;
>>  	}
>> -#endif
>> +#endif /* CONFIG_PPC64 */
>>  
>> +#ifdef CONFIG_PPC_BARRIER_NOSPEC
>> +	. = ALIGN(8);
>> +	__spec_barrier_fixup : AT(ADDR(__spec_barrier_fixup) - LOAD_OFFSET) {
>> +		__start___barrier_nospec_fixup = .;
>> +		*(__barrier_nospec_fixup)
>> +		__stop___barrier_nospec_fixup = .;
>> +	}
>> +#endif /* CONFIG_PPC_BARRIER_NOSPEC */
>> +
>> +#ifdef CONFIG_PPC_FSL_BOOK3E
>> +	. = ALIGN(8);
>> +	__spec_btb_flush_fixup : AT(ADDR(__spec_btb_flush_fixup) - LOAD_OFFSET) {
>> +		__start__btb_flush_fixup = .;
>> +		*(__btb_flush_fixup)
>> +		__stop__btb_flush_fixup = .;
>> +	}
>> +#endif
>>  	EXCEPTION_TABLE(0)
>>  
>>  	NOTES :kernel :notes
>> diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
>> index d5edbeb8eb82..570c06a00db6 100644
>> --- a/arch/powerpc/lib/code-patching.c
>> +++ b/arch/powerpc/lib/code-patching.c
>> @@ -14,12 +14,25 @@
>>  #include <asm/page.h>
>>  #include <asm/code-patching.h>
>>  #include <asm/uaccess.h>
>> +#include <asm/setup.h>
>> +#include <asm/sections.h>
>>  
>>  
>> +static inline bool is_init(unsigned int *addr)
>> +{
>> +	return addr >= (unsigned int *)__init_begin && addr < (unsigned int *)__init_end;
>> +}
>> +
>>  int patch_instruction(unsigned int *addr, unsigned int instr)
>>  {
>>  	int err;
>>  
>> +	/* Make sure we aren't patching a freed init section */
>> +	if (init_mem_is_free && is_init(addr)) {
>> +		pr_debug("Skipping init section patching addr: 0x%px\n", addr);
>> +		return 0;
>> +	}
>> +
>>  	__put_user_size(instr, addr, 4, err);
>>  	if (err)
>>  		return err;
>> @@ -32,6 +45,22 @@ int patch_branch(unsigned int *addr, unsigned long target, int flags)
>>  	return patch_instruction(addr, create_branch(addr, target, flags));
>>  }
>>  
>> +int patch_branch_site(s32 *site, unsigned long target, int flags)
>> +{
>> +	unsigned int *addr;
>> +
>> +	addr = (unsigned int *)((unsigned long)site + *site);
>> +	return patch_instruction(addr, create_branch(addr, target, flags));
>> +}
>> +
>> +int patch_instruction_site(s32 *site, unsigned int instr)
>> +{
>> +	unsigned int *addr;
>> +
>> +	addr = (unsigned int *)((unsigned long)site + *site);
>> +	return patch_instruction(addr, instr);
>> +}
>> +
>>  unsigned int create_branch(const unsigned int *addr,
>>  			   unsigned long target, int flags)
>>  {
>> diff --git a/arch/powerpc/lib/feature-fixups.c b/arch/powerpc/lib/feature-fixups.c
>> index 3af014684872..7bdfc19a491d 100644
>> --- a/arch/powerpc/lib/feature-fixups.c
>> +++ b/arch/powerpc/lib/feature-fixups.c
>> @@ -21,7 +21,7 @@
>>  #include <asm/page.h>
>>  #include <asm/sections.h>
>>  #include <asm/setup.h>
>> -
>> +#include <asm/security_features.h>
>>  
>>  struct fixup_entry {
>>  	unsigned long	mask;
>> @@ -115,6 +115,120 @@ void do_feature_fixups(unsigned long value, void *fixup_start, void *fixup_end)
>>  }
>>  
>>  #ifdef CONFIG_PPC_BOOK3S_64
>> +void do_stf_entry_barrier_fixups(enum stf_barrier_type types)
>> +{
>> +	unsigned int instrs[3], *dest;
>> +	long *start, *end;
>> +	int i;
>> +
>> +	start = PTRRELOC(&__start___stf_entry_barrier_fixup),
>> +	end = PTRRELOC(&__stop___stf_entry_barrier_fixup);
>> +
>> +	instrs[0] = 0x60000000; /* nop */
>> +	instrs[1] = 0x60000000; /* nop */
>> +	instrs[2] = 0x60000000; /* nop */
>> +
>> +	i = 0;
>> +	if (types & STF_BARRIER_FALLBACK) {
>> +		instrs[i++] = 0x7d4802a6; /* mflr r10		*/
>> +		instrs[i++] = 0x60000000; /* branch patched below */
>> +		instrs[i++] = 0x7d4803a6; /* mtlr r10		*/
>> +	} else if (types & STF_BARRIER_EIEIO) {
>> +		instrs[i++] = 0x7e0006ac; /* eieio + bit 6 hint */
>> +	} else if (types & STF_BARRIER_SYNC_ORI) {
>> +		instrs[i++] = 0x7c0004ac; /* hwsync		*/
>> +		instrs[i++] = 0xe94d0000; /* ld r10,0(r13)	*/
>> +		instrs[i++] = 0x63ff0000; /* ori 31,31,0 speculation barrier */
>> +	}
>> +
>> +	for (i = 0; start < end; start++, i++) {
>> +		dest = (void *)start + *start;
>> +
>> +		pr_devel("patching dest %lx\n", (unsigned long)dest);
>> +
>> +		patch_instruction(dest, instrs[0]);
>> +
>> +		if (types & STF_BARRIER_FALLBACK)
>> +			patch_branch(dest + 1, (unsigned long)&stf_barrier_fallback,
>> +				     BRANCH_SET_LINK);
>> +		else
>> +			patch_instruction(dest + 1, instrs[1]);
>> +
>> +		patch_instruction(dest + 2, instrs[2]);
>> +	}
>> +
>> +	printk(KERN_DEBUG "stf-barrier: patched %d entry locations (%s barrier)\n", i,
>> +		(types == STF_BARRIER_NONE)                  ? "no" :
>> +		(types == STF_BARRIER_FALLBACK)              ? "fallback" :
>> +		(types == STF_BARRIER_EIEIO)                 ? "eieio" :
>> +		(types == (STF_BARRIER_SYNC_ORI))            ? "hwsync"
>> +		                                           : "unknown");
>> +}
>> +
>> +void do_stf_exit_barrier_fixups(enum stf_barrier_type types)
>> +{
>> +	unsigned int instrs[6], *dest;
>> +	long *start, *end;
>> +	int i;
>> +
>> +	start = PTRRELOC(&__start___stf_exit_barrier_fixup),
>> +	end = PTRRELOC(&__stop___stf_exit_barrier_fixup);
>> +
>> +	instrs[0] = 0x60000000; /* nop */
>> +	instrs[1] = 0x60000000; /* nop */
>> +	instrs[2] = 0x60000000; /* nop */
>> +	instrs[3] = 0x60000000; /* nop */
>> +	instrs[4] = 0x60000000; /* nop */
>> +	instrs[5] = 0x60000000; /* nop */
>> +
>> +	i = 0;
>> +	if (types & STF_BARRIER_FALLBACK || types & STF_BARRIER_SYNC_ORI) {
>> +		if (cpu_has_feature(CPU_FTR_HVMODE)) {
>> +			instrs[i++] = 0x7db14ba6; /* mtspr 0x131, r13 (HSPRG1) */
>> +			instrs[i++] = 0x7db04aa6; /* mfspr r13, 0x130 (HSPRG0) */
>> +		} else {
>> +			instrs[i++] = 0x7db243a6; /* mtsprg 2,r13	*/
>> +			instrs[i++] = 0x7db142a6; /* mfsprg r13,1    */
>> +	        }
>> +		instrs[i++] = 0x7c0004ac; /* hwsync		*/
>> +		instrs[i++] = 0xe9ad0000; /* ld r13,0(r13)	*/
>> +		instrs[i++] = 0x63ff0000; /* ori 31,31,0 speculation barrier */
>> +		if (cpu_has_feature(CPU_FTR_HVMODE)) {
>> +			instrs[i++] = 0x7db14aa6; /* mfspr r13, 0x131 (HSPRG1) */
>> +		} else {
>> +			instrs[i++] = 0x7db242a6; /* mfsprg r13,2 */
>> +		}
>> +	} else if (types & STF_BARRIER_EIEIO) {
>> +		instrs[i++] = 0x7e0006ac; /* eieio + bit 6 hint */
>> +	}
>> +
>> +	for (i = 0; start < end; start++, i++) {
>> +		dest = (void *)start + *start;
>> +
>> +		pr_devel("patching dest %lx\n", (unsigned long)dest);
>> +
>> +		patch_instruction(dest, instrs[0]);
>> +		patch_instruction(dest + 1, instrs[1]);
>> +		patch_instruction(dest + 2, instrs[2]);
>> +		patch_instruction(dest + 3, instrs[3]);
>> +		patch_instruction(dest + 4, instrs[4]);
>> +		patch_instruction(dest + 5, instrs[5]);
>> +	}
>> +	printk(KERN_DEBUG "stf-barrier: patched %d exit locations (%s barrier)\n", i,
>> +		(types == STF_BARRIER_NONE)                  ? "no" :
>> +		(types == STF_BARRIER_FALLBACK)              ? "fallback" :
>> +		(types == STF_BARRIER_EIEIO)                 ? "eieio" :
>> +		(types == (STF_BARRIER_SYNC_ORI))            ? "hwsync"
>> +		                                           : "unknown");
>> +}
>> +
>> +
>> +void do_stf_barrier_fixups(enum stf_barrier_type types)
>> +{
>> +	do_stf_entry_barrier_fixups(types);
>> +	do_stf_exit_barrier_fixups(types);
>> +}
>> +
>>  void do_rfi_flush_fixups(enum l1d_flush_type types)
>>  {
>>  	unsigned int instrs[3], *dest;
>> @@ -151,10 +265,110 @@ void do_rfi_flush_fixups(enum l1d_flush_type types)
>>  		patch_instruction(dest + 2, instrs[2]);
>>  	}
>>  
>> -	printk(KERN_DEBUG "rfi-flush: patched %d locations\n", i);
>> +	printk(KERN_DEBUG "rfi-flush: patched %d locations (%s flush)\n", i,
>> +		(types == L1D_FLUSH_NONE)       ? "no" :
>> +		(types == L1D_FLUSH_FALLBACK)   ? "fallback displacement" :
>> +		(types &  L1D_FLUSH_ORI)        ? (types & L1D_FLUSH_MTTRIG)
>> +							? "ori+mttrig type"
>> +							: "ori type" :
>> +		(types &  L1D_FLUSH_MTTRIG)     ? "mttrig type"
>> +						: "unknown");
>> +}
>> +
>> +void do_barrier_nospec_fixups_range(bool enable, void *fixup_start, void *fixup_end)
>> +{
>> +	unsigned int instr, *dest;
>> +	long *start, *end;
>> +	int i;
>> +
>> +	start = fixup_start;
>> +	end = fixup_end;
>> +
>> +	instr = 0x60000000; /* nop */
>> +
>> +	if (enable) {
>> +		pr_info("barrier-nospec: using ORI speculation barrier\n");
>> +		instr = 0x63ff0000; /* ori 31,31,0 speculation barrier */
>> +	}
>> +
>> +	for (i = 0; start < end; start++, i++) {
>> +		dest = (void *)start + *start;
>> +
>> +		pr_devel("patching dest %lx\n", (unsigned long)dest);
>> +		patch_instruction(dest, instr);
>> +	}
>> +
>> +	printk(KERN_DEBUG "barrier-nospec: patched %d locations\n", i);
>>  }
>> +
>>  #endif /* CONFIG_PPC_BOOK3S_64 */
>>  
>> +#ifdef CONFIG_PPC_BARRIER_NOSPEC
>> +void do_barrier_nospec_fixups(bool enable)
>> +{
>> +	void *start, *end;
>> +
>> +	start = PTRRELOC(&__start___barrier_nospec_fixup),
>> +	end = PTRRELOC(&__stop___barrier_nospec_fixup);
>> +
>> +	do_barrier_nospec_fixups_range(enable, start, end);
>> +}
>> +#endif /* CONFIG_PPC_BARRIER_NOSPEC */
>> +
>> +#ifdef CONFIG_PPC_FSL_BOOK3E
>> +void do_barrier_nospec_fixups_range(bool enable, void *fixup_start, void *fixup_end)
>> +{
>> +	unsigned int instr[2], *dest;
>> +	long *start, *end;
>> +	int i;
>> +
>> +	start = fixup_start;
>> +	end = fixup_end;
>> +
>> +	instr[0] = PPC_INST_NOP;
>> +	instr[1] = PPC_INST_NOP;
>> +
>> +	if (enable) {
>> +		pr_info("barrier-nospec: using isync; sync as speculation barrier\n");
>> +		instr[0] = PPC_INST_ISYNC;
>> +		instr[1] = PPC_INST_SYNC;
>> +	}
>> +
>> +	for (i = 0; start < end; start++, i++) {
>> +		dest = (void *)start + *start;
>> +
>> +		pr_devel("patching dest %lx\n", (unsigned long)dest);
>> +		patch_instruction(dest, instr[0]);
>> +		patch_instruction(dest + 1, instr[1]);
>> +	}
>> +
>> +	printk(KERN_DEBUG "barrier-nospec: patched %d locations\n", i);
>> +}
>> +
>> +static void patch_btb_flush_section(long *curr)
>> +{
>> +	unsigned int *start, *end;
>> +
>> +	start = (void *)curr + *curr;
>> +	end = (void *)curr + *(curr + 1);
>> +	for (; start < end; start++) {
>> +		pr_devel("patching dest %lx\n", (unsigned long)start);
>> +		patch_instruction(start, PPC_INST_NOP);
>> +	}
>> +}
>> +
>> +void do_btb_flush_fixups(void)
>> +{
>> +	long *start, *end;
>> +
>> +	start = PTRRELOC(&__start__btb_flush_fixup);
>> +	end = PTRRELOC(&__stop__btb_flush_fixup);
>> +
>> +	for (; start < end; start += 2)
>> +		patch_btb_flush_section(start);
>> +}
>> +#endif /* CONFIG_PPC_FSL_BOOK3E */
>> +
>>  void do_lwsync_fixups(unsigned long value, void *fixup_start, void *fixup_end)
>>  {
>>  	long *start, *end;
>> diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
>> index 22d94c3e6fc4..1efe5ca5c3bc 100644
>> --- a/arch/powerpc/mm/mem.c
>> +++ b/arch/powerpc/mm/mem.c
>> @@ -62,6 +62,7 @@
>>  #endif
>>  
>>  unsigned long long memory_limit;
>> +bool init_mem_is_free;
>>  
>>  #ifdef CONFIG_HIGHMEM
>>  pte_t *kmap_pte;
>> @@ -381,6 +382,7 @@ void __init mem_init(void)
>>  void free_initmem(void)
>>  {
>>  	ppc_md.progress = ppc_printk_progress;
>> +	init_mem_is_free = true;
>>  	free_initmem_default(POISON_FREE_INITMEM);
>>  }
>>  
>> diff --git a/arch/powerpc/mm/tlb_low_64e.S b/arch/powerpc/mm/tlb_low_64e.S
>> index 29d6987c37ba..5486d56da289 100644
>> --- a/arch/powerpc/mm/tlb_low_64e.S
>> +++ b/arch/powerpc/mm/tlb_low_64e.S
>> @@ -69,6 +69,13 @@ END_FTR_SECTION_IFSET(CPU_FTR_EMB_HV)
>>  	std	r15,EX_TLB_R15(r12)
>>  	std	r10,EX_TLB_CR(r12)
>>  #ifdef CONFIG_PPC_FSL_BOOK3E
>> +START_BTB_FLUSH_SECTION
>> +	mfspr r11, SPRN_SRR1
>> +	andi. r10,r11,MSR_PR
>> +	beq 1f
>> +	BTB_FLUSH(r10)
>> +1:
>> +END_BTB_FLUSH_SECTION
>>  	std	r7,EX_TLB_R7(r12)
>>  #endif
>>  	TLB_MISS_PROLOG_STATS
>> diff --git a/arch/powerpc/platforms/powernv/setup.c b/arch/powerpc/platforms/powernv/setup.c
>> index c57afc619b20..e14b52c7ebd8 100644
>> --- a/arch/powerpc/platforms/powernv/setup.c
>> +++ b/arch/powerpc/platforms/powernv/setup.c
>> @@ -37,53 +37,99 @@
>>  #include <asm/smp.h>
>>  #include <asm/tm.h>
>>  #include <asm/setup.h>
>> +#include <asm/security_features.h>
>>  
>>  #include "powernv.h"
>>  
>> +
>> +static bool fw_feature_is(const char *state, const char *name,
>> +			  struct device_node *fw_features)
>> +{
>> +	struct device_node *np;
>> +	bool rc = false;
>> +
>> +	np = of_get_child_by_name(fw_features, name);
>> +	if (np) {
>> +		rc = of_property_read_bool(np, state);
>> +		of_node_put(np);
>> +	}
>> +
>> +	return rc;
>> +}
>> +
>> +static void init_fw_feat_flags(struct device_node *np)
>> +{
>> +	if (fw_feature_is("enabled", "inst-spec-barrier-ori31,31,0", np))
>> +		security_ftr_set(SEC_FTR_SPEC_BAR_ORI31);
>> +
>> +	if (fw_feature_is("enabled", "fw-bcctrl-serialized", np))
>> +		security_ftr_set(SEC_FTR_BCCTRL_SERIALISED);
>> +
>> +	if (fw_feature_is("enabled", "inst-l1d-flush-ori30,30,0", np))
>> +		security_ftr_set(SEC_FTR_L1D_FLUSH_ORI30);
>> +
>> +	if (fw_feature_is("enabled", "inst-l1d-flush-trig2", np))
>> +		security_ftr_set(SEC_FTR_L1D_FLUSH_TRIG2);
>> +
>> +	if (fw_feature_is("enabled", "fw-l1d-thread-split", np))
>> +		security_ftr_set(SEC_FTR_L1D_THREAD_PRIV);
>> +
>> +	if (fw_feature_is("enabled", "fw-count-cache-disabled", np))
>> +		security_ftr_set(SEC_FTR_COUNT_CACHE_DISABLED);
>> +
>> +	if (fw_feature_is("enabled", "fw-count-cache-flush-bcctr2,0,0", np))
>> +		security_ftr_set(SEC_FTR_BCCTR_FLUSH_ASSIST);
>> +
>> +	if (fw_feature_is("enabled", "needs-count-cache-flush-on-context-switch", np))
>> +		security_ftr_set(SEC_FTR_FLUSH_COUNT_CACHE);
>> +
>> +	/*
>> +	 * The features below are enabled by default, so we instead look to see
>> +	 * if firmware has *disabled* them, and clear them if so.
>> +	 */
>> +	if (fw_feature_is("disabled", "speculation-policy-favor-security", np))
>> +		security_ftr_clear(SEC_FTR_FAVOUR_SECURITY);
>> +
>> +	if (fw_feature_is("disabled", "needs-l1d-flush-msr-pr-0-to-1", np))
>> +		security_ftr_clear(SEC_FTR_L1D_FLUSH_PR);
>> +
>> +	if (fw_feature_is("disabled", "needs-l1d-flush-msr-hv-1-to-0", np))
>> +		security_ftr_clear(SEC_FTR_L1D_FLUSH_HV);
>> +
>> +	if (fw_feature_is("disabled", "needs-spec-barrier-for-bound-checks", np))
>> +		security_ftr_clear(SEC_FTR_BNDS_CHK_SPEC_BAR);
>> +}
>> +
>>  static void pnv_setup_rfi_flush(void)
>>  {
>>  	struct device_node *np, *fw_features;
>>  	enum l1d_flush_type type;
>> -	int enable;
>> +	bool enable;
>>  
>>  	/* Default to fallback in case fw-features are not available */
>>  	type = L1D_FLUSH_FALLBACK;
>> -	enable = 1;
>>  
>>  	np = of_find_node_by_name(NULL, "ibm,opal");
>>  	fw_features = of_get_child_by_name(np, "fw-features");
>>  	of_node_put(np);
>>  
>>  	if (fw_features) {
>> -		np = of_get_child_by_name(fw_features, "inst-l1d-flush-trig2");
>> -		if (np && of_property_read_bool(np, "enabled"))
>> -			type = L1D_FLUSH_MTTRIG;
>> +		init_fw_feat_flags(fw_features);
>> +		of_node_put(fw_features);
>>  
>> -		of_node_put(np);
>> +		if (security_ftr_enabled(SEC_FTR_L1D_FLUSH_TRIG2))
>> +			type = L1D_FLUSH_MTTRIG;
>>  
>> -		np = of_get_child_by_name(fw_features, "inst-l1d-flush-ori30,30,0");
>> -		if (np && of_property_read_bool(np, "enabled"))
>> +		if (security_ftr_enabled(SEC_FTR_L1D_FLUSH_ORI30))
>>  			type = L1D_FLUSH_ORI;
>> -
>> -		of_node_put(np);
>> -
>> -		/* Enable unless firmware says NOT to */
>> -		enable = 2;
>> -		np = of_get_child_by_name(fw_features, "needs-l1d-flush-msr-hv-1-to-0");
>> -		if (np && of_property_read_bool(np, "disabled"))
>> -			enable--;
>> -
>> -		of_node_put(np);
>> -
>> -		np = of_get_child_by_name(fw_features, "needs-l1d-flush-msr-pr-0-to-1");
>> -		if (np && of_property_read_bool(np, "disabled"))
>> -			enable--;
>> -
>> -		of_node_put(np);
>> -		of_node_put(fw_features);
>>  	}
>>  
>> -	setup_rfi_flush(type, enable > 0);
>> +	enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) && \
>> +		 (security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR)   || \
>> +		  security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV));
>> +
>> +	setup_rfi_flush(type, enable);
>> +	setup_count_cache_flush();
>>  }
>>  
>>  static void __init pnv_setup_arch(void)
>> @@ -91,6 +137,7 @@ static void __init pnv_setup_arch(void)
>>  	set_arch_panic_timeout(10, ARCH_PANIC_TIMEOUT);
>>  
>>  	pnv_setup_rfi_flush();
>> +	setup_stf_barrier();
>>  
>>  	/* Initialize SMP */
>>  	pnv_smp_init();
>> diff --git a/arch/powerpc/platforms/pseries/mobility.c b/arch/powerpc/platforms/pseries/mobility.c
>> index 8dd0c8edefd6..c773396d0969 100644
>> --- a/arch/powerpc/platforms/pseries/mobility.c
>> +++ b/arch/powerpc/platforms/pseries/mobility.c
>> @@ -314,6 +314,9 @@ void post_mobility_fixup(void)
>>  		printk(KERN_ERR "Post-mobility device tree update "
>>  			"failed: %d\n", rc);
>>  
>> +	/* Possibly switch to a new RFI flush type */
>> +	pseries_setup_rfi_flush();
>> +
>>  	return;
>>  }
>>  
>> diff --git a/arch/powerpc/platforms/pseries/pseries.h b/arch/powerpc/platforms/pseries/pseries.h
>> index 8411c27293e4..e7d80797384d 100644
>> --- a/arch/powerpc/platforms/pseries/pseries.h
>> +++ b/arch/powerpc/platforms/pseries/pseries.h
>> @@ -81,4 +81,6 @@ extern struct pci_controller_ops pseries_pci_controller_ops;
>>  
>>  unsigned long pseries_memory_block_size(void);
>>  
>> +void pseries_setup_rfi_flush(void);
>> +
>>  #endif /* _PSERIES_PSERIES_H */
>> diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
>> index dd2545fc9947..9cc976ff7fec 100644
>> --- a/arch/powerpc/platforms/pseries/setup.c
>> +++ b/arch/powerpc/platforms/pseries/setup.c
>> @@ -67,6 +67,7 @@
>>  #include <asm/eeh.h>
>>  #include <asm/reg.h>
>>  #include <asm/plpar_wrappers.h>
>> +#include <asm/security_features.h>
>>  
>>  #include "pseries.h"
>>  
>> @@ -499,37 +500,87 @@ static void __init find_and_init_phbs(void)
>>  	of_pci_check_probe_only();
>>  }
>>  
>> -static void pseries_setup_rfi_flush(void)
>> +static void init_cpu_char_feature_flags(struct h_cpu_char_result *result)
>> +{
>> +	/*
>> +	 * The features below are disabled by default, so we instead look to see
>> +	 * if firmware has *enabled* them, and set them if so.
>> +	 */
>> +	if (result->character & H_CPU_CHAR_SPEC_BAR_ORI31)
>> +		security_ftr_set(SEC_FTR_SPEC_BAR_ORI31);
>> +
>> +	if (result->character & H_CPU_CHAR_BCCTRL_SERIALISED)
>> +		security_ftr_set(SEC_FTR_BCCTRL_SERIALISED);
>> +
>> +	if (result->character & H_CPU_CHAR_L1D_FLUSH_ORI30)
>> +		security_ftr_set(SEC_FTR_L1D_FLUSH_ORI30);
>> +
>> +	if (result->character & H_CPU_CHAR_L1D_FLUSH_TRIG2)
>> +		security_ftr_set(SEC_FTR_L1D_FLUSH_TRIG2);
>> +
>> +	if (result->character & H_CPU_CHAR_L1D_THREAD_PRIV)
>> +		security_ftr_set(SEC_FTR_L1D_THREAD_PRIV);
>> +
>> +	if (result->character & H_CPU_CHAR_COUNT_CACHE_DISABLED)
>> +		security_ftr_set(SEC_FTR_COUNT_CACHE_DISABLED);
>> +
>> +	if (result->character & H_CPU_CHAR_BCCTR_FLUSH_ASSIST)
>> +		security_ftr_set(SEC_FTR_BCCTR_FLUSH_ASSIST);
>> +
>> +	if (result->behaviour & H_CPU_BEHAV_FLUSH_COUNT_CACHE)
>> +		security_ftr_set(SEC_FTR_FLUSH_COUNT_CACHE);
>> +
>> +	/*
>> +	 * The features below are enabled by default, so we instead look to see
>> +	 * if firmware has *disabled* them, and clear them if so.
>> +	 */
>> +	if (!(result->behaviour & H_CPU_BEHAV_FAVOUR_SECURITY))
>> +		security_ftr_clear(SEC_FTR_FAVOUR_SECURITY);
>> +
>> +	if (!(result->behaviour & H_CPU_BEHAV_L1D_FLUSH_PR))
>> +		security_ftr_clear(SEC_FTR_L1D_FLUSH_PR);
>> +
>> +	if (!(result->behaviour & H_CPU_BEHAV_BNDS_CHK_SPEC_BAR))
>> +		security_ftr_clear(SEC_FTR_BNDS_CHK_SPEC_BAR);
>> +}
>> +
>> +void pseries_setup_rfi_flush(void)
>>  {
>>  	struct h_cpu_char_result result;
>>  	enum l1d_flush_type types;
>>  	bool enable;
>>  	long rc;
>>  
>> -	/* Enable by default */
>> -	enable = true;
>> +	/*
>> +	 * Set features to the defaults assumed by init_cpu_char_feature_flags()
>> +	 * so it can set/clear again any features that might have changed after
>> +	 * migration, and in case the hypercall fails and it is not even called.
>> +	 */
>> +	powerpc_security_features = SEC_FTR_DEFAULT;
>>  
>>  	rc = plpar_get_cpu_characteristics(&result);
>> -	if (rc == H_SUCCESS) {
>> -		types = L1D_FLUSH_NONE;
>> +	if (rc == H_SUCCESS)
>> +		init_cpu_char_feature_flags(&result);
>>  
>> -		if (result.character & H_CPU_CHAR_L1D_FLUSH_TRIG2)
>> -			types |= L1D_FLUSH_MTTRIG;
>> -		if (result.character & H_CPU_CHAR_L1D_FLUSH_ORI30)
>> -			types |= L1D_FLUSH_ORI;
>> +	/*
>> +	 * We're the guest so this doesn't apply to us, clear it to simplify
>> +	 * handling of it elsewhere.
>> +	 */
>> +	security_ftr_clear(SEC_FTR_L1D_FLUSH_HV);
>>  
>> -		/* Use fallback if nothing set in hcall */
>> -		if (types == L1D_FLUSH_NONE)
>> -			types = L1D_FLUSH_FALLBACK;
>> +	types = L1D_FLUSH_FALLBACK;
>>  
>> -		if (!(result.behaviour & H_CPU_BEHAV_L1D_FLUSH_PR))
>> -			enable = false;
>> -	} else {
>> -		/* Default to fallback if case hcall is not available */
>> -		types = L1D_FLUSH_FALLBACK;
>> -	}
>> +	if (security_ftr_enabled(SEC_FTR_L1D_FLUSH_TRIG2))
>> +		types |= L1D_FLUSH_MTTRIG;
>> +
>> +	if (security_ftr_enabled(SEC_FTR_L1D_FLUSH_ORI30))
>> +		types |= L1D_FLUSH_ORI;
>> +
>> +	enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) && \
>> +		 security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR);
>>  
>>  	setup_rfi_flush(types, enable);
>> +	setup_count_cache_flush();
>>  }
>>  
>>  static void __init pSeries_setup_arch(void)
>> @@ -549,6 +600,7 @@ static void __init pSeries_setup_arch(void)
>>  	fwnmi_init();
>>  
>>  	pseries_setup_rfi_flush();
>> +	setup_stf_barrier();
>>  
>>  	/* By default, only probe PCI (can be overridden by rtas_pci) */
>>  	pci_add_flags(PCI_PROBE_ONLY);
>> diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
>> index 786bf01691c9..83619ebede93 100644
>> --- a/arch/powerpc/xmon/xmon.c
>> +++ b/arch/powerpc/xmon/xmon.c
>> @@ -2144,6 +2144,8 @@ static void dump_one_paca(int cpu)
>>  	DUMP(p, slb_cache_ptr, "x");
>>  	for (i = 0; i < SLB_CACHE_ENTRIES; i++)
>>  		printf(" slb_cache[%d]:        = 0x%016lx\n", i, p->slb_cache[i]);
>> +
>> +	DUMP(p, rfi_flush_fallback_area, "px");
>>  #endif
>>  	DUMP(p, dscr_default, "llx");
>>  #ifdef CONFIG_PPC_BOOK3E
>> -- 
>> 2.20.1
>>
>> -----BEGIN PGP SIGNATURE-----
>>
>> iQIcBAEBAgAGBQJcvHWhAAoJEFHr6jzI4aWA6nsP/0YskmAfLovcUmERQ7+bIjq6
>> IcS1T466dvy6MlqeBXU4x8pVgInWeHKEC9XJdkM1lOeib/SLW7Hbz4kgJeOGwFGY
>> lOTaexrxvsBqPm7f6GC0zbl9obEIIIIUs+TielFQANBgqm+q8Wio+XXPP9bpKeKY
>> agSpQ3nwL/PYixznbNmN/lP9py5p89LQ0IBcR7dDBGGWJtD/AXeZ9hslsZxPbPtI
>> nZJ0vdnjuoB2z+hCxfKWlYfLwH0VfoTpqP5x3ALCkvbBr67e8bf6EK8+trnvhyQ8
>> iLY4bp1pm2epAI0/3NfyEiDMsGjVJ6IFlkyhDkHJgJNu0BGcGOSX2GpyU3juviAK
>> c95FtBft/i8AwigOMCivg2mN5edYjsSiPoEItwT5KWqgByJsdr5i5mYVx8cUjMOz
>> iAxLZCdg+UHZYuCBCAO2ZI1G9bVXI1Pa3btMspiCOOOsYGjXGf0oFfKQ+7957hUO
>> ftYYJoGHlMHiHR1OPas6T3lk6YKF9uvfIDTE3OKw2obHbbRz3u82xoWMRGW503MN
>> 7WpkpAP7oZ9RgqIWFVhatWy5f+7GFL0akEi4o2tsZHhYlPau7YWo+nToTd87itwt
>> GBaWJipzge4s13VkhAE+jWFO35Fvwi8uNZ7UgpuKMBECEjkGbtzBTq2MjSF5G8wc
>> yPEod5jby/Iqb7DkGPVG
>> =6DnF
>> -----END PGP SIGNATURE-----
>>

^ permalink raw reply	[flat|nested] 180+ messages in thread

* Re: [PATCH stable v4.4 00/52] powerpc spectre backports for 4.4
  2019-04-28  6:17     ` Michael Ellerman
@ 2019-04-29  6:26       ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-29  6:26 UTC (permalink / raw)
  To: Greg KH
  Cc: stable, linuxppc-dev, diana.craciun, msuchanek, npiggin,
	christophe.leroy

Michael Ellerman <mpe@ellerman.id.au> writes:
> Greg KH <gregkh@linuxfoundation.org> writes:
>> On Mon, Apr 22, 2019 at 12:19:45AM +1000, Michael Ellerman wrote:
>>> -----BEGIN PGP SIGNED MESSAGE-----
>>> Hash: SHA1
>>> 
>>> Hi Greg/Sasha,
>>> 
>>> Please queue up these powerpc patches for 4.4 if you have no objections.
>>
>> why?  Do you, or someone else, really care about spectre issues in 4.4?
>> Who is using ppc for 4.4 besides a specific enterprise distro (and they
>> don't seem to be pulling in my stable updates anyway...)?
>
> Someone asked for it, but TBH I can't remember who it was. I can chase
> it up if you like.

Yeah it was a request from one of the distros. They plan to take it once
it lands in 4.4 stable.

cheers



* Re: [PATCH stable v4.4 00/52] powerpc spectre backports for 4.4
  2019-04-29  6:26       ` Michael Ellerman
@ 2019-04-29  7:03         ` Greg KH
  -1 siblings, 0 replies; 180+ messages in thread
From: Greg KH @ 2019-04-29  7:03 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: stable, linuxppc-dev, diana.craciun, msuchanek, npiggin,
	christophe.leroy

On Mon, Apr 29, 2019 at 04:26:45PM +1000, Michael Ellerman wrote:
> Michael Ellerman <mpe@ellerman.id.au> writes:
> > Greg KH <gregkh@linuxfoundation.org> writes:
> >> On Mon, Apr 22, 2019 at 12:19:45AM +1000, Michael Ellerman wrote:
> >>> -----BEGIN PGP SIGNED MESSAGE-----
> >>> Hash: SHA1
> >>> 
> >>> Hi Greg/Sasha,
> >>> 
> >>> Please queue up these powerpc patches for 4.4 if you have no objections.
> >>
> >> why?  Do you, or someone else, really care about spectre issues in 4.4?
> >> Who is using ppc for 4.4 besides a specific enterprise distro (and they
> >> don't seem to be pulling in my stable updates anyway...)?
> >
> > Someone asked for it, but TBH I can't remember who it was. I can chase
> > it up if you like.
> 
> Yeah it was a request from one of the distros. They plan to take it once
> it lands in 4.4 stable.

Ok, thanks for confirming, I'll work on this this afternoon.

greg k-h



* Re: [PATCH stable v4.4 00/52] powerpc spectre backports for 4.4
  2019-04-21 14:19 ` Michael Ellerman
@ 2019-04-29  9:43   ` Greg KH
  -1 siblings, 0 replies; 180+ messages in thread
From: Greg KH @ 2019-04-29  9:43 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: stable, linuxppc-dev, diana.craciun, msuchanek, npiggin,
	christophe.leroy

On Mon, Apr 22, 2019 at 12:19:45AM +1000, Michael Ellerman wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
> 
> Hi Greg/Sasha,
> 
> Please queue up these powerpc patches for 4.4 if you have no objections.

All now queued up, thanks.

greg k-h


* Patch "powerpc/64: Add CONFIG_PPC_BARRIER_NOSPEC" has been added to the 4.4-stable tree
  2019-04-21 14:20   ` Michael Ellerman
  (?)
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, gregkh, linuxppc-dev, mpe,
	msuchanek, npiggin
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc/64: Add CONFIG_PPC_BARRIER_NOSPEC

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-64-add-config_ppc_barrier_nospec.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:20:20 +1000
Subject: powerpc/64: Add CONFIG_PPC_BARRIER_NOSPEC
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-36-mpe@ellerman.id.au>

From: Michael Ellerman <mpe@ellerman.id.au>

commit 179ab1cbf883575c3a585bcfc0f2160f1d22a149 upstream.

Add a config symbol to encode which platforms support the
barrier_nospec speculation barrier. Currently this is just Book3S 64
but we will add Book3E in a future patch.

Signed-off-by: Diana Craciun <diana.craciun@nxp.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/Kconfig               |    7 ++++++-
 arch/powerpc/include/asm/barrier.h |    6 +++---
 arch/powerpc/include/asm/setup.h   |    2 +-
 arch/powerpc/kernel/Makefile       |    3 ++-
 arch/powerpc/kernel/module.c       |    4 +++-
 arch/powerpc/kernel/vmlinux.lds.S  |    4 +++-
 arch/powerpc/lib/feature-fixups.c  |    6 ++++--
 7 files changed, 22 insertions(+), 10 deletions(-)

--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -136,7 +136,7 @@ config PPC
 	select GENERIC_SMP_IDLE_THREAD
 	select GENERIC_CMOS_UPDATE
 	select GENERIC_TIME_VSYSCALL_OLD
-	select GENERIC_CPU_VULNERABILITIES	if PPC_BOOK3S_64
+	select GENERIC_CPU_VULNERABILITIES	if PPC_BARRIER_NOSPEC
 	select GENERIC_CLOCKEVENTS
 	select GENERIC_CLOCKEVENTS_BROADCAST if SMP
 	select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
@@ -162,6 +162,11 @@ config PPC
 	select ARCH_HAS_DMA_SET_COHERENT_MASK
 	select HAVE_ARCH_SECCOMP_FILTER
 
+config PPC_BARRIER_NOSPEC
+    bool
+    default y
+    depends on PPC_BOOK3S_64
+
 config GENERIC_CSUM
 	def_bool CPU_LITTLE_ENDIAN
 
--- a/arch/powerpc/include/asm/barrier.h
+++ b/arch/powerpc/include/asm/barrier.h
@@ -92,7 +92,7 @@ do {									\
 #define smp_mb__after_atomic()      smp_mb()
 #define smp_mb__before_spinlock()   smp_mb()
 
-#ifdef CONFIG_PPC_BOOK3S_64
+#ifdef CONFIG_PPC_BARRIER_NOSPEC
 /*
  * Prevent execution of subsequent instructions until preceding branches have
  * been fully resolved and are no longer executing speculatively.
@@ -102,9 +102,9 @@ do {									\
 // This also acts as a compiler barrier due to the memory clobber.
 #define barrier_nospec() asm (stringify_in_c(barrier_nospec_asm) ::: "memory")
 
-#else /* !CONFIG_PPC_BOOK3S_64 */
+#else /* !CONFIG_PPC_BARRIER_NOSPEC */
 #define barrier_nospec_asm
 #define barrier_nospec()
-#endif
+#endif /* CONFIG_PPC_BARRIER_NOSPEC */
 
 #endif /* _ASM_POWERPC_BARRIER_H */
--- a/arch/powerpc/include/asm/setup.h
+++ b/arch/powerpc/include/asm/setup.h
@@ -42,7 +42,7 @@ void setup_barrier_nospec(void);
 void do_barrier_nospec_fixups(bool enable);
 extern bool barrier_nospec_enabled;
 
-#ifdef CONFIG_PPC_BOOK3S_64
+#ifdef CONFIG_PPC_BARRIER_NOSPEC
 void do_barrier_nospec_fixups_range(bool enable, void *start, void *end);
 #else
 static inline void do_barrier_nospec_fixups_range(bool enable, void *start, void *end) { };
--- a/arch/powerpc/kernel/Makefile
+++ b/arch/powerpc/kernel/Makefile
@@ -40,10 +40,11 @@ obj-$(CONFIG_PPC64)		+= setup_64.o sys_p
 obj-$(CONFIG_VDSO32)		+= vdso32/
 obj-$(CONFIG_HAVE_HW_BREAKPOINT)	+= hw_breakpoint.o
 obj-$(CONFIG_PPC_BOOK3S_64)	+= cpu_setup_ppc970.o cpu_setup_pa6t.o
-obj-$(CONFIG_PPC_BOOK3S_64)	+= cpu_setup_power.o security.o
+obj-$(CONFIG_PPC_BOOK3S_64)	+= cpu_setup_power.o
 obj-$(CONFIG_PPC_BOOK3S_64)	+= mce.o mce_power.o
 obj64-$(CONFIG_RELOCATABLE)	+= reloc_64.o
 obj-$(CONFIG_PPC_BOOK3E_64)	+= exceptions-64e.o idle_book3e.o
+obj-$(CONFIG_PPC_BARRIER_NOSPEC) += security.o
 obj-$(CONFIG_PPC64)		+= vdso64/
 obj-$(CONFIG_ALTIVEC)		+= vecemu.o
 obj-$(CONFIG_PPC_970_NAP)	+= idle_power4.o
--- a/arch/powerpc/kernel/module.c
+++ b/arch/powerpc/kernel/module.c
@@ -67,13 +67,15 @@ int module_finalize(const Elf_Ehdr *hdr,
 		do_feature_fixups(powerpc_firmware_features,
 				  (void *)sect->sh_addr,
 				  (void *)sect->sh_addr + sect->sh_size);
+#endif /* CONFIG_PPC64 */
 
+#ifdef CONFIG_PPC_BARRIER_NOSPEC
 	sect = find_section(hdr, sechdrs, "__spec_barrier_fixup");
 	if (sect != NULL)
 		do_barrier_nospec_fixups_range(barrier_nospec_enabled,
 				  (void *)sect->sh_addr,
 				  (void *)sect->sh_addr + sect->sh_size);
-#endif
+#endif /* CONFIG_PPC_BARRIER_NOSPEC */
 
 	sect = find_section(hdr, sechdrs, "__lwsync_fixup");
 	if (sect != NULL)
--- a/arch/powerpc/kernel/vmlinux.lds.S
+++ b/arch/powerpc/kernel/vmlinux.lds.S
@@ -93,14 +93,16 @@ SECTIONS
 		*(__rfi_flush_fixup)
 		__stop___rfi_flush_fixup = .;
 	}
+#endif /* CONFIG_PPC64 */
 
+#ifdef CONFIG_PPC_BARRIER_NOSPEC
 	. = ALIGN(8);
 	__spec_barrier_fixup : AT(ADDR(__spec_barrier_fixup) - LOAD_OFFSET) {
 		__start___barrier_nospec_fixup = .;
 		*(__barrier_nospec_fixup)
 		__stop___barrier_nospec_fixup = .;
 	}
-#endif
+#endif /* CONFIG_PPC_BARRIER_NOSPEC */
 
 	EXCEPTION_TABLE(0)
 
--- a/arch/powerpc/lib/feature-fixups.c
+++ b/arch/powerpc/lib/feature-fixups.c
@@ -301,6 +301,9 @@ void do_barrier_nospec_fixups_range(bool
 	printk(KERN_DEBUG "barrier-nospec: patched %d locations\n", i);
 }
 
+#endif /* CONFIG_PPC_BOOK3S_64 */
+
+#ifdef CONFIG_PPC_BARRIER_NOSPEC
 void do_barrier_nospec_fixups(bool enable)
 {
 	void *start, *end;
@@ -310,8 +313,7 @@ void do_barrier_nospec_fixups(bool enabl
 
 	do_barrier_nospec_fixups_range(enable, start, end);
 }
-
-#endif /* CONFIG_PPC_BOOK3S_64 */
+#endif /* CONFIG_PPC_BARRIER_NOSPEC */
 
 void do_lwsync_fixups(unsigned long value, void *fixup_start, void *fixup_end)
 {


Patches currently in stable-queue which might be from mpe@ellerman.id.au are

queue-4.4/powerpc-64s-add-support-for-a-store-forwarding-barrier-at-kernel-entry-exit.patch
queue-4.4/powerpc-64-make-stf-barrier-ppc_book3s_64-specific.patch
queue-4.4/powerpc-pseries-set-or-clear-security-feature-flags.patch
queue-4.4/powerpc-fsl-fix-spectre_v2-mitigations-reporting.patch
queue-4.4/powerpc-64s-patch-barrier_nospec-in-modules.patch
queue-4.4/powerpc-pseries-support-firmware-disable-of-rfi-flush.patch
queue-4.4/powerpc-rfi-flush-call-setup_rfi_flush-after-lpm-migration.patch
queue-4.4/powerpc-pseries-query-hypervisor-for-count-cache-flush-settings.patch
queue-4.4/powerpc-powernv-set-or-clear-security-feature-flags.patch
queue-4.4/powerpc-64s-add-support-for-software-count-cache-flush.patch
queue-4.4/powerpc64s-show-ori31-availability-in-spectre_v1-sysfs-file-not-v2.patch
queue-4.4/powerpc-fsl-flush-the-branch-predictor-at-each-kernel-entry-64bit.patch
queue-4.4/powerpc-fsl-update-spectre-v2-reporting.patch
queue-4.4/powerpc-64s-wire-up-cpu_show_spectre_v2.patch
queue-4.4/powerpc-64-make-meltdown-reporting-book3s-64-specific.patch
queue-4.4/powerpc-rfi-flush-make-it-possible-to-call-setup_rfi_flush-again.patch
queue-4.4/powerpc-64s-add-support-for-ori-barrier_nospec-patching.patch
queue-4.4/powerpc-use-barrier_nospec-in-copy_from_user.patch
queue-4.4/powerpc-64s-fix-section-mismatch-warnings-from-setup_rfi_flush.patch
queue-4.4/powerpc-avoid-code-patching-freed-init-sections.patch
queue-4.4/powerpc-fsl-add-macro-to-flush-the-branch-predictor.patch
queue-4.4/powerpc-xmon-add-rfi-flush-related-fields-to-paca-dump.patch
queue-4.4/powerpc-fsl-add-barrier_nospec-implementation-for-nxp-powerpc-book3e.patch
queue-4.4/powerpc-security-fix-spectre_v2-reporting.patch
queue-4.4/powerpc-add-security-feature-flags-for-spectre-meltdown.patch
queue-4.4/powerpc-powernv-use-the-security-flags-in-pnv_setup_rfi_flush.patch
queue-4.4/powerpc-64-disable-the-speculation-barrier-from-the-command-line.patch
queue-4.4/powerpc-fsl-fix-the-flush-of-branch-predictor.patch
queue-4.4/powerpc-pseries-use-the-security-flags-in-pseries_setup_rfi_flush.patch
queue-4.4/powerpc-64-add-config_ppc_barrier_nospec.patch
queue-4.4/powerpc-64s-move-cpu_show_meltdown.patch
queue-4.4/powerpc-64-use-barrier_nospec-in-syscall-entry.patch
queue-4.4/powerpc-fsl-add-nospectre_v2-command-line-argument.patch
queue-4.4/powerpc-64s-add-new-security-feature-flags-for-count-cache-flush.patch
queue-4.4/powerpc-fsl-add-infrastructure-to-fixup-branch-predictor-flush.patch
queue-4.4/powerpc-rfi-flush-differentiate-enabled-and-patched-flush-types.patch
queue-4.4/powerpc-64s-enhance-the-information-in-cpu_show_spectre_v1.patch
queue-4.4/powerpc-64-call-setup_barrier_nospec-from-setup_arch.patch
queue-4.4/powerpc-rfi-flush-always-enable-fallback-flush-on-pseries.patch
queue-4.4/powerpc-64s-improve-rfi-l1-d-cache-flush-fallback.patch
queue-4.4/powerpc-asm-add-a-patch_site-macro-helpers-for-patching-instructions.patch
queue-4.4/powerpc-pseries-add-new-h_get_cpu_characteristics-flags.patch
queue-4.4/powerpc-64s-enable-barrier_nospec-based-on-firmware-settings.patch
queue-4.4/powerpc-powernv-support-firmware-disable-of-rfi-flush.patch
queue-4.4/powerpc-rfi-flush-move-the-logic-to-avoid-a-redo-into-the-debugfs-code.patch
queue-4.4/powerpc-powernv-query-firmware-for-count-cache-flush-settings.patch
queue-4.4/powerpc-64s-wire-up-cpu_show_spectre_v1.patch
queue-4.4/powerpc-64s-add-barrier_nospec.patch
queue-4.4/powerpc-64s-enhance-the-information-in-cpu_show_meltdown.patch
queue-4.4/powerpc-move-default-security-feature-flags.patch
queue-4.4/powerpc-pseries-fix-clearing-of-security-feature-flags.patch
queue-4.4/powerpc-pseries-restore-default-security-feature-flags-on-setup.patch


* Patch "powerpc/64: Call setup_barrier_nospec() from setup_arch()" has been added to the 4.4-stable tree
  2019-04-21 14:20   ` Michael Ellerman
  (?)
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, gregkh, linuxppc-dev, mpe,
	msuchanek, npiggin
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc/64: Call setup_barrier_nospec() from setup_arch()

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-64-call-setup_barrier_nospec-from-setup_arch.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:20:21 +1000
Subject: powerpc/64: Call setup_barrier_nospec() from setup_arch()
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-37-mpe@ellerman.id.au>

From: Michael Ellerman <mpe@ellerman.id.au>

commit af375eefbfb27cbb5b831984e66d724a40d26b5c upstream.

Currently we require platform code to call setup_barrier_nospec(). But
if we add an empty definition for the !CONFIG_PPC_BARRIER_NOSPEC case
then we can call it in setup_arch().

Signed-off-by: Diana Craciun <diana.craciun@nxp.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/include/asm/setup.h       |    4 ++++
 arch/powerpc/kernel/setup_32.c         |    2 ++
 arch/powerpc/kernel/setup_64.c         |    2 ++
 arch/powerpc/platforms/powernv/setup.c |    1 -
 arch/powerpc/platforms/pseries/setup.c |    1 -
 5 files changed, 8 insertions(+), 2 deletions(-)

--- a/arch/powerpc/include/asm/setup.h
+++ b/arch/powerpc/include/asm/setup.h
@@ -38,7 +38,11 @@ enum l1d_flush_type {
 
 void setup_rfi_flush(enum l1d_flush_type, bool enable);
 void do_rfi_flush_fixups(enum l1d_flush_type types);
+#ifdef CONFIG_PPC_BARRIER_NOSPEC
 void setup_barrier_nospec(void);
+#else
+static inline void setup_barrier_nospec(void) { };
+#endif
 void do_barrier_nospec_fixups(bool enable);
 extern bool barrier_nospec_enabled;
 
--- a/arch/powerpc/kernel/setup_32.c
+++ b/arch/powerpc/kernel/setup_32.c
@@ -322,6 +322,8 @@ void __init setup_arch(char **cmdline_p)
 		ppc_md.setup_arch();
 	if ( ppc_md.progress ) ppc_md.progress("arch: exit", 0x3eab);
 
+	setup_barrier_nospec();
+
 	paging_init();
 
 	/* Initialize the MMU context management stuff */
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -736,6 +736,8 @@ void __init setup_arch(char **cmdline_p)
 	if (ppc_md.setup_arch)
 		ppc_md.setup_arch();
 
+	setup_barrier_nospec();
+
 	paging_init();
 
 	/* Initialize the MMU context management stuff */
--- a/arch/powerpc/platforms/powernv/setup.c
+++ b/arch/powerpc/platforms/powernv/setup.c
@@ -123,7 +123,6 @@ static void pnv_setup_rfi_flush(void)
 		  security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV));
 
 	setup_rfi_flush(type, enable);
-	setup_barrier_nospec();
 }
 
 static void __init pnv_setup_arch(void)
--- a/arch/powerpc/platforms/pseries/setup.c
+++ b/arch/powerpc/platforms/pseries/setup.c
@@ -574,7 +574,6 @@ void pseries_setup_rfi_flush(void)
 		 security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR);
 
 	setup_rfi_flush(types, enable);
-	setup_barrier_nospec();
 }
 
 static void __init pSeries_setup_arch(void)




* Patch "powerpc/64: Disable the speculation barrier from the command line" has been added to the 4.4-stable tree
  2019-04-21 14:20   ` Michael Ellerman
  (?)
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, gregkh, linuxppc-dev, mpe,
	msuchanek, npiggin
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc/64: Disable the speculation barrier from the command line

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-64-disable-the-speculation-barrier-from-the-command-line.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:20:18 +1000
Subject: powerpc/64: Disable the speculation barrier from the command line
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-34-mpe@ellerman.id.au>

From: Diana Craciun <diana.craciun@nxp.com>

commit cf175dc315f90185128fb061dc05b6fbb211aa2f upstream.

The speculation barrier can be disabled from the command line
with the parameter: "nospectre_v1".

Signed-off-by: Diana Craciun <diana.craciun@nxp.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/kernel/security.c |   12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

--- a/arch/powerpc/kernel/security.c
+++ b/arch/powerpc/kernel/security.c
@@ -17,6 +17,7 @@
 unsigned long powerpc_security_features __read_mostly = SEC_FTR_DEFAULT;
 
 bool barrier_nospec_enabled;
+static bool no_nospec;
 
 static void enable_barrier_nospec(bool enable)
 {
@@ -43,9 +44,18 @@ void setup_barrier_nospec(void)
 	enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) &&
 		 security_ftr_enabled(SEC_FTR_BNDS_CHK_SPEC_BAR);
 
-	enable_barrier_nospec(enable);
+	if (!no_nospec)
+		enable_barrier_nospec(enable);
 }
 
+static int __init handle_nospectre_v1(char *p)
+{
+	no_nospec = true;
+
+	return 0;
+}
+early_param("nospectre_v1", handle_nospectre_v1);
+
 #ifdef CONFIG_DEBUG_FS
 static int barrier_nospec_set(void *data, u64 val)
 {




* Patch "powerpc/64: Make meltdown reporting Book3S 64 specific" has been added to the 4.4-stable tree
  2019-04-21 14:20   ` Michael Ellerman
  (?)
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, gregkh, linuxppc-dev, mpe,
	msuchanek, npiggin
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc/64: Make meltdown reporting Book3S 64 specific

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-64-make-meltdown-reporting-book3s-64-specific.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:20:22 +1000
Subject: powerpc/64: Make meltdown reporting Book3S 64 specific
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-38-mpe@ellerman.id.au>

From: Diana Craciun <diana.craciun@nxp.com>

commit 406d2b6ae3420f5bb2b3db6986dc6f0b6dbb637b upstream.

In a subsequent patch we will enable building security.c for Book3E.
However the NXP platforms are not vulnerable to Meltdown, so make the
Meltdown vulnerability reporting PPC_BOOK3S_64 specific.

Signed-off-by: Diana Craciun <diana.craciun@nxp.com>
[mpe: Split out of larger patch]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/kernel/security.c |    2 ++
 1 file changed, 2 insertions(+)

--- a/arch/powerpc/kernel/security.c
+++ b/arch/powerpc/kernel/security.c
@@ -93,6 +93,7 @@ static __init int barrier_nospec_debugfs
 device_initcall(barrier_nospec_debugfs_init);
 #endif /* CONFIG_DEBUG_FS */
 
+#ifdef CONFIG_PPC_BOOK3S_64
 ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr, char *buf)
 {
 	bool thread_priv;
@@ -125,6 +126,7 @@ ssize_t cpu_show_meltdown(struct device
 
 	return sprintf(buf, "Vulnerable\n");
 }
+#endif
 
 ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr, char *buf)
 {



^ permalink raw reply	[flat|nested] 180+ messages in thread

* Patch "powerpc/64: Use barrier_nospec in syscall entry" has been added to the 4.4-stable tree
  2019-04-21 14:20   ` Michael Ellerman
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, gregkh, linuxppc-dev, mpe,
	msuchanek, npiggin
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc/64: Use barrier_nospec in syscall entry

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-64-use-barrier_nospec-in-syscall-entry.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:20:14 +1000
Subject: powerpc/64: Use barrier_nospec in syscall entry
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-30-mpe@ellerman.id.au>

From: Michael Ellerman <mpe@ellerman.id.au>

commit 51973a815c6b46d7b23b68d6af371ad1c9d503ca upstream.

Our syscall entry is done in assembly, so patch in an explicit
barrier_nospec.

Based on a patch by Michal Suchanek.

Signed-off-by: Michal Suchanek <msuchanek@suse.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/kernel/entry_64.S |   10 ++++++++++
 1 file changed, 10 insertions(+)

--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -36,6 +36,7 @@
 #include <asm/hw_irq.h>
 #include <asm/context_tracking.h>
 #include <asm/tm.h>
+#include <asm/barrier.h>
 #ifdef CONFIG_PPC_BOOK3S
 #include <asm/exception-64s.h>
 #else
@@ -177,6 +178,15 @@ system_call:			/* label this so stack tr
 	clrldi	r8,r8,32
 15:
 	slwi	r0,r0,4
+
+	barrier_nospec_asm
+	/*
+	 * Prevent the load of the handler below (based on the user-passed
+	 * system call number) being speculatively executed until the test
+	 * against NR_syscalls and branch to .Lsyscall_enosys above has
+	 * committed.
+	 */
+
 	ldx	r12,r11,r0	/* Fetch system call handler [ptr] */
 	mtctr   r12
 	bctrl			/* Call handler */




* Patch "powerpc/64s: Add barrier_nospec" has been added to the 4.4-stable tree
  2019-04-21 14:20   ` Michael Ellerman
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, gregkh, linuxppc-dev, mpe,
	msuchanek, npiggin
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc/64s: Add barrier_nospec

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-64s-add-barrier_nospec.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:20:10 +1000
Subject: powerpc/64s: Add barrier_nospec
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-26-mpe@ellerman.id.au>

From: Michal Suchanek <msuchanek@suse.de>

commit a6b3964ad71a61bb7c61d80a60bea7d42187b2eb upstream.

A no-op form of ori (or immediate of 0 into r31 and the result stored
in r31) has been re-tasked as a speculation barrier. The instruction
only acts as a barrier on newer machines with appropriate firmware
support. On older CPUs it remains a harmless no-op.

Implement barrier_nospec using this instruction.

mpe: The semantics of the instruction are believed to be that it
prevents execution of subsequent instructions until preceding branches
have been fully resolved and are no longer executing speculatively.
There is no further documentation available at this time.

Signed-off-by: Michal Suchanek <msuchanek@suse.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/include/asm/barrier.h |   15 +++++++++++++++
 1 file changed, 15 insertions(+)

--- a/arch/powerpc/include/asm/barrier.h
+++ b/arch/powerpc/include/asm/barrier.h
@@ -92,4 +92,19 @@ do {									\
 #define smp_mb__after_atomic()      smp_mb()
 #define smp_mb__before_spinlock()   smp_mb()
 
+#ifdef CONFIG_PPC_BOOK3S_64
+/*
+ * Prevent execution of subsequent instructions until preceding branches have
+ * been fully resolved and are no longer executing speculatively.
+ */
+#define barrier_nospec_asm ori 31,31,0
+
+// This also acts as a compiler barrier due to the memory clobber.
+#define barrier_nospec() asm (stringify_in_c(barrier_nospec_asm) ::: "memory")
+
+#else /* !CONFIG_PPC_BOOK3S_64 */
+#define barrier_nospec_asm
+#define barrier_nospec()
+#endif
+
 #endif /* _ASM_POWERPC_BARRIER_H */




* Patch "powerpc/64s: Add new security feature flags for count cache flush" has been added to the 4.4-stable tree
  2019-04-21 14:20   ` Michael Ellerman
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, gregkh, linuxppc-dev, mpe,
	msuchanek, npiggin
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc/64s: Add new security feature flags for count cache flush

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-64s-add-new-security-feature-flags-for-count-cache-flush.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:20:25 +1000
Subject: powerpc/64s: Add new security feature flags for count cache flush
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-41-mpe@ellerman.id.au>

From: Michael Ellerman <mpe@ellerman.id.au>

commit dc8c6cce9a26a51fc19961accb978217a3ba8c75 upstream.

Add security feature flags to indicate the need for software to flush
the count cache on context switch, and for the presence of a hardware
assisted count cache flush.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/include/asm/security_features.h |    6 ++++++
 1 file changed, 6 insertions(+)

--- a/arch/powerpc/include/asm/security_features.h
+++ b/arch/powerpc/include/asm/security_features.h
@@ -59,6 +59,9 @@ static inline bool security_ftr_enabled(
 // Indirect branch prediction cache disabled
 #define SEC_FTR_COUNT_CACHE_DISABLED	0x0000000000000020ull
 
+// bcctr 2,0,0 triggers a hardware assisted count cache flush
+#define SEC_FTR_BCCTR_FLUSH_ASSIST	0x0000000000000800ull
+
 
 // Features indicating need for Spectre/Meltdown mitigations
 
@@ -74,6 +77,9 @@ static inline bool security_ftr_enabled(
 // Firmware configuration indicates user favours security over performance
 #define SEC_FTR_FAVOUR_SECURITY		0x0000000000000200ull
 
+// Software required to flush count cache on context switch
+#define SEC_FTR_FLUSH_COUNT_CACHE	0x0000000000000400ull
+
 
 // Features enabled by default
 #define SEC_FTR_DEFAULT \




* Patch "powerpc/64s: Add support for a store forwarding barrier at kernel entry/exit" has been added to the 4.4-stable tree
  2019-04-21 14:20   ` Michael Ellerman
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, gregkh, linuxppc-dev, mauricfo,
	mikey, mpe, msuchanek, npiggin, torvalds
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc/64s: Add support for a store forwarding barrier at kernel entry/exit

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-64s-add-support-for-a-store-forwarding-barrier-at-kernel-entry-exit.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:20:09 +1000
Subject: powerpc/64s: Add support for a store forwarding barrier at kernel entry/exit
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-25-mpe@ellerman.id.au>

From: Nicholas Piggin <npiggin@gmail.com>

commit a048a07d7f4535baa4cbad6bc024f175317ab938 upstream.

On some CPUs we can mitigate a vulnerability related to store-to-load
forwarding by preventing store forwarding between privilege domains,
inserting a barrier in kernel entry and exit paths.

This is known to be the case on at least Power7, Power8 and Power9
powerpc CPUs.

Barriers must generally be inserted before the first load after moving
to a higher privilege, and after the last store before moving to a
lower privilege; both HV and PR privilege transitions must be protected.

Barriers are added as patch sections, with all kernel/hypervisor entry
points patched, and the exit points to lower privilege levels patched
similarly to the RFI flush patching.

Firmware advertisement is not implemented yet, so CPU flush types
are hard-coded.

Thanks to Michal Suchánek for bug fixes and review.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Michal Suchánek <msuchanek@suse.de>
[mpe: 4.4 doesn't have EXC_REAL_OOL_MASKABLE, so do it manually]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/include/asm/exception-64s.h     |   35 ++++++
 arch/powerpc/include/asm/feature-fixups.h    |   19 +++
 arch/powerpc/include/asm/security_features.h |   11 ++
 arch/powerpc/kernel/exceptions-64s.S         |   22 +++-
 arch/powerpc/kernel/security.c               |  148 +++++++++++++++++++++++++++
 arch/powerpc/kernel/vmlinux.lds.S            |   14 ++
 arch/powerpc/lib/feature-fixups.c            |  116 ++++++++++++++++++++-
 arch/powerpc/platforms/powernv/setup.c       |    1 
 arch/powerpc/platforms/pseries/setup.c       |    1 
 9 files changed, 365 insertions(+), 2 deletions(-)

--- a/arch/powerpc/include/asm/exception-64s.h
+++ b/arch/powerpc/include/asm/exception-64s.h
@@ -50,6 +50,27 @@
 #define EX_PPR		88	/* SMT thread status register (priority) */
 #define EX_CTR		96
 
+#define STF_ENTRY_BARRIER_SLOT						\
+	STF_ENTRY_BARRIER_FIXUP_SECTION;				\
+	nop;								\
+	nop;								\
+	nop
+
+#define STF_EXIT_BARRIER_SLOT						\
+	STF_EXIT_BARRIER_FIXUP_SECTION;					\
+	nop;								\
+	nop;								\
+	nop;								\
+	nop;								\
+	nop;								\
+	nop
+
+/*
+ * r10 must be free to use, r13 must be paca
+ */
+#define INTERRUPT_TO_KERNEL						\
+	STF_ENTRY_BARRIER_SLOT
+
 /*
  * Macros for annotating the expected destination of (h)rfid
  *
@@ -66,16 +87,19 @@
 	rfid
 
 #define RFI_TO_USER							\
+	STF_EXIT_BARRIER_SLOT;						\
 	RFI_FLUSH_SLOT;							\
 	rfid;								\
 	b	rfi_flush_fallback
 
 #define RFI_TO_USER_OR_KERNEL						\
+	STF_EXIT_BARRIER_SLOT;						\
 	RFI_FLUSH_SLOT;							\
 	rfid;								\
 	b	rfi_flush_fallback
 
 #define RFI_TO_GUEST							\
+	STF_EXIT_BARRIER_SLOT;						\
 	RFI_FLUSH_SLOT;							\
 	rfid;								\
 	b	rfi_flush_fallback
@@ -84,21 +108,25 @@
 	hrfid
 
 #define HRFI_TO_USER							\
+	STF_EXIT_BARRIER_SLOT;						\
 	RFI_FLUSH_SLOT;							\
 	hrfid;								\
 	b	hrfi_flush_fallback
 
 #define HRFI_TO_USER_OR_KERNEL						\
+	STF_EXIT_BARRIER_SLOT;						\
 	RFI_FLUSH_SLOT;							\
 	hrfid;								\
 	b	hrfi_flush_fallback
 
 #define HRFI_TO_GUEST							\
+	STF_EXIT_BARRIER_SLOT;						\
 	RFI_FLUSH_SLOT;							\
 	hrfid;								\
 	b	hrfi_flush_fallback
 
 #define HRFI_TO_UNKNOWN							\
+	STF_EXIT_BARRIER_SLOT;						\
 	RFI_FLUSH_SLOT;							\
 	hrfid;								\
 	b	hrfi_flush_fallback
@@ -226,6 +254,7 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
 #define __EXCEPTION_PROLOG_1(area, extra, vec)				\
 	OPT_SAVE_REG_TO_PACA(area+EX_PPR, r9, CPU_FTR_HAS_PPR);		\
 	OPT_SAVE_REG_TO_PACA(area+EX_CFAR, r10, CPU_FTR_CFAR);		\
+	INTERRUPT_TO_KERNEL;						\
 	SAVE_CTR(r10, area);						\
 	mfcr	r9;							\
 	extra(vec);							\
@@ -512,6 +541,12 @@ label##_relon_hv:						\
 #define _MASKABLE_EXCEPTION_PSERIES(vec, label, h, extra)		\
 	__MASKABLE_EXCEPTION_PSERIES(vec, label, h, extra)
 
+#define MASKABLE_EXCEPTION_OOL(vec, label)				\
+	.globl label##_ool;						\
+label##_ool:								\
+	EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_TEST_PR, vec);		\
+	EXCEPTION_PROLOG_PSERIES_1(label##_common, EXC_STD);
+
 #define MASKABLE_EXCEPTION_PSERIES(loc, vec, label)			\
 	. = loc;							\
 	.globl label##_pSeries;						\
--- a/arch/powerpc/include/asm/feature-fixups.h
+++ b/arch/powerpc/include/asm/feature-fixups.h
@@ -184,6 +184,22 @@ label##3:					       	\
 	FTR_ENTRY_OFFSET label##1b-label##3b;		\
 	.popsection;
 
+#define STF_ENTRY_BARRIER_FIXUP_SECTION			\
+953:							\
+	.pushsection __stf_entry_barrier_fixup,"a";	\
+	.align 2;					\
+954:							\
+	FTR_ENTRY_OFFSET 953b-954b;			\
+	.popsection;
+
+#define STF_EXIT_BARRIER_FIXUP_SECTION			\
+955:							\
+	.pushsection __stf_exit_barrier_fixup,"a";	\
+	.align 2;					\
+956:							\
+	FTR_ENTRY_OFFSET 955b-956b;			\
+	.popsection;
+
 #define RFI_FLUSH_FIXUP_SECTION				\
 951:							\
 	.pushsection __rfi_flush_fixup,"a";		\
@@ -195,6 +211,9 @@ label##3:					       	\
 
 #ifndef __ASSEMBLY__
 
+extern long stf_barrier_fallback;
+extern long __start___stf_entry_barrier_fixup, __stop___stf_entry_barrier_fixup;
+extern long __start___stf_exit_barrier_fixup, __stop___stf_exit_barrier_fixup;
 extern long __start___rfi_flush_fixup, __stop___rfi_flush_fixup;
 
 #endif
--- a/arch/powerpc/include/asm/security_features.h
+++ b/arch/powerpc/include/asm/security_features.h
@@ -12,6 +12,17 @@
 extern unsigned long powerpc_security_features;
 extern bool rfi_flush;
 
+/* These are bit flags */
+enum stf_barrier_type {
+	STF_BARRIER_NONE	= 0x1,
+	STF_BARRIER_FALLBACK	= 0x2,
+	STF_BARRIER_EIEIO	= 0x4,
+	STF_BARRIER_SYNC_ORI	= 0x8,
+};
+
+void setup_stf_barrier(void);
+void do_stf_barrier_fixups(enum stf_barrier_type types);
+
 static inline void security_ftr_set(unsigned long feature)
 {
 	powerpc_security_features |= feature;
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -36,6 +36,7 @@ BEGIN_FTR_SECTION						\
 END_FTR_SECTION_IFSET(CPU_FTR_REAL_LE)				\
 	mr	r9,r13 ;					\
 	GET_PACA(r13) ;						\
+	INTERRUPT_TO_KERNEL ;					\
 	mfspr	r11,SPRN_SRR0 ;					\
 0:
 
@@ -292,7 +293,9 @@ hardware_interrupt_hv:
 	. = 0x900
 	.globl decrementer_pSeries
 decrementer_pSeries:
-	_MASKABLE_EXCEPTION_PSERIES(0x900, decrementer, EXC_STD, SOFTEN_TEST_PR)
+	SET_SCRATCH0(r13)
+	EXCEPTION_PROLOG_0(PACA_EXGEN)
+	b	decrementer_ool
 
 	STD_EXCEPTION_HV(0x980, 0x982, hdecrementer)
 
@@ -319,6 +322,7 @@ system_call_pSeries:
 	OPT_GET_SPR(r9, SPRN_PPR, CPU_FTR_HAS_PPR);
 	HMT_MEDIUM;
 	std	r10,PACA_EXGEN+EX_R10(r13)
+	INTERRUPT_TO_KERNEL
 	OPT_SAVE_REG_TO_PACA(PACA_EXGEN+EX_PPR, r9, CPU_FTR_HAS_PPR);
 	mfcr	r9
 	KVMTEST(0xc00)
@@ -607,6 +611,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
 
 	.align	7
 	/* moved from 0xe00 */
+	MASKABLE_EXCEPTION_OOL(0x900, decrementer)
 	STD_EXCEPTION_HV_OOL(0xe02, h_data_storage)
 	KVM_HANDLER_SKIP(PACA_EXGEN, EXC_HV, 0xe02)
 	STD_EXCEPTION_HV_OOL(0xe22, h_instr_storage)
@@ -1564,6 +1569,21 @@ power4_fixup_nap:
 	blr
 #endif
 
+	.balign 16
+	.globl stf_barrier_fallback
+stf_barrier_fallback:
+	std	r9,PACA_EXRFI+EX_R9(r13)
+	std	r10,PACA_EXRFI+EX_R10(r13)
+	sync
+	ld	r9,PACA_EXRFI+EX_R9(r13)
+	ld	r10,PACA_EXRFI+EX_R10(r13)
+	ori	31,31,0
+	.rept 14
+	b	1f
+1:
+	.endr
+	blr
+
 	.globl rfi_flush_fallback
 rfi_flush_fallback:
 	SET_SCRATCH0(r13);
--- a/arch/powerpc/kernel/security.c
+++ b/arch/powerpc/kernel/security.c
@@ -5,9 +5,11 @@
 // Copyright 2018, Michael Ellerman, IBM Corporation.
 
 #include <linux/kernel.h>
+#include <linux/debugfs.h>
 #include <linux/device.h>
 #include <linux/seq_buf.h>
 
+#include <asm/debug.h>
 #include <asm/security_features.h>
 
 
@@ -86,3 +88,149 @@ ssize_t cpu_show_spectre_v2(struct devic
 
 	return s.len;
 }
+
+/*
+ * Store-forwarding barrier support.
+ */
+
+static enum stf_barrier_type stf_enabled_flush_types;
+static bool no_stf_barrier;
+bool stf_barrier;
+
+static int __init handle_no_stf_barrier(char *p)
+{
+	pr_info("stf-barrier: disabled on command line.");
+	no_stf_barrier = true;
+	return 0;
+}
+
+early_param("no_stf_barrier", handle_no_stf_barrier);
+
+/* This is the generic flag used by other architectures */
+static int __init handle_ssbd(char *p)
+{
+	if (!p || strncmp(p, "auto", 5) == 0 || strncmp(p, "on", 2) == 0 ) {
+		/* Until firmware tells us, we have the barrier with auto */
+		return 0;
+	} else if (strncmp(p, "off", 3) == 0) {
+		handle_no_stf_barrier(NULL);
+		return 0;
+	} else
+		return 1;
+
+	return 0;
+}
+early_param("spec_store_bypass_disable", handle_ssbd);
+
+/* This is the generic flag used by other architectures */
+static int __init handle_no_ssbd(char *p)
+{
+	handle_no_stf_barrier(NULL);
+	return 0;
+}
+early_param("nospec_store_bypass_disable", handle_no_ssbd);
+
+static void stf_barrier_enable(bool enable)
+{
+	if (enable)
+		do_stf_barrier_fixups(stf_enabled_flush_types);
+	else
+		do_stf_barrier_fixups(STF_BARRIER_NONE);
+
+	stf_barrier = enable;
+}
+
+void setup_stf_barrier(void)
+{
+	enum stf_barrier_type type;
+	bool enable, hv;
+
+	hv = cpu_has_feature(CPU_FTR_HVMODE);
+
+	/* Default to fallback in case fw-features are not available */
+	if (cpu_has_feature(CPU_FTR_ARCH_207S))
+		type = STF_BARRIER_SYNC_ORI;
+	else if (cpu_has_feature(CPU_FTR_ARCH_206))
+		type = STF_BARRIER_FALLBACK;
+	else
+		type = STF_BARRIER_NONE;
+
+	enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) &&
+		(security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR) ||
+		 (security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV) && hv));
+
+	if (type == STF_BARRIER_FALLBACK) {
+		pr_info("stf-barrier: fallback barrier available\n");
+	} else if (type == STF_BARRIER_SYNC_ORI) {
+		pr_info("stf-barrier: hwsync barrier available\n");
+	} else if (type == STF_BARRIER_EIEIO) {
+		pr_info("stf-barrier: eieio barrier available\n");
+	}
+
+	stf_enabled_flush_types = type;
+
+	if (!no_stf_barrier)
+		stf_barrier_enable(enable);
+}
+
+ssize_t cpu_show_spec_store_bypass(struct device *dev, struct device_attribute *attr, char *buf)
+{
+	if (stf_barrier && stf_enabled_flush_types != STF_BARRIER_NONE) {
+		const char *type;
+		switch (stf_enabled_flush_types) {
+		case STF_BARRIER_EIEIO:
+			type = "eieio";
+			break;
+		case STF_BARRIER_SYNC_ORI:
+			type = "hwsync";
+			break;
+		case STF_BARRIER_FALLBACK:
+			type = "fallback";
+			break;
+		default:
+			type = "unknown";
+		}
+		return sprintf(buf, "Mitigation: Kernel entry/exit barrier (%s)\n", type);
+	}
+
+	if (!security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV) &&
+	    !security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR))
+		return sprintf(buf, "Not affected\n");
+
+	return sprintf(buf, "Vulnerable\n");
+}
+
+#ifdef CONFIG_DEBUG_FS
+static int stf_barrier_set(void *data, u64 val)
+{
+	bool enable;
+
+	if (val == 1)
+		enable = true;
+	else if (val == 0)
+		enable = false;
+	else
+		return -EINVAL;
+
+	/* Only do anything if we're changing state */
+	if (enable != stf_barrier)
+		stf_barrier_enable(enable);
+
+	return 0;
+}
+
+static int stf_barrier_get(void *data, u64 *val)
+{
+	*val = stf_barrier ? 1 : 0;
+	return 0;
+}
+
+DEFINE_SIMPLE_ATTRIBUTE(fops_stf_barrier, stf_barrier_get, stf_barrier_set, "%llu\n");
+
+static __init int stf_barrier_debugfs_init(void)
+{
+	debugfs_create_file("stf_barrier", 0600, powerpc_debugfs_root, NULL, &fops_stf_barrier);
+	return 0;
+}
+device_initcall(stf_barrier_debugfs_init);
+#endif /* CONFIG_DEBUG_FS */
--- a/arch/powerpc/kernel/vmlinux.lds.S
+++ b/arch/powerpc/kernel/vmlinux.lds.S
@@ -74,6 +74,20 @@ SECTIONS
 
 #ifdef CONFIG_PPC64
 	. = ALIGN(8);
+	__stf_entry_barrier_fixup : AT(ADDR(__stf_entry_barrier_fixup) - LOAD_OFFSET) {
+		__start___stf_entry_barrier_fixup = .;
+		*(__stf_entry_barrier_fixup)
+		__stop___stf_entry_barrier_fixup = .;
+	}
+
+	. = ALIGN(8);
+	__stf_exit_barrier_fixup : AT(ADDR(__stf_exit_barrier_fixup) - LOAD_OFFSET) {
+		__start___stf_exit_barrier_fixup = .;
+		*(__stf_exit_barrier_fixup)
+		__stop___stf_exit_barrier_fixup = .;
+	}
+
+	. = ALIGN(8);
 	__rfi_flush_fixup : AT(ADDR(__rfi_flush_fixup) - LOAD_OFFSET) {
 		__start___rfi_flush_fixup = .;
 		*(__rfi_flush_fixup)
--- a/arch/powerpc/lib/feature-fixups.c
+++ b/arch/powerpc/lib/feature-fixups.c
@@ -21,7 +21,7 @@
 #include <asm/page.h>
 #include <asm/sections.h>
 #include <asm/setup.h>
-
+#include <asm/security_features.h>
 
 struct fixup_entry {
 	unsigned long	mask;
@@ -115,6 +115,120 @@ void do_feature_fixups(unsigned long val
 }
 
 #ifdef CONFIG_PPC_BOOK3S_64
+void do_stf_entry_barrier_fixups(enum stf_barrier_type types)
+{
+	unsigned int instrs[3], *dest;
+	long *start, *end;
+	int i;
+
+	start = PTRRELOC(&__start___stf_entry_barrier_fixup),
+	end = PTRRELOC(&__stop___stf_entry_barrier_fixup);
+
+	instrs[0] = 0x60000000; /* nop */
+	instrs[1] = 0x60000000; /* nop */
+	instrs[2] = 0x60000000; /* nop */
+
+	i = 0;
+	if (types & STF_BARRIER_FALLBACK) {
+		instrs[i++] = 0x7d4802a6; /* mflr r10		*/
+		instrs[i++] = 0x60000000; /* branch patched below */
+		instrs[i++] = 0x7d4803a6; /* mtlr r10		*/
+	} else if (types & STF_BARRIER_EIEIO) {
+		instrs[i++] = 0x7e0006ac; /* eieio + bit 6 hint */
+	} else if (types & STF_BARRIER_SYNC_ORI) {
+		instrs[i++] = 0x7c0004ac; /* hwsync		*/
+		instrs[i++] = 0xe94d0000; /* ld r10,0(r13)	*/
+		instrs[i++] = 0x63ff0000; /* ori 31,31,0 speculation barrier */
+	}
+
+	for (i = 0; start < end; start++, i++) {
+		dest = (void *)start + *start;
+
+		pr_devel("patching dest %lx\n", (unsigned long)dest);
+
+		patch_instruction(dest, instrs[0]);
+
+		if (types & STF_BARRIER_FALLBACK)
+			patch_branch(dest + 1, (unsigned long)&stf_barrier_fallback,
+				     BRANCH_SET_LINK);
+		else
+			patch_instruction(dest + 1, instrs[1]);
+
+		patch_instruction(dest + 2, instrs[2]);
+	}
+
+	printk(KERN_DEBUG "stf-barrier: patched %d entry locations (%s barrier)\n", i,
+		(types == STF_BARRIER_NONE)                  ? "no" :
+		(types == STF_BARRIER_FALLBACK)              ? "fallback" :
+		(types == STF_BARRIER_EIEIO)                 ? "eieio" :
+		(types == (STF_BARRIER_SYNC_ORI))            ? "hwsync"
+		                                           : "unknown");
+}
+
+void do_stf_exit_barrier_fixups(enum stf_barrier_type types)
+{
+	unsigned int instrs[6], *dest;
+	long *start, *end;
+	int i;
+
+	start = PTRRELOC(&__start___stf_exit_barrier_fixup),
+	end = PTRRELOC(&__stop___stf_exit_barrier_fixup);
+
+	instrs[0] = 0x60000000; /* nop */
+	instrs[1] = 0x60000000; /* nop */
+	instrs[2] = 0x60000000; /* nop */
+	instrs[3] = 0x60000000; /* nop */
+	instrs[4] = 0x60000000; /* nop */
+	instrs[5] = 0x60000000; /* nop */
+
+	i = 0;
+	if (types & STF_BARRIER_FALLBACK || types & STF_BARRIER_SYNC_ORI) {
+		if (cpu_has_feature(CPU_FTR_HVMODE)) {
+			instrs[i++] = 0x7db14ba6; /* mtspr 0x131, r13 (HSPRG1) */
+			instrs[i++] = 0x7db04aa6; /* mfspr r13, 0x130 (HSPRG0) */
+		} else {
+			instrs[i++] = 0x7db243a6; /* mtsprg 2,r13	*/
+			instrs[i++] = 0x7db142a6; /* mfsprg r13,1    */
+	        }
+		instrs[i++] = 0x7c0004ac; /* hwsync		*/
+		instrs[i++] = 0xe9ad0000; /* ld r13,0(r13)	*/
+		instrs[i++] = 0x63ff0000; /* ori 31,31,0 speculation barrier */
+		if (cpu_has_feature(CPU_FTR_HVMODE)) {
+			instrs[i++] = 0x7db14aa6; /* mfspr r13, 0x131 (HSPRG1) */
+		} else {
+			instrs[i++] = 0x7db242a6; /* mfsprg r13,2 */
+		}
+	} else if (types & STF_BARRIER_EIEIO) {
+		instrs[i++] = 0x7e0006ac; /* eieio + bit 6 hint */
+	}
+
+	for (i = 0; start < end; start++, i++) {
+		dest = (void *)start + *start;
+
+		pr_devel("patching dest %lx\n", (unsigned long)dest);
+
+		patch_instruction(dest, instrs[0]);
+		patch_instruction(dest + 1, instrs[1]);
+		patch_instruction(dest + 2, instrs[2]);
+		patch_instruction(dest + 3, instrs[3]);
+		patch_instruction(dest + 4, instrs[4]);
+		patch_instruction(dest + 5, instrs[5]);
+	}
+	printk(KERN_DEBUG "stf-barrier: patched %d exit locations (%s barrier)\n", i,
+		(types == STF_BARRIER_NONE)                  ? "no" :
+		(types == STF_BARRIER_FALLBACK)              ? "fallback" :
+		(types == STF_BARRIER_EIEIO)                 ? "eieio" :
+		(types == (STF_BARRIER_SYNC_ORI))            ? "hwsync"
+		                                           : "unknown");
+}
+
+
+void do_stf_barrier_fixups(enum stf_barrier_type types)
+{
+	do_stf_entry_barrier_fixups(types);
+	do_stf_exit_barrier_fixups(types);
+}
+
 void do_rfi_flush_fixups(enum l1d_flush_type types)
 {
 	unsigned int instrs[3], *dest;
--- a/arch/powerpc/platforms/powernv/setup.c
+++ b/arch/powerpc/platforms/powernv/setup.c
@@ -130,6 +130,7 @@ static void __init pnv_setup_arch(void)
 	set_arch_panic_timeout(10, ARCH_PANIC_TIMEOUT);
 
 	pnv_setup_rfi_flush();
+	setup_stf_barrier();
 
 	/* Initialize SMP */
 	pnv_smp_init();
--- a/arch/powerpc/platforms/pseries/setup.c
+++ b/arch/powerpc/platforms/pseries/setup.c
@@ -593,6 +593,7 @@ static void __init pSeries_setup_arch(vo
 	fwnmi_init();
 
 	pseries_setup_rfi_flush();
+	setup_stf_barrier();
 
 	/* By default, only probe PCI (can be overridden by rtas_pci) */
 	pci_add_flags(PCI_PROBE_ONLY);


Patches currently in stable-queue which might be from mpe@ellerman.id.au are

queue-4.4/powerpc-64s-add-support-for-a-store-forwarding-barrier-at-kernel-entry-exit.patch
queue-4.4/powerpc-64-make-stf-barrier-ppc_book3s_64-specific.patch
queue-4.4/powerpc-pseries-set-or-clear-security-feature-flags.patch
queue-4.4/powerpc-fsl-fix-spectre_v2-mitigations-reporting.patch
queue-4.4/powerpc-64s-patch-barrier_nospec-in-modules.patch
queue-4.4/powerpc-pseries-support-firmware-disable-of-rfi-flush.patch
queue-4.4/powerpc-rfi-flush-call-setup_rfi_flush-after-lpm-migration.patch
queue-4.4/powerpc-pseries-query-hypervisor-for-count-cache-flush-settings.patch
queue-4.4/powerpc-powernv-set-or-clear-security-feature-flags.patch
queue-4.4/powerpc-64s-add-support-for-software-count-cache-flush.patch
queue-4.4/powerpc64s-show-ori31-availability-in-spectre_v1-sysfs-file-not-v2.patch
queue-4.4/powerpc-fsl-flush-the-branch-predictor-at-each-kernel-entry-64bit.patch
queue-4.4/powerpc-fsl-update-spectre-v2-reporting.patch
queue-4.4/powerpc-64s-wire-up-cpu_show_spectre_v2.patch
queue-4.4/powerpc-64-make-meltdown-reporting-book3s-64-specific.patch
queue-4.4/powerpc-rfi-flush-make-it-possible-to-call-setup_rfi_flush-again.patch
queue-4.4/powerpc-64s-add-support-for-ori-barrier_nospec-patching.patch
queue-4.4/powerpc-use-barrier_nospec-in-copy_from_user.patch
queue-4.4/powerpc-64s-fix-section-mismatch-warnings-from-setup_rfi_flush.patch
queue-4.4/powerpc-avoid-code-patching-freed-init-sections.patch
queue-4.4/powerpc-fsl-add-macro-to-flush-the-branch-predictor.patch
queue-4.4/powerpc-xmon-add-rfi-flush-related-fields-to-paca-dump.patch
queue-4.4/powerpc-fsl-add-barrier_nospec-implementation-for-nxp-powerpc-book3e.patch
queue-4.4/powerpc-security-fix-spectre_v2-reporting.patch
queue-4.4/powerpc-add-security-feature-flags-for-spectre-meltdown.patch
queue-4.4/powerpc-powernv-use-the-security-flags-in-pnv_setup_rfi_flush.patch
queue-4.4/powerpc-64-disable-the-speculation-barrier-from-the-command-line.patch
queue-4.4/powerpc-fsl-fix-the-flush-of-branch-predictor.patch
queue-4.4/powerpc-pseries-use-the-security-flags-in-pseries_setup_rfi_flush.patch
queue-4.4/powerpc-64-add-config_ppc_barrier_nospec.patch
queue-4.4/powerpc-64s-move-cpu_show_meltdown.patch
queue-4.4/powerpc-64-use-barrier_nospec-in-syscall-entry.patch
queue-4.4/powerpc-fsl-add-nospectre_v2-command-line-argument.patch
queue-4.4/powerpc-64s-add-new-security-feature-flags-for-count-cache-flush.patch
queue-4.4/powerpc-fsl-add-infrastructure-to-fixup-branch-predictor-flush.patch
queue-4.4/powerpc-rfi-flush-differentiate-enabled-and-patched-flush-types.patch
queue-4.4/powerpc-64s-enhance-the-information-in-cpu_show_spectre_v1.patch
queue-4.4/powerpc-64-call-setup_barrier_nospec-from-setup_arch.patch
queue-4.4/powerpc-rfi-flush-always-enable-fallback-flush-on-pseries.patch
queue-4.4/powerpc-64s-improve-rfi-l1-d-cache-flush-fallback.patch
queue-4.4/powerpc-asm-add-a-patch_site-macro-helpers-for-patching-instructions.patch
queue-4.4/powerpc-pseries-add-new-h_get_cpu_characteristics-flags.patch
queue-4.4/powerpc-64s-enable-barrier_nospec-based-on-firmware-settings.patch
queue-4.4/powerpc-powernv-support-firmware-disable-of-rfi-flush.patch
queue-4.4/powerpc-rfi-flush-move-the-logic-to-avoid-a-redo-into-the-debugfs-code.patch
queue-4.4/powerpc-powernv-query-firmware-for-count-cache-flush-settings.patch
queue-4.4/powerpc-64s-wire-up-cpu_show_spectre_v1.patch
queue-4.4/powerpc-64s-add-barrier_nospec.patch
queue-4.4/powerpc-64s-enhance-the-information-in-cpu_show_meltdown.patch
queue-4.4/powerpc-move-default-security-feature-flags.patch
queue-4.4/powerpc-pseries-fix-clearing-of-security-feature-flags.patch
queue-4.4/powerpc-pseries-restore-default-security-feature-flags-on-setup.patch

^ permalink raw reply	[flat|nested] 180+ messages in thread

* Patch "powerpc/64s: Add support for ori barrier_nospec patching" has been added to the 4.4-stable tree
  2019-04-21 14:20   ` Michael Ellerman
  (?)
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, gregkh, linuxppc-dev, mpe,
	msuchanek, npiggin
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc/64s: Add support for ori barrier_nospec patching

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-64s-add-support-for-ori-barrier_nospec-patching.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:20:11 +1000
Subject: powerpc/64s: Add support for ori barrier_nospec patching
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-27-mpe@ellerman.id.au>

From: Michal Suchanek <msuchanek@suse.de>

commit 2eea7f067f495e33b8b116b35b5988ab2b8aec55 upstream.

Based on the RFI patching. This is required to be able to disable the
speculation barrier.

Only one barrier type is supported and it does nothing when the
firmware does not enable it. Also, re-patching modules is not supported,
so the only meaningful thing that can be done is patching out the
speculation barrier at boot when the user says it is not wanted.
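
[Editor's aside, not part of the patch: the patching below toggles live kernel text between a nop and the ori-based barrier, and the two magic instruction words used for that (0x60000000 and 0x63ff0000) can be cross-checked against the standard Power ISA D-form field layout. `encode_ori` is a hypothetical helper written only for this illustration.]

```python
# Power ISA D-form encoding of "ori rA,rS,UI":
# primary opcode 24 in the top 6 bits, then rS, rA, and a 16-bit immediate.
def encode_ori(rs: int, ra: int, ui: int) -> int:
    return (24 << 26) | (rs << 21) | (ra << 16) | (ui & 0xFFFF)

NOP = encode_ori(0, 0, 0)        # "ori 0,0,0" is the canonical PowerPC nop
BARRIER = encode_ori(31, 31, 0)  # "ori 31,31,0" speculation barrier form

print(hex(NOP), hex(BARRIER))    # -> 0x60000000 0x63ff0000
```

These match the `instr = 0x60000000; /* nop */` and `instr = 0x63ff0000; /* ori 31,31,0 */` constants in the `do_barrier_nospec_fixups()` hunk below.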

Signed-off-by: Michal Suchanek <msuchanek@suse.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/include/asm/barrier.h        |    2 +-
 arch/powerpc/include/asm/feature-fixups.h |    9 +++++++++
 arch/powerpc/include/asm/setup.h          |    1 +
 arch/powerpc/kernel/security.c            |    9 +++++++++
 arch/powerpc/kernel/vmlinux.lds.S         |    7 +++++++
 arch/powerpc/lib/feature-fixups.c         |   27 +++++++++++++++++++++++++++
 6 files changed, 54 insertions(+), 1 deletion(-)

--- a/arch/powerpc/include/asm/barrier.h
+++ b/arch/powerpc/include/asm/barrier.h
@@ -97,7 +97,7 @@ do {									\
  * Prevent execution of subsequent instructions until preceding branches have
  * been fully resolved and are no longer executing speculatively.
  */
-#define barrier_nospec_asm ori 31,31,0
+#define barrier_nospec_asm NOSPEC_BARRIER_FIXUP_SECTION; nop
 
 // This also acts as a compiler barrier due to the memory clobber.
 #define barrier_nospec() asm (stringify_in_c(barrier_nospec_asm) ::: "memory")
--- a/arch/powerpc/include/asm/feature-fixups.h
+++ b/arch/powerpc/include/asm/feature-fixups.h
@@ -208,6 +208,14 @@ label##3:					       	\
 	FTR_ENTRY_OFFSET 951b-952b;			\
 	.popsection;
 
+#define NOSPEC_BARRIER_FIXUP_SECTION			\
+953:							\
+	.pushsection __barrier_nospec_fixup,"a";	\
+	.align 2;					\
+954:							\
+	FTR_ENTRY_OFFSET 953b-954b;			\
+	.popsection;
+
 
 #ifndef __ASSEMBLY__
 
@@ -215,6 +223,7 @@ extern long stf_barrier_fallback;
 extern long __start___stf_entry_barrier_fixup, __stop___stf_entry_barrier_fixup;
 extern long __start___stf_exit_barrier_fixup, __stop___stf_exit_barrier_fixup;
 extern long __start___rfi_flush_fixup, __stop___rfi_flush_fixup;
+extern long __start___barrier_nospec_fixup, __stop___barrier_nospec_fixup;
 
 #endif
 
--- a/arch/powerpc/include/asm/setup.h
+++ b/arch/powerpc/include/asm/setup.h
@@ -38,6 +38,7 @@ enum l1d_flush_type {
 
 void setup_rfi_flush(enum l1d_flush_type, bool enable);
 void do_rfi_flush_fixups(enum l1d_flush_type types);
+void do_barrier_nospec_fixups(bool enable);
 
 #endif /* !__ASSEMBLY__ */
 
--- a/arch/powerpc/kernel/security.c
+++ b/arch/powerpc/kernel/security.c
@@ -11,10 +11,19 @@
 
 #include <asm/debug.h>
 #include <asm/security_features.h>
+#include <asm/setup.h>
 
 
 unsigned long powerpc_security_features __read_mostly = SEC_FTR_DEFAULT;
 
+static bool barrier_nospec_enabled;
+
+static void enable_barrier_nospec(bool enable)
+{
+	barrier_nospec_enabled = enable;
+	do_barrier_nospec_fixups(enable);
+}
+
 ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr, char *buf)
 {
 	bool thread_priv;
--- a/arch/powerpc/kernel/vmlinux.lds.S
+++ b/arch/powerpc/kernel/vmlinux.lds.S
@@ -93,6 +93,13 @@ SECTIONS
 		*(__rfi_flush_fixup)
 		__stop___rfi_flush_fixup = .;
 	}
+
+	. = ALIGN(8);
+	__spec_barrier_fixup : AT(ADDR(__spec_barrier_fixup) - LOAD_OFFSET) {
+		__start___barrier_nospec_fixup = .;
+		*(__barrier_nospec_fixup)
+		__stop___barrier_nospec_fixup = .;
+	}
 #endif
 
 	EXCEPTION_TABLE(0)
--- a/arch/powerpc/lib/feature-fixups.c
+++ b/arch/powerpc/lib/feature-fixups.c
@@ -274,6 +274,33 @@ void do_rfi_flush_fixups(enum l1d_flush_
 		(types &  L1D_FLUSH_MTTRIG)     ? "mttrig type"
 						: "unknown");
 }
+
+void do_barrier_nospec_fixups(bool enable)
+{
+	unsigned int instr, *dest;
+	long *start, *end;
+	int i;
+
+	start = PTRRELOC(&__start___barrier_nospec_fixup),
+	end = PTRRELOC(&__stop___barrier_nospec_fixup);
+
+	instr = 0x60000000; /* nop */
+
+	if (enable) {
+		pr_info("barrier-nospec: using ORI speculation barrier\n");
+		instr = 0x63ff0000; /* ori 31,31,0 speculation barrier */
+	}
+
+	for (i = 0; start < end; start++, i++) {
+		dest = (void *)start + *start;
+
+		pr_devel("patching dest %lx\n", (unsigned long)dest);
+		patch_instruction(dest, instr);
+	}
+
+	printk(KERN_DEBUG "barrier-nospec: patched %d locations\n", i);
+}
+
 #endif /* CONFIG_PPC_BOOK3S_64 */
 
 void do_lwsync_fixups(unsigned long value, void *fixup_start, void *fixup_end)



* Patch "powerpc/64s: Add support for software count cache flush" has been added to the 4.4-stable tree
  2019-04-21 14:20   ` Michael Ellerman
  (?)
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, gregkh, linuxppc-dev, mpe,
	msuchanek, npiggin
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc/64s: Add support for software count cache flush

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-64s-add-support-for-software-count-cache-flush.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:20:26 +1000
Subject: powerpc/64s: Add support for software count cache flush
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-42-mpe@ellerman.id.au>

From: Michael Ellerman <mpe@ellerman.id.au>

commit ee13cb249fabdff8b90aaff61add347749280087 upstream.

Some CPU revisions support a mode where the count cache needs to be
flushed by software on context switch. Additionally some revisions may
have a hardware accelerated flush, in which case the software flush
sequence can be shortened.

If we detect the appropriate flag from firmware we patch a branch
into _switch() which takes us to a count cache flush sequence.

That sequence in turn may be patched to return early if we detect that
the CPU supports accelerating the flush sequence in hardware.

Add debugfs support for reporting the state of the flush, as well as
runtime disabling it.

And modify the spectre_v2 sysfs file to report the state of the
software flush.
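
[Editor's aside, not part of the patch: the flush sequence in the diff below hides its branch behind a raw word, `#define BCCTR_FLUSH .long 0x4c400420`. That word decodes as a `bcctr` with BO=2, BI=0 (a conditional branch-to-count-register form, which is what lets the loop occupy count cache entries), and the encoding can be cross-checked against the Power ISA XL-form field layout. `encode_bcctr` is a hypothetical helper written only for this illustration.]

```python
# Power ISA XL-form encoding of "bcctr BO,BI,BH":
# primary opcode 19, extended opcode 528 in bits 21-30, LK bit clear.
def encode_bcctr(bo: int, bi: int, bh: int) -> int:
    return (19 << 26) | (bo << 21) | (bi << 16) | (bh << 11) | (528 << 1)

# BO=2, BI=0: the conditional bcctr form the flush sequence repeats
BCCTR_FLUSH = encode_bcctr(2, 0, 0)
print(hex(BCCTR_FLUSH))  # -> 0x4c400420
```

The result matches the `BCCTR_FLUSH` constant used in the `flush_count_cache` sequence added to entry_64.S below.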

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/include/asm/asm-prototypes.h    |   21 +++++
 arch/powerpc/include/asm/security_features.h |    1 
 arch/powerpc/kernel/entry_64.S               |   54 ++++++++++++++
 arch/powerpc/kernel/security.c               |   98 +++++++++++++++++++++++++--
 4 files changed, 169 insertions(+), 5 deletions(-)
 create mode 100644 arch/powerpc/include/asm/asm-prototypes.h

--- /dev/null
+++ b/arch/powerpc/include/asm/asm-prototypes.h
@@ -0,0 +1,21 @@
+#ifndef _ASM_POWERPC_ASM_PROTOTYPES_H
+#define _ASM_POWERPC_ASM_PROTOTYPES_H
+/*
+ * This file is for prototypes of C functions that are only called
+ * from asm, and any associated variables.
+ *
+ * Copyright 2016, Daniel Axtens, IBM Corporation.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ */
+
+/* Patch sites */
+extern s32 patch__call_flush_count_cache;
+extern s32 patch__flush_count_cache_return;
+
+extern long flush_count_cache;
+
+#endif /* _ASM_POWERPC_ASM_PROTOTYPES_H */
--- a/arch/powerpc/include/asm/security_features.h
+++ b/arch/powerpc/include/asm/security_features.h
@@ -22,6 +22,7 @@ enum stf_barrier_type {
 
 void setup_stf_barrier(void);
 void do_stf_barrier_fixups(enum stf_barrier_type types);
+void setup_count_cache_flush(void);
 
 static inline void security_ftr_set(unsigned long feature)
 {
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -25,6 +25,7 @@
 #include <asm/page.h>
 #include <asm/mmu.h>
 #include <asm/thread_info.h>
+#include <asm/code-patching-asm.h>
 #include <asm/ppc_asm.h>
 #include <asm/asm-offsets.h>
 #include <asm/cputable.h>
@@ -450,6 +451,57 @@ _GLOBAL(ret_from_kernel_thread)
 	li	r3,0
 	b	.Lsyscall_exit
 
+#ifdef CONFIG_PPC_BOOK3S_64
+
+#define FLUSH_COUNT_CACHE	\
+1:	nop;			\
+	patch_site 1b, patch__call_flush_count_cache
+
+
+#define BCCTR_FLUSH	.long 0x4c400420
+
+.macro nops number
+	.rept \number
+	nop
+	.endr
+.endm
+
+.balign 32
+.global flush_count_cache
+flush_count_cache:
+	/* Save LR into r9 */
+	mflr	r9
+
+	.rept 64
+	bl	.+4
+	.endr
+	b	1f
+	nops	6
+
+	.balign 32
+	/* Restore LR */
+1:	mtlr	r9
+	li	r9,0x7fff
+	mtctr	r9
+
+	BCCTR_FLUSH
+
+2:	nop
+	patch_site 2b patch__flush_count_cache_return
+
+	nops	3
+
+	.rept 278
+	.balign 32
+	BCCTR_FLUSH
+	nops	7
+	.endr
+
+	blr
+#else
+#define FLUSH_COUNT_CACHE
+#endif /* CONFIG_PPC_BOOK3S_64 */
+
 /*
  * This routine switches between two different tasks.  The process
  * state of one is saved on its kernel stack.  Then the state
@@ -513,6 +565,8 @@ BEGIN_FTR_SECTION
 END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
 #endif
 
+	FLUSH_COUNT_CACHE
+
 #ifdef CONFIG_SMP
 	/* We need a sync somewhere here to make sure that if the
 	 * previous task gets rescheduled on another CPU, it sees all
--- a/arch/powerpc/kernel/security.c
+++ b/arch/powerpc/kernel/security.c
@@ -10,12 +10,21 @@
 #include <linux/seq_buf.h>
 
 #include <asm/debug.h>
+#include <asm/asm-prototypes.h>
+#include <asm/code-patching.h>
 #include <asm/security_features.h>
 #include <asm/setup.h>
 
 
 unsigned long powerpc_security_features __read_mostly = SEC_FTR_DEFAULT;
 
+enum count_cache_flush_type {
+	COUNT_CACHE_FLUSH_NONE	= 0x1,
+	COUNT_CACHE_FLUSH_SW	= 0x2,
+	COUNT_CACHE_FLUSH_HW	= 0x4,
+};
+static enum count_cache_flush_type count_cache_flush_type;
+
 bool barrier_nospec_enabled;
 static bool no_nospec;
 
@@ -160,17 +169,29 @@ ssize_t cpu_show_spectre_v2(struct devic
 	bcs = security_ftr_enabled(SEC_FTR_BCCTRL_SERIALISED);
 	ccd = security_ftr_enabled(SEC_FTR_COUNT_CACHE_DISABLED);
 
-	if (bcs || ccd) {
+	if (bcs || ccd || count_cache_flush_type != COUNT_CACHE_FLUSH_NONE) {
+		bool comma = false;
 		seq_buf_printf(&s, "Mitigation: ");
 
-		if (bcs)
+		if (bcs) {
 			seq_buf_printf(&s, "Indirect branch serialisation (kernel only)");
+			comma = true;
+		}
+
+		if (ccd) {
+			if (comma)
+				seq_buf_printf(&s, ", ");
+			seq_buf_printf(&s, "Indirect branch cache disabled");
+			comma = true;
+		}
 
-		if (bcs && ccd)
+		if (comma)
 			seq_buf_printf(&s, ", ");
 
-		if (ccd)
-			seq_buf_printf(&s, "Indirect branch cache disabled");
+		seq_buf_printf(&s, "Software count cache flush");
+
+		if (count_cache_flush_type == COUNT_CACHE_FLUSH_HW)
+			seq_buf_printf(&s, "(hardware accelerated)");
 	} else
 		seq_buf_printf(&s, "Vulnerable");
 
@@ -325,4 +346,71 @@ static __init int stf_barrier_debugfs_in
 }
 device_initcall(stf_barrier_debugfs_init);
 #endif /* CONFIG_DEBUG_FS */
+
+static void toggle_count_cache_flush(bool enable)
+{
+	if (!enable || !security_ftr_enabled(SEC_FTR_FLUSH_COUNT_CACHE)) {
+		patch_instruction_site(&patch__call_flush_count_cache, PPC_INST_NOP);
+		count_cache_flush_type = COUNT_CACHE_FLUSH_NONE;
+		pr_info("count-cache-flush: software flush disabled.\n");
+		return;
+	}
+
+	patch_branch_site(&patch__call_flush_count_cache,
+			  (u64)&flush_count_cache, BRANCH_SET_LINK);
+
+	if (!security_ftr_enabled(SEC_FTR_BCCTR_FLUSH_ASSIST)) {
+		count_cache_flush_type = COUNT_CACHE_FLUSH_SW;
+		pr_info("count-cache-flush: full software flush sequence enabled.\n");
+		return;
+	}
+
+	patch_instruction_site(&patch__flush_count_cache_return, PPC_INST_BLR);
+	count_cache_flush_type = COUNT_CACHE_FLUSH_HW;
+	pr_info("count-cache-flush: hardware assisted flush sequence enabled\n");
+}
+
+void setup_count_cache_flush(void)
+{
+	toggle_count_cache_flush(true);
+}
+
+#ifdef CONFIG_DEBUG_FS
+static int count_cache_flush_set(void *data, u64 val)
+{
+	bool enable;
+
+	if (val == 1)
+		enable = true;
+	else if (val == 0)
+		enable = false;
+	else
+		return -EINVAL;
+
+	toggle_count_cache_flush(enable);
+
+	return 0;
+}
+
+static int count_cache_flush_get(void *data, u64 *val)
+{
+	if (count_cache_flush_type == COUNT_CACHE_FLUSH_NONE)
+		*val = 0;
+	else
+		*val = 1;
+
+	return 0;
+}
+
+DEFINE_SIMPLE_ATTRIBUTE(fops_count_cache_flush, count_cache_flush_get,
+			count_cache_flush_set, "%llu\n");
+
+static __init int count_cache_flush_debugfs_init(void)
+{
+	debugfs_create_file("count_cache_flush", 0600, powerpc_debugfs_root,
+			    NULL, &fops_count_cache_flush);
+	return 0;
+}
+device_initcall(count_cache_flush_debugfs_init);
+#endif /* CONFIG_DEBUG_FS */
 #endif /* CONFIG_PPC_BOOK3S_64 */



* Patch "powerpc/64s: Enable barrier_nospec based on firmware settings" has been added to the 4.4-stable tree
  2019-04-21 14:20   ` Michael Ellerman
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, gregkh, linuxppc-dev, mpe,
	msuchanek, npiggin
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc/64s: Enable barrier_nospec based on firmware settings

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-64s-enable-barrier_nospec-based-on-firmware-settings.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:20:13 +1000
Subject: powerpc/64s: Enable barrier_nospec based on firmware settings
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-29-mpe@ellerman.id.au>

From: Michal Suchanek <msuchanek@suse.de>

commit cb3d6759a93c6d0aea1c10deb6d00e111c29c19c upstream.

Check what firmware told us and enable/disable the barrier_nospec as
appropriate.

We err on the side of enabling the barrier, as it's a no-op on older
systems, see the comment for more detail.
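
For illustration, the firmware-driven decision described above can be sketched in plain C. The flag bit values and function name below are hypothetical stand-ins, not the kernel's security_ftr_enabled() API; the point is only the two-flag AND, with SPEC_BAR_ORI31 deliberately left out of the check per the reasoning in the patch comment:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative feature bits; the real kernel uses SEC_FTR_* flags
 * queried via security_ftr_enabled().
 */
#define FTR_FAVOUR_SECURITY   0x1UL
#define FTR_BNDS_CHK_SPEC_BAR 0x2UL

/* Enable the barrier only when firmware both favours security and
 * reports the bounds-check speculation barrier feature. ORI31 is not
 * consulted, so a system with updated host firmware but an old
 * hypervisor still gets the barrier.
 */
static bool should_enable_barrier_nospec(unsigned long ftrs)
{
	return (ftrs & FTR_FAVOUR_SECURITY) &&
	       (ftrs & FTR_BNDS_CHK_SPEC_BAR);
}
```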

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/include/asm/setup.h       |    1 
 arch/powerpc/kernel/security.c         |   59 +++++++++++++++++++++++++++++++++
 arch/powerpc/platforms/powernv/setup.c |    1 
 arch/powerpc/platforms/pseries/setup.c |    1 
 4 files changed, 62 insertions(+)

--- a/arch/powerpc/include/asm/setup.h
+++ b/arch/powerpc/include/asm/setup.h
@@ -38,6 +38,7 @@ enum l1d_flush_type {
 
 void setup_rfi_flush(enum l1d_flush_type, bool enable);
 void do_rfi_flush_fixups(enum l1d_flush_type types);
+void setup_barrier_nospec(void);
 void do_barrier_nospec_fixups(bool enable);
 extern bool barrier_nospec_enabled;
 
--- a/arch/powerpc/kernel/security.c
+++ b/arch/powerpc/kernel/security.c
@@ -24,6 +24,65 @@ static void enable_barrier_nospec(bool e
 	do_barrier_nospec_fixups(enable);
 }
 
+void setup_barrier_nospec(void)
+{
+	bool enable;
+
+	/*
+	 * It would make sense to check SEC_FTR_SPEC_BAR_ORI31 below as well.
+	 * But there's a good reason not to. The two flags we check below are
+	 * both are enabled by default in the kernel, so if the hcall is not
+	 * functional they will be enabled.
+	 * On a system where the host firmware has been updated (so the ori
+	 * functions as a barrier), but on which the hypervisor (KVM/Qemu) has
+	 * not been updated, we would like to enable the barrier. Dropping the
+	 * check for SEC_FTR_SPEC_BAR_ORI31 achieves that. The only downside is
+	 * we potentially enable the barrier on systems where the host firmware
+	 * is not updated, but that's harmless as it's a no-op.
+	 */
+	enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) &&
+		 security_ftr_enabled(SEC_FTR_BNDS_CHK_SPEC_BAR);
+
+	enable_barrier_nospec(enable);
+}
+
+#ifdef CONFIG_DEBUG_FS
+static int barrier_nospec_set(void *data, u64 val)
+{
+	switch (val) {
+	case 0:
+	case 1:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	if (!!val == !!barrier_nospec_enabled)
+		return 0;
+
+	enable_barrier_nospec(!!val);
+
+	return 0;
+}
+
+static int barrier_nospec_get(void *data, u64 *val)
+{
+	*val = barrier_nospec_enabled ? 1 : 0;
+	return 0;
+}
+
+DEFINE_SIMPLE_ATTRIBUTE(fops_barrier_nospec,
+			barrier_nospec_get, barrier_nospec_set, "%llu\n");
+
+static __init int barrier_nospec_debugfs_init(void)
+{
+	debugfs_create_file("barrier_nospec", 0600, powerpc_debugfs_root, NULL,
+			    &fops_barrier_nospec);
+	return 0;
+}
+device_initcall(barrier_nospec_debugfs_init);
+#endif /* CONFIG_DEBUG_FS */
+
 ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr, char *buf)
 {
 	bool thread_priv;
--- a/arch/powerpc/platforms/powernv/setup.c
+++ b/arch/powerpc/platforms/powernv/setup.c
@@ -123,6 +123,7 @@ static void pnv_setup_rfi_flush(void)
 		  security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV));
 
 	setup_rfi_flush(type, enable);
+	setup_barrier_nospec();
 }
 
 static void __init pnv_setup_arch(void)
--- a/arch/powerpc/platforms/pseries/setup.c
+++ b/arch/powerpc/platforms/pseries/setup.c
@@ -574,6 +574,7 @@ void pseries_setup_rfi_flush(void)
 		 security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR);
 
 	setup_rfi_flush(types, enable);
+	setup_barrier_nospec();
 }
 
 static void __init pSeries_setup_arch(void)


Patches currently in stable-queue which might be from mpe@ellerman.id.au are

queue-4.4/powerpc-64s-add-support-for-a-store-forwarding-barrier-at-kernel-entry-exit.patch
queue-4.4/powerpc-64-make-stf-barrier-ppc_book3s_64-specific.patch
queue-4.4/powerpc-pseries-set-or-clear-security-feature-flags.patch
queue-4.4/powerpc-fsl-fix-spectre_v2-mitigations-reporting.patch
queue-4.4/powerpc-64s-patch-barrier_nospec-in-modules.patch
queue-4.4/powerpc-pseries-support-firmware-disable-of-rfi-flush.patch
queue-4.4/powerpc-rfi-flush-call-setup_rfi_flush-after-lpm-migration.patch
queue-4.4/powerpc-pseries-query-hypervisor-for-count-cache-flush-settings.patch
queue-4.4/powerpc-powernv-set-or-clear-security-feature-flags.patch
queue-4.4/powerpc-64s-add-support-for-software-count-cache-flush.patch
queue-4.4/powerpc64s-show-ori31-availability-in-spectre_v1-sysfs-file-not-v2.patch
queue-4.4/powerpc-fsl-flush-the-branch-predictor-at-each-kernel-entry-64bit.patch
queue-4.4/powerpc-fsl-update-spectre-v2-reporting.patch
queue-4.4/powerpc-64s-wire-up-cpu_show_spectre_v2.patch
queue-4.4/powerpc-64-make-meltdown-reporting-book3s-64-specific.patch
queue-4.4/powerpc-rfi-flush-make-it-possible-to-call-setup_rfi_flush-again.patch
queue-4.4/powerpc-64s-add-support-for-ori-barrier_nospec-patching.patch
queue-4.4/powerpc-use-barrier_nospec-in-copy_from_user.patch
queue-4.4/powerpc-64s-fix-section-mismatch-warnings-from-setup_rfi_flush.patch
queue-4.4/powerpc-avoid-code-patching-freed-init-sections.patch
queue-4.4/powerpc-fsl-add-macro-to-flush-the-branch-predictor.patch
queue-4.4/powerpc-xmon-add-rfi-flush-related-fields-to-paca-dump.patch
queue-4.4/powerpc-fsl-add-barrier_nospec-implementation-for-nxp-powerpc-book3e.patch
queue-4.4/powerpc-security-fix-spectre_v2-reporting.patch
queue-4.4/powerpc-add-security-feature-flags-for-spectre-meltdown.patch
queue-4.4/powerpc-powernv-use-the-security-flags-in-pnv_setup_rfi_flush.patch
queue-4.4/powerpc-64-disable-the-speculation-barrier-from-the-command-line.patch
queue-4.4/powerpc-fsl-fix-the-flush-of-branch-predictor.patch
queue-4.4/powerpc-pseries-use-the-security-flags-in-pseries_setup_rfi_flush.patch
queue-4.4/powerpc-64-add-config_ppc_barrier_nospec.patch
queue-4.4/powerpc-64s-move-cpu_show_meltdown.patch
queue-4.4/powerpc-64-use-barrier_nospec-in-syscall-entry.patch
queue-4.4/powerpc-fsl-add-nospectre_v2-command-line-argument.patch
queue-4.4/powerpc-64s-add-new-security-feature-flags-for-count-cache-flush.patch
queue-4.4/powerpc-fsl-add-infrastructure-to-fixup-branch-predictor-flush.patch
queue-4.4/powerpc-rfi-flush-differentiate-enabled-and-patched-flush-types.patch
queue-4.4/powerpc-64s-enhance-the-information-in-cpu_show_spectre_v1.patch
queue-4.4/powerpc-64-call-setup_barrier_nospec-from-setup_arch.patch
queue-4.4/powerpc-rfi-flush-always-enable-fallback-flush-on-pseries.patch
queue-4.4/powerpc-64s-improve-rfi-l1-d-cache-flush-fallback.patch
queue-4.4/powerpc-asm-add-a-patch_site-macro-helpers-for-patching-instructions.patch
queue-4.4/powerpc-pseries-add-new-h_get_cpu_characteristics-flags.patch
queue-4.4/powerpc-64s-enable-barrier_nospec-based-on-firmware-settings.patch
queue-4.4/powerpc-powernv-support-firmware-disable-of-rfi-flush.patch
queue-4.4/powerpc-rfi-flush-move-the-logic-to-avoid-a-redo-into-the-debugfs-code.patch
queue-4.4/powerpc-powernv-query-firmware-for-count-cache-flush-settings.patch
queue-4.4/powerpc-64s-wire-up-cpu_show_spectre_v1.patch
queue-4.4/powerpc-64s-add-barrier_nospec.patch
queue-4.4/powerpc-64s-enhance-the-information-in-cpu_show_meltdown.patch
queue-4.4/powerpc-move-default-security-feature-flags.patch
queue-4.4/powerpc-pseries-fix-clearing-of-security-feature-flags.patch
queue-4.4/powerpc-pseries-restore-default-security-feature-flags-on-setup.patch


* Patch "powerpc/64: Make stf barrier PPC_BOOK3S_64 specific." has been added to the 4.4-stable tree
  2019-04-21 14:20   ` Michael Ellerman
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, gregkh, linuxppc-dev, mpe,
	msuchanek, npiggin
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc/64: Make stf barrier PPC_BOOK3S_64 specific.

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-64-make-stf-barrier-ppc_book3s_64-specific.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:20:19 +1000
Subject: powerpc/64: Make stf barrier PPC_BOOK3S_64 specific.
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-35-mpe@ellerman.id.au>

From: Diana Craciun <diana.craciun@nxp.com>

commit 6453b532f2c8856a80381e6b9a1f5ea2f12294df upstream.

NXP Book3E platforms are not vulnerable to speculative store
bypass, so make the mitigations PPC_BOOK3S_64 specific.

Signed-off-by: Diana Craciun <diana.craciun@nxp.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/kernel/security.c |    2 ++
 1 file changed, 2 insertions(+)

--- a/arch/powerpc/kernel/security.c
+++ b/arch/powerpc/kernel/security.c
@@ -177,6 +177,7 @@ ssize_t cpu_show_spectre_v2(struct devic
 	return s.len;
 }
 
+#ifdef CONFIG_PPC_BOOK3S_64
 /*
  * Store-forwarding barrier support.
  */
@@ -322,3 +323,4 @@ static __init int stf_barrier_debugfs_in
 }
 device_initcall(stf_barrier_debugfs_init);
 #endif /* CONFIG_DEBUG_FS */
+#endif /* CONFIG_PPC_BOOK3S_64 */




* Patch "powerpc/64s: Enhance the information in cpu_show_spectre_v1()" has been added to the 4.4-stable tree
  2019-04-21 14:20   ` Michael Ellerman
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, gregkh, linuxppc-dev, mpe,
	msuchanek, npiggin
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc/64s: Enhance the information in cpu_show_spectre_v1()

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-64s-enhance-the-information-in-cpu_show_spectre_v1.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:20:16 +1000
Subject: powerpc/64s: Enhance the information in cpu_show_spectre_v1()
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-32-mpe@ellerman.id.au>

From: Michal Suchanek <msuchanek@suse.de>

commit a377514519b9a20fa1ea9adddbb4129573129cef upstream.

We now have barrier_nospec as a mitigation, so print it in
cpu_show_spectre_v1() when enabled.

Signed-off-by: Michal Suchanek <msuchanek@suse.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/kernel/security.c |    3 +++
 1 file changed, 3 insertions(+)

--- a/arch/powerpc/kernel/security.c
+++ b/arch/powerpc/kernel/security.c
@@ -121,6 +121,9 @@ ssize_t cpu_show_spectre_v1(struct devic
 	if (!security_ftr_enabled(SEC_FTR_BNDS_CHK_SPEC_BAR))
 		return sprintf(buf, "Not affected\n");
 
+	if (barrier_nospec_enabled)
+		return sprintf(buf, "Mitigation: __user pointer sanitization\n");
+
 	return sprintf(buf, "Vulnerable\n");
 }
 




* Patch "powerpc/64s: Fix section mismatch warnings from setup_rfi_flush()" has been added to the 4.4-stable tree
  2019-04-21 14:20   ` Michael Ellerman
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, gregkh, linuxppc-dev, mpe,
	msuchanek, npiggin
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc/64s: Fix section mismatch warnings from setup_rfi_flush()

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-64s-fix-section-mismatch-warnings-from-setup_rfi_flush.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:20:08 +1000
Subject: powerpc/64s: Fix section mismatch warnings from setup_rfi_flush()
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-24-mpe@ellerman.id.au>

From: Michael Ellerman <mpe@ellerman.id.au>

commit 501a78cbc17c329fabf8e9750a1e9ab810c88a0e upstream.

The recent LPM changes to setup_rfi_flush() are causing some section
mismatch warnings because we removed the __init annotation on
setup_rfi_flush():

  The function setup_rfi_flush() references
  the function __init ppc64_bolted_size().
  the function __init memblock_alloc_base().

The references are actually in init_fallback_flush(), but that is
inlined into setup_rfi_flush().

These references are safe because:
 - only pseries calls setup_rfi_flush() at runtime
 - pseries always passes L1D_FLUSH_FALLBACK at boot
 - so the fallback flush area will always be allocated
 - so the check in init_fallback_flush() will always return early:
   /* Only allocate the fallback flush area once (at boot time). */
   if (l1d_flush_fallback_area)
   	return;

 - and therefore we won't actually call the freed init routines.

We should rework the code to make it safer by default rather than
relying on the above, but for now as a quick-fix just add a __ref
annotation to squash the warning.
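
The guard the commit message relies on can be sketched as a small userspace C analogue (names are illustrative, not the kernel's; malloc() stands in for memblock_alloc_base()): because the fallback area is allocated exactly once at boot, later runtime calls return early and never reach the freed __init allocator.

```c
#include <stdlib.h>

/* Stand-ins for the kernel state; illustrative only. */
static void *l1d_flush_fallback_area;
static int alloc_calls;

/* Sketch of init_fallback_flush(): only allocate the fallback flush
 * area once (at boot time); subsequent calls return early, so the
 * allocation path is never taken again at runtime.
 */
static void init_fallback_flush_sketch(void)
{
	if (l1d_flush_fallback_area)	/* already set up at boot */
		return;

	alloc_calls++;
	l1d_flush_fallback_area = malloc(4096);	/* stand-in for the __init allocator */
}
```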

Fixes: abf110f3e1ce ("powerpc/rfi-flush: Make it possible to call setup_rfi_flush() again")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/kernel/setup_64.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -882,7 +882,7 @@ void rfi_flush_enable(bool enable)
 	rfi_flush = enable;
 }
 
-static void init_fallback_flush(void)
+static void __ref init_fallback_flush(void)
 {
 	u64 l1d_size, limit;
 	int cpu;




* Patch "powerpc/64s: Improve RFI L1-D cache flush fallback" has been added to the 4.4-stable tree
  2019-04-21 14:19   ` Michael Ellerman
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, gregkh, linuxppc-dev, mpe,
	msuchanek, npiggin
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc/64s: Improve RFI L1-D cache flush fallback

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-64s-improve-rfi-l1-d-cache-flush-fallback.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:19:47 +1000
Subject: powerpc/64s: Improve RFI L1-D cache flush fallback
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-3-mpe@ellerman.id.au>

From: Nicholas Piggin <npiggin@gmail.com>

commit bdcb1aefc5b3f7d0f1dc8b02673602bca2ff7a4b upstream.

The fallback RFI flush is used when firmware does not provide a way
to flush the cache. It's a "displacement flush" that evicts useful
data by displacing it with an uninteresting buffer.

The flush has to take care to work with implementation specific cache
replacement policies, so the recipe has been in flux. The initial
slow but conservative approach is to touch all lines of a congruence
class, with dependencies between each load. It has since been
determined that a linear pattern of loads without dependencies is
sufficient, and is significantly faster.

Measuring the speed of a null syscall with RFI fallback flush enabled
gives the relative improvement:

P8 - 1.83x
P9 - 1.75x

The flush also becomes simpler and more adaptable to different cache
geometries.
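
The displacement flush the commit describes (linear, independent loads touching one load per cache line) can be sketched in userspace C. The buffer and line sizes here are illustrative assumptions; the kernel derives the real geometry from firmware and does the actual loop in assembly:

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative geometry; not read from firmware as the kernel does. */
#define L1D_SIZE (64 * 1024)
#define L1D_LINE 128

/* Evict L1-D contents by touching one load per cache line, linearly,
 * across a buffer at least the size of the cache. The loads carry no
 * chained dependencies, which is the faster recipe the commit adopts.
 */
static uint64_t displacement_flush(const volatile uint8_t *area)
{
	uint64_t sum = 0;
	size_t off;

	for (off = 0; off < L1D_SIZE; off += L1D_LINE)
		sum += area[off];	/* one independent load per line */

	return sum;	/* returned so the loads are not optimised away */
}
```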

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/include/asm/paca.h      |    3 -
 arch/powerpc/kernel/asm-offsets.c    |    3 -
 arch/powerpc/kernel/exceptions-64s.S |   76 ++++++++++++++++-------------------
 arch/powerpc/kernel/setup_64.c       |   13 -----
 arch/powerpc/xmon/xmon.c             |    2 
 5 files changed, 39 insertions(+), 58 deletions(-)

--- a/arch/powerpc/include/asm/paca.h
+++ b/arch/powerpc/include/asm/paca.h
@@ -199,8 +199,7 @@ struct paca_struct {
 	 */
 	u64 exrfi[13] __aligned(0x80);
 	void *rfi_flush_fallback_area;
-	u64 l1d_flush_congruence;
-	u64 l1d_flush_sets;
+	u64 l1d_flush_size;
 #endif
 };
 
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -245,8 +245,7 @@ int main(void)
 	DEFINE(PACA_IN_MCE, offsetof(struct paca_struct, in_mce));
 	DEFINE(PACA_RFI_FLUSH_FALLBACK_AREA, offsetof(struct paca_struct, rfi_flush_fallback_area));
 	DEFINE(PACA_EXRFI, offsetof(struct paca_struct, exrfi));
-	DEFINE(PACA_L1D_FLUSH_CONGRUENCE, offsetof(struct paca_struct, l1d_flush_congruence));
-	DEFINE(PACA_L1D_FLUSH_SETS, offsetof(struct paca_struct, l1d_flush_sets));
+	DEFINE(PACA_L1D_FLUSH_SIZE, offsetof(struct paca_struct, l1d_flush_size));
 #endif
 	DEFINE(PACAHWCPUID, offsetof(struct paca_struct, hw_cpu_id));
 	DEFINE(PACAKEXECSTATE, offsetof(struct paca_struct, kexec_state));
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -1571,39 +1571,37 @@ rfi_flush_fallback:
 	std	r9,PACA_EXRFI+EX_R9(r13)
 	std	r10,PACA_EXRFI+EX_R10(r13)
 	std	r11,PACA_EXRFI+EX_R11(r13)
-	std	r12,PACA_EXRFI+EX_R12(r13)
-	std	r8,PACA_EXRFI+EX_R13(r13)
 	mfctr	r9
 	ld	r10,PACA_RFI_FLUSH_FALLBACK_AREA(r13)
-	ld	r11,PACA_L1D_FLUSH_SETS(r13)
-	ld	r12,PACA_L1D_FLUSH_CONGRUENCE(r13)
-	/*
-	 * The load adresses are at staggered offsets within cachelines,
-	 * which suits some pipelines better (on others it should not
-	 * hurt).
-	 */
-	addi	r12,r12,8
+	ld	r11,PACA_L1D_FLUSH_SIZE(r13)
+	srdi	r11,r11,(7 + 3) /* 128 byte lines, unrolled 8x */
 	mtctr	r11
 	DCBT_STOP_ALL_STREAM_IDS(r11) /* Stop prefetch streams */
 
 	/* order ld/st prior to dcbt stop all streams with flushing */
 	sync
-1:	li	r8,0
-	.rept	8 /* 8-way set associative */
-	ldx	r11,r10,r8
-	add	r8,r8,r12
-	xor	r11,r11,r11	// Ensure r11 is 0 even if fallback area is not
-	add	r8,r8,r11	// Add 0, this creates a dependency on the ldx
-	.endr
-	addi	r10,r10,128 /* 128 byte cache line */
+
+	/*
+	 * The load adresses are at staggered offsets within cachelines,
+	 * which suits some pipelines better (on others it should not
+	 * hurt).
+	 */
+1:
+	ld	r11,(0x80 + 8)*0(r10)
+	ld	r11,(0x80 + 8)*1(r10)
+	ld	r11,(0x80 + 8)*2(r10)
+	ld	r11,(0x80 + 8)*3(r10)
+	ld	r11,(0x80 + 8)*4(r10)
+	ld	r11,(0x80 + 8)*5(r10)
+	ld	r11,(0x80 + 8)*6(r10)
+	ld	r11,(0x80 + 8)*7(r10)
+	addi	r10,r10,0x80*8
 	bdnz	1b
 
 	mtctr	r9
 	ld	r9,PACA_EXRFI+EX_R9(r13)
 	ld	r10,PACA_EXRFI+EX_R10(r13)
 	ld	r11,PACA_EXRFI+EX_R11(r13)
-	ld	r12,PACA_EXRFI+EX_R12(r13)
-	ld	r8,PACA_EXRFI+EX_R13(r13)
 	GET_SCRATCH0(r13);
 	rfid
 
@@ -1614,39 +1612,37 @@ hrfi_flush_fallback:
 	std	r9,PACA_EXRFI+EX_R9(r13)
 	std	r10,PACA_EXRFI+EX_R10(r13)
 	std	r11,PACA_EXRFI+EX_R11(r13)
-	std	r12,PACA_EXRFI+EX_R12(r13)
-	std	r8,PACA_EXRFI+EX_R13(r13)
 	mfctr	r9
 	ld	r10,PACA_RFI_FLUSH_FALLBACK_AREA(r13)
-	ld	r11,PACA_L1D_FLUSH_SETS(r13)
-	ld	r12,PACA_L1D_FLUSH_CONGRUENCE(r13)
-	/*
-	 * The load adresses are at staggered offsets within cachelines,
-	 * which suits some pipelines better (on others it should not
-	 * hurt).
-	 */
-	addi	r12,r12,8
+	ld	r11,PACA_L1D_FLUSH_SIZE(r13)
+	srdi	r11,r11,(7 + 3) /* 128 byte lines, unrolled 8x */
 	mtctr	r11
 	DCBT_STOP_ALL_STREAM_IDS(r11) /* Stop prefetch streams */
 
 	/* order ld/st prior to dcbt stop all streams with flushing */
 	sync
-1:	li	r8,0
-	.rept	8 /* 8-way set associative */
-	ldx	r11,r10,r8
-	add	r8,r8,r12
-	xor	r11,r11,r11	// Ensure r11 is 0 even if fallback area is not
-	add	r8,r8,r11	// Add 0, this creates a dependency on the ldx
-	.endr
-	addi	r10,r10,128 /* 128 byte cache line */
+
+	/*
+	 * The load adresses are at staggered offsets within cachelines,
+	 * which suits some pipelines better (on others it should not
+	 * hurt).
+	 */
+1:
+	ld	r11,(0x80 + 8)*0(r10)
+	ld	r11,(0x80 + 8)*1(r10)
+	ld	r11,(0x80 + 8)*2(r10)
+	ld	r11,(0x80 + 8)*3(r10)
+	ld	r11,(0x80 + 8)*4(r10)
+	ld	r11,(0x80 + 8)*5(r10)
+	ld	r11,(0x80 + 8)*6(r10)
+	ld	r11,(0x80 + 8)*7(r10)
+	addi	r10,r10,0x80*8
 	bdnz	1b
 
 	mtctr	r9
 	ld	r9,PACA_EXRFI+EX_R9(r13)
 	ld	r10,PACA_EXRFI+EX_R10(r13)
 	ld	r11,PACA_EXRFI+EX_R11(r13)
-	ld	r12,PACA_EXRFI+EX_R12(r13)
-	ld	r8,PACA_EXRFI+EX_R13(r13)
 	GET_SCRATCH0(r13);
 	hrfid
 
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -902,19 +902,8 @@ static void init_fallback_flush(void)
 	memset(l1d_flush_fallback_area, 0, l1d_size * 2);
 
 	for_each_possible_cpu(cpu) {
-		/*
-		 * The fallback flush is currently coded for 8-way
-		 * associativity. Different associativity is possible, but it
-		 * will be treated as 8-way and may not evict the lines as
-		 * effectively.
-		 *
-		 * 128 byte lines are mandatory.
-		 */
-		u64 c = l1d_size / 8;
-
 		paca[cpu].rfi_flush_fallback_area = l1d_flush_fallback_area;
-		paca[cpu].l1d_flush_congruence = c;
-		paca[cpu].l1d_flush_sets = c / 128;
+		paca[cpu].l1d_flush_size = l1d_size;
 	}
 }
 
--- a/arch/powerpc/xmon/xmon.c
+++ b/arch/powerpc/xmon/xmon.c
@@ -2146,8 +2146,6 @@ static void dump_one_paca(int cpu)
 		printf(" slb_cache[%d]:        = 0x%016lx\n", i, p->slb_cache[i]);
 
 	DUMP(p, rfi_flush_fallback_area, "px");
-	DUMP(p, l1d_flush_congruence, "llx");
-	DUMP(p, l1d_flush_sets, "llx");
 #endif
 	DUMP(p, dscr_default, "llx");
 #ifdef CONFIG_PPC_BOOK3E


Patches currently in stable-queue which might be from mpe@ellerman.id.au are

queue-4.4/powerpc-64s-add-support-for-a-store-forwarding-barrier-at-kernel-entry-exit.patch
queue-4.4/powerpc-64-make-stf-barrier-ppc_book3s_64-specific.patch
queue-4.4/powerpc-pseries-set-or-clear-security-feature-flags.patch
queue-4.4/powerpc-fsl-fix-spectre_v2-mitigations-reporting.patch
queue-4.4/powerpc-64s-patch-barrier_nospec-in-modules.patch
queue-4.4/powerpc-pseries-support-firmware-disable-of-rfi-flush.patch
queue-4.4/powerpc-rfi-flush-call-setup_rfi_flush-after-lpm-migration.patch
queue-4.4/powerpc-pseries-query-hypervisor-for-count-cache-flush-settings.patch
queue-4.4/powerpc-powernv-set-or-clear-security-feature-flags.patch
queue-4.4/powerpc-64s-add-support-for-software-count-cache-flush.patch
queue-4.4/powerpc64s-show-ori31-availability-in-spectre_v1-sysfs-file-not-v2.patch
queue-4.4/powerpc-fsl-flush-the-branch-predictor-at-each-kernel-entry-64bit.patch
queue-4.4/powerpc-fsl-update-spectre-v2-reporting.patch
queue-4.4/powerpc-64s-wire-up-cpu_show_spectre_v2.patch
queue-4.4/powerpc-64-make-meltdown-reporting-book3s-64-specific.patch
queue-4.4/powerpc-rfi-flush-make-it-possible-to-call-setup_rfi_flush-again.patch
queue-4.4/powerpc-64s-add-support-for-ori-barrier_nospec-patching.patch
queue-4.4/powerpc-use-barrier_nospec-in-copy_from_user.patch
queue-4.4/powerpc-64s-fix-section-mismatch-warnings-from-setup_rfi_flush.patch
queue-4.4/powerpc-avoid-code-patching-freed-init-sections.patch
queue-4.4/powerpc-fsl-add-macro-to-flush-the-branch-predictor.patch
queue-4.4/powerpc-xmon-add-rfi-flush-related-fields-to-paca-dump.patch
queue-4.4/powerpc-fsl-add-barrier_nospec-implementation-for-nxp-powerpc-book3e.patch
queue-4.4/powerpc-security-fix-spectre_v2-reporting.patch
queue-4.4/powerpc-add-security-feature-flags-for-spectre-meltdown.patch
queue-4.4/powerpc-powernv-use-the-security-flags-in-pnv_setup_rfi_flush.patch
queue-4.4/powerpc-64-disable-the-speculation-barrier-from-the-command-line.patch
queue-4.4/powerpc-fsl-fix-the-flush-of-branch-predictor.patch
queue-4.4/powerpc-pseries-use-the-security-flags-in-pseries_setup_rfi_flush.patch
queue-4.4/powerpc-64-add-config_ppc_barrier_nospec.patch
queue-4.4/powerpc-64s-move-cpu_show_meltdown.patch
queue-4.4/powerpc-64-use-barrier_nospec-in-syscall-entry.patch
queue-4.4/powerpc-fsl-add-nospectre_v2-command-line-argument.patch
queue-4.4/powerpc-64s-add-new-security-feature-flags-for-count-cache-flush.patch
queue-4.4/powerpc-fsl-add-infrastructure-to-fixup-branch-predictor-flush.patch
queue-4.4/powerpc-rfi-flush-differentiate-enabled-and-patched-flush-types.patch
queue-4.4/powerpc-64s-enhance-the-information-in-cpu_show_spectre_v1.patch
queue-4.4/powerpc-64-call-setup_barrier_nospec-from-setup_arch.patch
queue-4.4/powerpc-rfi-flush-always-enable-fallback-flush-on-pseries.patch
queue-4.4/powerpc-64s-improve-rfi-l1-d-cache-flush-fallback.patch
queue-4.4/powerpc-asm-add-a-patch_site-macro-helpers-for-patching-instructions.patch
queue-4.4/powerpc-pseries-add-new-h_get_cpu_characteristics-flags.patch
queue-4.4/powerpc-64s-enable-barrier_nospec-based-on-firmware-settings.patch
queue-4.4/powerpc-powernv-support-firmware-disable-of-rfi-flush.patch
queue-4.4/powerpc-rfi-flush-move-the-logic-to-avoid-a-redo-into-the-debugfs-code.patch
queue-4.4/powerpc-powernv-query-firmware-for-count-cache-flush-settings.patch
queue-4.4/powerpc-64s-wire-up-cpu_show_spectre_v1.patch
queue-4.4/powerpc-64s-add-barrier_nospec.patch
queue-4.4/powerpc-64s-enhance-the-information-in-cpu_show_meltdown.patch
queue-4.4/powerpc-move-default-security-feature-flags.patch
queue-4.4/powerpc-pseries-fix-clearing-of-security-feature-flags.patch
queue-4.4/powerpc-pseries-restore-default-security-feature-flags-on-setup.patch

^ permalink raw reply	[flat|nested] 180+ messages in thread

* Patch "powerpc/64s: Move cpu_show_meltdown()" has been added to the 4.4-stable tree
  2019-04-21 14:19   ` Michael Ellerman
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, gregkh, linuxppc-dev, mpe,
	msuchanek, npiggin
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc/64s: Move cpu_show_meltdown()

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-64s-move-cpu_show_meltdown.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:19:59 +1000
Subject: powerpc/64s: Move cpu_show_meltdown()
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-15-mpe@ellerman.id.au>

From: Michael Ellerman <mpe@ellerman.id.au>

commit 8ad33041563a10b34988800c682ada14b2612533 upstream.

This landed in setup_64.c for no good reason other than we had nowhere
else to put it. Now that we have a security-related file, that is a
better place for it so move it.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/kernel/security.c |   11 +++++++++++
 arch/powerpc/kernel/setup_64.c |    8 --------
 2 files changed, 11 insertions(+), 8 deletions(-)

--- a/arch/powerpc/kernel/security.c
+++ b/arch/powerpc/kernel/security.c
@@ -5,6 +5,8 @@
 // Copyright 2018, Michael Ellerman, IBM Corporation.
 
 #include <linux/kernel.h>
+#include <linux/device.h>
+
 #include <asm/security_features.h>
 
 
@@ -13,3 +15,12 @@ unsigned long powerpc_security_features
 	SEC_FTR_L1D_FLUSH_PR | \
 	SEC_FTR_BNDS_CHK_SPEC_BAR | \
 	SEC_FTR_FAVOUR_SECURITY;
+
+
+ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr, char *buf)
+{
+	if (rfi_flush)
+		return sprintf(buf, "Mitigation: RFI Flush\n");
+
+	return sprintf(buf, "Vulnerable\n");
+}
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -961,12 +961,4 @@ static __init int rfi_flush_debugfs_init
 }
 device_initcall(rfi_flush_debugfs_init);
 #endif
-
-ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr, char *buf)
-{
-	if (rfi_flush)
-		return sprintf(buf, "Mitigation: RFI Flush\n");
-
-	return sprintf(buf, "Vulnerable\n");
-}
 #endif /* CONFIG_PPC_BOOK3S_64 */



^ permalink raw reply	[flat|nested] 180+ messages in thread

* Patch "powerpc/64s: Enhance the information in cpu_show_meltdown()" has been added to the 4.4-stable tree
  2019-04-21 14:20   ` Michael Ellerman
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, gregkh, linuxppc-dev, mpe,
	msuchanek, npiggin
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc/64s: Enhance the information in cpu_show_meltdown()

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-64s-enhance-the-information-in-cpu_show_meltdown.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:20:00 +1000
Subject: powerpc/64s: Enhance the information in cpu_show_meltdown()
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-16-mpe@ellerman.id.au>

From: Michael Ellerman <mpe@ellerman.id.au>

commit ff348355e9c72493947be337bb4fae4fc1a41eba upstream.

Now that we have the security feature flags we can make the
information displayed in the "meltdown" file more informative.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/include/asm/security_features.h |    1 
 arch/powerpc/kernel/security.c               |   30 +++++++++++++++++++++++++--
 2 files changed, 29 insertions(+), 2 deletions(-)

--- a/arch/powerpc/include/asm/security_features.h
+++ b/arch/powerpc/include/asm/security_features.h
@@ -10,6 +10,7 @@
 
 
 extern unsigned long powerpc_security_features;
+extern bool rfi_flush;
 
 static inline void security_ftr_set(unsigned long feature)
 {
--- a/arch/powerpc/kernel/security.c
+++ b/arch/powerpc/kernel/security.c
@@ -6,6 +6,7 @@
 
 #include <linux/kernel.h>
 #include <linux/device.h>
+#include <linux/seq_buf.h>
 
 #include <asm/security_features.h>
 
@@ -19,8 +20,33 @@ unsigned long powerpc_security_features
 
 ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr, char *buf)
 {
-	if (rfi_flush)
-		return sprintf(buf, "Mitigation: RFI Flush\n");
+	bool thread_priv;
+
+	thread_priv = security_ftr_enabled(SEC_FTR_L1D_THREAD_PRIV);
+
+	if (rfi_flush || thread_priv) {
+		struct seq_buf s;
+		seq_buf_init(&s, buf, PAGE_SIZE - 1);
+
+		seq_buf_printf(&s, "Mitigation: ");
+
+		if (rfi_flush)
+			seq_buf_printf(&s, "RFI Flush");
+
+		if (rfi_flush && thread_priv)
+			seq_buf_printf(&s, ", ");
+
+		if (thread_priv)
+			seq_buf_printf(&s, "L1D private per thread");
+
+		seq_buf_printf(&s, "\n");
+
+		return s.len;
+	}
+
+	if (!security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV) &&
+	    !security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR))
+		return sprintf(buf, "Not affected\n");
 
 	return sprintf(buf, "Vulnerable\n");
 }



^ permalink raw reply	[flat|nested] 180+ messages in thread

* Patch "powerpc/64s: Wire up cpu_show_spectre_v1()" has been added to the 4.4-stable tree
  2019-04-21 14:20   ` Michael Ellerman
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, gregkh, linuxppc-dev, mpe,
	msuchanek, npiggin
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc/64s: Wire up cpu_show_spectre_v1()

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-64s-wire-up-cpu_show_spectre_v1.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:20:03 +1000
Subject: powerpc/64s: Wire up cpu_show_spectre_v1()
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-19-mpe@ellerman.id.au>

From: Michael Ellerman <mpe@ellerman.id.au>

commit 56986016cb8cd9050e601831fe89f332b4e3c46e upstream.

Add a definition for cpu_show_spectre_v1() to override the generic
version. Currently this just prints "Not affected" or "Vulnerable"
based on the firmware flag.

Although the kernel does have array_index_nospec() in a few places, we
haven't yet audited all the powerpc code to see where it's necessary,
so for now we don't list that as a mitigation.
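For context, the clamp that array_index_nospec() provides can be sketched as
below. This is a simplified, hypothetical userspace rendering of the
branchless-mask idea, not the kernel's exact implementation; `clamp_index`
and `index_mask_nospec` are invented names, and the sketch assumes an
arithmetic right shift on signed values, as GCC and Clang provide:

```c
#include <limits.h>

/* Produce an all-ones mask when index < size and an all-zeros mask
 * otherwise, using only arithmetic -- no conditional branch that the
 * CPU could speculate past. When index >= size, (size - 1 - index)
 * wraps to a value with the sign bit set, so the shifted result is 0. */
static unsigned long index_mask_nospec(unsigned long index, unsigned long size)
{
	return (unsigned long)(~(long)(index | (size - 1 - index)) >>
			       (sizeof(long) * CHAR_BIT - 1));
}

/* Out-of-bounds indices are forced to 0, which is always safe to use
 * as an array index, even on a mispredicted speculative path. */
static unsigned long clamp_index(unsigned long index, unsigned long size)
{
	return index & index_mask_nospec(index, size);
}
```

An audited array access would then read `arr[clamp_index(i, n)]` instead of
`arr[i]` after a bounds check.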

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/kernel/security.c |    8 ++++++++
 1 file changed, 8 insertions(+)

--- a/arch/powerpc/kernel/security.c
+++ b/arch/powerpc/kernel/security.c
@@ -50,3 +50,11 @@ ssize_t cpu_show_meltdown(struct device
 
 	return sprintf(buf, "Vulnerable\n");
 }
+
+ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr, char *buf)
+{
+	if (!security_ftr_enabled(SEC_FTR_BNDS_CHK_SPEC_BAR))
+		return sprintf(buf, "Not affected\n");
+
+	return sprintf(buf, "Vulnerable\n");
+}



^ permalink raw reply	[flat|nested] 180+ messages in thread

* Patch "powerpc/64s: Wire up cpu_show_spectre_v2()" has been added to the 4.4-stable tree
  2019-04-21 14:20   ` Michael Ellerman
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, gregkh, linuxppc-dev, mpe,
	msuchanek, npiggin
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc/64s: Wire up cpu_show_spectre_v2()

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-64s-wire-up-cpu_show_spectre_v2.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:20:04 +1000
Subject: powerpc/64s: Wire up cpu_show_spectre_v2()
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-20-mpe@ellerman.id.au>

From: Michael Ellerman <mpe@ellerman.id.au>

commit d6fbe1c55c55c6937cbea3531af7da84ab7473c3 upstream.

Add a definition for cpu_show_spectre_v2() to override the generic
version. This has several permutations; though in practice some may
not occur, we cater for any combination.

The most verbose is:

  Mitigation: Indirect branch serialisation (kernel only), Indirect
  branch cache disabled, ori31 speculation barrier enabled

We don't treat the ori31 speculation barrier as a mitigation on its
own, because it has to be *used* by code in order to be a mitigation
and we don't know if userspace is doing that. So if that's all we see
we say:

  Vulnerable, ori31 speculation barrier enabled

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/kernel/security.c |   33 +++++++++++++++++++++++++++++++++
 1 file changed, 33 insertions(+)

--- a/arch/powerpc/kernel/security.c
+++ b/arch/powerpc/kernel/security.c
@@ -58,3 +58,36 @@ ssize_t cpu_show_spectre_v1(struct devic
 
 	return sprintf(buf, "Vulnerable\n");
 }
+
+ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr, char *buf)
+{
+	bool bcs, ccd, ori;
+	struct seq_buf s;
+
+	seq_buf_init(&s, buf, PAGE_SIZE - 1);
+
+	bcs = security_ftr_enabled(SEC_FTR_BCCTRL_SERIALISED);
+	ccd = security_ftr_enabled(SEC_FTR_COUNT_CACHE_DISABLED);
+	ori = security_ftr_enabled(SEC_FTR_SPEC_BAR_ORI31);
+
+	if (bcs || ccd) {
+		seq_buf_printf(&s, "Mitigation: ");
+
+		if (bcs)
+			seq_buf_printf(&s, "Indirect branch serialisation (kernel only)");
+
+		if (bcs && ccd)
+			seq_buf_printf(&s, ", ");
+
+		if (ccd)
+			seq_buf_printf(&s, "Indirect branch cache disabled");
+	} else
+		seq_buf_printf(&s, "Vulnerable");
+
+	if (ori)
+		seq_buf_printf(&s, ", ori31 speculation barrier enabled");
+
+	seq_buf_printf(&s, "\n");
+
+	return s.len;
+}


Patches currently in stable-queue which might be from mpe@ellerman.id.au are

queue-4.4/powerpc-64s-add-support-for-a-store-forwarding-barrier-at-kernel-entry-exit.patch
queue-4.4/powerpc-64-make-stf-barrier-ppc_book3s_64-specific.patch
queue-4.4/powerpc-pseries-set-or-clear-security-feature-flags.patch
queue-4.4/powerpc-fsl-fix-spectre_v2-mitigations-reporting.patch
queue-4.4/powerpc-64s-patch-barrier_nospec-in-modules.patch
queue-4.4/powerpc-pseries-support-firmware-disable-of-rfi-flush.patch
queue-4.4/powerpc-rfi-flush-call-setup_rfi_flush-after-lpm-migration.patch
queue-4.4/powerpc-pseries-query-hypervisor-for-count-cache-flush-settings.patch
queue-4.4/powerpc-powernv-set-or-clear-security-feature-flags.patch
queue-4.4/powerpc-64s-add-support-for-software-count-cache-flush.patch
queue-4.4/powerpc64s-show-ori31-availability-in-spectre_v1-sysfs-file-not-v2.patch
queue-4.4/powerpc-fsl-flush-the-branch-predictor-at-each-kernel-entry-64bit.patch
queue-4.4/powerpc-fsl-update-spectre-v2-reporting.patch
queue-4.4/powerpc-64s-wire-up-cpu_show_spectre_v2.patch
queue-4.4/powerpc-64-make-meltdown-reporting-book3s-64-specific.patch
queue-4.4/powerpc-rfi-flush-make-it-possible-to-call-setup_rfi_flush-again.patch
queue-4.4/powerpc-64s-add-support-for-ori-barrier_nospec-patching.patch
queue-4.4/powerpc-use-barrier_nospec-in-copy_from_user.patch
queue-4.4/powerpc-64s-fix-section-mismatch-warnings-from-setup_rfi_flush.patch
queue-4.4/powerpc-avoid-code-patching-freed-init-sections.patch
queue-4.4/powerpc-fsl-add-macro-to-flush-the-branch-predictor.patch
queue-4.4/powerpc-xmon-add-rfi-flush-related-fields-to-paca-dump.patch
queue-4.4/powerpc-fsl-add-barrier_nospec-implementation-for-nxp-powerpc-book3e.patch
queue-4.4/powerpc-security-fix-spectre_v2-reporting.patch
queue-4.4/powerpc-add-security-feature-flags-for-spectre-meltdown.patch
queue-4.4/powerpc-powernv-use-the-security-flags-in-pnv_setup_rfi_flush.patch
queue-4.4/powerpc-64-disable-the-speculation-barrier-from-the-command-line.patch
queue-4.4/powerpc-fsl-fix-the-flush-of-branch-predictor.patch
queue-4.4/powerpc-pseries-use-the-security-flags-in-pseries_setup_rfi_flush.patch
queue-4.4/powerpc-64-add-config_ppc_barrier_nospec.patch
queue-4.4/powerpc-64s-move-cpu_show_meltdown.patch
queue-4.4/powerpc-64-use-barrier_nospec-in-syscall-entry.patch
queue-4.4/powerpc-fsl-add-nospectre_v2-command-line-argument.patch
queue-4.4/powerpc-64s-add-new-security-feature-flags-for-count-cache-flush.patch
queue-4.4/powerpc-fsl-add-infrastructure-to-fixup-branch-predictor-flush.patch
queue-4.4/powerpc-rfi-flush-differentiate-enabled-and-patched-flush-types.patch
queue-4.4/powerpc-64s-enhance-the-information-in-cpu_show_spectre_v1.patch
queue-4.4/powerpc-64-call-setup_barrier_nospec-from-setup_arch.patch
queue-4.4/powerpc-rfi-flush-always-enable-fallback-flush-on-pseries.patch
queue-4.4/powerpc-64s-improve-rfi-l1-d-cache-flush-fallback.patch
queue-4.4/powerpc-asm-add-a-patch_site-macro-helpers-for-patching-instructions.patch
queue-4.4/powerpc-pseries-add-new-h_get_cpu_characteristics-flags.patch
queue-4.4/powerpc-64s-enable-barrier_nospec-based-on-firmware-settings.patch
queue-4.4/powerpc-powernv-support-firmware-disable-of-rfi-flush.patch
queue-4.4/powerpc-rfi-flush-move-the-logic-to-avoid-a-redo-into-the-debugfs-code.patch
queue-4.4/powerpc-powernv-query-firmware-for-count-cache-flush-settings.patch
queue-4.4/powerpc-64s-wire-up-cpu_show_spectre_v1.patch
queue-4.4/powerpc-64s-add-barrier_nospec.patch
queue-4.4/powerpc-64s-enhance-the-information-in-cpu_show_meltdown.patch
queue-4.4/powerpc-move-default-security-feature-flags.patch
queue-4.4/powerpc-pseries-fix-clearing-of-security-feature-flags.patch
queue-4.4/powerpc-pseries-restore-default-security-feature-flags-on-setup.patch

^ permalink raw reply	[flat|nested] 180+ messages in thread

* Patch "powerpc: Add security feature flags for Spectre/Meltdown" has been added to the 4.4-stable tree
  2019-04-21 14:19   ` Michael Ellerman
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, gregkh, linuxppc-dev, mpe,
	msuchanek, npiggin
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc: Add security feature flags for Spectre/Meltdown

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-add-security-feature-flags-for-spectre-meltdown.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:19:56 +1000
Subject: powerpc: Add security feature flags for Spectre/Meltdown
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-12-mpe@ellerman.id.au>

From: Michael Ellerman <mpe@ellerman.id.au>

commit 9a868f634349e62922c226834aa23e3d1329ae7f upstream.

This commit adds security feature flags to reflect the settings we
receive from firmware regarding Spectre/Meltdown mitigations.

The feature names reflect the names we are given by firmware on bare
metal machines. See the hostboot source for details.

Arguably these could be firmware features, but that would require them
to be read early in boot so they're available prior to asm feature
patching, and we don't actually want to use them for patching. We may
also want to dynamically update them in future, which would be
incompatible with the way firmware features work (at the moment at
least). So for now just make them separate flags.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/include/asm/security_features.h |   65 +++++++++++++++++++++++++++
 arch/powerpc/kernel/Makefile                 |    2 
 arch/powerpc/kernel/security.c               |   15 ++++++
 3 files changed, 81 insertions(+), 1 deletion(-)
 create mode 100644 arch/powerpc/include/asm/security_features.h
 create mode 100644 arch/powerpc/kernel/security.c

--- /dev/null
+++ b/arch/powerpc/include/asm/security_features.h
@@ -0,0 +1,65 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/*
+ * Security related feature bit definitions.
+ *
+ * Copyright 2018, Michael Ellerman, IBM Corporation.
+ */
+
+#ifndef _ASM_POWERPC_SECURITY_FEATURES_H
+#define _ASM_POWERPC_SECURITY_FEATURES_H
+
+
+extern unsigned long powerpc_security_features;
+
+static inline void security_ftr_set(unsigned long feature)
+{
+	powerpc_security_features |= feature;
+}
+
+static inline void security_ftr_clear(unsigned long feature)
+{
+	powerpc_security_features &= ~feature;
+}
+
+static inline bool security_ftr_enabled(unsigned long feature)
+{
+	return !!(powerpc_security_features & feature);
+}
+
+
+// Features indicating support for Spectre/Meltdown mitigations
+
+// The L1-D cache can be flushed with ori r30,r30,0
+#define SEC_FTR_L1D_FLUSH_ORI30		0x0000000000000001ull
+
+// The L1-D cache can be flushed with mtspr 882,r0 (aka SPRN_TRIG2)
+#define SEC_FTR_L1D_FLUSH_TRIG2		0x0000000000000002ull
+
+// ori r31,r31,0 acts as a speculation barrier
+#define SEC_FTR_SPEC_BAR_ORI31		0x0000000000000004ull
+
+// Speculation past bctr is disabled
+#define SEC_FTR_BCCTRL_SERIALISED	0x0000000000000008ull
+
+// Entries in L1-D are private to a SMT thread
+#define SEC_FTR_L1D_THREAD_PRIV		0x0000000000000010ull
+
+// Indirect branch prediction cache disabled
+#define SEC_FTR_COUNT_CACHE_DISABLED	0x0000000000000020ull
+
+
+// Features indicating need for Spectre/Meltdown mitigations
+
+// The L1-D cache should be flushed on MSR[HV] 1->0 transition (hypervisor to guest)
+#define SEC_FTR_L1D_FLUSH_HV		0x0000000000000040ull
+
+// The L1-D cache should be flushed on MSR[PR] 0->1 transition (kernel to userspace)
+#define SEC_FTR_L1D_FLUSH_PR		0x0000000000000080ull
+
+// A speculation barrier should be used for bounds checks (Spectre variant 1)
+#define SEC_FTR_BNDS_CHK_SPEC_BAR	0x0000000000000100ull
+
+// Firmware configuration indicates user favours security over performance
+#define SEC_FTR_FAVOUR_SECURITY		0x0000000000000200ull
+
+#endif /* _ASM_POWERPC_SECURITY_FEATURES_H */
--- a/arch/powerpc/kernel/Makefile
+++ b/arch/powerpc/kernel/Makefile
@@ -40,7 +40,7 @@ obj-$(CONFIG_PPC64)		+= setup_64.o sys_p
 obj-$(CONFIG_VDSO32)		+= vdso32/
 obj-$(CONFIG_HAVE_HW_BREAKPOINT)	+= hw_breakpoint.o
 obj-$(CONFIG_PPC_BOOK3S_64)	+= cpu_setup_ppc970.o cpu_setup_pa6t.o
-obj-$(CONFIG_PPC_BOOK3S_64)	+= cpu_setup_power.o
+obj-$(CONFIG_PPC_BOOK3S_64)	+= cpu_setup_power.o security.o
 obj-$(CONFIG_PPC_BOOK3S_64)	+= mce.o mce_power.o
 obj64-$(CONFIG_RELOCATABLE)	+= reloc_64.o
 obj-$(CONFIG_PPC_BOOK3E_64)	+= exceptions-64e.o idle_book3e.o
--- /dev/null
+++ b/arch/powerpc/kernel/security.c
@@ -0,0 +1,15 @@
+// SPDX-License-Identifier: GPL-2.0+
+//
+// Security related flags and so on.
+//
+// Copyright 2018, Michael Ellerman, IBM Corporation.
+
+#include <linux/kernel.h>
+#include <asm/security_features.h>
+
+
+unsigned long powerpc_security_features __read_mostly = \
+	SEC_FTR_L1D_FLUSH_HV | \
+	SEC_FTR_L1D_FLUSH_PR | \
+	SEC_FTR_BNDS_CHK_SPEC_BAR | \
+	SEC_FTR_FAVOUR_SECURITY;




* Patch "powerpc/asm: Add a patch_site macro & helpers for patching instructions" has been added to the 4.4-stable tree
  2019-04-21 14:20   ` Michael Ellerman
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, gregkh, linuxppc-dev, mpe,
	msuchanek, npiggin
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc/asm: Add a patch_site macro & helpers for patching instructions

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-asm-add-a-patch_site-macro-helpers-for-patching-instructions.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:20:24 +1000
Subject: powerpc/asm: Add a patch_site macro & helpers for patching instructions
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-40-mpe@ellerman.id.au>

From: Michael Ellerman <mpe@ellerman.id.au>

commit 06d0bbc6d0f56dacac3a79900e9a9a0d5972d818 upstream.

Add a macro and some helper C functions for patching single asm
instructions.

The gas macro means we can do something like:

  1:	nop
  	patch_site 1b, patch__foo

Which is less visually distracting than defining a GLOBAL symbol at 1,
and also doesn't pollute the symbol table, which can confuse e.g. perf.

These are obviously similar to our existing feature sections, but are
not automatically patched based on CPU/MMU features, rather they are
designed to be manually patched by C code at some arbitrary point.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/include/asm/code-patching-asm.h |   18 ++++++++++++++++++
 arch/powerpc/include/asm/code-patching.h     |    2 ++
 arch/powerpc/lib/code-patching.c             |   16 ++++++++++++++++
 3 files changed, 36 insertions(+)
 create mode 100644 arch/powerpc/include/asm/code-patching-asm.h

--- /dev/null
+++ b/arch/powerpc/include/asm/code-patching-asm.h
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/*
+ * Copyright 2018, Michael Ellerman, IBM Corporation.
+ */
+#ifndef _ASM_POWERPC_CODE_PATCHING_ASM_H
+#define _ASM_POWERPC_CODE_PATCHING_ASM_H
+
+/* Define a "site" that can be patched */
+.macro patch_site label name
+	.pushsection ".rodata"
+	.balign 4
+	.global \name
+\name:
+	.4byte	\label - .
+	.popsection
+.endm
+
+#endif /* _ASM_POWERPC_CODE_PATCHING_ASM_H */
--- a/arch/powerpc/include/asm/code-patching.h
+++ b/arch/powerpc/include/asm/code-patching.h
@@ -28,6 +28,8 @@ unsigned int create_cond_branch(const un
 				unsigned long target, int flags);
 int patch_branch(unsigned int *addr, unsigned long target, int flags);
 int patch_instruction(unsigned int *addr, unsigned int instr);
+int patch_instruction_site(s32 *addr, unsigned int instr);
+int patch_branch_site(s32 *site, unsigned long target, int flags);
 
 int instr_is_relative_branch(unsigned int instr);
 int instr_is_branch_to_addr(const unsigned int *instr, unsigned long addr);
--- a/arch/powerpc/lib/code-patching.c
+++ b/arch/powerpc/lib/code-patching.c
@@ -32,6 +32,22 @@ int patch_branch(unsigned int *addr, uns
 	return patch_instruction(addr, create_branch(addr, target, flags));
 }
 
+int patch_branch_site(s32 *site, unsigned long target, int flags)
+{
+	unsigned int *addr;
+
+	addr = (unsigned int *)((unsigned long)site + *site);
+	return patch_instruction(addr, create_branch(addr, target, flags));
+}
+
+int patch_instruction_site(s32 *site, unsigned int instr)
+{
+	unsigned int *addr;
+
+	addr = (unsigned int *)((unsigned long)site + *site);
+	return patch_instruction(addr, instr);
+}
+
 unsigned int create_branch(const unsigned int *addr,
 			   unsigned long target, int flags)
 {




* Patch "powerpc/64s: Patch barrier_nospec in modules" has been added to the 4.4-stable tree
  2019-04-21 14:20   ` Michael Ellerman
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, gregkh, linuxppc-dev, mpe,
	msuchanek, npiggin
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc/64s: Patch barrier_nospec in modules

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-64s-patch-barrier_nospec-in-modules.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:20:12 +1000
Subject: powerpc/64s: Patch barrier_nospec in modules
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-28-mpe@ellerman.id.au>

From: Michal Suchanek <msuchanek@suse.de>

commit 815069ca57c142eb71d27439bc27f41a433a67b3 upstream.

Note that, unlike RFI, which is patched only in the kernel, the nospec
state reflects the settings at the time the module was loaded.

Iterating all modules and re-patching every time the settings change
is not implemented.

Based on lwsync patching.

Signed-off-by: Michal Suchanek <msuchanek@suse.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/include/asm/setup.h  |    7 +++++++
 arch/powerpc/kernel/module.c      |    6 ++++++
 arch/powerpc/kernel/security.c    |    2 +-
 arch/powerpc/lib/feature-fixups.c |   16 +++++++++++++---
 4 files changed, 27 insertions(+), 4 deletions(-)

--- a/arch/powerpc/include/asm/setup.h
+++ b/arch/powerpc/include/asm/setup.h
@@ -39,6 +39,13 @@ enum l1d_flush_type {
 void setup_rfi_flush(enum l1d_flush_type, bool enable);
 void do_rfi_flush_fixups(enum l1d_flush_type types);
 void do_barrier_nospec_fixups(bool enable);
+extern bool barrier_nospec_enabled;
+
+#ifdef CONFIG_PPC_BOOK3S_64
+void do_barrier_nospec_fixups_range(bool enable, void *start, void *end);
+#else
+static inline void do_barrier_nospec_fixups_range(bool enable, void *start, void *end) { };
+#endif
 
 #endif /* !__ASSEMBLY__ */
 
--- a/arch/powerpc/kernel/module.c
+++ b/arch/powerpc/kernel/module.c
@@ -67,6 +67,12 @@ int module_finalize(const Elf_Ehdr *hdr,
 		do_feature_fixups(powerpc_firmware_features,
 				  (void *)sect->sh_addr,
 				  (void *)sect->sh_addr + sect->sh_size);
+
+	sect = find_section(hdr, sechdrs, "__spec_barrier_fixup");
+	if (sect != NULL)
+		do_barrier_nospec_fixups_range(barrier_nospec_enabled,
+				  (void *)sect->sh_addr,
+				  (void *)sect->sh_addr + sect->sh_size);
 #endif
 
 	sect = find_section(hdr, sechdrs, "__lwsync_fixup");
--- a/arch/powerpc/kernel/security.c
+++ b/arch/powerpc/kernel/security.c
@@ -16,7 +16,7 @@
 
 unsigned long powerpc_security_features __read_mostly = SEC_FTR_DEFAULT;
 
-static bool barrier_nospec_enabled;
+bool barrier_nospec_enabled;
 
 static void enable_barrier_nospec(bool enable)
 {
--- a/arch/powerpc/lib/feature-fixups.c
+++ b/arch/powerpc/lib/feature-fixups.c
@@ -275,14 +275,14 @@ void do_rfi_flush_fixups(enum l1d_flush_
 						: "unknown");
 }
 
-void do_barrier_nospec_fixups(bool enable)
+void do_barrier_nospec_fixups_range(bool enable, void *fixup_start, void *fixup_end)
 {
 	unsigned int instr, *dest;
 	long *start, *end;
 	int i;
 
-	start = PTRRELOC(&__start___barrier_nospec_fixup),
-	end = PTRRELOC(&__stop___barrier_nospec_fixup);
+	start = fixup_start;
+	end = fixup_end;
 
 	instr = 0x60000000; /* nop */
 
@@ -301,6 +301,16 @@ void do_barrier_nospec_fixups(bool enabl
 	printk(KERN_DEBUG "barrier-nospec: patched %d locations\n", i);
 }
 
+void do_barrier_nospec_fixups(bool enable)
+{
+	void *start, *end;
+
+	start = PTRRELOC(&__start___barrier_nospec_fixup),
+	end = PTRRELOC(&__stop___barrier_nospec_fixup);
+
+	do_barrier_nospec_fixups_range(enable, start, end);
+}
+
 #endif /* CONFIG_PPC_BOOK3S_64 */
 
 void do_lwsync_fixups(unsigned long value, void *fixup_start, void *fixup_end)




* Patch "powerpc/fsl: Add barrier_nospec implementation for NXP PowerPC Book3E" has been added to the 4.4-stable tree
  2019-04-21 14:20   ` Michael Ellerman
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, gregkh, linuxppc-dev, mpe,
	msuchanek, npiggin
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc/fsl: Add barrier_nospec implementation for NXP PowerPC Book3E

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-fsl-add-barrier_nospec-implementation-for-nxp-powerpc-book3e.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:20:23 +1000
Subject: powerpc/fsl: Add barrier_nospec implementation for NXP PowerPC Book3E
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-39-mpe@ellerman.id.au>

From: Diana Craciun <diana.craciun@nxp.com>

commit ebcd1bfc33c7a90df941df68a6e5d4018c022fba upstream.

Implement barrier_nospec as an isync;sync instruction sequence.
The implementation uses the infrastructure built for Book3S 64.

Signed-off-by: Diana Craciun <diana.craciun@nxp.com>
[mpe: Add PPC_INST_ISYNC for backport]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/Kconfig                  |    2 +-
 arch/powerpc/include/asm/barrier.h    |    8 +++++++-
 arch/powerpc/include/asm/ppc-opcode.h |    1 +
 arch/powerpc/lib/feature-fixups.c     |   31 +++++++++++++++++++++++++++++++
 4 files changed, 40 insertions(+), 2 deletions(-)

--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -165,7 +165,7 @@ config PPC
 config PPC_BARRIER_NOSPEC
     bool
     default y
-    depends on PPC_BOOK3S_64
+    depends on PPC_BOOK3S_64 || PPC_FSL_BOOK3E
 
 config GENERIC_CSUM
 	def_bool CPU_LITTLE_ENDIAN
--- a/arch/powerpc/include/asm/barrier.h
+++ b/arch/powerpc/include/asm/barrier.h
@@ -92,12 +92,18 @@ do {									\
 #define smp_mb__after_atomic()      smp_mb()
 #define smp_mb__before_spinlock()   smp_mb()
 
+#ifdef CONFIG_PPC_BOOK3S_64
+#define NOSPEC_BARRIER_SLOT   nop
+#elif defined(CONFIG_PPC_FSL_BOOK3E)
+#define NOSPEC_BARRIER_SLOT   nop; nop
+#endif
+
 #ifdef CONFIG_PPC_BARRIER_NOSPEC
 /*
  * Prevent execution of subsequent instructions until preceding branches have
  * been fully resolved and are no longer executing speculatively.
  */
-#define barrier_nospec_asm NOSPEC_BARRIER_FIXUP_SECTION; nop
+#define barrier_nospec_asm NOSPEC_BARRIER_FIXUP_SECTION; NOSPEC_BARRIER_SLOT
 
 // This also acts as a compiler barrier due to the memory clobber.
 #define barrier_nospec() asm (stringify_in_c(barrier_nospec_asm) ::: "memory")
--- a/arch/powerpc/include/asm/ppc-opcode.h
+++ b/arch/powerpc/include/asm/ppc-opcode.h
@@ -147,6 +147,7 @@
 #define PPC_INST_LWSYNC			0x7c2004ac
 #define PPC_INST_SYNC			0x7c0004ac
 #define PPC_INST_SYNC_MASK		0xfc0007fe
+#define PPC_INST_ISYNC			0x4c00012c
 #define PPC_INST_LXVD2X			0x7c000698
 #define PPC_INST_MCRXR			0x7c000400
 #define PPC_INST_MCRXR_MASK		0xfc0007fe
--- a/arch/powerpc/lib/feature-fixups.c
+++ b/arch/powerpc/lib/feature-fixups.c
@@ -315,6 +315,37 @@ void do_barrier_nospec_fixups(bool enabl
 }
 #endif /* CONFIG_PPC_BARRIER_NOSPEC */
 
+#ifdef CONFIG_PPC_FSL_BOOK3E
+void do_barrier_nospec_fixups_range(bool enable, void *fixup_start, void *fixup_end)
+{
+	unsigned int instr[2], *dest;
+	long *start, *end;
+	int i;
+
+	start = fixup_start;
+	end = fixup_end;
+
+	instr[0] = PPC_INST_NOP;
+	instr[1] = PPC_INST_NOP;
+
+	if (enable) {
+		pr_info("barrier-nospec: using isync; sync as speculation barrier\n");
+		instr[0] = PPC_INST_ISYNC;
+		instr[1] = PPC_INST_SYNC;
+	}
+
+	for (i = 0; start < end; start++, i++) {
+		dest = (void *)start + *start;
+
+		pr_devel("patching dest %lx\n", (unsigned long)dest);
+		patch_instruction(dest, instr[0]);
+		patch_instruction(dest + 1, instr[1]);
+	}
+
+	printk(KERN_DEBUG "barrier-nospec: patched %d locations\n", i);
+}
+#endif /* CONFIG_PPC_FSL_BOOK3E */
+
 void do_lwsync_fixups(unsigned long value, void *fixup_start, void *fixup_end)
 {
 	long *start, *end;


Patches currently in stable-queue which might be from mpe@ellerman.id.au are

queue-4.4/powerpc-64s-add-support-for-a-store-forwarding-barrier-at-kernel-entry-exit.patch
queue-4.4/powerpc-64-make-stf-barrier-ppc_book3s_64-specific.patch
queue-4.4/powerpc-pseries-set-or-clear-security-feature-flags.patch
queue-4.4/powerpc-fsl-fix-spectre_v2-mitigations-reporting.patch
queue-4.4/powerpc-64s-patch-barrier_nospec-in-modules.patch
queue-4.4/powerpc-pseries-support-firmware-disable-of-rfi-flush.patch
queue-4.4/powerpc-rfi-flush-call-setup_rfi_flush-after-lpm-migration.patch
queue-4.4/powerpc-pseries-query-hypervisor-for-count-cache-flush-settings.patch
queue-4.4/powerpc-powernv-set-or-clear-security-feature-flags.patch
queue-4.4/powerpc-64s-add-support-for-software-count-cache-flush.patch
queue-4.4/powerpc64s-show-ori31-availability-in-spectre_v1-sysfs-file-not-v2.patch
queue-4.4/powerpc-fsl-flush-the-branch-predictor-at-each-kernel-entry-64bit.patch
queue-4.4/powerpc-fsl-update-spectre-v2-reporting.patch
queue-4.4/powerpc-64s-wire-up-cpu_show_spectre_v2.patch
queue-4.4/powerpc-64-make-meltdown-reporting-book3s-64-specific.patch
queue-4.4/powerpc-rfi-flush-make-it-possible-to-call-setup_rfi_flush-again.patch
queue-4.4/powerpc-64s-add-support-for-ori-barrier_nospec-patching.patch
queue-4.4/powerpc-use-barrier_nospec-in-copy_from_user.patch
queue-4.4/powerpc-64s-fix-section-mismatch-warnings-from-setup_rfi_flush.patch
queue-4.4/powerpc-avoid-code-patching-freed-init-sections.patch
queue-4.4/powerpc-fsl-add-macro-to-flush-the-branch-predictor.patch
queue-4.4/powerpc-xmon-add-rfi-flush-related-fields-to-paca-dump.patch
queue-4.4/powerpc-fsl-add-barrier_nospec-implementation-for-nxp-powerpc-book3e.patch
queue-4.4/powerpc-security-fix-spectre_v2-reporting.patch
queue-4.4/powerpc-add-security-feature-flags-for-spectre-meltdown.patch
queue-4.4/powerpc-powernv-use-the-security-flags-in-pnv_setup_rfi_flush.patch
queue-4.4/powerpc-64-disable-the-speculation-barrier-from-the-command-line.patch
queue-4.4/powerpc-fsl-fix-the-flush-of-branch-predictor.patch
queue-4.4/powerpc-pseries-use-the-security-flags-in-pseries_setup_rfi_flush.patch
queue-4.4/powerpc-64-add-config_ppc_barrier_nospec.patch
queue-4.4/powerpc-64s-move-cpu_show_meltdown.patch
queue-4.4/powerpc-64-use-barrier_nospec-in-syscall-entry.patch
queue-4.4/powerpc-fsl-add-nospectre_v2-command-line-argument.patch
queue-4.4/powerpc-64s-add-new-security-feature-flags-for-count-cache-flush.patch
queue-4.4/powerpc-fsl-add-infrastructure-to-fixup-branch-predictor-flush.patch
queue-4.4/powerpc-rfi-flush-differentiate-enabled-and-patched-flush-types.patch
queue-4.4/powerpc-64s-enhance-the-information-in-cpu_show_spectre_v1.patch
queue-4.4/powerpc-64-call-setup_barrier_nospec-from-setup_arch.patch
queue-4.4/powerpc-rfi-flush-always-enable-fallback-flush-on-pseries.patch
queue-4.4/powerpc-64s-improve-rfi-l1-d-cache-flush-fallback.patch
queue-4.4/powerpc-asm-add-a-patch_site-macro-helpers-for-patching-instructions.patch
queue-4.4/powerpc-pseries-add-new-h_get_cpu_characteristics-flags.patch
queue-4.4/powerpc-64s-enable-barrier_nospec-based-on-firmware-settings.patch
queue-4.4/powerpc-powernv-support-firmware-disable-of-rfi-flush.patch
queue-4.4/powerpc-rfi-flush-move-the-logic-to-avoid-a-redo-into-the-debugfs-code.patch
queue-4.4/powerpc-powernv-query-firmware-for-count-cache-flush-settings.patch
queue-4.4/powerpc-64s-wire-up-cpu_show_spectre_v1.patch
queue-4.4/powerpc-64s-add-barrier_nospec.patch
queue-4.4/powerpc-64s-enhance-the-information-in-cpu_show_meltdown.patch
queue-4.4/powerpc-move-default-security-feature-flags.patch
queue-4.4/powerpc-pseries-fix-clearing-of-security-feature-flags.patch
queue-4.4/powerpc-pseries-restore-default-security-feature-flags-on-setup.patch

^ permalink raw reply	[flat|nested] 180+ messages in thread

* Patch "powerpc/fsl: Add infrastructure to fixup branch predictor flush" has been added to the 4.4-stable tree
  2019-04-21 14:20   ` Michael Ellerman
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, gregkh, linuxppc-dev, mpe,
	msuchanek, npiggin
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc/fsl: Add infrastructure to fixup branch predictor flush

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-fsl-add-infrastructure-to-fixup-branch-predictor-flush.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:20:30 +1000
Subject: powerpc/fsl: Add infrastructure to fixup branch predictor flush
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-46-mpe@ellerman.id.au>

From: Diana Craciun <diana.craciun@nxp.com>

commit 76a5eaa38b15dda92cd6964248c39b5a6f3a4e9d upstream.

In order to protect against speculation attacks (Spectre
variant 2) on NXP PowerPC platforms, the branch predictor
should be flushed when the privilege level changes.
This patch adds the infrastructure to fix up at runtime the
code sections that perform the branch predictor flush,
depending on a boot argument added later in a separate
patch.

Signed-off-by: Diana Craciun <diana.craciun@nxp.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/include/asm/feature-fixups.h |   12 ++++++++++++
 arch/powerpc/include/asm/setup.h          |    2 ++
 arch/powerpc/kernel/vmlinux.lds.S         |    8 ++++++++
 arch/powerpc/lib/feature-fixups.c         |   23 +++++++++++++++++++++++
 4 files changed, 45 insertions(+)

--- a/arch/powerpc/include/asm/feature-fixups.h
+++ b/arch/powerpc/include/asm/feature-fixups.h
@@ -216,6 +216,17 @@ label##3:					       	\
 	FTR_ENTRY_OFFSET 953b-954b;			\
 	.popsection;
 
+#define START_BTB_FLUSH_SECTION			\
+955:							\
+
+#define END_BTB_FLUSH_SECTION			\
+956:							\
+	.pushsection __btb_flush_fixup,"a";	\
+	.align 2;							\
+957:						\
+	FTR_ENTRY_OFFSET 955b-957b;			\
+	FTR_ENTRY_OFFSET 956b-957b;			\
+	.popsection;
 
 #ifndef __ASSEMBLY__
 
@@ -224,6 +235,7 @@ extern long __start___stf_entry_barrier_
 extern long __start___stf_exit_barrier_fixup, __stop___stf_exit_barrier_fixup;
 extern long __start___rfi_flush_fixup, __stop___rfi_flush_fixup;
 extern long __start___barrier_nospec_fixup, __stop___barrier_nospec_fixup;
+extern long __start__btb_flush_fixup, __stop__btb_flush_fixup;
 
 #endif
 
--- a/arch/powerpc/include/asm/setup.h
+++ b/arch/powerpc/include/asm/setup.h
@@ -53,6 +53,8 @@ void do_barrier_nospec_fixups_range(bool
 static inline void do_barrier_nospec_fixups_range(bool enable, void *start, void *end) { };
 #endif
 
+void do_btb_flush_fixups(void);
+
 #endif /* !__ASSEMBLY__ */
 
 #endif	/* _ASM_POWERPC_SETUP_H */
--- a/arch/powerpc/kernel/vmlinux.lds.S
+++ b/arch/powerpc/kernel/vmlinux.lds.S
@@ -104,6 +104,14 @@ SECTIONS
 	}
 #endif /* CONFIG_PPC_BARRIER_NOSPEC */
 
+#ifdef CONFIG_PPC_FSL_BOOK3E
+	. = ALIGN(8);
+	__spec_btb_flush_fixup : AT(ADDR(__spec_btb_flush_fixup) - LOAD_OFFSET) {
+		__start__btb_flush_fixup = .;
+		*(__btb_flush_fixup)
+		__stop__btb_flush_fixup = .;
+	}
+#endif
 	EXCEPTION_TABLE(0)
 
 	NOTES :kernel :notes
--- a/arch/powerpc/lib/feature-fixups.c
+++ b/arch/powerpc/lib/feature-fixups.c
@@ -344,6 +344,29 @@ void do_barrier_nospec_fixups_range(bool
 
 	printk(KERN_DEBUG "barrier-nospec: patched %d locations\n", i);
 }
+
+static void patch_btb_flush_section(long *curr)
+{
+	unsigned int *start, *end;
+
+	start = (void *)curr + *curr;
+	end = (void *)curr + *(curr + 1);
+	for (; start < end; start++) {
+		pr_devel("patching dest %lx\n", (unsigned long)start);
+		patch_instruction(start, PPC_INST_NOP);
+	}
+}
+
+void do_btb_flush_fixups(void)
+{
+	long *start, *end;
+
+	start = PTRRELOC(&__start__btb_flush_fixup);
+	end = PTRRELOC(&__stop__btb_flush_fixup);
+
+	for (; start < end; start += 2)
+		patch_btb_flush_section(start);
+}
 #endif /* CONFIG_PPC_FSL_BOOK3E */
 
 void do_lwsync_fixups(unsigned long value, void *fixup_start, void *fixup_end)



* Patch "powerpc/fsl: Add macro to flush the branch predictor" has been added to the 4.4-stable tree
  2019-04-21 14:20   ` Michael Ellerman
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, gregkh, linuxppc-dev, mpe,
	msuchanek, npiggin
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc/fsl: Add macro to flush the branch predictor

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-fsl-add-macro-to-flush-the-branch-predictor.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:20:31 +1000
Subject: powerpc/fsl: Add macro to flush the branch predictor
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-47-mpe@ellerman.id.au>

From: Diana Craciun <diana.craciun@nxp.com>

commit 1cbf8990d79ff69da8ad09e8a3df014e1494462b upstream.

The BUCSR register can be used to invalidate the entries in the
branch prediction mechanisms.

Signed-off-by: Diana Craciun <diana.craciun@nxp.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/include/asm/ppc_asm.h |   11 +++++++++++
 1 file changed, 11 insertions(+)

--- a/arch/powerpc/include/asm/ppc_asm.h
+++ b/arch/powerpc/include/asm/ppc_asm.h
@@ -821,4 +821,15 @@ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,C
 	.long 0x2400004c  /* rfid				*/
 #endif /* !CONFIG_PPC_BOOK3E */
 #endif /*  __ASSEMBLY__ */
+
+#ifdef CONFIG_PPC_FSL_BOOK3E
+#define BTB_FLUSH(reg)			\
+	lis reg,BUCSR_INIT@h;		\
+	ori reg,reg,BUCSR_INIT@l;	\
+	mtspr SPRN_BUCSR,reg;		\
+	isync;
+#else
+#define BTB_FLUSH(reg)
+#endif /* CONFIG_PPC_FSL_BOOK3E */
+
 #endif /* _ASM_POWERPC_PPC_ASM_H */



* Patch "powerpc/fsl: Add nospectre_v2 command line argument" has been added to the 4.4-stable tree
  2019-04-21 14:20   ` Michael Ellerman
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, gregkh, linuxppc-dev, mpe,
	msuchanek, npiggin
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc/fsl: Add nospectre_v2 command line argument

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-fsl-add-nospectre_v2-command-line-argument.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:20:33 +1000
Subject: powerpc/fsl: Add nospectre_v2 command line argument
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-49-mpe@ellerman.id.au>

From: Diana Craciun <diana.craciun@nxp.com>

commit f633a8ad636efb5d4bba1a047d4a0f1ef719aa06 upstream.

When the command line argument is present, the Spectre variant 2
mitigations are disabled.

Signed-off-by: Diana Craciun <diana.craciun@nxp.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/include/asm/setup.h |    5 +++++
 arch/powerpc/kernel/security.c   |   21 +++++++++++++++++++++
 2 files changed, 26 insertions(+)

--- a/arch/powerpc/include/asm/setup.h
+++ b/arch/powerpc/include/asm/setup.h
@@ -53,6 +53,11 @@ void do_barrier_nospec_fixups_range(bool
 static inline void do_barrier_nospec_fixups_range(bool enable, void *start, void *end) { };
 #endif
 
+#ifdef CONFIG_PPC_FSL_BOOK3E
+void setup_spectre_v2(void);
+#else
+static inline void setup_spectre_v2(void) {};
+#endif
 void do_btb_flush_fixups(void);
 
 #endif /* !__ASSEMBLY__ */
--- a/arch/powerpc/kernel/security.c
+++ b/arch/powerpc/kernel/security.c
@@ -27,6 +27,10 @@ static enum count_cache_flush_type count
 
 bool barrier_nospec_enabled;
 static bool no_nospec;
+static bool btb_flush_enabled;
+#ifdef CONFIG_PPC_FSL_BOOK3E
+static bool no_spectrev2;
+#endif
 
 static void enable_barrier_nospec(bool enable)
 {
@@ -102,6 +106,23 @@ static __init int barrier_nospec_debugfs
 device_initcall(barrier_nospec_debugfs_init);
 #endif /* CONFIG_DEBUG_FS */
 
+#ifdef CONFIG_PPC_FSL_BOOK3E
+static int __init handle_nospectre_v2(char *p)
+{
+	no_spectrev2 = true;
+
+	return 0;
+}
+early_param("nospectre_v2", handle_nospectre_v2);
+void setup_spectre_v2(void)
+{
+	if (no_spectrev2)
+		do_btb_flush_fixups();
+	else
+		btb_flush_enabled = true;
+}
+#endif /* CONFIG_PPC_FSL_BOOK3E */
+
 #ifdef CONFIG_PPC_BOOK3S_64
 ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr, char *buf)
 {



* Patch "powerpc: Avoid code patching freed init sections" has been added to the 4.4-stable tree
  2019-04-21 14:20   ` Michael Ellerman
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, gregkh, linuxppc-dev, mikey,
	mpe, msuchanek, npiggin
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc: Avoid code patching freed init sections

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-avoid-code-patching-freed-init-sections.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:20:29 +1000
Subject: powerpc: Avoid code patching freed init sections
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-45-mpe@ellerman.id.au>

From: Michael Neuling <mikey@neuling.org>

commit 51c3c62b58b357e8d35e4cc32f7b4ec907426fe3 upstream.

This stops us from doing code patching in init sections after they've
been freed.

In this chain:
  kvm_guest_init() ->
    kvm_use_magic_page() ->
      fault_in_pages_readable() ->
	 __get_user() ->
	   __get_user_nocheck() ->
	     barrier_nospec();

We have a code patching location at barrier_nospec() and
kvm_guest_init() is an init function. This whole chain gets inlined,
so when we free the init section (hence kvm_guest_init()), this code
goes away and hence should no longer be patched.

We have seen this as userspace memory corruption when using a memory
checker while doing partition migration testing on PowerVM (this
starts the code patching post-migration via
/sys/kernel/mobility/migration). In theory, it could also happen when
using /sys/kernel/debug/powerpc/barrier_nospec.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/include/asm/setup.h |    1 +
 arch/powerpc/lib/code-patching.c |   13 +++++++++++++
 arch/powerpc/mm/mem.c            |    2 ++
 3 files changed, 16 insertions(+)

--- a/arch/powerpc/include/asm/setup.h
+++ b/arch/powerpc/include/asm/setup.h
@@ -8,6 +8,7 @@ extern void ppc_printk_progress(char *s,
 
 extern unsigned int rtas_data;
 extern unsigned long long memory_limit;
+extern bool init_mem_is_free;
 extern unsigned long klimit;
 extern void *zalloc_maybe_bootmem(size_t size, gfp_t mask);
 
--- a/arch/powerpc/lib/code-patching.c
+++ b/arch/powerpc/lib/code-patching.c
@@ -14,12 +14,25 @@
 #include <asm/page.h>
 #include <asm/code-patching.h>
 #include <asm/uaccess.h>
+#include <asm/setup.h>
+#include <asm/sections.h>
 
 
+static inline bool is_init(unsigned int *addr)
+{
+	return addr >= (unsigned int *)__init_begin && addr < (unsigned int *)__init_end;
+}
+
 int patch_instruction(unsigned int *addr, unsigned int instr)
 {
 	int err;
 
+	/* Make sure we aren't patching a freed init section */
+	if (init_mem_is_free && is_init(addr)) {
+		pr_debug("Skipping init section patching addr: 0x%px\n", addr);
+		return 0;
+	}
+
 	__put_user_size(instr, addr, 4, err);
 	if (err)
 		return err;
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -62,6 +62,7 @@
 #endif
 
 unsigned long long memory_limit;
+bool init_mem_is_free;
 
 #ifdef CONFIG_HIGHMEM
 pte_t *kmap_pte;
@@ -381,6 +382,7 @@ void __init mem_init(void)
 void free_initmem(void)
 {
 	ppc_md.progress = ppc_printk_progress;
+	init_mem_is_free = true;
 	free_initmem_default(POISON_FREE_INITMEM);
 }
 


Patches currently in stable-queue which might be from mpe@ellerman.id.au are

queue-4.4/powerpc-64s-add-support-for-a-store-forwarding-barrier-at-kernel-entry-exit.patch
queue-4.4/powerpc-64-make-stf-barrier-ppc_book3s_64-specific.patch
queue-4.4/powerpc-pseries-set-or-clear-security-feature-flags.patch
queue-4.4/powerpc-fsl-fix-spectre_v2-mitigations-reporting.patch
queue-4.4/powerpc-64s-patch-barrier_nospec-in-modules.patch
queue-4.4/powerpc-pseries-support-firmware-disable-of-rfi-flush.patch
queue-4.4/powerpc-rfi-flush-call-setup_rfi_flush-after-lpm-migration.patch
queue-4.4/powerpc-pseries-query-hypervisor-for-count-cache-flush-settings.patch
queue-4.4/powerpc-powernv-set-or-clear-security-feature-flags.patch
queue-4.4/powerpc-64s-add-support-for-software-count-cache-flush.patch
queue-4.4/powerpc64s-show-ori31-availability-in-spectre_v1-sysfs-file-not-v2.patch
queue-4.4/powerpc-fsl-flush-the-branch-predictor-at-each-kernel-entry-64bit.patch
queue-4.4/powerpc-fsl-update-spectre-v2-reporting.patch
queue-4.4/powerpc-64s-wire-up-cpu_show_spectre_v2.patch
queue-4.4/powerpc-64-make-meltdown-reporting-book3s-64-specific.patch
queue-4.4/powerpc-rfi-flush-make-it-possible-to-call-setup_rfi_flush-again.patch
queue-4.4/powerpc-64s-add-support-for-ori-barrier_nospec-patching.patch
queue-4.4/powerpc-use-barrier_nospec-in-copy_from_user.patch
queue-4.4/powerpc-64s-fix-section-mismatch-warnings-from-setup_rfi_flush.patch
queue-4.4/powerpc-avoid-code-patching-freed-init-sections.patch
queue-4.4/powerpc-fsl-add-macro-to-flush-the-branch-predictor.patch
queue-4.4/powerpc-xmon-add-rfi-flush-related-fields-to-paca-dump.patch
queue-4.4/powerpc-fsl-add-barrier_nospec-implementation-for-nxp-powerpc-book3e.patch
queue-4.4/powerpc-security-fix-spectre_v2-reporting.patch
queue-4.4/powerpc-add-security-feature-flags-for-spectre-meltdown.patch
queue-4.4/powerpc-powernv-use-the-security-flags-in-pnv_setup_rfi_flush.patch
queue-4.4/powerpc-64-disable-the-speculation-barrier-from-the-command-line.patch
queue-4.4/powerpc-fsl-fix-the-flush-of-branch-predictor.patch
queue-4.4/powerpc-pseries-use-the-security-flags-in-pseries_setup_rfi_flush.patch
queue-4.4/powerpc-64-add-config_ppc_barrier_nospec.patch
queue-4.4/powerpc-64s-move-cpu_show_meltdown.patch
queue-4.4/powerpc-64-use-barrier_nospec-in-syscall-entry.patch
queue-4.4/powerpc-fsl-add-nospectre_v2-command-line-argument.patch
queue-4.4/powerpc-64s-add-new-security-feature-flags-for-count-cache-flush.patch
queue-4.4/powerpc-fsl-add-infrastructure-to-fixup-branch-predictor-flush.patch
queue-4.4/powerpc-rfi-flush-differentiate-enabled-and-patched-flush-types.patch
queue-4.4/powerpc-64s-enhance-the-information-in-cpu_show_spectre_v1.patch
queue-4.4/powerpc-64-call-setup_barrier_nospec-from-setup_arch.patch
queue-4.4/powerpc-rfi-flush-always-enable-fallback-flush-on-pseries.patch
queue-4.4/powerpc-64s-improve-rfi-l1-d-cache-flush-fallback.patch
queue-4.4/powerpc-asm-add-a-patch_site-macro-helpers-for-patching-instructions.patch
queue-4.4/powerpc-pseries-add-new-h_get_cpu_characteristics-flags.patch
queue-4.4/powerpc-64s-enable-barrier_nospec-based-on-firmware-settings.patch
queue-4.4/powerpc-powernv-support-firmware-disable-of-rfi-flush.patch
queue-4.4/powerpc-rfi-flush-move-the-logic-to-avoid-a-redo-into-the-debugfs-code.patch
queue-4.4/powerpc-powernv-query-firmware-for-count-cache-flush-settings.patch
queue-4.4/powerpc-64s-wire-up-cpu_show_spectre_v1.patch
queue-4.4/powerpc-64s-add-barrier_nospec.patch
queue-4.4/powerpc-64s-enhance-the-information-in-cpu_show_meltdown.patch
queue-4.4/powerpc-move-default-security-feature-flags.patch
queue-4.4/powerpc-pseries-fix-clearing-of-security-feature-flags.patch
queue-4.4/powerpc-pseries-restore-default-security-feature-flags-on-setup.patch


* Patch "powerpc/fsl: Fix the flush of branch predictor." has been added to the 4.4-stable tree
  2019-04-21 14:20   ` Michael Ellerman
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, dja, gregkh, linuxppc-dev, mpe,
	msuchanek, npiggin
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc/fsl: Fix the flush of branch predictor.

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-fsl-fix-the-flush-of-branch-predictor.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:20:37 +1000
Subject: powerpc/fsl: Fix the flush of branch predictor.
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-53-mpe@ellerman.id.au>

From: Christophe Leroy <christophe.leroy@c-s.fr>

commit 27da80719ef132cf8c80eb406d5aeb37dddf78cc upstream.

The commit identified below adds MC_BTB_FLUSH macro only when
CONFIG_PPC_FSL_BOOK3E is defined. This results in the following error
on some configs (seen several times with kisskb randconfig_defconfig)

arch/powerpc/kernel/exceptions-64e.S:576: Error: Unrecognized opcode: `mc_btb_flush'
make[3]: *** [scripts/Makefile.build:367: arch/powerpc/kernel/exceptions-64e.o] Error 1
make[2]: *** [scripts/Makefile.build:492: arch/powerpc/kernel] Error 2
make[1]: *** [Makefile:1043: arch/powerpc] Error 2
make: *** [Makefile:152: sub-make] Error 2

This patch adds a blank definition of MC_BTB_FLUSH for other cases.

Fixes: 10c5e83afd4a ("powerpc/fsl: Flush the branch predictor at each kernel entry (64bit)")
Cc: Diana Craciun <diana.craciun@nxp.com>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Reviewed-by: Daniel Axtens <dja@axtens.net>
Reviewed-by: Diana Craciun <diana.craciun@nxp.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/kernel/exceptions-64e.S |    1 +
 1 file changed, 1 insertion(+)

--- a/arch/powerpc/kernel/exceptions-64e.S
+++ b/arch/powerpc/kernel/exceptions-64e.S
@@ -348,6 +348,7 @@ ret_from_mc_except:
 #define GEN_BTB_FLUSH
 #define CRIT_BTB_FLUSH
 #define DBG_BTB_FLUSH
+#define MC_BTB_FLUSH
 #define GDBELL_BTB_FLUSH
 #endif
 



* Patch "powerpc/fsl: Flush the branch predictor at each kernel entry (64bit)" has been added to the 4.4-stable tree
  2019-04-21 14:20   ` Michael Ellerman
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, gregkh, linuxppc-dev, mpe,
	msuchanek, npiggin
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc/fsl: Flush the branch predictor at each kernel entry (64bit)

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-fsl-flush-the-branch-predictor-at-each-kernel-entry-64bit.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:20:34 +1000
Subject: powerpc/fsl: Flush the branch predictor at each kernel entry (64bit)
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-50-mpe@ellerman.id.au>

From: Diana Craciun <diana.craciun@nxp.com>

commit 10c5e83afd4a3f01712d97d3bb1ae34d5b74a185 upstream.

In order to protect against speculation attacks on
indirect branches, the branch predictor is flushed at
kernel entry to protect against the following situations:
- a userspace process attacking another userspace process
- a userspace process attacking the kernel
Basically, whenever the privilege level changes (i.e. the
kernel is entered), the branch predictor state is flushed.

Signed-off-by: Diana Craciun <diana.craciun@nxp.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/kernel/entry_64.S       |    5 +++++
 arch/powerpc/kernel/exceptions-64e.S |   26 +++++++++++++++++++++++++-
 arch/powerpc/mm/tlb_low_64e.S        |    7 +++++++
 3 files changed, 37 insertions(+), 1 deletion(-)

--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -77,6 +77,11 @@ END_FTR_SECTION_IFSET(CPU_FTR_TM)
 	std	r0,GPR0(r1)
 	std	r10,GPR1(r1)
 	beq	2f			/* if from kernel mode */
+#ifdef CONFIG_PPC_FSL_BOOK3E
+START_BTB_FLUSH_SECTION
+	BTB_FLUSH(r10)
+END_BTB_FLUSH_SECTION
+#endif
 	ACCOUNT_CPU_USER_ENTRY(r10, r11)
 2:	std	r2,GPR2(r1)
 	std	r3,GPR3(r1)
--- a/arch/powerpc/kernel/exceptions-64e.S
+++ b/arch/powerpc/kernel/exceptions-64e.S
@@ -295,7 +295,8 @@ ret_from_mc_except:
 	andi.	r10,r11,MSR_PR;		/* save stack pointer */	    \
 	beq	1f;			/* branch around if supervisor */   \
 	ld	r1,PACAKSAVE(r13);	/* get kernel stack coming from usr */\
-1:	cmpdi	cr1,r1,0;		/* check if SP makes sense */	    \
+1:	type##_BTB_FLUSH		\
+	cmpdi	cr1,r1,0;		/* check if SP makes sense */	    \
 	bge-	cr1,exc_##n##_bad_stack;/* bad stack (TODO: out of line) */ \
 	mfspr	r10,SPRN_##type##_SRR0;	/* read SRR0 before touching stack */
 
@@ -327,6 +328,29 @@ ret_from_mc_except:
 #define SPRN_MC_SRR0	SPRN_MCSRR0
 #define SPRN_MC_SRR1	SPRN_MCSRR1
 
+#ifdef CONFIG_PPC_FSL_BOOK3E
+#define GEN_BTB_FLUSH			\
+	START_BTB_FLUSH_SECTION		\
+		beq 1f;			\
+		BTB_FLUSH(r10)			\
+		1:		\
+	END_BTB_FLUSH_SECTION
+
+#define CRIT_BTB_FLUSH			\
+	START_BTB_FLUSH_SECTION		\
+		BTB_FLUSH(r10)		\
+	END_BTB_FLUSH_SECTION
+
+#define DBG_BTB_FLUSH CRIT_BTB_FLUSH
+#define MC_BTB_FLUSH CRIT_BTB_FLUSH
+#define GDBELL_BTB_FLUSH GEN_BTB_FLUSH
+#else
+#define GEN_BTB_FLUSH
+#define CRIT_BTB_FLUSH
+#define DBG_BTB_FLUSH
+#define GDBELL_BTB_FLUSH
+#endif
+
 #define NORMAL_EXCEPTION_PROLOG(n, intnum, addition)			    \
 	EXCEPTION_PROLOG(n, intnum, GEN, addition##_GEN(n))
 
--- a/arch/powerpc/mm/tlb_low_64e.S
+++ b/arch/powerpc/mm/tlb_low_64e.S
@@ -69,6 +69,13 @@ END_FTR_SECTION_IFSET(CPU_FTR_EMB_HV)
 	std	r15,EX_TLB_R15(r12)
 	std	r10,EX_TLB_CR(r12)
 #ifdef CONFIG_PPC_FSL_BOOK3E
+START_BTB_FLUSH_SECTION
+	mfspr r11, SPRN_SRR1
+	andi. r10,r11,MSR_PR
+	beq 1f
+	BTB_FLUSH(r10)
+1:
+END_BTB_FLUSH_SECTION
 	std	r7,EX_TLB_R7(r12)
 #endif
 	TLB_MISS_PROLOG_STATS



* Patch "powerpc/fsl: Update Spectre v2 reporting" has been added to the 4.4-stable tree
  2019-04-21 14:20   ` Michael Ellerman
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, gregkh, linuxppc-dev, mpe,
	msuchanek, npiggin
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc/fsl: Update Spectre v2 reporting

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-fsl-update-spectre-v2-reporting.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:20:35 +1000
Subject: powerpc/fsl: Update Spectre v2 reporting
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-51-mpe@ellerman.id.au>

From: Diana Craciun <diana.craciun@nxp.com>

commit dfa88658fb0583abb92e062c7a9cd5a5b94f2a46 upstream.

Report branch predictor state flush as a mitigation for
Spectre variant 2.

Signed-off-by: Diana Craciun <diana.craciun@nxp.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/kernel/security.c |    5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

--- a/arch/powerpc/kernel/security.c
+++ b/arch/powerpc/kernel/security.c
@@ -213,8 +213,11 @@ ssize_t cpu_show_spectre_v2(struct devic
 
 		if (count_cache_flush_type == COUNT_CACHE_FLUSH_HW)
 			seq_buf_printf(&s, "(hardware accelerated)");
-	} else
+	} else if (btb_flush_enabled) {
+		seq_buf_printf(&s, "Mitigation: Branch predictor state flush");
+	} else {
 		seq_buf_printf(&s, "Vulnerable");
+	}
 
 	seq_buf_printf(&s, "\n");
 



* Patch "powerpc/fsl: Fix spectre_v2 mitigations reporting" has been added to the 4.4-stable tree
  2019-04-21 14:20   ` Michael Ellerman
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, gregkh, linuxppc-dev, mpe,
	msuchanek, npiggin
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc/fsl: Fix spectre_v2 mitigations reporting

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-fsl-fix-spectre_v2-mitigations-reporting.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:20:32 +1000
Subject: powerpc/fsl: Fix spectre_v2 mitigations reporting
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-48-mpe@ellerman.id.au>

From: Diana Craciun <diana.craciun@nxp.com>

commit 7d8bad99ba5a22892f0cad6881289fdc3875a930 upstream.

Currently for CONFIG_PPC_FSL_BOOK3E the spectre_v2 file is incorrect:

  $ cat /sys/devices/system/cpu/vulnerabilities/spectre_v2
  "Mitigation: Software count cache flush"

Which is wrong. Fix it to report vulnerable for now.

Fixes: ee13cb249fab ("powerpc/64s: Add support for software count cache flush")
Cc: stable@vger.kernel.org # v4.19+
Signed-off-by: Diana Craciun <diana.craciun@nxp.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/kernel/security.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/powerpc/kernel/security.c
+++ b/arch/powerpc/kernel/security.c
@@ -23,7 +23,7 @@ enum count_cache_flush_type {
 	COUNT_CACHE_FLUSH_SW	= 0x2,
 	COUNT_CACHE_FLUSH_HW	= 0x4,
 };
-static enum count_cache_flush_type count_cache_flush_type;
+static enum count_cache_flush_type count_cache_flush_type = COUNT_CACHE_FLUSH_NONE;
 
 bool barrier_nospec_enabled;
 static bool no_nospec;



* Patch "powerpc/powernv: Query firmware for count cache flush settings" has been added to the 4.4-stable tree
  2019-04-21 14:20   ` Michael Ellerman
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, gregkh, linuxppc-dev, mpe,
	msuchanek, npiggin
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc/powernv: Query firmware for count cache flush settings

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-powernv-query-firmware-for-count-cache-flush-settings.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:20:28 +1000
Subject: powerpc/powernv: Query firmware for count cache flush settings
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-44-mpe@ellerman.id.au>

From: Michael Ellerman <mpe@ellerman.id.au>

commit 99d54754d3d5f896a8f616b0b6520662bc99d66b upstream.

Look for fw-features properties to determine the appropriate settings
for the count cache flush, and then call the generic powerpc code to
set it up based on the security feature flags.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/platforms/powernv/setup.c |    7 +++++++
 1 file changed, 7 insertions(+)

--- a/arch/powerpc/platforms/powernv/setup.c
+++ b/arch/powerpc/platforms/powernv/setup.c
@@ -77,6 +77,12 @@ static void init_fw_feat_flags(struct de
 	if (fw_feature_is("enabled", "fw-count-cache-disabled", np))
 		security_ftr_set(SEC_FTR_COUNT_CACHE_DISABLED);
 
+	if (fw_feature_is("enabled", "fw-count-cache-flush-bcctr2,0,0", np))
+		security_ftr_set(SEC_FTR_BCCTR_FLUSH_ASSIST);
+
+	if (fw_feature_is("enabled", "needs-count-cache-flush-on-context-switch", np))
+		security_ftr_set(SEC_FTR_FLUSH_COUNT_CACHE);
+
 	/*
 	 * The features below are enabled by default, so we instead look to see
 	 * if firmware has *disabled* them, and clear them if so.
@@ -123,6 +129,7 @@ static void pnv_setup_rfi_flush(void)
 		  security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV));
 
 	setup_rfi_flush(type, enable);
+	setup_count_cache_flush();
 }
 
 static void __init pnv_setup_arch(void)
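The fw-features lookup pattern this patch relies on can be sketched outside the kernel with a mock tree (a minimal standalone sketch; the struct and array below are hypothetical stand-ins for the real device-tree nodes queried via of_get_child_by_name()/of_property_read_bool()):

```c
#include <stdbool.h>
#include <string.h>

/* Mock of the device-tree query behind fw_feature_is(): each fw-features
 * child node carries either an "enabled" or a "disabled" property.
 * The mock_fw_feature type is illustrative, not kernel code. */
struct mock_fw_feature {
	const char *name;
	const char *state; /* "enabled" or "disabled" */
};

static bool fw_feature_is(const char *state, const char *name,
			  const struct mock_fw_feature *features, int n)
{
	for (int i = 0; i < n; i++)
		if (strcmp(features[i].name, name) == 0)
			return strcmp(features[i].state, state) == 0;

	/* An absent node reports neither "enabled" nor "disabled". */
	return false;
}
```

As in the patch, an absent feature node leaves the corresponding security flag at its default rather than forcing it either way.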


Patches currently in stable-queue which might be from mpe@ellerman.id.au are

queue-4.4/powerpc-64s-add-support-for-a-store-forwarding-barrier-at-kernel-entry-exit.patch
queue-4.4/powerpc-64-make-stf-barrier-ppc_book3s_64-specific.patch
queue-4.4/powerpc-pseries-set-or-clear-security-feature-flags.patch
queue-4.4/powerpc-fsl-fix-spectre_v2-mitigations-reporting.patch
queue-4.4/powerpc-64s-patch-barrier_nospec-in-modules.patch
queue-4.4/powerpc-pseries-support-firmware-disable-of-rfi-flush.patch
queue-4.4/powerpc-rfi-flush-call-setup_rfi_flush-after-lpm-migration.patch
queue-4.4/powerpc-pseries-query-hypervisor-for-count-cache-flush-settings.patch
queue-4.4/powerpc-powernv-set-or-clear-security-feature-flags.patch
queue-4.4/powerpc-64s-add-support-for-software-count-cache-flush.patch
queue-4.4/powerpc64s-show-ori31-availability-in-spectre_v1-sysfs-file-not-v2.patch
queue-4.4/powerpc-fsl-flush-the-branch-predictor-at-each-kernel-entry-64bit.patch
queue-4.4/powerpc-fsl-update-spectre-v2-reporting.patch
queue-4.4/powerpc-64s-wire-up-cpu_show_spectre_v2.patch
queue-4.4/powerpc-64-make-meltdown-reporting-book3s-64-specific.patch
queue-4.4/powerpc-rfi-flush-make-it-possible-to-call-setup_rfi_flush-again.patch
queue-4.4/powerpc-64s-add-support-for-ori-barrier_nospec-patching.patch
queue-4.4/powerpc-use-barrier_nospec-in-copy_from_user.patch
queue-4.4/powerpc-64s-fix-section-mismatch-warnings-from-setup_rfi_flush.patch
queue-4.4/powerpc-avoid-code-patching-freed-init-sections.patch
queue-4.4/powerpc-fsl-add-macro-to-flush-the-branch-predictor.patch
queue-4.4/powerpc-xmon-add-rfi-flush-related-fields-to-paca-dump.patch
queue-4.4/powerpc-fsl-add-barrier_nospec-implementation-for-nxp-powerpc-book3e.patch
queue-4.4/powerpc-security-fix-spectre_v2-reporting.patch
queue-4.4/powerpc-add-security-feature-flags-for-spectre-meltdown.patch
queue-4.4/powerpc-powernv-use-the-security-flags-in-pnv_setup_rfi_flush.patch
queue-4.4/powerpc-64-disable-the-speculation-barrier-from-the-command-line.patch
queue-4.4/powerpc-fsl-fix-the-flush-of-branch-predictor.patch
queue-4.4/powerpc-pseries-use-the-security-flags-in-pseries_setup_rfi_flush.patch
queue-4.4/powerpc-64-add-config_ppc_barrier_nospec.patch
queue-4.4/powerpc-64s-move-cpu_show_meltdown.patch
queue-4.4/powerpc-64-use-barrier_nospec-in-syscall-entry.patch
queue-4.4/powerpc-fsl-add-nospectre_v2-command-line-argument.patch
queue-4.4/powerpc-64s-add-new-security-feature-flags-for-count-cache-flush.patch
queue-4.4/powerpc-fsl-add-infrastructure-to-fixup-branch-predictor-flush.patch
queue-4.4/powerpc-rfi-flush-differentiate-enabled-and-patched-flush-types.patch
queue-4.4/powerpc-64s-enhance-the-information-in-cpu_show_spectre_v1.patch
queue-4.4/powerpc-64-call-setup_barrier_nospec-from-setup_arch.patch
queue-4.4/powerpc-rfi-flush-always-enable-fallback-flush-on-pseries.patch
queue-4.4/powerpc-64s-improve-rfi-l1-d-cache-flush-fallback.patch
queue-4.4/powerpc-asm-add-a-patch_site-macro-helpers-for-patching-instructions.patch
queue-4.4/powerpc-pseries-add-new-h_get_cpu_characteristics-flags.patch
queue-4.4/powerpc-64s-enable-barrier_nospec-based-on-firmware-settings.patch
queue-4.4/powerpc-powernv-support-firmware-disable-of-rfi-flush.patch
queue-4.4/powerpc-rfi-flush-move-the-logic-to-avoid-a-redo-into-the-debugfs-code.patch
queue-4.4/powerpc-powernv-query-firmware-for-count-cache-flush-settings.patch
queue-4.4/powerpc-64s-wire-up-cpu_show_spectre_v1.patch
queue-4.4/powerpc-64s-add-barrier_nospec.patch
queue-4.4/powerpc-64s-enhance-the-information-in-cpu_show_meltdown.patch
queue-4.4/powerpc-move-default-security-feature-flags.patch
queue-4.4/powerpc-pseries-fix-clearing-of-security-feature-flags.patch
queue-4.4/powerpc-pseries-restore-default-security-feature-flags-on-setup.patch


* Patch "powerpc/powernv: Set or clear security feature flags" has been added to the 4.4-stable tree
  2019-04-21 14:19   ` Michael Ellerman
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, gregkh, linuxppc-dev, mpe,
	msuchanek, npiggin
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc/powernv: Set or clear security feature flags

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-powernv-set-or-clear-security-feature-flags.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:19:58 +1000
Subject: powerpc/powernv: Set or clear security feature flags
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-14-mpe@ellerman.id.au>

From: Michael Ellerman <mpe@ellerman.id.au>

commit 77addf6e95c8689e478d607176b399a6242a777e upstream.

Now that we have feature flags for security-related things, set or
clear them based on what we see in the device tree provided by
firmware.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/platforms/powernv/setup.c |   56 +++++++++++++++++++++++++++++++++
 1 file changed, 56 insertions(+)

--- a/arch/powerpc/platforms/powernv/setup.c
+++ b/arch/powerpc/platforms/powernv/setup.c
@@ -37,9 +37,63 @@
 #include <asm/smp.h>
 #include <asm/tm.h>
 #include <asm/setup.h>
+#include <asm/security_features.h>
 
 #include "powernv.h"
 
+
+static bool fw_feature_is(const char *state, const char *name,
+			  struct device_node *fw_features)
+{
+	struct device_node *np;
+	bool rc = false;
+
+	np = of_get_child_by_name(fw_features, name);
+	if (np) {
+		rc = of_property_read_bool(np, state);
+		of_node_put(np);
+	}
+
+	return rc;
+}
+
+static void init_fw_feat_flags(struct device_node *np)
+{
+	if (fw_feature_is("enabled", "inst-spec-barrier-ori31,31,0", np))
+		security_ftr_set(SEC_FTR_SPEC_BAR_ORI31);
+
+	if (fw_feature_is("enabled", "fw-bcctrl-serialized", np))
+		security_ftr_set(SEC_FTR_BCCTRL_SERIALISED);
+
+	if (fw_feature_is("enabled", "inst-spec-barrier-ori31,31,0", np))
+		security_ftr_set(SEC_FTR_L1D_FLUSH_ORI30);
+
+	if (fw_feature_is("enabled", "inst-l1d-flush-trig2", np))
+		security_ftr_set(SEC_FTR_L1D_FLUSH_TRIG2);
+
+	if (fw_feature_is("enabled", "fw-l1d-thread-split", np))
+		security_ftr_set(SEC_FTR_L1D_THREAD_PRIV);
+
+	if (fw_feature_is("enabled", "fw-count-cache-disabled", np))
+		security_ftr_set(SEC_FTR_COUNT_CACHE_DISABLED);
+
+	/*
+	 * The features below are enabled by default, so we instead look to see
+	 * if firmware has *disabled* them, and clear them if so.
+	 */
+	if (fw_feature_is("disabled", "speculation-policy-favor-security", np))
+		security_ftr_clear(SEC_FTR_FAVOUR_SECURITY);
+
+	if (fw_feature_is("disabled", "needs-l1d-flush-msr-pr-0-to-1", np))
+		security_ftr_clear(SEC_FTR_L1D_FLUSH_PR);
+
+	if (fw_feature_is("disabled", "needs-l1d-flush-msr-hv-1-to-0", np))
+		security_ftr_clear(SEC_FTR_L1D_FLUSH_HV);
+
+	if (fw_feature_is("disabled", "needs-spec-barrier-for-bound-checks", np))
+		security_ftr_clear(SEC_FTR_BNDS_CHK_SPEC_BAR);
+}
+
 static void pnv_setup_rfi_flush(void)
 {
 	struct device_node *np, *fw_features;
@@ -55,6 +109,8 @@ static void pnv_setup_rfi_flush(void)
 	of_node_put(np);
 
 	if (fw_features) {
+		init_fw_feat_flags(fw_features);
+
 		np = of_get_child_by_name(fw_features, "inst-l1d-flush-trig2");
 		if (np && of_property_read_bool(np, "enabled"))
 			type = L1D_FLUSH_MTTRIG;
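The security_ftr_set()/security_ftr_clear()/security_ftr_enabled() helpers used throughout these patches are simple bit operations on a global feature mask. A minimal standalone sketch (the bit values are illustrative except SEC_FTR_FAVOUR_SECURITY, whose value appears in a later hunk in this series; the kernel's versions live in arch/powerpc/include/asm/security_features.h):

```c
#include <stdbool.h>

/* Illustrative bit positions for two of the flags set above. */
#define SEC_FTR_COUNT_CACHE_DISABLED	0x0000000000000001ull /* illustrative */
#define SEC_FTR_FAVOUR_SECURITY		0x0000000000000200ull

/* One global mask holds every security feature flag. */
static unsigned long long powerpc_security_features;

static inline void security_ftr_set(unsigned long long feature)
{
	powerpc_security_features |= feature;
}

static inline void security_ftr_clear(unsigned long long feature)
{
	powerpc_security_features &= ~feature;
}

static inline bool security_ftr_enabled(unsigned long long feature)
{
	return !!(powerpc_security_features & feature);
}
```

Because each flag is a distinct bit, firmware-driven setup can set and clear features independently without disturbing the rest of the mask.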




* Patch "powerpc/powernv: Support firmware disable of RFI flush" has been added to the 4.4-stable tree
  2019-04-21 14:19   ` Michael Ellerman
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, gregkh, linuxppc-dev, mpe,
	msuchanek, npiggin
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc/powernv: Support firmware disable of RFI flush

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-powernv-support-firmware-disable-of-rfi-flush.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:19:49 +1000
Subject: powerpc/powernv: Support firmware disable of RFI flush
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-5-mpe@ellerman.id.au>

From: Michael Ellerman <mpe@ellerman.id.au>

commit eb0a2d2620ae431c543963c8c7f08f597366fc60 upstream.

Some versions of firmware have a setting that can be configured to
disable the RFI flush; add support for it.

Fixes: 6e032b350cd1 ("powerpc/powernv: Check device-tree for RFI flush settings")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/platforms/powernv/setup.c |    4 ++++
 1 file changed, 4 insertions(+)

--- a/arch/powerpc/platforms/powernv/setup.c
+++ b/arch/powerpc/platforms/powernv/setup.c
@@ -79,6 +79,10 @@ static void pnv_setup_rfi_flush(void)
 		if (np && of_property_read_bool(np, "disabled"))
 			enable--;
 
+		np = of_get_child_by_name(fw_features, "speculation-policy-favor-security");
+		if (np && of_property_read_bool(np, "disabled"))
+			enable = 0;
+
 		of_node_put(np);
 		of_node_put(fw_features);
 	}




* Patch "powerpc/powernv: Use the security flags in pnv_setup_rfi_flush()" has been added to the 4.4-stable tree
  2019-04-21 14:20   ` Michael Ellerman
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, gregkh, linuxppc-dev, mpe,
	msuchanek, npiggin
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc/powernv: Use the security flags in pnv_setup_rfi_flush()

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-powernv-use-the-security-flags-in-pnv_setup_rfi_flush.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:20:01 +1000
Subject: powerpc/powernv: Use the security flags in pnv_setup_rfi_flush()
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-17-mpe@ellerman.id.au>

From: Michael Ellerman <mpe@ellerman.id.au>

commit 37c0bdd00d3ae83369ab60a6712c28e11e6458d5 upstream.

Now that we have the security flags we can significantly simplify the
code in pnv_setup_rfi_flush(), because we can use the flags instead of
checking device tree properties and because the security flags have
pessimistic defaults.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/platforms/powernv/setup.c |   41 ++++++++-------------------------
 1 file changed, 10 insertions(+), 31 deletions(-)

--- a/arch/powerpc/platforms/powernv/setup.c
+++ b/arch/powerpc/platforms/powernv/setup.c
@@ -65,7 +65,7 @@ static void init_fw_feat_flags(struct de
 	if (fw_feature_is("enabled", "fw-bcctrl-serialized", np))
 		security_ftr_set(SEC_FTR_BCCTRL_SERIALISED);
 
-	if (fw_feature_is("enabled", "inst-spec-barrier-ori31,31,0", np))
+	if (fw_feature_is("enabled", "inst-l1d-flush-ori30,30,0", np))
 		security_ftr_set(SEC_FTR_L1D_FLUSH_ORI30);
 
 	if (fw_feature_is("enabled", "inst-l1d-flush-trig2", np))
@@ -98,11 +98,10 @@ static void pnv_setup_rfi_flush(void)
 {
 	struct device_node *np, *fw_features;
 	enum l1d_flush_type type;
-	int enable;
+	bool enable;
 
 	/* Default to fallback in case fw-features are not available */
 	type = L1D_FLUSH_FALLBACK;
-	enable = 1;
 
 	np = of_find_node_by_name(NULL, "ibm,opal");
 	fw_features = of_get_child_by_name(np, "fw-features");
@@ -110,40 +109,20 @@ static void pnv_setup_rfi_flush(void)
 
 	if (fw_features) {
 		init_fw_feat_flags(fw_features);
+		of_node_put(fw_features);
 
-		np = of_get_child_by_name(fw_features, "inst-l1d-flush-trig2");
-		if (np && of_property_read_bool(np, "enabled"))
+		if (security_ftr_enabled(SEC_FTR_L1D_FLUSH_TRIG2))
 			type = L1D_FLUSH_MTTRIG;
 
-		of_node_put(np);
-
-		np = of_get_child_by_name(fw_features, "inst-l1d-flush-ori30,30,0");
-		if (np && of_property_read_bool(np, "enabled"))
+		if (security_ftr_enabled(SEC_FTR_L1D_FLUSH_ORI30))
 			type = L1D_FLUSH_ORI;
-
-		of_node_put(np);
-
-		/* Enable unless firmware says NOT to */
-		enable = 2;
-		np = of_get_child_by_name(fw_features, "needs-l1d-flush-msr-hv-1-to-0");
-		if (np && of_property_read_bool(np, "disabled"))
-			enable--;
-
-		of_node_put(np);
-
-		np = of_get_child_by_name(fw_features, "needs-l1d-flush-msr-pr-0-to-1");
-		if (np && of_property_read_bool(np, "disabled"))
-			enable--;
-
-		np = of_get_child_by_name(fw_features, "speculation-policy-favor-security");
-		if (np && of_property_read_bool(np, "disabled"))
-			enable = 0;
-
-		of_node_put(np);
-		of_node_put(fw_features);
 	}
 
-	setup_rfi_flush(type, enable > 0);
+	enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) && \
+		 (security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR)   || \
+		  security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV));
+
+	setup_rfi_flush(type, enable);
 }
 
 static void __init pnv_setup_arch(void)
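The simplification above reduces the enable decision to a boolean expression over the security flags: flush only when the user favours security and at least one privilege transition actually needs the flush. A standalone sketch of just that predicate (function name and bool parameters are stand-ins for the security_ftr_enabled() queries in the patch):

```c
#include <stdbool.h>

/* Sketch of the RFI-flush enable decision from pnv_setup_rfi_flush()
 * above; each argument mirrors one security_ftr_enabled() query. */
static bool rfi_flush_should_enable(bool favour_security,
				    bool l1d_flush_pr,
				    bool l1d_flush_hv)
{
	return favour_security && (l1d_flush_pr || l1d_flush_hv);
}
```

The pessimistic defaults mean all three inputs start out true, so the flush stays enabled unless firmware explicitly clears the relevant flags.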




* Patch "powerpc: Move default security feature flags" has been added to the 4.4-stable tree
  2019-04-21 14:20   ` Michael Ellerman
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, gregkh, linuxppc-dev, mauricfo,
	mpe, msuchanek, npiggin
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc: Move default security feature flags

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-move-default-security-feature-flags.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:20:06 +1000
Subject: powerpc: Move default security feature flags
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-22-mpe@ellerman.id.au>

From: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>

commit e7347a86830f38dc3e40c8f7e28c04412b12a2e7 upstream.

This moves the definition of the default security feature flags
(i.e., those enabled by default) closer to the security feature flag
definitions.

This can be used to restore current flags to the default flags.

Signed-off-by: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/include/asm/security_features.h |    8 ++++++++
 arch/powerpc/kernel/security.c               |    7 +------
 2 files changed, 9 insertions(+), 6 deletions(-)

--- a/arch/powerpc/include/asm/security_features.h
+++ b/arch/powerpc/include/asm/security_features.h
@@ -63,4 +63,12 @@ static inline bool security_ftr_enabled(
 // Firmware configuration indicates user favours security over performance
 #define SEC_FTR_FAVOUR_SECURITY		0x0000000000000200ull
 
+
+// Features enabled by default
+#define SEC_FTR_DEFAULT \
+	(SEC_FTR_L1D_FLUSH_HV | \
+	 SEC_FTR_L1D_FLUSH_PR | \
+	 SEC_FTR_BNDS_CHK_SPEC_BAR | \
+	 SEC_FTR_FAVOUR_SECURITY)
+
 #endif /* _ASM_POWERPC_SECURITY_FEATURES_H */
--- a/arch/powerpc/kernel/security.c
+++ b/arch/powerpc/kernel/security.c
@@ -11,12 +11,7 @@
 #include <asm/security_features.h>
 
 
-unsigned long powerpc_security_features __read_mostly = \
-	SEC_FTR_L1D_FLUSH_HV | \
-	SEC_FTR_L1D_FLUSH_PR | \
-	SEC_FTR_BNDS_CHK_SPEC_BAR | \
-	SEC_FTR_FAVOUR_SECURITY;
-
+unsigned long powerpc_security_features __read_mostly = SEC_FTR_DEFAULT;
 
 ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr, char *buf)
 {
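The default-mask pattern introduced here also gives later patches a one-assignment way to restore the defaults. A standalone sketch (bit values are illustrative except SEC_FTR_FAVOUR_SECURITY, which is taken from the hunk above; the restore helper's name is hypothetical):

```c
/* Illustrative bit values; only SEC_FTR_FAVOUR_SECURITY matches the hunk. */
#define SEC_FTR_L1D_FLUSH_HV		0x0000000000000001ull /* illustrative */
#define SEC_FTR_L1D_FLUSH_PR		0x0000000000000002ull /* illustrative */
#define SEC_FTR_BNDS_CHK_SPEC_BAR	0x0000000000000004ull /* illustrative */
#define SEC_FTR_FAVOUR_SECURITY		0x0000000000000200ull

/* Default mask, exactly as composed in the patch above. */
#define SEC_FTR_DEFAULT \
	(SEC_FTR_L1D_FLUSH_HV | \
	 SEC_FTR_L1D_FLUSH_PR | \
	 SEC_FTR_BNDS_CHK_SPEC_BAR | \
	 SEC_FTR_FAVOUR_SECURITY)

static unsigned long long powerpc_security_features = SEC_FTR_DEFAULT;

/* Hypothetical helper: restoring the defaults (e.g. on re-setup after
 * a migration) becomes a plain assignment of the named mask. */
static void security_features_restore_defaults(void)
{
	powerpc_security_features = SEC_FTR_DEFAULT;
}
```

This is what "can be used to restore current flags to the default flags" refers to: a single named constant instead of re-listing the four flags at each restore site.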


queue-4.4/powerpc-64s-enable-barrier_nospec-based-on-firmware-settings.patch
queue-4.4/powerpc-powernv-support-firmware-disable-of-rfi-flush.patch
queue-4.4/powerpc-rfi-flush-move-the-logic-to-avoid-a-redo-into-the-debugfs-code.patch
queue-4.4/powerpc-powernv-query-firmware-for-count-cache-flush-settings.patch
queue-4.4/powerpc-64s-wire-up-cpu_show_spectre_v1.patch
queue-4.4/powerpc-64s-add-barrier_nospec.patch
queue-4.4/powerpc-64s-enhance-the-information-in-cpu_show_meltdown.patch
queue-4.4/powerpc-move-default-security-feature-flags.patch
queue-4.4/powerpc-pseries-fix-clearing-of-security-feature-flags.patch
queue-4.4/powerpc-pseries-restore-default-security-feature-flags-on-setup.patch

^ permalink raw reply	[flat|nested] 180+ messages in thread

* Patch "powerpc/pseries: Fix clearing of security feature flags" has been added to the 4.4-stable tree
  2019-04-21 14:20   ` Michael Ellerman
  (?)
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, gregkh, linuxppc-dev, mauricfo,
	mpe, msuchanek, npiggin
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc/pseries: Fix clearing of security feature flags

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-pseries-fix-clearing-of-security-feature-flags.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:20:05 +1000
Subject: powerpc/pseries: Fix clearing of security feature flags
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-21-mpe@ellerman.id.au>

From: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>

commit 0f9bdfe3c77091e8704d2e510eb7c2c2c6cde524 upstream.

The H_CPU_BEHAV_* flags should be checked for in the 'behaviour' field
of 'struct h_cpu_char_result' -- 'character' is for H_CPU_CHAR_*
flags.

Found by playing around with QEMU's implementation of the hypercall:

  H_CPU_CHAR=0xf000000000000000
  H_CPU_BEHAV=0x0000000000000000

  This clears H_CPU_BEHAV_FAVOUR_SECURITY and H_CPU_BEHAV_L1D_FLUSH_PR
  so pseries_setup_rfi_flush() disables 'rfi_flush'; and it also
  clears H_CPU_CHAR_L1D_THREAD_PRIV flag. So there is no RFI flush
  mitigation at all for cpu_show_meltdown() to report; but currently
  it does:

  Original kernel:

    # cat /sys/devices/system/cpu/vulnerabilities/meltdown
    Mitigation: RFI Flush

  Patched kernel:

    # cat /sys/devices/system/cpu/vulnerabilities/meltdown
    Not affected

  H_CPU_CHAR=0x0000000000000000
  H_CPU_BEHAV=0xf000000000000000

  This sets H_CPU_BEHAV_BNDS_CHK_SPEC_BAR so cpu_show_spectre_v1() should
  report vulnerable; but currently it doesn't:

  Original kernel:

    # cat /sys/devices/system/cpu/vulnerabilities/spectre_v1
    Not affected

  Patched kernel:

    # cat /sys/devices/system/cpu/vulnerabilities/spectre_v1
    Vulnerable

Brown-paper-bag-by: Michael Ellerman <mpe@ellerman.id.au>
Fixes: f636c14790ea ("powerpc/pseries: Set or clear security feature flags")
Signed-off-by: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/platforms/pseries/setup.c |    6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

--- a/arch/powerpc/platforms/pseries/setup.c
+++ b/arch/powerpc/platforms/pseries/setup.c
@@ -524,13 +524,13 @@ static void init_cpu_char_feature_flags(
 	 * The features below are enabled by default, so we instead look to see
 	 * if firmware has *disabled* them, and clear them if so.
 	 */
-	if (!(result->character & H_CPU_BEHAV_FAVOUR_SECURITY))
+	if (!(result->behaviour & H_CPU_BEHAV_FAVOUR_SECURITY))
 		security_ftr_clear(SEC_FTR_FAVOUR_SECURITY);
 
-	if (!(result->character & H_CPU_BEHAV_L1D_FLUSH_PR))
+	if (!(result->behaviour & H_CPU_BEHAV_L1D_FLUSH_PR))
 		security_ftr_clear(SEC_FTR_L1D_FLUSH_PR);
 
-	if (!(result->character & H_CPU_BEHAV_BNDS_CHK_SPEC_BAR))
+	if (!(result->behaviour & H_CPU_BEHAV_BNDS_CHK_SPEC_BAR))
 		security_ftr_clear(SEC_FTR_BNDS_CHK_SPEC_BAR);
 }
 




* Patch "powerpc/pseries: Query hypervisor for count cache flush settings" has been added to the 4.4-stable tree
  2019-04-21 14:20   ` Michael Ellerman
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, gregkh, linuxppc-dev, mpe,
	msuchanek, npiggin
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc/pseries: Query hypervisor for count cache flush settings

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-pseries-query-hypervisor-for-count-cache-flush-settings.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:20:27 +1000
Subject: powerpc/pseries: Query hypervisor for count cache flush settings
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-43-mpe@ellerman.id.au>

From: Michael Ellerman <mpe@ellerman.id.au>

commit ba72dc171954b782a79d25e0f4b3ed91090c3b1e upstream.

Use the existing hypercall to determine the appropriate settings for
the count cache flush, and then call the generic powerpc code to set
it up based on the security feature flags.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/include/asm/hvcall.h      |    2 ++
 arch/powerpc/platforms/pseries/setup.c |    7 +++++++
 2 files changed, 9 insertions(+)

--- a/arch/powerpc/include/asm/hvcall.h
+++ b/arch/powerpc/include/asm/hvcall.h
@@ -295,10 +295,12 @@
 #define H_CPU_CHAR_BRANCH_HINTS_HONORED	(1ull << 58) // IBM bit 5
 #define H_CPU_CHAR_THREAD_RECONFIG_CTRL	(1ull << 57) // IBM bit 6
 #define H_CPU_CHAR_COUNT_CACHE_DISABLED	(1ull << 56) // IBM bit 7
+#define H_CPU_CHAR_BCCTR_FLUSH_ASSIST	(1ull << 54) // IBM bit 9
 
 #define H_CPU_BEHAV_FAVOUR_SECURITY	(1ull << 63) // IBM bit 0
 #define H_CPU_BEHAV_L1D_FLUSH_PR	(1ull << 62) // IBM bit 1
 #define H_CPU_BEHAV_BNDS_CHK_SPEC_BAR	(1ull << 61) // IBM bit 2
+#define H_CPU_BEHAV_FLUSH_COUNT_CACHE	(1ull << 58) // IBM bit 5
 
 #ifndef __ASSEMBLY__
 #include <linux/types.h>
--- a/arch/powerpc/platforms/pseries/setup.c
+++ b/arch/powerpc/platforms/pseries/setup.c
@@ -524,6 +524,12 @@ static void init_cpu_char_feature_flags(
 	if (result->character & H_CPU_CHAR_COUNT_CACHE_DISABLED)
 		security_ftr_set(SEC_FTR_COUNT_CACHE_DISABLED);
 
+	if (result->character & H_CPU_CHAR_BCCTR_FLUSH_ASSIST)
+		security_ftr_set(SEC_FTR_BCCTR_FLUSH_ASSIST);
+
+	if (result->behaviour & H_CPU_BEHAV_FLUSH_COUNT_CACHE)
+		security_ftr_set(SEC_FTR_FLUSH_COUNT_CACHE);
+
 	/*
 	 * The features below are enabled by default, so we instead look to see
 	 * if firmware has *disabled* them, and clear them if so.
@@ -574,6 +580,7 @@ void pseries_setup_rfi_flush(void)
 		 security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR);
 
 	setup_rfi_flush(types, enable);
+	setup_count_cache_flush();
 }
 
 static void __init pSeries_setup_arch(void)




* Patch "powerpc/pseries: Restore default security feature flags on setup" has been added to the 4.4-stable tree
  2019-04-21 14:20   ` Michael Ellerman
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, gregkh, linuxppc-dev, mauricfo,
	mpe, msuchanek, npiggin
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc/pseries: Restore default security feature flags on setup

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-pseries-restore-default-security-feature-flags-on-setup.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:20:07 +1000
Subject: powerpc/pseries: Restore default security feature flags on setup
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-23-mpe@ellerman.id.au>

From: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>

commit 6232774f1599028a15418179d17f7df47ede770a upstream.

After migration the security feature flags might have changed (e.g.,
destination system with unpatched firmware), but some flags are not
set/clear again in init_cpu_char_feature_flags() because it assumes
the security flags to be the defaults.

Additionally, if the H_GET_CPU_CHARACTERISTICS hypercall fails then
init_cpu_char_feature_flags() does not run again, which potentially
might leave the system in an insecure or sub-optimal configuration.

So, just restore the security feature flags to the defaults assumed
by init_cpu_char_feature_flags() so it can set/clear them correctly,
and to ensure safe settings are in place in case the hypercall fails.

Fixes: f636c14790ea ("powerpc/pseries: Set or clear security feature flags")
Depends-on: 19887d6a28e2 ("powerpc: Move default security feature flags")
Signed-off-by: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/platforms/pseries/setup.c |   11 +++++++++++
 1 file changed, 11 insertions(+)

--- a/arch/powerpc/platforms/pseries/setup.c
+++ b/arch/powerpc/platforms/pseries/setup.c
@@ -502,6 +502,10 @@ static void __init find_and_init_phbs(vo
 
 static void init_cpu_char_feature_flags(struct h_cpu_char_result *result)
 {
+	/*
+	 * The features below are disabled by default, so we instead look to see
+	 * if firmware has *enabled* them, and set them if so.
+	 */
 	if (result->character & H_CPU_CHAR_SPEC_BAR_ORI31)
 		security_ftr_set(SEC_FTR_SPEC_BAR_ORI31);
 
@@ -541,6 +545,13 @@ void pseries_setup_rfi_flush(void)
 	bool enable;
 	long rc;
 
+	/*
+	 * Set features to the defaults assumed by init_cpu_char_feature_flags()
+	 * so it can set/clear again any features that might have changed after
+	 * migration, and in case the hypercall fails and it is not even called.
+	 */
+	powerpc_security_features = SEC_FTR_DEFAULT;
+
 	rc = plpar_get_cpu_characteristics(&result);
 	if (rc == H_SUCCESS)
 		init_cpu_char_feature_flags(&result);




* Patch "powerpc/pseries: Set or clear security feature flags" has been added to the 4.4-stable tree
  2019-04-21 14:19   ` Michael Ellerman
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, gregkh, linuxppc-dev, mpe,
	msuchanek, npiggin
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc/pseries: Set or clear security feature flags

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-pseries-set-or-clear-security-feature-flags.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:19:57 +1000
Subject: powerpc/pseries: Set or clear security feature flags
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-13-mpe@ellerman.id.au>

From: Michael Ellerman <mpe@ellerman.id.au>

commit f636c14790ead6cc22cf62279b1f8d7e11a67116 upstream.

Now that we have feature flags for security related things, set or
clear them based on what we receive from the hypercall.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/platforms/pseries/setup.c |   43 +++++++++++++++++++++++++++++++++
 1 file changed, 43 insertions(+)

--- a/arch/powerpc/platforms/pseries/setup.c
+++ b/arch/powerpc/platforms/pseries/setup.c
@@ -67,6 +67,7 @@
 #include <asm/eeh.h>
 #include <asm/reg.h>
 #include <asm/plpar_wrappers.h>
+#include <asm/security_features.h>
 
 #include "pseries.h"
 
@@ -499,6 +500,40 @@ static void __init find_and_init_phbs(vo
 	of_pci_check_probe_only();
 }
 
+static void init_cpu_char_feature_flags(struct h_cpu_char_result *result)
+{
+	if (result->character & H_CPU_CHAR_SPEC_BAR_ORI31)
+		security_ftr_set(SEC_FTR_SPEC_BAR_ORI31);
+
+	if (result->character & H_CPU_CHAR_BCCTRL_SERIALISED)
+		security_ftr_set(SEC_FTR_BCCTRL_SERIALISED);
+
+	if (result->character & H_CPU_CHAR_L1D_FLUSH_ORI30)
+		security_ftr_set(SEC_FTR_L1D_FLUSH_ORI30);
+
+	if (result->character & H_CPU_CHAR_L1D_FLUSH_TRIG2)
+		security_ftr_set(SEC_FTR_L1D_FLUSH_TRIG2);
+
+	if (result->character & H_CPU_CHAR_L1D_THREAD_PRIV)
+		security_ftr_set(SEC_FTR_L1D_THREAD_PRIV);
+
+	if (result->character & H_CPU_CHAR_COUNT_CACHE_DISABLED)
+		security_ftr_set(SEC_FTR_COUNT_CACHE_DISABLED);
+
+	/*
+	 * The features below are enabled by default, so we instead look to see
+	 * if firmware has *disabled* them, and clear them if so.
+	 */
+	if (!(result->character & H_CPU_BEHAV_FAVOUR_SECURITY))
+		security_ftr_clear(SEC_FTR_FAVOUR_SECURITY);
+
+	if (!(result->character & H_CPU_BEHAV_L1D_FLUSH_PR))
+		security_ftr_clear(SEC_FTR_L1D_FLUSH_PR);
+
+	if (!(result->character & H_CPU_BEHAV_BNDS_CHK_SPEC_BAR))
+		security_ftr_clear(SEC_FTR_BNDS_CHK_SPEC_BAR);
+}
+
 void pseries_setup_rfi_flush(void)
 {
 	struct h_cpu_char_result result;
@@ -512,6 +547,8 @@ void pseries_setup_rfi_flush(void)
 
 	rc = plpar_get_cpu_characteristics(&result);
 	if (rc == H_SUCCESS) {
+		init_cpu_char_feature_flags(&result);
+
 		if (result.character & H_CPU_CHAR_L1D_FLUSH_TRIG2)
 			types |= L1D_FLUSH_MTTRIG;
 		if (result.character & H_CPU_CHAR_L1D_FLUSH_ORI30)
@@ -522,6 +559,12 @@ void pseries_setup_rfi_flush(void)
 			enable = false;
 	}
 
+	/*
+	 * We're the guest so this doesn't apply to us, clear it to simplify
+	 * handling of it elsewhere.
+	 */
+	security_ftr_clear(SEC_FTR_L1D_FLUSH_HV);
+
 	setup_rfi_flush(types, enable);
 }
 



^ permalink raw reply	[flat|nested] 180+ messages in thread

* Patch "powerpc/pseries: Add new H_GET_CPU_CHARACTERISTICS flags" has been added to the 4.4-stable tree
  2019-04-21 14:19   ` Michael Ellerman
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, gregkh, linuxppc-dev, mpe,
	msuchanek, npiggin
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc/pseries: Add new H_GET_CPU_CHARACTERISTICS flags

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-pseries-add-new-h_get_cpu_characteristics-flags.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:19:54 +1000
Subject: powerpc/pseries: Add new H_GET_CPU_CHARACTERISTICS flags
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-10-mpe@ellerman.id.au>

From: Michael Ellerman <mpe@ellerman.id.au>

commit c4bc36628d7f8b664657d8bd6ad1c44c177880b7 upstream.

Add some additional values which have been defined for the
H_GET_CPU_CHARACTERISTICS hypercall.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/include/asm/hvcall.h |    3 +++
 1 file changed, 3 insertions(+)

--- a/arch/powerpc/include/asm/hvcall.h
+++ b/arch/powerpc/include/asm/hvcall.h
@@ -292,6 +292,9 @@
 #define H_CPU_CHAR_L1D_FLUSH_ORI30	(1ull << 61) // IBM bit 2
 #define H_CPU_CHAR_L1D_FLUSH_TRIG2	(1ull << 60) // IBM bit 3
 #define H_CPU_CHAR_L1D_THREAD_PRIV	(1ull << 59) // IBM bit 4
+#define H_CPU_CHAR_BRANCH_HINTS_HONORED	(1ull << 58) // IBM bit 5
+#define H_CPU_CHAR_THREAD_RECONFIG_CTRL	(1ull << 57) // IBM bit 6
+#define H_CPU_CHAR_COUNT_CACHE_DISABLED	(1ull << 56) // IBM bit 7
 
 #define H_CPU_BEHAV_FAVOUR_SECURITY	(1ull << 63) // IBM bit 0
 #define H_CPU_BEHAV_L1D_FLUSH_PR	(1ull << 62) // IBM bit 1


Patches currently in stable-queue which might be from mpe@ellerman.id.au are

queue-4.4/powerpc-64s-add-support-for-a-store-forwarding-barrier-at-kernel-entry-exit.patch
queue-4.4/powerpc-64-make-stf-barrier-ppc_book3s_64-specific.patch
queue-4.4/powerpc-pseries-set-or-clear-security-feature-flags.patch
queue-4.4/powerpc-fsl-fix-spectre_v2-mitigations-reporting.patch
queue-4.4/powerpc-64s-patch-barrier_nospec-in-modules.patch
queue-4.4/powerpc-pseries-support-firmware-disable-of-rfi-flush.patch
queue-4.4/powerpc-rfi-flush-call-setup_rfi_flush-after-lpm-migration.patch
queue-4.4/powerpc-pseries-query-hypervisor-for-count-cache-flush-settings.patch
queue-4.4/powerpc-powernv-set-or-clear-security-feature-flags.patch
queue-4.4/powerpc-64s-add-support-for-software-count-cache-flush.patch
queue-4.4/powerpc64s-show-ori31-availability-in-spectre_v1-sysfs-file-not-v2.patch
queue-4.4/powerpc-fsl-flush-the-branch-predictor-at-each-kernel-entry-64bit.patch
queue-4.4/powerpc-fsl-update-spectre-v2-reporting.patch
queue-4.4/powerpc-64s-wire-up-cpu_show_spectre_v2.patch
queue-4.4/powerpc-64-make-meltdown-reporting-book3s-64-specific.patch
queue-4.4/powerpc-rfi-flush-make-it-possible-to-call-setup_rfi_flush-again.patch
queue-4.4/powerpc-64s-add-support-for-ori-barrier_nospec-patching.patch
queue-4.4/powerpc-use-barrier_nospec-in-copy_from_user.patch
queue-4.4/powerpc-64s-fix-section-mismatch-warnings-from-setup_rfi_flush.patch
queue-4.4/powerpc-avoid-code-patching-freed-init-sections.patch
queue-4.4/powerpc-fsl-add-macro-to-flush-the-branch-predictor.patch
queue-4.4/powerpc-xmon-add-rfi-flush-related-fields-to-paca-dump.patch
queue-4.4/powerpc-fsl-add-barrier_nospec-implementation-for-nxp-powerpc-book3e.patch
queue-4.4/powerpc-security-fix-spectre_v2-reporting.patch
queue-4.4/powerpc-add-security-feature-flags-for-spectre-meltdown.patch
queue-4.4/powerpc-powernv-use-the-security-flags-in-pnv_setup_rfi_flush.patch
queue-4.4/powerpc-64-disable-the-speculation-barrier-from-the-command-line.patch
queue-4.4/powerpc-fsl-fix-the-flush-of-branch-predictor.patch
queue-4.4/powerpc-pseries-use-the-security-flags-in-pseries_setup_rfi_flush.patch
queue-4.4/powerpc-64-add-config_ppc_barrier_nospec.patch
queue-4.4/powerpc-64s-move-cpu_show_meltdown.patch
queue-4.4/powerpc-64-use-barrier_nospec-in-syscall-entry.patch
queue-4.4/powerpc-fsl-add-nospectre_v2-command-line-argument.patch
queue-4.4/powerpc-64s-add-new-security-feature-flags-for-count-cache-flush.patch
queue-4.4/powerpc-fsl-add-infrastructure-to-fixup-branch-predictor-flush.patch
queue-4.4/powerpc-rfi-flush-differentiate-enabled-and-patched-flush-types.patch
queue-4.4/powerpc-64s-enhance-the-information-in-cpu_show_spectre_v1.patch
queue-4.4/powerpc-64-call-setup_barrier_nospec-from-setup_arch.patch
queue-4.4/powerpc-rfi-flush-always-enable-fallback-flush-on-pseries.patch
queue-4.4/powerpc-64s-improve-rfi-l1-d-cache-flush-fallback.patch
queue-4.4/powerpc-asm-add-a-patch_site-macro-helpers-for-patching-instructions.patch
queue-4.4/powerpc-pseries-add-new-h_get_cpu_characteristics-flags.patch
queue-4.4/powerpc-64s-enable-barrier_nospec-based-on-firmware-settings.patch
queue-4.4/powerpc-powernv-support-firmware-disable-of-rfi-flush.patch
queue-4.4/powerpc-rfi-flush-move-the-logic-to-avoid-a-redo-into-the-debugfs-code.patch
queue-4.4/powerpc-powernv-query-firmware-for-count-cache-flush-settings.patch
queue-4.4/powerpc-64s-wire-up-cpu_show_spectre_v1.patch
queue-4.4/powerpc-64s-add-barrier_nospec.patch
queue-4.4/powerpc-64s-enhance-the-information-in-cpu_show_meltdown.patch
queue-4.4/powerpc-move-default-security-feature-flags.patch
queue-4.4/powerpc-pseries-fix-clearing-of-security-feature-flags.patch
queue-4.4/powerpc-pseries-restore-default-security-feature-flags-on-setup.patch

^ permalink raw reply	[flat|nested] 180+ messages in thread

* Patch "powerpc/pseries: Use the security flags in pseries_setup_rfi_flush()" has been added to the 4.4-stable tree
  2019-04-21 14:20   ` Michael Ellerman
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, gregkh, linuxppc-dev, mpe,
	msuchanek, npiggin
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc/pseries: Use the security flags in pseries_setup_rfi_flush()

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-pseries-use-the-security-flags-in-pseries_setup_rfi_flush.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:20:02 +1000
Subject: powerpc/pseries: Use the security flags in pseries_setup_rfi_flush()
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-18-mpe@ellerman.id.au>

From: Michael Ellerman <mpe@ellerman.id.au>

commit 2e4a16161fcd324b1f9bf6cb6856529f7eaf0689 upstream.

Now that we have the security flags, we can simplify the code in
pseries_setup_rfi_flush(), because the security flags have pessimistic
defaults.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/platforms/pseries/setup.c |   27 ++++++++++++---------------
 1 file changed, 12 insertions(+), 15 deletions(-)

--- a/arch/powerpc/platforms/pseries/setup.c
+++ b/arch/powerpc/platforms/pseries/setup.c
@@ -541,30 +541,27 @@ void pseries_setup_rfi_flush(void)
 	bool enable;
 	long rc;
 
-	/* Enable by default */
-	enable = true;
-	types = L1D_FLUSH_FALLBACK;
-
 	rc = plpar_get_cpu_characteristics(&result);
-	if (rc == H_SUCCESS) {
+	if (rc == H_SUCCESS)
 		init_cpu_char_feature_flags(&result);
 
-		if (result.character & H_CPU_CHAR_L1D_FLUSH_TRIG2)
-			types |= L1D_FLUSH_MTTRIG;
-		if (result.character & H_CPU_CHAR_L1D_FLUSH_ORI30)
-			types |= L1D_FLUSH_ORI;
-
-		if ((!(result.behaviour & H_CPU_BEHAV_L1D_FLUSH_PR)) ||
-		    (!(result.behaviour & H_CPU_BEHAV_FAVOUR_SECURITY)))
-			enable = false;
-	}
-
 	/*
 	 * We're the guest so this doesn't apply to us, clear it to simplify
 	 * handling of it elsewhere.
 	 */
 	security_ftr_clear(SEC_FTR_L1D_FLUSH_HV);
 
+	types = L1D_FLUSH_FALLBACK;
+
+	if (security_ftr_enabled(SEC_FTR_L1D_FLUSH_TRIG2))
+		types |= L1D_FLUSH_MTTRIG;
+
+	if (security_ftr_enabled(SEC_FTR_L1D_FLUSH_ORI30))
+		types |= L1D_FLUSH_ORI;
+
+	enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) && \
+		 security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR);
+
 	setup_rfi_flush(types, enable);
 }
 



^ permalink raw reply	[flat|nested] 180+ messages in thread

* Patch "powerpc/pseries: Support firmware disable of RFI flush" has been added to the 4.4-stable tree
  2019-04-21 14:19   ` Michael Ellerman
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, gregkh, linuxppc-dev, mpe,
	msuchanek, npiggin
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc/pseries: Support firmware disable of RFI flush

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-pseries-support-firmware-disable-of-rfi-flush.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:19:48 +1000
Subject: powerpc/pseries: Support firmware disable of RFI flush
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-4-mpe@ellerman.id.au>

From: Michael Ellerman <mpe@ellerman.id.au>

commit 582605a429e20ae68fd0b041b2e840af296edd08 upstream.

Some versions of firmware will have a setting that can be configured
to disable the RFI flush; add support for it.

Fixes: 8989d56878a7 ("powerpc/pseries: Query hypervisor for RFI flush settings")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/platforms/pseries/setup.c |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

--- a/arch/powerpc/platforms/pseries/setup.c
+++ b/arch/powerpc/platforms/pseries/setup.c
@@ -522,7 +522,8 @@ static void pseries_setup_rfi_flush(void
 		if (types == L1D_FLUSH_NONE)
 			types = L1D_FLUSH_FALLBACK;
 
-		if (!(result.behaviour & H_CPU_BEHAV_L1D_FLUSH_PR))
+		if ((!(result.behaviour & H_CPU_BEHAV_L1D_FLUSH_PR)) ||
+		    (!(result.behaviour & H_CPU_BEHAV_FAVOUR_SECURITY)))
 			enable = false;
 	} else {
 		/* Default to fallback if case hcall is not available */



^ permalink raw reply	[flat|nested] 180+ messages in thread

* Patch "powerpc/rfi-flush: Call setup_rfi_flush() after LPM migration" has been added to the 4.4-stable tree
  2019-04-21 14:19   ` Michael Ellerman
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, gregkh, linuxppc-dev, mauricfo,
	mpe, msuchanek, npiggin
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc/rfi-flush: Call setup_rfi_flush() after LPM migration

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-rfi-flush-call-setup_rfi_flush-after-lpm-migration.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:19:55 +1000
Subject: powerpc/rfi-flush: Call setup_rfi_flush() after LPM migration
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-11-mpe@ellerman.id.au>

From: Michael Ellerman <mpe@ellerman.id.au>

commit 921bc6cf807ceb2ab8005319cf39f33494d6b100 upstream.

We might have migrated to a machine that uses a different flush type,
or doesn't need flushing at all.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/platforms/pseries/mobility.c |    3 +++
 arch/powerpc/platforms/pseries/pseries.h  |    2 ++
 arch/powerpc/platforms/pseries/setup.c    |    2 +-
 3 files changed, 6 insertions(+), 1 deletion(-)

--- a/arch/powerpc/platforms/pseries/mobility.c
+++ b/arch/powerpc/platforms/pseries/mobility.c
@@ -314,6 +314,9 @@ void post_mobility_fixup(void)
 		printk(KERN_ERR "Post-mobility device tree update "
 			"failed: %d\n", rc);
 
+	/* Possibly switch to a new RFI flush type */
+	pseries_setup_rfi_flush();
+
 	return;
 }
 
--- a/arch/powerpc/platforms/pseries/pseries.h
+++ b/arch/powerpc/platforms/pseries/pseries.h
@@ -81,4 +81,6 @@ extern struct pci_controller_ops pseries
 
 unsigned long pseries_memory_block_size(void);
 
+void pseries_setup_rfi_flush(void);
+
 #endif /* _PSERIES_PSERIES_H */
--- a/arch/powerpc/platforms/pseries/setup.c
+++ b/arch/powerpc/platforms/pseries/setup.c
@@ -499,7 +499,7 @@ static void __init find_and_init_phbs(vo
 	of_pci_check_probe_only();
 }
 
-static void pseries_setup_rfi_flush(void)
+void pseries_setup_rfi_flush(void)
 {
 	struct h_cpu_char_result result;
 	enum l1d_flush_type types;



^ permalink raw reply	[flat|nested] 180+ messages in thread

* Patch "powerpc/rfi-flush: Differentiate enabled and patched flush types" has been added to the 4.4-stable tree
  2019-04-21 14:19   ` Michael Ellerman
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, gregkh, linuxppc-dev, mauricfo,
	mpe, msuchanek, npiggin
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc/rfi-flush: Differentiate enabled and patched flush types

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-rfi-flush-differentiate-enabled-and-patched-flush-types.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:19:53 +1000
Subject: powerpc/rfi-flush: Differentiate enabled and patched flush types
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-9-mpe@ellerman.id.au>

From: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>

commit 0063d61ccfc011f379a31acaeba6de7c926fed2c upstream.

Currently the rfi-flush messages print 'Using <type> flush' for all
enabled_flush_types, but that is not necessarily true: the fallback
flush is now always enabled on pseries, and the fixup function
overwrites its nop/branch slot with other flush types, if available.

So, replace the 'Using <type> flush' messages with '<type> flush is
available'.

Also, print the patched flush types in the fixup function, so users
can know what is (not) being used (e.g., the slower, fallback flush,
or no flush type at all if flush is disabled via the debugfs switch).

Suggested-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/kernel/setup_64.c    |    6 +++---
 arch/powerpc/lib/feature-fixups.c |    9 ++++++++-
 2 files changed, 11 insertions(+), 4 deletions(-)

--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -911,15 +911,15 @@ static void init_fallback_flush(void)
 void setup_rfi_flush(enum l1d_flush_type types, bool enable)
 {
 	if (types & L1D_FLUSH_FALLBACK) {
-		pr_info("rfi-flush: Using fallback displacement flush\n");
+		pr_info("rfi-flush: fallback displacement flush available\n");
 		init_fallback_flush();
 	}
 
 	if (types & L1D_FLUSH_ORI)
-		pr_info("rfi-flush: Using ori type flush\n");
+		pr_info("rfi-flush: ori type flush available\n");
 
 	if (types & L1D_FLUSH_MTTRIG)
-		pr_info("rfi-flush: Using mttrig type flush\n");
+		pr_info("rfi-flush: mttrig type flush available\n");
 
 	enabled_flush_types = types;
 
--- a/arch/powerpc/lib/feature-fixups.c
+++ b/arch/powerpc/lib/feature-fixups.c
@@ -151,7 +151,14 @@ void do_rfi_flush_fixups(enum l1d_flush_
 		patch_instruction(dest + 2, instrs[2]);
 	}
 
-	printk(KERN_DEBUG "rfi-flush: patched %d locations\n", i);
+	printk(KERN_DEBUG "rfi-flush: patched %d locations (%s flush)\n", i,
+		(types == L1D_FLUSH_NONE)       ? "no" :
+		(types == L1D_FLUSH_FALLBACK)   ? "fallback displacement" :
+		(types &  L1D_FLUSH_ORI)        ? (types & L1D_FLUSH_MTTRIG)
+							? "ori+mttrig type"
+							: "ori type" :
+		(types &  L1D_FLUSH_MTTRIG)     ? "mttrig type"
+						: "unknown");
 }
 #endif /* CONFIG_PPC_BOOK3S_64 */
 


queue-4.4/powerpc-powernv-support-firmware-disable-of-rfi-flush.patch
queue-4.4/powerpc-rfi-flush-move-the-logic-to-avoid-a-redo-into-the-debugfs-code.patch
queue-4.4/powerpc-powernv-query-firmware-for-count-cache-flush-settings.patch
queue-4.4/powerpc-64s-wire-up-cpu_show_spectre_v1.patch
queue-4.4/powerpc-64s-add-barrier_nospec.patch
queue-4.4/powerpc-64s-enhance-the-information-in-cpu_show_meltdown.patch
queue-4.4/powerpc-move-default-security-feature-flags.patch
queue-4.4/powerpc-pseries-fix-clearing-of-security-feature-flags.patch
queue-4.4/powerpc-pseries-restore-default-security-feature-flags-on-setup.patch

^ permalink raw reply	[flat|nested] 180+ messages in thread

* Patch "powerpc/rfi-flush: Always enable fallback flush on pseries" has been added to the 4.4-stable tree
  2019-04-21 14:19   ` Michael Ellerman
  (?)
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, gregkh, linuxppc-dev, mauricfo,
	mpe, msuchanek, npiggin
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc/rfi-flush: Always enable fallback flush on pseries

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-rfi-flush-always-enable-fallback-flush-on-pseries.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:19:52 +1000
Subject: powerpc/rfi-flush: Always enable fallback flush on pseries
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-8-mpe@ellerman.id.au>

From: Michael Ellerman <mpe@ellerman.id.au>

commit 84749a58b6e382f109abf1e734bc4dd43c2c25bb upstream.

This ensures the fallback flush area is always allocated on pseries,
so that if an LPAR is migrated from a patched to an unpatched system,
it is still possible to enable the fallback flush on the target system.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/platforms/pseries/setup.c |   10 +---------
 1 file changed, 1 insertion(+), 9 deletions(-)

--- a/arch/powerpc/platforms/pseries/setup.c
+++ b/arch/powerpc/platforms/pseries/setup.c
@@ -508,26 +508,18 @@ static void pseries_setup_rfi_flush(void
 
 	/* Enable by default */
 	enable = true;
+	types = L1D_FLUSH_FALLBACK;
 
 	rc = plpar_get_cpu_characteristics(&result);
 	if (rc == H_SUCCESS) {
-		types = L1D_FLUSH_NONE;
-
 		if (result.character & H_CPU_CHAR_L1D_FLUSH_TRIG2)
 			types |= L1D_FLUSH_MTTRIG;
 		if (result.character & H_CPU_CHAR_L1D_FLUSH_ORI30)
 			types |= L1D_FLUSH_ORI;
 
-		/* Use fallback if nothing set in hcall */
-		if (types == L1D_FLUSH_NONE)
-			types = L1D_FLUSH_FALLBACK;
-
 		if ((!(result.behaviour & H_CPU_BEHAV_L1D_FLUSH_PR)) ||
 		    (!(result.behaviour & H_CPU_BEHAV_FAVOUR_SECURITY)))
 			enable = false;
-	} else {
-		/* Default to fallback if case hcall is not available */
-		types = L1D_FLUSH_FALLBACK;
 	}
 
 	setup_rfi_flush(types, enable);



* Patch "powerpc/rfi-flush: Move the logic to avoid a redo into the debugfs code" has been added to the 4.4-stable tree
  2019-04-21 14:19   ` Michael Ellerman
  (?)
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, gregkh, linuxppc-dev, mauricfo,
	mpe, msuchanek, npiggin
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc/rfi-flush: Move the logic to avoid a redo into the debugfs code

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-rfi-flush-move-the-logic-to-avoid-a-redo-into-the-debugfs-code.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:19:50 +1000
Subject: powerpc/rfi-flush: Move the logic to avoid a redo into the debugfs code
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-6-mpe@ellerman.id.au>

From: Michael Ellerman <mpe@ellerman.id.au>

commit 1e2a9fc7496955faacbbed49461d611b704a7505 upstream.

rfi_flush_enable() includes a check to see if we're already
enabled (or disabled), and in that case does nothing.

But that means calling setup_rfi_flush() a second time doesn't actually
work, which is a bit confusing.

Move that check into the debugfs code, where it really belongs.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/kernel/setup_64.c |   13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -873,9 +873,6 @@ static void do_nothing(void *unused)
 
 void rfi_flush_enable(bool enable)
 {
-	if (rfi_flush == enable)
-		return;
-
 	if (enable) {
 		do_rfi_flush_fixups(enabled_flush_types);
 		on_each_cpu(do_nothing, NULL, 1);
@@ -929,13 +926,19 @@ void __init setup_rfi_flush(enum l1d_flu
 #ifdef CONFIG_DEBUG_FS
 static int rfi_flush_set(void *data, u64 val)
 {
+	bool enable;
+
 	if (val == 1)
-		rfi_flush_enable(true);
+		enable = true;
 	else if (val == 0)
-		rfi_flush_enable(false);
+		enable = false;
 	else
 		return -EINVAL;
 
+	/* Only do anything if we're changing state */
+	if (enable != rfi_flush)
+		rfi_flush_enable(enable);
+
 	return 0;
 }
 



* Patch "powerpc/security: Fix spectre_v2 reporting" has been added to the 4.4-stable tree
  2019-04-21 14:20   ` Michael Ellerman
  (?)
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, gregkh, linuxppc-dev, mikey,
	mpe, msuchanek, npiggin
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc/security: Fix spectre_v2 reporting

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-security-fix-spectre_v2-reporting.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:20:36 +1000
Subject: powerpc/security: Fix spectre_v2 reporting
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-52-mpe@ellerman.id.au>

From: Michael Ellerman <mpe@ellerman.id.au>

commit 92edf8df0ff2ae86cc632eeca0e651fd8431d40d upstream.

When I updated the spectre_v2 reporting to handle software count cache
flush I got the logic wrong when there's no software count cache
enabled at all.

The result is that on systems with the software count cache flush
disabled we print:

  Mitigation: Indirect branch cache disabled, Software count cache flush

Which correctly indicates that the count cache is disabled, but
incorrectly says the software count cache flush is enabled.

The root of the problem is that we are trying to handle all
combinations of options. But we know now that we only expect to see
the software count cache flush enabled if the other options are false.

So split the two cases, which simplifies the logic and fixes the bug.
We were also missing a space before "(hardware accelerated)".

The result is we see one of:

  Mitigation: Indirect branch serialisation (kernel only)
  Mitigation: Indirect branch cache disabled
  Mitigation: Software count cache flush
  Mitigation: Software count cache flush (hardware accelerated)

Fixes: ee13cb249fab ("powerpc/64s: Add support for software count cache flush")
Cc: stable@vger.kernel.org # v4.19+
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Michael Neuling <mikey@neuling.org>
Reviewed-by: Diana Craciun <diana.craciun@nxp.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/kernel/security.c |   23 ++++++++---------------
 1 file changed, 8 insertions(+), 15 deletions(-)

--- a/arch/powerpc/kernel/security.c
+++ b/arch/powerpc/kernel/security.c
@@ -190,29 +190,22 @@ ssize_t cpu_show_spectre_v2(struct devic
 	bcs = security_ftr_enabled(SEC_FTR_BCCTRL_SERIALISED);
 	ccd = security_ftr_enabled(SEC_FTR_COUNT_CACHE_DISABLED);
 
-	if (bcs || ccd || count_cache_flush_type != COUNT_CACHE_FLUSH_NONE) {
-		bool comma = false;
+	if (bcs || ccd) {
 		seq_buf_printf(&s, "Mitigation: ");
 
-		if (bcs) {
+		if (bcs)
 			seq_buf_printf(&s, "Indirect branch serialisation (kernel only)");
-			comma = true;
-		}
 
-		if (ccd) {
-			if (comma)
-				seq_buf_printf(&s, ", ");
-			seq_buf_printf(&s, "Indirect branch cache disabled");
-			comma = true;
-		}
-
-		if (comma)
+		if (bcs && ccd)
 			seq_buf_printf(&s, ", ");
 
-		seq_buf_printf(&s, "Software count cache flush");
+		if (ccd)
+			seq_buf_printf(&s, "Indirect branch cache disabled");
+	} else if (count_cache_flush_type != COUNT_CACHE_FLUSH_NONE) {
+		seq_buf_printf(&s, "Mitigation: Software count cache flush");
 
 		if (count_cache_flush_type == COUNT_CACHE_FLUSH_HW)
-			seq_buf_printf(&s, "(hardware accelerated)");
+			seq_buf_printf(&s, " (hardware accelerated)");
 	} else if (btb_flush_enabled) {
 		seq_buf_printf(&s, "Mitigation: Branch predictor state flush");
 	} else {



* Patch "powerpc/rfi-flush: Make it possible to call setup_rfi_flush() again" has been added to the 4.4-stable tree
  2019-04-21 14:19   ` Michael Ellerman
  (?)
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, gregkh, linuxppc-dev, mauricfo,
	mpe, msuchanek, npiggin
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc/rfi-flush: Make it possible to call setup_rfi_flush() again

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-rfi-flush-make-it-possible-to-call-setup_rfi_flush-again.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:19:51 +1000
Subject: powerpc/rfi-flush: Make it possible to call setup_rfi_flush() again
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-7-mpe@ellerman.id.au>

From: Michael Ellerman <mpe@ellerman.id.au>

commit abf110f3e1cea40f5ea15e85f5d67c39c14568a7 upstream.

For PowerVM migration we want to be able to call setup_rfi_flush()
again after we've migrated the partition.

To support that we need to check that we're not trying to allocate the
fallback flush area after memblock has gone away (i.e., boot-time only).

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/include/asm/setup.h |    2 +-
 arch/powerpc/kernel/setup_64.c   |    6 +++++-
 2 files changed, 6 insertions(+), 2 deletions(-)

--- a/arch/powerpc/include/asm/setup.h
+++ b/arch/powerpc/include/asm/setup.h
@@ -36,7 +36,7 @@ enum l1d_flush_type {
 	L1D_FLUSH_MTTRIG	= 0x8,
 };
 
-void __init setup_rfi_flush(enum l1d_flush_type, bool enable);
+void setup_rfi_flush(enum l1d_flush_type, bool enable);
 void do_rfi_flush_fixups(enum l1d_flush_type types);
 
 #endif /* !__ASSEMBLY__ */
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -887,6 +887,10 @@ static void init_fallback_flush(void)
 	u64 l1d_size, limit;
 	int cpu;
 
+	/* Only allocate the fallback flush area once (at boot time). */
+	if (l1d_flush_fallback_area)
+		return;
+
 	l1d_size = ppc64_caches.dsize;
 	limit = min(safe_stack_limit(), ppc64_rma_size);
 
@@ -904,7 +908,7 @@ static void init_fallback_flush(void)
 	}
 }
 
-void __init setup_rfi_flush(enum l1d_flush_type types, bool enable)
+void setup_rfi_flush(enum l1d_flush_type types, bool enable)
 {
 	if (types & L1D_FLUSH_FALLBACK) {
 		pr_info("rfi-flush: Using fallback displacement flush\n");



* Patch "powerpc/xmon: Add RFI flush related fields to paca dump" has been added to the 4.4-stable tree
  2019-04-21 14:19   ` Michael Ellerman
  (?)
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, gregkh, linuxppc-dev, mpe,
	msuchanek, npiggin
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc/xmon: Add RFI flush related fields to paca dump

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-xmon-add-rfi-flush-related-fields-to-paca-dump.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feel it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:19:46 +1000
Subject: powerpc/xmon: Add RFI flush related fields to paca dump
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-2-mpe@ellerman.id.au>

From: Michael Ellerman <mpe@ellerman.id.au>

commit 274920a3ecd5f43af0cc380bc0a9ee73a52b9f8a upstream.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/xmon/xmon.c |    4 ++++
 1 file changed, 4 insertions(+)

--- a/arch/powerpc/xmon/xmon.c
+++ b/arch/powerpc/xmon/xmon.c
@@ -2144,6 +2144,10 @@ static void dump_one_paca(int cpu)
 	DUMP(p, slb_cache_ptr, "x");
 	for (i = 0; i < SLB_CACHE_ENTRIES; i++)
 		printf(" slb_cache[%d]:        = 0x%016lx\n", i, p->slb_cache[i]);
+
+	DUMP(p, rfi_flush_fallback_area, "px");
+	DUMP(p, l1d_flush_congruence, "llx");
+	DUMP(p, l1d_flush_sets, "llx");
 #endif
 	DUMP(p, dscr_default, "llx");
 #ifdef CONFIG_PPC_BOOK3E


Patches currently in stable-queue which might be from mpe@ellerman.id.au are

queue-4.4/powerpc-64s-add-support-for-a-store-forwarding-barrier-at-kernel-entry-exit.patch
queue-4.4/powerpc-64-make-stf-barrier-ppc_book3s_64-specific.patch
queue-4.4/powerpc-pseries-set-or-clear-security-feature-flags.patch
queue-4.4/powerpc-fsl-fix-spectre_v2-mitigations-reporting.patch
queue-4.4/powerpc-64s-patch-barrier_nospec-in-modules.patch
queue-4.4/powerpc-pseries-support-firmware-disable-of-rfi-flush.patch
queue-4.4/powerpc-rfi-flush-call-setup_rfi_flush-after-lpm-migration.patch
queue-4.4/powerpc-pseries-query-hypervisor-for-count-cache-flush-settings.patch
queue-4.4/powerpc-powernv-set-or-clear-security-feature-flags.patch
queue-4.4/powerpc-64s-add-support-for-software-count-cache-flush.patch
queue-4.4/powerpc64s-show-ori31-availability-in-spectre_v1-sysfs-file-not-v2.patch
queue-4.4/powerpc-fsl-flush-the-branch-predictor-at-each-kernel-entry-64bit.patch
queue-4.4/powerpc-fsl-update-spectre-v2-reporting.patch
queue-4.4/powerpc-64s-wire-up-cpu_show_spectre_v2.patch
queue-4.4/powerpc-64-make-meltdown-reporting-book3s-64-specific.patch
queue-4.4/powerpc-rfi-flush-make-it-possible-to-call-setup_rfi_flush-again.patch
queue-4.4/powerpc-64s-add-support-for-ori-barrier_nospec-patching.patch
queue-4.4/powerpc-use-barrier_nospec-in-copy_from_user.patch
queue-4.4/powerpc-64s-fix-section-mismatch-warnings-from-setup_rfi_flush.patch
queue-4.4/powerpc-avoid-code-patching-freed-init-sections.patch
queue-4.4/powerpc-fsl-add-macro-to-flush-the-branch-predictor.patch
queue-4.4/powerpc-xmon-add-rfi-flush-related-fields-to-paca-dump.patch
queue-4.4/powerpc-fsl-add-barrier_nospec-implementation-for-nxp-powerpc-book3e.patch
queue-4.4/powerpc-security-fix-spectre_v2-reporting.patch
queue-4.4/powerpc-add-security-feature-flags-for-spectre-meltdown.patch
queue-4.4/powerpc-powernv-use-the-security-flags-in-pnv_setup_rfi_flush.patch
queue-4.4/powerpc-64-disable-the-speculation-barrier-from-the-command-line.patch
queue-4.4/powerpc-fsl-fix-the-flush-of-branch-predictor.patch
queue-4.4/powerpc-pseries-use-the-security-flags-in-pseries_setup_rfi_flush.patch
queue-4.4/powerpc-64-add-config_ppc_barrier_nospec.patch
queue-4.4/powerpc-64s-move-cpu_show_meltdown.patch
queue-4.4/powerpc-64-use-barrier_nospec-in-syscall-entry.patch
queue-4.4/powerpc-fsl-add-nospectre_v2-command-line-argument.patch
queue-4.4/powerpc-64s-add-new-security-feature-flags-for-count-cache-flush.patch
queue-4.4/powerpc-fsl-add-infrastructure-to-fixup-branch-predictor-flush.patch
queue-4.4/powerpc-rfi-flush-differentiate-enabled-and-patched-flush-types.patch
queue-4.4/powerpc-64s-enhance-the-information-in-cpu_show_spectre_v1.patch
queue-4.4/powerpc-64-call-setup_barrier_nospec-from-setup_arch.patch
queue-4.4/powerpc-rfi-flush-always-enable-fallback-flush-on-pseries.patch
queue-4.4/powerpc-64s-improve-rfi-l1-d-cache-flush-fallback.patch
queue-4.4/powerpc-asm-add-a-patch_site-macro-helpers-for-patching-instructions.patch
queue-4.4/powerpc-pseries-add-new-h_get_cpu_characteristics-flags.patch
queue-4.4/powerpc-64s-enable-barrier_nospec-based-on-firmware-settings.patch
queue-4.4/powerpc-powernv-support-firmware-disable-of-rfi-flush.patch
queue-4.4/powerpc-rfi-flush-move-the-logic-to-avoid-a-redo-into-the-debugfs-code.patch
queue-4.4/powerpc-powernv-query-firmware-for-count-cache-flush-settings.patch
queue-4.4/powerpc-64s-wire-up-cpu_show_spectre_v1.patch
queue-4.4/powerpc-64s-add-barrier_nospec.patch
queue-4.4/powerpc-64s-enhance-the-information-in-cpu_show_meltdown.patch
queue-4.4/powerpc-move-default-security-feature-flags.patch
queue-4.4/powerpc-pseries-fix-clearing-of-security-feature-flags.patch
queue-4.4/powerpc-pseries-restore-default-security-feature-flags-on-setup.patch


* Patch "powerpc64s: Show ori31 availability in spectre_v1 sysfs file not v2" has been added to the 4.4-stable tree
  2019-04-21 14:20   ` Michael Ellerman
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, gregkh, linuxppc-dev, mpe,
	msuchanek, npiggin
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc64s: Show ori31 availability in spectre_v1 sysfs file not v2

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc64s-show-ori31-availability-in-spectre_v1-sysfs-file-not-v2.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feel it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:20:17 +1000
Subject: powerpc64s: Show ori31 availability in spectre_v1 sysfs file not v2
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-33-mpe@ellerman.id.au>

From: Michael Ellerman <mpe@ellerman.id.au>

commit 6d44acae1937b81cf8115ada8958e04f601f3f2e upstream.

When I added the spectre_v2 information in sysfs, I included the
availability of the ori31 speculation barrier.

Although the ori31 barrier can be used to mitigate v2, it's primarily
intended as a spectre v1 mitigation. Spectre v2 is mitigated by
hardware changes.

So rework the sysfs files to show the ori31 information in the
spectre_v1 file, rather than v2.

Currently we display eg:

  $ grep . spectre_v*
  spectre_v1:Mitigation: __user pointer sanitization
  spectre_v2:Mitigation: Indirect branch cache disabled, ori31 speculation barrier enabled

After:

  $ grep . spectre_v*
  spectre_v1:Mitigation: __user pointer sanitization, ori31 speculation barrier enabled
  spectre_v2:Mitigation: Indirect branch cache disabled

Fixes: d6fbe1c55c55 ("powerpc/64s: Wire up cpu_show_spectre_v2()")
Cc: stable@vger.kernel.org # v4.17+
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/kernel/security.c |   27 +++++++++++++++++----------
 1 file changed, 17 insertions(+), 10 deletions(-)

--- a/arch/powerpc/kernel/security.c
+++ b/arch/powerpc/kernel/security.c
@@ -118,25 +118,35 @@ ssize_t cpu_show_meltdown(struct device
 
 ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr, char *buf)
 {
-	if (!security_ftr_enabled(SEC_FTR_BNDS_CHK_SPEC_BAR))
-		return sprintf(buf, "Not affected\n");
+	struct seq_buf s;
+
+	seq_buf_init(&s, buf, PAGE_SIZE - 1);
+
+	if (security_ftr_enabled(SEC_FTR_BNDS_CHK_SPEC_BAR)) {
+		if (barrier_nospec_enabled)
+			seq_buf_printf(&s, "Mitigation: __user pointer sanitization");
+		else
+			seq_buf_printf(&s, "Vulnerable");
 
-	if (barrier_nospec_enabled)
-		return sprintf(buf, "Mitigation: __user pointer sanitization\n");
+		if (security_ftr_enabled(SEC_FTR_SPEC_BAR_ORI31))
+			seq_buf_printf(&s, ", ori31 speculation barrier enabled");
 
-	return sprintf(buf, "Vulnerable\n");
+		seq_buf_printf(&s, "\n");
+	} else
+		seq_buf_printf(&s, "Not affected\n");
+
+	return s.len;
 }
 
 ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr, char *buf)
 {
-	bool bcs, ccd, ori;
 	struct seq_buf s;
+	bool bcs, ccd;
 
 	seq_buf_init(&s, buf, PAGE_SIZE - 1);
 
 	bcs = security_ftr_enabled(SEC_FTR_BCCTRL_SERIALISED);
 	ccd = security_ftr_enabled(SEC_FTR_COUNT_CACHE_DISABLED);
-	ori = security_ftr_enabled(SEC_FTR_SPEC_BAR_ORI31);
 
 	if (bcs || ccd) {
 		seq_buf_printf(&s, "Mitigation: ");
@@ -152,9 +162,6 @@ ssize_t cpu_show_spectre_v2(struct devic
 	} else
 		seq_buf_printf(&s, "Vulnerable");
 
-	if (ori)
-		seq_buf_printf(&s, ", ori31 speculation barrier enabled");
-
 	seq_buf_printf(&s, "\n");
 
 	return s.len;


Patches currently in stable-queue which might be from mpe@ellerman.id.au are

queue-4.4/powerpc-64s-add-support-for-a-store-forwarding-barrier-at-kernel-entry-exit.patch
queue-4.4/powerpc-64-make-stf-barrier-ppc_book3s_64-specific.patch
queue-4.4/powerpc-pseries-set-or-clear-security-feature-flags.patch
queue-4.4/powerpc-fsl-fix-spectre_v2-mitigations-reporting.patch
queue-4.4/powerpc-64s-patch-barrier_nospec-in-modules.patch
queue-4.4/powerpc-pseries-support-firmware-disable-of-rfi-flush.patch
queue-4.4/powerpc-rfi-flush-call-setup_rfi_flush-after-lpm-migration.patch
queue-4.4/powerpc-pseries-query-hypervisor-for-count-cache-flush-settings.patch
queue-4.4/powerpc-powernv-set-or-clear-security-feature-flags.patch
queue-4.4/powerpc-64s-add-support-for-software-count-cache-flush.patch
queue-4.4/powerpc64s-show-ori31-availability-in-spectre_v1-sysfs-file-not-v2.patch
queue-4.4/powerpc-fsl-flush-the-branch-predictor-at-each-kernel-entry-64bit.patch
queue-4.4/powerpc-fsl-update-spectre-v2-reporting.patch
queue-4.4/powerpc-64s-wire-up-cpu_show_spectre_v2.patch
queue-4.4/powerpc-64-make-meltdown-reporting-book3s-64-specific.patch
queue-4.4/powerpc-rfi-flush-make-it-possible-to-call-setup_rfi_flush-again.patch
queue-4.4/powerpc-64s-add-support-for-ori-barrier_nospec-patching.patch
queue-4.4/powerpc-use-barrier_nospec-in-copy_from_user.patch
queue-4.4/powerpc-64s-fix-section-mismatch-warnings-from-setup_rfi_flush.patch
queue-4.4/powerpc-avoid-code-patching-freed-init-sections.patch
queue-4.4/powerpc-fsl-add-macro-to-flush-the-branch-predictor.patch
queue-4.4/powerpc-xmon-add-rfi-flush-related-fields-to-paca-dump.patch
queue-4.4/powerpc-fsl-add-barrier_nospec-implementation-for-nxp-powerpc-book3e.patch
queue-4.4/powerpc-security-fix-spectre_v2-reporting.patch
queue-4.4/powerpc-add-security-feature-flags-for-spectre-meltdown.patch
queue-4.4/powerpc-powernv-use-the-security-flags-in-pnv_setup_rfi_flush.patch
queue-4.4/powerpc-64-disable-the-speculation-barrier-from-the-command-line.patch
queue-4.4/powerpc-fsl-fix-the-flush-of-branch-predictor.patch
queue-4.4/powerpc-pseries-use-the-security-flags-in-pseries_setup_rfi_flush.patch
queue-4.4/powerpc-64-add-config_ppc_barrier_nospec.patch
queue-4.4/powerpc-64s-move-cpu_show_meltdown.patch
queue-4.4/powerpc-64-use-barrier_nospec-in-syscall-entry.patch
queue-4.4/powerpc-fsl-add-nospectre_v2-command-line-argument.patch
queue-4.4/powerpc-64s-add-new-security-feature-flags-for-count-cache-flush.patch
queue-4.4/powerpc-fsl-add-infrastructure-to-fixup-branch-predictor-flush.patch
queue-4.4/powerpc-rfi-flush-differentiate-enabled-and-patched-flush-types.patch
queue-4.4/powerpc-64s-enhance-the-information-in-cpu_show_spectre_v1.patch
queue-4.4/powerpc-64-call-setup_barrier_nospec-from-setup_arch.patch
queue-4.4/powerpc-rfi-flush-always-enable-fallback-flush-on-pseries.patch
queue-4.4/powerpc-64s-improve-rfi-l1-d-cache-flush-fallback.patch
queue-4.4/powerpc-asm-add-a-patch_site-macro-helpers-for-patching-instructions.patch
queue-4.4/powerpc-pseries-add-new-h_get_cpu_characteristics-flags.patch
queue-4.4/powerpc-64s-enable-barrier_nospec-based-on-firmware-settings.patch
queue-4.4/powerpc-powernv-support-firmware-disable-of-rfi-flush.patch
queue-4.4/powerpc-rfi-flush-move-the-logic-to-avoid-a-redo-into-the-debugfs-code.patch
queue-4.4/powerpc-powernv-query-firmware-for-count-cache-flush-settings.patch
queue-4.4/powerpc-64s-wire-up-cpu_show_spectre_v1.patch
queue-4.4/powerpc-64s-add-barrier_nospec.patch
queue-4.4/powerpc-64s-enhance-the-information-in-cpu_show_meltdown.patch
queue-4.4/powerpc-move-default-security-feature-flags.patch
queue-4.4/powerpc-pseries-fix-clearing-of-security-feature-flags.patch
queue-4.4/powerpc-pseries-restore-default-security-feature-flags-on-setup.patch


* Patch "powerpc: Use barrier_nospec in copy_from_user()" has been added to the 4.4-stable tree
  2019-04-21 14:20   ` Michael Ellerman
@ 2019-04-29  9:51   ` gregkh
  -1 siblings, 0 replies; 180+ messages in thread
From: gregkh @ 2019-04-29  9:51 UTC (permalink / raw)
  To: christophe.leroy, diana.craciun, gregkh, linuxppc-dev, mpe,
	msuchanek, npiggin
  Cc: stable-commits


This is a note to let you know that I've just added the patch titled

    powerpc: Use barrier_nospec in copy_from_user()

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-use-barrier_nospec-in-copy_from_user.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feel it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Mon 29 Apr 2019 11:38:37 AM CEST
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Mon, 22 Apr 2019 00:20:15 +1000
Subject: powerpc: Use barrier_nospec in copy_from_user()
To: stable@vger.kernel.org, gregkh@linuxfoundation.org
Cc: linuxppc-dev@ozlabs.org, diana.craciun@nxp.com, msuchanek@suse.de, npiggin@gmail.com, christophe.leroy@c-s.fr
Message-ID: <20190421142037.21881-31-mpe@ellerman.id.au>

From: Michael Ellerman <mpe@ellerman.id.au>

commit ddf35cf3764b5a182b178105f57515b42e2634f8 upstream.

Based on the x86 commit doing the same.

See commit 304ec1b05031 ("x86/uaccess: Use __uaccess_begin_nospec()
and uaccess_try_nospec") and b3bbfb3fb5d2 ("x86: Introduce
__uaccess_begin_nospec() and uaccess_try_nospec") for more detail.

In all cases we are ordering the load from the potentially
user-controlled pointer vs a previous branch based on an access_ok()
check or similar.

Based on a patch from Michal Suchanek.

Signed-off-by: Michal Suchanek <msuchanek@suse.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/include/asm/uaccess.h |   18 ++++++++++++++++--
 1 file changed, 16 insertions(+), 2 deletions(-)

--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -269,6 +269,7 @@ do {								\
 	__chk_user_ptr(ptr);					\
 	if (!is_kernel_addr((unsigned long)__gu_addr))		\
 		might_fault();					\
+	barrier_nospec();					\
 	__get_user_size(__gu_val, __gu_addr, (size), __gu_err);	\
 	(x) = (__typeof__(*(ptr)))__gu_val;			\
 	__gu_err;						\
@@ -283,6 +284,7 @@ do {								\
 	__chk_user_ptr(ptr);					\
 	if (!is_kernel_addr((unsigned long)__gu_addr))		\
 		might_fault();					\
+	barrier_nospec();					\
 	__get_user_size(__gu_val, __gu_addr, (size), __gu_err);	\
 	(x) = (__force __typeof__(*(ptr)))__gu_val;			\
 	__gu_err;						\
@@ -295,8 +297,10 @@ do {								\
 	unsigned long  __gu_val = 0;					\
 	__typeof__(*(ptr)) __user *__gu_addr = (ptr);		\
 	might_fault();							\
-	if (access_ok(VERIFY_READ, __gu_addr, (size)))			\
+	if (access_ok(VERIFY_READ, __gu_addr, (size))) {		\
+		barrier_nospec();					\
 		__get_user_size(__gu_val, __gu_addr, (size), __gu_err);	\
+	}								\
 	(x) = (__force __typeof__(*(ptr)))__gu_val;				\
 	__gu_err;							\
 })
@@ -307,6 +311,7 @@ do {								\
 	unsigned long __gu_val;					\
 	__typeof__(*(ptr)) __user *__gu_addr = (ptr);	\
 	__chk_user_ptr(ptr);					\
+	barrier_nospec();					\
 	__get_user_size(__gu_val, __gu_addr, (size), __gu_err);	\
 	(x) = (__force __typeof__(*(ptr)))__gu_val;			\
 	__gu_err;						\
@@ -323,8 +328,10 @@ extern unsigned long __copy_tofrom_user(
 static inline unsigned long copy_from_user(void *to,
 		const void __user *from, unsigned long n)
 {
-	if (likely(access_ok(VERIFY_READ, from, n)))
+	if (likely(access_ok(VERIFY_READ, from, n))) {
+		barrier_nospec();
 		return __copy_tofrom_user((__force void __user *)to, from, n);
+	}
 	memset(to, 0, n);
 	return n;
 }
@@ -359,21 +366,27 @@ static inline unsigned long __copy_from_
 
 		switch (n) {
 		case 1:
+			barrier_nospec();
 			__get_user_size(*(u8 *)to, from, 1, ret);
 			break;
 		case 2:
+			barrier_nospec();
 			__get_user_size(*(u16 *)to, from, 2, ret);
 			break;
 		case 4:
+			barrier_nospec();
 			__get_user_size(*(u32 *)to, from, 4, ret);
 			break;
 		case 8:
+			barrier_nospec();
 			__get_user_size(*(u64 *)to, from, 8, ret);
 			break;
 		}
 		if (ret == 0)
 			return 0;
 	}
+
+	barrier_nospec();
 	return __copy_tofrom_user((__force void __user *)to, from, n);
 }
 
@@ -400,6 +413,7 @@ static inline unsigned long __copy_to_us
 		if (ret == 0)
 			return 0;
 	}
+
 	return __copy_tofrom_user(to, (__force const void __user *)from, n);
 }
 


Patches currently in stable-queue which might be from mpe@ellerman.id.au are

queue-4.4/powerpc-64s-add-support-for-a-store-forwarding-barrier-at-kernel-entry-exit.patch
queue-4.4/powerpc-64-make-stf-barrier-ppc_book3s_64-specific.patch
queue-4.4/powerpc-pseries-set-or-clear-security-feature-flags.patch
queue-4.4/powerpc-fsl-fix-spectre_v2-mitigations-reporting.patch
queue-4.4/powerpc-64s-patch-barrier_nospec-in-modules.patch
queue-4.4/powerpc-pseries-support-firmware-disable-of-rfi-flush.patch
queue-4.4/powerpc-rfi-flush-call-setup_rfi_flush-after-lpm-migration.patch
queue-4.4/powerpc-pseries-query-hypervisor-for-count-cache-flush-settings.patch
queue-4.4/powerpc-powernv-set-or-clear-security-feature-flags.patch
queue-4.4/powerpc-64s-add-support-for-software-count-cache-flush.patch
queue-4.4/powerpc64s-show-ori31-availability-in-spectre_v1-sysfs-file-not-v2.patch
queue-4.4/powerpc-fsl-flush-the-branch-predictor-at-each-kernel-entry-64bit.patch
queue-4.4/powerpc-fsl-update-spectre-v2-reporting.patch
queue-4.4/powerpc-64s-wire-up-cpu_show_spectre_v2.patch
queue-4.4/powerpc-64-make-meltdown-reporting-book3s-64-specific.patch
queue-4.4/powerpc-rfi-flush-make-it-possible-to-call-setup_rfi_flush-again.patch
queue-4.4/powerpc-64s-add-support-for-ori-barrier_nospec-patching.patch
queue-4.4/powerpc-use-barrier_nospec-in-copy_from_user.patch
queue-4.4/powerpc-64s-fix-section-mismatch-warnings-from-setup_rfi_flush.patch
queue-4.4/powerpc-avoid-code-patching-freed-init-sections.patch
queue-4.4/powerpc-fsl-add-macro-to-flush-the-branch-predictor.patch
queue-4.4/powerpc-xmon-add-rfi-flush-related-fields-to-paca-dump.patch
queue-4.4/powerpc-fsl-add-barrier_nospec-implementation-for-nxp-powerpc-book3e.patch
queue-4.4/powerpc-security-fix-spectre_v2-reporting.patch
queue-4.4/powerpc-add-security-feature-flags-for-spectre-meltdown.patch
queue-4.4/powerpc-powernv-use-the-security-flags-in-pnv_setup_rfi_flush.patch
queue-4.4/powerpc-64-disable-the-speculation-barrier-from-the-command-line.patch
queue-4.4/powerpc-fsl-fix-the-flush-of-branch-predictor.patch
queue-4.4/powerpc-pseries-use-the-security-flags-in-pseries_setup_rfi_flush.patch
queue-4.4/powerpc-64-add-config_ppc_barrier_nospec.patch
queue-4.4/powerpc-64s-move-cpu_show_meltdown.patch
queue-4.4/powerpc-64-use-barrier_nospec-in-syscall-entry.patch
queue-4.4/powerpc-fsl-add-nospectre_v2-command-line-argument.patch
queue-4.4/powerpc-64s-add-new-security-feature-flags-for-count-cache-flush.patch
queue-4.4/powerpc-fsl-add-infrastructure-to-fixup-branch-predictor-flush.patch
queue-4.4/powerpc-rfi-flush-differentiate-enabled-and-patched-flush-types.patch
queue-4.4/powerpc-64s-enhance-the-information-in-cpu_show_spectre_v1.patch
queue-4.4/powerpc-64-call-setup_barrier_nospec-from-setup_arch.patch
queue-4.4/powerpc-rfi-flush-always-enable-fallback-flush-on-pseries.patch
queue-4.4/powerpc-64s-improve-rfi-l1-d-cache-flush-fallback.patch
queue-4.4/powerpc-asm-add-a-patch_site-macro-helpers-for-patching-instructions.patch
queue-4.4/powerpc-pseries-add-new-h_get_cpu_characteristics-flags.patch
queue-4.4/powerpc-64s-enable-barrier_nospec-based-on-firmware-settings.patch
queue-4.4/powerpc-powernv-support-firmware-disable-of-rfi-flush.patch
queue-4.4/powerpc-rfi-flush-move-the-logic-to-avoid-a-redo-into-the-debugfs-code.patch
queue-4.4/powerpc-powernv-query-firmware-for-count-cache-flush-settings.patch
queue-4.4/powerpc-64s-wire-up-cpu_show_spectre_v1.patch
queue-4.4/powerpc-64s-add-barrier_nospec.patch
queue-4.4/powerpc-64s-enhance-the-information-in-cpu_show_meltdown.patch
queue-4.4/powerpc-move-default-security-feature-flags.patch
queue-4.4/powerpc-pseries-fix-clearing-of-security-feature-flags.patch
queue-4.4/powerpc-pseries-restore-default-security-feature-flags-on-setup.patch


* Re: [PATCH stable v4.4 00/52] powerpc spectre backports for 4.4
  2019-04-29  7:03         ` Greg KH
@ 2019-04-29 11:56           ` Michael Ellerman
  -1 siblings, 0 replies; 180+ messages in thread
From: Michael Ellerman @ 2019-04-29 11:56 UTC (permalink / raw)
  To: Greg KH
  Cc: stable, linuxppc-dev, diana.craciun, msuchanek, npiggin,
	christophe.leroy

Greg KH <gregkh@linuxfoundation.org> writes:
> On Mon, Apr 29, 2019 at 04:26:45PM +1000, Michael Ellerman wrote:
>> Michael Ellerman <mpe@ellerman.id.au> writes:
>> > Greg KH <gregkh@linuxfoundation.org> writes:
>> >> On Mon, Apr 22, 2019 at 12:19:45AM +1000, Michael Ellerman wrote:
>> >>> -----BEGIN PGP SIGNED MESSAGE-----
>> >>> Hash: SHA1
>> >>> 
>> >>> Hi Greg/Sasha,
>> >>> 
>> >>> Please queue up these powerpc patches for 4.4 if you have no objections.
>> >>
>> >> why?  Do you, or someone else, really care about spectre issues in 4.4?
>> >> Who is using ppc for 4.4 besides a specific enterprise distro (and they
>> >> don't seem to be pulling in my stable updates anyway...)?
>> >
>> > Someone asked for it, but TBH I can't remember who it was. I can chase
>> > it up if you like.
>> 
>> Yeah it was a request from one of the distros. They plan to take it once
>> it lands in 4.4 stable.
>
> Ok, thanks for confirming, I'll work on this this afternoon.

Thanks. If there's any problems let us know.

cheers



* Re: [PATCH stable v4.4 00/52] powerpc spectre backports for 4.4
  2019-04-28  6:20     ` Michael Ellerman
@ 2019-04-29 15:52       ` Diana Madalina Craciun
  -1 siblings, 0 replies; 180+ messages in thread
From: Diana Madalina Craciun @ 2019-04-29 15:52 UTC (permalink / raw)
  To: Michael Ellerman, stable, gregkh
  Cc: linuxppc-dev, msuchanek, npiggin, christophe.leroy

On 4/28/2019 9:19 AM, Michael Ellerman wrote:
> Diana Madalina Craciun <diana.craciun@nxp.com> writes:
>> Hi Michael,
>>
>> There are some missing NXP Spectre v2 patches. I can send them
>> separately if the series will be accepted. I have merged them, but I did
>> not test them, I was sick today and incapable of doing that.
> No worries, there's no rush :)
>
> Sorry I missed them, I thought I had a list that included everything.
> Which commits was it I missed?
>
> I guess post them as a reply to this thread? That way whether the series
> is merged by Greg or not, there's a record here of what the backports
> look like.

I have sent them as a separate series, but mentioning them here as well:

Diana Craciun (8):
  powerpc/fsl: Enable runtime patching if nospectre_v2 boot arg is used
  powerpc/fsl: Flush branch predictor when entering KVM
  powerpc/fsl: Emulate SPRN_BUCSR register
  powerpc/fsl: Flush the branch predictor at each kernel entry (32 bit)
  powerpc/fsl: Sanitize the syscall table for NXP PowerPC 32 bit
    platforms
  powerpc/fsl: Fixed warning: orphan section `__btb_flush_fixup'
  powerpc/fsl: Add FSL_PPC_BOOK3E as supported arch for nospectre_v2
    boot arg
  Documentation: Add nospectre_v1 parameter

regards

> cheers
>
>> On 4/21/2019 5:21 PM, Michael Ellerman wrote:
>>> -----BEGIN PGP SIGNED MESSAGE-----
>>> Hash: SHA1
>>>
>>> Hi Greg/Sasha,
>>>
>>> Please queue up these powerpc patches for 4.4 if you have no objections.
>>>
>>> cheers
>>>
>>>
>>> Christophe Leroy (1):
>>>   powerpc/fsl: Fix the flush of branch predictor.
>>>
>>> Diana Craciun (10):
>>>   powerpc/64: Disable the speculation barrier from the command line
>>>   powerpc/64: Make stf barrier PPC_BOOK3S_64 specific.
>>>   powerpc/64: Make meltdown reporting Book3S 64 specific
>>>   powerpc/fsl: Add barrier_nospec implementation for NXP PowerPC Book3E
>>>   powerpc/fsl: Add infrastructure to fixup branch predictor flush
>>>   powerpc/fsl: Add macro to flush the branch predictor
>>>   powerpc/fsl: Fix spectre_v2 mitigations reporting
>>>   powerpc/fsl: Add nospectre_v2 command line argument
>>>   powerpc/fsl: Flush the branch predictor at each kernel entry (64bit)
>>>   powerpc/fsl: Update Spectre v2 reporting
>>>
>>> Mauricio Faria de Oliveira (4):
>>>   powerpc/rfi-flush: Differentiate enabled and patched flush types
>>>   powerpc/pseries: Fix clearing of security feature flags
>>>   powerpc: Move default security feature flags
>>>   powerpc/pseries: Restore default security feature flags on setup
>>>
>>> Michael Ellerman (29):
>>>   powerpc/xmon: Add RFI flush related fields to paca dump
>>>   powerpc/pseries: Support firmware disable of RFI flush
>>>   powerpc/powernv: Support firmware disable of RFI flush
>>>   powerpc/rfi-flush: Move the logic to avoid a redo into the debugfs
>>>     code
>>>   powerpc/rfi-flush: Make it possible to call setup_rfi_flush() again
>>>   powerpc/rfi-flush: Always enable fallback flush on pseries
>>>   powerpc/pseries: Add new H_GET_CPU_CHARACTERISTICS flags
>>>   powerpc/rfi-flush: Call setup_rfi_flush() after LPM migration
>>>   powerpc: Add security feature flags for Spectre/Meltdown
>>>   powerpc/pseries: Set or clear security feature flags
>>>   powerpc/powernv: Set or clear security feature flags
>>>   powerpc/64s: Move cpu_show_meltdown()
>>>   powerpc/64s: Enhance the information in cpu_show_meltdown()
>>>   powerpc/powernv: Use the security flags in pnv_setup_rfi_flush()
>>>   powerpc/pseries: Use the security flags in pseries_setup_rfi_flush()
>>>   powerpc/64s: Wire up cpu_show_spectre_v1()
>>>   powerpc/64s: Wire up cpu_show_spectre_v2()
>>>   powerpc/64s: Fix section mismatch warnings from setup_rfi_flush()
>>>   powerpc/64: Use barrier_nospec in syscall entry
>>>   powerpc: Use barrier_nospec in copy_from_user()
>>>   powerpc64s: Show ori31 availability in spectre_v1 sysfs file not v2
>>>   powerpc/64: Add CONFIG_PPC_BARRIER_NOSPEC
>>>   powerpc/64: Call setup_barrier_nospec() from setup_arch()
>>>   powerpc/asm: Add a patch_site macro & helpers for patching
>>>     instructions
>>>   powerpc/64s: Add new security feature flags for count cache flush
>>>   powerpc/64s: Add support for software count cache flush
>>>   powerpc/pseries: Query hypervisor for count cache flush settings
>>>   powerpc/powernv: Query firmware for count cache flush settings
>>>   powerpc/security: Fix spectre_v2 reporting
>>>
>>> Michael Neuling (1):
>>>   powerpc: Avoid code patching freed init sections
>>>
>>> Michal Suchanek (5):
>>>   powerpc/64s: Add barrier_nospec
>>>   powerpc/64s: Add support for ori barrier_nospec patching
>>>   powerpc/64s: Patch barrier_nospec in modules
>>>   powerpc/64s: Enable barrier_nospec based on firmware settings
>>>   powerpc/64s: Enhance the information in cpu_show_spectre_v1()
>>>
>>> Nicholas Piggin (2):
>>>   powerpc/64s: Improve RFI L1-D cache flush fallback
>>>   powerpc/64s: Add support for a store forwarding barrier at kernel
>>>     entry/exit
>>>
>>>  arch/powerpc/Kconfig                         |   7 +-
>>>  arch/powerpc/include/asm/asm-prototypes.h    |  21 +
>>>  arch/powerpc/include/asm/barrier.h           |  21 +
>>>  arch/powerpc/include/asm/code-patching-asm.h |  18 +
>>>  arch/powerpc/include/asm/code-patching.h     |   2 +
>>>  arch/powerpc/include/asm/exception-64s.h     |  35 ++
>>>  arch/powerpc/include/asm/feature-fixups.h    |  40 ++
>>>  arch/powerpc/include/asm/hvcall.h            |   5 +
>>>  arch/powerpc/include/asm/paca.h              |   3 +-
>>>  arch/powerpc/include/asm/ppc-opcode.h        |   1 +
>>>  arch/powerpc/include/asm/ppc_asm.h           |  11 +
>>>  arch/powerpc/include/asm/security_features.h |  92 ++++
>>>  arch/powerpc/include/asm/setup.h             |  23 +-
>>>  arch/powerpc/include/asm/uaccess.h           |  18 +-
>>>  arch/powerpc/kernel/Makefile                 |   1 +
>>>  arch/powerpc/kernel/asm-offsets.c            |   3 +-
>>>  arch/powerpc/kernel/entry_64.S               |  69 +++
>>>  arch/powerpc/kernel/exceptions-64e.S         |  27 +-
>>>  arch/powerpc/kernel/exceptions-64s.S         |  98 +++--
>>>  arch/powerpc/kernel/module.c                 |  10 +-
>>>  arch/powerpc/kernel/security.c               | 433 +++++++++++++++++++
>>>  arch/powerpc/kernel/setup_32.c               |   2 +
>>>  arch/powerpc/kernel/setup_64.c               |  50 +--
>>>  arch/powerpc/kernel/vmlinux.lds.S            |  33 +-
>>>  arch/powerpc/lib/code-patching.c             |  29 ++
>>>  arch/powerpc/lib/feature-fixups.c            | 218 +++++++++-
>>>  arch/powerpc/mm/mem.c                        |   2 +
>>>  arch/powerpc/mm/tlb_low_64e.S                |   7 +
>>>  arch/powerpc/platforms/powernv/setup.c       |  99 +++--
>>>  arch/powerpc/platforms/pseries/mobility.c    |   3 +
>>>  arch/powerpc/platforms/pseries/pseries.h     |   2 +
>>>  arch/powerpc/platforms/pseries/setup.c       |  88 +++-
>>>  arch/powerpc/xmon/xmon.c                     |   2 +
>>>  33 files changed, 1345 insertions(+), 128 deletions(-)
>>>  create mode 100644 arch/powerpc/include/asm/asm-prototypes.h
>>>  create mode 100644 arch/powerpc/include/asm/code-patching-asm.h
>>>  create mode 100644 arch/powerpc/include/asm/security_features.h
>>>  create mode 100644 arch/powerpc/kernel/security.c
>>>
>>> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
>>> index 58a1fa979655..01b6c00a7060 100644
>>> --- a/arch/powerpc/Kconfig
>>> +++ b/arch/powerpc/Kconfig
>>> @@ -136,7 +136,7 @@ config PPC
>>>  	select GENERIC_SMP_IDLE_THREAD
>>>  	select GENERIC_CMOS_UPDATE
>>>  	select GENERIC_TIME_VSYSCALL_OLD
>>> -	select GENERIC_CPU_VULNERABILITIES	if PPC_BOOK3S_64
>>> +	select GENERIC_CPU_VULNERABILITIES	if PPC_BARRIER_NOSPEC
>>>  	select GENERIC_CLOCKEVENTS
>>>  	select GENERIC_CLOCKEVENTS_BROADCAST if SMP
>>>  	select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
>>> @@ -162,6 +162,11 @@ config PPC
>>>  	select ARCH_HAS_DMA_SET_COHERENT_MASK
>>>  	select HAVE_ARCH_SECCOMP_FILTER
>>>  
>>> +config PPC_BARRIER_NOSPEC
>>> +    bool
>>> +    default y
>>> +    depends on PPC_BOOK3S_64 || PPC_FSL_BOOK3E
>>> +
>>>  config GENERIC_CSUM
>>>  	def_bool CPU_LITTLE_ENDIAN
>>>  
>>> diff --git a/arch/powerpc/include/asm/asm-prototypes.h b/arch/powerpc/include/asm/asm-prototypes.h
>>> new file mode 100644
>>> index 000000000000..8944c55591cf
>>> --- /dev/null
>>> +++ b/arch/powerpc/include/asm/asm-prototypes.h
>>> @@ -0,0 +1,21 @@
>>> +#ifndef _ASM_POWERPC_ASM_PROTOTYPES_H
>>> +#define _ASM_POWERPC_ASM_PROTOTYPES_H
>>> +/*
>>> + * This file is for prototypes of C functions that are only called
>>> + * from asm, and any associated variables.
>>> + *
>>> + * Copyright 2016, Daniel Axtens, IBM Corporation.
>>> + *
>>> + * This program is free software; you can redistribute it and/or
>>> + * modify it under the terms of the GNU General Public License
>>> + * as published by the Free Software Foundation; either version 2
>>> + * of the License, or (at your option) any later version.
>>> + */
>>> +
>>> +/* Patch sites */
>>> +extern s32 patch__call_flush_count_cache;
>>> +extern s32 patch__flush_count_cache_return;
>>> +
>>> +extern long flush_count_cache;
>>> +
>>> +#endif /* _ASM_POWERPC_ASM_PROTOTYPES_H */
>>> diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
>>> index b9e16855a037..e7cb72cdb2ba 100644
>>> --- a/arch/powerpc/include/asm/barrier.h
>>> +++ b/arch/powerpc/include/asm/barrier.h
>>> @@ -92,4 +92,25 @@ do {									\
>>>  #define smp_mb__after_atomic()      smp_mb()
>>>  #define smp_mb__before_spinlock()   smp_mb()
>>>  
>>> +#ifdef CONFIG_PPC_BOOK3S_64
>>> +#define NOSPEC_BARRIER_SLOT   nop
>>> +#elif defined(CONFIG_PPC_FSL_BOOK3E)
>>> +#define NOSPEC_BARRIER_SLOT   nop; nop
>>> +#endif
>>> +
>>> +#ifdef CONFIG_PPC_BARRIER_NOSPEC
>>> +/*
>>> + * Prevent execution of subsequent instructions until preceding branches have
>>> + * been fully resolved and are no longer executing speculatively.
>>> + */
>>> +#define barrier_nospec_asm NOSPEC_BARRIER_FIXUP_SECTION; NOSPEC_BARRIER_SLOT
>>> +
>>> +// This also acts as a compiler barrier due to the memory clobber.
>>> +#define barrier_nospec() asm (stringify_in_c(barrier_nospec_asm) ::: "memory")
>>> +
>>> +#else /* !CONFIG_PPC_BARRIER_NOSPEC */
>>> +#define barrier_nospec_asm
>>> +#define barrier_nospec()
>>> +#endif /* CONFIG_PPC_BARRIER_NOSPEC */
>>> +
>>>  #endif /* _ASM_POWERPC_BARRIER_H */
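For reference, the pattern barrier_nospec() guards is the classic Spectre-v1 bounds-check bypass: a load dependent on a user-controlled index must not execute until the preceding bounds check has resolved. A minimal userspace sketch of that shape — barrier_nospec() is modelled here as a plain compiler barrier, whereas the real kernel macro expands to the patched nop slot above:

```c
#include <stddef.h>

/* Userspace stand-in for the kernel macro; in the kernel the barrier is
 * the nop slot patched at boot by the nospec fixup code. */
#define barrier_nospec() __asm__ __volatile__("" ::: "memory")

static int table[16];

/* Reads table[idx] only after the bounds check has resolved, so a
 * mispredicted branch cannot speculatively load out-of-bounds data. */
int bounded_read(size_t idx, size_t len)
{
	if (idx < len) {
		barrier_nospec();
		return table[idx];
	}
	return -1;
}
```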
>>> diff --git a/arch/powerpc/include/asm/code-patching-asm.h b/arch/powerpc/include/asm/code-patching-asm.h
>>> new file mode 100644
>>> index 000000000000..ed7b1448493a
>>> --- /dev/null
>>> +++ b/arch/powerpc/include/asm/code-patching-asm.h
>>> @@ -0,0 +1,18 @@
>>> +/* SPDX-License-Identifier: GPL-2.0+ */
>>> +/*
>>> + * Copyright 2018, Michael Ellerman, IBM Corporation.
>>> + */
>>> +#ifndef _ASM_POWERPC_CODE_PATCHING_ASM_H
>>> +#define _ASM_POWERPC_CODE_PATCHING_ASM_H
>>> +
>>> +/* Define a "site" that can be patched */
>>> +.macro patch_site label name
>>> +	.pushsection ".rodata"
>>> +	.balign 4
>>> +	.global \name
>>> +\name:
>>> +	.4byte	\label - .
>>> +	.popsection
>>> +.endm
>>> +
>>> +#endif /* _ASM_POWERPC_CODE_PATCHING_ASM_H */
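The patch_site macro stores a 32-bit self-relative offset (`\label - .`) in .rodata, so the address of the instruction to patch can be recovered position-independently at runtime. A hedged C model of that resolution — resolve_site() and demo_site() are illustrative helpers, not kernel code, and the cross-object pointer arithmetic is for demonstration only:

```c
#include <stdint.h>

/* A site entry holds (target - &entry); adding the entry's own address
 * back yields the instruction slot to patch. */
static void *resolve_site(int32_t *site)
{
	return (char *)site + *site;
}

static char target_insn[4];	/* stands in for a patchable instruction */
static int32_t demo_entry;

int demo_site(void)
{
	demo_entry = (int32_t)(intptr_t)((char *)target_insn - (char *)&demo_entry);
	return resolve_site(&demo_entry) == (void *)target_insn;
}
```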
>>> diff --git a/arch/powerpc/include/asm/code-patching.h b/arch/powerpc/include/asm/code-patching.h
>>> index 840a5509b3f1..a734b4b34d26 100644
>>> --- a/arch/powerpc/include/asm/code-patching.h
>>> +++ b/arch/powerpc/include/asm/code-patching.h
>>> @@ -28,6 +28,8 @@ unsigned int create_cond_branch(const unsigned int *addr,
>>>  				unsigned long target, int flags);
>>>  int patch_branch(unsigned int *addr, unsigned long target, int flags);
>>>  int patch_instruction(unsigned int *addr, unsigned int instr);
>>> +int patch_instruction_site(s32 *addr, unsigned int instr);
>>> +int patch_branch_site(s32 *site, unsigned long target, int flags);
>>>  
>>>  int instr_is_relative_branch(unsigned int instr);
>>>  int instr_is_branch_to_addr(const unsigned int *instr, unsigned long addr);
>>> diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
>>> index 9bddbec441b8..3ed536bec462 100644
>>> --- a/arch/powerpc/include/asm/exception-64s.h
>>> +++ b/arch/powerpc/include/asm/exception-64s.h
>>> @@ -50,6 +50,27 @@
>>>  #define EX_PPR		88	/* SMT thread status register (priority) */
>>>  #define EX_CTR		96
>>>  
>>> +#define STF_ENTRY_BARRIER_SLOT						\
>>> +	STF_ENTRY_BARRIER_FIXUP_SECTION;				\
>>> +	nop;								\
>>> +	nop;								\
>>> +	nop
>>> +
>>> +#define STF_EXIT_BARRIER_SLOT						\
>>> +	STF_EXIT_BARRIER_FIXUP_SECTION;					\
>>> +	nop;								\
>>> +	nop;								\
>>> +	nop;								\
>>> +	nop;								\
>>> +	nop;								\
>>> +	nop
>>> +
>>> +/*
>>> + * r10 must be free to use, r13 must be paca
>>> + */
>>> +#define INTERRUPT_TO_KERNEL						\
>>> +	STF_ENTRY_BARRIER_SLOT
>>> +
>>>  /*
>>>   * Macros for annotating the expected destination of (h)rfid
>>>   *
>>> @@ -66,16 +87,19 @@
>>>  	rfid
>>>  
>>>  #define RFI_TO_USER							\
>>> +	STF_EXIT_BARRIER_SLOT;						\
>>>  	RFI_FLUSH_SLOT;							\
>>>  	rfid;								\
>>>  	b	rfi_flush_fallback
>>>  
>>>  #define RFI_TO_USER_OR_KERNEL						\
>>> +	STF_EXIT_BARRIER_SLOT;						\
>>>  	RFI_FLUSH_SLOT;							\
>>>  	rfid;								\
>>>  	b	rfi_flush_fallback
>>>  
>>>  #define RFI_TO_GUEST							\
>>> +	STF_EXIT_BARRIER_SLOT;						\
>>>  	RFI_FLUSH_SLOT;							\
>>>  	rfid;								\
>>>  	b	rfi_flush_fallback
>>> @@ -84,21 +108,25 @@
>>>  	hrfid
>>>  
>>>  #define HRFI_TO_USER							\
>>> +	STF_EXIT_BARRIER_SLOT;						\
>>>  	RFI_FLUSH_SLOT;							\
>>>  	hrfid;								\
>>>  	b	hrfi_flush_fallback
>>>  
>>>  #define HRFI_TO_USER_OR_KERNEL						\
>>> +	STF_EXIT_BARRIER_SLOT;						\
>>>  	RFI_FLUSH_SLOT;							\
>>>  	hrfid;								\
>>>  	b	hrfi_flush_fallback
>>>  
>>>  #define HRFI_TO_GUEST							\
>>> +	STF_EXIT_BARRIER_SLOT;						\
>>>  	RFI_FLUSH_SLOT;							\
>>>  	hrfid;								\
>>>  	b	hrfi_flush_fallback
>>>  
>>>  #define HRFI_TO_UNKNOWN							\
>>> +	STF_EXIT_BARRIER_SLOT;						\
>>>  	RFI_FLUSH_SLOT;							\
>>>  	hrfid;								\
>>>  	b	hrfi_flush_fallback
>>> @@ -226,6 +254,7 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
>>>  #define __EXCEPTION_PROLOG_1(area, extra, vec)				\
>>>  	OPT_SAVE_REG_TO_PACA(area+EX_PPR, r9, CPU_FTR_HAS_PPR);		\
>>>  	OPT_SAVE_REG_TO_PACA(area+EX_CFAR, r10, CPU_FTR_CFAR);		\
>>> +	INTERRUPT_TO_KERNEL;						\
>>>  	SAVE_CTR(r10, area);						\
>>>  	mfcr	r9;							\
>>>  	extra(vec);							\
>>> @@ -512,6 +541,12 @@ label##_relon_hv:						\
>>>  #define _MASKABLE_EXCEPTION_PSERIES(vec, label, h, extra)		\
>>>  	__MASKABLE_EXCEPTION_PSERIES(vec, label, h, extra)
>>>  
>>> +#define MASKABLE_EXCEPTION_OOL(vec, label)				\
>>> +	.globl label##_ool;						\
>>> +label##_ool:								\
>>> +	EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_TEST_PR, vec);		\
>>> +	EXCEPTION_PROLOG_PSERIES_1(label##_common, EXC_STD);
>>> +
>>>  #define MASKABLE_EXCEPTION_PSERIES(loc, vec, label)			\
>>>  	. = loc;							\
>>>  	.globl label##_pSeries;						\
>>> diff --git a/arch/powerpc/include/asm/feature-fixups.h b/arch/powerpc/include/asm/feature-fixups.h
>>> index 7068bafbb2d6..145a37ab2d3e 100644
>>> --- a/arch/powerpc/include/asm/feature-fixups.h
>>> +++ b/arch/powerpc/include/asm/feature-fixups.h
>>> @@ -184,6 +184,22 @@ label##3:					       	\
>>>  	FTR_ENTRY_OFFSET label##1b-label##3b;		\
>>>  	.popsection;
>>>  
>>> +#define STF_ENTRY_BARRIER_FIXUP_SECTION			\
>>> +953:							\
>>> +	.pushsection __stf_entry_barrier_fixup,"a";	\
>>> +	.align 2;					\
>>> +954:							\
>>> +	FTR_ENTRY_OFFSET 953b-954b;			\
>>> +	.popsection;
>>> +
>>> +#define STF_EXIT_BARRIER_FIXUP_SECTION			\
>>> +955:							\
>>> +	.pushsection __stf_exit_barrier_fixup,"a";	\
>>> +	.align 2;					\
>>> +956:							\
>>> +	FTR_ENTRY_OFFSET 955b-956b;			\
>>> +	.popsection;
>>> +
>>>  #define RFI_FLUSH_FIXUP_SECTION				\
>>>  951:							\
>>>  	.pushsection __rfi_flush_fixup,"a";		\
>>> @@ -192,10 +208,34 @@ label##3:					       	\
>>>  	FTR_ENTRY_OFFSET 951b-952b;			\
>>>  	.popsection;
>>>  
>>> +#define NOSPEC_BARRIER_FIXUP_SECTION			\
>>> +953:							\
>>> +	.pushsection __barrier_nospec_fixup,"a";	\
>>> +	.align 2;					\
>>> +954:							\
>>> +	FTR_ENTRY_OFFSET 953b-954b;			\
>>> +	.popsection;
>>> +
>>> +#define START_BTB_FLUSH_SECTION			\
>>> +955:							\
>>> +
>>> +#define END_BTB_FLUSH_SECTION			\
>>> +956:							\
>>> +	.pushsection __btb_flush_fixup,"a";	\
>>> +	.align 2;							\
>>> +957:						\
>>> +	FTR_ENTRY_OFFSET 955b-957b;			\
>>> +	FTR_ENTRY_OFFSET 956b-957b;			\
>>> +	.popsection;
>>>  
>>>  #ifndef __ASSEMBLY__
>>>  
>>> +extern long stf_barrier_fallback;
>>> +extern long __start___stf_entry_barrier_fixup, __stop___stf_entry_barrier_fixup;
>>> +extern long __start___stf_exit_barrier_fixup, __stop___stf_exit_barrier_fixup;
>>>  extern long __start___rfi_flush_fixup, __stop___rfi_flush_fixup;
>>> +extern long __start___barrier_nospec_fixup, __stop___barrier_nospec_fixup;
>>> +extern long __start__btb_flush_fixup, __stop__btb_flush_fixup;
>>>  
>>>  #endif
>>>  
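All of these fixup sections share one layout: a stream of 32-bit self-relative entries, each pointing at an instruction slot to patch, walked between the __start/__stop linker symbols declared above. A hedged sketch of that walk — apply_fixups(), count_patch() and demo_fixups() are illustrative helpers, not the kernel's actual fixup code:

```c
#include <stdint.h>
#include <stddef.h>

typedef void (*patch_fn)(void *insn);

/* Walk a fixup section: each entry holds (insn_addr - &entry). */
static size_t apply_fixups(int32_t *start, int32_t *stop, patch_fn patch)
{
	size_t n = 0;

	for (int32_t *entry = start; entry < stop; entry++, n++)
		patch((char *)entry + *entry);
	return n;
}

static int patched;
static void count_patch(void *insn) { (void)insn; patched++; }

static char slot_a[4], slot_b[4];	/* stand-ins for instruction slots */
static int32_t entries[2];

int demo_fixups(void)
{
	entries[0] = (int32_t)(intptr_t)(slot_a - (char *)&entries[0]);
	entries[1] = (int32_t)(intptr_t)(slot_b - (char *)&entries[1]);
	return (int)apply_fixups(entries, entries + 2, count_patch);
}
```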
>>> diff --git a/arch/powerpc/include/asm/hvcall.h b/arch/powerpc/include/asm/hvcall.h
>>> index 449bbb87c257..b57db9d09db9 100644
>>> --- a/arch/powerpc/include/asm/hvcall.h
>>> +++ b/arch/powerpc/include/asm/hvcall.h
>>> @@ -292,10 +292,15 @@
>>>  #define H_CPU_CHAR_L1D_FLUSH_ORI30	(1ull << 61) // IBM bit 2
>>>  #define H_CPU_CHAR_L1D_FLUSH_TRIG2	(1ull << 60) // IBM bit 3
>>>  #define H_CPU_CHAR_L1D_THREAD_PRIV	(1ull << 59) // IBM bit 4
>>> +#define H_CPU_CHAR_BRANCH_HINTS_HONORED	(1ull << 58) // IBM bit 5
>>> +#define H_CPU_CHAR_THREAD_RECONFIG_CTRL	(1ull << 57) // IBM bit 6
>>> +#define H_CPU_CHAR_COUNT_CACHE_DISABLED	(1ull << 56) // IBM bit 7
>>> +#define H_CPU_CHAR_BCCTR_FLUSH_ASSIST	(1ull << 54) // IBM bit 9
>>>  
>>>  #define H_CPU_BEHAV_FAVOUR_SECURITY	(1ull << 63) // IBM bit 0
>>>  #define H_CPU_BEHAV_L1D_FLUSH_PR	(1ull << 62) // IBM bit 1
>>>  #define H_CPU_BEHAV_BNDS_CHK_SPEC_BAR	(1ull << 61) // IBM bit 2
>>> +#define H_CPU_BEHAV_FLUSH_COUNT_CACHE	(1ull << 58) // IBM bit 5
>>>  
>>>  #ifndef __ASSEMBLY__
>>>  #include <linux/types.h>
>>> diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h
>>> index 45e2aefece16..08e5df3395fa 100644
>>> --- a/arch/powerpc/include/asm/paca.h
>>> +++ b/arch/powerpc/include/asm/paca.h
>>> @@ -199,8 +199,7 @@ struct paca_struct {
>>>  	 */
>>>  	u64 exrfi[13] __aligned(0x80);
>>>  	void *rfi_flush_fallback_area;
>>> -	u64 l1d_flush_congruence;
>>> -	u64 l1d_flush_sets;
>>> +	u64 l1d_flush_size;
>>>  #endif
>>>  };
>>>  
>>> diff --git a/arch/powerpc/include/asm/ppc-opcode.h b/arch/powerpc/include/asm/ppc-opcode.h
>>> index 7ab04fc59e24..faf1bb045dee 100644
>>> --- a/arch/powerpc/include/asm/ppc-opcode.h
>>> +++ b/arch/powerpc/include/asm/ppc-opcode.h
>>> @@ -147,6 +147,7 @@
>>>  #define PPC_INST_LWSYNC			0x7c2004ac
>>>  #define PPC_INST_SYNC			0x7c0004ac
>>>  #define PPC_INST_SYNC_MASK		0xfc0007fe
>>> +#define PPC_INST_ISYNC			0x4c00012c
>>>  #define PPC_INST_LXVD2X			0x7c000698
>>>  #define PPC_INST_MCRXR			0x7c000400
>>>  #define PPC_INST_MCRXR_MASK		0xfc0007fe
>>> diff --git a/arch/powerpc/include/asm/ppc_asm.h b/arch/powerpc/include/asm/ppc_asm.h
>>> index 160bb2311bbb..d219816b3e19 100644
>>> --- a/arch/powerpc/include/asm/ppc_asm.h
>>> +++ b/arch/powerpc/include/asm/ppc_asm.h
>>> @@ -821,4 +821,15 @@ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,945)
>>>  	.long 0x2400004c  /* rfid				*/
>>>  #endif /* !CONFIG_PPC_BOOK3E */
>>>  #endif /*  __ASSEMBLY__ */
>>> +
>>> +#ifdef CONFIG_PPC_FSL_BOOK3E
>>> +#define BTB_FLUSH(reg)			\
>>> +	lis reg,BUCSR_INIT@h;		\
>>> +	ori reg,reg,BUCSR_INIT@l;	\
>>> +	mtspr SPRN_BUCSR,reg;		\
>>> +	isync;
>>> +#else
>>> +#define BTB_FLUSH(reg)
>>> +#endif /* CONFIG_PPC_FSL_BOOK3E */
>>> +
>>>  #endif /* _ASM_POWERPC_PPC_ASM_H */
>>> diff --git a/arch/powerpc/include/asm/security_features.h b/arch/powerpc/include/asm/security_features.h
>>> new file mode 100644
>>> index 000000000000..759597bf0fd8
>>> --- /dev/null
>>> +++ b/arch/powerpc/include/asm/security_features.h
>>> @@ -0,0 +1,92 @@
>>> +/* SPDX-License-Identifier: GPL-2.0+ */
>>> +/*
>>> + * Security related feature bit definitions.
>>> + *
>>> + * Copyright 2018, Michael Ellerman, IBM Corporation.
>>> + */
>>> +
>>> +#ifndef _ASM_POWERPC_SECURITY_FEATURES_H
>>> +#define _ASM_POWERPC_SECURITY_FEATURES_H
>>> +
>>> +
>>> +extern unsigned long powerpc_security_features;
>>> +extern bool rfi_flush;
>>> +
>>> +/* These are bit flags */
>>> +enum stf_barrier_type {
>>> +	STF_BARRIER_NONE	= 0x1,
>>> +	STF_BARRIER_FALLBACK	= 0x2,
>>> +	STF_BARRIER_EIEIO	= 0x4,
>>> +	STF_BARRIER_SYNC_ORI	= 0x8,
>>> +};
>>> +
>>> +void setup_stf_barrier(void);
>>> +void do_stf_barrier_fixups(enum stf_barrier_type types);
>>> +void setup_count_cache_flush(void);
>>> +
>>> +static inline void security_ftr_set(unsigned long feature)
>>> +{
>>> +	powerpc_security_features |= feature;
>>> +}
>>> +
>>> +static inline void security_ftr_clear(unsigned long feature)
>>> +{
>>> +	powerpc_security_features &= ~feature;
>>> +}
>>> +
>>> +static inline bool security_ftr_enabled(unsigned long feature)
>>> +{
>>> +	return !!(powerpc_security_features & feature);
>>> +}
>>> +
>>> +
>>> +// Features indicating support for Spectre/Meltdown mitigations
>>> +
>>> +// The L1-D cache can be flushed with ori r30,r30,0
>>> +#define SEC_FTR_L1D_FLUSH_ORI30		0x0000000000000001ull
>>> +
>>> +// The L1-D cache can be flushed with mtspr 882,r0 (aka SPRN_TRIG2)
>>> +#define SEC_FTR_L1D_FLUSH_TRIG2		0x0000000000000002ull
>>> +
>>> +// ori r31,r31,0 acts as a speculation barrier
>>> +#define SEC_FTR_SPEC_BAR_ORI31		0x0000000000000004ull
>>> +
>>> +// Speculation past bctr is disabled
>>> +#define SEC_FTR_BCCTRL_SERIALISED	0x0000000000000008ull
>>> +
>>> +// Entries in L1-D are private to a SMT thread
>>> +#define SEC_FTR_L1D_THREAD_PRIV		0x0000000000000010ull
>>> +
>>> +// Indirect branch prediction cache disabled
>>> +#define SEC_FTR_COUNT_CACHE_DISABLED	0x0000000000000020ull
>>> +
>>> +// bcctr 2,0,0 triggers a hardware assisted count cache flush
>>> +#define SEC_FTR_BCCTR_FLUSH_ASSIST	0x0000000000000800ull
>>> +
>>> +
>>> +// Features indicating need for Spectre/Meltdown mitigations
>>> +
>>> +// The L1-D cache should be flushed on MSR[HV] 1->0 transition (hypervisor to guest)
>>> +#define SEC_FTR_L1D_FLUSH_HV		0x0000000000000040ull
>>> +
>>> +// The L1-D cache should be flushed on MSR[PR] 0->1 transition (kernel to userspace)
>>> +#define SEC_FTR_L1D_FLUSH_PR		0x0000000000000080ull
>>> +
>>> +// A speculation barrier should be used for bounds checks (Spectre variant 1)
>>> +#define SEC_FTR_BNDS_CHK_SPEC_BAR	0x0000000000000100ull
>>> +
>>> +// Firmware configuration indicates user favours security over performance
>>> +#define SEC_FTR_FAVOUR_SECURITY		0x0000000000000200ull
>>> +
>>> +// Software required to flush count cache on context switch
>>> +#define SEC_FTR_FLUSH_COUNT_CACHE	0x0000000000000400ull
>>> +
>>> +
>>> +// Features enabled by default
>>> +#define SEC_FTR_DEFAULT \
>>> +	(SEC_FTR_L1D_FLUSH_HV | \
>>> +	 SEC_FTR_L1D_FLUSH_PR | \
>>> +	 SEC_FTR_BNDS_CHK_SPEC_BAR | \
>>> +	 SEC_FTR_FAVOUR_SECURITY)
>>> +
>>> +#endif /* _ASM_POWERPC_SECURITY_FEATURES_H */
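The accessors above are plain bit-flag operations on a single global word. A standalone model with the global renamed and two flag values copied from the header:

```c
#include <stdbool.h>

#define SEC_FTR_L1D_FLUSH_PR	0x0000000000000080ull
#define SEC_FTR_FAVOUR_SECURITY	0x0000000000000200ull

static unsigned long long features;	/* models powerpc_security_features */

static void ftr_set(unsigned long long f)     { features |= f; }
static void ftr_clear(unsigned long long f)   { features &= ~f; }
static bool ftr_enabled(unsigned long long f) { return !!(features & f); }
```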
>>> diff --git a/arch/powerpc/include/asm/setup.h b/arch/powerpc/include/asm/setup.h
>>> index 7916b56f2e60..d299479c770b 100644
>>> --- a/arch/powerpc/include/asm/setup.h
>>> +++ b/arch/powerpc/include/asm/setup.h
>>> @@ -8,6 +8,7 @@ extern void ppc_printk_progress(char *s, unsigned short hex);
>>>  
>>>  extern unsigned int rtas_data;
>>>  extern unsigned long long memory_limit;
>>> +extern bool init_mem_is_free;
>>>  extern unsigned long klimit;
>>>  extern void *zalloc_maybe_bootmem(size_t size, gfp_t mask);
>>>  
>>> @@ -36,8 +37,28 @@ enum l1d_flush_type {
>>>  	L1D_FLUSH_MTTRIG	= 0x8,
>>>  };
>>>  
>>> -void __init setup_rfi_flush(enum l1d_flush_type, bool enable);
>>> +void setup_rfi_flush(enum l1d_flush_type, bool enable);
>>>  void do_rfi_flush_fixups(enum l1d_flush_type types);
>>> +#ifdef CONFIG_PPC_BARRIER_NOSPEC
>>> +void setup_barrier_nospec(void);
>>> +#else
>>> +static inline void setup_barrier_nospec(void) { };
>>> +#endif
>>> +void do_barrier_nospec_fixups(bool enable);
>>> +extern bool barrier_nospec_enabled;
>>> +
>>> +#ifdef CONFIG_PPC_BARRIER_NOSPEC
>>> +void do_barrier_nospec_fixups_range(bool enable, void *start, void *end);
>>> +#else
>>> +static inline void do_barrier_nospec_fixups_range(bool enable, void *start, void *end) { };
>>> +#endif
>>> +
>>> +#ifdef CONFIG_PPC_FSL_BOOK3E
>>> +void setup_spectre_v2(void);
>>> +#else
>>> +static inline void setup_spectre_v2(void) {};
>>> +#endif
>>> +void do_btb_flush_fixups(void);
>>>  
>>>  #endif /* !__ASSEMBLY__ */
>>>  
>>> diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
>>> index 05f1389228d2..e51ce5a0e221 100644
>>> --- a/arch/powerpc/include/asm/uaccess.h
>>> +++ b/arch/powerpc/include/asm/uaccess.h
>>> @@ -269,6 +269,7 @@ do {								\
>>>  	__chk_user_ptr(ptr);					\
>>>  	if (!is_kernel_addr((unsigned long)__gu_addr))		\
>>>  		might_fault();					\
>>> +	barrier_nospec();					\
>>>  	__get_user_size(__gu_val, __gu_addr, (size), __gu_err);	\
>>>  	(x) = (__typeof__(*(ptr)))__gu_val;			\
>>>  	__gu_err;						\
>>> @@ -283,6 +284,7 @@ do {								\
>>>  	__chk_user_ptr(ptr);					\
>>>  	if (!is_kernel_addr((unsigned long)__gu_addr))		\
>>>  		might_fault();					\
>>> +	barrier_nospec();					\
>>>  	__get_user_size(__gu_val, __gu_addr, (size), __gu_err);	\
>>>  	(x) = (__force __typeof__(*(ptr)))__gu_val;			\
>>>  	__gu_err;						\
>>> @@ -295,8 +297,10 @@ do {								\
>>>  	unsigned long  __gu_val = 0;					\
>>>  	__typeof__(*(ptr)) __user *__gu_addr = (ptr);		\
>>>  	might_fault();							\
>>> -	if (access_ok(VERIFY_READ, __gu_addr, (size)))			\
>>> +	if (access_ok(VERIFY_READ, __gu_addr, (size))) {		\
>>> +		barrier_nospec();					\
>>>  		__get_user_size(__gu_val, __gu_addr, (size), __gu_err);	\
>>> +	}								\
>>>  	(x) = (__force __typeof__(*(ptr)))__gu_val;				\
>>>  	__gu_err;							\
>>>  })
>>> @@ -307,6 +311,7 @@ do {								\
>>>  	unsigned long __gu_val;					\
>>>  	__typeof__(*(ptr)) __user *__gu_addr = (ptr);	\
>>>  	__chk_user_ptr(ptr);					\
>>> +	barrier_nospec();					\
>>>  	__get_user_size(__gu_val, __gu_addr, (size), __gu_err);	\
>>>  	(x) = (__force __typeof__(*(ptr)))__gu_val;			\
>>>  	__gu_err;						\
>>> @@ -323,8 +328,10 @@ extern unsigned long __copy_tofrom_user(void __user *to,
>>>  static inline unsigned long copy_from_user(void *to,
>>>  		const void __user *from, unsigned long n)
>>>  {
>>> -	if (likely(access_ok(VERIFY_READ, from, n)))
>>> +	if (likely(access_ok(VERIFY_READ, from, n))) {
>>> +		barrier_nospec();
>>>  		return __copy_tofrom_user((__force void __user *)to, from, n);
>>> +	}
>>>  	memset(to, 0, n);
>>>  	return n;
>>>  }
>>> @@ -359,21 +366,27 @@ static inline unsigned long __copy_from_user_inatomic(void *to,
>>>  
>>>  		switch (n) {
>>>  		case 1:
>>> +			barrier_nospec();
>>>  			__get_user_size(*(u8 *)to, from, 1, ret);
>>>  			break;
>>>  		case 2:
>>> +			barrier_nospec();
>>>  			__get_user_size(*(u16 *)to, from, 2, ret);
>>>  			break;
>>>  		case 4:
>>> +			barrier_nospec();
>>>  			__get_user_size(*(u32 *)to, from, 4, ret);
>>>  			break;
>>>  		case 8:
>>> +			barrier_nospec();
>>>  			__get_user_size(*(u64 *)to, from, 8, ret);
>>>  			break;
>>>  		}
>>>  		if (ret == 0)
>>>  			return 0;
>>>  	}
>>> +
>>> +	barrier_nospec();
>>>  	return __copy_tofrom_user((__force void __user *)to, from, n);
>>>  }
>>>  
>>> @@ -400,6 +413,7 @@ static inline unsigned long __copy_to_user_inatomic(void __user *to,
>>>  		if (ret == 0)
>>>  			return 0;
>>>  	}
>>> +
>>>  	return __copy_tofrom_user(to, (__force const void __user *)from, n);
>>>  }
>>>  
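The ordering enforced throughout the uaccess changes above is: bounds check first, speculation barrier second, user copy last — and the barrier sits only on the success path. A userspace model of that shape, where range_ok() is a toy stand-in for access_ok() and the barrier is modelled as a compiler barrier:

```c
#include <string.h>
#include <stddef.h>

#define barrier_nospec() __asm__ __volatile__("" ::: "memory")

static int range_ok(const void *from, size_t n)
{
	(void)from;
	return n <= 4096;	/* toy stand-in for access_ok() */
}

/* Kernel-style contract: returns the number of bytes NOT copied. */
size_t copy_in(void *to, const void *from, size_t n)
{
	if (range_ok(from, n)) {
		/* Barrier only on the success path: the copy below cannot
		 * run speculatively past a failed bounds check. */
		barrier_nospec();
		memcpy(to, from, n);
		return 0;
	}
	memset(to, 0, n);	/* failed check: zero the caller's buffer */
	return n;
}
```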
>>> diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
>>> index ba336930d448..22ed3c32fca8 100644
>>> --- a/arch/powerpc/kernel/Makefile
>>> +++ b/arch/powerpc/kernel/Makefile
>>> @@ -44,6 +44,7 @@ obj-$(CONFIG_PPC_BOOK3S_64)	+= cpu_setup_power.o
>>>  obj-$(CONFIG_PPC_BOOK3S_64)	+= mce.o mce_power.o
>>>  obj64-$(CONFIG_RELOCATABLE)	+= reloc_64.o
>>>  obj-$(CONFIG_PPC_BOOK3E_64)	+= exceptions-64e.o idle_book3e.o
>>> +obj-$(CONFIG_PPC_BARRIER_NOSPEC) += security.o
>>>  obj-$(CONFIG_PPC64)		+= vdso64/
>>>  obj-$(CONFIG_ALTIVEC)		+= vecemu.o
>>>  obj-$(CONFIG_PPC_970_NAP)	+= idle_power4.o
>>> diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
>>> index d92705e3a0c1..de3c29c51503 100644
>>> --- a/arch/powerpc/kernel/asm-offsets.c
>>> +++ b/arch/powerpc/kernel/asm-offsets.c
>>> @@ -245,8 +245,7 @@ int main(void)
>>>  	DEFINE(PACA_IN_MCE, offsetof(struct paca_struct, in_mce));
>>>  	DEFINE(PACA_RFI_FLUSH_FALLBACK_AREA, offsetof(struct paca_struct, rfi_flush_fallback_area));
>>>  	DEFINE(PACA_EXRFI, offsetof(struct paca_struct, exrfi));
>>> -	DEFINE(PACA_L1D_FLUSH_CONGRUENCE, offsetof(struct paca_struct, l1d_flush_congruence));
>>> -	DEFINE(PACA_L1D_FLUSH_SETS, offsetof(struct paca_struct, l1d_flush_sets));
>>> +	DEFINE(PACA_L1D_FLUSH_SIZE, offsetof(struct paca_struct, l1d_flush_size));
>>>  #endif
>>>  	DEFINE(PACAHWCPUID, offsetof(struct paca_struct, hw_cpu_id));
>>>  	DEFINE(PACAKEXECSTATE, offsetof(struct paca_struct, kexec_state));
>>> diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
>>> index 59be96917369..6d36a4fb4acf 100644
>>> --- a/arch/powerpc/kernel/entry_64.S
>>> +++ b/arch/powerpc/kernel/entry_64.S
>>> @@ -25,6 +25,7 @@
>>>  #include <asm/page.h>
>>>  #include <asm/mmu.h>
>>>  #include <asm/thread_info.h>
>>> +#include <asm/code-patching-asm.h>
>>>  #include <asm/ppc_asm.h>
>>>  #include <asm/asm-offsets.h>
>>>  #include <asm/cputable.h>
>>> @@ -36,6 +37,7 @@
>>>  #include <asm/hw_irq.h>
>>>  #include <asm/context_tracking.h>
>>>  #include <asm/tm.h>
>>> +#include <asm/barrier.h>
>>>  #ifdef CONFIG_PPC_BOOK3S
>>>  #include <asm/exception-64s.h>
>>>  #else
>>> @@ -75,6 +77,11 @@ END_FTR_SECTION_IFSET(CPU_FTR_TM)
>>>  	std	r0,GPR0(r1)
>>>  	std	r10,GPR1(r1)
>>>  	beq	2f			/* if from kernel mode */
>>> +#ifdef CONFIG_PPC_FSL_BOOK3E
>>> +START_BTB_FLUSH_SECTION
>>> +	BTB_FLUSH(r10)
>>> +END_BTB_FLUSH_SECTION
>>> +#endif
>>>  	ACCOUNT_CPU_USER_ENTRY(r10, r11)
>>>  2:	std	r2,GPR2(r1)
>>>  	std	r3,GPR3(r1)
>>> @@ -177,6 +184,15 @@ system_call:			/* label this so stack traces look sane */
>>>  	clrldi	r8,r8,32
>>>  15:
>>>  	slwi	r0,r0,4
>>> +
>>> +	barrier_nospec_asm
>>> +	/*
>>> +	 * Prevent the load of the handler below (based on the user-passed
>>> +	 * system call number) being speculatively executed until the test
>>> +	 * against NR_syscalls and branch to .Lsyscall_enosys above has
>>> +	 * committed.
>>> +	 */
>>> +
>>>  	ldx	r12,r11,r0	/* Fetch system call handler [ptr] */
>>>  	mtctr   r12
>>>  	bctrl			/* Call handler */
>>> @@ -440,6 +456,57 @@ _GLOBAL(ret_from_kernel_thread)
>>>  	li	r3,0
>>>  	b	.Lsyscall_exit
>>>  
>>> +#ifdef CONFIG_PPC_BOOK3S_64
>>> +
>>> +#define FLUSH_COUNT_CACHE	\
>>> +1:	nop;			\
>>> +	patch_site 1b, patch__call_flush_count_cache
>>> +
>>> +
>>> +#define BCCTR_FLUSH	.long 0x4c400420
>>> +
>>> +.macro nops number
>>> +	.rept \number
>>> +	nop
>>> +	.endr
>>> +.endm
>>> +
>>> +.balign 32
>>> +.global flush_count_cache
>>> +flush_count_cache:
>>> +	/* Save LR into r9 */
>>> +	mflr	r9
>>> +
>>> +	.rept 64
>>> +	bl	.+4
>>> +	.endr
>>> +	b	1f
>>> +	nops	6
>>> +
>>> +	.balign 32
>>> +	/* Restore LR */
>>> +1:	mtlr	r9
>>> +	li	r9,0x7fff
>>> +	mtctr	r9
>>> +
>>> +	BCCTR_FLUSH
>>> +
>>> +2:	nop
>>> +	patch_site 2b patch__flush_count_cache_return
>>> +
>>> +	nops	3
>>> +
>>> +	.rept 278
>>> +	.balign 32
>>> +	BCCTR_FLUSH
>>> +	nops	7
>>> +	.endr
>>> +
>>> +	blr
>>> +#else
>>> +#define FLUSH_COUNT_CACHE
>>> +#endif /* CONFIG_PPC_BOOK3S_64 */
>>> +
>>>  /*
>>>   * This routine switches between two different tasks.  The process
>>>   * state of one is saved on its kernel stack.  Then the state
>>> @@ -503,6 +570,8 @@ BEGIN_FTR_SECTION
>>>  END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
>>>  #endif
>>>  
>>> +	FLUSH_COUNT_CACHE
>>> +
>>>  #ifdef CONFIG_SMP
>>>  	/* We need a sync somewhere here to make sure that if the
>>>  	 * previous task gets rescheduled on another CPU, it sees all
>>> diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
>>> index 5cc93f0b52ca..48ec841ea1bf 100644
>>> --- a/arch/powerpc/kernel/exceptions-64e.S
>>> +++ b/arch/powerpc/kernel/exceptions-64e.S
>>> @@ -295,7 +295,8 @@ END_FTR_SECTION_IFSET(CPU_FTR_EMB_HV)
>>>  	andi.	r10,r11,MSR_PR;		/* save stack pointer */	    \
>>>  	beq	1f;			/* branch around if supervisor */   \
>>>  	ld	r1,PACAKSAVE(r13);	/* get kernel stack coming from usr */\
>>> -1:	cmpdi	cr1,r1,0;		/* check if SP makes sense */	    \
>>> +1:	type##_BTB_FLUSH		\
>>> +	cmpdi	cr1,r1,0;		/* check if SP makes sense */	    \
>>>  	bge-	cr1,exc_##n##_bad_stack;/* bad stack (TODO: out of line) */ \
>>>  	mfspr	r10,SPRN_##type##_SRR0;	/* read SRR0 before touching stack */
>>>  
>>> @@ -327,6 +328,30 @@ END_FTR_SECTION_IFSET(CPU_FTR_EMB_HV)
>>>  #define SPRN_MC_SRR0	SPRN_MCSRR0
>>>  #define SPRN_MC_SRR1	SPRN_MCSRR1
>>>  
>>> +#ifdef CONFIG_PPC_FSL_BOOK3E
>>> +#define GEN_BTB_FLUSH			\
>>> +	START_BTB_FLUSH_SECTION		\
>>> +		beq 1f;			\
>>> +		BTB_FLUSH(r10)			\
>>> +		1:		\
>>> +	END_BTB_FLUSH_SECTION
>>> +
>>> +#define CRIT_BTB_FLUSH			\
>>> +	START_BTB_FLUSH_SECTION		\
>>> +		BTB_FLUSH(r10)		\
>>> +	END_BTB_FLUSH_SECTION
>>> +
>>> +#define DBG_BTB_FLUSH CRIT_BTB_FLUSH
>>> +#define MC_BTB_FLUSH CRIT_BTB_FLUSH
>>> +#define GDBELL_BTB_FLUSH GEN_BTB_FLUSH
>>> +#else
>>> +#define GEN_BTB_FLUSH
>>> +#define CRIT_BTB_FLUSH
>>> +#define DBG_BTB_FLUSH
>>> +#define MC_BTB_FLUSH
>>> +#define GDBELL_BTB_FLUSH
>>> +#endif
>>> +
>>>  #define NORMAL_EXCEPTION_PROLOG(n, intnum, addition)			    \
>>>  	EXCEPTION_PROLOG(n, intnum, GEN, addition##_GEN(n))
>>>  
>>> diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
>>> index 938a30fef031..10e7cec9553d 100644
>>> --- a/arch/powerpc/kernel/exceptions-64s.S
>>> +++ b/arch/powerpc/kernel/exceptions-64s.S
>>> @@ -36,6 +36,7 @@ BEGIN_FTR_SECTION						\
>>>  END_FTR_SECTION_IFSET(CPU_FTR_REAL_LE)				\
>>>  	mr	r9,r13 ;					\
>>>  	GET_PACA(r13) ;						\
>>> +	INTERRUPT_TO_KERNEL ;					\
>>>  	mfspr	r11,SPRN_SRR0 ;					\
>>>  0:
>>>  
>>> @@ -292,7 +293,9 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
>>>  	. = 0x900
>>>  	.globl decrementer_pSeries
>>>  decrementer_pSeries:
>>> -	_MASKABLE_EXCEPTION_PSERIES(0x900, decrementer, EXC_STD, SOFTEN_TEST_PR)
>>> +	SET_SCRATCH0(r13)
>>> +	EXCEPTION_PROLOG_0(PACA_EXGEN)
>>> +	b	decrementer_ool
>>>  
>>>  	STD_EXCEPTION_HV(0x980, 0x982, hdecrementer)
>>>  
>>> @@ -319,6 +322,7 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
>>>  	OPT_GET_SPR(r9, SPRN_PPR, CPU_FTR_HAS_PPR);
>>>  	HMT_MEDIUM;
>>>  	std	r10,PACA_EXGEN+EX_R10(r13)
>>> +	INTERRUPT_TO_KERNEL
>>>  	OPT_SAVE_REG_TO_PACA(PACA_EXGEN+EX_PPR, r9, CPU_FTR_HAS_PPR);
>>>  	mfcr	r9
>>>  	KVMTEST(0xc00)
>>> @@ -607,6 +611,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
>>>  
>>>  	.align	7
>>>  	/* moved from 0xe00 */
>>> +	MASKABLE_EXCEPTION_OOL(0x900, decrementer)
>>>  	STD_EXCEPTION_HV_OOL(0xe02, h_data_storage)
>>>  	KVM_HANDLER_SKIP(PACA_EXGEN, EXC_HV, 0xe02)
>>>  	STD_EXCEPTION_HV_OOL(0xe22, h_instr_storage)
>>> @@ -1564,6 +1569,21 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
>>>  	blr
>>>  #endif
>>>  
>>> +	.balign 16
>>> +	.globl stf_barrier_fallback
>>> +stf_barrier_fallback:
>>> +	std	r9,PACA_EXRFI+EX_R9(r13)
>>> +	std	r10,PACA_EXRFI+EX_R10(r13)
>>> +	sync
>>> +	ld	r9,PACA_EXRFI+EX_R9(r13)
>>> +	ld	r10,PACA_EXRFI+EX_R10(r13)
>>> +	ori	31,31,0
>>> +	.rept 14
>>> +	b	1f
>>> +1:
>>> +	.endr
>>> +	blr
>>> +
>>>  	.globl rfi_flush_fallback
>>>  rfi_flush_fallback:
>>>  	SET_SCRATCH0(r13);
>>> @@ -1571,39 +1591,37 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
>>>  	std	r9,PACA_EXRFI+EX_R9(r13)
>>>  	std	r10,PACA_EXRFI+EX_R10(r13)
>>>  	std	r11,PACA_EXRFI+EX_R11(r13)
>>> -	std	r12,PACA_EXRFI+EX_R12(r13)
>>> -	std	r8,PACA_EXRFI+EX_R13(r13)
>>>  	mfctr	r9
>>>  	ld	r10,PACA_RFI_FLUSH_FALLBACK_AREA(r13)
>>> -	ld	r11,PACA_L1D_FLUSH_SETS(r13)
>>> -	ld	r12,PACA_L1D_FLUSH_CONGRUENCE(r13)
>>> -	/*
>>> -	 * The load adresses are at staggered offsets within cachelines,
>>> -	 * which suits some pipelines better (on others it should not
>>> -	 * hurt).
>>> -	 */
>>> -	addi	r12,r12,8
>>> +	ld	r11,PACA_L1D_FLUSH_SIZE(r13)
>>> +	srdi	r11,r11,(7 + 3) /* 128 byte lines, unrolled 8x */
>>>  	mtctr	r11
>>>  	DCBT_STOP_ALL_STREAM_IDS(r11) /* Stop prefetch streams */
>>>  
>>>  	/* order ld/st prior to dcbt stop all streams with flushing */
>>>  	sync
>>> -1:	li	r8,0
>>> -	.rept	8 /* 8-way set associative */
>>> -	ldx	r11,r10,r8
>>> -	add	r8,r8,r12
>>> -	xor	r11,r11,r11	// Ensure r11 is 0 even if fallback area is not
>>> -	add	r8,r8,r11	// Add 0, this creates a dependency on the ldx
>>> -	.endr
>>> -	addi	r10,r10,128 /* 128 byte cache line */
>>> +
>>> +	/*
>>> +	 * The load adresses are at staggered offsets within cachelines,
>>> +	 * which suits some pipelines better (on others it should not
>>> +	 * hurt).
>>> +	 */
>>> +1:
>>> +	ld	r11,(0x80 + 8)*0(r10)
>>> +	ld	r11,(0x80 + 8)*1(r10)
>>> +	ld	r11,(0x80 + 8)*2(r10)
>>> +	ld	r11,(0x80 + 8)*3(r10)
>>> +	ld	r11,(0x80 + 8)*4(r10)
>>> +	ld	r11,(0x80 + 8)*5(r10)
>>> +	ld	r11,(0x80 + 8)*6(r10)
>>> +	ld	r11,(0x80 + 8)*7(r10)
>>> +	addi	r10,r10,0x80*8
>>>  	bdnz	1b
>>>  
>>>  	mtctr	r9
>>>  	ld	r9,PACA_EXRFI+EX_R9(r13)
>>>  	ld	r10,PACA_EXRFI+EX_R10(r13)
>>>  	ld	r11,PACA_EXRFI+EX_R11(r13)
>>> -	ld	r12,PACA_EXRFI+EX_R12(r13)
>>> -	ld	r8,PACA_EXRFI+EX_R13(r13)
>>>  	GET_SCRATCH0(r13);
>>>  	rfid
>>>  
>>> @@ -1614,39 +1632,37 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
>>>  	std	r9,PACA_EXRFI+EX_R9(r13)
>>>  	std	r10,PACA_EXRFI+EX_R10(r13)
>>>  	std	r11,PACA_EXRFI+EX_R11(r13)
>>> -	std	r12,PACA_EXRFI+EX_R12(r13)
>>> -	std	r8,PACA_EXRFI+EX_R13(r13)
>>>  	mfctr	r9
>>>  	ld	r10,PACA_RFI_FLUSH_FALLBACK_AREA(r13)
>>> -	ld	r11,PACA_L1D_FLUSH_SETS(r13)
>>> -	ld	r12,PACA_L1D_FLUSH_CONGRUENCE(r13)
>>> -	/*
>>> -	 * The load adresses are at staggered offsets within cachelines,
>>> -	 * which suits some pipelines better (on others it should not
>>> -	 * hurt).
>>> -	 */
>>> -	addi	r12,r12,8
>>> +	ld	r11,PACA_L1D_FLUSH_SIZE(r13)
>>> +	srdi	r11,r11,(7 + 3) /* 128 byte lines, unrolled 8x */
>>>  	mtctr	r11
>>>  	DCBT_STOP_ALL_STREAM_IDS(r11) /* Stop prefetch streams */
>>>  
>>>  	/* order ld/st prior to dcbt stop all streams with flushing */
>>>  	sync
>>> -1:	li	r8,0
>>> -	.rept	8 /* 8-way set associative */
>>> -	ldx	r11,r10,r8
>>> -	add	r8,r8,r12
>>> -	xor	r11,r11,r11	// Ensure r11 is 0 even if fallback area is not
>>> -	add	r8,r8,r11	// Add 0, this creates a dependency on the ldx
>>> -	.endr
>>> -	addi	r10,r10,128 /* 128 byte cache line */
>>> +
>>> +	/*
>>> +	 * The load adresses are at staggered offsets within cachelines,
>>> +	 * which suits some pipelines better (on others it should not
>>> +	 * hurt).
>>> +	 */
>>> +1:
>>> +	ld	r11,(0x80 + 8)*0(r10)
>>> +	ld	r11,(0x80 + 8)*1(r10)
>>> +	ld	r11,(0x80 + 8)*2(r10)
>>> +	ld	r11,(0x80 + 8)*3(r10)
>>> +	ld	r11,(0x80 + 8)*4(r10)
>>> +	ld	r11,(0x80 + 8)*5(r10)
>>> +	ld	r11,(0x80 + 8)*6(r10)
>>> +	ld	r11,(0x80 + 8)*7(r10)
>>> +	addi	r10,r10,0x80*8
>>>  	bdnz	1b
>>>  
>>>  	mtctr	r9
>>>  	ld	r9,PACA_EXRFI+EX_R9(r13)
>>>  	ld	r10,PACA_EXRFI+EX_R10(r13)
>>>  	ld	r11,PACA_EXRFI+EX_R11(r13)
>>> -	ld	r12,PACA_EXRFI+EX_R12(r13)
>>> -	ld	r8,PACA_EXRFI+EX_R13(r13)
>>>  	GET_SCRATCH0(r13);
>>>  	hrfid
>>>  
>>> diff --git a/arch/powerpc/kernel/module.c b/arch/powerpc/kernel/module.c
>>> index 9547381b631a..ff009be97a42 100644
>>> --- a/arch/powerpc/kernel/module.c
>>> +++ b/arch/powerpc/kernel/module.c
>>> @@ -67,7 +67,15 @@ int module_finalize(const Elf_Ehdr *hdr,
>>>  		do_feature_fixups(powerpc_firmware_features,
>>>  				  (void *)sect->sh_addr,
>>>  				  (void *)sect->sh_addr + sect->sh_size);
>>> -#endif
>>> +#endif /* CONFIG_PPC64 */
>>> +
>>> +#ifdef CONFIG_PPC_BARRIER_NOSPEC
>>> +	sect = find_section(hdr, sechdrs, "__spec_barrier_fixup");
>>> +	if (sect != NULL)
>>> +		do_barrier_nospec_fixups_range(barrier_nospec_enabled,
>>> +				  (void *)sect->sh_addr,
>>> +				  (void *)sect->sh_addr + sect->sh_size);
>>> +#endif /* CONFIG_PPC_BARRIER_NOSPEC */
>>>  
>>>  	sect = find_section(hdr, sechdrs, "__lwsync_fixup");
>>>  	if (sect != NULL)
>>> diff --git a/arch/powerpc/kernel/security.c b/arch/powerpc/kernel/security.c
>>> new file mode 100644
>>> index 000000000000..58f0602a92b9
>>> --- /dev/null
>>> +++ b/arch/powerpc/kernel/security.c
>>> @@ -0,0 +1,433 @@
>>> +// SPDX-License-Identifier: GPL-2.0+
>>> +//
>>> +// Security related flags and so on.
>>> +//
>>> +// Copyright 2018, Michael Ellerman, IBM Corporation.
>>> +
>>> +#include <linux/kernel.h>
>>> +#include <linux/debugfs.h>
>>> +#include <linux/device.h>
>>> +#include <linux/seq_buf.h>
>>> +
>>> +#include <asm/debug.h>
>>> +#include <asm/asm-prototypes.h>
>>> +#include <asm/code-patching.h>
>>> +#include <asm/security_features.h>
>>> +#include <asm/setup.h>
>>> +
>>> +
>>> +unsigned long powerpc_security_features __read_mostly = SEC_FTR_DEFAULT;
>>> +
>>> +enum count_cache_flush_type {
>>> +	COUNT_CACHE_FLUSH_NONE	= 0x1,
>>> +	COUNT_CACHE_FLUSH_SW	= 0x2,
>>> +	COUNT_CACHE_FLUSH_HW	= 0x4,
>>> +};
>>> +static enum count_cache_flush_type count_cache_flush_type = COUNT_CACHE_FLUSH_NONE;
>>> +
>>> +bool barrier_nospec_enabled;
>>> +static bool no_nospec;
>>> +static bool btb_flush_enabled;
>>> +#ifdef CONFIG_PPC_FSL_BOOK3E
>>> +static bool no_spectrev2;
>>> +#endif
>>> +
>>> +static void enable_barrier_nospec(bool enable)
>>> +{
>>> +	barrier_nospec_enabled = enable;
>>> +	do_barrier_nospec_fixups(enable);
>>> +}
>>> +
>>> +void setup_barrier_nospec(void)
>>> +{
>>> +	bool enable;
>>> +
>>> +	/*
>>> +	 * It would make sense to check SEC_FTR_SPEC_BAR_ORI31 below as well.
>>> +	 * But there's a good reason not to. The two flags we check below are
>>> +	 * both enabled by default in the kernel, so if the hcall is not
>>> +	 * functional they will be enabled.
>>> +	 * On a system where the host firmware has been updated (so the ori
>>> +	 * functions as a barrier), but on which the hypervisor (KVM/Qemu) has
>>> +	 * not been updated, we would like to enable the barrier. Dropping the
>>> +	 * check for SEC_FTR_SPEC_BAR_ORI31 achieves that. The only downside is
>>> +	 * we potentially enable the barrier on systems where the host firmware
>>> +	 * is not updated, but that's harmless as it's a no-op.
>>> +	 */
>>> +	enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) &&
>>> +		 security_ftr_enabled(SEC_FTR_BNDS_CHK_SPEC_BAR);
>>> +
>>> +	if (!no_nospec)
>>> +		enable_barrier_nospec(enable);
>>> +}
>>> +
>>> +static int __init handle_nospectre_v1(char *p)
>>> +{
>>> +	no_nospec = true;
>>> +
>>> +	return 0;
>>> +}
>>> +early_param("nospectre_v1", handle_nospectre_v1);
>>> +
>>> +#ifdef CONFIG_DEBUG_FS
>>> +static int barrier_nospec_set(void *data, u64 val)
>>> +{
>>> +	switch (val) {
>>> +	case 0:
>>> +	case 1:
>>> +		break;
>>> +	default:
>>> +		return -EINVAL;
>>> +	}
>>> +
>>> +	if (!!val == !!barrier_nospec_enabled)
>>> +		return 0;
>>> +
>>> +	enable_barrier_nospec(!!val);
>>> +
>>> +	return 0;
>>> +}
>>> +
>>> +static int barrier_nospec_get(void *data, u64 *val)
>>> +{
>>> +	*val = barrier_nospec_enabled ? 1 : 0;
>>> +	return 0;
>>> +}
>>> +
>>> +DEFINE_SIMPLE_ATTRIBUTE(fops_barrier_nospec,
>>> +			barrier_nospec_get, barrier_nospec_set, "%llu\n");
>>> +
>>> +static __init int barrier_nospec_debugfs_init(void)
>>> +{
>>> +	debugfs_create_file("barrier_nospec", 0600, powerpc_debugfs_root, NULL,
>>> +			    &fops_barrier_nospec);
>>> +	return 0;
>>> +}
>>> +device_initcall(barrier_nospec_debugfs_init);
>>> +#endif /* CONFIG_DEBUG_FS */
>>> +
>>> +#ifdef CONFIG_PPC_FSL_BOOK3E
>>> +static int __init handle_nospectre_v2(char *p)
>>> +{
>>> +	no_spectrev2 = true;
>>> +
>>> +	return 0;
>>> +}
>>> +early_param("nospectre_v2", handle_nospectre_v2);
>>> +void setup_spectre_v2(void)
>>> +{
>>> +	if (no_spectrev2)
>>> +		do_btb_flush_fixups();
>>> +	else
>>> +		btb_flush_enabled = true;
>>> +}
>>> +#endif /* CONFIG_PPC_FSL_BOOK3E */
>>> +
>>> +#ifdef CONFIG_PPC_BOOK3S_64
>>> +ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr, char *buf)
>>> +{
>>> +	bool thread_priv;
>>> +
>>> +	thread_priv = security_ftr_enabled(SEC_FTR_L1D_THREAD_PRIV);
>>> +
>>> +	if (rfi_flush || thread_priv) {
>>> +		struct seq_buf s;
>>> +		seq_buf_init(&s, buf, PAGE_SIZE - 1);
>>> +
>>> +		seq_buf_printf(&s, "Mitigation: ");
>>> +
>>> +		if (rfi_flush)
>>> +			seq_buf_printf(&s, "RFI Flush");
>>> +
>>> +		if (rfi_flush && thread_priv)
>>> +			seq_buf_printf(&s, ", ");
>>> +
>>> +		if (thread_priv)
>>> +			seq_buf_printf(&s, "L1D private per thread");
>>> +
>>> +		seq_buf_printf(&s, "\n");
>>> +
>>> +		return s.len;
>>> +	}
>>> +
>>> +	if (!security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV) &&
>>> +	    !security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR))
>>> +		return sprintf(buf, "Not affected\n");
>>> +
>>> +	return sprintf(buf, "Vulnerable\n");
>>> +}
>>> +#endif
>>> +
>>> +ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr, char *buf)
>>> +{
>>> +	struct seq_buf s;
>>> +
>>> +	seq_buf_init(&s, buf, PAGE_SIZE - 1);
>>> +
>>> +	if (security_ftr_enabled(SEC_FTR_BNDS_CHK_SPEC_BAR)) {
>>> +		if (barrier_nospec_enabled)
>>> +			seq_buf_printf(&s, "Mitigation: __user pointer sanitization");
>>> +		else
>>> +			seq_buf_printf(&s, "Vulnerable");
>>> +
>>> +		if (security_ftr_enabled(SEC_FTR_SPEC_BAR_ORI31))
>>> +			seq_buf_printf(&s, ", ori31 speculation barrier enabled");
>>> +
>>> +		seq_buf_printf(&s, "\n");
>>> +	} else
>>> +		seq_buf_printf(&s, "Not affected\n");
>>> +
>>> +	return s.len;
>>> +}
>>> +
>>> +ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr, char *buf)
>>> +{
>>> +	struct seq_buf s;
>>> +	bool bcs, ccd;
>>> +
>>> +	seq_buf_init(&s, buf, PAGE_SIZE - 1);
>>> +
>>> +	bcs = security_ftr_enabled(SEC_FTR_BCCTRL_SERIALISED);
>>> +	ccd = security_ftr_enabled(SEC_FTR_COUNT_CACHE_DISABLED);
>>> +
>>> +	if (bcs || ccd) {
>>> +		seq_buf_printf(&s, "Mitigation: ");
>>> +
>>> +		if (bcs)
>>> +			seq_buf_printf(&s, "Indirect branch serialisation (kernel only)");
>>> +
>>> +		if (bcs && ccd)
>>> +			seq_buf_printf(&s, ", ");
>>> +
>>> +		if (ccd)
>>> +			seq_buf_printf(&s, "Indirect branch cache disabled");
>>> +	} else if (count_cache_flush_type != COUNT_CACHE_FLUSH_NONE) {
>>> +		seq_buf_printf(&s, "Mitigation: Software count cache flush");
>>> +
>>> +		if (count_cache_flush_type == COUNT_CACHE_FLUSH_HW)
>>> +			seq_buf_printf(&s, " (hardware accelerated)");
>>> +	} else if (btb_flush_enabled) {
>>> +		seq_buf_printf(&s, "Mitigation: Branch predictor state flush");
>>> +	} else {
>>> +		seq_buf_printf(&s, "Vulnerable");
>>> +	}
>>> +
>>> +	seq_buf_printf(&s, "\n");
>>> +
>>> +	return s.len;
>>> +}
>>> +
>>> +#ifdef CONFIG_PPC_BOOK3S_64
>>> +/*
>>> + * Store-forwarding barrier support.
>>> + */
>>> +
>>> +static enum stf_barrier_type stf_enabled_flush_types;
>>> +static bool no_stf_barrier;
>>> +bool stf_barrier;
>>> +
>>> +static int __init handle_no_stf_barrier(char *p)
>>> +{
>>> +	pr_info("stf-barrier: disabled on command line.");
>>> +	no_stf_barrier = true;
>>> +	return 0;
>>> +}
>>> +
>>> +early_param("no_stf_barrier", handle_no_stf_barrier);
>>> +
>>> +/* This is the generic flag used by other architectures */
>>> +static int __init handle_ssbd(char *p)
>>> +{
>>> +	if (!p || strncmp(p, "auto", 5) == 0 || strncmp(p, "on", 2) == 0 ) {
>>> +		/* Until firmware tells us, we have the barrier with auto */
>>> +		return 0;
>>> +	} else if (strncmp(p, "off", 3) == 0) {
>>> +		handle_no_stf_barrier(NULL);
>>> +		return 0;
>>> +	} else
>>> +		return 1;
>>> +
>>> +	return 0;
>>> +}
>>> +early_param("spec_store_bypass_disable", handle_ssbd);
>>> +
>>> +/* This is the generic flag used by other architectures */
>>> +static int __init handle_no_ssbd(char *p)
>>> +{
>>> +	handle_no_stf_barrier(NULL);
>>> +	return 0;
>>> +}
>>> +early_param("nospec_store_bypass_disable", handle_no_ssbd);
>>> +
>>> +static void stf_barrier_enable(bool enable)
>>> +{
>>> +	if (enable)
>>> +		do_stf_barrier_fixups(stf_enabled_flush_types);
>>> +	else
>>> +		do_stf_barrier_fixups(STF_BARRIER_NONE);
>>> +
>>> +	stf_barrier = enable;
>>> +}
>>> +
>>> +void setup_stf_barrier(void)
>>> +{
>>> +	enum stf_barrier_type type;
>>> +	bool enable, hv;
>>> +
>>> +	hv = cpu_has_feature(CPU_FTR_HVMODE);
>>> +
>>> +	/* Default to fallback in case fw-features are not available */
>>> +	if (cpu_has_feature(CPU_FTR_ARCH_207S))
>>> +		type = STF_BARRIER_SYNC_ORI;
>>> +	else if (cpu_has_feature(CPU_FTR_ARCH_206))
>>> +		type = STF_BARRIER_FALLBACK;
>>> +	else
>>> +		type = STF_BARRIER_NONE;
>>> +
>>> +	enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) &&
>>> +		(security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR) ||
>>> +		 (security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV) && hv));
>>> +
>>> +	if (type == STF_BARRIER_FALLBACK) {
>>> +		pr_info("stf-barrier: fallback barrier available\n");
>>> +	} else if (type == STF_BARRIER_SYNC_ORI) {
>>> +		pr_info("stf-barrier: hwsync barrier available\n");
>>> +	} else if (type == STF_BARRIER_EIEIO) {
>>> +		pr_info("stf-barrier: eieio barrier available\n");
>>> +	}
>>> +
>>> +	stf_enabled_flush_types = type;
>>> +
>>> +	if (!no_stf_barrier)
>>> +		stf_barrier_enable(enable);
>>> +}
>>> +
>>> +ssize_t cpu_show_spec_store_bypass(struct device *dev, struct device_attribute *attr, char *buf)
>>> +{
>>> +	if (stf_barrier && stf_enabled_flush_types != STF_BARRIER_NONE) {
>>> +		const char *type;
>>> +		switch (stf_enabled_flush_types) {
>>> +		case STF_BARRIER_EIEIO:
>>> +			type = "eieio";
>>> +			break;
>>> +		case STF_BARRIER_SYNC_ORI:
>>> +			type = "hwsync";
>>> +			break;
>>> +		case STF_BARRIER_FALLBACK:
>>> +			type = "fallback";
>>> +			break;
>>> +		default:
>>> +			type = "unknown";
>>> +		}
>>> +		return sprintf(buf, "Mitigation: Kernel entry/exit barrier (%s)\n", type);
>>> +	}
>>> +
>>> +	if (!security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV) &&
>>> +	    !security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR))
>>> +		return sprintf(buf, "Not affected\n");
>>> +
>>> +	return sprintf(buf, "Vulnerable\n");
>>> +}
>>> +
>>> +#ifdef CONFIG_DEBUG_FS
>>> +static int stf_barrier_set(void *data, u64 val)
>>> +{
>>> +	bool enable;
>>> +
>>> +	if (val == 1)
>>> +		enable = true;
>>> +	else if (val == 0)
>>> +		enable = false;
>>> +	else
>>> +		return -EINVAL;
>>> +
>>> +	/* Only do anything if we're changing state */
>>> +	if (enable != stf_barrier)
>>> +		stf_barrier_enable(enable);
>>> +
>>> +	return 0;
>>> +}
>>> +
>>> +static int stf_barrier_get(void *data, u64 *val)
>>> +{
>>> +	*val = stf_barrier ? 1 : 0;
>>> +	return 0;
>>> +}
>>> +
>>> +DEFINE_SIMPLE_ATTRIBUTE(fops_stf_barrier, stf_barrier_get, stf_barrier_set, "%llu\n");
>>> +
>>> +static __init int stf_barrier_debugfs_init(void)
>>> +{
>>> +	debugfs_create_file("stf_barrier", 0600, powerpc_debugfs_root, NULL, &fops_stf_barrier);
>>> +	return 0;
>>> +}
>>> +device_initcall(stf_barrier_debugfs_init);
>>> +#endif /* CONFIG_DEBUG_FS */
>>> +
>>> +static void toggle_count_cache_flush(bool enable)
>>> +{
>>> +	if (!enable || !security_ftr_enabled(SEC_FTR_FLUSH_COUNT_CACHE)) {
>>> +		patch_instruction_site(&patch__call_flush_count_cache, PPC_INST_NOP);
>>> +		count_cache_flush_type = COUNT_CACHE_FLUSH_NONE;
>>> +		pr_info("count-cache-flush: software flush disabled.\n");
>>> +		return;
>>> +	}
>>> +
>>> +	patch_branch_site(&patch__call_flush_count_cache,
>>> +			  (u64)&flush_count_cache, BRANCH_SET_LINK);
>>> +
>>> +	if (!security_ftr_enabled(SEC_FTR_BCCTR_FLUSH_ASSIST)) {
>>> +		count_cache_flush_type = COUNT_CACHE_FLUSH_SW;
>>> +		pr_info("count-cache-flush: full software flush sequence enabled.\n");
>>> +		return;
>>> +	}
>>> +
>>> +	patch_instruction_site(&patch__flush_count_cache_return, PPC_INST_BLR);
>>> +	count_cache_flush_type = COUNT_CACHE_FLUSH_HW;
>>> +	pr_info("count-cache-flush: hardware assisted flush sequence enabled\n");
>>> +}
>>> +
>>> +void setup_count_cache_flush(void)
>>> +{
>>> +	toggle_count_cache_flush(true);
>>> +}
>>> +
>>> +#ifdef CONFIG_DEBUG_FS
>>> +static int count_cache_flush_set(void *data, u64 val)
>>> +{
>>> +	bool enable;
>>> +
>>> +	if (val == 1)
>>> +		enable = true;
>>> +	else if (val == 0)
>>> +		enable = false;
>>> +	else
>>> +		return -EINVAL;
>>> +
>>> +	toggle_count_cache_flush(enable);
>>> +
>>> +	return 0;
>>> +}
>>> +
>>> +static int count_cache_flush_get(void *data, u64 *val)
>>> +{
>>> +	if (count_cache_flush_type == COUNT_CACHE_FLUSH_NONE)
>>> +		*val = 0;
>>> +	else
>>> +		*val = 1;
>>> +
>>> +	return 0;
>>> +}
>>> +
>>> +DEFINE_SIMPLE_ATTRIBUTE(fops_count_cache_flush, count_cache_flush_get,
>>> +			count_cache_flush_set, "%llu\n");
>>> +
>>> +static __init int count_cache_flush_debugfs_init(void)
>>> +{
>>> +	debugfs_create_file("count_cache_flush", 0600, powerpc_debugfs_root,
>>> +			    NULL, &fops_count_cache_flush);
>>> +	return 0;
>>> +}
>>> +device_initcall(count_cache_flush_debugfs_init);
>>> +#endif /* CONFIG_DEBUG_FS */
>>> +#endif /* CONFIG_PPC_BOOK3S_64 */
>>> diff --git a/arch/powerpc/kernel/setup_32.c b/arch/powerpc/kernel/setup_32.c
>>> index ad8c9db61237..5a9f035bcd6b 100644
>>> --- a/arch/powerpc/kernel/setup_32.c
>>> +++ b/arch/powerpc/kernel/setup_32.c
>>> @@ -322,6 +322,8 @@ void __init setup_arch(char **cmdline_p)
>>>  		ppc_md.setup_arch();
>>>  	if ( ppc_md.progress ) ppc_md.progress("arch: exit", 0x3eab);
>>>  
>>> +	setup_barrier_nospec();
>>> +
>>>  	paging_init();
>>>  
>>>  	/* Initialize the MMU context management stuff */
>>> diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
>>> index 9eb469bed22b..6bb731ababc6 100644
>>> --- a/arch/powerpc/kernel/setup_64.c
>>> +++ b/arch/powerpc/kernel/setup_64.c
>>> @@ -736,6 +736,8 @@ void __init setup_arch(char **cmdline_p)
>>>  	if (ppc_md.setup_arch)
>>>  		ppc_md.setup_arch();
>>>  
>>> +	setup_barrier_nospec();
>>> +
>>>  	paging_init();
>>>  
>>>  	/* Initialize the MMU context management stuff */
>>> @@ -873,9 +875,6 @@ static void do_nothing(void *unused)
>>>  
>>>  void rfi_flush_enable(bool enable)
>>>  {
>>> -	if (rfi_flush == enable)
>>> -		return;
>>> -
>>>  	if (enable) {
>>>  		do_rfi_flush_fixups(enabled_flush_types);
>>>  		on_each_cpu(do_nothing, NULL, 1);
>>> @@ -885,11 +884,15 @@ void rfi_flush_enable(bool enable)
>>>  	rfi_flush = enable;
>>>  }
>>>  
>>> -static void init_fallback_flush(void)
>>> +static void __ref init_fallback_flush(void)
>>>  {
>>>  	u64 l1d_size, limit;
>>>  	int cpu;
>>>  
>>> +	/* Only allocate the fallback flush area once (at boot time). */
>>> +	if (l1d_flush_fallback_area)
>>> +		return;
>>> +
>>>  	l1d_size = ppc64_caches.dsize;
>>>  	limit = min(safe_stack_limit(), ppc64_rma_size);
>>>  
>>> @@ -902,34 +905,23 @@ static void init_fallback_flush(void)
>>>  	memset(l1d_flush_fallback_area, 0, l1d_size * 2);
>>>  
>>>  	for_each_possible_cpu(cpu) {
>>> -		/*
>>> -		 * The fallback flush is currently coded for 8-way
>>> -		 * associativity. Different associativity is possible, but it
>>> -		 * will be treated as 8-way and may not evict the lines as
>>> -		 * effectively.
>>> -		 *
>>> -		 * 128 byte lines are mandatory.
>>> -		 */
>>> -		u64 c = l1d_size / 8;
>>> -
>>>  		paca[cpu].rfi_flush_fallback_area = l1d_flush_fallback_area;
>>> -		paca[cpu].l1d_flush_congruence = c;
>>> -		paca[cpu].l1d_flush_sets = c / 128;
>>> +		paca[cpu].l1d_flush_size = l1d_size;
>>>  	}
>>>  }
>>>  
>>> -void __init setup_rfi_flush(enum l1d_flush_type types, bool enable)
>>> +void setup_rfi_flush(enum l1d_flush_type types, bool enable)
>>>  {
>>>  	if (types & L1D_FLUSH_FALLBACK) {
>>> -		pr_info("rfi-flush: Using fallback displacement flush\n");
>>> +		pr_info("rfi-flush: fallback displacement flush available\n");
>>>  		init_fallback_flush();
>>>  	}
>>>  
>>>  	if (types & L1D_FLUSH_ORI)
>>> -		pr_info("rfi-flush: Using ori type flush\n");
>>> +		pr_info("rfi-flush: ori type flush available\n");
>>>  
>>>  	if (types & L1D_FLUSH_MTTRIG)
>>> -		pr_info("rfi-flush: Using mttrig type flush\n");
>>> +		pr_info("rfi-flush: mttrig type flush available\n");
>>>  
>>>  	enabled_flush_types = types;
>>>  
>>> @@ -940,13 +932,19 @@ void __init setup_rfi_flush(enum l1d_flush_type types, bool enable)
>>>  #ifdef CONFIG_DEBUG_FS
>>>  static int rfi_flush_set(void *data, u64 val)
>>>  {
>>> +	bool enable;
>>> +
>>>  	if (val == 1)
>>> -		rfi_flush_enable(true);
>>> +		enable = true;
>>>  	else if (val == 0)
>>> -		rfi_flush_enable(false);
>>> +		enable = false;
>>>  	else
>>>  		return -EINVAL;
>>>  
>>> +	/* Only do anything if we're changing state */
>>> +	if (enable != rfi_flush)
>>> +		rfi_flush_enable(enable);
>>> +
>>>  	return 0;
>>>  }
>>>  
>>> @@ -965,12 +963,4 @@ static __init int rfi_flush_debugfs_init(void)
>>>  }
>>>  device_initcall(rfi_flush_debugfs_init);
>>>  #endif
>>> -
>>> -ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr, char *buf)
>>> -{
>>> -	if (rfi_flush)
>>> -		return sprintf(buf, "Mitigation: RFI Flush\n");
>>> -
>>> -	return sprintf(buf, "Vulnerable\n");
>>> -}
>>>  #endif /* CONFIG_PPC_BOOK3S_64 */
>>> diff --git a/arch/powerpc/kernel/vmlinux.lds.S b/arch/powerpc/kernel/vmlinux.lds.S
>>> index 072a23a17350..876ac9d52afc 100644
>>> --- a/arch/powerpc/kernel/vmlinux.lds.S
>>> +++ b/arch/powerpc/kernel/vmlinux.lds.S
>>> @@ -73,14 +73,45 @@ SECTIONS
>>>  	RODATA
>>>  
>>>  #ifdef CONFIG_PPC64
>>> +	. = ALIGN(8);
>>> +	__stf_entry_barrier_fixup : AT(ADDR(__stf_entry_barrier_fixup) - LOAD_OFFSET) {
>>> +		__start___stf_entry_barrier_fixup = .;
>>> +		*(__stf_entry_barrier_fixup)
>>> +		__stop___stf_entry_barrier_fixup = .;
>>> +	}
>>> +
>>> +	. = ALIGN(8);
>>> +	__stf_exit_barrier_fixup : AT(ADDR(__stf_exit_barrier_fixup) - LOAD_OFFSET) {
>>> +		__start___stf_exit_barrier_fixup = .;
>>> +		*(__stf_exit_barrier_fixup)
>>> +		__stop___stf_exit_barrier_fixup = .;
>>> +	}
>>> +
>>>  	. = ALIGN(8);
>>>  	__rfi_flush_fixup : AT(ADDR(__rfi_flush_fixup) - LOAD_OFFSET) {
>>>  		__start___rfi_flush_fixup = .;
>>>  		*(__rfi_flush_fixup)
>>>  		__stop___rfi_flush_fixup = .;
>>>  	}
>>> -#endif
>>> +#endif /* CONFIG_PPC64 */
>>>  
>>> +#ifdef CONFIG_PPC_BARRIER_NOSPEC
>>> +	. = ALIGN(8);
>>> +	__spec_barrier_fixup : AT(ADDR(__spec_barrier_fixup) - LOAD_OFFSET) {
>>> +		__start___barrier_nospec_fixup = .;
>>> +		*(__barrier_nospec_fixup)
>>> +		__stop___barrier_nospec_fixup = .;
>>> +	}
>>> +#endif /* CONFIG_PPC_BARRIER_NOSPEC */
>>> +
>>> +#ifdef CONFIG_PPC_FSL_BOOK3E
>>> +	. = ALIGN(8);
>>> +	__spec_btb_flush_fixup : AT(ADDR(__spec_btb_flush_fixup) - LOAD_OFFSET) {
>>> +		__start__btb_flush_fixup = .;
>>> +		*(__btb_flush_fixup)
>>> +		__stop__btb_flush_fixup = .;
>>> +	}
>>> +#endif
>>>  	EXCEPTION_TABLE(0)
>>>  
>>>  	NOTES :kernel :notes
>>> diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
>>> index d5edbeb8eb82..570c06a00db6 100644
>>> --- a/arch/powerpc/lib/code-patching.c
>>> +++ b/arch/powerpc/lib/code-patching.c
>>> @@ -14,12 +14,25 @@
>>>  #include <asm/page.h>
>>>  #include <asm/code-patching.h>
>>>  #include <asm/uaccess.h>
>>> +#include <asm/setup.h>
>>> +#include <asm/sections.h>
>>>  
>>>  
>>> +static inline bool is_init(unsigned int *addr)
>>> +{
>>> +	return addr >= (unsigned int *)__init_begin && addr < (unsigned int *)__init_end;
>>> +}
>>> +
>>>  int patch_instruction(unsigned int *addr, unsigned int instr)
>>>  {
>>>  	int err;
>>>  
>>> +	/* Make sure we aren't patching a freed init section */
>>> +	if (init_mem_is_free && is_init(addr)) {
>>> +		pr_debug("Skipping init section patching addr: 0x%px\n", addr);
>>> +		return 0;
>>> +	}
>>> +
>>>  	__put_user_size(instr, addr, 4, err);
>>>  	if (err)
>>>  		return err;
>>> @@ -32,6 +45,22 @@ int patch_branch(unsigned int *addr, unsigned long target, int flags)
>>>  	return patch_instruction(addr, create_branch(addr, target, flags));
>>>  }
>>>  
>>> +int patch_branch_site(s32 *site, unsigned long target, int flags)
>>> +{
>>> +	unsigned int *addr;
>>> +
>>> +	addr = (unsigned int *)((unsigned long)site + *site);
>>> +	return patch_instruction(addr, create_branch(addr, target, flags));
>>> +}
>>> +
>>> +int patch_instruction_site(s32 *site, unsigned int instr)
>>> +{
>>> +	unsigned int *addr;
>>> +
>>> +	addr = (unsigned int *)((unsigned long)site + *site);
>>> +	return patch_instruction(addr, instr);
>>> +}
>>> +
>>>  unsigned int create_branch(const unsigned int *addr,
>>>  			   unsigned long target, int flags)
>>>  {
>>> diff --git a/arch/powerpc/lib/feature-fixups.c b/arch/powerpc/lib/feature-fixups.c
>>> index 3af014684872..7bdfc19a491d 100644
>>> --- a/arch/powerpc/lib/feature-fixups.c
>>> +++ b/arch/powerpc/lib/feature-fixups.c
>>> @@ -21,7 +21,7 @@
>>>  #include <asm/page.h>
>>>  #include <asm/sections.h>
>>>  #include <asm/setup.h>
>>> -
>>> +#include <asm/security_features.h>
>>>  
>>>  struct fixup_entry {
>>>  	unsigned long	mask;
>>> @@ -115,6 +115,120 @@ void do_feature_fixups(unsigned long value, void *fixup_start, void *fixup_end)
>>>  }
>>>  
>>>  #ifdef CONFIG_PPC_BOOK3S_64
>>> +void do_stf_entry_barrier_fixups(enum stf_barrier_type types)
>>> +{
>>> +	unsigned int instrs[3], *dest;
>>> +	long *start, *end;
>>> +	int i;
>>> +
>>> +	start = PTRRELOC(&__start___stf_entry_barrier_fixup),
>>> +	end = PTRRELOC(&__stop___stf_entry_barrier_fixup);
>>> +
>>> +	instrs[0] = 0x60000000; /* nop */
>>> +	instrs[1] = 0x60000000; /* nop */
>>> +	instrs[2] = 0x60000000; /* nop */
>>> +
>>> +	i = 0;
>>> +	if (types & STF_BARRIER_FALLBACK) {
>>> +		instrs[i++] = 0x7d4802a6; /* mflr r10		*/
>>> +		instrs[i++] = 0x60000000; /* branch patched below */
>>> +		instrs[i++] = 0x7d4803a6; /* mtlr r10		*/
>>> +	} else if (types & STF_BARRIER_EIEIO) {
>>> +		instrs[i++] = 0x7e0006ac; /* eieio + bit 6 hint */
>>> +	} else if (types & STF_BARRIER_SYNC_ORI) {
>>> +		instrs[i++] = 0x7c0004ac; /* hwsync		*/
>>> +		instrs[i++] = 0xe94d0000; /* ld r10,0(r13)	*/
>>> +		instrs[i++] = 0x63ff0000; /* ori 31,31,0 speculation barrier */
>>> +	}
>>> +
>>> +	for (i = 0; start < end; start++, i++) {
>>> +		dest = (void *)start + *start;
>>> +
>>> +		pr_devel("patching dest %lx\n", (unsigned long)dest);
>>> +
>>> +		patch_instruction(dest, instrs[0]);
>>> +
>>> +		if (types & STF_BARRIER_FALLBACK)
>>> +			patch_branch(dest + 1, (unsigned long)&stf_barrier_fallback,
>>> +				     BRANCH_SET_LINK);
>>> +		else
>>> +			patch_instruction(dest + 1, instrs[1]);
>>> +
>>> +		patch_instruction(dest + 2, instrs[2]);
>>> +	}
>>> +
>>> +	printk(KERN_DEBUG "stf-barrier: patched %d entry locations (%s barrier)\n", i,
>>> +		(types == STF_BARRIER_NONE)                  ? "no" :
>>> +		(types == STF_BARRIER_FALLBACK)              ? "fallback" :
>>> +		(types == STF_BARRIER_EIEIO)                 ? "eieio" :
>>> +		(types == (STF_BARRIER_SYNC_ORI))            ? "hwsync"
>>> +		                                           : "unknown");
>>> +}
>>> +
>>> +void do_stf_exit_barrier_fixups(enum stf_barrier_type types)
>>> +{
>>> +	unsigned int instrs[6], *dest;
>>> +	long *start, *end;
>>> +	int i;
>>> +
>>> +	start = PTRRELOC(&__start___stf_exit_barrier_fixup),
>>> +	end = PTRRELOC(&__stop___stf_exit_barrier_fixup);
>>> +
>>> +	instrs[0] = 0x60000000; /* nop */
>>> +	instrs[1] = 0x60000000; /* nop */
>>> +	instrs[2] = 0x60000000; /* nop */
>>> +	instrs[3] = 0x60000000; /* nop */
>>> +	instrs[4] = 0x60000000; /* nop */
>>> +	instrs[5] = 0x60000000; /* nop */
>>> +
>>> +	i = 0;
>>> +	if (types & STF_BARRIER_FALLBACK || types & STF_BARRIER_SYNC_ORI) {
>>> +		if (cpu_has_feature(CPU_FTR_HVMODE)) {
>>> +			instrs[i++] = 0x7db14ba6; /* mtspr 0x131, r13 (HSPRG1) */
>>> +			instrs[i++] = 0x7db04aa6; /* mfspr r13, 0x130 (HSPRG0) */
>>> +		} else {
>>> +			instrs[i++] = 0x7db243a6; /* mtsprg 2,r13	*/
>>> +			instrs[i++] = 0x7db142a6; /* mfsprg r13,1    */
>>> +	        }
>>> +		instrs[i++] = 0x7c0004ac; /* hwsync		*/
>>> +		instrs[i++] = 0xe9ad0000; /* ld r13,0(r13)	*/
>>> +		instrs[i++] = 0x63ff0000; /* ori 31,31,0 speculation barrier */
>>> +		if (cpu_has_feature(CPU_FTR_HVMODE)) {
>>> +			instrs[i++] = 0x7db14aa6; /* mfspr r13, 0x131 (HSPRG1) */
>>> +		} else {
>>> +			instrs[i++] = 0x7db242a6; /* mfsprg r13,2 */
>>> +		}
>>> +	} else if (types & STF_BARRIER_EIEIO) {
>>> +		instrs[i++] = 0x7e0006ac; /* eieio + bit 6 hint */
>>> +	}
>>> +
>>> +	for (i = 0; start < end; start++, i++) {
>>> +		dest = (void *)start + *start;
>>> +
>>> +		pr_devel("patching dest %lx\n", (unsigned long)dest);
>>> +
>>> +		patch_instruction(dest, instrs[0]);
>>> +		patch_instruction(dest + 1, instrs[1]);
>>> +		patch_instruction(dest + 2, instrs[2]);
>>> +		patch_instruction(dest + 3, instrs[3]);
>>> +		patch_instruction(dest + 4, instrs[4]);
>>> +		patch_instruction(dest + 5, instrs[5]);
>>> +	}
>>> +	printk(KERN_DEBUG "stf-barrier: patched %d exit locations (%s barrier)\n", i,
>>> +		(types == STF_BARRIER_NONE)                  ? "no" :
>>> +		(types == STF_BARRIER_FALLBACK)              ? "fallback" :
>>> +		(types == STF_BARRIER_EIEIO)                 ? "eieio" :
>>> +		(types == (STF_BARRIER_SYNC_ORI))            ? "hwsync"
>>> +		                                           : "unknown");
>>> +}
>>> +
>>> +
>>> +void do_stf_barrier_fixups(enum stf_barrier_type types)
>>> +{
>>> +	do_stf_entry_barrier_fixups(types);
>>> +	do_stf_exit_barrier_fixups(types);
>>> +}
>>> +
>>>  void do_rfi_flush_fixups(enum l1d_flush_type types)
>>>  {
>>>  	unsigned int instrs[3], *dest;
>>> @@ -151,10 +265,110 @@ void do_rfi_flush_fixups(enum l1d_flush_type types)
>>>  		patch_instruction(dest + 2, instrs[2]);
>>>  	}
>>>  
>>> -	printk(KERN_DEBUG "rfi-flush: patched %d locations\n", i);
>>> +	printk(KERN_DEBUG "rfi-flush: patched %d locations (%s flush)\n", i,
>>> +		(types == L1D_FLUSH_NONE)       ? "no" :
>>> +		(types == L1D_FLUSH_FALLBACK)   ? "fallback displacement" :
>>> +		(types &  L1D_FLUSH_ORI)        ? (types & L1D_FLUSH_MTTRIG)
>>> +							? "ori+mttrig type"
>>> +							: "ori type" :
>>> +		(types &  L1D_FLUSH_MTTRIG)     ? "mttrig type"
>>> +						: "unknown");
>>> +}
>>> +
>>> +void do_barrier_nospec_fixups_range(bool enable, void *fixup_start, void *fixup_end)
>>> +{
>>> +	unsigned int instr, *dest;
>>> +	long *start, *end;
>>> +	int i;
>>> +
>>> +	start = fixup_start;
>>> +	end = fixup_end;
>>> +
>>> +	instr = 0x60000000; /* nop */
>>> +
>>> +	if (enable) {
>>> +		pr_info("barrier-nospec: using ORI speculation barrier\n");
>>> +		instr = 0x63ff0000; /* ori 31,31,0 speculation barrier */
>>> +	}
>>> +
>>> +	for (i = 0; start < end; start++, i++) {
>>> +		dest = (void *)start + *start;
>>> +
>>> +		pr_devel("patching dest %lx\n", (unsigned long)dest);
>>> +		patch_instruction(dest, instr);
>>> +	}
>>> +
>>> +	printk(KERN_DEBUG "barrier-nospec: patched %d locations\n", i);
>>>  }
>>> +
>>>  #endif /* CONFIG_PPC_BOOK3S_64 */
>>>  
>>> +#ifdef CONFIG_PPC_BARRIER_NOSPEC
>>> +void do_barrier_nospec_fixups(bool enable)
>>> +{
>>> +	void *start, *end;
>>> +
>>> +	start = PTRRELOC(&__start___barrier_nospec_fixup),
>>> +	end = PTRRELOC(&__stop___barrier_nospec_fixup);
>>> +
>>> +	do_barrier_nospec_fixups_range(enable, start, end);
>>> +}
>>> +#endif /* CONFIG_PPC_BARRIER_NOSPEC */
>>> +
>>> +#ifdef CONFIG_PPC_FSL_BOOK3E
>>> +void do_barrier_nospec_fixups_range(bool enable, void *fixup_start, void *fixup_end)
>>> +{
>>> +	unsigned int instr[2], *dest;
>>> +	long *start, *end;
>>> +	int i;
>>> +
>>> +	start = fixup_start;
>>> +	end = fixup_end;
>>> +
>>> +	instr[0] = PPC_INST_NOP;
>>> +	instr[1] = PPC_INST_NOP;
>>> +
>>> +	if (enable) {
>>> +		pr_info("barrier-nospec: using isync; sync as speculation barrier\n");
>>> +		instr[0] = PPC_INST_ISYNC;
>>> +		instr[1] = PPC_INST_SYNC;
>>> +	}
>>> +
>>> +	for (i = 0; start < end; start++, i++) {
>>> +		dest = (void *)start + *start;
>>> +
>>> +		pr_devel("patching dest %lx\n", (unsigned long)dest);
>>> +		patch_instruction(dest, instr[0]);
>>> +		patch_instruction(dest + 1, instr[1]);
>>> +	}
>>> +
>>> +	printk(KERN_DEBUG "barrier-nospec: patched %d locations\n", i);
>>> +}
>>> +
>>> +static void patch_btb_flush_section(long *curr)
>>> +{
>>> +	unsigned int *start, *end;
>>> +
>>> +	start = (void *)curr + *curr;
>>> +	end = (void *)curr + *(curr + 1);
>>> +	for (; start < end; start++) {
>>> +		pr_devel("patching dest %lx\n", (unsigned long)start);
>>> +		patch_instruction(start, PPC_INST_NOP);
>>> +	}
>>> +}
>>> +
>>> +void do_btb_flush_fixups(void)
>>> +{
>>> +	long *start, *end;
>>> +
>>> +	start = PTRRELOC(&__start__btb_flush_fixup);
>>> +	end = PTRRELOC(&__stop__btb_flush_fixup);
>>> +
>>> +	for (; start < end; start += 2)
>>> +		patch_btb_flush_section(start);
>>> +}
>>> +#endif /* CONFIG_PPC_FSL_BOOK3E */
>>> +
>>>  void do_lwsync_fixups(unsigned long value, void *fixup_start, void *fixup_end)
>>>  {
>>>  	long *start, *end;
>>> diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
>>> index 22d94c3e6fc4..1efe5ca5c3bc 100644
>>> --- a/arch/powerpc/mm/mem.c
>>> +++ b/arch/powerpc/mm/mem.c
>>> @@ -62,6 +62,7 @@
>>>  #endif
>>>  
>>>  unsigned long long memory_limit;
>>> +bool init_mem_is_free;
>>>  
>>>  #ifdef CONFIG_HIGHMEM
>>>  pte_t *kmap_pte;
>>> @@ -381,6 +382,7 @@ void __init mem_init(void)
>>>  void free_initmem(void)
>>>  {
>>>  	ppc_md.progress = ppc_printk_progress;
>>> +	init_mem_is_free = true;
>>>  	free_initmem_default(POISON_FREE_INITMEM);
>>>  }
>>>  
>>> diff --git a/arch/powerpc/mm/tlb_low_64e.S b/arch/powerpc/mm/tlb_low_64e.S
>>> index 29d6987c37ba..5486d56da289 100644
>>> --- a/arch/powerpc/mm/tlb_low_64e.S
>>> +++ b/arch/powerpc/mm/tlb_low_64e.S
>>> @@ -69,6 +69,13 @@ END_FTR_SECTION_IFSET(CPU_FTR_EMB_HV)
>>>  	std	r15,EX_TLB_R15(r12)
>>>  	std	r10,EX_TLB_CR(r12)
>>>  #ifdef CONFIG_PPC_FSL_BOOK3E
>>> +START_BTB_FLUSH_SECTION
>>> +	mfspr r11, SPRN_SRR1
>>> +	andi. r10,r11,MSR_PR
>>> +	beq 1f
>>> +	BTB_FLUSH(r10)
>>> +1:
>>> +END_BTB_FLUSH_SECTION
>>>  	std	r7,EX_TLB_R7(r12)
>>>  #endif
>>>  	TLB_MISS_PROLOG_STATS
>>> diff --git a/arch/powerpc/platforms/powernv/setup.c b/arch/powerpc/platforms/powernv/setup.c
>>> index c57afc619b20..e14b52c7ebd8 100644
>>> --- a/arch/powerpc/platforms/powernv/setup.c
>>> +++ b/arch/powerpc/platforms/powernv/setup.c
>>> @@ -37,53 +37,99 @@
>>>  #include <asm/smp.h>
>>>  #include <asm/tm.h>
>>>  #include <asm/setup.h>
>>> +#include <asm/security_features.h>
>>>  
>>>  #include "powernv.h"
>>>  
>>> +
>>> +static bool fw_feature_is(const char *state, const char *name,
>>> +			  struct device_node *fw_features)
>>> +{
>>> +	struct device_node *np;
>>> +	bool rc = false;
>>> +
>>> +	np = of_get_child_by_name(fw_features, name);
>>> +	if (np) {
>>> +		rc = of_property_read_bool(np, state);
>>> +		of_node_put(np);
>>> +	}
>>> +
>>> +	return rc;
>>> +}
>>> +
>>> +static void init_fw_feat_flags(struct device_node *np)
>>> +{
>>> +	if (fw_feature_is("enabled", "inst-spec-barrier-ori31,31,0", np))
>>> +		security_ftr_set(SEC_FTR_SPEC_BAR_ORI31);
>>> +
>>> +	if (fw_feature_is("enabled", "fw-bcctrl-serialized", np))
>>> +		security_ftr_set(SEC_FTR_BCCTRL_SERIALISED);
>>> +
>>> +	if (fw_feature_is("enabled", "inst-l1d-flush-ori30,30,0", np))
>>> +		security_ftr_set(SEC_FTR_L1D_FLUSH_ORI30);
>>> +
>>> +	if (fw_feature_is("enabled", "inst-l1d-flush-trig2", np))
>>> +		security_ftr_set(SEC_FTR_L1D_FLUSH_TRIG2);
>>> +
>>> +	if (fw_feature_is("enabled", "fw-l1d-thread-split", np))
>>> +		security_ftr_set(SEC_FTR_L1D_THREAD_PRIV);
>>> +
>>> +	if (fw_feature_is("enabled", "fw-count-cache-disabled", np))
>>> +		security_ftr_set(SEC_FTR_COUNT_CACHE_DISABLED);
>>> +
>>> +	if (fw_feature_is("enabled", "fw-count-cache-flush-bcctr2,0,0", np))
>>> +		security_ftr_set(SEC_FTR_BCCTR_FLUSH_ASSIST);
>>> +
>>> +	if (fw_feature_is("enabled", "needs-count-cache-flush-on-context-switch", np))
>>> +		security_ftr_set(SEC_FTR_FLUSH_COUNT_CACHE);
>>> +
>>> +	/*
>>> +	 * The features below are enabled by default, so we instead look to see
>>> +	 * if firmware has *disabled* them, and clear them if so.
>>> +	 */
>>> +	if (fw_feature_is("disabled", "speculation-policy-favor-security", np))
>>> +		security_ftr_clear(SEC_FTR_FAVOUR_SECURITY);
>>> +
>>> +	if (fw_feature_is("disabled", "needs-l1d-flush-msr-pr-0-to-1", np))
>>> +		security_ftr_clear(SEC_FTR_L1D_FLUSH_PR);
>>> +
>>> +	if (fw_feature_is("disabled", "needs-l1d-flush-msr-hv-1-to-0", np))
>>> +		security_ftr_clear(SEC_FTR_L1D_FLUSH_HV);
>>> +
>>> +	if (fw_feature_is("disabled", "needs-spec-barrier-for-bound-checks", np))
>>> +		security_ftr_clear(SEC_FTR_BNDS_CHK_SPEC_BAR);
>>> +}
>>> +
>>>  static void pnv_setup_rfi_flush(void)
>>>  {
>>>  	struct device_node *np, *fw_features;
>>>  	enum l1d_flush_type type;
>>> -	int enable;
>>> +	bool enable;
>>>  
>>>  	/* Default to fallback in case fw-features are not available */
>>>  	type = L1D_FLUSH_FALLBACK;
>>> -	enable = 1;
>>>  
>>>  	np = of_find_node_by_name(NULL, "ibm,opal");
>>>  	fw_features = of_get_child_by_name(np, "fw-features");
>>>  	of_node_put(np);
>>>  
>>>  	if (fw_features) {
>>> -		np = of_get_child_by_name(fw_features, "inst-l1d-flush-trig2");
>>> -		if (np && of_property_read_bool(np, "enabled"))
>>> -			type = L1D_FLUSH_MTTRIG;
>>> +		init_fw_feat_flags(fw_features);
>>> +		of_node_put(fw_features);
>>>  
>>> -		of_node_put(np);
>>> +		if (security_ftr_enabled(SEC_FTR_L1D_FLUSH_TRIG2))
>>> +			type = L1D_FLUSH_MTTRIG;
>>>  
>>> -		np = of_get_child_by_name(fw_features, "inst-l1d-flush-ori30,30,0");
>>> -		if (np && of_property_read_bool(np, "enabled"))
>>> +		if (security_ftr_enabled(SEC_FTR_L1D_FLUSH_ORI30))
>>>  			type = L1D_FLUSH_ORI;
>>> -
>>> -		of_node_put(np);
>>> -
>>> -		/* Enable unless firmware says NOT to */
>>> -		enable = 2;
>>> -		np = of_get_child_by_name(fw_features, "needs-l1d-flush-msr-hv-1-to-0");
>>> -		if (np && of_property_read_bool(np, "disabled"))
>>> -			enable--;
>>> -
>>> -		of_node_put(np);
>>> -
>>> -		np = of_get_child_by_name(fw_features, "needs-l1d-flush-msr-pr-0-to-1");
>>> -		if (np && of_property_read_bool(np, "disabled"))
>>> -			enable--;
>>> -
>>> -		of_node_put(np);
>>> -		of_node_put(fw_features);
>>>  	}
>>>  
>>> -	setup_rfi_flush(type, enable > 0);
>>> +	enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) && \
>>> +		 (security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR)   || \
>>> +		  security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV));
>>> +
>>> +	setup_rfi_flush(type, enable);
>>> +	setup_count_cache_flush();
>>>  }
>>>  
>>>  static void __init pnv_setup_arch(void)
>>> @@ -91,6 +137,7 @@ static void __init pnv_setup_arch(void)
>>>  	set_arch_panic_timeout(10, ARCH_PANIC_TIMEOUT);
>>>  
>>>  	pnv_setup_rfi_flush();
>>> +	setup_stf_barrier();
>>>  
>>>  	/* Initialize SMP */
>>>  	pnv_smp_init();
>>> diff --git a/arch/powerpc/platforms/pseries/mobility.c b/arch/powerpc/platforms/pseries/mobility.c
>>> index 8dd0c8edefd6..c773396d0969 100644
>>> --- a/arch/powerpc/platforms/pseries/mobility.c
>>> +++ b/arch/powerpc/platforms/pseries/mobility.c
>>> @@ -314,6 +314,9 @@ void post_mobility_fixup(void)
>>>  		printk(KERN_ERR "Post-mobility device tree update "
>>>  			"failed: %d\n", rc);
>>>  
>>> +	/* Possibly switch to a new RFI flush type */
>>> +	pseries_setup_rfi_flush();
>>> +
>>>  	return;
>>>  }
>>>  
>>> diff --git a/arch/powerpc/platforms/pseries/pseries.h b/arch/powerpc/platforms/pseries/pseries.h
>>> index 8411c27293e4..e7d80797384d 100644
>>> --- a/arch/powerpc/platforms/pseries/pseries.h
>>> +++ b/arch/powerpc/platforms/pseries/pseries.h
>>> @@ -81,4 +81,6 @@ extern struct pci_controller_ops pseries_pci_controller_ops;
>>>  
>>>  unsigned long pseries_memory_block_size(void);
>>>  
>>> +void pseries_setup_rfi_flush(void);
>>> +
>>>  #endif /* _PSERIES_PSERIES_H */
>>> diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
>>> index dd2545fc9947..9cc976ff7fec 100644
>>> --- a/arch/powerpc/platforms/pseries/setup.c
>>> +++ b/arch/powerpc/platforms/pseries/setup.c
>>> @@ -67,6 +67,7 @@
>>>  #include <asm/eeh.h>
>>>  #include <asm/reg.h>
>>>  #include <asm/plpar_wrappers.h>
>>> +#include <asm/security_features.h>
>>>  
>>>  #include "pseries.h"
>>>  
>>> @@ -499,37 +500,87 @@ static void __init find_and_init_phbs(void)
>>>  	of_pci_check_probe_only();
>>>  }
>>>  
>>> -static void pseries_setup_rfi_flush(void)
>>> +static void init_cpu_char_feature_flags(struct h_cpu_char_result *result)
>>> +{
>>> +	/*
>>> +	 * The features below are disabled by default, so we instead look to see
>>> +	 * if firmware has *enabled* them, and set them if so.
>>> +	 */
>>> +	if (result->character & H_CPU_CHAR_SPEC_BAR_ORI31)
>>> +		security_ftr_set(SEC_FTR_SPEC_BAR_ORI31);
>>> +
>>> +	if (result->character & H_CPU_CHAR_BCCTRL_SERIALISED)
>>> +		security_ftr_set(SEC_FTR_BCCTRL_SERIALISED);
>>> +
>>> +	if (result->character & H_CPU_CHAR_L1D_FLUSH_ORI30)
>>> +		security_ftr_set(SEC_FTR_L1D_FLUSH_ORI30);
>>> +
>>> +	if (result->character & H_CPU_CHAR_L1D_FLUSH_TRIG2)
>>> +		security_ftr_set(SEC_FTR_L1D_FLUSH_TRIG2);
>>> +
>>> +	if (result->character & H_CPU_CHAR_L1D_THREAD_PRIV)
>>> +		security_ftr_set(SEC_FTR_L1D_THREAD_PRIV);
>>> +
>>> +	if (result->character & H_CPU_CHAR_COUNT_CACHE_DISABLED)
>>> +		security_ftr_set(SEC_FTR_COUNT_CACHE_DISABLED);
>>> +
>>> +	if (result->character & H_CPU_CHAR_BCCTR_FLUSH_ASSIST)
>>> +		security_ftr_set(SEC_FTR_BCCTR_FLUSH_ASSIST);
>>> +
>>> +	if (result->behaviour & H_CPU_BEHAV_FLUSH_COUNT_CACHE)
>>> +		security_ftr_set(SEC_FTR_FLUSH_COUNT_CACHE);
>>> +
>>> +	/*
>>> +	 * The features below are enabled by default, so we instead look to see
>>> +	 * if firmware has *disabled* them, and clear them if so.
>>> +	 */
>>> +	if (!(result->behaviour & H_CPU_BEHAV_FAVOUR_SECURITY))
>>> +		security_ftr_clear(SEC_FTR_FAVOUR_SECURITY);
>>> +
>>> +	if (!(result->behaviour & H_CPU_BEHAV_L1D_FLUSH_PR))
>>> +		security_ftr_clear(SEC_FTR_L1D_FLUSH_PR);
>>> +
>>> +	if (!(result->behaviour & H_CPU_BEHAV_BNDS_CHK_SPEC_BAR))
>>> +		security_ftr_clear(SEC_FTR_BNDS_CHK_SPEC_BAR);
>>> +}
>>> +
>>> +void pseries_setup_rfi_flush(void)
>>>  {
>>>  	struct h_cpu_char_result result;
>>>  	enum l1d_flush_type types;
>>>  	bool enable;
>>>  	long rc;
>>>  
>>> -	/* Enable by default */
>>> -	enable = true;
>>> +	/*
>>> +	 * Set features to the defaults assumed by init_cpu_char_feature_flags()
>>> +	 * so it can set/clear again any features that might have changed after
>>> +	 * migration, and in case the hypercall fails and it is not even called.
>>> +	 */
>>> +	powerpc_security_features = SEC_FTR_DEFAULT;
>>>  
>>>  	rc = plpar_get_cpu_characteristics(&result);
>>> -	if (rc == H_SUCCESS) {
>>> -		types = L1D_FLUSH_NONE;
>>> +	if (rc == H_SUCCESS)
>>> +		init_cpu_char_feature_flags(&result);
>>>  
>>> -		if (result.character & H_CPU_CHAR_L1D_FLUSH_TRIG2)
>>> -			types |= L1D_FLUSH_MTTRIG;
>>> -		if (result.character & H_CPU_CHAR_L1D_FLUSH_ORI30)
>>> -			types |= L1D_FLUSH_ORI;
>>> +	/*
>>> +	 * We're the guest so this doesn't apply to us, clear it to simplify
>>> +	 * handling of it elsewhere.
>>> +	 */
>>> +	security_ftr_clear(SEC_FTR_L1D_FLUSH_HV);
>>>  
>>> -		/* Use fallback if nothing set in hcall */
>>> -		if (types == L1D_FLUSH_NONE)
>>> -			types = L1D_FLUSH_FALLBACK;
>>> +	types = L1D_FLUSH_FALLBACK;
>>>  
>>> -		if (!(result.behaviour & H_CPU_BEHAV_L1D_FLUSH_PR))
>>> -			enable = false;
>>> -	} else {
>>> -		/* Default to fallback if case hcall is not available */
>>> -		types = L1D_FLUSH_FALLBACK;
>>> -	}
>>> +	if (security_ftr_enabled(SEC_FTR_L1D_FLUSH_TRIG2))
>>> +		types |= L1D_FLUSH_MTTRIG;
>>> +
>>> +	if (security_ftr_enabled(SEC_FTR_L1D_FLUSH_ORI30))
>>> +		types |= L1D_FLUSH_ORI;
>>> +
>>> +	enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) && \
>>> +		 security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR);
>>>  
>>>  	setup_rfi_flush(types, enable);
>>> +	setup_count_cache_flush();
>>>  }
>>>  
>>>  static void __init pSeries_setup_arch(void)
>>> @@ -549,6 +600,7 @@ static void __init pSeries_setup_arch(void)
>>>  	fwnmi_init();
>>>  
>>>  	pseries_setup_rfi_flush();
>>> +	setup_stf_barrier();
>>>  
>>>  	/* By default, only probe PCI (can be overridden by rtas_pci) */
>>>  	pci_add_flags(PCI_PROBE_ONLY);
>>> diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
>>> index 786bf01691c9..83619ebede93 100644
>>> --- a/arch/powerpc/xmon/xmon.c
>>> +++ b/arch/powerpc/xmon/xmon.c
>>> @@ -2144,6 +2144,8 @@ static void dump_one_paca(int cpu)
>>>  	DUMP(p, slb_cache_ptr, "x");
>>>  	for (i = 0; i < SLB_CACHE_ENTRIES; i++)
>>>  		printf(" slb_cache[%d]:        = 0x%016lx\n", i, p->slb_cache[i]);
>>> +
>>> +	DUMP(p, rfi_flush_fallback_area, "px");
>>>  #endif
>>>  	DUMP(p, dscr_default, "llx");
>>>  #ifdef CONFIG_PPC_BOOK3E
>>> -- 
>>> 2.20.1
>>>
>>> -----BEGIN PGP SIGNATURE-----
>>>
>>> iQIcBAEBAgAGBQJcvHWhAAoJEFHr6jzI4aWA6nsP/0YskmAfLovcUmERQ7+bIjq6
>>> IcS1T466dvy6MlqeBXU4x8pVgInWeHKEC9XJdkM1lOeib/SLW7Hbz4kgJeOGwFGY
>>> lOTaexrxvsBqPm7f6GC0zbl9obEIIIIUs+TielFQANBgqm+q8Wio+XXPP9bpKeKY
>>> agSpQ3nwL/PYixznbNmN/lP9py5p89LQ0IBcR7dDBGGWJtD/AXeZ9hslsZxPbPtI
>>> nZJ0vdnjuoB2z+hCxfKWlYfLwH0VfoTpqP5x3ALCkvbBr67e8bf6EK8+trnvhyQ8
>>> iLY4bp1pm2epAI0/3NfyEiDMsGjVJ6IFlkyhDkHJgJNu0BGcGOSX2GpyU3juviAK
>>> c95FtBft/i8AwigOMCivg2mN5edYjsSiPoEItwT5KWqgByJsdr5i5mYVx8cUjMOz
>>> iAxLZCdg+UHZYuCBCAO2ZI1G9bVXI1Pa3btMspiCOOOsYGjXGf0oFfKQ+7957hUO
>>> ftYYJoGHlMHiHR1OPas6T3lk6YKF9uvfIDTE3OKw2obHbbRz3u82xoWMRGW503MN
>>> 7WpkpAP7oZ9RgqIWFVhatWy5f+7GFL0akEi4o2tsZHhYlPau7YWo+nToTd87itwt
>>> GBaWJipzge4s13VkhAE+jWFO35Fvwi8uNZ7UgpuKMBECEjkGbtzBTq2MjSF5G8wc
>>> yPEod5jby/Iqb7DkGPVG
>>> =6DnF
>>> -----END PGP SIGNATURE-----
>>>



* Re: [PATCH stable v4.4 00/52] powerpc spectre backports for 4.4
@ 2019-04-29 15:52       ` Diana Madalina Craciun
  0 siblings, 0 replies; 180+ messages in thread
From: Diana Madalina Craciun @ 2019-04-29 15:52 UTC (permalink / raw)
  To: Michael Ellerman, stable, gregkh; +Cc: linuxppc-dev, msuchanek, npiggin

On 4/28/2019 9:19 AM, Michael Ellerman wrote:
> Diana Madalina Craciun <diana.craciun@nxp.com> writes:
>> Hi Michael,
>>
>> There are some missing NXP Spectre v2 patches. I can send them
>> separately if the series is accepted. I have merged them, but I did
>> not test them; I was sick today and unable to do that.
> No worries, there's no rush :)
>
> Sorry I missed them, I thought I had a list that included everything.
> Which commits was it I missed?
>
> I guess post them as a reply to this thread? That way whether the series
> is merged by Greg or not, there's a record here of what the backports
> look like.

I have sent them as a separate series, but I am mentioning them here as well:

Diana Craciun (8):
  powerpc/fsl: Enable runtime patching if nospectre_v2 boot arg is used
  powerpc/fsl: Flush branch predictor when entering KVM
  powerpc/fsl: Emulate SPRN_BUCSR register
  powerpc/fsl: Flush the branch predictor at each kernel entry (32 bit)
  powerpc/fsl: Sanitize the syscall table for NXP PowerPC 32 bit
    platforms
  powerpc/fsl: Fixed warning: orphan section `__btb_flush_fixup'
  powerpc/fsl: Add FSL_PPC_BOOK3E as supported arch for nospectre_v2
    boot arg
  Documentation: Add nospectre_v1 parameter

regards

> cheers
>
>> On 4/21/2019 5:21 PM, Michael Ellerman wrote:
>>> -----BEGIN PGP SIGNED MESSAGE-----
>>> Hash: SHA1
>>>
>>> Hi Greg/Sasha,
>>>
>>> Please queue up these powerpc patches for 4.4 if you have no objections.
>>>
>>> cheers
>>>
>>>
>>> Christophe Leroy (1):
>>>   powerpc/fsl: Fix the flush of branch predictor.
>>>
>>> Diana Craciun (10):
>>>   powerpc/64: Disable the speculation barrier from the command line
>>>   powerpc/64: Make stf barrier PPC_BOOK3S_64 specific.
>>>   powerpc/64: Make meltdown reporting Book3S 64 specific
>>>   powerpc/fsl: Add barrier_nospec implementation for NXP PowerPC Book3E
>>>   powerpc/fsl: Add infrastructure to fixup branch predictor flush
>>>   powerpc/fsl: Add macro to flush the branch predictor
>>>   powerpc/fsl: Fix spectre_v2 mitigations reporting
>>>   powerpc/fsl: Add nospectre_v2 command line argument
>>>   powerpc/fsl: Flush the branch predictor at each kernel entry (64bit)
>>>   powerpc/fsl: Update Spectre v2 reporting
>>>
>>> Mauricio Faria de Oliveira (4):
>>>   powerpc/rfi-flush: Differentiate enabled and patched flush types
>>>   powerpc/pseries: Fix clearing of security feature flags
>>>   powerpc: Move default security feature flags
>>>   powerpc/pseries: Restore default security feature flags on setup
>>>
>>> Michael Ellerman (29):
>>>   powerpc/xmon: Add RFI flush related fields to paca dump
>>>   powerpc/pseries: Support firmware disable of RFI flush
>>>   powerpc/powernv: Support firmware disable of RFI flush
>>>   powerpc/rfi-flush: Move the logic to avoid a redo into the debugfs
>>>     code
>>>   powerpc/rfi-flush: Make it possible to call setup_rfi_flush() again
>>>   powerpc/rfi-flush: Always enable fallback flush on pseries
>>>   powerpc/pseries: Add new H_GET_CPU_CHARACTERISTICS flags
>>>   powerpc/rfi-flush: Call setup_rfi_flush() after LPM migration
>>>   powerpc: Add security feature flags for Spectre/Meltdown
>>>   powerpc/pseries: Set or clear security feature flags
>>>   powerpc/powernv: Set or clear security feature flags
>>>   powerpc/64s: Move cpu_show_meltdown()
>>>   powerpc/64s: Enhance the information in cpu_show_meltdown()
>>>   powerpc/powernv: Use the security flags in pnv_setup_rfi_flush()
>>>   powerpc/pseries: Use the security flags in pseries_setup_rfi_flush()
>>>   powerpc/64s: Wire up cpu_show_spectre_v1()
>>>   powerpc/64s: Wire up cpu_show_spectre_v2()
>>>   powerpc/64s: Fix section mismatch warnings from setup_rfi_flush()
>>>   powerpc/64: Use barrier_nospec in syscall entry
>>>   powerpc: Use barrier_nospec in copy_from_user()
>>>   powerpc64s: Show ori31 availability in spectre_v1 sysfs file not v2
>>>   powerpc/64: Add CONFIG_PPC_BARRIER_NOSPEC
>>>   powerpc/64: Call setup_barrier_nospec() from setup_arch()
>>>   powerpc/asm: Add a patch_site macro & helpers for patching
>>>     instructions
>>>   powerpc/64s: Add new security feature flags for count cache flush
>>>   powerpc/64s: Add support for software count cache flush
>>>   powerpc/pseries: Query hypervisor for count cache flush settings
>>>   powerpc/powernv: Query firmware for count cache flush settings
>>>   powerpc/security: Fix spectre_v2 reporting
>>>
>>> Michael Neuling (1):
>>>   powerpc: Avoid code patching freed init sections
>>>
>>> Michal Suchanek (5):
>>>   powerpc/64s: Add barrier_nospec
>>>   powerpc/64s: Add support for ori barrier_nospec patching
>>>   powerpc/64s: Patch barrier_nospec in modules
>>>   powerpc/64s: Enable barrier_nospec based on firmware settings
>>>   powerpc/64s: Enhance the information in cpu_show_spectre_v1()
>>>
>>> Nicholas Piggin (2):
>>>   powerpc/64s: Improve RFI L1-D cache flush fallback
>>>   powerpc/64s: Add support for a store forwarding barrier at kernel
>>>     entry/exit
>>>
>>>  arch/powerpc/Kconfig                         |   7 +-
>>>  arch/powerpc/include/asm/asm-prototypes.h    |  21 +
>>>  arch/powerpc/include/asm/barrier.h           |  21 +
>>>  arch/powerpc/include/asm/code-patching-asm.h |  18 +
>>>  arch/powerpc/include/asm/code-patching.h     |   2 +
>>>  arch/powerpc/include/asm/exception-64s.h     |  35 ++
>>>  arch/powerpc/include/asm/feature-fixups.h    |  40 ++
>>>  arch/powerpc/include/asm/hvcall.h            |   5 +
>>>  arch/powerpc/include/asm/paca.h              |   3 +-
>>>  arch/powerpc/include/asm/ppc-opcode.h        |   1 +
>>>  arch/powerpc/include/asm/ppc_asm.h           |  11 +
>>>  arch/powerpc/include/asm/security_features.h |  92 ++++
>>>  arch/powerpc/include/asm/setup.h             |  23 +-
>>>  arch/powerpc/include/asm/uaccess.h           |  18 +-
>>>  arch/powerpc/kernel/Makefile                 |   1 +
>>>  arch/powerpc/kernel/asm-offsets.c            |   3 +-
>>>  arch/powerpc/kernel/entry_64.S               |  69 +++
>>>  arch/powerpc/kernel/exceptions-64e.S         |  27 +-
>>>  arch/powerpc/kernel/exceptions-64s.S         |  98 +++--
>>>  arch/powerpc/kernel/module.c                 |  10 +-
>>>  arch/powerpc/kernel/security.c               | 433 +++++++++++++++++++
>>>  arch/powerpc/kernel/setup_32.c               |   2 +
>>>  arch/powerpc/kernel/setup_64.c               |  50 +--
>>>  arch/powerpc/kernel/vmlinux.lds.S            |  33 +-
>>>  arch/powerpc/lib/code-patching.c             |  29 ++
>>>  arch/powerpc/lib/feature-fixups.c            | 218 +++++++++-
>>>  arch/powerpc/mm/mem.c                        |   2 +
>>>  arch/powerpc/mm/tlb_low_64e.S                |   7 +
>>>  arch/powerpc/platforms/powernv/setup.c       |  99 +++--
>>>  arch/powerpc/platforms/pseries/mobility.c    |   3 +
>>>  arch/powerpc/platforms/pseries/pseries.h     |   2 +
>>>  arch/powerpc/platforms/pseries/setup.c       |  88 +++-
>>>  arch/powerpc/xmon/xmon.c                     |   2 +
>>>  33 files changed, 1345 insertions(+), 128 deletions(-)
>>>  create mode 100644 arch/powerpc/include/asm/asm-prototypes.h
>>>  create mode 100644 arch/powerpc/include/asm/code-patching-asm.h
>>>  create mode 100644 arch/powerpc/include/asm/security_features.h
>>>  create mode 100644 arch/powerpc/kernel/security.c
>>>
>>> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
>>> index 58a1fa979655..01b6c00a7060 100644
>>> --- a/arch/powerpc/Kconfig
>>> +++ b/arch/powerpc/Kconfig
>>> @@ -136,7 +136,7 @@ config PPC
>>>  	select GENERIC_SMP_IDLE_THREAD
>>>  	select GENERIC_CMOS_UPDATE
>>>  	select GENERIC_TIME_VSYSCALL_OLD
>>> -	select GENERIC_CPU_VULNERABILITIES	if PPC_BOOK3S_64
>>> +	select GENERIC_CPU_VULNERABILITIES	if PPC_BARRIER_NOSPEC
>>>  	select GENERIC_CLOCKEVENTS
>>>  	select GENERIC_CLOCKEVENTS_BROADCAST if SMP
>>>  	select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
>>> @@ -162,6 +162,11 @@ config PPC
>>>  	select ARCH_HAS_DMA_SET_COHERENT_MASK
>>>  	select HAVE_ARCH_SECCOMP_FILTER
>>>  
>>> +config PPC_BARRIER_NOSPEC
>>> +    bool
>>> +    default y
>>> +    depends on PPC_BOOK3S_64 || PPC_FSL_BOOK3E
>>> +
>>>  config GENERIC_CSUM
>>>  	def_bool CPU_LITTLE_ENDIAN
>>>  
>>> diff --git a/arch/powerpc/include/asm/asm-prototypes.h b/arch/powerpc/include/asm/asm-prototypes.h
>>> new file mode 100644
>>> index 000000000000..8944c55591cf
>>> --- /dev/null
>>> +++ b/arch/powerpc/include/asm/asm-prototypes.h
>>> @@ -0,0 +1,21 @@
>>> +#ifndef _ASM_POWERPC_ASM_PROTOTYPES_H
>>> +#define _ASM_POWERPC_ASM_PROTOTYPES_H
>>> +/*
>>> + * This file is for prototypes of C functions that are only called
>>> + * from asm, and any associated variables.
>>> + *
>>> + * Copyright 2016, Daniel Axtens, IBM Corporation.
>>> + *
>>> + * This program is free software; you can redistribute it and/or
>>> + * modify it under the terms of the GNU General Public License
>>> + * as published by the Free Software Foundation; either version 2
>>> + * of the License, or (at your option) any later version.
>>> + */
>>> +
>>> +/* Patch sites */
>>> +extern s32 patch__call_flush_count_cache;
>>> +extern s32 patch__flush_count_cache_return;
>>> +
>>> +extern long flush_count_cache;
>>> +
>>> +#endif /* _ASM_POWERPC_ASM_PROTOTYPES_H */
>>> diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
>>> index b9e16855a037..e7cb72cdb2ba 100644
>>> --- a/arch/powerpc/include/asm/barrier.h
>>> +++ b/arch/powerpc/include/asm/barrier.h
>>> @@ -92,4 +92,25 @@ do {									\
>>>  #define smp_mb__after_atomic()      smp_mb()
>>>  #define smp_mb__before_spinlock()   smp_mb()
>>>  
>>> +#ifdef CONFIG_PPC_BOOK3S_64
>>> +#define NOSPEC_BARRIER_SLOT   nop
>>> +#elif defined(CONFIG_PPC_FSL_BOOK3E)
>>> +#define NOSPEC_BARRIER_SLOT   nop; nop
>>> +#endif
>>> +
>>> +#ifdef CONFIG_PPC_BARRIER_NOSPEC
>>> +/*
>>> + * Prevent execution of subsequent instructions until preceding branches have
>>> + * been fully resolved and are no longer executing speculatively.
>>> + */
>>> +#define barrier_nospec_asm NOSPEC_BARRIER_FIXUP_SECTION; NOSPEC_BARRIER_SLOT
>>> +
>>> +// This also acts as a compiler barrier due to the memory clobber.
>>> +#define barrier_nospec() asm (stringify_in_c(barrier_nospec_asm) ::: "memory")
>>> +
>>> +#else /* !CONFIG_PPC_BARRIER_NOSPEC */
>>> +#define barrier_nospec_asm
>>> +#define barrier_nospec()
>>> +#endif /* CONFIG_PPC_BARRIER_NOSPEC */
>>> +
>>>  #endif /* _ASM_POWERPC_BARRIER_H */
>>> diff --git a/arch/powerpc/include/asm/code-patching-asm.h b/arch/powerpc/include/asm/code-patching-asm.h
>>> new file mode 100644
>>> index 000000000000..ed7b1448493a
>>> --- /dev/null
>>> +++ b/arch/powerpc/include/asm/code-patching-asm.h
>>> @@ -0,0 +1,18 @@
>>> +/* SPDX-License-Identifier: GPL-2.0+ */
>>> +/*
>>> + * Copyright 2018, Michael Ellerman, IBM Corporation.
>>> + */
>>> +#ifndef _ASM_POWERPC_CODE_PATCHING_ASM_H
>>> +#define _ASM_POWERPC_CODE_PATCHING_ASM_H
>>> +
>>> +/* Define a "site" that can be patched */
>>> +.macro patch_site label name
>>> +	.pushsection ".rodata"
>>> +	.balign 4
>>> +	.global \name
>>> +\name:
>>> +	.4byte	\label - .
>>> +	.popsection
>>> +.endm
>>> +
>>> +#endif /* _ASM_POWERPC_CODE_PATCHING_ASM_H */
>>> diff --git a/arch/powerpc/include/asm/code-patching.h b/arch/powerpc/include/asm/code-patching.h
>>> index 840a5509b3f1..a734b4b34d26 100644
>>> --- a/arch/powerpc/include/asm/code-patching.h
>>> +++ b/arch/powerpc/include/asm/code-patching.h
>>> @@ -28,6 +28,8 @@ unsigned int create_cond_branch(const unsigned int *addr,
>>>  				unsigned long target, int flags);
>>>  int patch_branch(unsigned int *addr, unsigned long target, int flags);
>>>  int patch_instruction(unsigned int *addr, unsigned int instr);
>>> +int patch_instruction_site(s32 *addr, unsigned int instr);
>>> +int patch_branch_site(s32 *site, unsigned long target, int flags);
>>>  
>>>  int instr_is_relative_branch(unsigned int instr);
>>>  int instr_is_branch_to_addr(const unsigned int *instr, unsigned long addr);
>>> diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
>>> index 9bddbec441b8..3ed536bec462 100644
>>> --- a/arch/powerpc/include/asm/exception-64s.h
>>> +++ b/arch/powerpc/include/asm/exception-64s.h
>>> @@ -50,6 +50,27 @@
>>>  #define EX_PPR		88	/* SMT thread status register (priority) */
>>>  #define EX_CTR		96
>>>  
>>> +#define STF_ENTRY_BARRIER_SLOT						\
>>> +	STF_ENTRY_BARRIER_FIXUP_SECTION;				\
>>> +	nop;								\
>>> +	nop;								\
>>> +	nop
>>> +
>>> +#define STF_EXIT_BARRIER_SLOT						\
>>> +	STF_EXIT_BARRIER_FIXUP_SECTION;					\
>>> +	nop;								\
>>> +	nop;								\
>>> +	nop;								\
>>> +	nop;								\
>>> +	nop;								\
>>> +	nop
>>> +
>>> +/*
>>> + * r10 must be free to use, r13 must be paca
>>> + */
>>> +#define INTERRUPT_TO_KERNEL						\
>>> +	STF_ENTRY_BARRIER_SLOT
>>> +
>>>  /*
>>>   * Macros for annotating the expected destination of (h)rfid
>>>   *
>>> @@ -66,16 +87,19 @@
>>>  	rfid
>>>  
>>>  #define RFI_TO_USER							\
>>> +	STF_EXIT_BARRIER_SLOT;						\
>>>  	RFI_FLUSH_SLOT;							\
>>>  	rfid;								\
>>>  	b	rfi_flush_fallback
>>>  
>>>  #define RFI_TO_USER_OR_KERNEL						\
>>> +	STF_EXIT_BARRIER_SLOT;						\
>>>  	RFI_FLUSH_SLOT;							\
>>>  	rfid;								\
>>>  	b	rfi_flush_fallback
>>>  
>>>  #define RFI_TO_GUEST							\
>>> +	STF_EXIT_BARRIER_SLOT;						\
>>>  	RFI_FLUSH_SLOT;							\
>>>  	rfid;								\
>>>  	b	rfi_flush_fallback
>>> @@ -84,21 +108,25 @@
>>>  	hrfid
>>>  
>>>  #define HRFI_TO_USER							\
>>> +	STF_EXIT_BARRIER_SLOT;						\
>>>  	RFI_FLUSH_SLOT;							\
>>>  	hrfid;								\
>>>  	b	hrfi_flush_fallback
>>>  
>>>  #define HRFI_TO_USER_OR_KERNEL						\
>>> +	STF_EXIT_BARRIER_SLOT;						\
>>>  	RFI_FLUSH_SLOT;							\
>>>  	hrfid;								\
>>>  	b	hrfi_flush_fallback
>>>  
>>>  #define HRFI_TO_GUEST							\
>>> +	STF_EXIT_BARRIER_SLOT;						\
>>>  	RFI_FLUSH_SLOT;							\
>>>  	hrfid;								\
>>>  	b	hrfi_flush_fallback
>>>  
>>>  #define HRFI_TO_UNKNOWN							\
>>> +	STF_EXIT_BARRIER_SLOT;						\
>>>  	RFI_FLUSH_SLOT;							\
>>>  	hrfid;								\
>>>  	b	hrfi_flush_fallback
>>> @@ -226,6 +254,7 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
>>>  #define __EXCEPTION_PROLOG_1(area, extra, vec)				\
>>>  	OPT_SAVE_REG_TO_PACA(area+EX_PPR, r9, CPU_FTR_HAS_PPR);		\
>>>  	OPT_SAVE_REG_TO_PACA(area+EX_CFAR, r10, CPU_FTR_CFAR);		\
>>> +	INTERRUPT_TO_KERNEL;						\
>>>  	SAVE_CTR(r10, area);						\
>>>  	mfcr	r9;							\
>>>  	extra(vec);							\
>>> @@ -512,6 +541,12 @@ label##_relon_hv:						\
>>>  #define _MASKABLE_EXCEPTION_PSERIES(vec, label, h, extra)		\
>>>  	__MASKABLE_EXCEPTION_PSERIES(vec, label, h, extra)
>>>  
>>> +#define MASKABLE_EXCEPTION_OOL(vec, label)				\
>>> +	.globl label##_ool;						\
>>> +label##_ool:								\
>>> +	EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_TEST_PR, vec);		\
>>> +	EXCEPTION_PROLOG_PSERIES_1(label##_common, EXC_STD);
>>> +
>>>  #define MASKABLE_EXCEPTION_PSERIES(loc, vec, label)			\
>>>  	. = loc;							\
>>>  	.globl label##_pSeries;						\
>>> diff --git a/arch/powerpc/include/asm/feature-fixups.h b/arch/powerpc/include/asm/feature-fixups.h
>>> index 7068bafbb2d6..145a37ab2d3e 100644
>>> --- a/arch/powerpc/include/asm/feature-fixups.h
>>> +++ b/arch/powerpc/include/asm/feature-fixups.h
>>> @@ -184,6 +184,22 @@ label##3:					       	\
>>>  	FTR_ENTRY_OFFSET label##1b-label##3b;		\
>>>  	.popsection;
>>>  
>>> +#define STF_ENTRY_BARRIER_FIXUP_SECTION			\
>>> +953:							\
>>> +	.pushsection __stf_entry_barrier_fixup,"a";	\
>>> +	.align 2;					\
>>> +954:							\
>>> +	FTR_ENTRY_OFFSET 953b-954b;			\
>>> +	.popsection;
>>> +
>>> +#define STF_EXIT_BARRIER_FIXUP_SECTION			\
>>> +955:							\
>>> +	.pushsection __stf_exit_barrier_fixup,"a";	\
>>> +	.align 2;					\
>>> +956:							\
>>> +	FTR_ENTRY_OFFSET 955b-956b;			\
>>> +	.popsection;
>>> +
>>>  #define RFI_FLUSH_FIXUP_SECTION				\
>>>  951:							\
>>>  	.pushsection __rfi_flush_fixup,"a";		\
>>> @@ -192,10 +208,34 @@ label##3:					       	\
>>>  	FTR_ENTRY_OFFSET 951b-952b;			\
>>>  	.popsection;
>>>  
>>> +#define NOSPEC_BARRIER_FIXUP_SECTION			\
>>> +953:							\
>>> +	.pushsection __barrier_nospec_fixup,"a";	\
>>> +	.align 2;					\
>>> +954:							\
>>> +	FTR_ENTRY_OFFSET 953b-954b;			\
>>> +	.popsection;
>>> +
>>> +#define START_BTB_FLUSH_SECTION			\
>>> +955:							\
>>> +
>>> +#define END_BTB_FLUSH_SECTION			\
>>> +956:							\
>>> +	.pushsection __btb_flush_fixup,"a";	\
>>> +	.align 2;							\
>>> +957:						\
>>> +	FTR_ENTRY_OFFSET 955b-957b;			\
>>> +	FTR_ENTRY_OFFSET 956b-957b;			\
>>> +	.popsection;
>>>  
>>>  #ifndef __ASSEMBLY__
>>>  
>>> +extern long stf_barrier_fallback;
>>> +extern long __start___stf_entry_barrier_fixup, __stop___stf_entry_barrier_fixup;
>>> +extern long __start___stf_exit_barrier_fixup, __stop___stf_exit_barrier_fixup;
>>>  extern long __start___rfi_flush_fixup, __stop___rfi_flush_fixup;
>>> +extern long __start___barrier_nospec_fixup, __stop___barrier_nospec_fixup;
>>> +extern long __start__btb_flush_fixup, __stop__btb_flush_fixup;
>>>  
>>>  #endif
>>>  
>>> diff --git a/arch/powerpc/include/asm/hvcall.h b/arch/powerpc/include/asm/hvcall.h
>>> index 449bbb87c257..b57db9d09db9 100644
>>> --- a/arch/powerpc/include/asm/hvcall.h
>>> +++ b/arch/powerpc/include/asm/hvcall.h
>>> @@ -292,10 +292,15 @@
>>>  #define H_CPU_CHAR_L1D_FLUSH_ORI30	(1ull << 61) // IBM bit 2
>>>  #define H_CPU_CHAR_L1D_FLUSH_TRIG2	(1ull << 60) // IBM bit 3
>>>  #define H_CPU_CHAR_L1D_THREAD_PRIV	(1ull << 59) // IBM bit 4
>>> +#define H_CPU_CHAR_BRANCH_HINTS_HONORED	(1ull << 58) // IBM bit 5
>>> +#define H_CPU_CHAR_THREAD_RECONFIG_CTRL	(1ull << 57) // IBM bit 6
>>> +#define H_CPU_CHAR_COUNT_CACHE_DISABLED	(1ull << 56) // IBM bit 7
>>> +#define H_CPU_CHAR_BCCTR_FLUSH_ASSIST	(1ull << 54) // IBM bit 9
>>>  
>>>  #define H_CPU_BEHAV_FAVOUR_SECURITY	(1ull << 63) // IBM bit 0
>>>  #define H_CPU_BEHAV_L1D_FLUSH_PR	(1ull << 62) // IBM bit 1
>>>  #define H_CPU_BEHAV_BNDS_CHK_SPEC_BAR	(1ull << 61) // IBM bit 2
>>> +#define H_CPU_BEHAV_FLUSH_COUNT_CACHE	(1ull << 58) // IBM bit 5
>>>  
>>>  #ifndef __ASSEMBLY__
>>>  #include <linux/types.h>
>>> diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h
>>> index 45e2aefece16..08e5df3395fa 100644
>>> --- a/arch/powerpc/include/asm/paca.h
>>> +++ b/arch/powerpc/include/asm/paca.h
>>> @@ -199,8 +199,7 @@ struct paca_struct {
>>>  	 */
>>>  	u64 exrfi[13] __aligned(0x80);
>>>  	void *rfi_flush_fallback_area;
>>> -	u64 l1d_flush_congruence;
>>> -	u64 l1d_flush_sets;
>>> +	u64 l1d_flush_size;
>>>  #endif
>>>  };
>>>  
>>> diff --git a/arch/powerpc/include/asm/ppc-opcode.h b/arch/powerpc/include/asm/ppc-opcode.h
>>> index 7ab04fc59e24..faf1bb045dee 100644
>>> --- a/arch/powerpc/include/asm/ppc-opcode.h
>>> +++ b/arch/powerpc/include/asm/ppc-opcode.h
>>> @@ -147,6 +147,7 @@
>>>  #define PPC_INST_LWSYNC			0x7c2004ac
>>>  #define PPC_INST_SYNC			0x7c0004ac
>>>  #define PPC_INST_SYNC_MASK		0xfc0007fe
>>> +#define PPC_INST_ISYNC			0x4c00012c
>>>  #define PPC_INST_LXVD2X			0x7c000698
>>>  #define PPC_INST_MCRXR			0x7c000400
>>>  #define PPC_INST_MCRXR_MASK		0xfc0007fe
>>> diff --git a/arch/powerpc/include/asm/ppc_asm.h b/arch/powerpc/include/asm/ppc_asm.h
>>> index 160bb2311bbb..d219816b3e19 100644
>>> --- a/arch/powerpc/include/asm/ppc_asm.h
>>> +++ b/arch/powerpc/include/asm/ppc_asm.h
>>> @@ -821,4 +821,15 @@ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,945)
>>>  	.long 0x2400004c  /* rfid				*/
>>>  #endif /* !CONFIG_PPC_BOOK3E */
>>>  #endif /*  __ASSEMBLY__ */
>>> +
>>> +#ifdef CONFIG_PPC_FSL_BOOK3E
>>> +#define BTB_FLUSH(reg)			\
>>> +	lis reg,BUCSR_INIT@h;		\
>>> +	ori reg,reg,BUCSR_INIT@l;	\
>>> +	mtspr SPRN_BUCSR,reg;		\
>>> +	isync;
>>> +#else
>>> +#define BTB_FLUSH(reg)
>>> +#endif /* CONFIG_PPC_FSL_BOOK3E */
>>> +
>>>  #endif /* _ASM_POWERPC_PPC_ASM_H */
>>> diff --git a/arch/powerpc/include/asm/security_features.h b/arch/powerpc/include/asm/security_features.h
>>> new file mode 100644
>>> index 000000000000..759597bf0fd8
>>> --- /dev/null
>>> +++ b/arch/powerpc/include/asm/security_features.h
>>> @@ -0,0 +1,92 @@
>>> +/* SPDX-License-Identifier: GPL-2.0+ */
>>> +/*
>>> + * Security related feature bit definitions.
>>> + *
>>> + * Copyright 2018, Michael Ellerman, IBM Corporation.
>>> + */
>>> +
>>> +#ifndef _ASM_POWERPC_SECURITY_FEATURES_H
>>> +#define _ASM_POWERPC_SECURITY_FEATURES_H
>>> +
>>> +
>>> +extern unsigned long powerpc_security_features;
>>> +extern bool rfi_flush;
>>> +
>>> +/* These are bit flags */
>>> +enum stf_barrier_type {
>>> +	STF_BARRIER_NONE	= 0x1,
>>> +	STF_BARRIER_FALLBACK	= 0x2,
>>> +	STF_BARRIER_EIEIO	= 0x4,
>>> +	STF_BARRIER_SYNC_ORI	= 0x8,
>>> +};
>>> +
>>> +void setup_stf_barrier(void);
>>> +void do_stf_barrier_fixups(enum stf_barrier_type types);
>>> +void setup_count_cache_flush(void);
>>> +
>>> +static inline void security_ftr_set(unsigned long feature)
>>> +{
>>> +	powerpc_security_features |= feature;
>>> +}
>>> +
>>> +static inline void security_ftr_clear(unsigned long feature)
>>> +{
>>> +	powerpc_security_features &= ~feature;
>>> +}
>>> +
>>> +static inline bool security_ftr_enabled(unsigned long feature)
>>> +{
>>> +	return !!(powerpc_security_features & feature);
>>> +}
>>> +
>>> +
>>> +// Features indicating support for Spectre/Meltdown mitigations
>>> +
>>> +// The L1-D cache can be flushed with ori r30,r30,0
>>> +#define SEC_FTR_L1D_FLUSH_ORI30		0x0000000000000001ull
>>> +
>>> +// The L1-D cache can be flushed with mtspr 882,r0 (aka SPRN_TRIG2)
>>> +#define SEC_FTR_L1D_FLUSH_TRIG2		0x0000000000000002ull
>>> +
>>> +// ori r31,r31,0 acts as a speculation barrier
>>> +#define SEC_FTR_SPEC_BAR_ORI31		0x0000000000000004ull
>>> +
>>> +// Speculation past bctr is disabled
>>> +#define SEC_FTR_BCCTRL_SERIALISED	0x0000000000000008ull
>>> +
>>> +// Entries in L1-D are private to a SMT thread
>>> +#define SEC_FTR_L1D_THREAD_PRIV		0x0000000000000010ull
>>> +
>>> +// Indirect branch prediction cache disabled
>>> +#define SEC_FTR_COUNT_CACHE_DISABLED	0x0000000000000020ull
>>> +
>>> +// bcctr 2,0,0 triggers a hardware assisted count cache flush
>>> +#define SEC_FTR_BCCTR_FLUSH_ASSIST	0x0000000000000800ull
>>> +
>>> +
>>> +// Features indicating need for Spectre/Meltdown mitigations
>>> +
>>> +// The L1-D cache should be flushed on MSR[HV] 1->0 transition (hypervisor to guest)
>>> +#define SEC_FTR_L1D_FLUSH_HV		0x0000000000000040ull
>>> +
>>> +// The L1-D cache should be flushed on MSR[PR] 0->1 transition (kernel to userspace)
>>> +#define SEC_FTR_L1D_FLUSH_PR		0x0000000000000080ull
>>> +
>>> +// A speculation barrier should be used for bounds checks (Spectre variant 1)
>>> +#define SEC_FTR_BNDS_CHK_SPEC_BAR	0x0000000000000100ull
>>> +
>>> +// Firmware configuration indicates user favours security over performance
>>> +#define SEC_FTR_FAVOUR_SECURITY		0x0000000000000200ull
>>> +
>>> +// Software required to flush count cache on context switch
>>> +#define SEC_FTR_FLUSH_COUNT_CACHE	0x0000000000000400ull
>>> +
>>> +
>>> +// Features enabled by default
>>> +#define SEC_FTR_DEFAULT \
>>> +	(SEC_FTR_L1D_FLUSH_HV | \
>>> +	 SEC_FTR_L1D_FLUSH_PR | \
>>> +	 SEC_FTR_BNDS_CHK_SPEC_BAR | \
>>> +	 SEC_FTR_FAVOUR_SECURITY)
>>> +
>>> +#endif /* _ASM_POWERPC_SECURITY_FEATURES_H */
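The feature-flag helpers in this new header are plain bit operations on a global mask, so they are easy to exercise in isolation. A user-space sketch with the flag values copied from the hunk above (only the four default flags are reproduced here):

```c
#include <assert.h>

typedef unsigned long long sec_ftr_t;

/* Values copied from the security_features.h hunk above. */
#define SEC_FTR_L1D_FLUSH_HV		0x0000000000000040ull
#define SEC_FTR_L1D_FLUSH_PR		0x0000000000000080ull
#define SEC_FTR_BNDS_CHK_SPEC_BAR	0x0000000000000100ull
#define SEC_FTR_FAVOUR_SECURITY		0x0000000000000200ull

#define SEC_FTR_DEFAULT \
	(SEC_FTR_L1D_FLUSH_HV | SEC_FTR_L1D_FLUSH_PR | \
	 SEC_FTR_BNDS_CHK_SPEC_BAR | SEC_FTR_FAVOUR_SECURITY)

static sec_ftr_t powerpc_security_features = SEC_FTR_DEFAULT;

static void security_ftr_set(sec_ftr_t f)   { powerpc_security_features |= f; }
static void security_ftr_clear(sec_ftr_t f) { powerpc_security_features &= ~f; }
static int  security_ftr_enabled(sec_ftr_t f)
{
	return !!(powerpc_security_features & f);
}
```

Firmware code clears flags it knows are not needed; everything in SEC_FTR_DEFAULT stays set otherwise.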
>>> diff --git a/arch/powerpc/include/asm/setup.h b/arch/powerpc/include/asm/setup.h
>>> index 7916b56f2e60..d299479c770b 100644
>>> --- a/arch/powerpc/include/asm/setup.h
>>> +++ b/arch/powerpc/include/asm/setup.h
>>> @@ -8,6 +8,7 @@ extern void ppc_printk_progress(char *s, unsigned short hex);
>>>  
>>>  extern unsigned int rtas_data;
>>>  extern unsigned long long memory_limit;
>>> +extern bool init_mem_is_free;
>>>  extern unsigned long klimit;
>>>  extern void *zalloc_maybe_bootmem(size_t size, gfp_t mask);
>>>  
>>> @@ -36,8 +37,28 @@ enum l1d_flush_type {
>>>  	L1D_FLUSH_MTTRIG	= 0x8,
>>>  };
>>>  
>>> -void __init setup_rfi_flush(enum l1d_flush_type, bool enable);
>>> +void setup_rfi_flush(enum l1d_flush_type, bool enable);
>>>  void do_rfi_flush_fixups(enum l1d_flush_type types);
>>> +#ifdef CONFIG_PPC_BARRIER_NOSPEC
>>> +void setup_barrier_nospec(void);
>>> +#else
>>> +static inline void setup_barrier_nospec(void) { };
>>> +#endif
>>> +void do_barrier_nospec_fixups(bool enable);
>>> +extern bool barrier_nospec_enabled;
>>> +
>>> +#ifdef CONFIG_PPC_BARRIER_NOSPEC
>>> +void do_barrier_nospec_fixups_range(bool enable, void *start, void *end);
>>> +#else
>>> +static inline void do_barrier_nospec_fixups_range(bool enable, void *start, void *end) { };
>>> +#endif
>>> +
>>> +#ifdef CONFIG_PPC_FSL_BOOK3E
>>> +void setup_spectre_v2(void);
>>> +#else
>>> +static inline void setup_spectre_v2(void) {};
>>> +#endif
>>> +void do_btb_flush_fixups(void);
>>>  
>>>  #endif /* !__ASSEMBLY__ */
>>>  
>>> diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
>>> index 05f1389228d2..e51ce5a0e221 100644
>>> --- a/arch/powerpc/include/asm/uaccess.h
>>> +++ b/arch/powerpc/include/asm/uaccess.h
>>> @@ -269,6 +269,7 @@ do {								\
>>>  	__chk_user_ptr(ptr);					\
>>>  	if (!is_kernel_addr((unsigned long)__gu_addr))		\
>>>  		might_fault();					\
>>> +	barrier_nospec();					\
>>>  	__get_user_size(__gu_val, __gu_addr, (size), __gu_err);	\
>>>  	(x) = (__typeof__(*(ptr)))__gu_val;			\
>>>  	__gu_err;						\
>>> @@ -283,6 +284,7 @@ do {								\
>>>  	__chk_user_ptr(ptr);					\
>>>  	if (!is_kernel_addr((unsigned long)__gu_addr))		\
>>>  		might_fault();					\
>>> +	barrier_nospec();					\
>>>  	__get_user_size(__gu_val, __gu_addr, (size), __gu_err);	\
>>>  	(x) = (__force __typeof__(*(ptr)))__gu_val;			\
>>>  	__gu_err;						\
>>> @@ -295,8 +297,10 @@ do {								\
>>>  	unsigned long  __gu_val = 0;					\
>>>  	__typeof__(*(ptr)) __user *__gu_addr = (ptr);		\
>>>  	might_fault();							\
>>> -	if (access_ok(VERIFY_READ, __gu_addr, (size)))			\
>>> +	if (access_ok(VERIFY_READ, __gu_addr, (size))) {		\
>>> +		barrier_nospec();					\
>>>  		__get_user_size(__gu_val, __gu_addr, (size), __gu_err);	\
>>> +	}								\
>>>  	(x) = (__force __typeof__(*(ptr)))__gu_val;				\
>>>  	__gu_err;							\
>>>  })
>>> @@ -307,6 +311,7 @@ do {								\
>>>  	unsigned long __gu_val;					\
>>>  	__typeof__(*(ptr)) __user *__gu_addr = (ptr);	\
>>>  	__chk_user_ptr(ptr);					\
>>> +	barrier_nospec();					\
>>>  	__get_user_size(__gu_val, __gu_addr, (size), __gu_err);	\
>>>  	(x) = (__force __typeof__(*(ptr)))__gu_val;			\
>>>  	__gu_err;						\
>>> @@ -323,8 +328,10 @@ extern unsigned long __copy_tofrom_user(void __user *to,
>>>  static inline unsigned long copy_from_user(void *to,
>>>  		const void __user *from, unsigned long n)
>>>  {
>>> -	if (likely(access_ok(VERIFY_READ, from, n)))
>>> +	if (likely(access_ok(VERIFY_READ, from, n))) {
>>> +		barrier_nospec();
>>>  		return __copy_tofrom_user((__force void __user *)to, from, n);
>>> +	}
>>>  	memset(to, 0, n);
>>>  	return n;
>>>  }
>>> @@ -359,21 +366,27 @@ static inline unsigned long __copy_from_user_inatomic(void *to,
>>>  
>>>  		switch (n) {
>>>  		case 1:
>>> +			barrier_nospec();
>>>  			__get_user_size(*(u8 *)to, from, 1, ret);
>>>  			break;
>>>  		case 2:
>>> +			barrier_nospec();
>>>  			__get_user_size(*(u16 *)to, from, 2, ret);
>>>  			break;
>>>  		case 4:
>>> +			barrier_nospec();
>>>  			__get_user_size(*(u32 *)to, from, 4, ret);
>>>  			break;
>>>  		case 8:
>>> +			barrier_nospec();
>>>  			__get_user_size(*(u64 *)to, from, 8, ret);
>>>  			break;
>>>  		}
>>>  		if (ret == 0)
>>>  			return 0;
>>>  	}
>>> +
>>> +	barrier_nospec();
>>>  	return __copy_tofrom_user((__force void __user *)to, from, n);
>>>  }
>>>  
>>> @@ -400,6 +413,7 @@ static inline unsigned long __copy_to_user_inatomic(void __user *to,
>>>  		if (ret == 0)
>>>  			return 0;
>>>  	}
>>> +
>>>  	return __copy_tofrom_user(to, (__force const void __user *)from, n);
>>>  }
>>>  
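The uaccess.h hunks all follow one rule: barrier_nospec() sits after the access_ok() bounds check and before the user copy, and only on the success path, so a speculatively mispredicted check cannot feed an attacker-controlled address into the load. A user-space sketch of that control flow, with a counting stub standing in for the real barrier (demo_access_ok, demo_copy_from_user, and the 0x1000 limit are all hypothetical):

```c
#include <assert.h>
#include <string.h>

static int barriers_issued;	/* stands in for the patched barrier slot */
#define barrier_nospec() do { barriers_issued++; } while (0)

static char demo_buf[64];

/* Toy bounds check standing in for access_ok(). */
static int demo_access_ok(unsigned long addr, unsigned long len)
{
	return addr + len <= 0x1000;	/* hypothetical user-space limit */
}

/* Mirrors the patched copy_from_user() flow: barrier only on success,
 * between the check and the copy; failure zeroes the buffer instead. */
static unsigned long demo_copy_from_user(void *to, unsigned long from_addr,
					 const void *from, unsigned long n)
{
	if (demo_access_ok(from_addr, n)) {
		barrier_nospec();
		memcpy(to, from, n);
		return 0;
	}
	memset(to, 0, n);
	return n;
}
```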
>>> diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
>>> index ba336930d448..22ed3c32fca8 100644
>>> --- a/arch/powerpc/kernel/Makefile
>>> +++ b/arch/powerpc/kernel/Makefile
>>> @@ -44,6 +44,7 @@ obj-$(CONFIG_PPC_BOOK3S_64)	+= cpu_setup_power.o
>>>  obj-$(CONFIG_PPC_BOOK3S_64)	+= mce.o mce_power.o
>>>  obj64-$(CONFIG_RELOCATABLE)	+= reloc_64.o
>>>  obj-$(CONFIG_PPC_BOOK3E_64)	+= exceptions-64e.o idle_book3e.o
>>> +obj-$(CONFIG_PPC_BARRIER_NOSPEC) += security.o
>>>  obj-$(CONFIG_PPC64)		+= vdso64/
>>>  obj-$(CONFIG_ALTIVEC)		+= vecemu.o
>>>  obj-$(CONFIG_PPC_970_NAP)	+= idle_power4.o
>>> diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
>>> index d92705e3a0c1..de3c29c51503 100644
>>> --- a/arch/powerpc/kernel/asm-offsets.c
>>> +++ b/arch/powerpc/kernel/asm-offsets.c
>>> @@ -245,8 +245,7 @@ int main(void)
>>>  	DEFINE(PACA_IN_MCE, offsetof(struct paca_struct, in_mce));
>>>  	DEFINE(PACA_RFI_FLUSH_FALLBACK_AREA, offsetof(struct paca_struct, rfi_flush_fallback_area));
>>>  	DEFINE(PACA_EXRFI, offsetof(struct paca_struct, exrfi));
>>> -	DEFINE(PACA_L1D_FLUSH_CONGRUENCE, offsetof(struct paca_struct, l1d_flush_congruence));
>>> -	DEFINE(PACA_L1D_FLUSH_SETS, offsetof(struct paca_struct, l1d_flush_sets));
>>> +	DEFINE(PACA_L1D_FLUSH_SIZE, offsetof(struct paca_struct, l1d_flush_size));
>>>  #endif
>>>  	DEFINE(PACAHWCPUID, offsetof(struct paca_struct, hw_cpu_id));
>>>  	DEFINE(PACAKEXECSTATE, offsetof(struct paca_struct, kexec_state));
>>> diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
>>> index 59be96917369..6d36a4fb4acf 100644
>>> --- a/arch/powerpc/kernel/entry_64.S
>>> +++ b/arch/powerpc/kernel/entry_64.S
>>> @@ -25,6 +25,7 @@
>>>  #include <asm/page.h>
>>>  #include <asm/mmu.h>
>>>  #include <asm/thread_info.h>
>>> +#include <asm/code-patching-asm.h>
>>>  #include <asm/ppc_asm.h>
>>>  #include <asm/asm-offsets.h>
>>>  #include <asm/cputable.h>
>>> @@ -36,6 +37,7 @@
>>>  #include <asm/hw_irq.h>
>>>  #include <asm/context_tracking.h>
>>>  #include <asm/tm.h>
>>> +#include <asm/barrier.h>
>>>  #ifdef CONFIG_PPC_BOOK3S
>>>  #include <asm/exception-64s.h>
>>>  #else
>>> @@ -75,6 +77,11 @@ END_FTR_SECTION_IFSET(CPU_FTR_TM)
>>>  	std	r0,GPR0(r1)
>>>  	std	r10,GPR1(r1)
>>>  	beq	2f			/* if from kernel mode */
>>> +#ifdef CONFIG_PPC_FSL_BOOK3E
>>> +START_BTB_FLUSH_SECTION
>>> +	BTB_FLUSH(r10)
>>> +END_BTB_FLUSH_SECTION
>>> +#endif
>>>  	ACCOUNT_CPU_USER_ENTRY(r10, r11)
>>>  2:	std	r2,GPR2(r1)
>>>  	std	r3,GPR3(r1)
>>> @@ -177,6 +184,15 @@ system_call:			/* label this so stack traces look sane */
>>>  	clrldi	r8,r8,32
>>>  15:
>>>  	slwi	r0,r0,4
>>> +
>>> +	barrier_nospec_asm
>>> +	/*
>>> +	 * Prevent the load of the handler below (based on the user-passed
>>> +	 * system call number) being speculatively executed until the test
>>> +	 * against NR_syscalls and branch to .Lsyscall_enosys above has
>>> +	 * committed.
>>> +	 */
>>> +
>>>  	ldx	r12,r11,r0	/* Fetch system call handler [ptr] */
>>>  	mtctr   r12
>>>  	bctrl			/* Call handler */
>>> @@ -440,6 +456,57 @@ _GLOBAL(ret_from_kernel_thread)
>>>  	li	r3,0
>>>  	b	.Lsyscall_exit
>>>  
>>> +#ifdef CONFIG_PPC_BOOK3S_64
>>> +
>>> +#define FLUSH_COUNT_CACHE	\
>>> +1:	nop;			\
>>> +	patch_site 1b, patch__call_flush_count_cache
>>> +
>>> +
>>> +#define BCCTR_FLUSH	.long 0x4c400420
>>> +
>>> +.macro nops number
>>> +	.rept \number
>>> +	nop
>>> +	.endr
>>> +.endm
>>> +
>>> +.balign 32
>>> +.global flush_count_cache
>>> +flush_count_cache:
>>> +	/* Save LR into r9 */
>>> +	mflr	r9
>>> +
>>> +	.rept 64
>>> +	bl	.+4
>>> +	.endr
>>> +	b	1f
>>> +	nops	6
>>> +
>>> +	.balign 32
>>> +	/* Restore LR */
>>> +1:	mtlr	r9
>>> +	li	r9,0x7fff
>>> +	mtctr	r9
>>> +
>>> +	BCCTR_FLUSH
>>> +
>>> +2:	nop
>>> +	patch_site 2b, patch__flush_count_cache_return
>>> +
>>> +	nops	3
>>> +
>>> +	.rept 278
>>> +	.balign 32
>>> +	BCCTR_FLUSH
>>> +	nops	7
>>> +	.endr
>>> +
>>> +	blr
>>> +#else
>>> +#define FLUSH_COUNT_CACHE
>>> +#endif /* CONFIG_PPC_BOOK3S_64 */
>>> +
>>>  /*
>>>   * This routine switches between two different tasks.  The process
>>>   * state of one is saved on its kernel stack.  Then the state
>>> @@ -503,6 +570,8 @@ BEGIN_FTR_SECTION
>>>  END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
>>>  #endif
>>>  
>>> +	FLUSH_COUNT_CACHE
>>> +
>>>  #ifdef CONFIG_SMP
>>>  	/* We need a sync somewhere here to make sure that if the
>>>  	 * previous task gets rescheduled on another CPU, it sees all
>>> diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
>>> index 5cc93f0b52ca..48ec841ea1bf 100644
>>> --- a/arch/powerpc/kernel/exceptions-64e.S
>>> +++ b/arch/powerpc/kernel/exceptions-64e.S
>>> @@ -295,7 +295,8 @@ END_FTR_SECTION_IFSET(CPU_FTR_EMB_HV)
>>>  	andi.	r10,r11,MSR_PR;		/* save stack pointer */	    \
>>>  	beq	1f;			/* branch around if supervisor */   \
>>>  	ld	r1,PACAKSAVE(r13);	/* get kernel stack coming from usr */\
>>> -1:	cmpdi	cr1,r1,0;		/* check if SP makes sense */	    \
>>> +1:	type##_BTB_FLUSH		\
>>> +	cmpdi	cr1,r1,0;		/* check if SP makes sense */	    \
>>>  	bge-	cr1,exc_##n##_bad_stack;/* bad stack (TODO: out of line) */ \
>>>  	mfspr	r10,SPRN_##type##_SRR0;	/* read SRR0 before touching stack */
>>>  
>>> @@ -327,6 +328,30 @@ END_FTR_SECTION_IFSET(CPU_FTR_EMB_HV)
>>>  #define SPRN_MC_SRR0	SPRN_MCSRR0
>>>  #define SPRN_MC_SRR1	SPRN_MCSRR1
>>>  
>>> +#ifdef CONFIG_PPC_FSL_BOOK3E
>>> +#define GEN_BTB_FLUSH			\
>>> +	START_BTB_FLUSH_SECTION		\
>>> +		beq 1f;			\
>>> +		BTB_FLUSH(r10)			\
>>> +		1:		\
>>> +	END_BTB_FLUSH_SECTION
>>> +
>>> +#define CRIT_BTB_FLUSH			\
>>> +	START_BTB_FLUSH_SECTION		\
>>> +		BTB_FLUSH(r10)		\
>>> +	END_BTB_FLUSH_SECTION
>>> +
>>> +#define DBG_BTB_FLUSH CRIT_BTB_FLUSH
>>> +#define MC_BTB_FLUSH CRIT_BTB_FLUSH
>>> +#define GDBELL_BTB_FLUSH GEN_BTB_FLUSH
>>> +#else
>>> +#define GEN_BTB_FLUSH
>>> +#define CRIT_BTB_FLUSH
>>> +#define DBG_BTB_FLUSH
>>> +#define MC_BTB_FLUSH
>>> +#define GDBELL_BTB_FLUSH
>>> +#endif
>>> +
>>>  #define NORMAL_EXCEPTION_PROLOG(n, intnum, addition)			    \
>>>  	EXCEPTION_PROLOG(n, intnum, GEN, addition##_GEN(n))
>>>  
>>> diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
>>> index 938a30fef031..10e7cec9553d 100644
>>> --- a/arch/powerpc/kernel/exceptions-64s.S
>>> +++ b/arch/powerpc/kernel/exceptions-64s.S
>>> @@ -36,6 +36,7 @@ BEGIN_FTR_SECTION						\
>>>  END_FTR_SECTION_IFSET(CPU_FTR_REAL_LE)				\
>>>  	mr	r9,r13 ;					\
>>>  	GET_PACA(r13) ;						\
>>> +	INTERRUPT_TO_KERNEL ;					\
>>>  	mfspr	r11,SPRN_SRR0 ;					\
>>>  0:
>>>  
>>> @@ -292,7 +293,9 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
>>>  	. = 0x900
>>>  	.globl decrementer_pSeries
>>>  decrementer_pSeries:
>>> -	_MASKABLE_EXCEPTION_PSERIES(0x900, decrementer, EXC_STD, SOFTEN_TEST_PR)
>>> +	SET_SCRATCH0(r13)
>>> +	EXCEPTION_PROLOG_0(PACA_EXGEN)
>>> +	b	decrementer_ool
>>>  
>>>  	STD_EXCEPTION_HV(0x980, 0x982, hdecrementer)
>>>  
>>> @@ -319,6 +322,7 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
>>>  	OPT_GET_SPR(r9, SPRN_PPR, CPU_FTR_HAS_PPR);
>>>  	HMT_MEDIUM;
>>>  	std	r10,PACA_EXGEN+EX_R10(r13)
>>> +	INTERRUPT_TO_KERNEL
>>>  	OPT_SAVE_REG_TO_PACA(PACA_EXGEN+EX_PPR, r9, CPU_FTR_HAS_PPR);
>>>  	mfcr	r9
>>>  	KVMTEST(0xc00)
>>> @@ -607,6 +611,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
>>>  
>>>  	.align	7
>>>  	/* moved from 0xe00 */
>>> +	MASKABLE_EXCEPTION_OOL(0x900, decrementer)
>>>  	STD_EXCEPTION_HV_OOL(0xe02, h_data_storage)
>>>  	KVM_HANDLER_SKIP(PACA_EXGEN, EXC_HV, 0xe02)
>>>  	STD_EXCEPTION_HV_OOL(0xe22, h_instr_storage)
>>> @@ -1564,6 +1569,21 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
>>>  	blr
>>>  #endif
>>>  
>>> +	.balign 16
>>> +	.globl stf_barrier_fallback
>>> +stf_barrier_fallback:
>>> +	std	r9,PACA_EXRFI+EX_R9(r13)
>>> +	std	r10,PACA_EXRFI+EX_R10(r13)
>>> +	sync
>>> +	ld	r9,PACA_EXRFI+EX_R9(r13)
>>> +	ld	r10,PACA_EXRFI+EX_R10(r13)
>>> +	ori	31,31,0
>>> +	.rept 14
>>> +	b	1f
>>> +1:
>>> +	.endr
>>> +	blr
>>> +
>>>  	.globl rfi_flush_fallback
>>>  rfi_flush_fallback:
>>>  	SET_SCRATCH0(r13);
>>> @@ -1571,39 +1591,37 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
>>>  	std	r9,PACA_EXRFI+EX_R9(r13)
>>>  	std	r10,PACA_EXRFI+EX_R10(r13)
>>>  	std	r11,PACA_EXRFI+EX_R11(r13)
>>> -	std	r12,PACA_EXRFI+EX_R12(r13)
>>> -	std	r8,PACA_EXRFI+EX_R13(r13)
>>>  	mfctr	r9
>>>  	ld	r10,PACA_RFI_FLUSH_FALLBACK_AREA(r13)
>>> -	ld	r11,PACA_L1D_FLUSH_SETS(r13)
>>> -	ld	r12,PACA_L1D_FLUSH_CONGRUENCE(r13)
>>> -	/*
>>> -	 * The load adresses are at staggered offsets within cachelines,
>>> -	 * which suits some pipelines better (on others it should not
>>> -	 * hurt).
>>> -	 */
>>> -	addi	r12,r12,8
>>> +	ld	r11,PACA_L1D_FLUSH_SIZE(r13)
>>> +	srdi	r11,r11,(7 + 3) /* 128 byte lines, unrolled 8x */
>>>  	mtctr	r11
>>>  	DCBT_STOP_ALL_STREAM_IDS(r11) /* Stop prefetch streams */
>>>  
>>>  	/* order ld/st prior to dcbt stop all streams with flushing */
>>>  	sync
>>> -1:	li	r8,0
>>> -	.rept	8 /* 8-way set associative */
>>> -	ldx	r11,r10,r8
>>> -	add	r8,r8,r12
>>> -	xor	r11,r11,r11	// Ensure r11 is 0 even if fallback area is not
>>> -	add	r8,r8,r11	// Add 0, this creates a dependency on the ldx
>>> -	.endr
>>> -	addi	r10,r10,128 /* 128 byte cache line */
>>> +
>>> +	/*
>>> +	 * The load adresses are at staggered offsets within cachelines,
>>> +	 * which suits some pipelines better (on others it should not
>>> +	 * hurt).
>>> +	 */
>>> +1:
>>> +	ld	r11,(0x80 + 8)*0(r10)
>>> +	ld	r11,(0x80 + 8)*1(r10)
>>> +	ld	r11,(0x80 + 8)*2(r10)
>>> +	ld	r11,(0x80 + 8)*3(r10)
>>> +	ld	r11,(0x80 + 8)*4(r10)
>>> +	ld	r11,(0x80 + 8)*5(r10)
>>> +	ld	r11,(0x80 + 8)*6(r10)
>>> +	ld	r11,(0x80 + 8)*7(r10)
>>> +	addi	r10,r10,0x80*8
>>>  	bdnz	1b
>>>  
>>>  	mtctr	r9
>>>  	ld	r9,PACA_EXRFI+EX_R9(r13)
>>>  	ld	r10,PACA_EXRFI+EX_R10(r13)
>>>  	ld	r11,PACA_EXRFI+EX_R11(r13)
>>> -	ld	r12,PACA_EXRFI+EX_R12(r13)
>>> -	ld	r8,PACA_EXRFI+EX_R13(r13)
>>>  	GET_SCRATCH0(r13);
>>>  	rfid
>>>  
>>> @@ -1614,39 +1632,37 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
>>>  	std	r9,PACA_EXRFI+EX_R9(r13)
>>>  	std	r10,PACA_EXRFI+EX_R10(r13)
>>>  	std	r11,PACA_EXRFI+EX_R11(r13)
>>> -	std	r12,PACA_EXRFI+EX_R12(r13)
>>> -	std	r8,PACA_EXRFI+EX_R13(r13)
>>>  	mfctr	r9
>>>  	ld	r10,PACA_RFI_FLUSH_FALLBACK_AREA(r13)
>>> -	ld	r11,PACA_L1D_FLUSH_SETS(r13)
>>> -	ld	r12,PACA_L1D_FLUSH_CONGRUENCE(r13)
>>> -	/*
>>> -	 * The load adresses are at staggered offsets within cachelines,
>>> -	 * which suits some pipelines better (on others it should not
>>> -	 * hurt).
>>> -	 */
>>> -	addi	r12,r12,8
>>> +	ld	r11,PACA_L1D_FLUSH_SIZE(r13)
>>> +	srdi	r11,r11,(7 + 3) /* 128 byte lines, unrolled 8x */
>>>  	mtctr	r11
>>>  	DCBT_STOP_ALL_STREAM_IDS(r11) /* Stop prefetch streams */
>>>  
>>>  	/* order ld/st prior to dcbt stop all streams with flushing */
>>>  	sync
>>> -1:	li	r8,0
>>> -	.rept	8 /* 8-way set associative */
>>> -	ldx	r11,r10,r8
>>> -	add	r8,r8,r12
>>> -	xor	r11,r11,r11	// Ensure r11 is 0 even if fallback area is not
>>> -	add	r8,r8,r11	// Add 0, this creates a dependency on the ldx
>>> -	.endr
>>> -	addi	r10,r10,128 /* 128 byte cache line */
>>> +
>>> +	/*
>>> +	 * The load adresses are at staggered offsets within cachelines,
>>> +	 * which suits some pipelines better (on others it should not
>>> +	 * hurt).
>>> +	 */
>>> +1:
>>> +	ld	r11,(0x80 + 8)*0(r10)
>>> +	ld	r11,(0x80 + 8)*1(r10)
>>> +	ld	r11,(0x80 + 8)*2(r10)
>>> +	ld	r11,(0x80 + 8)*3(r10)
>>> +	ld	r11,(0x80 + 8)*4(r10)
>>> +	ld	r11,(0x80 + 8)*5(r10)
>>> +	ld	r11,(0x80 + 8)*6(r10)
>>> +	ld	r11,(0x80 + 8)*7(r10)
>>> +	addi	r10,r10,0x80*8
>>>  	bdnz	1b
>>>  
>>>  	mtctr	r9
>>>  	ld	r9,PACA_EXRFI+EX_R9(r13)
>>>  	ld	r10,PACA_EXRFI+EX_R10(r13)
>>>  	ld	r11,PACA_EXRFI+EX_R11(r13)
>>> -	ld	r12,PACA_EXRFI+EX_R12(r13)
>>> -	ld	r8,PACA_EXRFI+EX_R13(r13)
>>>  	GET_SCRATCH0(r13);
>>>  	hrfid
>>>  
>>> diff --git a/arch/powerpc/kernel/module.c b/arch/powerpc/kernel/module.c
>>> index 9547381b631a..ff009be97a42 100644
>>> --- a/arch/powerpc/kernel/module.c
>>> +++ b/arch/powerpc/kernel/module.c
>>> @@ -67,7 +67,15 @@ int module_finalize(const Elf_Ehdr *hdr,
>>>  		do_feature_fixups(powerpc_firmware_features,
>>>  				  (void *)sect->sh_addr,
>>>  				  (void *)sect->sh_addr + sect->sh_size);
>>> -#endif
>>> +#endif /* CONFIG_PPC64 */
>>> +
>>> +#ifdef CONFIG_PPC_BARRIER_NOSPEC
>>> +	sect = find_section(hdr, sechdrs, "__spec_barrier_fixup");
>>> +	if (sect != NULL)
>>> +		do_barrier_nospec_fixups_range(barrier_nospec_enabled,
>>> +				  (void *)sect->sh_addr,
>>> +				  (void *)sect->sh_addr + sect->sh_size);
>>> +#endif /* CONFIG_PPC_BARRIER_NOSPEC */
>>>  
>>>  	sect = find_section(hdr, sechdrs, "__lwsync_fixup");
>>>  	if (sect != NULL)
>>> diff --git a/arch/powerpc/kernel/security.c b/arch/powerpc/kernel/security.c
>>> new file mode 100644
>>> index 000000000000..58f0602a92b9
>>> --- /dev/null
>>> +++ b/arch/powerpc/kernel/security.c
>>> @@ -0,0 +1,433 @@
>>> +// SPDX-License-Identifier: GPL-2.0+
>>> +//
>>> +// Security related flags and so on.
>>> +//
>>> +// Copyright 2018, Michael Ellerman, IBM Corporation.
>>> +
>>> +#include <linux/kernel.h>
>>> +#include <linux/debugfs.h>
>>> +#include <linux/device.h>
>>> +#include <linux/seq_buf.h>
>>> +
>>> +#include <asm/debug.h>
>>> +#include <asm/asm-prototypes.h>
>>> +#include <asm/code-patching.h>
>>> +#include <asm/security_features.h>
>>> +#include <asm/setup.h>
>>> +
>>> +
>>> +unsigned long powerpc_security_features __read_mostly = SEC_FTR_DEFAULT;
>>> +
>>> +enum count_cache_flush_type {
>>> +	COUNT_CACHE_FLUSH_NONE	= 0x1,
>>> +	COUNT_CACHE_FLUSH_SW	= 0x2,
>>> +	COUNT_CACHE_FLUSH_HW	= 0x4,
>>> +};
>>> +static enum count_cache_flush_type count_cache_flush_type = COUNT_CACHE_FLUSH_NONE;
>>> +
>>> +bool barrier_nospec_enabled;
>>> +static bool no_nospec;
>>> +static bool btb_flush_enabled;
>>> +#ifdef CONFIG_PPC_FSL_BOOK3E
>>> +static bool no_spectrev2;
>>> +#endif
>>> +
>>> +static void enable_barrier_nospec(bool enable)
>>> +{
>>> +	barrier_nospec_enabled = enable;
>>> +	do_barrier_nospec_fixups(enable);
>>> +}
>>> +
>>> +void setup_barrier_nospec(void)
>>> +{
>>> +	bool enable;
>>> +
>>> +	/*
>>> +	 * It would make sense to check SEC_FTR_SPEC_BAR_ORI31 below as well.
>>> +	 * But there's a good reason not to. The two flags we check below are
>>> +	 * both enabled by default in the kernel, so if the hcall is not
>>> +	 * functional they will be enabled.
>>> +	 * On a system where the host firmware has been updated (so the ori
>>> +	 * functions as a barrier), but on which the hypervisor (KVM/Qemu) has
>>> +	 * not been updated, we would like to enable the barrier. Dropping the
>>> +	 * check for SEC_FTR_SPEC_BAR_ORI31 achieves that. The only downside is
>>> +	 * we potentially enable the barrier on systems where the host firmware
>>> +	 * is not updated, but that's harmless as it's a no-op.
>>> +	 */
>>> +	enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) &&
>>> +		 security_ftr_enabled(SEC_FTR_BNDS_CHK_SPEC_BAR);
>>> +
>>> +	if (!no_nospec)
>>> +		enable_barrier_nospec(enable);
>>> +}
>>> +
>>> +static int __init handle_nospectre_v1(char *p)
>>> +{
>>> +	no_nospec = true;
>>> +
>>> +	return 0;
>>> +}
>>> +early_param("nospectre_v1", handle_nospectre_v1);
>>> +
>>> +#ifdef CONFIG_DEBUG_FS
>>> +static int barrier_nospec_set(void *data, u64 val)
>>> +{
>>> +	switch (val) {
>>> +	case 0:
>>> +	case 1:
>>> +		break;
>>> +	default:
>>> +		return -EINVAL;
>>> +	}
>>> +
>>> +	if (!!val == !!barrier_nospec_enabled)
>>> +		return 0;
>>> +
>>> +	enable_barrier_nospec(!!val);
>>> +
>>> +	return 0;
>>> +}
>>> +
>>> +static int barrier_nospec_get(void *data, u64 *val)
>>> +{
>>> +	*val = barrier_nospec_enabled ? 1 : 0;
>>> +	return 0;
>>> +}
>>> +
>>> +DEFINE_SIMPLE_ATTRIBUTE(fops_barrier_nospec,
>>> +			barrier_nospec_get, barrier_nospec_set, "%llu\n");
>>> +
>>> +static __init int barrier_nospec_debugfs_init(void)
>>> +{
>>> +	debugfs_create_file("barrier_nospec", 0600, powerpc_debugfs_root, NULL,
>>> +			    &fops_barrier_nospec);
>>> +	return 0;
>>> +}
>>> +device_initcall(barrier_nospec_debugfs_init);
>>> +#endif /* CONFIG_DEBUG_FS */
>>> +
>>> +#ifdef CONFIG_PPC_FSL_BOOK3E
>>> +static int __init handle_nospectre_v2(char *p)
>>> +{
>>> +	no_spectrev2 = true;
>>> +
>>> +	return 0;
>>> +}
>>> +early_param("nospectre_v2", handle_nospectre_v2);
>>> +void setup_spectre_v2(void)
>>> +{
>>> +	if (no_spectrev2)
>>> +		do_btb_flush_fixups();
>>> +	else
>>> +		btb_flush_enabled = true;
>>> +}
>>> +#endif /* CONFIG_PPC_FSL_BOOK3E */
>>> +
>>> +#ifdef CONFIG_PPC_BOOK3S_64
>>> +ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr, char *buf)
>>> +{
>>> +	bool thread_priv;
>>> +
>>> +	thread_priv = security_ftr_enabled(SEC_FTR_L1D_THREAD_PRIV);
>>> +
>>> +	if (rfi_flush || thread_priv) {
>>> +		struct seq_buf s;
>>> +		seq_buf_init(&s, buf, PAGE_SIZE - 1);
>>> +
>>> +		seq_buf_printf(&s, "Mitigation: ");
>>> +
>>> +		if (rfi_flush)
>>> +			seq_buf_printf(&s, "RFI Flush");
>>> +
>>> +		if (rfi_flush && thread_priv)
>>> +			seq_buf_printf(&s, ", ");
>>> +
>>> +		if (thread_priv)
>>> +			seq_buf_printf(&s, "L1D private per thread");
>>> +
>>> +		seq_buf_printf(&s, "\n");
>>> +
>>> +		return s.len;
>>> +	}
>>> +
>>> +	if (!security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV) &&
>>> +	    !security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR))
>>> +		return sprintf(buf, "Not affected\n");
>>> +
>>> +	return sprintf(buf, "Vulnerable\n");
>>> +}
>>> +#endif
>>> +
>>> +ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr, char *buf)
>>> +{
>>> +	struct seq_buf s;
>>> +
>>> +	seq_buf_init(&s, buf, PAGE_SIZE - 1);
>>> +
>>> +	if (security_ftr_enabled(SEC_FTR_BNDS_CHK_SPEC_BAR)) {
>>> +		if (barrier_nospec_enabled)
>>> +			seq_buf_printf(&s, "Mitigation: __user pointer sanitization");
>>> +		else
>>> +			seq_buf_printf(&s, "Vulnerable");
>>> +
>>> +		if (security_ftr_enabled(SEC_FTR_SPEC_BAR_ORI31))
>>> +			seq_buf_printf(&s, ", ori31 speculation barrier enabled");
>>> +
>>> +		seq_buf_printf(&s, "\n");
>>> +	} else
>>> +		seq_buf_printf(&s, "Not affected\n");
>>> +
>>> +	return s.len;
>>> +}
>>> +
>>> +ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr, char *buf)
>>> +{
>>> +	struct seq_buf s;
>>> +	bool bcs, ccd;
>>> +
>>> +	seq_buf_init(&s, buf, PAGE_SIZE - 1);
>>> +
>>> +	bcs = security_ftr_enabled(SEC_FTR_BCCTRL_SERIALISED);
>>> +	ccd = security_ftr_enabled(SEC_FTR_COUNT_CACHE_DISABLED);
>>> +
>>> +	if (bcs || ccd) {
>>> +		seq_buf_printf(&s, "Mitigation: ");
>>> +
>>> +		if (bcs)
>>> +			seq_buf_printf(&s, "Indirect branch serialisation (kernel only)");
>>> +
>>> +		if (bcs && ccd)
>>> +			seq_buf_printf(&s, ", ");
>>> +
>>> +		if (ccd)
>>> +			seq_buf_printf(&s, "Indirect branch cache disabled");
>>> +	} else if (count_cache_flush_type != COUNT_CACHE_FLUSH_NONE) {
>>> +		seq_buf_printf(&s, "Mitigation: Software count cache flush");
>>> +
>>> +		if (count_cache_flush_type == COUNT_CACHE_FLUSH_HW)
>>> +			seq_buf_printf(&s, " (hardware accelerated)");
>>> +	} else if (btb_flush_enabled) {
>>> +		seq_buf_printf(&s, "Mitigation: Branch predictor state flush");
>>> +	} else {
>>> +		seq_buf_printf(&s, "Vulnerable");
>>> +	}
>>> +
>>> +	seq_buf_printf(&s, "\n");
>>> +
>>> +	return s.len;
>>> +}
>>> +
>>> +#ifdef CONFIG_PPC_BOOK3S_64
>>> +/*
>>> + * Store-forwarding barrier support.
>>> + */
>>> +
>>> +static enum stf_barrier_type stf_enabled_flush_types;
>>> +static bool no_stf_barrier;
>>> +bool stf_barrier;
>>> +
>>> +static int __init handle_no_stf_barrier(char *p)
>>> +{
>>> +	pr_info("stf-barrier: disabled on command line.");
>>> +	no_stf_barrier = true;
>>> +	return 0;
>>> +}
>>> +
>>> +early_param("no_stf_barrier", handle_no_stf_barrier);
>>> +
>>> +/* This is the generic flag used by other architectures */
>>> +static int __init handle_ssbd(char *p)
>>> +{
>>> +	if (!p || strncmp(p, "auto", 5) == 0 || strncmp(p, "on", 2) == 0 ) {
>>> +		/* Until firmware tells us, we have the barrier with auto */
>>> +		return 0;
>>> +	} else if (strncmp(p, "off", 3) == 0) {
>>> +		handle_no_stf_barrier(NULL);
>>> +		return 0;
>>> +	} else
>>> +		return 1;
>>> +
>>> +	return 0;
>>> +}
>>> +early_param("spec_store_bypass_disable", handle_ssbd);
>>> +
>>> +/* This is the generic flag used by other architectures */
>>> +static int __init handle_no_ssbd(char *p)
>>> +{
>>> +	handle_no_stf_barrier(NULL);
>>> +	return 0;
>>> +}
>>> +early_param("nospec_store_bypass_disable", handle_no_ssbd);
>>> +
>>> +static void stf_barrier_enable(bool enable)
>>> +{
>>> +	if (enable)
>>> +		do_stf_barrier_fixups(stf_enabled_flush_types);
>>> +	else
>>> +		do_stf_barrier_fixups(STF_BARRIER_NONE);
>>> +
>>> +	stf_barrier = enable;
>>> +}
>>> +
>>> +void setup_stf_barrier(void)
>>> +{
>>> +	enum stf_barrier_type type;
>>> +	bool enable, hv;
>>> +
>>> +	hv = cpu_has_feature(CPU_FTR_HVMODE);
>>> +
>>> +	/* Default to fallback in case fw-features are not available */
>>> +	if (cpu_has_feature(CPU_FTR_ARCH_207S))
>>> +		type = STF_BARRIER_SYNC_ORI;
>>> +	else if (cpu_has_feature(CPU_FTR_ARCH_206))
>>> +		type = STF_BARRIER_FALLBACK;
>>> +	else
>>> +		type = STF_BARRIER_NONE;
>>> +
>>> +	enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) &&
>>> +		(security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR) ||
>>> +		 (security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV) && hv));
>>> +
>>> +	if (type == STF_BARRIER_FALLBACK) {
>>> +		pr_info("stf-barrier: fallback barrier available\n");
>>> +	} else if (type == STF_BARRIER_SYNC_ORI) {
>>> +		pr_info("stf-barrier: hwsync barrier available\n");
>>> +	} else if (type == STF_BARRIER_EIEIO) {
>>> +		pr_info("stf-barrier: eieio barrier available\n");
>>> +	}
>>> +
>>> +	stf_enabled_flush_types = type;
>>> +
>>> +	if (!no_stf_barrier)
>>> +		stf_barrier_enable(enable);
>>> +}
>>> +
>>> +ssize_t cpu_show_spec_store_bypass(struct device *dev, struct device_attribute *attr, char *buf)
>>> +{
>>> +	if (stf_barrier && stf_enabled_flush_types != STF_BARRIER_NONE) {
>>> +		const char *type;
>>> +		switch (stf_enabled_flush_types) {
>>> +		case STF_BARRIER_EIEIO:
>>> +			type = "eieio";
>>> +			break;
>>> +		case STF_BARRIER_SYNC_ORI:
>>> +			type = "hwsync";
>>> +			break;
>>> +		case STF_BARRIER_FALLBACK:
>>> +			type = "fallback";
>>> +			break;
>>> +		default:
>>> +			type = "unknown";
>>> +		}
>>> +		return sprintf(buf, "Mitigation: Kernel entry/exit barrier (%s)\n", type);
>>> +	}
>>> +
>>> +	if (!security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV) &&
>>> +	    !security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR))
>>> +		return sprintf(buf, "Not affected\n");
>>> +
>>> +	return sprintf(buf, "Vulnerable\n");
>>> +}
>>> +
>>> +#ifdef CONFIG_DEBUG_FS
>>> +static int stf_barrier_set(void *data, u64 val)
>>> +{
>>> +	bool enable;
>>> +
>>> +	if (val == 1)
>>> +		enable = true;
>>> +	else if (val == 0)
>>> +		enable = false;
>>> +	else
>>> +		return -EINVAL;
>>> +
>>> +	/* Only do anything if we're changing state */
>>> +	if (enable != stf_barrier)
>>> +		stf_barrier_enable(enable);
>>> +
>>> +	return 0;
>>> +}
>>> +
>>> +static int stf_barrier_get(void *data, u64 *val)
>>> +{
>>> +	*val = stf_barrier ? 1 : 0;
>>> +	return 0;
>>> +}
>>> +
>>> +DEFINE_SIMPLE_ATTRIBUTE(fops_stf_barrier, stf_barrier_get, stf_barrier_set, "%llu\n");
>>> +
>>> +static __init int stf_barrier_debugfs_init(void)
>>> +{
>>> +	debugfs_create_file("stf_barrier", 0600, powerpc_debugfs_root, NULL, &fops_stf_barrier);
>>> +	return 0;
>>> +}
>>> +device_initcall(stf_barrier_debugfs_init);
>>> +#endif /* CONFIG_DEBUG_FS */
>>> +
>>> +static void toggle_count_cache_flush(bool enable)
>>> +{
>>> +	if (!enable || !security_ftr_enabled(SEC_FTR_FLUSH_COUNT_CACHE)) {
>>> +		patch_instruction_site(&patch__call_flush_count_cache, PPC_INST_NOP);
>>> +		count_cache_flush_type = COUNT_CACHE_FLUSH_NONE;
>>> +		pr_info("count-cache-flush: software flush disabled.\n");
>>> +		return;
>>> +	}
>>> +
>>> +	patch_branch_site(&patch__call_flush_count_cache,
>>> +			  (u64)&flush_count_cache, BRANCH_SET_LINK);
>>> +
>>> +	if (!security_ftr_enabled(SEC_FTR_BCCTR_FLUSH_ASSIST)) {
>>> +		count_cache_flush_type = COUNT_CACHE_FLUSH_SW;
>>> +		pr_info("count-cache-flush: full software flush sequence enabled.\n");
>>> +		return;
>>> +	}
>>> +
>>> +	patch_instruction_site(&patch__flush_count_cache_return, PPC_INST_BLR);
>>> +	count_cache_flush_type = COUNT_CACHE_FLUSH_HW;
>>> +	pr_info("count-cache-flush: hardware assisted flush sequence enabled\n");
>>> +}
>>> +
>>> +void setup_count_cache_flush(void)
>>> +{
>>> +	toggle_count_cache_flush(true);
>>> +}
>>> +
>>> +#ifdef CONFIG_DEBUG_FS
>>> +static int count_cache_flush_set(void *data, u64 val)
>>> +{
>>> +	bool enable;
>>> +
>>> +	if (val == 1)
>>> +		enable = true;
>>> +	else if (val == 0)
>>> +		enable = false;
>>> +	else
>>> +		return -EINVAL;
>>> +
>>> +	toggle_count_cache_flush(enable);
>>> +
>>> +	return 0;
>>> +}
>>> +
>>> +static int count_cache_flush_get(void *data, u64 *val)
>>> +{
>>> +	if (count_cache_flush_type == COUNT_CACHE_FLUSH_NONE)
>>> +		*val = 0;
>>> +	else
>>> +		*val = 1;
>>> +
>>> +	return 0;
>>> +}
>>> +
>>> +DEFINE_SIMPLE_ATTRIBUTE(fops_count_cache_flush, count_cache_flush_get,
>>> +			count_cache_flush_set, "%llu\n");
>>> +
>>> +static __init int count_cache_flush_debugfs_init(void)
>>> +{
>>> +	debugfs_create_file("count_cache_flush", 0600, powerpc_debugfs_root,
>>> +			    NULL, &fops_count_cache_flush);
>>> +	return 0;
>>> +}
>>> +device_initcall(count_cache_flush_debugfs_init);
>>> +#endif /* CONFIG_DEBUG_FS */
>>> +#endif /* CONFIG_PPC_BOOK3S_64 */
>>> diff --git a/arch/powerpc/kernel/setup_32.c b/arch/powerpc/kernel/setup_32.c
>>> index ad8c9db61237..5a9f035bcd6b 100644
>>> --- a/arch/powerpc/kernel/setup_32.c
>>> +++ b/arch/powerpc/kernel/setup_32.c
>>> @@ -322,6 +322,8 @@ void __init setup_arch(char **cmdline_p)
>>>  		ppc_md.setup_arch();
>>>  	if ( ppc_md.progress ) ppc_md.progress("arch: exit", 0x3eab);
>>>  
>>> +	setup_barrier_nospec();
>>> +
>>>  	paging_init();
>>>  
>>>  	/* Initialize the MMU context management stuff */
>>> diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
>>> index 9eb469bed22b..6bb731ababc6 100644
>>> --- a/arch/powerpc/kernel/setup_64.c
>>> +++ b/arch/powerpc/kernel/setup_64.c
>>> @@ -736,6 +736,8 @@ void __init setup_arch(char **cmdline_p)
>>>  	if (ppc_md.setup_arch)
>>>  		ppc_md.setup_arch();
>>>  
>>> +	setup_barrier_nospec();
>>> +
>>>  	paging_init();
>>>  
>>>  	/* Initialize the MMU context management stuff */
>>> @@ -873,9 +875,6 @@ static void do_nothing(void *unused)
>>>  
>>>  void rfi_flush_enable(bool enable)
>>>  {
>>> -	if (rfi_flush == enable)
>>> -		return;
>>> -
>>>  	if (enable) {
>>>  		do_rfi_flush_fixups(enabled_flush_types);
>>>  		on_each_cpu(do_nothing, NULL, 1);
>>> @@ -885,11 +884,15 @@ void rfi_flush_enable(bool enable)
>>>  	rfi_flush = enable;
>>>  }
>>>  
>>> -static void init_fallback_flush(void)
>>> +static void __ref init_fallback_flush(void)
>>>  {
>>>  	u64 l1d_size, limit;
>>>  	int cpu;
>>>  
>>> +	/* Only allocate the fallback flush area once (at boot time). */
>>> +	if (l1d_flush_fallback_area)
>>> +		return;
>>> +
>>>  	l1d_size = ppc64_caches.dsize;
>>>  	limit = min(safe_stack_limit(), ppc64_rma_size);
>>>  
>>> @@ -902,34 +905,23 @@ static void init_fallback_flush(void)
>>>  	memset(l1d_flush_fallback_area, 0, l1d_size * 2);
>>>  
>>>  	for_each_possible_cpu(cpu) {
>>> -		/*
>>> -		 * The fallback flush is currently coded for 8-way
>>> -		 * associativity. Different associativity is possible, but it
>>> -		 * will be treated as 8-way and may not evict the lines as
>>> -		 * effectively.
>>> -		 *
>>> -		 * 128 byte lines are mandatory.
>>> -		 */
>>> -		u64 c = l1d_size / 8;
>>> -
>>>  		paca[cpu].rfi_flush_fallback_area = l1d_flush_fallback_area;
>>> -		paca[cpu].l1d_flush_congruence = c;
>>> -		paca[cpu].l1d_flush_sets = c / 128;
>>> +		paca[cpu].l1d_flush_size = l1d_size;
>>>  	}
>>>  }
>>>  
>>> -void __init setup_rfi_flush(enum l1d_flush_type types, bool enable)
>>> +void setup_rfi_flush(enum l1d_flush_type types, bool enable)
>>>  {
>>>  	if (types & L1D_FLUSH_FALLBACK) {
>>> -		pr_info("rfi-flush: Using fallback displacement flush\n");
>>> +		pr_info("rfi-flush: fallback displacement flush available\n");
>>>  		init_fallback_flush();
>>>  	}
>>>  
>>>  	if (types & L1D_FLUSH_ORI)
>>> -		pr_info("rfi-flush: Using ori type flush\n");
>>> +		pr_info("rfi-flush: ori type flush available\n");
>>>  
>>>  	if (types & L1D_FLUSH_MTTRIG)
>>> -		pr_info("rfi-flush: Using mttrig type flush\n");
>>> +		pr_info("rfi-flush: mttrig type flush available\n");
>>>  
>>>  	enabled_flush_types = types;
>>>  
>>> @@ -940,13 +932,19 @@ void __init setup_rfi_flush(enum l1d_flush_type types, bool enable)
>>>  #ifdef CONFIG_DEBUG_FS
>>>  static int rfi_flush_set(void *data, u64 val)
>>>  {
>>> +	bool enable;
>>> +
>>>  	if (val == 1)
>>> -		rfi_flush_enable(true);
>>> +		enable = true;
>>>  	else if (val == 0)
>>> -		rfi_flush_enable(false);
>>> +		enable = false;
>>>  	else
>>>  		return -EINVAL;
>>>  
>>> +	/* Only do anything if we're changing state */
>>> +	if (enable != rfi_flush)
>>> +		rfi_flush_enable(enable);
>>> +
>>>  	return 0;
>>>  }
>>>  
>>> @@ -965,12 +963,4 @@ static __init int rfi_flush_debugfs_init(void)
>>>  }
>>>  device_initcall(rfi_flush_debugfs_init);
>>>  #endif
>>> -
>>> -ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr, char *buf)
>>> -{
>>> -	if (rfi_flush)
>>> -		return sprintf(buf, "Mitigation: RFI Flush\n");
>>> -
>>> -	return sprintf(buf, "Vulnerable\n");
>>> -}
>>>  #endif /* CONFIG_PPC_BOOK3S_64 */
>>> diff --git a/arch/powerpc/kernel/vmlinux.lds.S b/arch/powerpc/kernel/vmlinux.lds.S
>>> index 072a23a17350..876ac9d52afc 100644
>>> --- a/arch/powerpc/kernel/vmlinux.lds.S
>>> +++ b/arch/powerpc/kernel/vmlinux.lds.S
>>> @@ -73,14 +73,45 @@ SECTIONS
>>>  	RODATA
>>>  
>>>  #ifdef CONFIG_PPC64
>>> +	. = ALIGN(8);
>>> +	__stf_entry_barrier_fixup : AT(ADDR(__stf_entry_barrier_fixup) - LOAD_OFFSET) {
>>> +		__start___stf_entry_barrier_fixup = .;
>>> +		*(__stf_entry_barrier_fixup)
>>> +		__stop___stf_entry_barrier_fixup = .;
>>> +	}
>>> +
>>> +	. = ALIGN(8);
>>> +	__stf_exit_barrier_fixup : AT(ADDR(__stf_exit_barrier_fixup) - LOAD_OFFSET) {
>>> +		__start___stf_exit_barrier_fixup = .;
>>> +		*(__stf_exit_barrier_fixup)
>>> +		__stop___stf_exit_barrier_fixup = .;
>>> +	}
>>> +
>>>  	. = ALIGN(8);
>>>  	__rfi_flush_fixup : AT(ADDR(__rfi_flush_fixup) - LOAD_OFFSET) {
>>>  		__start___rfi_flush_fixup = .;
>>>  		*(__rfi_flush_fixup)
>>>  		__stop___rfi_flush_fixup = .;
>>>  	}
>>> -#endif
>>> +#endif /* CONFIG_PPC64 */
>>>  
>>> +#ifdef CONFIG_PPC_BARRIER_NOSPEC
>>> +	. = ALIGN(8);
>>> +	__spec_barrier_fixup : AT(ADDR(__spec_barrier_fixup) - LOAD_OFFSET) {
>>> +		__start___barrier_nospec_fixup = .;
>>> +		*(__barrier_nospec_fixup)
>>> +		__stop___barrier_nospec_fixup = .;
>>> +	}
>>> +#endif /* CONFIG_PPC_BARRIER_NOSPEC */
>>> +
>>> +#ifdef CONFIG_PPC_FSL_BOOK3E
>>> +	. = ALIGN(8);
>>> +	__spec_btb_flush_fixup : AT(ADDR(__spec_btb_flush_fixup) - LOAD_OFFSET) {
>>> +		__start__btb_flush_fixup = .;
>>> +		*(__btb_flush_fixup)
>>> +		__stop__btb_flush_fixup = .;
>>> +	}
>>> +#endif
>>>  	EXCEPTION_TABLE(0)
>>>  
>>>  	NOTES :kernel :notes
>>> diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
>>> index d5edbeb8eb82..570c06a00db6 100644
>>> --- a/arch/powerpc/lib/code-patching.c
>>> +++ b/arch/powerpc/lib/code-patching.c
>>> @@ -14,12 +14,25 @@
>>>  #include <asm/page.h>
>>>  #include <asm/code-patching.h>
>>>  #include <asm/uaccess.h>
>>> +#include <asm/setup.h>
>>> +#include <asm/sections.h>
>>>  
>>>  
>>> +static inline bool is_init(unsigned int *addr)
>>> +{
>>> +	return addr >= (unsigned int *)__init_begin && addr < (unsigned int *)__init_end;
>>> +}
>>> +
>>>  int patch_instruction(unsigned int *addr, unsigned int instr)
>>>  {
>>>  	int err;
>>>  
>>> +	/* Make sure we aren't patching a freed init section */
>>> +	if (init_mem_is_free && is_init(addr)) {
>>> +		pr_debug("Skipping init section patching addr: 0x%px\n", addr);
>>> +		return 0;
>>> +	}
>>> +
>>>  	__put_user_size(instr, addr, 4, err);
>>>  	if (err)
>>>  		return err;
>>> @@ -32,6 +45,22 @@ int patch_branch(unsigned int *addr, unsigned long target, int flags)
>>>  	return patch_instruction(addr, create_branch(addr, target, flags));
>>>  }
>>>  
>>> +int patch_branch_site(s32 *site, unsigned long target, int flags)
>>> +{
>>> +	unsigned int *addr;
>>> +
>>> +	addr = (unsigned int *)((unsigned long)site + *site);
>>> +	return patch_instruction(addr, create_branch(addr, target, flags));
>>> +}
>>> +
>>> +int patch_instruction_site(s32 *site, unsigned int instr)
>>> +{
>>> +	unsigned int *addr;
>>> +
>>> +	addr = (unsigned int *)((unsigned long)site + *site);
>>> +	return patch_instruction(addr, instr);
>>> +}
>>> +
>>>  unsigned int create_branch(const unsigned int *addr,
>>>  			   unsigned long target, int flags)
>>>  {
>>> diff --git a/arch/powerpc/lib/feature-fixups.c b/arch/powerpc/lib/feature-fixups.c
>>> index 3af014684872..7bdfc19a491d 100644
>>> --- a/arch/powerpc/lib/feature-fixups.c
>>> +++ b/arch/powerpc/lib/feature-fixups.c
>>> @@ -21,7 +21,7 @@
>>>  #include <asm/page.h>
>>>  #include <asm/sections.h>
>>>  #include <asm/setup.h>
>>> -
>>> +#include <asm/security_features.h>
>>>  
>>>  struct fixup_entry {
>>>  	unsigned long	mask;
>>> @@ -115,6 +115,120 @@ void do_feature_fixups(unsigned long value, void *fixup_start, void *fixup_end)
>>>  }
>>>  
>>>  #ifdef CONFIG_PPC_BOOK3S_64
>>> +void do_stf_entry_barrier_fixups(enum stf_barrier_type types)
>>> +{
>>> +	unsigned int instrs[3], *dest;
>>> +	long *start, *end;
>>> +	int i;
>>> +
>>> +	start = PTRRELOC(&__start___stf_entry_barrier_fixup),
>>> +	end = PTRRELOC(&__stop___stf_entry_barrier_fixup);
>>> +
>>> +	instrs[0] = 0x60000000; /* nop */
>>> +	instrs[1] = 0x60000000; /* nop */
>>> +	instrs[2] = 0x60000000; /* nop */
>>> +
>>> +	i = 0;
>>> +	if (types & STF_BARRIER_FALLBACK) {
>>> +		instrs[i++] = 0x7d4802a6; /* mflr r10		*/
>>> +		instrs[i++] = 0x60000000; /* branch patched below */
>>> +		instrs[i++] = 0x7d4803a6; /* mtlr r10		*/
>>> +	} else if (types & STF_BARRIER_EIEIO) {
>>> +		instrs[i++] = 0x7e0006ac; /* eieio + bit 6 hint */
>>> +	} else if (types & STF_BARRIER_SYNC_ORI) {
>>> +		instrs[i++] = 0x7c0004ac; /* hwsync		*/
>>> +		instrs[i++] = 0xe94d0000; /* ld r10,0(r13)	*/
>>> +		instrs[i++] = 0x63ff0000; /* ori 31,31,0 speculation barrier */
>>> +	}
>>> +
>>> +	for (i = 0; start < end; start++, i++) {
>>> +		dest = (void *)start + *start;
>>> +
>>> +		pr_devel("patching dest %lx\n", (unsigned long)dest);
>>> +
>>> +		patch_instruction(dest, instrs[0]);
>>> +
>>> +		if (types & STF_BARRIER_FALLBACK)
>>> +			patch_branch(dest + 1, (unsigned long)&stf_barrier_fallback,
>>> +				     BRANCH_SET_LINK);
>>> +		else
>>> +			patch_instruction(dest + 1, instrs[1]);
>>> +
>>> +		patch_instruction(dest + 2, instrs[2]);
>>> +	}
>>> +
>>> +	printk(KERN_DEBUG "stf-barrier: patched %d entry locations (%s barrier)\n", i,
>>> +		(types == STF_BARRIER_NONE)                  ? "no" :
>>> +		(types == STF_BARRIER_FALLBACK)              ? "fallback" :
>>> +		(types == STF_BARRIER_EIEIO)                 ? "eieio" :
>>> +		(types == (STF_BARRIER_SYNC_ORI))            ? "hwsync"
>>> +		                                           : "unknown");
>>> +}
>>> +
>>> +void do_stf_exit_barrier_fixups(enum stf_barrier_type types)
>>> +{
>>> +	unsigned int instrs[6], *dest;
>>> +	long *start, *end;
>>> +	int i;
>>> +
>>> +	start = PTRRELOC(&__start___stf_exit_barrier_fixup),
>>> +	end = PTRRELOC(&__stop___stf_exit_barrier_fixup);
>>> +
>>> +	instrs[0] = 0x60000000; /* nop */
>>> +	instrs[1] = 0x60000000; /* nop */
>>> +	instrs[2] = 0x60000000; /* nop */
>>> +	instrs[3] = 0x60000000; /* nop */
>>> +	instrs[4] = 0x60000000; /* nop */
>>> +	instrs[5] = 0x60000000; /* nop */
>>> +
>>> +	i = 0;
>>> +	if (types & STF_BARRIER_FALLBACK || types & STF_BARRIER_SYNC_ORI) {
>>> +		if (cpu_has_feature(CPU_FTR_HVMODE)) {
>>> +			instrs[i++] = 0x7db14ba6; /* mtspr 0x131, r13 (HSPRG1) */
>>> +			instrs[i++] = 0x7db04aa6; /* mfspr r13, 0x130 (HSPRG0) */
>>> +		} else {
>>> +			instrs[i++] = 0x7db243a6; /* mtsprg 2,r13	*/
>>> +			instrs[i++] = 0x7db142a6; /* mfsprg r13,1    */
>>> +	        }
>>> +		instrs[i++] = 0x7c0004ac; /* hwsync		*/
>>> +		instrs[i++] = 0xe9ad0000; /* ld r13,0(r13)	*/
>>> +		instrs[i++] = 0x63ff0000; /* ori 31,31,0 speculation barrier */
>>> +		if (cpu_has_feature(CPU_FTR_HVMODE)) {
>>> +			instrs[i++] = 0x7db14aa6; /* mfspr r13, 0x131 (HSPRG1) */
>>> +		} else {
>>> +			instrs[i++] = 0x7db242a6; /* mfsprg r13,2 */
>>> +		}
>>> +	} else if (types & STF_BARRIER_EIEIO) {
>>> +		instrs[i++] = 0x7e0006ac; /* eieio + bit 6 hint */
>>> +	}
>>> +
>>> +	for (i = 0; start < end; start++, i++) {
>>> +		dest = (void *)start + *start;
>>> +
>>> +		pr_devel("patching dest %lx\n", (unsigned long)dest);
>>> +
>>> +		patch_instruction(dest, instrs[0]);
>>> +		patch_instruction(dest + 1, instrs[1]);
>>> +		patch_instruction(dest + 2, instrs[2]);
>>> +		patch_instruction(dest + 3, instrs[3]);
>>> +		patch_instruction(dest + 4, instrs[4]);
>>> +		patch_instruction(dest + 5, instrs[5]);
>>> +	}
>>> +	printk(KERN_DEBUG "stf-barrier: patched %d exit locations (%s barrier)\n", i,
>>> +		(types == STF_BARRIER_NONE)                  ? "no" :
>>> +		(types == STF_BARRIER_FALLBACK)              ? "fallback" :
>>> +		(types == STF_BARRIER_EIEIO)                 ? "eieio" :
>>> +		(types == (STF_BARRIER_SYNC_ORI))            ? "hwsync"
>>> +		                                           : "unknown");
>>> +}
>>> +
>>> +
>>> +void do_stf_barrier_fixups(enum stf_barrier_type types)
>>> +{
>>> +	do_stf_entry_barrier_fixups(types);
>>> +	do_stf_exit_barrier_fixups(types);
>>> +}
>>> +
>>>  void do_rfi_flush_fixups(enum l1d_flush_type types)
>>>  {
>>>  	unsigned int instrs[3], *dest;
>>> @@ -151,10 +265,110 @@ void do_rfi_flush_fixups(enum l1d_flush_type types)
>>>  		patch_instruction(dest + 2, instrs[2]);
>>>  	}
>>>  
>>> -	printk(KERN_DEBUG "rfi-flush: patched %d locations\n", i);
>>> +	printk(KERN_DEBUG "rfi-flush: patched %d locations (%s flush)\n", i,
>>> +		(types == L1D_FLUSH_NONE)       ? "no" :
>>> +		(types == L1D_FLUSH_FALLBACK)   ? "fallback displacement" :
>>> +		(types &  L1D_FLUSH_ORI)        ? (types & L1D_FLUSH_MTTRIG)
>>> +							? "ori+mttrig type"
>>> +							: "ori type" :
>>> +		(types &  L1D_FLUSH_MTTRIG)     ? "mttrig type"
>>> +						: "unknown");
>>> +}
>>> +
>>> +void do_barrier_nospec_fixups_range(bool enable, void *fixup_start, void *fixup_end)
>>> +{
>>> +	unsigned int instr, *dest;
>>> +	long *start, *end;
>>> +	int i;
>>> +
>>> +	start = fixup_start;
>>> +	end = fixup_end;
>>> +
>>> +	instr = 0x60000000; /* nop */
>>> +
>>> +	if (enable) {
>>> +		pr_info("barrier-nospec: using ORI speculation barrier\n");
>>> +		instr = 0x63ff0000; /* ori 31,31,0 speculation barrier */
>>> +	}
>>> +
>>> +	for (i = 0; start < end; start++, i++) {
>>> +		dest = (void *)start + *start;
>>> +
>>> +		pr_devel("patching dest %lx\n", (unsigned long)dest);
>>> +		patch_instruction(dest, instr);
>>> +	}
>>> +
>>> +	printk(KERN_DEBUG "barrier-nospec: patched %d locations\n", i);
>>>  }
>>> +
>>>  #endif /* CONFIG_PPC_BOOK3S_64 */
>>>  
>>> +#ifdef CONFIG_PPC_BARRIER_NOSPEC
>>> +void do_barrier_nospec_fixups(bool enable)
>>> +{
>>> +	void *start, *end;
>>> +
>>> +	start = PTRRELOC(&__start___barrier_nospec_fixup),
>>> +	end = PTRRELOC(&__stop___barrier_nospec_fixup);
>>> +
>>> +	do_barrier_nospec_fixups_range(enable, start, end);
>>> +}
>>> +#endif /* CONFIG_PPC_BARRIER_NOSPEC */
>>> +
>>> +#ifdef CONFIG_PPC_FSL_BOOK3E
>>> +void do_barrier_nospec_fixups_range(bool enable, void *fixup_start, void *fixup_end)
>>> +{
>>> +	unsigned int instr[2], *dest;
>>> +	long *start, *end;
>>> +	int i;
>>> +
>>> +	start = fixup_start;
>>> +	end = fixup_end;
>>> +
>>> +	instr[0] = PPC_INST_NOP;
>>> +	instr[1] = PPC_INST_NOP;
>>> +
>>> +	if (enable) {
>>> +		pr_info("barrier-nospec: using isync; sync as speculation barrier\n");
>>> +		instr[0] = PPC_INST_ISYNC;
>>> +		instr[1] = PPC_INST_SYNC;
>>> +	}
>>> +
>>> +	for (i = 0; start < end; start++, i++) {
>>> +		dest = (void *)start + *start;
>>> +
>>> +		pr_devel("patching dest %lx\n", (unsigned long)dest);
>>> +		patch_instruction(dest, instr[0]);
>>> +		patch_instruction(dest + 1, instr[1]);
>>> +	}
>>> +
>>> +	printk(KERN_DEBUG "barrier-nospec: patched %d locations\n", i);
>>> +}
>>> +
>>> +static void patch_btb_flush_section(long *curr)
>>> +{
>>> +	unsigned int *start, *end;
>>> +
>>> +	start = (void *)curr + *curr;
>>> +	end = (void *)curr + *(curr + 1);
>>> +	for (; start < end; start++) {
>>> +		pr_devel("patching dest %lx\n", (unsigned long)start);
>>> +		patch_instruction(start, PPC_INST_NOP);
>>> +	}
>>> +}
>>> +
>>> +void do_btb_flush_fixups(void)
>>> +{
>>> +	long *start, *end;
>>> +
>>> +	start = PTRRELOC(&__start__btb_flush_fixup);
>>> +	end = PTRRELOC(&__stop__btb_flush_fixup);
>>> +
>>> +	for (; start < end; start += 2)
>>> +		patch_btb_flush_section(start);
>>> +}
>>> +#endif /* CONFIG_PPC_FSL_BOOK3E */
>>> +
>>>  void do_lwsync_fixups(unsigned long value, void *fixup_start, void *fixup_end)
>>>  {
>>>  	long *start, *end;
>>> diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
>>> index 22d94c3e6fc4..1efe5ca5c3bc 100644
>>> --- a/arch/powerpc/mm/mem.c
>>> +++ b/arch/powerpc/mm/mem.c
>>> @@ -62,6 +62,7 @@
>>>  #endif
>>>  
>>>  unsigned long long memory_limit;
>>> +bool init_mem_is_free;
>>>  
>>>  #ifdef CONFIG_HIGHMEM
>>>  pte_t *kmap_pte;
>>> @@ -381,6 +382,7 @@ void __init mem_init(void)
>>>  void free_initmem(void)
>>>  {
>>>  	ppc_md.progress = ppc_printk_progress;
>>> +	init_mem_is_free = true;
>>>  	free_initmem_default(POISON_FREE_INITMEM);
>>>  }
>>>  
>>> diff --git a/arch/powerpc/mm/tlb_low_64e.S b/arch/powerpc/mm/tlb_low_64e.S
>>> index 29d6987c37ba..5486d56da289 100644
>>> --- a/arch/powerpc/mm/tlb_low_64e.S
>>> +++ b/arch/powerpc/mm/tlb_low_64e.S
>>> @@ -69,6 +69,13 @@ END_FTR_SECTION_IFSET(CPU_FTR_EMB_HV)
>>>  	std	r15,EX_TLB_R15(r12)
>>>  	std	r10,EX_TLB_CR(r12)
>>>  #ifdef CONFIG_PPC_FSL_BOOK3E
>>> +START_BTB_FLUSH_SECTION
>>> +	mfspr r11, SPRN_SRR1
>>> +	andi. r10,r11,MSR_PR
>>> +	beq 1f
>>> +	BTB_FLUSH(r10)
>>> +1:
>>> +END_BTB_FLUSH_SECTION
>>>  	std	r7,EX_TLB_R7(r12)
>>>  #endif
>>>  	TLB_MISS_PROLOG_STATS
>>> diff --git a/arch/powerpc/platforms/powernv/setup.c b/arch/powerpc/platforms/powernv/setup.c
>>> index c57afc619b20..e14b52c7ebd8 100644
>>> --- a/arch/powerpc/platforms/powernv/setup.c
>>> +++ b/arch/powerpc/platforms/powernv/setup.c
>>> @@ -37,53 +37,99 @@
>>>  #include <asm/smp.h>
>>>  #include <asm/tm.h>
>>>  #include <asm/setup.h>
>>> +#include <asm/security_features.h>
>>>  
>>>  #include "powernv.h"
>>>  
>>> +
>>> +static bool fw_feature_is(const char *state, const char *name,
>>> +			  struct device_node *fw_features)
>>> +{
>>> +	struct device_node *np;
>>> +	bool rc = false;
>>> +
>>> +	np = of_get_child_by_name(fw_features, name);
>>> +	if (np) {
>>> +		rc = of_property_read_bool(np, state);
>>> +		of_node_put(np);
>>> +	}
>>> +
>>> +	return rc;
>>> +}
>>> +
>>> +static void init_fw_feat_flags(struct device_node *np)
>>> +{
>>> +	if (fw_feature_is("enabled", "inst-spec-barrier-ori31,31,0", np))
>>> +		security_ftr_set(SEC_FTR_SPEC_BAR_ORI31);
>>> +
>>> +	if (fw_feature_is("enabled", "fw-bcctrl-serialized", np))
>>> +		security_ftr_set(SEC_FTR_BCCTRL_SERIALISED);
>>> +
>>> +	if (fw_feature_is("enabled", "inst-l1d-flush-ori30,30,0", np))
>>> +		security_ftr_set(SEC_FTR_L1D_FLUSH_ORI30);
>>> +
>>> +	if (fw_feature_is("enabled", "inst-l1d-flush-trig2", np))
>>> +		security_ftr_set(SEC_FTR_L1D_FLUSH_TRIG2);
>>> +
>>> +	if (fw_feature_is("enabled", "fw-l1d-thread-split", np))
>>> +		security_ftr_set(SEC_FTR_L1D_THREAD_PRIV);
>>> +
>>> +	if (fw_feature_is("enabled", "fw-count-cache-disabled", np))
>>> +		security_ftr_set(SEC_FTR_COUNT_CACHE_DISABLED);
>>> +
>>> +	if (fw_feature_is("enabled", "fw-count-cache-flush-bcctr2,0,0", np))
>>> +		security_ftr_set(SEC_FTR_BCCTR_FLUSH_ASSIST);
>>> +
>>> +	if (fw_feature_is("enabled", "needs-count-cache-flush-on-context-switch", np))
>>> +		security_ftr_set(SEC_FTR_FLUSH_COUNT_CACHE);
>>> +
>>> +	/*
>>> +	 * The features below are enabled by default, so we instead look to see
>>> +	 * if firmware has *disabled* them, and clear them if so.
>>> +	 */
>>> +	if (fw_feature_is("disabled", "speculation-policy-favor-security", np))
>>> +		security_ftr_clear(SEC_FTR_FAVOUR_SECURITY);
>>> +
>>> +	if (fw_feature_is("disabled", "needs-l1d-flush-msr-pr-0-to-1", np))
>>> +		security_ftr_clear(SEC_FTR_L1D_FLUSH_PR);
>>> +
>>> +	if (fw_feature_is("disabled", "needs-l1d-flush-msr-hv-1-to-0", np))
>>> +		security_ftr_clear(SEC_FTR_L1D_FLUSH_HV);
>>> +
>>> +	if (fw_feature_is("disabled", "needs-spec-barrier-for-bound-checks", np))
>>> +		security_ftr_clear(SEC_FTR_BNDS_CHK_SPEC_BAR);
>>> +}
>>> +
>>>  static void pnv_setup_rfi_flush(void)
>>>  {
>>>  	struct device_node *np, *fw_features;
>>>  	enum l1d_flush_type type;
>>> -	int enable;
>>> +	bool enable;
>>>  
>>>  	/* Default to fallback in case fw-features are not available */
>>>  	type = L1D_FLUSH_FALLBACK;
>>> -	enable = 1;
>>>  
>>>  	np = of_find_node_by_name(NULL, "ibm,opal");
>>>  	fw_features = of_get_child_by_name(np, "fw-features");
>>>  	of_node_put(np);
>>>  
>>>  	if (fw_features) {
>>> -		np = of_get_child_by_name(fw_features, "inst-l1d-flush-trig2");
>>> -		if (np && of_property_read_bool(np, "enabled"))
>>> -			type = L1D_FLUSH_MTTRIG;
>>> +		init_fw_feat_flags(fw_features);
>>> +		of_node_put(fw_features);
>>>  
>>> -		of_node_put(np);
>>> +		if (security_ftr_enabled(SEC_FTR_L1D_FLUSH_TRIG2))
>>> +			type = L1D_FLUSH_MTTRIG;
>>>  
>>> -		np = of_get_child_by_name(fw_features, "inst-l1d-flush-ori30,30,0");
>>> -		if (np && of_property_read_bool(np, "enabled"))
>>> +		if (security_ftr_enabled(SEC_FTR_L1D_FLUSH_ORI30))
>>>  			type = L1D_FLUSH_ORI;
>>> -
>>> -		of_node_put(np);
>>> -
>>> -		/* Enable unless firmware says NOT to */
>>> -		enable = 2;
>>> -		np = of_get_child_by_name(fw_features, "needs-l1d-flush-msr-hv-1-to-0");
>>> -		if (np && of_property_read_bool(np, "disabled"))
>>> -			enable--;
>>> -
>>> -		of_node_put(np);
>>> -
>>> -		np = of_get_child_by_name(fw_features, "needs-l1d-flush-msr-pr-0-to-1");
>>> -		if (np && of_property_read_bool(np, "disabled"))
>>> -			enable--;
>>> -
>>> -		of_node_put(np);
>>> -		of_node_put(fw_features);
>>>  	}
>>>  
>>> -	setup_rfi_flush(type, enable > 0);
>>> +	enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) && \
>>> +		 (security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR)   || \
>>> +		  security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV));
>>> +
>>> +	setup_rfi_flush(type, enable);
>>> +	setup_count_cache_flush();
>>>  }
>>>  
>>>  static void __init pnv_setup_arch(void)
>>> @@ -91,6 +137,7 @@ static void __init pnv_setup_arch(void)
>>>  	set_arch_panic_timeout(10, ARCH_PANIC_TIMEOUT);
>>>  
>>>  	pnv_setup_rfi_flush();
>>> +	setup_stf_barrier();
>>>  
>>>  	/* Initialize SMP */
>>>  	pnv_smp_init();
>>> diff --git a/arch/powerpc/platforms/pseries/mobility.c b/arch/powerpc/platforms/pseries/mobility.c
>>> index 8dd0c8edefd6..c773396d0969 100644
>>> --- a/arch/powerpc/platforms/pseries/mobility.c
>>> +++ b/arch/powerpc/platforms/pseries/mobility.c
>>> @@ -314,6 +314,9 @@ void post_mobility_fixup(void)
>>>  		printk(KERN_ERR "Post-mobility device tree update "
>>>  			"failed: %d\n", rc);
>>>  
>>> +	/* Possibly switch to a new RFI flush type */
>>> +	pseries_setup_rfi_flush();
>>> +
>>>  	return;
>>>  }
>>>  
>>> diff --git a/arch/powerpc/platforms/pseries/pseries.h b/arch/powerpc/platforms/pseries/pseries.h
>>> index 8411c27293e4..e7d80797384d 100644
>>> --- a/arch/powerpc/platforms/pseries/pseries.h
>>> +++ b/arch/powerpc/platforms/pseries/pseries.h
>>> @@ -81,4 +81,6 @@ extern struct pci_controller_ops pseries_pci_controller_ops;
>>>  
>>>  unsigned long pseries_memory_block_size(void);
>>>  
>>> +void pseries_setup_rfi_flush(void);
>>> +
>>>  #endif /* _PSERIES_PSERIES_H */
>>> diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
>>> index dd2545fc9947..9cc976ff7fec 100644
>>> --- a/arch/powerpc/platforms/pseries/setup.c
>>> +++ b/arch/powerpc/platforms/pseries/setup.c
>>> @@ -67,6 +67,7 @@
>>>  #include <asm/eeh.h>
>>>  #include <asm/reg.h>
>>>  #include <asm/plpar_wrappers.h>
>>> +#include <asm/security_features.h>
>>>  
>>>  #include "pseries.h"
>>>  
>>> @@ -499,37 +500,87 @@ static void __init find_and_init_phbs(void)
>>>  	of_pci_check_probe_only();
>>>  }
>>>  
>>> -static void pseries_setup_rfi_flush(void)
>>> +static void init_cpu_char_feature_flags(struct h_cpu_char_result *result)
>>> +{
>>> +	/*
>>> +	 * The features below are disabled by default, so we instead look to see
>>> +	 * if firmware has *enabled* them, and set them if so.
>>> +	 */
>>> +	if (result->character & H_CPU_CHAR_SPEC_BAR_ORI31)
>>> +		security_ftr_set(SEC_FTR_SPEC_BAR_ORI31);
>>> +
>>> +	if (result->character & H_CPU_CHAR_BCCTRL_SERIALISED)
>>> +		security_ftr_set(SEC_FTR_BCCTRL_SERIALISED);
>>> +
>>> +	if (result->character & H_CPU_CHAR_L1D_FLUSH_ORI30)
>>> +		security_ftr_set(SEC_FTR_L1D_FLUSH_ORI30);
>>> +
>>> +	if (result->character & H_CPU_CHAR_L1D_FLUSH_TRIG2)
>>> +		security_ftr_set(SEC_FTR_L1D_FLUSH_TRIG2);
>>> +
>>> +	if (result->character & H_CPU_CHAR_L1D_THREAD_PRIV)
>>> +		security_ftr_set(SEC_FTR_L1D_THREAD_PRIV);
>>> +
>>> +	if (result->character & H_CPU_CHAR_COUNT_CACHE_DISABLED)
>>> +		security_ftr_set(SEC_FTR_COUNT_CACHE_DISABLED);
>>> +
>>> +	if (result->character & H_CPU_CHAR_BCCTR_FLUSH_ASSIST)
>>> +		security_ftr_set(SEC_FTR_BCCTR_FLUSH_ASSIST);
>>> +
>>> +	if (result->behaviour & H_CPU_BEHAV_FLUSH_COUNT_CACHE)
>>> +		security_ftr_set(SEC_FTR_FLUSH_COUNT_CACHE);
>>> +
>>> +	/*
>>> +	 * The features below are enabled by default, so we instead look to see
>>> +	 * if firmware has *disabled* them, and clear them if so.
>>> +	 */
>>> +	if (!(result->behaviour & H_CPU_BEHAV_FAVOUR_SECURITY))
>>> +		security_ftr_clear(SEC_FTR_FAVOUR_SECURITY);
>>> +
>>> +	if (!(result->behaviour & H_CPU_BEHAV_L1D_FLUSH_PR))
>>> +		security_ftr_clear(SEC_FTR_L1D_FLUSH_PR);
>>> +
>>> +	if (!(result->behaviour & H_CPU_BEHAV_BNDS_CHK_SPEC_BAR))
>>> +		security_ftr_clear(SEC_FTR_BNDS_CHK_SPEC_BAR);
>>> +}
>>> +
>>> +void pseries_setup_rfi_flush(void)
>>>  {
>>>  	struct h_cpu_char_result result;
>>>  	enum l1d_flush_type types;
>>>  	bool enable;
>>>  	long rc;
>>>  
>>> -	/* Enable by default */
>>> -	enable = true;
>>> +	/*
>>> +	 * Set features to the defaults assumed by init_cpu_char_feature_flags()
>>> +	 * so it can set/clear again any features that might have changed after
>>> +	 * migration, and in case the hypercall fails and it is not even called.
>>> +	 */
>>> +	powerpc_security_features = SEC_FTR_DEFAULT;
>>>  
>>>  	rc = plpar_get_cpu_characteristics(&result);
>>> -	if (rc == H_SUCCESS) {
>>> -		types = L1D_FLUSH_NONE;
>>> +	if (rc == H_SUCCESS)
>>> +		init_cpu_char_feature_flags(&result);
>>>  
>>> -		if (result.character & H_CPU_CHAR_L1D_FLUSH_TRIG2)
>>> -			types |= L1D_FLUSH_MTTRIG;
>>> -		if (result.character & H_CPU_CHAR_L1D_FLUSH_ORI30)
>>> -			types |= L1D_FLUSH_ORI;
>>> +	/*
>>> +	 * We're the guest so this doesn't apply to us, clear it to simplify
>>> +	 * handling of it elsewhere.
>>> +	 */
>>> +	security_ftr_clear(SEC_FTR_L1D_FLUSH_HV);
>>>  
>>> -		/* Use fallback if nothing set in hcall */
>>> -		if (types == L1D_FLUSH_NONE)
>>> -			types = L1D_FLUSH_FALLBACK;
>>> +	types = L1D_FLUSH_FALLBACK;
>>>  
>>> -		if (!(result.behaviour & H_CPU_BEHAV_L1D_FLUSH_PR))
>>> -			enable = false;
>>> -	} else {
>>> -		/* Default to fallback if case hcall is not available */
>>> -		types = L1D_FLUSH_FALLBACK;
>>> -	}
>>> +	if (security_ftr_enabled(SEC_FTR_L1D_FLUSH_TRIG2))
>>> +		types |= L1D_FLUSH_MTTRIG;
>>> +
>>> +	if (security_ftr_enabled(SEC_FTR_L1D_FLUSH_ORI30))
>>> +		types |= L1D_FLUSH_ORI;
>>> +
>>> +	enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) && \
>>> +		 security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR);
>>>  
>>>  	setup_rfi_flush(types, enable);
>>> +	setup_count_cache_flush();
>>>  }
>>>  
>>>  static void __init pSeries_setup_arch(void)
>>> @@ -549,6 +600,7 @@ static void __init pSeries_setup_arch(void)
>>>  	fwnmi_init();
>>>  
>>>  	pseries_setup_rfi_flush();
>>> +	setup_stf_barrier();
>>>  
>>>  	/* By default, only probe PCI (can be overridden by rtas_pci) */
>>>  	pci_add_flags(PCI_PROBE_ONLY);
>>> diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
>>> index 786bf01691c9..83619ebede93 100644
>>> --- a/arch/powerpc/xmon/xmon.c
>>> +++ b/arch/powerpc/xmon/xmon.c
>>> @@ -2144,6 +2144,8 @@ static void dump_one_paca(int cpu)
>>>  	DUMP(p, slb_cache_ptr, "x");
>>>  	for (i = 0; i < SLB_CACHE_ENTRIES; i++)
>>>  		printf(" slb_cache[%d]:        = 0x%016lx\n", i, p->slb_cache[i]);
>>> +
>>> +	DUMP(p, rfi_flush_fallback_area, "px");
>>>  #endif
>>>  	DUMP(p, dscr_default, "llx");
>>>  #ifdef CONFIG_PPC_BOOK3E
>>> -- 
>>> 2.20.1
>>>
>>> -----BEGIN PGP SIGNATURE-----
>>>
>>> iQIcBAEBAgAGBQJcvHWhAAoJEFHr6jzI4aWA6nsP/0YskmAfLovcUmERQ7+bIjq6
>>> IcS1T466dvy6MlqeBXU4x8pVgInWeHKEC9XJdkM1lOeib/SLW7Hbz4kgJeOGwFGY
>>> lOTaexrxvsBqPm7f6GC0zbl9obEIIIIUs+TielFQANBgqm+q8Wio+XXPP9bpKeKY
>>> agSpQ3nwL/PYixznbNmN/lP9py5p89LQ0IBcR7dDBGGWJtD/AXeZ9hslsZxPbPtI
>>> nZJ0vdnjuoB2z+hCxfKWlYfLwH0VfoTpqP5x3ALCkvbBr67e8bf6EK8+trnvhyQ8
>>> iLY4bp1pm2epAI0/3NfyEiDMsGjVJ6IFlkyhDkHJgJNu0BGcGOSX2GpyU3juviAK
>>> c95FtBft/i8AwigOMCivg2mN5edYjsSiPoEItwT5KWqgByJsdr5i5mYVx8cUjMOz
>>> iAxLZCdg+UHZYuCBCAO2ZI1G9bVXI1Pa3btMspiCOOOsYGjXGf0oFfKQ+7957hUO
>>> ftYYJoGHlMHiHR1OPas6T3lk6YKF9uvfIDTE3OKw2obHbbRz3u82xoWMRGW503MN
>>> 7WpkpAP7oZ9RgqIWFVhatWy5f+7GFL0akEi4o2tsZHhYlPau7YWo+nToTd87itwt
>>> GBaWJipzge4s13VkhAE+jWFO35Fvwi8uNZ7UgpuKMBECEjkGbtzBTq2MjSF5G8wc
>>> yPEod5jby/Iqb7DkGPVG
>>> =6DnF
>>> -----END PGP SIGNATURE-----
>>>



end of thread, other threads:[~2019-04-29 15:54 UTC | newest]

Thread overview: 180+ messages
2019-04-21 14:19 [PATCH stable v4.4 00/52] powerpc spectre backports for 4.4 Michael Ellerman
2019-04-21 14:19 ` Michael Ellerman
2019-04-21 14:19 ` [PATCH stable v4.4 01/52] powerpc/xmon: Add RFI flush related fields to paca dump Michael Ellerman
2019-04-21 14:19   ` Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc/xmon: Add RFI flush related fields to paca dump" has been added to the 4.4-stable tree gregkh
2019-04-21 14:19 ` [PATCH stable v4.4 02/52] powerpc/64s: Improve RFI L1-D cache flush fallback Michael Ellerman
2019-04-21 14:19   ` Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc/64s: Improve RFI L1-D cache flush fallback" has been added to the 4.4-stable tree gregkh
2019-04-21 14:19 ` [PATCH stable v4.4 03/52] powerpc/pseries: Support firmware disable of RFI flush Michael Ellerman
2019-04-21 14:19   ` Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc/pseries: Support firmware disable of RFI flush" has been added to the 4.4-stable tree gregkh
2019-04-21 14:19 ` [PATCH stable v4.4 04/52] powerpc/powernv: Support firmware disable of RFI flush Michael Ellerman
2019-04-21 14:19   ` Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc/powernv: Support firmware disable of RFI flush" has been added to the 4.4-stable tree gregkh
2019-04-21 14:19 ` [PATCH stable v4.4 05/52] powerpc/rfi-flush: Move the logic to avoid a redo into the debugfs code Michael Ellerman
2019-04-21 14:19   ` Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc/rfi-flush: Move the logic to avoid a redo into the debugfs code" has been added to the 4.4-stable tree gregkh
2019-04-21 14:19 ` [PATCH stable v4.4 06/52] powerpc/rfi-flush: Make it possible to call setup_rfi_flush() again Michael Ellerman
2019-04-21 14:19   ` Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc/rfi-flush: Make it possible to call setup_rfi_flush() again" has been added to the 4.4-stable tree gregkh
2019-04-21 14:19 ` [PATCH stable v4.4 07/52] powerpc/rfi-flush: Always enable fallback flush on pseries Michael Ellerman
2019-04-21 14:19   ` Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc/rfi-flush: Always enable fallback flush on pseries" has been added to the 4.4-stable tree gregkh
2019-04-21 14:19 ` [PATCH stable v4.4 08/52] powerpc/rfi-flush: Differentiate enabled and patched flush types Michael Ellerman
2019-04-21 14:19   ` Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc/rfi-flush: Differentiate enabled and patched flush types" has been added to the 4.4-stable tree gregkh
2019-04-21 14:19 ` [PATCH stable v4.4 09/52] powerpc/pseries: Add new H_GET_CPU_CHARACTERISTICS flags Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc/pseries: Add new H_GET_CPU_CHARACTERISTICS flags" has been added to the 4.4-stable tree gregkh
2019-04-21 14:19 ` [PATCH stable v4.4 10/52] powerpc/rfi-flush: Call setup_rfi_flush() after LPM migration Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc/rfi-flush: Call setup_rfi_flush() after LPM migration" has been added to the 4.4-stable tree gregkh
2019-04-21 14:19 ` [PATCH stable v4.4 11/52] powerpc: Add security feature flags for Spectre/Meltdown Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc: Add security feature flags for Spectre/Meltdown" has been added to the 4.4-stable tree gregkh
2019-04-21 14:19 ` [PATCH stable v4.4 12/52] powerpc/pseries: Set or clear security feature flags Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc/pseries: Set or clear security feature flags" has been added to the 4.4-stable tree gregkh
2019-04-21 14:19 ` [PATCH stable v4.4 13/52] powerpc/powernv: Set or clear security feature flags Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc/powernv: Set or clear security feature flags" has been added to the 4.4-stable tree gregkh
2019-04-21 14:19 ` [PATCH stable v4.4 14/52] powerpc/64s: Move cpu_show_meltdown() Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc/64s: Move cpu_show_meltdown()" has been added to the 4.4-stable tree gregkh
2019-04-21 14:20 ` [PATCH stable v4.4 15/52] powerpc/64s: Enhance the information in cpu_show_meltdown() Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc/64s: Enhance the information in cpu_show_meltdown()" has been added to the 4.4-stable tree gregkh
2019-04-21 14:20 ` [PATCH stable v4.4 16/52] powerpc/powernv: Use the security flags in pnv_setup_rfi_flush() Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc/powernv: Use the security flags in pnv_setup_rfi_flush()" has been added to the 4.4-stable tree gregkh
2019-04-21 14:20 ` [PATCH stable v4.4 17/52] powerpc/pseries: Use the security flags in pseries_setup_rfi_flush() Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc/pseries: Use the security flags in pseries_setup_rfi_flush()" has been added to the 4.4-stable tree gregkh
2019-04-21 14:20 ` [PATCH stable v4.4 18/52] powerpc/64s: Wire up cpu_show_spectre_v1() Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc/64s: Wire up cpu_show_spectre_v1()" has been added to the 4.4-stable tree gregkh
2019-04-21 14:20 ` [PATCH stable v4.4 19/52] powerpc/64s: Wire up cpu_show_spectre_v2() Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc/64s: Wire up cpu_show_spectre_v2()" has been added to the 4.4-stable tree gregkh
2019-04-21 14:20 ` [PATCH stable v4.4 20/52] powerpc/pseries: Fix clearing of security feature flags Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc/pseries: Fix clearing of security feature flags" has been added to the 4.4-stable tree gregkh
2019-04-21 14:20 ` [PATCH stable v4.4 21/52] powerpc: Move default security feature flags Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc: Move default security feature flags" has been added to the 4.4-stable tree gregkh
2019-04-21 14:20 ` [PATCH stable v4.4 22/52] powerpc/pseries: Restore default security feature flags on setup Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc/pseries: Restore default security feature flags on setup" has been added to the 4.4-stable tree gregkh
2019-04-21 14:20 ` [PATCH stable v4.4 23/52] powerpc/64s: Fix section mismatch warnings from setup_rfi_flush() Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc/64s: Fix section mismatch warnings from setup_rfi_flush()" has been added to the 4.4-stable tree gregkh
2019-04-21 14:20 ` [PATCH stable v4.4 24/52] powerpc/64s: Add support for a store forwarding barrier at kernel entry/exit Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc/64s: Add support for a store forwarding barrier at kernel entry/exit" has been added to the 4.4-stable tree gregkh
2019-04-21 14:20 ` [PATCH stable v4.4 25/52] powerpc/64s: Add barrier_nospec Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc/64s: Add barrier_nospec" has been added to the 4.4-stable tree gregkh
2019-04-21 14:20 ` [PATCH stable v4.4 26/52] powerpc/64s: Add support for ori barrier_nospec patching Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc/64s: Add support for ori barrier_nospec patching" has been added to the 4.4-stable tree gregkh
2019-04-21 14:20 ` [PATCH stable v4.4 27/52] powerpc/64s: Patch barrier_nospec in modules Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc/64s: Patch barrier_nospec in modules" has been added to the 4.4-stable tree gregkh
2019-04-21 14:20 ` [PATCH stable v4.4 28/52] powerpc/64s: Enable barrier_nospec based on firmware settings Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc/64s: Enable barrier_nospec based on firmware settings" has been added to the 4.4-stable tree gregkh
2019-04-21 14:20 ` [PATCH stable v4.4 29/52] powerpc/64: Use barrier_nospec in syscall entry Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc/64: Use barrier_nospec in syscall entry" has been added to the 4.4-stable tree gregkh
2019-04-21 14:20 ` [PATCH stable v4.4 30/52] powerpc: Use barrier_nospec in copy_from_user() Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc: Use barrier_nospec in copy_from_user()" has been added to the 4.4-stable tree gregkh
2019-04-21 14:20 ` [PATCH stable v4.4 31/52] powerpc/64s: Enhance the information in cpu_show_spectre_v1() Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc/64s: Enhance the information in cpu_show_spectre_v1()" has been added to the 4.4-stable tree gregkh
2019-04-21 14:20 ` [PATCH stable v4.4 32/52] powerpc64s: Show ori31 availability in spectre_v1 sysfs file not v2 Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc64s: Show ori31 availability in spectre_v1 sysfs file not v2" has been added to the 4.4-stable tree gregkh
2019-04-21 14:20 ` [PATCH stable v4.4 33/52] powerpc/64: Disable the speculation barrier from the command line Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc/64: Disable the speculation barrier from the command line" has been added to the 4.4-stable tree gregkh
2019-04-21 14:20 ` [PATCH stable v4.4 34/52] powerpc/64: Make stf barrier PPC_BOOK3S_64 specific Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc/64: Make stf barrier PPC_BOOK3S_64 specific." has been added to the 4.4-stable tree gregkh
2019-04-21 14:20 ` [PATCH stable v4.4 35/52] powerpc/64: Add CONFIG_PPC_BARRIER_NOSPEC Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc/64: Add CONFIG_PPC_BARRIER_NOSPEC" has been added to the 4.4-stable tree gregkh
2019-04-21 14:20 ` [PATCH stable v4.4 36/52] powerpc/64: Call setup_barrier_nospec() from setup_arch() Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc/64: Call setup_barrier_nospec() from setup_arch()" has been added to the 4.4-stable tree gregkh
2019-04-21 14:20 ` [PATCH stable v4.4 37/52] powerpc/64: Make meltdown reporting Book3S 64 specific Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc/64: Make meltdown reporting Book3S 64 specific" has been added to the 4.4-stable tree gregkh
2019-04-21 14:20 ` [PATCH stable v4.4 38/52] powerpc/fsl: Add barrier_nospec implementation for NXP PowerPC Book3E Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc/fsl: Add barrier_nospec implementation for NXP PowerPC Book3E" has been added to the 4.4-stable tree gregkh
2019-04-21 14:20 ` [PATCH stable v4.4 39/52] powerpc/asm: Add a patch_site macro & helpers for patching instructions Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc/asm: Add a patch_site macro & helpers for patching instructions" has been added to the 4.4-stable tree gregkh
2019-04-21 14:20 ` [PATCH stable v4.4 40/52] powerpc/64s: Add new security feature flags for count cache flush Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc/64s: Add new security feature flags for count cache flush" has been added to the 4.4-stable tree gregkh
2019-04-21 14:20 ` [PATCH stable v4.4 41/52] powerpc/64s: Add support for software count cache flush Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc/64s: Add support for software count cache flush" has been added to the 4.4-stable tree gregkh
2019-04-21 14:20 ` [PATCH stable v4.4 42/52] powerpc/pseries: Query hypervisor for count cache flush settings Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc/pseries: Query hypervisor for count cache flush settings" has been added to the 4.4-stable tree gregkh
2019-04-21 14:20 ` [PATCH stable v4.4 43/52] powerpc/powernv: Query firmware for count cache flush settings Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc/powernv: Query firmware for count cache flush settings" has been added to the 4.4-stable tree gregkh
2019-04-21 14:20 ` [PATCH stable v4.4 44/52] powerpc: Avoid code patching freed init sections Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc: Avoid code patching freed init sections" has been added to the 4.4-stable tree gregkh
2019-04-21 14:20 ` [PATCH stable v4.4 45/52] powerpc/fsl: Add infrastructure to fixup branch predictor flush Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc/fsl: Add infrastructure to fixup branch predictor flush" has been added to the 4.4-stable tree gregkh
2019-04-21 14:20 ` [PATCH stable v4.4 46/52] powerpc/fsl: Add macro to flush the branch predictor Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc/fsl: Add macro to flush the branch predictor" has been added to the 4.4-stable tree gregkh
2019-04-21 14:20 ` [PATCH stable v4.4 47/52] powerpc/fsl: Fix spectre_v2 mitigations reporting Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc/fsl: Fix spectre_v2 mitigations reporting" has been added to the 4.4-stable tree gregkh
2019-04-21 14:20 ` [PATCH stable v4.4 48/52] powerpc/fsl: Add nospectre_v2 command line argument Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc/fsl: Add nospectre_v2 command line argument" has been added to the 4.4-stable tree gregkh
2019-04-21 14:20 ` [PATCH stable v4.4 49/52] powerpc/fsl: Flush the branch predictor at each kernel entry (64bit) Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc/fsl: Flush the branch predictor at each kernel entry (64bit)" has been added to the 4.4-stable tree gregkh
2019-04-21 14:20 ` [PATCH stable v4.4 50/52] powerpc/fsl: Update Spectre v2 reporting Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc/fsl: Update Spectre v2 reporting" has been added to the 4.4-stable tree gregkh
2019-04-21 14:20 ` [PATCH stable v4.4 51/52] powerpc/security: Fix spectre_v2 reporting Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc/security: Fix spectre_v2 reporting" has been added to the 4.4-stable tree gregkh
2019-04-21 14:20 ` [PATCH stable v4.4 52/52] powerpc/fsl: Fix the flush of branch predictor Michael Ellerman
2019-04-29  9:51   ` Patch "powerpc/fsl: Fix the flush of branch predictor." has been added to the 4.4-stable tree gregkh
2019-04-21 16:34 ` [PATCH stable v4.4 00/52] powerpc spectre backports for 4.4 Greg KH
2019-04-22 15:27   ` Diana Madalina Craciun
2019-04-24 13:48     ` Greg KH
2019-04-28  6:17   ` Michael Ellerman
2019-04-29  6:26     ` Michael Ellerman
2019-04-29  7:03       ` Greg KH
2019-04-29 11:56         ` Michael Ellerman
2019-04-22 15:32 ` Diana Madalina Craciun
2019-04-28  6:20   ` Michael Ellerman
2019-04-29 15:52     ` Diana Madalina Craciun
2019-04-29  9:43 ` Greg KH