* [PATCH v4.9.y 00/27] arm64 meltdown patches
From: Mark Rutland @ 2018-04-03 11:08 UTC
  To: stable; +Cc: mark.brown, ard.biesheuvel, marc.zyngier, will.deacon

Hi Greg,

These patches backport KPTI to v4.9.y (based on v4.9.92), providing protection
against meltdown on arm64 platforms.

I picked up Alex Shi's backport for review and testing, and as I found a couple
of issues to fix up, I'm sending this with my Signed-off-by in the chain, with
those fixups applied and noted.

To the best of my understanding the code is correct in the context of
the v4.9.y kernel, and I've tested the series on the arm64 hardware
available to me; i.e. if this didn't have my Signed-off-by, it would
have my Reviewed-by and Tested-by tags.

Are you happy to pick these up for v4.9.93?

Thanks,
Mark.

AKASHI Takahiro (1):
  module: extend 'rodata=off' boot cmdline parameter to module mappings

Jayachandran C (2):
  arm64: cputype: Add MIDR values for Cavium ThunderX2 CPUs
  arm64: Turn on KPTI only on CPUs that need it

Marc Zyngier (2):
  arm64: Allow checking of a CPU-local erratum
  arm64: Force KPTI to be disabled on Cavium ThunderX

Mark Rutland (1):
  arm64: factor out entry stack manipulation

Suzuki K Poulose (1):
  arm64: capabilities: Handle duplicate entries for a capability

Will Deacon (20):
  arm64: mm: Use non-global mappings for kernel space
  arm64: mm: Move ASID from TTBR0 to TTBR1
  arm64: mm: Allocate ASIDs in pairs
  arm64: mm: Add arm64_kernel_unmapped_at_el0 helper
  arm64: mm: Invalidate both kernel and user ASIDs when performing TLBI
  arm64: entry: Add exception trampoline page for exceptions from EL0
  arm64: mm: Map entry trampoline into trampoline and kernel page tables
  arm64: entry: Explicitly pass exception level to kernel_ventry macro
  arm64: entry: Hook up entry trampoline to exception vectors
  arm64: tls: Avoid unconditional zeroing of tpidrro_el0 for native
    tasks
  arm64: entry: Add fake CPU feature for unmapping the kernel at EL0
  arm64: kaslr: Put kernel vectors address in separate data page
  arm64: use RET instruction for exiting the trampoline
  arm64: Kconfig: Add CONFIG_UNMAP_KERNEL_AT_EL0
  arm64: Kconfig: Reword UNMAP_KERNEL_AT_EL0 kconfig entry
  arm64: Take into account ID_AA64PFR0_EL1.CSV3
  arm64: kpti: Make use of nG dependent on
    arm64_kernel_unmapped_at_el0()
  arm64: kpti: Add ->enable callback to remap swapper using nG mappings
  arm64: entry: Reword comment about post_ttbr_update_workaround
  arm64: idmap: Use "awx" flags for .idmap.text .pushsection directives

 arch/arm64/Kconfig                     |  12 ++
 arch/arm64/include/asm/assembler.h     |   3 +
 arch/arm64/include/asm/cpucaps.h       |   3 +-
 arch/arm64/include/asm/cputype.h       |   3 +
 arch/arm64/include/asm/fixmap.h        |   6 +
 arch/arm64/include/asm/mmu.h           |  11 ++
 arch/arm64/include/asm/mmu_context.h   |   7 ++
 arch/arm64/include/asm/pgtable-hwdef.h |   1 +
 arch/arm64/include/asm/pgtable-prot.h  |  35 +++---
 arch/arm64/include/asm/pgtable.h       |   1 +
 arch/arm64/include/asm/proc-fns.h      |   6 -
 arch/arm64/include/asm/sysreg.h        |   1 +
 arch/arm64/include/asm/tlbflush.h      |  16 ++-
 arch/arm64/kernel/asm-offsets.c        |   6 +-
 arch/arm64/kernel/cpu-reset.S          |   2 +-
 arch/arm64/kernel/cpufeature.c         | 135 ++++++++++++++++++---
 arch/arm64/kernel/entry.S              | 188 ++++++++++++++++++++++++----
 arch/arm64/kernel/head.S               |   2 +-
 arch/arm64/kernel/process.c            |  12 +-
 arch/arm64/kernel/sleep.S              |   2 +-
 arch/arm64/kernel/vmlinux.lds.S        |  22 +++-
 arch/arm64/mm/context.c                |  25 ++--
 arch/arm64/mm/mmu.c                    |  31 +++++
 arch/arm64/mm/proc.S                   | 216 +++++++++++++++++++++++++++++++--
 include/linux/init.h                   |   3 +
 init/main.c                            |   7 +-
 kernel/module.c                        |  20 ++-
 27 files changed, 675 insertions(+), 101 deletions(-)

-- 
2.11.0

* [PATCH v4.9.y 01/27] arm64: mm: Use non-global mappings for kernel space
From: Mark Rutland @ 2018-04-03 11:08 UTC
  To: stable; +Cc: mark.brown, ard.biesheuvel, marc.zyngier, will.deacon

From: Will Deacon <will.deacon@arm.com>

commit e046eb0c9bf2 upstream.

In preparation for unmapping the kernel whilst running in userspace,
make the kernel mappings non-global so we can avoid expensive TLB
invalidation on kernel exit to userspace.
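
(In other words: non-global TLB entries are tagged with an ASID, so once
kernel and user run under separate ASIDs, the kernel's TLB entries are
simply unreachable from EL0 and no explicit invalidation is needed on
the exception return path.)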

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
---
 arch/arm64/include/asm/kernel-pgtable.h | 12 ++++++++++--
 arch/arm64/include/asm/pgtable-prot.h   | 21 +++++++++++++++------
 2 files changed, 25 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h
index 7e51d1b57c0c..e4ddac983640 100644
--- a/arch/arm64/include/asm/kernel-pgtable.h
+++ b/arch/arm64/include/asm/kernel-pgtable.h
@@ -71,8 +71,16 @@
 /*
  * Initial memory map attributes.
  */
-#define SWAPPER_PTE_FLAGS	(PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
-#define SWAPPER_PMD_FLAGS	(PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
+#define _SWAPPER_PTE_FLAGS	(PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
+#define _SWAPPER_PMD_FLAGS	(PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
+
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+#define SWAPPER_PTE_FLAGS	(_SWAPPER_PTE_FLAGS | PTE_NG)
+#define SWAPPER_PMD_FLAGS	(_SWAPPER_PMD_FLAGS | PMD_SECT_NG)
+#else
+#define SWAPPER_PTE_FLAGS	_SWAPPER_PTE_FLAGS
+#define SWAPPER_PMD_FLAGS	_SWAPPER_PMD_FLAGS
+#endif
 
 #if ARM64_SWAPPER_USES_SECTION_MAPS
 #define SWAPPER_MM_MMUFLAGS	(PMD_ATTRINDX(MT_NORMAL) | SWAPPER_PMD_FLAGS)
diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
index 2142c7726e76..84b5283d2e7f 100644
--- a/arch/arm64/include/asm/pgtable-prot.h
+++ b/arch/arm64/include/asm/pgtable-prot.h
@@ -34,8 +34,16 @@
 
 #include <asm/pgtable-types.h>
 
-#define PROT_DEFAULT		(PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
-#define PROT_SECT_DEFAULT	(PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
+#define _PROT_DEFAULT		(PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
+#define _PROT_SECT_DEFAULT	(PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
+
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+#define PROT_DEFAULT		(_PROT_DEFAULT | PTE_NG)
+#define PROT_SECT_DEFAULT	(_PROT_SECT_DEFAULT | PMD_SECT_NG)
+#else
+#define PROT_DEFAULT		_PROT_DEFAULT
+#define PROT_SECT_DEFAULT	_PROT_SECT_DEFAULT
+#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
 
 #define PROT_DEVICE_nGnRnE	(PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_WRITE | PTE_ATTRINDX(MT_DEVICE_nGnRnE))
 #define PROT_DEVICE_nGnRE	(PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_WRITE | PTE_ATTRINDX(MT_DEVICE_nGnRE))
@@ -48,6 +56,7 @@
 #define PROT_SECT_NORMAL_EXEC	(PROT_SECT_DEFAULT | PMD_SECT_UXN | PMD_ATTRINDX(MT_NORMAL))
 
 #define _PAGE_DEFAULT		(PROT_DEFAULT | PTE_ATTRINDX(MT_NORMAL))
+#define _HYP_PAGE_DEFAULT	(_PAGE_DEFAULT & ~PTE_NG)
 
 #define PAGE_KERNEL		__pgprot(_PAGE_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_WRITE)
 #define PAGE_KERNEL_RO		__pgprot(_PAGE_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_RDONLY)
@@ -55,15 +64,15 @@
 #define PAGE_KERNEL_EXEC	__pgprot(_PAGE_DEFAULT | PTE_UXN | PTE_DIRTY | PTE_WRITE)
 #define PAGE_KERNEL_EXEC_CONT	__pgprot(_PAGE_DEFAULT | PTE_UXN | PTE_DIRTY | PTE_WRITE | PTE_CONT)
 
-#define PAGE_HYP		__pgprot(_PAGE_DEFAULT | PTE_HYP | PTE_HYP_XN)
-#define PAGE_HYP_EXEC		__pgprot(_PAGE_DEFAULT | PTE_HYP | PTE_RDONLY)
-#define PAGE_HYP_RO		__pgprot(_PAGE_DEFAULT | PTE_HYP | PTE_RDONLY | PTE_HYP_XN)
+#define PAGE_HYP		__pgprot(_HYP_PAGE_DEFAULT | PTE_HYP | PTE_HYP_XN)
+#define PAGE_HYP_EXEC		__pgprot(_HYP_PAGE_DEFAULT | PTE_HYP | PTE_RDONLY)
+#define PAGE_HYP_RO		__pgprot(_HYP_PAGE_DEFAULT | PTE_HYP | PTE_RDONLY | PTE_HYP_XN)
 #define PAGE_HYP_DEVICE		__pgprot(PROT_DEVICE_nGnRE | PTE_HYP)
 
 #define PAGE_S2			__pgprot(PROT_DEFAULT | PTE_S2_MEMATTR(MT_S2_NORMAL) | PTE_S2_RDONLY)
 #define PAGE_S2_DEVICE		__pgprot(PROT_DEFAULT | PTE_S2_MEMATTR(MT_S2_DEVICE_nGnRE) | PTE_S2_RDONLY | PTE_UXN)
 
-#define PAGE_NONE		__pgprot(((_PAGE_DEFAULT) & ~PTE_VALID) | PTE_PROT_NONE | PTE_PXN | PTE_UXN)
+#define PAGE_NONE		__pgprot(((_PAGE_DEFAULT) & ~PTE_VALID) | PTE_PROT_NONE | PTE_NG | PTE_PXN | PTE_UXN)
 #define PAGE_SHARED		__pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_PXN | PTE_UXN | PTE_WRITE)
 #define PAGE_SHARED_EXEC	__pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_PXN | PTE_WRITE)
 #define PAGE_COPY		__pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_PXN | PTE_UXN)
-- 
2.11.0

* [PATCH v4.9.y 02/27] arm64: mm: Move ASID from TTBR0 to TTBR1
From: Mark Rutland @ 2018-04-03 11:08 UTC
  To: stable; +Cc: mark.brown, ard.biesheuvel, marc.zyngier, will.deacon

From: Will Deacon <will.deacon@arm.com>

commit 7655abb95386 upstream.

In preparation for mapping kernelspace and userspace with different
ASIDs, move the ASID to TTBR1 and update switch_mm to context-switch
TTBR0 via an invalid mapping (the zero page).
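
A rough C-level sketch of the resulting switch sequence (illustrative
only; the helper names here are made up, the real change is the
cpu_do_switch_mm assembly below):

	write_ttbr1_asid(new_asid);	/* TCR.A1 set: ASID lives in TTBR1 */
	isb();				/* ASID visible before the base switch */
	write_ttbr0_base(new_pgd_phys);	/* user tables; TTBR0.ASID now unused */
	isb();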

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
---
 arch/arm64/include/asm/mmu_context.h   | 7 +++++++
 arch/arm64/include/asm/pgtable-hwdef.h | 1 +
 arch/arm64/include/asm/proc-fns.h      | 6 ------
 arch/arm64/mm/proc.S                   | 9 ++++++---
 4 files changed, 14 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index a50185375f09..b96c4799f881 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -50,6 +50,13 @@ static inline void cpu_set_reserved_ttbr0(void)
 	isb();
 }
 
+static inline void cpu_switch_mm(pgd_t *pgd, struct mm_struct *mm)
+{
+	BUG_ON(pgd == swapper_pg_dir);
+	cpu_set_reserved_ttbr0();
+	cpu_do_switch_mm(virt_to_phys(pgd),mm);
+}
+
 /*
  * TCR.T0SZ value to use when the ID map is active. Usually equals
  * TCR_T0SZ(VA_BITS), unless system RAM is positioned very high in
diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
index eb0c2bd90de9..8df4cb6ac6f7 100644
--- a/arch/arm64/include/asm/pgtable-hwdef.h
+++ b/arch/arm64/include/asm/pgtable-hwdef.h
@@ -272,6 +272,7 @@
 #define TCR_TG1_4K		(UL(2) << TCR_TG1_SHIFT)
 #define TCR_TG1_64K		(UL(3) << TCR_TG1_SHIFT)
 
+#define TCR_A1			(UL(1) << 22)
 #define TCR_ASID16		(UL(1) << 36)
 #define TCR_TBI0		(UL(1) << 37)
 #define TCR_HA			(UL(1) << 39)
diff --git a/arch/arm64/include/asm/proc-fns.h b/arch/arm64/include/asm/proc-fns.h
index 14ad6e4e87d1..16cef2e8449e 100644
--- a/arch/arm64/include/asm/proc-fns.h
+++ b/arch/arm64/include/asm/proc-fns.h
@@ -35,12 +35,6 @@ extern u64 cpu_do_resume(phys_addr_t ptr, u64 idmap_ttbr);
 
 #include <asm/memory.h>
 
-#define cpu_switch_mm(pgd,mm)				\
-do {							\
-	BUG_ON(pgd == swapper_pg_dir);			\
-	cpu_do_switch_mm(virt_to_phys(pgd),mm);		\
-} while (0)
-
 #endif /* __ASSEMBLY__ */
 #endif /* __KERNEL__ */
 #endif /* __ASM_PROCFNS_H */
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 352c73b6a59e..3378f3e21224 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -132,9 +132,12 @@ ENDPROC(cpu_do_resume)
  *	- pgd_phys - physical address of new TTB
  */
 ENTRY(cpu_do_switch_mm)
+	mrs	x2, ttbr1_el1
 	mmid	x1, x1				// get mm->context.id
-	bfi	x0, x1, #48, #16		// set the ASID
-	msr	ttbr0_el1, x0			// set TTBR0
+	bfi	x2, x1, #48, #16		// set the ASID
+	msr	ttbr1_el1, x2			// in TTBR1 (since TCR.A1 is set)
+	isb
+	msr	ttbr0_el1, x0			// now update TTBR0
 	isb
 alternative_if ARM64_WORKAROUND_CAVIUM_27456
 	ic	iallu
@@ -222,7 +225,7 @@ ENTRY(__cpu_setup)
 	 * both user and kernel.
 	 */
 	ldr	x10, =TCR_TxSZ(VA_BITS) | TCR_CACHE_FLAGS | TCR_SMP_FLAGS | \
-			TCR_TG_FLAGS | TCR_ASID16 | TCR_TBI0
+			TCR_TG_FLAGS | TCR_ASID16 | TCR_TBI0 | TCR_A1
 	tcr_set_idmap_t0sz	x10, x9
 
 	/*
-- 
2.11.0

* [PATCH v4.9.y 03/27] arm64: mm: Allocate ASIDs in pairs
From: Mark Rutland @ 2018-04-03 11:08 UTC
  To: stable; +Cc: mark.brown, ard.biesheuvel, marc.zyngier, will.deacon

From: Will Deacon <will.deacon@arm.com>

commit 0c8ea531b774 upstream.

In preparation for separate kernel/user ASIDs, allocate them in pairs
for each mm_struct. The bottom bit distinguishes the two: if it is set,
then the ASID will map only userspace.
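
Illustration only (helper names are made up, not part of the patch): the
allocator hands out even hardware ASIDs, and the odd, user-only twin is
the even value with its bottom bit set:

	static inline unsigned long kernel_asid(unsigned long idx)
	{
		return idx << 1;		/* even: used while in the kernel */
	}

	static inline unsigned long user_asid(unsigned long idx)
	{
		return (idx << 1) | 1;		/* odd: userspace only */
	}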

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
---
 arch/arm64/include/asm/mmu.h |  2 ++
 arch/arm64/mm/context.c      | 25 +++++++++++++++++--------
 2 files changed, 19 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 8d9fce037b2f..49924e56048e 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -16,6 +16,8 @@
 #ifndef __ASM_MMU_H
 #define __ASM_MMU_H
 
+#define USER_ASID_FLAG	(UL(1) << 48)
+
 typedef struct {
 	atomic64_t	id;
 	void		*vdso;
diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index efcf1f7ef1e4..f00f5eeb556f 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -39,7 +39,16 @@ static cpumask_t tlb_flush_pending;
 
 #define ASID_MASK		(~GENMASK(asid_bits - 1, 0))
 #define ASID_FIRST_VERSION	(1UL << asid_bits)
-#define NUM_USER_ASIDS		ASID_FIRST_VERSION
+
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+#define NUM_USER_ASIDS		(ASID_FIRST_VERSION >> 1)
+#define asid2idx(asid)		(((asid) & ~ASID_MASK) >> 1)
+#define idx2asid(idx)		(((idx) << 1) & ~ASID_MASK)
+#else
+#define NUM_USER_ASIDS		(ASID_FIRST_VERSION)
+#define asid2idx(asid)		((asid) & ~ASID_MASK)
+#define idx2asid(idx)		asid2idx(idx)
+#endif
 
 /* Get the ASIDBits supported by the current CPU */
 static u32 get_cpu_asid_bits(void)
@@ -104,7 +113,7 @@ static void flush_context(unsigned int cpu)
 		 */
 		if (asid == 0)
 			asid = per_cpu(reserved_asids, i);
-		__set_bit(asid & ~ASID_MASK, asid_map);
+		__set_bit(asid2idx(asid), asid_map);
 		per_cpu(reserved_asids, i) = asid;
 	}
 
@@ -159,16 +168,16 @@ static u64 new_context(struct mm_struct *mm, unsigned int cpu)
 		 * We had a valid ASID in a previous life, so try to re-use
 		 * it if possible.
 		 */
-		asid &= ~ASID_MASK;
-		if (!__test_and_set_bit(asid, asid_map))
+		if (!__test_and_set_bit(asid2idx(asid), asid_map))
 			return newasid;
 	}
 
 	/*
 	 * Allocate a free ASID. If we can't find one, take a note of the
-	 * currently active ASIDs and mark the TLBs as requiring flushes.
-	 * We always count from ASID #1, as we use ASID #0 when setting a
-	 * reserved TTBR0 for the init_mm.
+	 * currently active ASIDs and mark the TLBs as requiring flushes.  We
+	 * always count from ASID #2 (index 1), as we use ASID #0 when setting
+	 * a reserved TTBR0 for the init_mm and we allocate ASIDs in even/odd
+	 * pairs.
 	 */
 	asid = find_next_zero_bit(asid_map, NUM_USER_ASIDS, cur_idx);
 	if (asid != NUM_USER_ASIDS)
@@ -185,7 +194,7 @@ static u64 new_context(struct mm_struct *mm, unsigned int cpu)
 set_asid:
 	__set_bit(asid, asid_map);
 	cur_idx = asid;
-	return asid | generation;
+	return idx2asid(asid) | generation;
 }
 
 void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
-- 
2.11.0

* [PATCH v4.9.y 04/27] arm64: mm: Add arm64_kernel_unmapped_at_el0 helper
From: Mark Rutland @ 2018-04-03 11:09 UTC
  To: stable; +Cc: mark.brown, ard.biesheuvel, marc.zyngier, will.deacon

From: Will Deacon <will.deacon@arm.com>

commit fc0e1299da54 upstream.

In order for code such as TLB invalidation to operate efficiently when
the decision to map the kernel at EL0 is determined at runtime, this
patch introduces a helper function, arm64_kernel_unmapped_at_el0, to
determine whether or not the kernel is mapped whilst running in userspace.

Currently, this just reports the value of CONFIG_UNMAP_KERNEL_AT_EL0,
but will later be hooked up to a fake CPU capability using a static key.
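
Callers can then guard KPTI-only work with
if (arm64_kernel_unmapped_at_el0()) { ... }, and the IS_ENABLED() check
lets the compiler discard such code entirely when
CONFIG_UNMAP_KERNEL_AT_EL0 is disabled; the next patch uses exactly
this pattern for TLB invalidation.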

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
---
 arch/arm64/include/asm/mmu.h | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 49924e56048e..279e75b8a49e 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -18,6 +18,8 @@
 
 #define USER_ASID_FLAG	(UL(1) << 48)
 
+#ifndef __ASSEMBLY__
+
 typedef struct {
 	atomic64_t	id;
 	void		*vdso;
@@ -30,6 +32,11 @@ typedef struct {
  */
 #define ASID(mm)	((mm)->context.id.counter & 0xffff)
 
+static inline bool arm64_kernel_unmapped_at_el0(void)
+{
+	return IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0);
+}
+
 extern void paging_init(void);
 extern void bootmem_init(void);
 extern void __iomem *early_io_map(phys_addr_t phys, unsigned long virt);
@@ -39,4 +46,5 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
 			       pgprot_t prot, bool allow_block_mappings);
 extern void *fixmap_remap_fdt(phys_addr_t dt_phys);
 
+#endif	/* !__ASSEMBLY__ */
 #endif
-- 
2.11.0

* [PATCH v4.9.y 05/27] arm64: mm: Invalidate both kernel and user ASIDs when performing TLBI
From: Mark Rutland @ 2018-04-03 11:09 UTC
  To: stable; +Cc: mark.brown, ard.biesheuvel, marc.zyngier, will.deacon

From: Will Deacon <will.deacon@arm.com>

commit 9b0de864b5bc upstream.

Since an mm has both a kernel and a user ASID, we need to ensure that
broadcast TLB maintenance targets both address spaces so that things
like CoW continue to work with the uaccess primitives in the kernel.
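
Illustration only (not part of the patch): a TLBI-by-VA operand carries
the ASID in bits 63:48, so the odd, user-only ASID of a pair can be
targeted by ORing bit 48 into the operand built for the kernel ASID:

	#define USER_ASID_FLAG	(UL(1) << 48)

	/* made-up helper: retarget a TLBI operand at the paired user ASID */
	static inline unsigned long tlbi_user_operand(unsigned long op)
	{
		return op | USER_ASID_FLAG;
	}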

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
---
 arch/arm64/include/asm/tlbflush.h | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index deab52374119..ad6bd8b26ada 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -23,6 +23,7 @@
 
 #include <linux/sched.h>
 #include <asm/cputype.h>
+#include <asm/mmu.h>
 
 /*
  * Raw TLBI operations.
@@ -42,6 +43,11 @@
 
 #define __tlbi(op, ...)		__TLBI_N(op, ##__VA_ARGS__, 1, 0)
 
+#define __tlbi_user(op, arg) do {						\
+	if (arm64_kernel_unmapped_at_el0())					\
+		__tlbi(op, (arg) | USER_ASID_FLAG);				\
+} while (0)
+
 /*
  *	TLB Management
  *	==============
@@ -103,6 +109,7 @@ static inline void flush_tlb_mm(struct mm_struct *mm)
 
 	dsb(ishst);
 	__tlbi(aside1is, asid);
+	__tlbi_user(aside1is, asid);
 	dsb(ish);
 }
 
@@ -113,6 +120,7 @@ static inline void flush_tlb_page(struct vm_area_struct *vma,
 
 	dsb(ishst);
 	__tlbi(vale1is, addr);
+	__tlbi_user(vale1is, addr);
 	dsb(ish);
 }
 
@@ -139,10 +147,13 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
 
 	dsb(ishst);
 	for (addr = start; addr < end; addr += 1 << (PAGE_SHIFT - 12)) {
-		if (last_level)
+		if (last_level) {
 			__tlbi(vale1is, addr);
-		else
+			__tlbi_user(vale1is, addr);
+		} else {
 			__tlbi(vae1is, addr);
+			__tlbi_user(vae1is, addr);
+		}
 	}
 	dsb(ish);
 }
@@ -182,6 +193,7 @@ static inline void __flush_tlb_pgtable(struct mm_struct *mm,
 	unsigned long addr = uaddr >> 12 | (ASID(mm) << 48);
 
 	__tlbi(vae1is, addr);
+	__tlbi_user(vae1is, addr);
 	dsb(ish);
 }
 
-- 
2.11.0

* [PATCH v4.9.y 06/27] arm64: factor out entry stack manipulation
From: Mark Rutland @ 2018-04-03 11:09 UTC
  To: stable; +Cc: mark.brown, ard.biesheuvel, marc.zyngier, will.deacon

commit b11e5759bfac upstream.

In subsequent patches, we will detect stack overflow in our exception
entry code, by verifying the SP after it has been decremented to make
space for the exception regs.

This verification code is small, and we can minimize its impact by
placing it directly in the vectors. To avoid redundant modification of
the SP, we also need to move the initial decrement of the SP into the
vectors.

As a preparatory step, this patch introduces kernel_ventry, which
performs this decrement, and updates the entry code accordingly.
Subsequent patches will fold SP verification into kernel_ventry.

There should be no functional change as a result of this patch.
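
(Each vector slot is 128 bytes, per the .align 7 in kernel_ventry, so
there is room for the SP decrement, and later the overflow check,
directly in the slot.)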

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
[Mark: turn into prep patch, expand commit msg]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
---
 arch/arm64/kernel/entry.S | 47 ++++++++++++++++++++++++++---------------------
 1 file changed, 26 insertions(+), 21 deletions(-)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index b4c7db434654..f5aa8f010254 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -68,8 +68,13 @@
 #define BAD_FIQ		2
 #define BAD_ERROR	3
 
-	.macro	kernel_entry, el, regsize = 64
+	.macro kernel_ventry	label
+	.align 7
 	sub	sp, sp, #S_FRAME_SIZE
+	b	\label
+	.endm
+
+	.macro	kernel_entry, el, regsize = 64
 	.if	\regsize == 32
 	mov	w0, w0				// zero upper 32 bits of x0
 	.endif
@@ -257,31 +262,31 @@ tsk	.req	x28		// current thread_info
 
 	.align	11
 ENTRY(vectors)
-	ventry	el1_sync_invalid		// Synchronous EL1t
-	ventry	el1_irq_invalid			// IRQ EL1t
-	ventry	el1_fiq_invalid			// FIQ EL1t
-	ventry	el1_error_invalid		// Error EL1t
+	kernel_ventry	el1_sync_invalid		// Synchronous EL1t
+	kernel_ventry	el1_irq_invalid			// IRQ EL1t
+	kernel_ventry	el1_fiq_invalid			// FIQ EL1t
+	kernel_ventry	el1_error_invalid		// Error EL1t
 
-	ventry	el1_sync			// Synchronous EL1h
-	ventry	el1_irq				// IRQ EL1h
-	ventry	el1_fiq_invalid			// FIQ EL1h
-	ventry	el1_error_invalid		// Error EL1h
+	kernel_ventry	el1_sync			// Synchronous EL1h
+	kernel_ventry	el1_irq				// IRQ EL1h
+	kernel_ventry	el1_fiq_invalid			// FIQ EL1h
+	kernel_ventry	el1_error_invalid		// Error EL1h
 
-	ventry	el0_sync			// Synchronous 64-bit EL0
-	ventry	el0_irq				// IRQ 64-bit EL0
-	ventry	el0_fiq_invalid			// FIQ 64-bit EL0
-	ventry	el0_error_invalid		// Error 64-bit EL0
+	kernel_ventry	el0_sync			// Synchronous 64-bit EL0
+	kernel_ventry	el0_irq				// IRQ 64-bit EL0
+	kernel_ventry	el0_fiq_invalid			// FIQ 64-bit EL0
+	kernel_ventry	el0_error_invalid		// Error 64-bit EL0
 
 #ifdef CONFIG_COMPAT
-	ventry	el0_sync_compat			// Synchronous 32-bit EL0
-	ventry	el0_irq_compat			// IRQ 32-bit EL0
-	ventry	el0_fiq_invalid_compat		// FIQ 32-bit EL0
-	ventry	el0_error_invalid_compat	// Error 32-bit EL0
+	kernel_ventry	el0_sync_compat			// Synchronous 32-bit EL0
+	kernel_ventry	el0_irq_compat			// IRQ 32-bit EL0
+	kernel_ventry	el0_fiq_invalid_compat		// FIQ 32-bit EL0
+	kernel_ventry	el0_error_invalid_compat	// Error 32-bit EL0
 #else
-	ventry	el0_sync_invalid		// Synchronous 32-bit EL0
-	ventry	el0_irq_invalid			// IRQ 32-bit EL0
-	ventry	el0_fiq_invalid			// FIQ 32-bit EL0
-	ventry	el0_error_invalid		// Error 32-bit EL0
+	kernel_ventry	el0_sync_invalid		// Synchronous 32-bit EL0
+	kernel_ventry	el0_irq_invalid			// IRQ 32-bit EL0
+	kernel_ventry	el0_fiq_invalid			// FIQ 32-bit EL0
+	kernel_ventry	el0_error_invalid		// Error 32-bit EL0
 #endif
 END(vectors)
 
-- 
2.11.0

* [PATCH v4.9.y 07/27] module: extend 'rodata=off' boot cmdline parameter to module mappings
From: Mark Rutland @ 2018-04-03 11:09 UTC
  To: stable; +Cc: mark.brown, ard.biesheuvel, marc.zyngier, will.deacon

From: AKASHI Takahiro <takahiro.akashi@linaro.org>

commit 39290b389ea upstream.

The current "rodata=off" parameter disables read-only kernel mappings
under CONFIG_DEBUG_RODATA:
    commit d2aa1acad22f ("mm/init: Add 'rodata=off' boot cmdline parameter
    to disable read-only kernel mappings")

This patch is a logical extension of that to module mappings, i.e.
read-only mappings created at module loading can be disabled even when
CONFIG_DEBUG_SET_MODULE_RONX is enabled (mainly for debug use). Please
note, however, that it only affects RO/RW permissions, keeping NX set.

This is the first step towards making CONFIG_DEBUG_SET_MODULE_RONX
mandatory (always-on) in the future, as CONFIG_DEBUG_RODATA already is
on x86 and arm64.
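
For example, booting with rodata=off now leaves both the kernel image
and module text/rodata writable. The value is parsed with strtobool(),
so rodata=0 or rodata=n behave the same way.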

Suggested-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Rusty Russell <rusty@rustcorp.com.au>
Link: http://lkml.kernel.org/r/20161114061505.15238-1-takahiro.akashi@linaro.org
Signed-off-by: Jessica Yu <jeyu@redhat.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
---
 include/linux/init.h |  3 +++
 init/main.c          |  7 +++++--
 kernel/module.c      | 20 +++++++++++++++++---
 3 files changed, 25 insertions(+), 5 deletions(-)

diff --git a/include/linux/init.h b/include/linux/init.h
index 683508f6bb4e..0cca4142987f 100644
--- a/include/linux/init.h
+++ b/include/linux/init.h
@@ -133,6 +133,9 @@ void prepare_namespace(void);
 void __init load_default_modules(void);
 int __init init_rootfs(void);
 
+#if defined(CONFIG_DEBUG_RODATA) || defined(CONFIG_DEBUG_SET_MODULE_RONX)
+extern bool rodata_enabled;
+#endif
 #ifdef CONFIG_DEBUG_RODATA
 void mark_rodata_ro(void);
 #endif
diff --git a/init/main.c b/init/main.c
index 99f026565608..f22957afb37e 100644
--- a/init/main.c
+++ b/init/main.c
@@ -81,6 +81,7 @@
 #include <linux/proc_ns.h>
 #include <linux/io.h>
 #include <linux/kaiser.h>
+#include <linux/cache.h>
 
 #include <asm/io.h>
 #include <asm/bugs.h>
@@ -914,14 +915,16 @@ static int try_to_run_init_process(const char *init_filename)
 
 static noinline void __init kernel_init_freeable(void);
 
-#ifdef CONFIG_DEBUG_RODATA
-static bool rodata_enabled = true;
+#if defined(CONFIG_DEBUG_RODATA) || defined(CONFIG_DEBUG_SET_MODULE_RONX)
+bool rodata_enabled __ro_after_init = true;
 static int __init set_debug_rodata(char *str)
 {
 	return strtobool(str, &rodata_enabled);
 }
 __setup("rodata=", set_debug_rodata);
+#endif
 
+#ifdef CONFIG_DEBUG_RODATA
 static void mark_readonly(void)
 {
 	if (rodata_enabled)
diff --git a/kernel/module.c b/kernel/module.c
index 07bfb9971f2f..0651f2d25fc9 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -1911,6 +1911,9 @@ static void frob_writable_data(const struct module_layout *layout,
 /* livepatching wants to disable read-only so it can frob module. */
 void module_disable_ro(const struct module *mod)
 {
+	if (!rodata_enabled)
+		return;
+
 	frob_text(&mod->core_layout, set_memory_rw);
 	frob_rodata(&mod->core_layout, set_memory_rw);
 	frob_ro_after_init(&mod->core_layout, set_memory_rw);
@@ -1920,6 +1923,9 @@ void module_disable_ro(const struct module *mod)
 
 void module_enable_ro(const struct module *mod, bool after_init)
 {
+	if (!rodata_enabled)
+		return;
+
 	frob_text(&mod->core_layout, set_memory_ro);
 	frob_rodata(&mod->core_layout, set_memory_ro);
 	frob_text(&mod->init_layout, set_memory_ro);
@@ -1952,6 +1958,9 @@ void set_all_modules_text_rw(void)
 {
 	struct module *mod;
 
+	if (!rodata_enabled)
+		return;
+
 	mutex_lock(&module_mutex);
 	list_for_each_entry_rcu(mod, &modules, list) {
 		if (mod->state == MODULE_STATE_UNFORMED)
@@ -1968,6 +1977,9 @@ void set_all_modules_text_ro(void)
 {
 	struct module *mod;
 
+	if (!rodata_enabled)
+		return;
+
 	mutex_lock(&module_mutex);
 	list_for_each_entry_rcu(mod, &modules, list) {
 		if (mod->state == MODULE_STATE_UNFORMED)
@@ -1981,10 +1993,12 @@ void set_all_modules_text_ro(void)
 
 static void disable_ro_nx(const struct module_layout *layout)
 {
-	frob_text(layout, set_memory_rw);
-	frob_rodata(layout, set_memory_rw);
+	if (rodata_enabled) {
+		frob_text(layout, set_memory_rw);
+		frob_rodata(layout, set_memory_rw);
+		frob_ro_after_init(layout, set_memory_rw);
+	}
 	frob_rodata(layout, set_memory_x);
-	frob_ro_after_init(layout, set_memory_rw);
 	frob_ro_after_init(layout, set_memory_x);
 	frob_writable_data(layout, set_memory_x);
 }
-- 
2.11.0

* [PATCH v4.9.y 08/27] arm64: entry: Add exception trampoline page for exceptions from EL0
From: Mark Rutland @ 2018-04-03 11:09 UTC
  To: stable; +Cc: mark.brown, ard.biesheuvel, marc.zyngier, will.deacon

From: Will Deacon <will.deacon@arm.com>

commit c7b9adaf85f8 upstream.

To allow unmapping of the kernel whilst running at EL0, we need to
point the exception vectors at an entry trampoline that can map/unmap
the kernel on entry/exit respectively.

This patch adds the trampoline page, although it is not yet plugged
into the vector table and is therefore unused.
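
The tramp_map_kernel/tramp_unmap_kernel macros below rely on the linker
script placing tramp_pg_dir exactly SWAPPER_DIR_SIZE bytes after
swapper_pg_dir (see the vmlinux.lds.S hunk), so switching page tables
is pure arithmetic on the TTBR1 value. Roughly:

	/*
	 * map:   ttbr1 = (tramp_pg_dir - SWAPPER_DIR_SIZE) & ~USER_ASID_FLAG
	 *                                  -> swapper_pg_dir, kernel ASID
	 * unmap: ttbr1 = (swapper_pg_dir + SWAPPER_DIR_SIZE) | USER_ASID_FLAG
	 *                                  -> tramp_pg_dir, user ASID
	 */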

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
[Alex: avoid dependency on SW PAN patches]
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
[Mark: remove dummy SW PAN definitions]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
---
 arch/arm64/kernel/entry.S       | 86 +++++++++++++++++++++++++++++++++++++++++
 arch/arm64/kernel/vmlinux.lds.S | 17 ++++++++
 2 files changed, 103 insertions(+)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index f5aa8f010254..08f6f059e960 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -29,9 +29,11 @@
 #include <asm/esr.h>
 #include <asm/irq.h>
 #include <asm/memory.h>
+#include <asm/mmu.h>
 #include <asm/thread_info.h>
 #include <asm/asm-uaccess.h>
 #include <asm/unistd.h>
+#include <asm/kernel-pgtable.h>
 
 /*
  * Context tracking subsystem.  Used to instrument transitions
@@ -806,6 +808,90 @@ __ni_sys_trace:
 
 	.popsection				// .entry.text
 
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+/*
+ * Exception vectors trampoline.
+ */
+	.pushsection ".entry.tramp.text", "ax"
+
+	.macro tramp_map_kernel, tmp
+	mrs	\tmp, ttbr1_el1
+	sub	\tmp, \tmp, #SWAPPER_DIR_SIZE
+	bic	\tmp, \tmp, #USER_ASID_FLAG
+	msr	ttbr1_el1, \tmp
+	.endm
+
+	.macro tramp_unmap_kernel, tmp
+	mrs	\tmp, ttbr1_el1
+	add	\tmp, \tmp, #SWAPPER_DIR_SIZE
+	orr	\tmp, \tmp, #USER_ASID_FLAG
+	msr	ttbr1_el1, \tmp
+	/*
+	 * We avoid running the post_ttbr_update_workaround here because the
+	 * user and kernel ASIDs don't have conflicting mappings, so any
+	 * "blessing" as described in:
+	 *
+	 *   http://lkml.kernel.org/r/56BB848A.6060603@caviumnetworks.com
+	 *
+	 * will not hurt correctness. Whilst this may partially defeat the
+	 * point of using split ASIDs in the first place, it avoids
+	 * the hit of invalidating the entire I-cache on every return to
+	 * userspace.
+	 */
+	.endm
+
+	.macro tramp_ventry, regsize = 64
+	.align	7
+1:
+	.if	\regsize == 64
+	msr	tpidrro_el0, x30	// Restored in kernel_ventry
+	.endif
+	tramp_map_kernel	x30
+	ldr	x30, =vectors
+	prfm	plil1strm, [x30, #(1b - tramp_vectors)]
+	msr	vbar_el1, x30
+	add	x30, x30, #(1b - tramp_vectors)
+	isb
+	br	x30
+	.endm
+
+	.macro tramp_exit, regsize = 64
+	adr	x30, tramp_vectors
+	msr	vbar_el1, x30
+	tramp_unmap_kernel	x30
+	.if	\regsize == 64
+	mrs	x30, far_el1
+	.endif
+	eret
+	.endm
+
+	.align	11
+ENTRY(tramp_vectors)
+	.space	0x400
+
+	tramp_ventry
+	tramp_ventry
+	tramp_ventry
+	tramp_ventry
+
+	tramp_ventry	32
+	tramp_ventry	32
+	tramp_ventry	32
+	tramp_ventry	32
+END(tramp_vectors)
+
+ENTRY(tramp_exit_native)
+	tramp_exit
+END(tramp_exit_native)
+
+ENTRY(tramp_exit_compat)
+	tramp_exit	32
+END(tramp_exit_compat)
+
+	.ltorg
+	.popsection				// .entry.tramp.text
+#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
+
 /*
  * Special system call wrappers.
  */
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 1105aab1e6d6..466a43adec9f 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -56,6 +56,17 @@ jiffies = jiffies_64;
 #define HIBERNATE_TEXT
 #endif
 
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+#define TRAMP_TEXT					\
+	. = ALIGN(PAGE_SIZE);				\
+	VMLINUX_SYMBOL(__entry_tramp_text_start) = .;	\
+	*(.entry.tramp.text)				\
+	. = ALIGN(PAGE_SIZE);				\
+	VMLINUX_SYMBOL(__entry_tramp_text_end) = .;
+#else
+#define TRAMP_TEXT
+#endif
+
 /*
  * The size of the PE/COFF section that covers the kernel image, which
  * runs from stext to _edata, must be a round multiple of the PE/COFF
@@ -128,6 +139,7 @@ SECTIONS
 			HYPERVISOR_TEXT
 			IDMAP_TEXT
 			HIBERNATE_TEXT
+			TRAMP_TEXT
 			*(.fixup)
 			*(.gnu.warning)
 		. = ALIGN(16);
@@ -216,6 +228,11 @@ SECTIONS
 	swapper_pg_dir = .;
 	. += SWAPPER_DIR_SIZE;
 
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+	tramp_pg_dir = .;
+	. += PAGE_SIZE;
+#endif
+
 	_end = .;
 
 	STABS_DEBUG
-- 
2.11.0

* [PATCH v4.9.y 09/27] arm64: mm: Map entry trampoline into trampoline and kernel page tables
From: Mark Rutland @ 2018-04-03 11:09 UTC
  To: stable; +Cc: mark.brown, ard.biesheuvel, marc.zyngier, will.deacon

From: Will Deacon <will.deacon@arm.com>

commit 51a0048beb44 upstream.

The exception entry trampoline needs to be mapped at the same virtual
address in both the trampoline page table (which maps nothing else)
and also the kernel page table, so that we can swizzle TTBR1_EL1 on
exceptions from and return to EL0.

This patch maps the trampoline at a fixed virtual address in the fixmap
area of the kernel virtual address space, which allows the kernel proper
to be randomized with respect to the trampoline when KASLR is enabled.
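
Mapping the trampoline at the same virtual address in both tramp_pg_dir
and swapper_pg_dir is what keeps the program counter valid across the
TTBR1 switch: the instructions performing the switch must be mapped at
the same place both before and after it takes effect.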

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
---
 arch/arm64/include/asm/fixmap.h  |  5 +++++
 arch/arm64/include/asm/pgtable.h |  1 +
 arch/arm64/kernel/asm-offsets.c  |  6 +++++-
 arch/arm64/mm/mmu.c              | 23 +++++++++++++++++++++++
 4 files changed, 34 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/fixmap.h b/arch/arm64/include/asm/fixmap.h
index caf86be815ba..7b1d88c18143 100644
--- a/arch/arm64/include/asm/fixmap.h
+++ b/arch/arm64/include/asm/fixmap.h
@@ -51,6 +51,11 @@ enum fixed_addresses {
 
 	FIX_EARLYCON_MEM_BASE,
 	FIX_TEXT_POKE0,
+
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+	FIX_ENTRY_TRAMP_TEXT,
+#define TRAMP_VALIAS		(__fix_to_virt(FIX_ENTRY_TRAMP_TEXT))
+#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
 	__end_of_permanent_fixed_addresses,
 
 	/*
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 7acd3c5c7643..3a30a3994e4a 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -692,6 +692,7 @@ static inline void pmdp_set_wrprotect(struct mm_struct *mm,
 
 extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
 extern pgd_t idmap_pg_dir[PTRS_PER_PGD];
+extern pgd_t tramp_pg_dir[PTRS_PER_PGD];
 
 /*
  * Encode and decode a swap entry:
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index c58ddf8c4062..5f4bf3c6f016 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -24,6 +24,7 @@
 #include <linux/kvm_host.h>
 #include <linux/suspend.h>
 #include <asm/cpufeature.h>
+#include <asm/fixmap.h>
 #include <asm/thread_info.h>
 #include <asm/memory.h>
 #include <asm/smp_plat.h>
@@ -144,11 +145,14 @@ int main(void)
   DEFINE(ARM_SMCCC_RES_X2_OFFS,		offsetof(struct arm_smccc_res, a2));
   DEFINE(ARM_SMCCC_QUIRK_ID_OFFS,	offsetof(struct arm_smccc_quirk, id));
   DEFINE(ARM_SMCCC_QUIRK_STATE_OFFS,	offsetof(struct arm_smccc_quirk, state));
-
   BLANK();
   DEFINE(HIBERN_PBE_ORIG,	offsetof(struct pbe, orig_address));
   DEFINE(HIBERN_PBE_ADDR,	offsetof(struct pbe, address));
   DEFINE(HIBERN_PBE_NEXT,	offsetof(struct pbe, next));
   DEFINE(ARM64_FTR_SYSVAL,	offsetof(struct arm64_ftr_reg, sys_val));
+  BLANK();
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+  DEFINE(TRAMP_VALIAS,		TRAMP_VALIAS);
+#endif
   return 0;
 }
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 638f7f2bd79c..3a57fec16b32 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -419,6 +419,29 @@ static void __init map_kernel_segment(pgd_t *pgd, void *va_start, void *va_end,
 	vm_area_add_early(vma);
 }
 
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+static int __init map_entry_trampoline(void)
+{
+	extern char __entry_tramp_text_start[];
+
+	pgprot_t prot = rodata_enabled ? PAGE_KERNEL_ROX : PAGE_KERNEL_EXEC;
+	phys_addr_t pa_start = __pa_symbol(__entry_tramp_text_start);
+
+	/* The trampoline is always mapped and can therefore be global */
+	pgprot_val(prot) &= ~PTE_NG;
+
+	/* Map only the text into the trampoline page table */
+	memset(tramp_pg_dir, 0, PGD_SIZE);
+	__create_pgd_mapping(tramp_pg_dir, pa_start, TRAMP_VALIAS, PAGE_SIZE,
+			     prot, pgd_pgtable_alloc, 0);
+
+	/* ...as well as the kernel page table */
+	__set_fixmap(FIX_ENTRY_TRAMP_TEXT, pa_start, prot);
+	return 0;
+}
+core_initcall(map_entry_trampoline);
+#endif
+
 /*
  * Create fine-grained mappings for the kernel.
  */
-- 
2.11.0

* [PATCH v4.9.y 10/27] arm64: entry: Explicitly pass exception level to kernel_ventry macro
From: Mark Rutland @ 2018-04-03 11:09 UTC
  To: stable; +Cc: mark.brown, ard.biesheuvel, marc.zyngier, will.deacon

From: Will Deacon <will.deacon@arm.com>

commit 5b1f7fe41909 upstream.

We will need to treat exceptions from EL0 differently in kernel_ventry,
so rework the macro to take the exception level as an argument and
construct the branch target using that.
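
For example, "kernel_ventry 0, sync" now branches to el0_sync; the \()
separators in el\()\el\()_\label keep the assembler from folding the
surrounding characters into the macro argument names.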

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
[Mark: avoid dependency on C error handler backport]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
---
 arch/arm64/kernel/entry.S | 44 ++++++++++++++++++++++----------------------
 1 file changed, 22 insertions(+), 22 deletions(-)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 08f6f059e960..0a9f6e7a76d6 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -70,10 +70,10 @@
 #define BAD_FIQ		2
 #define BAD_ERROR	3
 
-	.macro kernel_ventry	label
+	.macro kernel_ventry, el, label, regsize = 64
 	.align 7
 	sub	sp, sp, #S_FRAME_SIZE
-	b	\label
+	b	el\()\el\()_\label
 	.endm
 
 	.macro	kernel_entry, el, regsize = 64
@@ -264,31 +264,31 @@ tsk	.req	x28		// current thread_info
 
 	.align	11
 ENTRY(vectors)
-	kernel_ventry	el1_sync_invalid		// Synchronous EL1t
-	kernel_ventry	el1_irq_invalid			// IRQ EL1t
-	kernel_ventry	el1_fiq_invalid			// FIQ EL1t
-	kernel_ventry	el1_error_invalid		// Error EL1t
+	kernel_ventry	1, sync_invalid			// Synchronous EL1t
+	kernel_ventry	1, irq_invalid			// IRQ EL1t
+	kernel_ventry	1, fiq_invalid			// FIQ EL1t
+	kernel_ventry	1, error_invalid		// Error EL1t
 
-	kernel_ventry	el1_sync			// Synchronous EL1h
-	kernel_ventry	el1_irq				// IRQ EL1h
-	kernel_ventry	el1_fiq_invalid			// FIQ EL1h
-	kernel_ventry	el1_error_invalid		// Error EL1h
+	kernel_ventry	1, sync				// Synchronous EL1h
+	kernel_ventry	1, irq				// IRQ EL1h
+	kernel_ventry	1, fiq_invalid			// FIQ EL1h
+	kernel_ventry	1, error_invalid		// Error EL1h
 
-	kernel_ventry	el0_sync			// Synchronous 64-bit EL0
-	kernel_ventry	el0_irq				// IRQ 64-bit EL0
-	kernel_ventry	el0_fiq_invalid			// FIQ 64-bit EL0
-	kernel_ventry	el0_error_invalid		// Error 64-bit EL0
+	kernel_ventry	0, sync				// Synchronous 64-bit EL0
+	kernel_ventry	0, irq				// IRQ 64-bit EL0
+	kernel_ventry	0, fiq_invalid			// FIQ 64-bit EL0
+	kernel_ventry	0, error_invalid		// Error 64-bit EL0
 
 #ifdef CONFIG_COMPAT
-	kernel_ventry	el0_sync_compat			// Synchronous 32-bit EL0
-	kernel_ventry	el0_irq_compat			// IRQ 32-bit EL0
-	kernel_ventry	el0_fiq_invalid_compat		// FIQ 32-bit EL0
-	kernel_ventry	el0_error_invalid_compat	// Error 32-bit EL0
+	kernel_ventry	0, sync_compat, 32		// Synchronous 32-bit EL0
+	kernel_ventry	0, irq_compat, 32		// IRQ 32-bit EL0
+	kernel_ventry	0, fiq_invalid_compat, 32	// FIQ 32-bit EL0
+	kernel_ventry	0, error_invalid_compat, 32	// Error 32-bit EL0
 #else
-	kernel_ventry	el0_sync_invalid		// Synchronous 32-bit EL0
-	kernel_ventry	el0_irq_invalid			// IRQ 32-bit EL0
-	kernel_ventry	el0_fiq_invalid			// FIQ 32-bit EL0
-	kernel_ventry	el0_error_invalid		// Error 32-bit EL0
+	kernel_ventry	0, sync_invalid, 32		// Synchronous 32-bit EL0
+	kernel_ventry	0, irq_invalid, 32		// IRQ 32-bit EL0
+	kernel_ventry	0, fiq_invalid, 32		// FIQ 32-bit EL0
+	kernel_ventry	0, error_invalid, 32		// Error 32-bit EL0
 #endif
 END(vectors)
 
-- 
2.11.0

* [PATCH v4.9.y 11/27] arm64: entry: Hook up entry trampoline to exception vectors
From: Mark Rutland @ 2018-04-03 11:09 UTC
  To: stable; +Cc: mark.brown, ard.biesheuvel, marc.zyngier, will.deacon

From: Will Deacon <will.deacon@arm.com>

commit 4bf3286d29f3 upstream.

Hook up the entry trampoline to our exception vectors so that all
exceptions from and returns to EL0 go via the trampoline, which swizzles
the vector base register accordingly. Transitioning to and from the
kernel clobbers x30, so we use tpidrro_el0 and far_el1 as scratch
registers for native tasks.
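
(Concretely: on entry, tramp_ventry stashes x30 in tpidrro_el0 and
kernel_ventry restores it; on exit, kernel_exit parks x30 in far_el1
and tramp_exit reloads it immediately before the eret.)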

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
---
 arch/arm64/kernel/entry.S | 39 ++++++++++++++++++++++++++++++++++++---
 1 file changed, 36 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 0a9f6e7a76d6..623c160bf68e 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -72,10 +72,26 @@
 
 	.macro kernel_ventry, el, label, regsize = 64
 	.align 7
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+	.if	\el == 0
+	.if	\regsize == 64
+	mrs	x30, tpidrro_el0
+	msr	tpidrro_el0, xzr
+	.else
+	mov	x30, xzr
+	.endif
+	.endif
+#endif
+
 	sub	sp, sp, #S_FRAME_SIZE
 	b	el\()\el\()_\label
 	.endm
 
+	.macro tramp_alias, dst, sym
+	mov_q	\dst, TRAMP_VALIAS
+	add	\dst, \dst, #(\sym - .entry.tramp.text)
+	.endm
+
 	.macro	kernel_entry, el, regsize = 64
 	.if	\regsize == 32
 	mov	w0, w0				// zero upper 32 bits of x0
@@ -157,18 +173,20 @@
 	ct_user_enter
 	ldr	x23, [sp, #S_SP]		// load return stack pointer
 	msr	sp_el0, x23
+	tst	x22, #PSR_MODE32_BIT		// native task?
+	b.eq	3f
+
 #ifdef CONFIG_ARM64_ERRATUM_845719
 alternative_if ARM64_WORKAROUND_845719
-	tbz	x22, #4, 1f
 #ifdef CONFIG_PID_IN_CONTEXTIDR
 	mrs	x29, contextidr_el1
 	msr	contextidr_el1, x29
 #else
 	msr contextidr_el1, xzr
 #endif
-1:
 alternative_else_nop_endif
 #endif
+3:
 	.endif
 	msr	elr_el1, x21			// set up the return data
 	msr	spsr_el1, x22
@@ -189,7 +207,22 @@ alternative_else_nop_endif
 	ldp	x28, x29, [sp, #16 * 14]
 	ldr	lr, [sp, #S_LR]
 	add	sp, sp, #S_FRAME_SIZE		// restore sp
-	eret					// return to kernel
+
+#ifndef CONFIG_UNMAP_KERNEL_AT_EL0
+	eret
+#else
+	.if	\el == 0
+	bne	4f
+	msr	far_el1, x30
+	tramp_alias	x30, tramp_exit_native
+	br	x30
+4:
+	tramp_alias	x30, tramp_exit_compat
+	br	x30
+	.else
+	eret
+	.endif
+#endif
 	.endm
 
 	.macro	get_thread_info, rd
-- 
2.11.0

* [PATCH v4.9.y 12/27] arm64: tls: Avoid unconditional zeroing of tpidrro_el0 for native tasks
From: Mark Rutland @ 2018-04-03 11:09 UTC
  To: stable; +Cc: mark.brown, ard.biesheuvel, marc.zyngier, will.deacon

From: Will Deacon <will.deacon@arm.com>

commit 18011eac28c7 upstream.

When unmapping the kernel at EL0, we use tpidrro_el0 as a scratch register
during exception entry from native tasks and subsequently zero it in
the kernel_ventry macro. We can therefore avoid zeroing tpidrro_el0
in the context-switch path for native tasks using the entry trampoline.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
---
 arch/arm64/kernel/process.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 0e7394915c70..0972ce58316d 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -306,17 +306,17 @@ int copy_thread(unsigned long clone_flags, unsigned long stack_start,
 
 static void tls_thread_switch(struct task_struct *next)
 {
-	unsigned long tpidr, tpidrro;
+	unsigned long tpidr;
 
 	tpidr = read_sysreg(tpidr_el0);
 	*task_user_tls(current) = tpidr;
 
-	tpidr = *task_user_tls(next);
-	tpidrro = is_compat_thread(task_thread_info(next)) ?
-		  next->thread.tp_value : 0;
+	if (is_compat_thread(task_thread_info(next)))
+		write_sysreg(next->thread.tp_value, tpidrro_el0);
+	else if (!arm64_kernel_unmapped_at_el0())
+		write_sysreg(0, tpidrro_el0);
 
-	write_sysreg(tpidr, tpidr_el0);
-	write_sysreg(tpidrro, tpidrro_el0);
+	write_sysreg(*task_user_tls(next), tpidr_el0);
 }
 
 /* Restore the UAO state depending on next's addr_limit */
-- 
2.11.0

* [PATCH v4.9.y 13/27] arm64: entry: Add fake CPU feature for unmapping the kernel at EL0
From: Mark Rutland @ 2018-04-03 11:09 UTC
  To: stable; +Cc: mark.brown, ard.biesheuvel, marc.zyngier, will.deacon

From: Will Deacon <will.deacon@arm.com>

commit ea1e3de85e94 upstream.

Allow explicit disabling of the entry trampoline on the kernel command
line (kpti=off) by adding a fake CPU feature (ARM64_UNMAP_KERNEL_AT_EL0)
that can be used to toggle the alternative sequences in our entry code and
avoid use of the trampoline altogether if desired. This also allows us to
make use of a static key in arm64_kernel_unmapped_at_el0().
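
For example, booting with kpti=off disables the trampoline even on a
CONFIG_RANDOMIZE_BASE kernel, while kpti=on enables it; parse_kpti()
uses strtobool(), so 0/1/y/n are accepted too.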

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
[Alex: use first free cpucap number, use cpus_have_cap]
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
---
 arch/arm64/include/asm/cpucaps.h |  3 ++-
 arch/arm64/include/asm/mmu.h     |  3 ++-
 arch/arm64/kernel/cpufeature.c   | 41 ++++++++++++++++++++++++++++++++++++++++
 arch/arm64/kernel/entry.S        |  9 +++++----
 4 files changed, 50 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index 87b446535185..7ddf233f05bd 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -34,7 +34,8 @@
 #define ARM64_HAS_32BIT_EL0			13
 #define ARM64_HYP_OFFSET_LOW			14
 #define ARM64_MISMATCHED_CACHE_LINE_SIZE	15
+#define ARM64_UNMAP_KERNEL_AT_EL0		16
 
-#define ARM64_NCAPS				16
+#define ARM64_NCAPS				17
 
 #endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 279e75b8a49e..a813edf28737 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -34,7 +34,8 @@ typedef struct {
 
 static inline bool arm64_kernel_unmapped_at_el0(void)
 {
-	return IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0);
+	return IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0) &&
+	       cpus_have_cap(ARM64_UNMAP_KERNEL_AT_EL0);
 }
 
 extern void paging_init(void);
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 3a129d48674e..74b168c51abd 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -746,6 +746,40 @@ static bool hyp_offset_low(const struct arm64_cpu_capabilities *entry,
 	return idmap_addr > GENMASK(VA_BITS - 2, 0) && !is_kernel_in_hyp_mode();
 }
 
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+static int __kpti_forced; /* 0: not forced, >0: forced on, <0: forced off */
+
+static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
+				int __unused)
+{
+	/* Forced on command line? */
+	if (__kpti_forced) {
+		pr_info_once("kernel page table isolation forced %s by command line option\n",
+			     __kpti_forced > 0 ? "ON" : "OFF");
+		return __kpti_forced > 0;
+	}
+
+	/* Useful for KASLR robustness */
+	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE))
+		return true;
+
+	return false;
+}
+
+static int __init parse_kpti(char *str)
+{
+	bool enabled;
+	int ret = strtobool(str, &enabled);
+
+	if (ret)
+		return ret;
+
+	__kpti_forced = enabled ? 1 : -1;
+	return 0;
+}
+__setup("kpti=", parse_kpti);
+#endif	/* CONFIG_UNMAP_KERNEL_AT_EL0 */
+
 static const struct arm64_cpu_capabilities arm64_features[] = {
 	{
 		.desc = "GIC system register CPU interface",
@@ -829,6 +863,13 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.def_scope = SCOPE_SYSTEM,
 		.matches = hyp_offset_low,
 	},
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+	{
+		.capability = ARM64_UNMAP_KERNEL_AT_EL0,
+		.def_scope = SCOPE_SYSTEM,
+		.matches = unmap_kernel_at_el0,
+	},
+#endif
 	{},
 };
 
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 623c160bf68e..b16d0534cda3 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -73,6 +73,7 @@
 	.macro kernel_ventry, el, label, regsize = 64
 	.align 7
 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+alternative_if ARM64_UNMAP_KERNEL_AT_EL0
 	.if	\el == 0
 	.if	\regsize == 64
 	mrs	x30, tpidrro_el0
@@ -81,6 +82,7 @@
 	mov	x30, xzr
 	.endif
 	.endif
+alternative_else_nop_endif
 #endif
 
 	sub	sp, sp, #S_FRAME_SIZE
@@ -208,10 +210,9 @@ alternative_else_nop_endif
 	ldr	lr, [sp, #S_LR]
 	add	sp, sp, #S_FRAME_SIZE		// restore sp
 
-#ifndef CONFIG_UNMAP_KERNEL_AT_EL0
-	eret
-#else
 	.if	\el == 0
+alternative_insn eret, nop, ARM64_UNMAP_KERNEL_AT_EL0
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 	bne	4f
 	msr	far_el1, x30
 	tramp_alias	x30, tramp_exit_native
@@ -219,10 +220,10 @@ alternative_else_nop_endif
 4:
 	tramp_alias	x30, tramp_exit_compat
 	br	x30
+#endif
 	.else
 	eret
 	.endif
-#endif
 	.endm
 
 	.macro	get_thread_info, rd
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [PATCH v4.9.y 14/27] arm64: kaslr: Put kernel vectors address in separate data page
  2018-04-03 11:08 [PATCH v4.9.y 00/27] arm64 meltdown patches Mark Rutland
                   ` (12 preceding siblings ...)
  2018-04-03 11:09 ` [PATCH v4.9.y 13/27] arm64: entry: Add fake CPU feature for unmapping the kernel at EL0 Mark Rutland
@ 2018-04-03 11:09 ` Mark Rutland
  2018-04-05 19:42   ` Patch "arm64: kaslr: Put kernel vectors address in separate data page" has been added to the 4.9-stable tree gregkh
  2018-04-03 11:09 ` [PATCH v4.9.y 15/27] arm64: use RET instruction for exiting the trampoline Mark Rutland
                   ` (14 subsequent siblings)
  28 siblings, 1 reply; 63+ messages in thread
From: Mark Rutland @ 2018-04-03 11:09 UTC (permalink / raw)
  To: stable; +Cc: mark.brown, ard.biesheuvel, marc.zyngier, will.deacon

From: Will Deacon <will.deacon@arm.com>

commit 6c27c4082f4f upstream.

The literal pool entry for identifying the vectors base is the only piece
of information in the trampoline page that identifies the true location
of the kernel.

This patch moves it into a page-aligned region of the .rodata section
and maps this adjacent to the trampoline text via an additional fixmap
entry, which protects against any accidental leakage of the trampoline
contents.
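
With CONFIG_RANDOMIZE_BASE, the trampoline then locates the literal via
a PC-relative "adr tramp_vectors + PAGE_SIZE". This relies on the data
page being allocated in the fixmap directly above the trampoline text,
i.e. (a sketch of the invariant, not something this backport asserts):

        __fix_to_virt(FIX_ENTRY_TRAMP_DATA) == TRAMP_VALIAS + PAGE_SIZE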

Suggested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
[Alex: avoid ARM64_WORKAROUND_QCOM_FALKOR_E1003 dependency]
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
---
 arch/arm64/include/asm/fixmap.h |  1 +
 arch/arm64/kernel/entry.S       | 14 ++++++++++++++
 arch/arm64/kernel/vmlinux.lds.S |  5 ++++-
 arch/arm64/mm/mmu.c             | 10 +++++++++-
 4 files changed, 28 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/fixmap.h b/arch/arm64/include/asm/fixmap.h
index 7b1d88c18143..d8e58051f32d 100644
--- a/arch/arm64/include/asm/fixmap.h
+++ b/arch/arm64/include/asm/fixmap.h
@@ -53,6 +53,7 @@ enum fixed_addresses {
 	FIX_TEXT_POKE0,
 
 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+	FIX_ENTRY_TRAMP_DATA,
 	FIX_ENTRY_TRAMP_TEXT,
 #define TRAMP_VALIAS		(__fix_to_virt(FIX_ENTRY_TRAMP_TEXT))
 #endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index b16d0534cda3..805dc76517c3 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -881,7 +881,13 @@ __ni_sys_trace:
 	msr	tpidrro_el0, x30	// Restored in kernel_ventry
 	.endif
 	tramp_map_kernel	x30
+#ifdef CONFIG_RANDOMIZE_BASE
+	adr	x30, tramp_vectors + PAGE_SIZE
+	isb
+	ldr	x30, [x30]
+#else
 	ldr	x30, =vectors
+#endif
 	prfm	plil1strm, [x30, #(1b - tramp_vectors)]
 	msr	vbar_el1, x30
 	add	x30, x30, #(1b - tramp_vectors)
@@ -924,6 +930,14 @@ END(tramp_exit_compat)
 
 	.ltorg
 	.popsection				// .entry.tramp.text
+#ifdef CONFIG_RANDOMIZE_BASE
+	.pushsection ".rodata", "a"
+	.align PAGE_SHIFT
+	.globl	__entry_tramp_data_start
+__entry_tramp_data_start:
+	.quad	vectors
+	.popsection				// .rodata
+#endif /* CONFIG_RANDOMIZE_BASE */
 #endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
 
 /*
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 466a43adec9f..6a584558b29d 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -252,7 +252,10 @@ ASSERT(__idmap_text_end - (__idmap_text_start & ~(SZ_4K - 1)) <= SZ_4K,
 ASSERT(__hibernate_exit_text_end - (__hibernate_exit_text_start & ~(SZ_4K - 1))
 	<= SZ_4K, "Hibernate exit text too big or misaligned")
 #endif
-
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+ASSERT((__entry_tramp_text_end - __entry_tramp_text_start) == PAGE_SIZE,
+	"Entry trampoline text too big")
+#endif
 /*
  * If padding is applied before .head.text, virt<->phys conversions will fail.
  */
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 3a57fec16b32..4cd4862845cd 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -435,8 +435,16 @@ static int __init map_entry_trampoline(void)
 	__create_pgd_mapping(tramp_pg_dir, pa_start, TRAMP_VALIAS, PAGE_SIZE,
 			     prot, pgd_pgtable_alloc, 0);
 
-	/* ...as well as the kernel page table */
+	/* Map both the text and data into the kernel page table */
 	__set_fixmap(FIX_ENTRY_TRAMP_TEXT, pa_start, prot);
+	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
+		extern char __entry_tramp_data_start[];
+
+		__set_fixmap(FIX_ENTRY_TRAMP_DATA,
+			     __pa_symbol(__entry_tramp_data_start),
+			     PAGE_KERNEL_RO);
+	}
+
 	return 0;
 }
 core_initcall(map_entry_trampoline);
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [PATCH v4.9.y 15/27] arm64: use RET instruction for exiting the trampoline
  2018-04-03 11:08 [PATCH v4.9.y 00/27] arm64 meltdown patches Mark Rutland
                   ` (13 preceding siblings ...)
  2018-04-03 11:09 ` [PATCH v4.9.y 14/27] arm64: kaslr: Put kernel vectors address in separate data page Mark Rutland
@ 2018-04-03 11:09 ` Mark Rutland
  2018-04-05 19:42   ` Patch "arm64: use RET instruction for exiting the trampoline" has been added to the 4.9-stable tree gregkh
  2018-04-03 11:09 ` [PATCH v4.9.y 16/27] arm64: Kconfig: Add CONFIG_UNMAP_KERNEL_AT_EL0 Mark Rutland
                   ` (13 subsequent siblings)
  28 siblings, 1 reply; 63+ messages in thread
From: Mark Rutland @ 2018-04-03 11:09 UTC (permalink / raw)
  To: stable; +Cc: mark.brown, ard.biesheuvel, marc.zyngier, will.deacon

From: Will Deacon <will.deacon@arm.com>

commit be04a6d1126b upstream.

Speculation attacks against the entry trampoline can potentially resteer
the speculative instruction stream through the indirect branch and into
arbitrary gadgets within the kernel.

This patch defends against these attacks by forcing a misprediction
through the return stack: a dummy BL instruction loads an entry into
the stack, so that the predicted program flow of the subsequent RET
instruction is to a branch-to-self instruction which is finally resolved
as a branch to the kernel vectors with speculation suppressed.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
---
 arch/arm64/kernel/entry.S | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 805dc76517c3..f35ca1e54b5a 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -880,6 +880,14 @@ __ni_sys_trace:
 	.if	\regsize == 64
 	msr	tpidrro_el0, x30	// Restored in kernel_ventry
 	.endif
+	/*
+	 * Defend against branch aliasing attacks by pushing a dummy
+	 * entry onto the return stack and using a RET instruction to
+	 * enter the full-fat kernel vectors.
+	 */
+	bl	2f
+	b	.
+2:
 	tramp_map_kernel	x30
 #ifdef CONFIG_RANDOMIZE_BASE
 	adr	x30, tramp_vectors + PAGE_SIZE
@@ -892,7 +900,7 @@ __ni_sys_trace:
 	msr	vbar_el1, x30
 	add	x30, x30, #(1b - tramp_vectors)
 	isb
-	br	x30
+	ret
 	.endm
 
 	.macro tramp_exit, regsize = 64
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [PATCH v4.9.y 16/27] arm64: Kconfig: Add CONFIG_UNMAP_KERNEL_AT_EL0
  2018-04-03 11:08 [PATCH v4.9.y 00/27] arm64 meltdown patches Mark Rutland
                   ` (14 preceding siblings ...)
  2018-04-03 11:09 ` [PATCH v4.9.y 15/27] arm64: use RET instruction for exiting the trampoline Mark Rutland
@ 2018-04-03 11:09 ` Mark Rutland
  2018-04-05 19:42   ` Patch "arm64: Kconfig: Add CONFIG_UNMAP_KERNEL_AT_EL0" has been added to the 4.9-stable tree gregkh
  2018-04-03 11:09 ` [PATCH v4.9.y 17/27] arm64: Kconfig: Reword UNMAP_KERNEL_AT_EL0 kconfig entry Mark Rutland
                   ` (12 subsequent siblings)
  28 siblings, 1 reply; 63+ messages in thread
From: Mark Rutland @ 2018-04-03 11:09 UTC (permalink / raw)
  To: stable; +Cc: mark.brown, ard.biesheuvel, marc.zyngier, will.deacon

From: Will Deacon <will.deacon@arm.com>

commit 084eb77cd3a8 upstream.

Add a Kconfig entry to control use of the entry trampoline, which allows
us to unmap the kernel whilst running in userspace and improve the
robustness of KASLR.
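
A quick way to confirm the option in a given build (illustrative):

        $ grep UNMAP_KERNEL_AT_EL0 .config
        CONFIG_UNMAP_KERNEL_AT_EL0=y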

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
---
 arch/arm64/Kconfig | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 7769c2e27788..6b6e9f89e40a 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -733,6 +733,19 @@ config FORCE_MAX_ZONEORDER
 	  However for 4K, we choose a higher default value, 11 as opposed to 10, giving us
 	  4M allocations matching the default size used by generic code.
 
+config UNMAP_KERNEL_AT_EL0
+	bool "Unmap kernel when running in userspace (aka \"KAISER\")"
+	default y
+	help
+	  Some attacks against KASLR make use of the timing difference between
+	  a permission fault which could arise from a page table entry that is
+	  present in the TLB, and a translation fault which always requires a
+	  page table walk. This option defends against these attacks by unmapping
+	  the kernel whilst running in userspace, therefore forcing translation
+	  faults for all of kernel space.
+
+	  If unsure, say Y.
+
 menuconfig ARMV8_DEPRECATED
 	bool "Emulate deprecated/obsolete ARMv8 instructions"
 	depends on COMPAT
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [PATCH v4.9.y 17/27] arm64: Kconfig: Reword UNMAP_KERNEL_AT_EL0 kconfig entry
  2018-04-03 11:08 [PATCH v4.9.y 00/27] arm64 meltdown patches Mark Rutland
                   ` (15 preceding siblings ...)
  2018-04-03 11:09 ` [PATCH v4.9.y 16/27] arm64: Kconfig: Add CONFIG_UNMAP_KERNEL_AT_EL0 Mark Rutland
@ 2018-04-03 11:09 ` Mark Rutland
  2018-04-05 19:42   ` Patch "arm64: Kconfig: Reword UNMAP_KERNEL_AT_EL0 kconfig entry" has been added to the 4.9-stable tree gregkh
  2018-04-03 11:09 ` [PATCH v4.9.y 18/27] arm64: Take into account ID_AA64PFR0_EL1.CSV3 Mark Rutland
                   ` (11 subsequent siblings)
  28 siblings, 1 reply; 63+ messages in thread
From: Mark Rutland @ 2018-04-03 11:09 UTC (permalink / raw)
  To: stable; +Cc: mark.brown, ard.biesheuvel, marc.zyngier, will.deacon

From: Will Deacon <will.deacon@arm.com>

commit 0617052ddde3 upstream.

Although CONFIG_UNMAP_KERNEL_AT_EL0 does make KASLR more robust, it's
actually more useful as a mitigation against speculation attacks that
can leak arbitrary kernel data to userspace through speculation.

Reword the Kconfig help message to reflect this, and make the option
depend on EXPERT so that it is on by default for the majority of users.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
---
 arch/arm64/Kconfig | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 6b6e9f89e40a..c8471cf46cbb 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -734,15 +734,14 @@ config FORCE_MAX_ZONEORDER
 	  4M allocations matching the default size used by generic code.
 
 config UNMAP_KERNEL_AT_EL0
-	bool "Unmap kernel when running in userspace (aka \"KAISER\")"
+	bool "Unmap kernel when running in userspace (aka \"KAISER\")" if EXPERT
 	default y
 	help
-	  Some attacks against KASLR make use of the timing difference between
-	  a permission fault which could arise from a page table entry that is
-	  present in the TLB, and a translation fault which always requires a
-	  page table walk. This option defends against these attacks by unmapping
-	  the kernel whilst running in userspace, therefore forcing translation
-	  faults for all of kernel space.
+	  Speculation attacks against some high-performance processors can
+	  be used to bypass MMU permission checks and leak kernel data to
+	  userspace. This can be defended against by unmapping the kernel
+	  when running in userspace, mapping it back in on exception entry
+	  via a trampoline page in the vector table.
 
 	  If unsure, say Y.
 
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [PATCH v4.9.y 18/27] arm64: Take into account ID_AA64PFR0_EL1.CSV3
  2018-04-03 11:08 [PATCH v4.9.y 00/27] arm64 meltdown patches Mark Rutland
                   ` (16 preceding siblings ...)
  2018-04-03 11:09 ` [PATCH v4.9.y 17/27] arm64: Kconfig: Reword UNMAP_KERNEL_AT_EL0 kconfig entry Mark Rutland
@ 2018-04-03 11:09 ` Mark Rutland
  2018-04-05 19:42   ` Patch "arm64: Take into account ID_AA64PFR0_EL1.CSV3" has been added to the 4.9-stable tree gregkh
  2018-04-03 11:09 ` [PATCH v4.9.y 19/27] arm64: Allow checking of a CPU-local erratum Mark Rutland
                   ` (10 subsequent siblings)
  28 siblings, 1 reply; 63+ messages in thread
From: Mark Rutland @ 2018-04-03 11:09 UTC (permalink / raw)
  To: stable; +Cc: mark.brown, ard.biesheuvel, marc.zyngier, will.deacon

From: Will Deacon <will.deacon@arm.com>

commit 179a56f6f9fb upstream.

For non-KASLR kernels where the KPTI behaviour has not been overridden
on the command line we can use ID_AA64PFR0_EL1.CSV3 to determine whether
or not we should unmap the kernel whilst running at EL0.
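
ID_AA64PFR0_EL1.CSV3 is a four-bit field at bits [63:60]; a non-zero
value advertises that the CPU is not vulnerable. The decision then
reduces to (a sketch of the hunk below):

        u64 pfr0 = read_system_reg(SYS_ID_AA64PFR0_EL1);

        /* KPTI is needed iff the CPU does not advertise CSV3 */
        bool kpti_needed = !cpuid_feature_extract_unsigned_field(pfr0,
                                                ID_AA64PFR0_CSV3_SHIFT);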

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
[Alex: s/read_sanitised_ftr_reg/read_system_reg/ to match v4.9 naming]
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
[Mark: correct zero bits in ftr_id_aa64pfr0 to account for CSV3]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
---
 arch/arm64/include/asm/sysreg.h |  1 +
 arch/arm64/kernel/cpufeature.c  | 10 ++++++++--
 2 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 7393cc767edb..7cb7f7cdcfbc 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -117,6 +117,7 @@
 #define ID_AA64ISAR0_AES_SHIFT		4
 
 /* id_aa64pfr0 */
+#define ID_AA64PFR0_CSV3_SHIFT		60
 #define ID_AA64PFR0_GIC_SHIFT		24
 #define ID_AA64PFR0_ASIMD_SHIFT		20
 #define ID_AA64PFR0_FP_SHIFT		16
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 74b168c51abd..d24e59cedae5 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -93,7 +93,8 @@ static const struct arm64_ftr_bits ftr_id_aa64isar0[] = {
 };
 
 static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
-	ARM64_FTR_BITS(FTR_STRICT, FTR_EXACT, 32, 32, 0),
+	ARM64_FTR_BITS(FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV3_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_STRICT, FTR_EXACT, 32, 28, 0),
 	ARM64_FTR_BITS(FTR_STRICT, FTR_EXACT, 28, 4, 0),
 	ARM64_FTR_BITS(FTR_STRICT, FTR_EXACT, ID_AA64PFR0_GIC_SHIFT, 4, 0),
 	S_ARM64_FTR_BITS(FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_ASIMD_SHIFT, 4, ID_AA64PFR0_ASIMD_NI),
@@ -752,6 +753,8 @@ static int __kpti_forced; /* 0: not forced, >0: forced on, <0: forced off */
 static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
 				int __unused)
 {
+	u64 pfr0 = read_system_reg(SYS_ID_AA64PFR0_EL1);
+
 	/* Forced on command line? */
 	if (__kpti_forced) {
 		pr_info_once("kernel page table isolation forced %s by command line option\n",
@@ -763,7 +766,9 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
 	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE))
 		return true;
 
-	return false;
+	/* Defer to CPU feature registers */
+	return !cpuid_feature_extract_unsigned_field(pfr0,
+						     ID_AA64PFR0_CSV3_SHIFT);
 }
 
 static int __init parse_kpti(char *str)
@@ -865,6 +870,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 	},
 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 	{
+		.desc = "Kernel page table isolation (KPTI)",
 		.capability = ARM64_UNMAP_KERNEL_AT_EL0,
 		.def_scope = SCOPE_SYSTEM,
 		.matches = unmap_kernel_at_el0,
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [PATCH v4.9.y 19/27] arm64: Allow checking of a CPU-local erratum
  2018-04-03 11:08 [PATCH v4.9.y 00/27] arm64 meltdown patches Mark Rutland
                   ` (17 preceding siblings ...)
  2018-04-03 11:09 ` [PATCH v4.9.y 18/27] arm64: Take into account ID_AA64PFR0_EL1.CSV3 Mark Rutland
@ 2018-04-03 11:09 ` Mark Rutland
  2018-04-05 19:42   ` Patch "arm64: Allow checking of a CPU-local erratum" has been added to the 4.9-stable tree gregkh
  2018-04-03 11:09 ` [PATCH v4.9.y 20/27] arm64: capabilities: Handle duplicate entries for a capability Mark Rutland
                   ` (9 subsequent siblings)
  28 siblings, 1 reply; 63+ messages in thread
From: Mark Rutland @ 2018-04-03 11:09 UTC (permalink / raw)
  To: stable; +Cc: mark.brown, ard.biesheuvel, marc.zyngier, will.deacon

From: Marc Zyngier <marc.zyngier@arm.com>

commit 8f4137588261 upstream.

this_cpu_has_cap() only checks the feature array, and not the errata
one. In order to be able to check for a CPU-local erratum, allow it
to inspect the latter as well.

This is consistent with cpus_have_cap()'s behaviour, which includes
errata already.
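
This makes per-CPU erratum checks such as the following possible (a
hedged example: the helper is hypothetical, but the capability exists
in this tree):

        /* True only if the *current* CPU is affected */
        if (this_cpu_has_cap(ARM64_WORKAROUND_CAVIUM_27456))
                apply_local_workaround();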

Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
---
 arch/arm64/kernel/cpufeature.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index d24e59cedae5..810f8bf7c57f 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1103,20 +1103,29 @@ static void __init setup_feature_capabilities(void)
  * Check if the current CPU has a given feature capability.
  * Should be called from non-preemptible context.
  */
-bool this_cpu_has_cap(unsigned int cap)
+static bool __this_cpu_has_cap(const struct arm64_cpu_capabilities *cap_array,
+			       unsigned int cap)
 {
 	const struct arm64_cpu_capabilities *caps;
 
 	if (WARN_ON(preemptible()))
 		return false;
 
-	for (caps = arm64_features; caps->desc; caps++)
+	for (caps = cap_array; caps->desc; caps++)
 		if (caps->capability == cap && caps->matches)
 			return caps->matches(caps, SCOPE_LOCAL_CPU);
 
 	return false;
 }
 
+extern const struct arm64_cpu_capabilities arm64_errata[];
+
+bool this_cpu_has_cap(unsigned int cap)
+{
+	return (__this_cpu_has_cap(arm64_features, cap) ||
+		__this_cpu_has_cap(arm64_errata, cap));
+}
+
 void __init setup_cpu_features(void)
 {
 	u32 cwg;
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [PATCH v4.9.y 20/27] arm64: capabilities: Handle duplicate entries for a capability
  2018-04-03 11:08 [PATCH v4.9.y 00/27] arm64 meltdown patches Mark Rutland
                   ` (18 preceding siblings ...)
  2018-04-03 11:09 ` [PATCH v4.9.y 19/27] arm64: Allow checking of a CPU-local erratum Mark Rutland
@ 2018-04-03 11:09 ` Mark Rutland
  2018-04-05 19:42   ` Patch "arm64: capabilities: Handle duplicate entries for a capability" has been added to the 4.9-stable tree gregkh
  2018-04-03 11:09 ` [PATCH v4.9.y 21/27] arm64: cputype: Add MIDR values for Cavium ThunderX2 CPUs Mark Rutland
                   ` (8 subsequent siblings)
  28 siblings, 1 reply; 63+ messages in thread
From: Mark Rutland @ 2018-04-03 11:09 UTC (permalink / raw)
  To: stable; +Cc: mark.brown, ard.biesheuvel, marc.zyngier, will.deacon

From: Suzuki K Poulose <suzuki.poulose@arm.com>

commit 67948af41f2e upstream.

Sometimes a single capability can be listed multiple times with
differing matches(), e.g., CPU errata for different MIDR versions.
This breaks verify_local_cpu_feature() and this_cpu_has_cap(),
because the search stops at the first entry for a given capability
even if its matches() fails, which is not sufficient. Make sure we
run the checks against all entries for the same capability. We do
this by fixing __this_cpu_has_cap() to run through all the entries
in the given table looking for a match, and by reusing it for
verify_local_cpu_feature().
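
A sketch of the situation being fixed (the capability name and match
functions here are hypothetical):

        static const struct arm64_cpu_capabilities arm64_errata[] = {
                {
                        .desc = "Example erratum, rev A parts",
                        .capability = ARM64_WORKAROUND_EXAMPLE,
                        .def_scope = SCOPE_LOCAL_CPU,
                        .matches = affected_rev_a,
                },
                {
                        .desc = "Example erratum, rev B parts",
                        .capability = ARM64_WORKAROUND_EXAMPLE, /* duplicate */
                        .def_scope = SCOPE_LOCAL_CPU,
                        .matches = affected_rev_b,
                },
                {},
        };

Previously, a lookup of ARM64_WORKAROUND_EXAMPLE consulted only
affected_rev_a(), so a rev B CPU would wrongly appear unaffected.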

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
---
 arch/arm64/kernel/cpufeature.c | 44 ++++++++++++++++++++++--------------------
 1 file changed, 23 insertions(+), 21 deletions(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 810f8bf7c57f..2d7c7796cce1 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -969,6 +969,26 @@ static void __init setup_elf_hwcaps(const struct arm64_cpu_capabilities *hwcaps)
 			cap_set_elf_hwcap(hwcaps);
 }
 
+/*
+ * Check if the current CPU has a given feature capability.
+ * Should be called from non-preemptible context.
+ */
+static bool __this_cpu_has_cap(const struct arm64_cpu_capabilities *cap_array,
+			       unsigned int cap)
+{
+	const struct arm64_cpu_capabilities *caps;
+
+	if (WARN_ON(preemptible()))
+		return false;
+
+	for (caps = cap_array; caps->desc; caps++)
+		if (caps->capability == cap &&
+		    caps->matches &&
+		    caps->matches(caps, SCOPE_LOCAL_CPU))
+			return true;
+	return false;
+}
+
 void update_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
 			    const char *info)
 {
@@ -1037,8 +1057,9 @@ verify_local_elf_hwcaps(const struct arm64_cpu_capabilities *caps)
 }
 
 static void
-verify_local_cpu_features(const struct arm64_cpu_capabilities *caps)
+verify_local_cpu_features(const struct arm64_cpu_capabilities *caps_list)
 {
+	const struct arm64_cpu_capabilities *caps = caps_list;
 	for (; caps->matches; caps++) {
 		if (!cpus_have_cap(caps->capability))
 			continue;
@@ -1046,7 +1067,7 @@ verify_local_cpu_features(const struct arm64_cpu_capabilities *caps)
 		 * If the new CPU misses an advertised feature, we cannot proceed
 		 * further, park the cpu.
 		 */
-		if (!caps->matches(caps, SCOPE_LOCAL_CPU)) {
+		if (!__this_cpu_has_cap(caps_list, caps->capability)) {
 			pr_crit("CPU%d: missing feature: %s\n",
 					smp_processor_id(), caps->desc);
 			cpu_die_early();
@@ -1099,25 +1120,6 @@ static void __init setup_feature_capabilities(void)
 	enable_cpu_capabilities(arm64_features);
 }
 
-/*
- * Check if the current CPU has a given feature capability.
- * Should be called from non-preemptible context.
- */
-static bool __this_cpu_has_cap(const struct arm64_cpu_capabilities *cap_array,
-			       unsigned int cap)
-{
-	const struct arm64_cpu_capabilities *caps;
-
-	if (WARN_ON(preemptible()))
-		return false;
-
-	for (caps = cap_array; caps->desc; caps++)
-		if (caps->capability == cap && caps->matches)
-			return caps->matches(caps, SCOPE_LOCAL_CPU);
-
-	return false;
-}
-
 extern const struct arm64_cpu_capabilities arm64_errata[];
 
 bool this_cpu_has_cap(unsigned int cap)
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [PATCH v4.9.y 21/27] arm64: cputype: Add MIDR values for Cavium ThunderX2 CPUs
  2018-04-03 11:08 [PATCH v4.9.y 00/27] arm64 meltdown patches Mark Rutland
                   ` (19 preceding siblings ...)
  2018-04-03 11:09 ` [PATCH v4.9.y 20/27] arm64: capabilities: Handle duplicate entries for a capability Mark Rutland
@ 2018-04-03 11:09 ` Mark Rutland
  2018-04-05 19:42   ` Patch "arm64: cputype: Add MIDR values for Cavium ThunderX2 CPUs" has been added to the 4.9-stable tree gregkh
  2018-04-03 11:09 ` [PATCH v4.9.y 22/27] arm64: Turn on KPTI only on CPUs that need it Mark Rutland
                   ` (7 subsequent siblings)
  28 siblings, 1 reply; 63+ messages in thread
From: Mark Rutland @ 2018-04-03 11:09 UTC (permalink / raw)
  To: stable; +Cc: mark.brown, ard.biesheuvel, marc.zyngier, will.deacon

From: Jayachandran C <jnair@caviumnetworks.com>

commit 0d90718871fe upstream.

Add the older Broadcom ID as well as the new Cavium ID for ThunderX2
CPUs.
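
For reference, the new Cavium ID decodes as follows (illustrative
arithmetic; 0x43 is the Cavium implementer code, ASCII 'C'):

        MIDR_CPU_MODEL(0x43, 0x0AF)
                = (0x43 << 24) | (0xf << 16) | (0x0AF << 4)
                = 0x430f0af0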

Signed-off-by: Jayachandran C <jnair@caviumnetworks.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
---
 arch/arm64/include/asm/cputype.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h
index 26a68ddb11c1..1d47930c30dc 100644
--- a/arch/arm64/include/asm/cputype.h
+++ b/arch/arm64/include/asm/cputype.h
@@ -81,6 +81,7 @@
 
 #define CAVIUM_CPU_PART_THUNDERX	0x0A1
 #define CAVIUM_CPU_PART_THUNDERX_81XX	0x0A2
+#define CAVIUM_CPU_PART_THUNDERX2	0x0AF
 
 #define BRCM_CPU_PART_VULCAN		0x516
 
@@ -88,6 +89,8 @@
 #define MIDR_CORTEX_A57 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A57)
 #define MIDR_THUNDERX	MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX)
 #define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX)
+#define MIDR_CAVIUM_THUNDERX2 MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX2)
+#define MIDR_BRCM_VULCAN MIDR_CPU_MODEL(ARM_CPU_IMP_BRCM, BRCM_CPU_PART_VULCAN)
 
 #ifndef __ASSEMBLY__
 
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [PATCH v4.9.y 22/27] arm64: Turn on KPTI only on CPUs that need it
  2018-04-03 11:08 [PATCH v4.9.y 00/27] arm64 meltdown patches Mark Rutland
                   ` (20 preceding siblings ...)
  2018-04-03 11:09 ` [PATCH v4.9.y 21/27] arm64: cputype: Add MIDR values for Cavium ThunderX2 CPUs Mark Rutland
@ 2018-04-03 11:09 ` Mark Rutland
  2018-04-05 19:42   ` Patch "arm64: Turn on KPTI only on CPUs that need it" has been added to the 4.9-stable tree gregkh
  2018-04-03 11:09 ` [PATCH v4.9.y 23/27] arm64: kpti: Make use of nG dependent on arm64_kernel_unmapped_at_el0() Mark Rutland
                   ` (6 subsequent siblings)
  28 siblings, 1 reply; 63+ messages in thread
From: Mark Rutland @ 2018-04-03 11:09 UTC (permalink / raw)
  To: stable; +Cc: mark.brown, ard.biesheuvel, marc.zyngier, will.deacon

From: Jayachandran C <jnair@caviumnetworks.com>

commit 0ba2e29c7fc1 upstream.

Whitelist Broadcom Vulcan/Cavium ThunderX2 processors in
unmap_kernel_at_el0(). These CPUs are not vulnerable to
CVE-2017-5754 and do not need KPTI when KASLR is off.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Jayachandran C <jnair@caviumnetworks.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
---
 arch/arm64/kernel/cpufeature.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 2d7c7796cce1..6015a3cac930 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -766,6 +766,13 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
 	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE))
 		return true;
 
+	/* Don't force KPTI for CPUs that are not vulnerable */
+	switch (read_cpuid_id() & MIDR_CPU_MODEL_MASK) {
+	case MIDR_CAVIUM_THUNDERX2:
+	case MIDR_BRCM_VULCAN:
+		return false;
+	}
+
 	/* Defer to CPU feature registers */
 	return !cpuid_feature_extract_unsigned_field(pfr0,
 						     ID_AA64PFR0_CSV3_SHIFT);
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [PATCH v4.9.y 23/27] arm64: kpti: Make use of nG dependent on arm64_kernel_unmapped_at_el0()
  2018-04-03 11:08 [PATCH v4.9.y 00/27] arm64 meltdown patches Mark Rutland
                   ` (21 preceding siblings ...)
  2018-04-03 11:09 ` [PATCH v4.9.y 22/27] arm64: Turn on KPTI only on CPUs that need it Mark Rutland
@ 2018-04-03 11:09 ` Mark Rutland
  2018-04-05 19:42   ` Patch "arm64: kpti: Make use of nG dependent on arm64_kernel_unmapped_at_el0()" has been added to the 4.9-stable tree gregkh
  2018-04-03 11:09 ` [PATCH v4.9.y 24/27] arm64: kpti: Add ->enable callback to remap swapper using nG mappings Mark Rutland
                   ` (5 subsequent siblings)
  28 siblings, 1 reply; 63+ messages in thread
From: Mark Rutland @ 2018-04-03 11:09 UTC (permalink / raw)
  To: stable; +Cc: mark.brown, ard.biesheuvel, marc.zyngier, will.deacon

From: Will Deacon <will.deacon@arm.com>

commit 41acec624087 upstream.

To allow systems which do not require kpti to continue running with
global kernel mappings (which appears to be a requirement for Cavium
ThunderX due to a CPU erratum), make the use of nG in the kernel page
tables dependent on arm64_kernel_unmapped_at_el0(), which is resolved
at runtime.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
---
 arch/arm64/include/asm/kernel-pgtable.h | 12 ++----------
 arch/arm64/include/asm/pgtable-prot.h   | 30 ++++++++++++++----------------
 2 files changed, 16 insertions(+), 26 deletions(-)

diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h
index e4ddac983640..7e51d1b57c0c 100644
--- a/arch/arm64/include/asm/kernel-pgtable.h
+++ b/arch/arm64/include/asm/kernel-pgtable.h
@@ -71,16 +71,8 @@
 /*
  * Initial memory map attributes.
  */
-#define _SWAPPER_PTE_FLAGS	(PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
-#define _SWAPPER_PMD_FLAGS	(PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
-
-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
-#define SWAPPER_PTE_FLAGS	(_SWAPPER_PTE_FLAGS | PTE_NG)
-#define SWAPPER_PMD_FLAGS	(_SWAPPER_PMD_FLAGS | PMD_SECT_NG)
-#else
-#define SWAPPER_PTE_FLAGS	_SWAPPER_PTE_FLAGS
-#define SWAPPER_PMD_FLAGS	_SWAPPER_PMD_FLAGS
-#endif
+#define SWAPPER_PTE_FLAGS	(PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
+#define SWAPPER_PMD_FLAGS	(PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
 
 #if ARM64_SWAPPER_USES_SECTION_MAPS
 #define SWAPPER_MM_MMUFLAGS	(PMD_ATTRINDX(MT_NORMAL) | SWAPPER_PMD_FLAGS)
diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
index 84b5283d2e7f..f705d96a76f2 100644
--- a/arch/arm64/include/asm/pgtable-prot.h
+++ b/arch/arm64/include/asm/pgtable-prot.h
@@ -37,13 +37,11 @@
 #define _PROT_DEFAULT		(PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
 #define _PROT_SECT_DEFAULT	(PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
 
-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
-#define PROT_DEFAULT		(_PROT_DEFAULT | PTE_NG)
-#define PROT_SECT_DEFAULT	(_PROT_SECT_DEFAULT | PMD_SECT_NG)
-#else
-#define PROT_DEFAULT		_PROT_DEFAULT
-#define PROT_SECT_DEFAULT	_PROT_SECT_DEFAULT
-#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
+#define PTE_MAYBE_NG		(arm64_kernel_unmapped_at_el0() ? PTE_NG : 0)
+#define PMD_MAYBE_NG		(arm64_kernel_unmapped_at_el0() ? PMD_SECT_NG : 0)
+
+#define PROT_DEFAULT		(_PROT_DEFAULT | PTE_MAYBE_NG)
+#define PROT_SECT_DEFAULT	(_PROT_SECT_DEFAULT | PMD_MAYBE_NG)
 
 #define PROT_DEVICE_nGnRnE	(PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_WRITE | PTE_ATTRINDX(MT_DEVICE_nGnRnE))
 #define PROT_DEVICE_nGnRE	(PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_WRITE | PTE_ATTRINDX(MT_DEVICE_nGnRE))
@@ -55,22 +53,22 @@
 #define PROT_SECT_NORMAL	(PROT_SECT_DEFAULT | PMD_SECT_PXN | PMD_SECT_UXN | PMD_ATTRINDX(MT_NORMAL))
 #define PROT_SECT_NORMAL_EXEC	(PROT_SECT_DEFAULT | PMD_SECT_UXN | PMD_ATTRINDX(MT_NORMAL))
 
-#define _PAGE_DEFAULT		(PROT_DEFAULT | PTE_ATTRINDX(MT_NORMAL))
-#define _HYP_PAGE_DEFAULT	(_PAGE_DEFAULT & ~PTE_NG)
+#define _PAGE_DEFAULT		(_PROT_DEFAULT | PTE_ATTRINDX(MT_NORMAL))
+#define _HYP_PAGE_DEFAULT	_PAGE_DEFAULT
 
-#define PAGE_KERNEL		__pgprot(_PAGE_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_WRITE)
-#define PAGE_KERNEL_RO		__pgprot(_PAGE_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_RDONLY)
-#define PAGE_KERNEL_ROX		__pgprot(_PAGE_DEFAULT | PTE_UXN | PTE_DIRTY | PTE_RDONLY)
-#define PAGE_KERNEL_EXEC	__pgprot(_PAGE_DEFAULT | PTE_UXN | PTE_DIRTY | PTE_WRITE)
-#define PAGE_KERNEL_EXEC_CONT	__pgprot(_PAGE_DEFAULT | PTE_UXN | PTE_DIRTY | PTE_WRITE | PTE_CONT)
+#define PAGE_KERNEL		__pgprot(PROT_NORMAL)
+#define PAGE_KERNEL_RO		__pgprot((PROT_NORMAL & ~PTE_WRITE) | PTE_RDONLY)
+#define PAGE_KERNEL_ROX		__pgprot((PROT_NORMAL & ~(PTE_WRITE | PTE_PXN)) | PTE_RDONLY)
+#define PAGE_KERNEL_EXEC	__pgprot(PROT_NORMAL & ~PTE_PXN)
+#define PAGE_KERNEL_EXEC_CONT	__pgprot((PROT_NORMAL & ~PTE_PXN) | PTE_CONT)
 
 #define PAGE_HYP		__pgprot(_HYP_PAGE_DEFAULT | PTE_HYP | PTE_HYP_XN)
 #define PAGE_HYP_EXEC		__pgprot(_HYP_PAGE_DEFAULT | PTE_HYP | PTE_RDONLY)
 #define PAGE_HYP_RO		__pgprot(_HYP_PAGE_DEFAULT | PTE_HYP | PTE_RDONLY | PTE_HYP_XN)
 #define PAGE_HYP_DEVICE		__pgprot(PROT_DEVICE_nGnRE | PTE_HYP)
 
-#define PAGE_S2			__pgprot(PROT_DEFAULT | PTE_S2_MEMATTR(MT_S2_NORMAL) | PTE_S2_RDONLY)
-#define PAGE_S2_DEVICE		__pgprot(PROT_DEFAULT | PTE_S2_MEMATTR(MT_S2_DEVICE_nGnRE) | PTE_S2_RDONLY | PTE_UXN)
+#define PAGE_S2			__pgprot(_PROT_DEFAULT | PTE_S2_MEMATTR(MT_S2_NORMAL) | PTE_S2_RDONLY)
+#define PAGE_S2_DEVICE		__pgprot(_PROT_DEFAULT | PTE_S2_MEMATTR(MT_S2_DEVICE_nGnRE) | PTE_S2_RDONLY | PTE_UXN)
 
 #define PAGE_NONE		__pgprot(((_PAGE_DEFAULT) & ~PTE_VALID) | PTE_PROT_NONE | PTE_NG | PTE_PXN | PTE_UXN)
 #define PAGE_SHARED		__pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_PXN | PTE_UXN | PTE_WRITE)
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [PATCH v4.9.y 24/27] arm64: kpti: Add ->enable callback to remap swapper using nG mappings
  2018-04-03 11:08 [PATCH v4.9.y 00/27] arm64 meltdown patches Mark Rutland
                   ` (22 preceding siblings ...)
  2018-04-03 11:09 ` [PATCH v4.9.y 23/27] arm64: kpti: Make use of nG dependent on arm64_kernel_unmapped_at_el0() Mark Rutland
@ 2018-04-03 11:09 ` Mark Rutland
  2018-04-05 19:42   ` Patch "arm64: kpti: Add ->enable callback to remap swapper using nG mappings" has been added to the 4.9-stable tree gregkh
  2018-04-03 11:09 ` [PATCH v4.9.y 25/27] arm64: Force KPTI to be disabled on Cavium ThunderX Mark Rutland
                   ` (4 subsequent siblings)
  28 siblings, 1 reply; 63+ messages in thread
From: Mark Rutland @ 2018-04-03 11:09 UTC (permalink / raw)
  To: stable; +Cc: mark.brown, ard.biesheuvel, marc.zyngier, will.deacon

From: Will Deacon <will.deacon@arm.com>

commit f992b4dfd58b upstream.

Defaulting to global mappings for kernel space is generally good for
performance and appears to be necessary for Cavium ThunderX. If we
subsequently decide that we need to enable kpti, then we need to rewrite
our existing page table entries to be non-global. This is fiddly, and
made worse by the possible use of contiguous mappings, which require
a strict break-before-make sequence.

Since the enable callback runs on each online CPU from stop_machine
context, we can have all CPUs enter the idmap, where secondaries can
wait for the primary CPU to rewrite swapper with its MMU off. It's all
fairly horrible, but at least it only runs once.

Nicolas Dechesne <nicolas.dechesne@linaro.org> found a bug in this
commit which caused boot failures on db410c and similar boards. Ard
Biesheuvel tracked it down to the __idmap_cpu_set_reserved_ttbr1 macro
writing the wrong content to ttbr1_el1, and fixed it by giving the
macro the right content.
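
The rendezvous implemented in assembly below can be modelled roughly in
C as follows (a sketch only: the helper names are hypothetical stand-ins
for the assembly sequences, and the real code runs from the idmap with
the MMU off for the rewrite itself):

        static int flag = 1;            /* mirrors __idmap_kpti_flag;
                                           starts at 1 for the boot CPU */

        if (cpu != 0) {                 /* secondary CPU */
                uninstall_swapper();    /* point TTBR1 at the zero page */
                increment(&flag);       /* ldxr/stxr loop in the assembly */
                while (READ_ONCE(flag) != 0)
                        wfe();          /* wait for the boot CPU to finish */
                reinstall_swapper();
        } else {                        /* boot CPU */
                while (READ_ONCE(flag) != num_cpus)
                        wfe();          /* wait for every secondary */
                turn_mmu_off();
                rewrite_swapper_as_ng();/* set nG on each valid entry */
                turn_mmu_on();
                WRITE_ONCE(flag, 0);    /* release the secondaries */
        }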

Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
[Alex: avoid dependency on 52-bit PA patches and TTBR/MMU erratum patches]
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
---
 arch/arm64/include/asm/assembler.h |   3 +
 arch/arm64/kernel/cpufeature.c     |  25 +++++
 arch/arm64/mm/proc.S               | 201 +++++++++++++++++++++++++++++++++++--
 3 files changed, 222 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index 851290d2bfe3..7193bf97b8da 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -413,4 +413,7 @@ alternative_endif
 	movk	\reg, :abs_g0_nc:\val
 	.endm
 
+	.macro	pte_to_phys, phys, pte
+	and	\phys, \pte, #(((1 << (48 - PAGE_SHIFT)) - 1) << PAGE_SHIFT)
+	.endm
 #endif	/* __ASM_ASSEMBLER_H */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 6015a3cac930..8d41a3f94954 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -778,6 +778,30 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
 						     ID_AA64PFR0_CSV3_SHIFT);
 }
 
+static int kpti_install_ng_mappings(void *__unused)
+{
+	typedef void (kpti_remap_fn)(int, int, phys_addr_t);
+	extern kpti_remap_fn idmap_kpti_install_ng_mappings;
+	kpti_remap_fn *remap_fn;
+
+	static bool kpti_applied = false;
+	int cpu = smp_processor_id();
+
+	if (kpti_applied)
+		return 0;
+
+	remap_fn = (void *)__pa_symbol(idmap_kpti_install_ng_mappings);
+
+	cpu_install_idmap();
+	remap_fn(cpu, num_online_cpus(), __pa_symbol(swapper_pg_dir));
+	cpu_uninstall_idmap();
+
+	if (!cpu)
+		kpti_applied = true;
+
+	return 0;
+}
+
 static int __init parse_kpti(char *str)
 {
 	bool enabled;
@@ -881,6 +905,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.capability = ARM64_UNMAP_KERNEL_AT_EL0,
 		.def_scope = SCOPE_SYSTEM,
 		.matches = unmap_kernel_at_el0,
+		.enable = kpti_install_ng_mappings,
 	},
 #endif
 	{},
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 3378f3e21224..5c268f5767b4 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -148,6 +148,16 @@ alternative_else_nop_endif
 ENDPROC(cpu_do_switch_mm)
 
 	.pushsection ".idmap.text", "ax"
+
+.macro	__idmap_cpu_set_reserved_ttbr1, tmp1, tmp2
+	adrp	\tmp1, empty_zero_page
+	msr	ttbr1_el1, \tmp1
+	isb
+	tlbi	vmalle1
+	dsb	nsh
+	isb
+.endm
+
 /*
  * void idmap_cpu_replace_ttbr1(phys_addr_t new_pgd)
  *
@@ -158,13 +168,7 @@ ENTRY(idmap_cpu_replace_ttbr1)
 	mrs	x2, daif
 	msr	daifset, #0xf
 
-	adrp	x1, empty_zero_page
-	msr	ttbr1_el1, x1
-	isb
-
-	tlbi	vmalle1
-	dsb	nsh
-	isb
+	__idmap_cpu_set_reserved_ttbr1 x1, x3
 
 	msr	ttbr1_el1, x0
 	isb
@@ -175,6 +179,189 @@ ENTRY(idmap_cpu_replace_ttbr1)
 ENDPROC(idmap_cpu_replace_ttbr1)
 	.popsection
 
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+	.pushsection ".idmap.text", "ax"
+
+	.macro	__idmap_kpti_get_pgtable_ent, type
+	dc	cvac, cur_\()\type\()p		// Ensure any existing dirty
+	dmb	sy				// lines are written back before
+	ldr	\type, [cur_\()\type\()p]	// loading the entry
+	tbz	\type, #0, next_\()\type	// Skip invalid entries
+	.endm
+
+	.macro __idmap_kpti_put_pgtable_ent_ng, type
+	orr	\type, \type, #PTE_NG		// Same bit for blocks and pages
+	str	\type, [cur_\()\type\()p]	// Update the entry and ensure it
+	dc	civac, cur_\()\type\()p		// is visible to all CPUs.
+	.endm
+
+/*
+ * void __kpti_install_ng_mappings(int cpu, int num_cpus, phys_addr_t swapper)
+ *
+ * Called exactly once from stop_machine context by each CPU found during boot.
+ */
+__idmap_kpti_flag:
+	.long	1
+ENTRY(idmap_kpti_install_ng_mappings)
+	cpu		.req	w0
+	num_cpus	.req	w1
+	swapper_pa	.req	x2
+	swapper_ttb	.req	x3
+	flag_ptr	.req	x4
+	cur_pgdp	.req	x5
+	end_pgdp	.req	x6
+	pgd		.req	x7
+	cur_pudp	.req	x8
+	end_pudp	.req	x9
+	pud		.req	x10
+	cur_pmdp	.req	x11
+	end_pmdp	.req	x12
+	pmd		.req	x13
+	cur_ptep	.req	x14
+	end_ptep	.req	x15
+	pte		.req	x16
+
+	mrs	swapper_ttb, ttbr1_el1
+	adr	flag_ptr, __idmap_kpti_flag
+
+	cbnz	cpu, __idmap_kpti_secondary
+
+	/* We're the boot CPU. Wait for the others to catch up */
+	sevl
+1:	wfe
+	ldaxr	w18, [flag_ptr]
+	eor	w18, w18, num_cpus
+	cbnz	w18, 1b
+
+	/* We need to walk swapper, so turn off the MMU. */
+	mrs	x18, sctlr_el1
+	bic	x18, x18, #SCTLR_ELx_M
+	msr	sctlr_el1, x18
+	isb
+
+	/* Everybody is enjoying the idmap, so we can rewrite swapper. */
+	/* PGD */
+	mov	cur_pgdp, swapper_pa
+	add	end_pgdp, cur_pgdp, #(PTRS_PER_PGD * 8)
+do_pgd:	__idmap_kpti_get_pgtable_ent	pgd
+	tbnz	pgd, #1, walk_puds
+	__idmap_kpti_put_pgtable_ent_ng	pgd
+next_pgd:
+	add	cur_pgdp, cur_pgdp, #8
+	cmp	cur_pgdp, end_pgdp
+	b.ne	do_pgd
+
+	/* Publish the updated tables and nuke all the TLBs */
+	dsb	sy
+	tlbi	vmalle1is
+	dsb	ish
+	isb
+
+	/* We're done: fire up the MMU again */
+	mrs	x18, sctlr_el1
+	orr	x18, x18, #SCTLR_ELx_M
+	msr	sctlr_el1, x18
+	isb
+
+	/* Set the flag to zero to indicate that we're all done */
+	str	wzr, [flag_ptr]
+	ret
+
+	/* PUD */
+walk_puds:
+	.if CONFIG_PGTABLE_LEVELS > 3
+	pte_to_phys	cur_pudp, pgd
+	add	end_pudp, cur_pudp, #(PTRS_PER_PUD * 8)
+do_pud:	__idmap_kpti_get_pgtable_ent	pud
+	tbnz	pud, #1, walk_pmds
+	__idmap_kpti_put_pgtable_ent_ng	pud
+next_pud:
+	add	cur_pudp, cur_pudp, 8
+	cmp	cur_pudp, end_pudp
+	b.ne	do_pud
+	b	next_pgd
+	.else /* CONFIG_PGTABLE_LEVELS <= 3 */
+	mov	pud, pgd
+	b	walk_pmds
+next_pud:
+	b	next_pgd
+	.endif
+
+	/* PMD */
+walk_pmds:
+	.if CONFIG_PGTABLE_LEVELS > 2
+	pte_to_phys	cur_pmdp, pud
+	add	end_pmdp, cur_pmdp, #(PTRS_PER_PMD * 8)
+do_pmd:	__idmap_kpti_get_pgtable_ent	pmd
+	tbnz	pmd, #1, walk_ptes
+	__idmap_kpti_put_pgtable_ent_ng	pmd
+next_pmd:
+	add	cur_pmdp, cur_pmdp, #8
+	cmp	cur_pmdp, end_pmdp
+	b.ne	do_pmd
+	b	next_pud
+	.else /* CONFIG_PGTABLE_LEVELS <= 2 */
+	mov	pmd, pud
+	b	walk_ptes
+next_pmd:
+	b	next_pud
+	.endif
+
+	/* PTE */
+walk_ptes:
+	pte_to_phys	cur_ptep, pmd
+	add	end_ptep, cur_ptep, #(PTRS_PER_PTE * 8)
+do_pte:	__idmap_kpti_get_pgtable_ent	pte
+	__idmap_kpti_put_pgtable_ent_ng	pte
+next_pte:
+	add	cur_ptep, cur_ptep, #8
+	cmp	cur_ptep, end_ptep
+	b.ne	do_pte
+	b	next_pmd
+
+	/* Secondary CPUs end up here */
+__idmap_kpti_secondary:
+	/* Uninstall swapper before surgery begins */
+	__idmap_cpu_set_reserved_ttbr1 x18, x17
+
+	/* Increment the flag to let the boot CPU we're ready */
+1:	ldxr	w18, [flag_ptr]
+	add	w18, w18, #1
+	stxr	w17, w18, [flag_ptr]
+	cbnz	w17, 1b
+
+	/* Wait for the boot CPU to finish messing around with swapper */
+	sevl
+1:	wfe
+	ldxr	w18, [flag_ptr]
+	cbnz	w18, 1b
+
+	/* All done, act like nothing happened */
+	msr	ttbr1_el1, swapper_ttb
+	isb
+	ret
+
+	.unreq	cpu
+	.unreq	num_cpus
+	.unreq	swapper_pa
+	.unreq	swapper_ttb
+	.unreq	flag_ptr
+	.unreq	cur_pgdp
+	.unreq	end_pgdp
+	.unreq	pgd
+	.unreq	cur_pudp
+	.unreq	end_pudp
+	.unreq	pud
+	.unreq	cur_pmdp
+	.unreq	end_pmdp
+	.unreq	pmd
+	.unreq	cur_ptep
+	.unreq	end_ptep
+	.unreq	pte
+ENDPROC(idmap_kpti_install_ng_mappings)
+	.popsection
+#endif
+
 /*
  *	__cpu_setup
  *
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [PATCH v4.9.y 25/27] arm64: Force KPTI to be disabled on Cavium ThunderX
  2018-04-03 11:08 [PATCH v4.9.y 00/27] arm64 meltdown patches Mark Rutland
                   ` (23 preceding siblings ...)
  2018-04-03 11:09 ` [PATCH v4.9.y 24/27] arm64: kpti: Add ->enable callback to remap swapper using nG mappings Mark Rutland
@ 2018-04-03 11:09 ` Mark Rutland
  2018-04-05 19:42   ` Patch "arm64: Force KPTI to be disabled on Cavium ThunderX" has been added to the 4.9-stable tree gregkh
  2018-04-03 11:09 ` [PATCH v4.9.y 26/27] arm64: entry: Reword comment about post_ttbr_update_workaround Mark Rutland
                   ` (3 subsequent siblings)
  28 siblings, 1 reply; 63+ messages in thread
From: Mark Rutland @ 2018-04-03 11:09 UTC (permalink / raw)
  To: stable; +Cc: mark.brown, ard.biesheuvel, marc.zyngier, will.deacon

From: Marc Zyngier <marc.zyngier@arm.com>

commit 6dc52b15c4a4 upstream.

Cavium ThunderX's erratum 27456 results in a corruption of icache
entries that are loaded from memory that is mapped as non-global
(i.e. ASID-tagged).

As KPTI is based on memory being mapped non-global, let's prevent
it from kicking in if this erratum is detected.
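
With this applied, an affected ThunderX system logs a line like the
following at boot (the format comes from the hunk below):

        kernel page table isolation forced OFF by ARM64_WORKAROUND_CAVIUM_27456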

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
[will: Update comment]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
[Alex: use cpus_have_cap as cpus_have_const_cap doesn't exist in v4.9]
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
---
 arch/arm64/kernel/cpufeature.c | 17 ++++++++++++++---
 1 file changed, 14 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 8d41a3f94954..5056fc597ae9 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -753,12 +753,23 @@ static int __kpti_forced; /* 0: not forced, >0: forced on, <0: forced off */
 static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
 				int __unused)
 {
+	char const *str = "command line option";
 	u64 pfr0 = read_system_reg(SYS_ID_AA64PFR0_EL1);
 
-	/* Forced on command line? */
+	/*
+	 * For reasons that aren't entirely clear, enabling KPTI on Cavium
+	 * ThunderX leads to apparent I-cache corruption of kernel text, which
+	 * ends as well as you might imagine. Don't even try.
+	 */
+	if (cpus_have_cap(ARM64_WORKAROUND_CAVIUM_27456)) {
+		str = "ARM64_WORKAROUND_CAVIUM_27456";
+		__kpti_forced = -1;
+	}
+
+	/* Forced? */
 	if (__kpti_forced) {
-		pr_info_once("kernel page table isolation forced %s by command line option\n",
-			     __kpti_forced > 0 ? "ON" : "OFF");
+		pr_info_once("kernel page table isolation forced %s by %s\n",
+			     __kpti_forced > 0 ? "ON" : "OFF", str);
 		return __kpti_forced > 0;
 	}
 
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [PATCH v4.9.y 26/27] arm64: entry: Reword comment about post_ttbr_update_workaround
  2018-04-03 11:08 [PATCH v4.9.y 00/27] arm64 meltdown patches Mark Rutland
                   ` (24 preceding siblings ...)
  2018-04-03 11:09 ` [PATCH v4.9.y 25/27] arm64: Force KPTI to be disabled on Cavium ThunderX Mark Rutland
@ 2018-04-03 11:09 ` Mark Rutland
  2018-04-05 19:42   ` Patch "arm64: entry: Reword comment about post_ttbr_update_workaround" has been added to the 4.9-stable tree gregkh
  2018-04-03 11:09 ` [PATCH v4.9.y 27/27] arm64: idmap: Use "awx" flags for .idmap.text .pushsection directives Mark Rutland
                   ` (2 subsequent siblings)
  28 siblings, 1 reply; 63+ messages in thread
From: Mark Rutland @ 2018-04-03 11:09 UTC (permalink / raw)
  To: stable; +Cc: mark.brown, ard.biesheuvel, marc.zyngier, will.deacon

From: Will Deacon <will.deacon@arm.com>

commit f167211a93ac upstream.

We don't fully understand the Cavium ThunderX erratum, but it appears
that mapping the kernel as nG can lead to horrible consequences such as
attempting to execute userspace from kernel context. Since kpti isn't
enabled for these CPUs anyway, simplify the comment justifying the lack
of post_ttbr_update_workaround in the exception trampoline.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
---
 arch/arm64/kernel/entry.S | 13 +++----------
 1 file changed, 3 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index f35ca1e54b5a..8d1600b18562 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -861,16 +861,9 @@ __ni_sys_trace:
 	orr	\tmp, \tmp, #USER_ASID_FLAG
 	msr	ttbr1_el1, \tmp
 	/*
-	 * We avoid running the post_ttbr_update_workaround here because the
-	 * user and kernel ASIDs don't have conflicting mappings, so any
-	 * "blessing" as described in:
-	 *
-	 *   http://lkml.kernel.org/r/56BB848A.6060603@caviumnetworks.com
-	 *
-	 * will not hurt correctness. Whilst this may partially defeat the
-	 * point of using split ASIDs in the first place, it avoids
-	 * the hit of invalidating the entire I-cache on every return to
-	 * userspace.
+	 * We avoid running the post_ttbr_update_workaround here because
+	 * it's only needed by Cavium ThunderX, which requires KPTI to be
+	 * disabled.
 	 */
 	.endm
 
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [PATCH v4.9.y 27/27] arm64: idmap: Use "awx" flags for .idmap.text .pushsection directives
  2018-04-03 11:08 [PATCH v4.9.y 00/27] arm64 meltdown patches Mark Rutland
                   ` (25 preceding siblings ...)
  2018-04-03 11:09 ` [PATCH v4.9.y 26/27] arm64: entry: Reword comment about post_ttbr_update_workaround Mark Rutland
@ 2018-04-03 11:09 ` Mark Rutland
  2018-04-05 19:42   ` Patch "arm64: idmap: Use "awx" flags for .idmap.text .pushsection directives" has been added to the 4.9-stable tree gregkh
  2018-04-04 15:07 ` [PATCH v4.9.y 00/27] arm64 meltdown patches Greg KH
  2018-04-05 17:34 ` Greg Hackmann
  28 siblings, 1 reply; 63+ messages in thread
From: Mark Rutland @ 2018-04-03 11:09 UTC (permalink / raw)
  To: stable; +Cc: mark.brown, ard.biesheuvel, marc.zyngier, will.deacon

From: Will Deacon <will.deacon@arm.com>

commit 439e70e27a51 upstream.

The identity map is mapped as both writeable and executable by the
SWAPPER_MM_MMUFLAGS and this is relied upon by the kpti code to manage
a synchronisation flag. Update the .pushsection flags to reflect the
actual mapping attributes.
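
In ELF section-flag terms, "a" marks the section allocatable, "w"
writable and "x" executable, so the directives now describe the same
writable-and-executable attributes that the idmap mapping actually has.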

Reported-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
---
 arch/arm64/kernel/cpu-reset.S | 2 +-
 arch/arm64/kernel/head.S      | 2 +-
 arch/arm64/kernel/sleep.S     | 2 +-
 arch/arm64/mm/proc.S          | 8 ++++----
 4 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kernel/cpu-reset.S b/arch/arm64/kernel/cpu-reset.S
index 65f42d257414..f736a6f81ecd 100644
--- a/arch/arm64/kernel/cpu-reset.S
+++ b/arch/arm64/kernel/cpu-reset.S
@@ -16,7 +16,7 @@
 #include <asm/virt.h>
 
 .text
-.pushsection    .idmap.text, "ax"
+.pushsection    .idmap.text, "awx"
 
 /*
  * __cpu_soft_restart(el2_switch, entry, arg0, arg1, arg2) - Helper for
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 539bebc1222f..fa52817d84c5 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -473,7 +473,7 @@ ENDPROC(__primary_switched)
  * end early head section, begin head code that is also used for
  * hotplug and needs to have the same protections as the text region
  */
-	.section ".idmap.text","ax"
+	.section ".idmap.text","awx"
 
 ENTRY(kimage_vaddr)
 	.quad		_text - TEXT_OFFSET
diff --git a/arch/arm64/kernel/sleep.S b/arch/arm64/kernel/sleep.S
index 1bec41b5fda3..0030d6964e65 100644
--- a/arch/arm64/kernel/sleep.S
+++ b/arch/arm64/kernel/sleep.S
@@ -95,7 +95,7 @@ ENTRY(__cpu_suspend_enter)
 	ret
 ENDPROC(__cpu_suspend_enter)
 
-	.pushsection ".idmap.text", "ax"
+	.pushsection ".idmap.text", "awx"
 ENTRY(cpu_resume)
 	bl	el2_setup		// if in EL2 drop to EL1 cleanly
 	bl	__cpu_setup
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 5c268f5767b4..c07d9cc057e6 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -83,7 +83,7 @@ ENDPROC(cpu_do_suspend)
  *
  * x0: Address of context pointer
  */
-	.pushsection ".idmap.text", "ax"
+	.pushsection ".idmap.text", "awx"
 ENTRY(cpu_do_resume)
 	ldp	x2, x3, [x0]
 	ldp	x4, x5, [x0, #16]
@@ -147,7 +147,7 @@ alternative_else_nop_endif
 	ret
 ENDPROC(cpu_do_switch_mm)
 
-	.pushsection ".idmap.text", "ax"
+	.pushsection ".idmap.text", "awx"
 
 .macro	__idmap_cpu_set_reserved_ttbr1, tmp1, tmp2
 	adrp	\tmp1, empty_zero_page
@@ -180,7 +180,7 @@ ENDPROC(idmap_cpu_replace_ttbr1)
 	.popsection
 
 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
-	.pushsection ".idmap.text", "ax"
+	.pushsection ".idmap.text", "awx"
 
 	.macro	__idmap_kpti_get_pgtable_ent, type
 	dc	cvac, cur_\()\type\()p		// Ensure any existing dirty
@@ -368,7 +368,7 @@ ENDPROC(idmap_kpti_install_ng_mappings)
  *	Initialise the processor for turning the MMU on.  Return in x0 the
  *	value of the SCTLR_EL1 register.
  */
-	.pushsection ".idmap.text", "ax"
+	.pushsection ".idmap.text", "awx"
 ENTRY(__cpu_setup)
 	tlbi	vmalle1				// Invalidate local TLB
 	dsb	nsh
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 63+ messages in thread

* Re: [PATCH v4.9.y 09/27] arm64: mm: Map entry trampoline into trampoline and kernel page tables
  2018-04-03 11:09 ` [PATCH v4.9.y 09/27] arm64: mm: Map entry trampoline into trampoline and kernel page tables Mark Rutland
@ 2018-04-03 11:15   ` Mark Rutland
  2018-04-05 19:33     ` Greg KH
  2018-04-05 19:42   ` Patch "arm64: mm: Map entry trampoline into trampoline and kernel page tables" has been added to the 4.9-stable tree gregkh
  1 sibling, 1 reply; 63+ messages in thread
From: Mark Rutland @ 2018-04-03 11:15 UTC (permalink / raw)
  To: stable; +Cc: mark.brown, ard.biesheuvel, marc.zyngier, will.deacon

On Tue, Apr 03, 2018 at 12:09:05PM +0100, Mark Rutland wrote:
> From: Will Deacon <will.deacon@arm.com>
> 
> commit 51a0048beb44 upstream.
> 
> The exception entry trampoline needs to be mapped at the same virtual
> address in both the trampoline page table (which maps nothing else)
> and also the kernel page table, so that we can swizzle TTBR1_EL1 on
> exceptions from and return to EL0.
> 
> This patch maps the trampoline at a fixed virtual address in the fixmap
> area of the kernel virtual address space, which allows the kernel proper
> to be randomized with respect to the trampoline when KASLR is enabled.
> 
> Reviewed-by: Mark Rutland <mark.rutland@arm.com>
> Tested-by: Laura Abbott <labbott@redhat.com>
> Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
> Signed-off-by: Will Deacon <will.deacon@arm.com>
> Signed-off-by: Alex Shi <alex.shi@linaro.org>
> Reviewed-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]

It has just been pointed out to me that I messed up the SoB chain here,
and this should be:

Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]

Otherwise, the patch itself is fine. Sorry for the noise there -- I hope
this can be fixed up when applying?

Thanks,
Mark.
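
The fixmap mentioned in the quoted commit message hands out virtual
addresses downwards from a compile-time constant, which is what keeps
the trampoline's address stable under KASLR. A minimal sketch of that
addressing scheme (the constants are illustrative, not the real arm64
layout):

#define PAGE_SIZE_DEMO		4096UL
#define FIXADDR_TOP_DEMO	0xfffffffffffe0000UL	/* assumption */

/* Slot idx maps to a virtual address independent of the kernel base. */
static unsigned long fix_to_virt_demo(unsigned int idx)
{
	return FIXADDR_TOP_DEMO - idx * PAGE_SIZE_DEMO;
}
/* A FIX_ENTRY_TRAMP-style slot therefore yields the same VA on every boot. */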


* Re: [PATCH v4.9.y 00/27] arm64 meltdown patches
  2018-04-03 11:08 [PATCH v4.9.y 00/27] arm64 meltdown patches Mark Rutland
                   ` (26 preceding siblings ...)
  2018-04-03 11:09 ` [PATCH v4.9.y 27/27] arm64: idmap: Use "awx" flags for .idmap.text .pushsection directives Mark Rutland
@ 2018-04-04 15:07 ` Greg KH
  2018-04-05 10:04   ` Will Deacon
  2018-04-05 11:46   ` Will Deacon
  2018-04-05 17:34 ` Greg Hackmann
  28 siblings, 2 replies; 63+ messages in thread
From: Greg KH @ 2018-04-04 15:07 UTC (permalink / raw)
  To: Mark Rutland
  Cc: stable, mark.brown, ard.biesheuvel, marc.zyngier, will.deacon

On Tue, Apr 03, 2018 at 12:08:56PM +0100, Mark Rutland wrote:
> Hi Greg,
> 
> These patches backport KPTI to v4.9.y (based on v4.9.92), providing protection
> against meltdown on arm64 platforms.
> 
> I picked up Alex Shi's backport for review and testing, and as I found a couple
> of issues to fix up, I'm sending this with my Signed-off-by in the chain, with
> those fixups applied and noted.
> 
> To the best of my understanding the code is correct, in the context of the
> v4.9.y kernel, and I've tested the series on arm64 hardware available to me.
> i.e. if this didn't have my Signed-off-by it would have my Reviewed-by and
> Tested-by tags.
> 
> Are you happy to pick these up for v4.9.93?

Thank you for doing this work, it's much appreciated.  I'd like to get
some "tested-by" from some people with this hardware (like Linaro or
others that I have pointed this patch series to), before I queue it up.

That way we have some people we can rely on to confirm that this
actually works, and to help out if something goes wrong :)

thanks,

greg k-h


* Re: [PATCH v4.9.y 00/27] arm64 meltdown patches
  2018-04-04 15:07 ` [PATCH v4.9.y 00/27] arm64 meltdown patches Greg KH
@ 2018-04-05 10:04   ` Will Deacon
  2018-04-05 10:15     ` Mark Rutland
  2018-04-05 11:46   ` Will Deacon
  1 sibling, 1 reply; 63+ messages in thread
From: Will Deacon @ 2018-04-05 10:04 UTC (permalink / raw)
  To: Greg KH; +Cc: Mark Rutland, stable, mark.brown, ard.biesheuvel, marc.zyngier

On Wed, Apr 04, 2018 at 05:07:20PM +0200, Greg KH wrote:
> On Tue, Apr 03, 2018 at 12:08:56PM +0100, Mark Rutland wrote:
> > These patches backport KPTI to v4.9.y (based on v4.9.92), providing protection
> > against meltdown on arm64 platforms.
> > 
> > I picked up Alex Shi's backport for review and testing, and as I found a couple
> > of issues to fix up, I'm sending this with my Signed-off-by in the chain, with
> > those fixups applied and noted.
> > 
> > To the best of my understanding the code is correct, in the context of the
> > v4.9.y kernel, and I've tested the series on arm64 hardware available to me.
> > i.e. if this didn't have my Signed-off-by it would have my Reviewed-by and
> > Tested-by tags.
> > 
> > Are you happy to pick these up for v4.9.93?
> 
> Thank you for doing this work, it's much appreciated.  I'd like to get
> some "tested-by" from some people with this hardware (like Linaro or
> others that I have pointed this patch series to), before I queue it up.

I can give it a spin if Mark points me at a branch to use.

Will


* Re: [PATCH v4.9.y 00/27] arm64 meltdown patches
  2018-04-05 10:04   ` Will Deacon
@ 2018-04-05 10:15     ` Mark Rutland
  0 siblings, 0 replies; 63+ messages in thread
From: Mark Rutland @ 2018-04-05 10:15 UTC (permalink / raw)
  To: Will Deacon; +Cc: Greg KH, stable, mark.brown, ard.biesheuvel, marc.zyngier

On Thu, Apr 05, 2018 at 11:04:23AM +0100, Will Deacon wrote:
> On Wed, Apr 04, 2018 at 05:07:20PM +0200, Greg KH wrote:
> > On Tue, Apr 03, 2018 at 12:08:56PM +0100, Mark Rutland wrote:
> > > These patches backport KPTI to v4.9.y (based on v4.9.92), providing protection
> > > against meltdown on arm64 platforms.
> > > 
> > > I picked up Alex Shi's backport for review and testing, and as I found a couple
> > > of issues to fix up, I'm sending this with my Signed-off-by in the chain, with
> > > those fixups applied and noted.
> > > 
> > > To the best of my understanding the code is correct, in the context of the
> > > v4.9.y kernel, and I've tested the series on arm64 hardware available to me.
> > > i.e. if this didn't have my Signed-off-by it would have my Reviewed-by and
> > > Tested-by tags.
> > > 
> > > Are you happy to pick these up for v4.9.93?
> > 
> > Thank you for doing this work, it's much appreciated.  I'd like to get
> > some "tested-by" from some people with this hardware (like Linaro or
> > others that I have pointed this patch series to), before I queue it up.
> 
> I can give it a spin if Mark points me at a branch to use.

I've pushed that to:

  git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git stable/4.9.y/meltdown

... that has my Signed-off-by fixed up on patch 9, but is otherwise
identical to this posting.

Thanks,
Mark.


* Re: [PATCH v4.9.y 00/27] arm64 meltdown patches
  2018-04-04 15:07 ` [PATCH v4.9.y 00/27] arm64 meltdown patches Greg KH
  2018-04-05 10:04   ` Will Deacon
@ 2018-04-05 11:46   ` Will Deacon
  1 sibling, 0 replies; 63+ messages in thread
From: Will Deacon @ 2018-04-05 11:46 UTC (permalink / raw)
  To: Greg KH; +Cc: Mark Rutland, stable, mark.brown, ard.biesheuvel, marc.zyngier

On Wed, Apr 04, 2018 at 05:07:20PM +0200, Greg KH wrote:
> On Tue, Apr 03, 2018 at 12:08:56PM +0100, Mark Rutland wrote:
> > These patches backport KPTI to v4.9.y (based on v4.9.92), providing protection
> > against meltdown on arm64 platforms.
> > 
> > I picked up Alex Shi's backport for review and testing, and as I found a couple
> > of issues to fix up, I'm sending this with my Signed-off-by in the chain, with
> > those fixups applied and noted.
> > 
> > To the best of my understanding the code is correct, in the context of the
> > v4.9.y kernel, and I've tested the series on arm64 hardware available to me.
> > i.e. if this didn't have my Signed-off-by it would have my Reviewed-by and
> > Tested-by tags.
> > 
> > Are you happy to pick these up for v4.9.93?
> 
> Thank you for doing this work, it's much appreciated.  I'd like to get
> some "tested-by" from some people with this hardware (like Linaro or
> others that I have pointed this patch series to), before I queue it up.
> 
> That way we have some people we can rely on to ensure that this actually
> works if something goes wrong :)

Tested-by: Will Deacon <will.deacon@arm.com>

Will


* Re: [PATCH v4.9.y 00/27] arm64 meltdown patches
  2018-04-03 11:08 [PATCH v4.9.y 00/27] arm64 meltdown patches Mark Rutland
                   ` (27 preceding siblings ...)
  2018-04-04 15:07 ` [PATCH v4.9.y 00/27] arm64 meltdown patches Greg KH
@ 2018-04-05 17:34 ` Greg Hackmann
  2018-04-05 19:15   ` Greg KH
  28 siblings, 1 reply; 63+ messages in thread
From: Greg Hackmann @ 2018-04-05 17:34 UTC (permalink / raw)
  To: Mark Rutland, stable
  Cc: mark.brown, ard.biesheuvel, marc.zyngier, will.deacon

On 04/03/2018 04:08 AM, Mark Rutland wrote:
> Hi Greg,
> 
> These patches backport KPTI to v4.9.y (based on v4.9.92), providing protection
> against meltdown on arm64 platforms.
> 
> I picked up Alex Shi's backport for review and testing, and as I found a couple
> of issues to fix up, I'm sending this with my Signed-off-by in the chain, with
> those fixups applied and noted.
> 
> To the best of my understanding the code is correct, in the context of the
> v4.9.y kernel, and I've tested the series on arm64 hardware available to me.
> i.e. if this didn't have my Signed-off-by it would have my Reviewed-by and
> Tested-by tags.
> 
> Are you happy to pick these up for v4.9.93?
> 
> Thanks,
> Mark.
> 
> AKASHI Takahiro (1):
>   module: extend 'rodata=off' boot cmdline parameter to module mappings
> 
> Jayachandran C (2):
>   arm64: cputype: Add MIDR values for Cavium ThunderX2 CPUs
>   arm64: Turn on KPTI only on CPUs that need it
> 
> Marc Zyngier (2):
>   arm64: Allow checking of a CPU-local erratum
>   arm64: Force KPTI to be disabled on Cavium ThunderX
> 
> Mark Rutland (1):
>   arm64: factor out entry stack manipulation
> 
> Suzuki K Poulose (1):
>   arm64: capabilities: Handle duplicate entries for a capability
> 
> Will Deacon (20):
>   arm64: mm: Use non-global mappings for kernel space
>   arm64: mm: Move ASID from TTBR0 to TTBR1
>   arm64: mm: Allocate ASIDs in pairs
>   arm64: mm: Add arm64_kernel_unmapped_at_el0 helper
>   arm64: mm: Invalidate both kernel and user ASIDs when performing TLBI
>   arm64: entry: Add exception trampoline page for exceptions from EL0
>   arm64: mm: Map entry trampoline into trampoline and kernel page tables
>   arm64: entry: Explicitly pass exception level to kernel_ventry macro
>   arm64: entry: Hook up entry trampoline to exception vectors
>   arm64: tls: Avoid unconditional zeroing of tpidrro_el0 for native
>     tasks
>   arm64: entry: Add fake CPU feature for unmapping the kernel at EL0
>   arm64: kaslr: Put kernel vectors address in separate data page
>   arm64: use RET instruction for exiting the trampoline
>   arm64: Kconfig: Add CONFIG_UNMAP_KERNEL_AT_EL0
>   arm64: Kconfig: Reword UNMAP_KERNEL_AT_EL0 kconfig entry
>   arm64: Take into account ID_AA64PFR0_EL1.CSV3
>   arm64: kpti: Make use of nG dependent on
>     arm64_kernel_unmapped_at_el0()
>   arm64: kpti: Add ->enable callback to remap swapper using nG mappings
>   arm64: entry: Reword comment about post_ttbr_update_workaround
>   arm64: idmap: Use "awx" flags for .idmap.text .pushsection directives
> 
>  arch/arm64/Kconfig                     |  12 ++
>  arch/arm64/include/asm/assembler.h     |   3 +
>  arch/arm64/include/asm/cpucaps.h       |   3 +-
>  arch/arm64/include/asm/cputype.h       |   3 +
>  arch/arm64/include/asm/fixmap.h        |   6 +
>  arch/arm64/include/asm/mmu.h           |  11 ++
>  arch/arm64/include/asm/mmu_context.h   |   7 ++
>  arch/arm64/include/asm/pgtable-hwdef.h |   1 +
>  arch/arm64/include/asm/pgtable-prot.h  |  35 +++---
>  arch/arm64/include/asm/pgtable.h       |   1 +
>  arch/arm64/include/asm/proc-fns.h      |   6 -
>  arch/arm64/include/asm/sysreg.h        |   1 +
>  arch/arm64/include/asm/tlbflush.h      |  16 ++-
>  arch/arm64/kernel/asm-offsets.c        |   6 +-
>  arch/arm64/kernel/cpu-reset.S          |   2 +-
>  arch/arm64/kernel/cpufeature.c         | 135 ++++++++++++++++++---
>  arch/arm64/kernel/entry.S              | 188 ++++++++++++++++++++++++----
>  arch/arm64/kernel/head.S               |   2 +-
>  arch/arm64/kernel/process.c            |  12 +-
>  arch/arm64/kernel/sleep.S              |   2 +-
>  arch/arm64/kernel/vmlinux.lds.S        |  22 +++-
>  arch/arm64/mm/context.c                |  25 ++--
>  arch/arm64/mm/mmu.c                    |  31 +++++
>  arch/arm64/mm/proc.S                   | 216 +++++++++++++++++++++++++++++++--
>  include/linux/init.h                   |   3 +
>  init/main.c                            |   7 +-
>  kernel/module.c                        |  20 ++-
>  27 files changed, 675 insertions(+), 101 deletions(-)
> 

I ran this series on the 1st gen hikey dev board and it works fine for me.

On top of mainline v4.9.92, tip-of-tree AOSP userspace boots to a serial
shell.

On top of the android-linaro-hikey-4.9 branch on AOSP, it boots to the
home screen without issues.  (android-4.9 has an out-of-tree SW PAN
backport which I reverted locally for testing purposes.)

So for the series:

Tested-by: Greg Hackmann <ghackmann@google.com>


* Re: [PATCH v4.9.y 00/27] arm64 meltdown patches
  2018-04-05 17:34 ` Greg Hackmann
@ 2018-04-05 19:15   ` Greg KH
  0 siblings, 0 replies; 63+ messages in thread
From: Greg KH @ 2018-04-05 19:15 UTC (permalink / raw)
  To: Greg Hackmann
  Cc: Mark Rutland, stable, mark.brown, ard.biesheuvel, marc.zyngier,
	will.deacon

On Thu, Apr 05, 2018 at 10:34:30AM -0700, Greg Hackmann wrote:
> On 04/03/2018 04:08 AM, Mark Rutland wrote:
> > Hi Greg,
> > 
> > These patches backport KPTI to v4.9.y (based on v4.9.92), providing protection
> > against meltdown on arm64 platforms.
> > 
> > I picked up Alex Shi's backport for review and testing, and as I found a couple
> > of issues to fix up, I'm sending this with my Signed-off-by in the chain, with
> > those fixups applied and noted.
> > 
> > To the best of my understanding the code is correct, in the context of the
> > v4.9.y kernel, and I've tested the series on arm64 hardware available to me.
> > i.e. if this didn't have my Signed-off-by it would have my Reviewed-by and
> > Tested-by tags.
> > 
> > Are you happy to pick these up for v4.9.93?
> > 
> > Thanks,
> > Mark.
> > 
> > AKASHI Takahiro (1):
> >   module: extend 'rodata=off' boot cmdline parameter to module mappings
> > 
> > Jayachandran C (2):
> >   arm64: cputype: Add MIDR values for Cavium ThunderX2 CPUs
> >   arm64: Turn on KPTI only on CPUs that need it
> > 
> > Marc Zyngier (2):
> >   arm64: Allow checking of a CPU-local erratum
> >   arm64: Force KPTI to be disabled on Cavium ThunderX
> > 
> > Mark Rutland (1):
> >   arm64: factor out entry stack manipulation
> > 
> > Suzuki K Poulose (1):
> >   arm64: capabilities: Handle duplicate entries for a capability
> > 
> > Will Deacon (20):
> >   arm64: mm: Use non-global mappings for kernel space
> >   arm64: mm: Move ASID from TTBR0 to TTBR1
> >   arm64: mm: Allocate ASIDs in pairs
> >   arm64: mm: Add arm64_kernel_unmapped_at_el0 helper
> >   arm64: mm: Invalidate both kernel and user ASIDs when performing TLBI
> >   arm64: entry: Add exception trampoline page for exceptions from EL0
> >   arm64: mm: Map entry trampoline into trampoline and kernel page tables
> >   arm64: entry: Explicitly pass exception level to kernel_ventry macro
> >   arm64: entry: Hook up entry trampoline to exception vectors
> >   arm64: tls: Avoid unconditional zeroing of tpidrro_el0 for native
> >     tasks
> >   arm64: entry: Add fake CPU feature for unmapping the kernel at EL0
> >   arm64: kaslr: Put kernel vectors address in separate data page
> >   arm64: use RET instruction for exiting the trampoline
> >   arm64: Kconfig: Add CONFIG_UNMAP_KERNEL_AT_EL0
> >   arm64: Kconfig: Reword UNMAP_KERNEL_AT_EL0 kconfig entry
> >   arm64: Take into account ID_AA64PFR0_EL1.CSV3
> >   arm64: kpti: Make use of nG dependent on
> >     arm64_kernel_unmapped_at_el0()
> >   arm64: kpti: Add ->enable callback to remap swapper using nG mappings
> >   arm64: entry: Reword comment about post_ttbr_update_workaround
> >   arm64: idmap: Use "awx" flags for .idmap.text .pushsection directives
> > 
> >  arch/arm64/Kconfig                     |  12 ++
> >  arch/arm64/include/asm/assembler.h     |   3 +
> >  arch/arm64/include/asm/cpucaps.h       |   3 +-
> >  arch/arm64/include/asm/cputype.h       |   3 +
> >  arch/arm64/include/asm/fixmap.h        |   6 +
> >  arch/arm64/include/asm/mmu.h           |  11 ++
> >  arch/arm64/include/asm/mmu_context.h   |   7 ++
> >  arch/arm64/include/asm/pgtable-hwdef.h |   1 +
> >  arch/arm64/include/asm/pgtable-prot.h  |  35 +++---
> >  arch/arm64/include/asm/pgtable.h       |   1 +
> >  arch/arm64/include/asm/proc-fns.h      |   6 -
> >  arch/arm64/include/asm/sysreg.h        |   1 +
> >  arch/arm64/include/asm/tlbflush.h      |  16 ++-
> >  arch/arm64/kernel/asm-offsets.c        |   6 +-
> >  arch/arm64/kernel/cpu-reset.S          |   2 +-
> >  arch/arm64/kernel/cpufeature.c         | 135 ++++++++++++++++++---
> >  arch/arm64/kernel/entry.S              | 188 ++++++++++++++++++++++++----
> >  arch/arm64/kernel/head.S               |   2 +-
> >  arch/arm64/kernel/process.c            |  12 +-
> >  arch/arm64/kernel/sleep.S              |   2 +-
> >  arch/arm64/kernel/vmlinux.lds.S        |  22 +++-
> >  arch/arm64/mm/context.c                |  25 ++--
> >  arch/arm64/mm/mmu.c                    |  31 +++++
> >  arch/arm64/mm/proc.S                   | 216 +++++++++++++++++++++++++++++++--
> >  include/linux/init.h                   |   3 +
> >  init/main.c                            |   7 +-
> >  kernel/module.c                        |  20 ++-
> >  27 files changed, 675 insertions(+), 101 deletions(-)
> > 
> 
> I ran this series on the 1st gen hikey dev board and it works fine for me.
> 
> On top of mainline v4.9.92, tip-of-tree AOSP userspace boots to a serial
> shell.
> 
> On top of the android-linaro-hikey-4.9 branch on AOSP, it boots to the
> home screen without issues.  (android-4.9 has an out-of-tree SW PAN
> backport which I reverted locally for testing purposes.)
> 
> So for the series:
> 
> Tested-by: Greg Hackmann <ghackmann@google.com>

Great, thanks for testing this and letting me know.

greg k-h


* Re: [PATCH v4.9.y 09/27] arm64: mm: Map entry trampoline into trampoline and kernel page tables
  2018-04-03 11:15   ` Mark Rutland
@ 2018-04-05 19:33     ` Greg KH
  0 siblings, 0 replies; 63+ messages in thread
From: Greg KH @ 2018-04-05 19:33 UTC (permalink / raw)
  To: Mark Rutland
  Cc: stable, mark.brown, ard.biesheuvel, marc.zyngier, will.deacon

On Tue, Apr 03, 2018 at 12:15:06PM +0100, Mark Rutland wrote:
> On Tue, Apr 03, 2018 at 12:09:05PM +0100, Mark Rutland wrote:
> > From: Will Deacon <will.deacon@arm.com>
> > 
> > commit 51a0048beb44 upstream.
> > 
> > The exception entry trampoline needs to be mapped at the same virtual
> > address in both the trampoline page table (which maps nothing else)
> > and also the kernel page table, so that we can swizzle TTBR1_EL1 on
> > exceptions from and return to EL0.
> > 
> > This patch maps the trampoline at a fixed virtual address in the fixmap
> > area of the kernel virtual address space, which allows the kernel proper
> > to be randomized with respect to the trampoline when KASLR is enabled.
> > 
> > Reviewed-by: Mark Rutland <mark.rutland@arm.com>
> > Tested-by: Laura Abbott <labbott@redhat.com>
> > Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
> > Signed-off-by: Will Deacon <will.deacon@arm.com>
> > Signed-off-by: Alex Shi <alex.shi@linaro.org>
> > Reviewed-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
> 
> It has just been pointed out to me that I messed up the SoB chain here,
> and this should be:
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
> 
> Otherwise, the patch itself is fine. Sorry for the noise there -- I hope
> this can be fixed up when applying?

I'll go fix it up...


* Patch "arm64: Allow checking of a CPU-local erratum" has been added to the 4.9-stable tree
  2018-04-03 11:09 ` [PATCH v4.9.y 19/27] arm64: Allow checking of a CPU-local erratum Mark Rutland
@ 2018-04-05 19:42   ` gregkh
  0 siblings, 0 replies; 63+ messages in thread
From: gregkh @ 2018-04-05 19:42 UTC (permalink / raw)
  To: mark.rutland, alex.shi, daniel.lezcano, ghackmann, gregkh,
	marc.zyngier, suzuki.poulose, tglx, will.deacon
  Cc: stable, stable-commits


This is a note to let you know that I've just added the patch titled

    arm64: Allow checking of a CPU-local erratum

to the 4.9-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     arm64-allow-checking-of-a-cpu-local-erratum.patch
and it can be found in the queue-4.9 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


>From foo@baz Thu Apr  5 21:39:28 CEST 2018
From: Mark Rutland <mark.rutland@arm.com>
Date: Tue,  3 Apr 2018 12:09:15 +0100
Subject: arm64: Allow checking of a CPU-local erratum
To: stable@vger.kernel.org
Cc: mark.brown@linaro.org, ard.biesheuvel@linaro.org, marc.zyngier@arm.com, will.deacon@arm.com
Message-ID: <20180403110923.43575-20-mark.rutland@arm.com>

From: Marc Zyngier <marc.zyngier@arm.com>

commit 8f4137588261d7504f4aa022dc9d1a1fd1940e8e upstream.

this_cpu_has_cap() only checks the feature array, and not the errata
one. In order to be able to check for a CPU-local erratum, allow it
to inspect the latter as well.

This is consistent with cpus_have_cap()'s behaviour, which includes
errata already.
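
For context, a sketch of the kind of caller this enables; the
capability constant and fixup function are hypothetical names, not
real kernel symbols:

/*
 * this_cpu_has_cap() must run non-preemptibly, so a caller that wants
 * a CPU-local answer for an erratum might look like this.
 */
static void maybe_apply_fixup(void)
{
	preempt_disable();
	if (this_cpu_has_cap(ARM64_WORKAROUND_FOO))
		apply_foo_fixup();	/* only the affected CPU takes it */
	preempt_enable();
}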

Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
Tested-by: Will Deacon <will.deacon@arm.com>
Tested-by: Greg Hackmann <ghackmann@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/kernel/cpufeature.c |   13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1103,20 +1103,29 @@ static void __init setup_feature_capabil
  * Check if the current CPU has a given feature capability.
  * Should be called from non-preemptible context.
  */
-bool this_cpu_has_cap(unsigned int cap)
+static bool __this_cpu_has_cap(const struct arm64_cpu_capabilities *cap_array,
+			       unsigned int cap)
 {
 	const struct arm64_cpu_capabilities *caps;
 
 	if (WARN_ON(preemptible()))
 		return false;
 
-	for (caps = arm64_features; caps->desc; caps++)
+	for (caps = cap_array; caps->desc; caps++)
 		if (caps->capability == cap && caps->matches)
 			return caps->matches(caps, SCOPE_LOCAL_CPU);
 
 	return false;
 }
 
+extern const struct arm64_cpu_capabilities arm64_errata[];
+
+bool this_cpu_has_cap(unsigned int cap)
+{
+	return (__this_cpu_has_cap(arm64_features, cap) ||
+		__this_cpu_has_cap(arm64_errata, cap));
+}
+
 void __init setup_cpu_features(void)
 {
 	u32 cwg;


Patches currently in stable-queue which might be from mark.rutland@arm.com are

queue-4.9/arm64-mm-add-arm64_kernel_unmapped_at_el0-helper.patch
queue-4.9/arm64-entry-reword-comment-about-post_ttbr_update_workaround.patch
queue-4.9/arm64-kaslr-put-kernel-vectors-address-in-separate-data-page.patch
queue-4.9/arm64-turn-on-kpti-only-on-cpus-that-need-it.patch
queue-4.9/arm64-force-kpti-to-be-disabled-on-cavium-thunderx.patch
queue-4.9/arm64-mm-allocate-asids-in-pairs.patch
queue-4.9/arm64-tls-avoid-unconditional-zeroing-of-tpidrro_el0-for-native-tasks.patch
queue-4.9/arm64-use-ret-instruction-for-exiting-the-trampoline.patch
queue-4.9/arm64-entry-explicitly-pass-exception-level-to-kernel_ventry-macro.patch
queue-4.9/arm64-kpti-make-use-of-ng-dependent-on-arm64_kernel_unmapped_at_el0.patch
queue-4.9/arm64-mm-use-non-global-mappings-for-kernel-space.patch
queue-4.9/arm64-capabilities-handle-duplicate-entries-for-a-capability.patch
queue-4.9/arm64-entry-hook-up-entry-trampoline-to-exception-vectors.patch
queue-4.9/arm64-mm-invalidate-both-kernel-and-user-asids-when-performing-tlbi.patch
queue-4.9/arm64-mm-map-entry-trampoline-into-trampoline-and-kernel-page-tables.patch
queue-4.9/module-extend-rodata-off-boot-cmdline-parameter-to-module-mappings.patch
queue-4.9/arm64-kconfig-reword-unmap_kernel_at_el0-kconfig-entry.patch
queue-4.9/arm64-mm-move-asid-from-ttbr0-to-ttbr1.patch
queue-4.9/arm64-allow-checking-of-a-cpu-local-erratum.patch
queue-4.9/arm64-take-into-account-id_aa64pfr0_el1.csv3.patch
queue-4.9/arm64-kconfig-add-config_unmap_kernel_at_el0.patch
queue-4.9/arm64-idmap-use-awx-flags-for-.idmap.text-.pushsection-directives.patch
queue-4.9/arm64-factor-out-entry-stack-manipulation.patch
queue-4.9/arm64-entry-add-exception-trampoline-page-for-exceptions-from-el0.patch
queue-4.9/arm64-kpti-add-enable-callback-to-remap-swapper-using-ng-mappings.patch
queue-4.9/arm64-entry-add-fake-cpu-feature-for-unmapping-the-kernel-at-el0.patch
queue-4.9/arm64-cputype-add-midr-values-for-cavium-thunderx2-cpus.patch


* Patch "arm64: capabilities: Handle duplicate entries for a capability" has been added to the 4.9-stable tree
  2018-04-03 11:09 ` [PATCH v4.9.y 20/27] arm64: capabilities: Handle duplicate entries for a capability Mark Rutland
@ 2018-04-05 19:42   ` gregkh
  0 siblings, 0 replies; 63+ messages in thread
From: gregkh @ 2018-04-05 19:42 UTC (permalink / raw)
  To: mark.rutland, alex.shi, catalin.marinas, ghackmann, gregkh,
	marc.zyngier, suzuki.poulose, will.deacon
  Cc: stable, stable-commits


This is a note to let you know that I've just added the patch titled

    arm64: capabilities: Handle duplicate entries for a capability

to the 4.9-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     arm64-capabilities-handle-duplicate-entries-for-a-capability.patch
and it can be found in the queue-4.9 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


>From foo@baz Thu Apr  5 21:39:28 CEST 2018
From: Mark Rutland <mark.rutland@arm.com>
Date: Tue,  3 Apr 2018 12:09:16 +0100
Subject: arm64: capabilities: Handle duplicate entries for a capability
To: stable@vger.kernel.org
Cc: mark.brown@linaro.org, ard.biesheuvel@linaro.org, marc.zyngier@arm.com, will.deacon@arm.com
Message-ID: <20180403110923.43575-21-mark.rutland@arm.com>

From: Suzuki K Poulose <suzuki.poulose@arm.com>

commit 67948af41f2e upstream.

Sometimes a single capability could be listed multiple times with
differing matches(), e.g., CPU errata for different MIDR versions.
This breaks verify_local_cpu_feature() and this_cpu_has_cap(), as we
stop checking for a capability on a CPU at the first entry in the
given table, which is not sufficient. Make sure we run the checks for
all entries of the same capability. We do this by fixing
__this_cpu_has_cap() to run through all the entries in the given
table for a match, and by reusing it for verify_local_cpu_feature().
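
A stand-alone sketch (plain C, not kernel code) of the failure mode:
two entries share one capability id and only the second matches, so
returning the first entry's matches() result would wrongly report the
capability as absent:

#include <stdbool.h>
#include <stdio.h>

struct cap { int id; bool (*matches)(void); };

static bool rev_a(void) { return false; }	/* e.g. MIDR revision A: no  */
static bool rev_b(void) { return true;  }	/* e.g. MIDR revision B: yes */

static const struct cap table[] = {
	{ .id = 1, .matches = rev_a },
	{ .id = 1, .matches = rev_b },
	{ 0 }
};

static bool has_cap(int id)
{
	for (const struct cap *c = table; c->matches; c++)
		if (c->id == id && c->matches())
			return true;	/* keep scanning past misses */
	return false;
}

int main(void)
{
	printf("%d\n", has_cap(1));	/* prints 1; an early return on the
					   first id match would print 0 */
	return 0;
}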

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
Tested-by: Will Deacon <will.deacon@arm.com>
Tested-by: Greg Hackmann <ghackmann@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/kernel/cpufeature.c |   44 +++++++++++++++++++++--------------------
 1 file changed, 23 insertions(+), 21 deletions(-)

--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -969,6 +969,26 @@ static void __init setup_elf_hwcaps(cons
 			cap_set_elf_hwcap(hwcaps);
 }
 
+/*
+ * Check if the current CPU has a given feature capability.
+ * Should be called from non-preemptible context.
+ */
+static bool __this_cpu_has_cap(const struct arm64_cpu_capabilities *cap_array,
+			       unsigned int cap)
+{
+	const struct arm64_cpu_capabilities *caps;
+
+	if (WARN_ON(preemptible()))
+		return false;
+
+	for (caps = cap_array; caps->desc; caps++)
+		if (caps->capability == cap &&
+		    caps->matches &&
+		    caps->matches(caps, SCOPE_LOCAL_CPU))
+			return true;
+	return false;
+}
+
 void update_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
 			    const char *info)
 {
@@ -1037,8 +1057,9 @@ verify_local_elf_hwcaps(const struct arm
 }
 
 static void
-verify_local_cpu_features(const struct arm64_cpu_capabilities *caps)
+verify_local_cpu_features(const struct arm64_cpu_capabilities *caps_list)
 {
+	const struct arm64_cpu_capabilities *caps = caps_list;
 	for (; caps->matches; caps++) {
 		if (!cpus_have_cap(caps->capability))
 			continue;
@@ -1046,7 +1067,7 @@ verify_local_cpu_features(const struct a
 		 * If the new CPU misses an advertised feature, we cannot proceed
 		 * further, park the cpu.
 		 */
-		if (!caps->matches(caps, SCOPE_LOCAL_CPU)) {
+		if (!__this_cpu_has_cap(caps_list, caps->capability)) {
 			pr_crit("CPU%d: missing feature: %s\n",
 					smp_processor_id(), caps->desc);
 			cpu_die_early();
@@ -1099,25 +1120,6 @@ static void __init setup_feature_capabil
 	enable_cpu_capabilities(arm64_features);
 }
 
-/*
- * Check if the current CPU has a given feature capability.
- * Should be called from non-preemptible context.
- */
-static bool __this_cpu_has_cap(const struct arm64_cpu_capabilities *cap_array,
-			       unsigned int cap)
-{
-	const struct arm64_cpu_capabilities *caps;
-
-	if (WARN_ON(preemptible()))
-		return false;
-
-	for (caps = cap_array; caps->desc; caps++)
-		if (caps->capability == cap && caps->matches)
-			return caps->matches(caps, SCOPE_LOCAL_CPU);
-
-	return false;
-}
-
 extern const struct arm64_cpu_capabilities arm64_errata[];
 
 bool this_cpu_has_cap(unsigned int cap)


Patches currently in stable-queue which might be from mark.rutland@arm.com are

queue-4.9/arm64-mm-add-arm64_kernel_unmapped_at_el0-helper.patch
queue-4.9/arm64-entry-reword-comment-about-post_ttbr_update_workaround.patch
queue-4.9/arm64-kaslr-put-kernel-vectors-address-in-separate-data-page.patch
queue-4.9/arm64-turn-on-kpti-only-on-cpus-that-need-it.patch
queue-4.9/arm64-force-kpti-to-be-disabled-on-cavium-thunderx.patch
queue-4.9/arm64-mm-allocate-asids-in-pairs.patch
queue-4.9/arm64-tls-avoid-unconditional-zeroing-of-tpidrro_el0-for-native-tasks.patch
queue-4.9/arm64-use-ret-instruction-for-exiting-the-trampoline.patch
queue-4.9/arm64-entry-explicitly-pass-exception-level-to-kernel_ventry-macro.patch
queue-4.9/arm64-kpti-make-use-of-ng-dependent-on-arm64_kernel_unmapped_at_el0.patch
queue-4.9/arm64-mm-use-non-global-mappings-for-kernel-space.patch
queue-4.9/arm64-capabilities-handle-duplicate-entries-for-a-capability.patch
queue-4.9/arm64-entry-hook-up-entry-trampoline-to-exception-vectors.patch
queue-4.9/arm64-mm-invalidate-both-kernel-and-user-asids-when-performing-tlbi.patch
queue-4.9/arm64-mm-map-entry-trampoline-into-trampoline-and-kernel-page-tables.patch
queue-4.9/module-extend-rodata-off-boot-cmdline-parameter-to-module-mappings.patch
queue-4.9/arm64-kconfig-reword-unmap_kernel_at_el0-kconfig-entry.patch
queue-4.9/arm64-mm-move-asid-from-ttbr0-to-ttbr1.patch
queue-4.9/arm64-allow-checking-of-a-cpu-local-erratum.patch
queue-4.9/arm64-take-into-account-id_aa64pfr0_el1.csv3.patch
queue-4.9/arm64-kconfig-add-config_unmap_kernel_at_el0.patch
queue-4.9/arm64-idmap-use-awx-flags-for-.idmap.text-.pushsection-directives.patch
queue-4.9/arm64-factor-out-entry-stack-manipulation.patch
queue-4.9/arm64-entry-add-exception-trampoline-page-for-exceptions-from-el0.patch
queue-4.9/arm64-kpti-add-enable-callback-to-remap-swapper-using-ng-mappings.patch
queue-4.9/arm64-entry-add-fake-cpu-feature-for-unmapping-the-kernel-at-el0.patch
queue-4.9/arm64-cputype-add-midr-values-for-cavium-thunderx2-cpus.patch


* Patch "arm64: cputype: Add MIDR values for Cavium ThunderX2 CPUs" has been added to the 4.9-stable tree
  2018-04-03 11:09 ` [PATCH v4.9.y 21/27] arm64: cputype: Add MIDR values for Cavium ThunderX2 CPUs Mark Rutland
@ 2018-04-05 19:42   ` gregkh
  0 siblings, 0 replies; 63+ messages in thread
From: gregkh @ 2018-04-05 19:42 UTC (permalink / raw)
  To: mark.rutland, alex.shi, catalin.marinas, ghackmann, gregkh,
	jnair, will.deacon
  Cc: stable, stable-commits


This is a note to let you know that I've just added the patch titled

    arm64: cputype: Add MIDR values for Cavium ThunderX2 CPUs

to the 4.9-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     arm64-cputype-add-midr-values-for-cavium-thunderx2-cpus.patch
and it can be found in the queue-4.9 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


>From foo@baz Thu Apr  5 21:39:28 CEST 2018
From: Mark Rutland <mark.rutland@arm.com>
Date: Tue,  3 Apr 2018 12:09:17 +0100
Subject: arm64: cputype: Add MIDR values for Cavium ThunderX2 CPUs
To: stable@vger.kernel.org
Cc: mark.brown@linaro.org, ard.biesheuvel@linaro.org, marc.zyngier@arm.com, will.deacon@arm.com
Message-ID: <20180403110923.43575-22-mark.rutland@arm.com>

From: Jayachandran C <jnair@caviumnetworks.com>

commit 0d90718871fe upstream.

Add the older Broadcom ID as well as the new Cavium ID for ThunderX2
CPUs.

Signed-off-by: Jayachandran C <jnair@caviumnetworks.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
Tested-by: Will Deacon <will.deacon@arm.com>
Tested-by: Greg Hackmann <ghackmann@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/include/asm/cputype.h |    3 +++
 1 file changed, 3 insertions(+)

--- a/arch/arm64/include/asm/cputype.h
+++ b/arch/arm64/include/asm/cputype.h
@@ -81,6 +81,7 @@
 
 #define CAVIUM_CPU_PART_THUNDERX	0x0A1
 #define CAVIUM_CPU_PART_THUNDERX_81XX	0x0A2
+#define CAVIUM_CPU_PART_THUNDERX2	0x0AF
 
 #define BRCM_CPU_PART_VULCAN		0x516
 
@@ -88,6 +89,8 @@
 #define MIDR_CORTEX_A57 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A57)
 #define MIDR_THUNDERX	MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX)
 #define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX)
+#define MIDR_CAVIUM_THUNDERX2 MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX2)
+#define MIDR_BRCM_VULCAN MIDR_CPU_MODEL(ARM_CPU_IMP_BRCM, BRCM_CPU_PART_VULCAN)
 
 #ifndef __ASSEMBLY__
 


Patches currently in stable-queue which might be from mark.rutland@arm.com are

queue-4.9/arm64-mm-add-arm64_kernel_unmapped_at_el0-helper.patch
queue-4.9/arm64-entry-reword-comment-about-post_ttbr_update_workaround.patch
queue-4.9/arm64-kaslr-put-kernel-vectors-address-in-separate-data-page.patch
queue-4.9/arm64-turn-on-kpti-only-on-cpus-that-need-it.patch
queue-4.9/arm64-force-kpti-to-be-disabled-on-cavium-thunderx.patch
queue-4.9/arm64-mm-allocate-asids-in-pairs.patch
queue-4.9/arm64-tls-avoid-unconditional-zeroing-of-tpidrro_el0-for-native-tasks.patch
queue-4.9/arm64-use-ret-instruction-for-exiting-the-trampoline.patch
queue-4.9/arm64-entry-explicitly-pass-exception-level-to-kernel_ventry-macro.patch
queue-4.9/arm64-kpti-make-use-of-ng-dependent-on-arm64_kernel_unmapped_at_el0.patch
queue-4.9/arm64-mm-use-non-global-mappings-for-kernel-space.patch
queue-4.9/arm64-capabilities-handle-duplicate-entries-for-a-capability.patch
queue-4.9/arm64-entry-hook-up-entry-trampoline-to-exception-vectors.patch
queue-4.9/arm64-mm-invalidate-both-kernel-and-user-asids-when-performing-tlbi.patch
queue-4.9/arm64-mm-map-entry-trampoline-into-trampoline-and-kernel-page-tables.patch
queue-4.9/module-extend-rodata-off-boot-cmdline-parameter-to-module-mappings.patch
queue-4.9/arm64-kconfig-reword-unmap_kernel_at_el0-kconfig-entry.patch
queue-4.9/arm64-mm-move-asid-from-ttbr0-to-ttbr1.patch
queue-4.9/arm64-allow-checking-of-a-cpu-local-erratum.patch
queue-4.9/arm64-take-into-account-id_aa64pfr0_el1.csv3.patch
queue-4.9/arm64-kconfig-add-config_unmap_kernel_at_el0.patch
queue-4.9/arm64-idmap-use-awx-flags-for-.idmap.text-.pushsection-directives.patch
queue-4.9/arm64-factor-out-entry-stack-manipulation.patch
queue-4.9/arm64-entry-add-exception-trampoline-page-for-exceptions-from-el0.patch
queue-4.9/arm64-kpti-add-enable-callback-to-remap-swapper-using-ng-mappings.patch
queue-4.9/arm64-entry-add-fake-cpu-feature-for-unmapping-the-kernel-at-el0.patch
queue-4.9/arm64-cputype-add-midr-values-for-cavium-thunderx2-cpus.patch


* Patch "arm64: entry: Add fake CPU feature for unmapping the kernel at EL0" has been added to the 4.9-stable tree
  2018-04-03 11:09 ` [PATCH v4.9.y 13/27] arm64: entry: Add fake CPU feature for unmapping the kernel at EL0 Mark Rutland
@ 2018-04-05 19:42   ` gregkh
  0 siblings, 0 replies; 63+ messages in thread
From: gregkh @ 2018-04-05 19:42 UTC (permalink / raw)
  To: mark.rutland, alex.shi, ghackmann, gregkh, labbott, shankerd,
	will.deacon
  Cc: stable, stable-commits


This is a note to let you know that I've just added the patch titled

    arm64: entry: Add fake CPU feature for unmapping the kernel at EL0

to the 4.9-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     arm64-entry-add-fake-cpu-feature-for-unmapping-the-kernel-at-el0.patch
and it can be found in the queue-4.9 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


>From foo@baz Thu Apr  5 21:39:28 CEST 2018
From: Mark Rutland <mark.rutland@arm.com>
Date: Tue,  3 Apr 2018 12:09:09 +0100
Subject: arm64: entry: Add fake CPU feature for unmapping the kernel at EL0
To: stable@vger.kernel.org
Cc: mark.brown@linaro.org, ard.biesheuvel@linaro.org, marc.zyngier@arm.com, will.deacon@arm.com
Message-ID: <20180403110923.43575-14-mark.rutland@arm.com>

From: Will Deacon <will.deacon@arm.com>

commit ea1e3de85e94 upstream.

Allow explicit disabling of the entry trampoline on the kernel command
line (kpti=off) by adding a fake CPU feature (ARM64_UNMAP_KERNEL_AT_EL0)
that can be used to toggle the alternative sequences in our entry code and
avoid use of the trampoline altogether if desired. This also allows us to
make use of a static key in arm64_kernel_unmapped_at_el0().
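
A small user-space analogue of the tristate "forced" convention that
parse_kpti() below encodes (0: no opinion, >0: forced on, <0: forced
off); strtobool() is replaced with strcmp() so the sketch stands
alone:

#include <stdio.h>
#include <string.h>

static int forced;	/* 0: not forced, >0: forced on, <0: forced off */

static int parse_force(const char *str)
{
	if (!strcmp(str, "on") || !strcmp(str, "1"))
		forced = 1;
	else if (!strcmp(str, "off") || !strcmp(str, "0"))
		forced = -1;
	else
		return -1;	/* unrecognised: leave "not forced" alone */
	return 0;
}

int main(void)
{
	parse_force("off");
	printf("forced=%d\n", forced);	/* prints forced=-1 */
	return 0;
}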

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
[Alex: use first free cpucap number, use cpus_have_cap]
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
Tested-by: Will Deacon <will.deacon@arm.com>
Tested-by: Greg Hackmann <ghackmann@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/include/asm/cpucaps.h |    3 +-
 arch/arm64/include/asm/mmu.h     |    3 +-
 arch/arm64/kernel/cpufeature.c   |   41 +++++++++++++++++++++++++++++++++++++++
 arch/arm64/kernel/entry.S        |    9 ++++----
 4 files changed, 50 insertions(+), 6 deletions(-)

--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -34,7 +34,8 @@
 #define ARM64_HAS_32BIT_EL0			13
 #define ARM64_HYP_OFFSET_LOW			14
 #define ARM64_MISMATCHED_CACHE_LINE_SIZE	15
+#define ARM64_UNMAP_KERNEL_AT_EL0		16
 
-#define ARM64_NCAPS				16
+#define ARM64_NCAPS				17
 
 #endif /* __ASM_CPUCAPS_H */
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -34,7 +34,8 @@ typedef struct {
 
 static inline bool arm64_kernel_unmapped_at_el0(void)
 {
-	return IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0);
+	return IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0) &&
+	       cpus_have_cap(ARM64_UNMAP_KERNEL_AT_EL0);
 }
 
 extern void paging_init(void);
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -746,6 +746,40 @@ static bool hyp_offset_low(const struct
 	return idmap_addr > GENMASK(VA_BITS - 2, 0) && !is_kernel_in_hyp_mode();
 }
 
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+static int __kpti_forced; /* 0: not forced, >0: forced on, <0: forced off */
+
+static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
+				int __unused)
+{
+	/* Forced on command line? */
+	if (__kpti_forced) {
+		pr_info_once("kernel page table isolation forced %s by command line option\n",
+			     __kpti_forced > 0 ? "ON" : "OFF");
+		return __kpti_forced > 0;
+	}
+
+	/* Useful for KASLR robustness */
+	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE))
+		return true;
+
+	return false;
+}
+
+static int __init parse_kpti(char *str)
+{
+	bool enabled;
+	int ret = strtobool(str, &enabled);
+
+	if (ret)
+		return ret;
+
+	__kpti_forced = enabled ? 1 : -1;
+	return 0;
+}
+__setup("kpti=", parse_kpti);
+#endif	/* CONFIG_UNMAP_KERNEL_AT_EL0 */
+
 static const struct arm64_cpu_capabilities arm64_features[] = {
 	{
 		.desc = "GIC system register CPU interface",
@@ -829,6 +863,13 @@ static const struct arm64_cpu_capabiliti
 		.def_scope = SCOPE_SYSTEM,
 		.matches = hyp_offset_low,
 	},
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+	{
+		.capability = ARM64_UNMAP_KERNEL_AT_EL0,
+		.def_scope = SCOPE_SYSTEM,
+		.matches = unmap_kernel_at_el0,
+	},
+#endif
 	{},
 };
 
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -73,6 +73,7 @@
 	.macro kernel_ventry, el, label, regsize = 64
 	.align 7
 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+alternative_if ARM64_UNMAP_KERNEL_AT_EL0
 	.if	\el == 0
 	.if	\regsize == 64
 	mrs	x30, tpidrro_el0
@@ -81,6 +82,7 @@
 	mov	x30, xzr
 	.endif
 	.endif
+alternative_else_nop_endif
 #endif
 
 	sub	sp, sp, #S_FRAME_SIZE
@@ -208,10 +210,9 @@ alternative_else_nop_endif
 	ldr	lr, [sp, #S_LR]
 	add	sp, sp, #S_FRAME_SIZE		// restore sp
 
-#ifndef CONFIG_UNMAP_KERNEL_AT_EL0
-	eret
-#else
 	.if	\el == 0
+alternative_insn eret, nop, ARM64_UNMAP_KERNEL_AT_EL0
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 	bne	4f
 	msr	far_el1, x30
 	tramp_alias	x30, tramp_exit_native
@@ -219,10 +220,10 @@ alternative_else_nop_endif
 4:
 	tramp_alias	x30, tramp_exit_compat
 	br	x30
+#endif
 	.else
 	eret
 	.endif
-#endif
 	.endm
 
 	.macro	get_thread_info, rd


Patches currently in stable-queue which might be from mark.rutland@arm.com are

queue-4.9/arm64-mm-add-arm64_kernel_unmapped_at_el0-helper.patch
queue-4.9/arm64-entry-reword-comment-about-post_ttbr_update_workaround.patch
queue-4.9/arm64-kaslr-put-kernel-vectors-address-in-separate-data-page.patch
queue-4.9/arm64-turn-on-kpti-only-on-cpus-that-need-it.patch
queue-4.9/arm64-force-kpti-to-be-disabled-on-cavium-thunderx.patch
queue-4.9/arm64-mm-allocate-asids-in-pairs.patch
queue-4.9/arm64-tls-avoid-unconditional-zeroing-of-tpidrro_el0-for-native-tasks.patch
queue-4.9/arm64-use-ret-instruction-for-exiting-the-trampoline.patch
queue-4.9/arm64-entry-explicitly-pass-exception-level-to-kernel_ventry-macro.patch
queue-4.9/arm64-kpti-make-use-of-ng-dependent-on-arm64_kernel_unmapped_at_el0.patch
queue-4.9/arm64-mm-use-non-global-mappings-for-kernel-space.patch
queue-4.9/arm64-capabilities-handle-duplicate-entries-for-a-capability.patch
queue-4.9/arm64-entry-hook-up-entry-trampoline-to-exception-vectors.patch
queue-4.9/arm64-mm-invalidate-both-kernel-and-user-asids-when-performing-tlbi.patch
queue-4.9/arm64-mm-map-entry-trampoline-into-trampoline-and-kernel-page-tables.patch
queue-4.9/module-extend-rodata-off-boot-cmdline-parameter-to-module-mappings.patch
queue-4.9/arm64-kconfig-reword-unmap_kernel_at_el0-kconfig-entry.patch
queue-4.9/arm64-mm-move-asid-from-ttbr0-to-ttbr1.patch
queue-4.9/arm64-allow-checking-of-a-cpu-local-erratum.patch
queue-4.9/arm64-take-into-account-id_aa64pfr0_el1.csv3.patch
queue-4.9/arm64-kconfig-add-config_unmap_kernel_at_el0.patch
queue-4.9/arm64-idmap-use-awx-flags-for-.idmap.text-.pushsection-directives.patch
queue-4.9/arm64-factor-out-entry-stack-manipulation.patch
queue-4.9/arm64-entry-add-exception-trampoline-page-for-exceptions-from-el0.patch
queue-4.9/arm64-kpti-add-enable-callback-to-remap-swapper-using-ng-mappings.patch
queue-4.9/arm64-entry-add-fake-cpu-feature-for-unmapping-the-kernel-at-el0.patch
queue-4.9/arm64-cputype-add-midr-values-for-cavium-thunderx2-cpus.patch


* Patch "arm64: entry: Add exception trampoline page for exceptions from EL0" has been added to the 4.9-stable tree
  2018-04-03 11:09 ` [PATCH v4.9.y 08/27] arm64: entry: Add exception trampoline page for exceptions from EL0 Mark Rutland
@ 2018-04-05 19:42   ` gregkh
  0 siblings, 0 replies; 63+ messages in thread
From: gregkh @ 2018-04-05 19:42 UTC (permalink / raw)
  To: mark.rutland, alex.shi, ghackmann, gregkh, labbott, shankerd,
	will.deacon
  Cc: stable, stable-commits


This is a note to let you know that I've just added the patch titled

    arm64: entry: Add exception trampoline page for exceptions from EL0

to the 4.9-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     arm64-entry-add-exception-trampoline-page-for-exceptions-from-el0.patch
and it can be found in the queue-4.9 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


>From foo@baz Thu Apr  5 21:39:27 CEST 2018
From: Mark Rutland <mark.rutland@arm.com>
Date: Tue,  3 Apr 2018 12:09:04 +0100
Subject: arm64: entry: Add exception trampoline page for exceptions from EL0
To: stable@vger.kernel.org
Cc: mark.brown@linaro.org, ard.biesheuvel@linaro.org, marc.zyngier@arm.com, will.deacon@arm.com
Message-ID: <20180403110923.43575-9-mark.rutland@arm.com>

From: Will Deacon <will.deacon@arm.com>

commit c7b9adaf85f8 upstream.

To allow unmapping of the kernel whilst running at EL0, we need to
point the exception vectors at an entry trampoline that can map/unmap
the kernel on entry/exit respectively.

This patch adds the trampoline page, although it is not yet plugged
into the vector table and is therefore unused.
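
A C rendering of the TTBR1_EL1 arithmetic that the tramp_map_kernel
and tramp_unmap_kernel macros below perform; the constants stand in
for the real SWAPPER_DIR_SIZE and USER_ASID_FLAG and are assumptions,
not the actual values for every configuration:

#define SWAPPER_DIR_SIZE_DEMO	(3 * 4096UL)	/* assume 3 levels x 4K */
#define USER_ASID_FLAG_DEMO	(1UL << 48)	/* assume ASID tag bit 48 */

/* Entry: step back from tramp_pg_dir to swapper_pg_dir, drop the tag. */
static unsigned long tramp_map_kernel_demo(unsigned long ttbr1)
{
	return (ttbr1 - SWAPPER_DIR_SIZE_DEMO) & ~USER_ASID_FLAG_DEMO;
}

/* Exit: step forward to tramp_pg_dir again and tag the user ASID. */
static unsigned long tramp_unmap_kernel_demo(unsigned long ttbr1)
{
	return (ttbr1 + SWAPPER_DIR_SIZE_DEMO) | USER_ASID_FLAG_DEMO;
}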

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
[Alex: avoid dependency on SW PAN patches]
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
[Mark: remove dummy SW PAN definitions]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
Tested-by: Will Deacon <will.deacon@arm.com>
Tested-by: Greg Hackmann <ghackmann@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/kernel/entry.S       |   86 ++++++++++++++++++++++++++++++++++++++++
 arch/arm64/kernel/vmlinux.lds.S |   17 +++++++
 2 files changed, 103 insertions(+)

--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -29,9 +29,11 @@
 #include <asm/esr.h>
 #include <asm/irq.h>
 #include <asm/memory.h>
+#include <asm/mmu.h>
 #include <asm/thread_info.h>
 #include <asm/asm-uaccess.h>
 #include <asm/unistd.h>
+#include <asm/kernel-pgtable.h>
 
 /*
  * Context tracking subsystem.  Used to instrument transitions
@@ -806,6 +808,90 @@ __ni_sys_trace:
 
 	.popsection				// .entry.text
 
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+/*
+ * Exception vectors trampoline.
+ */
+	.pushsection ".entry.tramp.text", "ax"
+
+	.macro tramp_map_kernel, tmp
+	mrs	\tmp, ttbr1_el1
+	sub	\tmp, \tmp, #SWAPPER_DIR_SIZE
+	bic	\tmp, \tmp, #USER_ASID_FLAG
+	msr	ttbr1_el1, \tmp
+	.endm
+
+	.macro tramp_unmap_kernel, tmp
+	mrs	\tmp, ttbr1_el1
+	add	\tmp, \tmp, #SWAPPER_DIR_SIZE
+	orr	\tmp, \tmp, #USER_ASID_FLAG
+	msr	ttbr1_el1, \tmp
+	/*
+	 * We avoid running the post_ttbr_update_workaround here because the
+	 * user and kernel ASIDs don't have conflicting mappings, so any
+	 * "blessing" as described in:
+	 *
+	 *   http://lkml.kernel.org/r/56BB848A.6060603@caviumnetworks.com
+	 *
+	 * will not hurt correctness. Whilst this may partially defeat the
+	 * point of using split ASIDs in the first place, it avoids
+	 * the hit of invalidating the entire I-cache on every return to
+	 * userspace.
+	 */
+	.endm
+
+	.macro tramp_ventry, regsize = 64
+	.align	7
+1:
+	.if	\regsize == 64
+	msr	tpidrro_el0, x30	// Restored in kernel_ventry
+	.endif
+	tramp_map_kernel	x30
+	ldr	x30, =vectors
+	prfm	plil1strm, [x30, #(1b - tramp_vectors)]
+	msr	vbar_el1, x30
+	add	x30, x30, #(1b - tramp_vectors)
+	isb
+	br	x30
+	.endm
+
+	.macro tramp_exit, regsize = 64
+	adr	x30, tramp_vectors
+	msr	vbar_el1, x30
+	tramp_unmap_kernel	x30
+	.if	\regsize == 64
+	mrs	x30, far_el1
+	.endif
+	eret
+	.endm
+
+	.align	11
+ENTRY(tramp_vectors)
+	.space	0x400
+
+	tramp_ventry
+	tramp_ventry
+	tramp_ventry
+	tramp_ventry
+
+	tramp_ventry	32
+	tramp_ventry	32
+	tramp_ventry	32
+	tramp_ventry	32
+END(tramp_vectors)
+
+ENTRY(tramp_exit_native)
+	tramp_exit
+END(tramp_exit_native)
+
+ENTRY(tramp_exit_compat)
+	tramp_exit	32
+END(tramp_exit_compat)
+
+	.ltorg
+	.popsection				// .entry.tramp.text
+#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
+
 /*
  * Special system call wrappers.
  */
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -56,6 +56,17 @@ jiffies = jiffies_64;
 #define HIBERNATE_TEXT
 #endif
 
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+#define TRAMP_TEXT					\
+	. = ALIGN(PAGE_SIZE);				\
+	VMLINUX_SYMBOL(__entry_tramp_text_start) = .;	\
+	*(.entry.tramp.text)				\
+	. = ALIGN(PAGE_SIZE);				\
+	VMLINUX_SYMBOL(__entry_tramp_text_end) = .;
+#else
+#define TRAMP_TEXT
+#endif
+
 /*
  * The size of the PE/COFF section that covers the kernel image, which
  * runs from stext to _edata, must be a round multiple of the PE/COFF
@@ -128,6 +139,7 @@ SECTIONS
 			HYPERVISOR_TEXT
 			IDMAP_TEXT
 			HIBERNATE_TEXT
+			TRAMP_TEXT
 			*(.fixup)
 			*(.gnu.warning)
 		. = ALIGN(16);
@@ -216,6 +228,11 @@ SECTIONS
 	swapper_pg_dir = .;
 	. += SWAPPER_DIR_SIZE;
 
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+	tramp_pg_dir = .;
+	. += PAGE_SIZE;
+#endif
+
 	_end = .;
 
 	STABS_DEBUG


Patches currently in stable-queue which might be from mark.rutland@arm.com are

queue-4.9/arm64-mm-add-arm64_kernel_unmapped_at_el0-helper.patch
queue-4.9/arm64-entry-reword-comment-about-post_ttbr_update_workaround.patch
queue-4.9/arm64-kaslr-put-kernel-vectors-address-in-separate-data-page.patch
queue-4.9/arm64-turn-on-kpti-only-on-cpus-that-need-it.patch
queue-4.9/arm64-force-kpti-to-be-disabled-on-cavium-thunderx.patch
queue-4.9/arm64-mm-allocate-asids-in-pairs.patch
queue-4.9/arm64-tls-avoid-unconditional-zeroing-of-tpidrro_el0-for-native-tasks.patch
queue-4.9/arm64-use-ret-instruction-for-exiting-the-trampoline.patch
queue-4.9/arm64-entry-explicitly-pass-exception-level-to-kernel_ventry-macro.patch
queue-4.9/arm64-kpti-make-use-of-ng-dependent-on-arm64_kernel_unmapped_at_el0.patch
queue-4.9/arm64-mm-use-non-global-mappings-for-kernel-space.patch
queue-4.9/arm64-capabilities-handle-duplicate-entries-for-a-capability.patch
queue-4.9/arm64-entry-hook-up-entry-trampoline-to-exception-vectors.patch
queue-4.9/arm64-mm-invalidate-both-kernel-and-user-asids-when-performing-tlbi.patch
queue-4.9/arm64-mm-map-entry-trampoline-into-trampoline-and-kernel-page-tables.patch
queue-4.9/module-extend-rodata-off-boot-cmdline-parameter-to-module-mappings.patch
queue-4.9/arm64-kconfig-reword-unmap_kernel_at_el0-kconfig-entry.patch
queue-4.9/arm64-mm-move-asid-from-ttbr0-to-ttbr1.patch
queue-4.9/arm64-allow-checking-of-a-cpu-local-erratum.patch
queue-4.9/arm64-take-into-account-id_aa64pfr0_el1.csv3.patch
queue-4.9/arm64-kconfig-add-config_unmap_kernel_at_el0.patch
queue-4.9/arm64-idmap-use-awx-flags-for-.idmap.text-.pushsection-directives.patch
queue-4.9/arm64-factor-out-entry-stack-manipulation.patch
queue-4.9/arm64-entry-add-exception-trampoline-page-for-exceptions-from-el0.patch
queue-4.9/arm64-kpti-add-enable-callback-to-remap-swapper-using-ng-mappings.patch
queue-4.9/arm64-entry-add-fake-cpu-feature-for-unmapping-the-kernel-at-el0.patch
queue-4.9/arm64-cputype-add-midr-values-for-cavium-thunderx2-cpus.patch


* Patch "arm64: entry: Explicitly pass exception level to kernel_ventry macro" has been added to the 4.9-stable tree
  2018-04-03 11:09 ` [PATCH v4.9.y 10/27] arm64: entry: Explicitly pass exception level to kernel_ventry macro Mark Rutland
@ 2018-04-05 19:42   ` gregkh
  0 siblings, 0 replies; 63+ messages in thread
From: gregkh @ 2018-04-05 19:42 UTC (permalink / raw)
  To: mark.rutland, alex.shi, ghackmann, gregkh, labbott, shankerd,
	will.deacon
  Cc: stable, stable-commits


This is a note to let you know that I've just added the patch titled

    arm64: entry: Explicitly pass exception level to kernel_ventry macro

to the 4.9-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     arm64-entry-explicitly-pass-exception-level-to-kernel_ventry-macro.patch
and it can be found in the queue-4.9 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


>From foo@baz Thu Apr  5 21:39:27 CEST 2018
From: Mark Rutland <mark.rutland@arm.com>
Date: Tue,  3 Apr 2018 12:09:06 +0100
Subject: arm64: entry: Explicitly pass exception level to kernel_ventry macro
To: stable@vger.kernel.org
Cc: mark.brown@linaro.org, ard.biesheuvel@linaro.org, marc.zyngier@arm.com, will.deacon@arm.com
Message-ID: <20180403110923.43575-11-mark.rutland@arm.com>

From: Will Deacon <will.deacon@arm.com>

commit 5b1f7fe41909 upstream.

We will need to treat exceptions from EL0 differently in kernel_ventry,
so rework the macro to take the exception level as an argument and
construct the branch target using that.
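
The el\()\el\()_\label construction pastes tokens much as the C
preprocessor does; a stand-alone sketch of the equivalent C idiom
(the names are illustrative):

#include <stdio.h>

/* VENTRY(0, sync) expands to el0_sync, VENTRY(1, irq) to el1_irq,
 * mirroring how kernel_ventry now builds its branch target from the
 * exception level and label. */
#define VENTRY(lvl, lbl)	el##lvl##_##lbl

static void el0_sync(void) { puts("el0_sync"); }
static void el1_irq(void)  { puts("el1_irq");  }

int main(void)
{
	VENTRY(0, sync)();	/* calls el0_sync() */
	VENTRY(1, irq)();	/* calls el1_irq()  */
	return 0;
}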

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
[Mark: avoid dependency on C error handler backport]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
Tested-by: Will Deacon <will.deacon@arm.com>
Tested-by: Greg Hackmann <ghackmann@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/kernel/entry.S |   48 +++++++++++++++++++++++-----------------------
 1 file changed, 24 insertions(+), 24 deletions(-)

--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -70,10 +70,10 @@
 #define BAD_FIQ		2
 #define BAD_ERROR	3
 
-	.macro kernel_ventry	label
+	.macro kernel_ventry, el, label, regsize = 64
 	.align 7
 	sub	sp, sp, #S_FRAME_SIZE
-	b	\label
+	b	el\()\el\()_\label
 	.endm
 
 	.macro	kernel_entry, el, regsize = 64
@@ -264,31 +264,31 @@ tsk	.req	x28		// current thread_info
 
 	.align	11
 ENTRY(vectors)
-	kernel_ventry	el1_sync_invalid		// Synchronous EL1t
-	kernel_ventry	el1_irq_invalid			// IRQ EL1t
-	kernel_ventry	el1_fiq_invalid			// FIQ EL1t
-	kernel_ventry	el1_error_invalid		// Error EL1t
-
-	kernel_ventry	el1_sync			// Synchronous EL1h
-	kernel_ventry	el1_irq				// IRQ EL1h
-	kernel_ventry	el1_fiq_invalid			// FIQ EL1h
-	kernel_ventry	el1_error_invalid		// Error EL1h
-
-	kernel_ventry	el0_sync			// Synchronous 64-bit EL0
-	kernel_ventry	el0_irq				// IRQ 64-bit EL0
-	kernel_ventry	el0_fiq_invalid			// FIQ 64-bit EL0
-	kernel_ventry	el0_error_invalid		// Error 64-bit EL0
+	kernel_ventry	1, sync_invalid			// Synchronous EL1t
+	kernel_ventry	1, irq_invalid			// IRQ EL1t
+	kernel_ventry	1, fiq_invalid			// FIQ EL1t
+	kernel_ventry	1, error_invalid		// Error EL1t
+
+	kernel_ventry	1, sync				// Synchronous EL1h
+	kernel_ventry	1, irq				// IRQ EL1h
+	kernel_ventry	1, fiq_invalid			// FIQ EL1h
+	kernel_ventry	1, error_invalid		// Error EL1h
+
+	kernel_ventry	0, sync				// Synchronous 64-bit EL0
+	kernel_ventry	0, irq				// IRQ 64-bit EL0
+	kernel_ventry	0, fiq_invalid			// FIQ 64-bit EL0
+	kernel_ventry	0, error_invalid		// Error 64-bit EL0
 
 #ifdef CONFIG_COMPAT
-	kernel_ventry	el0_sync_compat			// Synchronous 32-bit EL0
-	kernel_ventry	el0_irq_compat			// IRQ 32-bit EL0
-	kernel_ventry	el0_fiq_invalid_compat		// FIQ 32-bit EL0
-	kernel_ventry	el0_error_invalid_compat	// Error 32-bit EL0
+	kernel_ventry	0, sync_compat, 32		// Synchronous 32-bit EL0
+	kernel_ventry	0, irq_compat, 32		// IRQ 32-bit EL0
+	kernel_ventry	0, fiq_invalid_compat, 32	// FIQ 32-bit EL0
+	kernel_ventry	0, error_invalid_compat, 32	// Error 32-bit EL0
 #else
-	kernel_ventry	el0_sync_invalid		// Synchronous 32-bit EL0
-	kernel_ventry	el0_irq_invalid			// IRQ 32-bit EL0
-	kernel_ventry	el0_fiq_invalid			// FIQ 32-bit EL0
-	kernel_ventry	el0_error_invalid		// Error 32-bit EL0
+	kernel_ventry	0, sync_invalid, 32		// Synchronous 32-bit EL0
+	kernel_ventry	0, irq_invalid, 32		// IRQ 32-bit EL0
+	kernel_ventry	0, fiq_invalid, 32		// FIQ 32-bit EL0
+	kernel_ventry	0, error_invalid, 32		// Error 32-bit EL0
 #endif
 END(vectors)
 


Patches currently in stable-queue which might be from mark.rutland@arm.com are

queue-4.9/arm64-mm-add-arm64_kernel_unmapped_at_el0-helper.patch
queue-4.9/arm64-entry-reword-comment-about-post_ttbr_update_workaround.patch
queue-4.9/arm64-kaslr-put-kernel-vectors-address-in-separate-data-page.patch
queue-4.9/arm64-turn-on-kpti-only-on-cpus-that-need-it.patch
queue-4.9/arm64-force-kpti-to-be-disabled-on-cavium-thunderx.patch
queue-4.9/arm64-mm-allocate-asids-in-pairs.patch
queue-4.9/arm64-tls-avoid-unconditional-zeroing-of-tpidrro_el0-for-native-tasks.patch
queue-4.9/arm64-use-ret-instruction-for-exiting-the-trampoline.patch
queue-4.9/arm64-entry-explicitly-pass-exception-level-to-kernel_ventry-macro.patch
queue-4.9/arm64-kpti-make-use-of-ng-dependent-on-arm64_kernel_unmapped_at_el0.patch
queue-4.9/arm64-mm-use-non-global-mappings-for-kernel-space.patch
queue-4.9/arm64-capabilities-handle-duplicate-entries-for-a-capability.patch
queue-4.9/arm64-entry-hook-up-entry-trampoline-to-exception-vectors.patch
queue-4.9/arm64-mm-invalidate-both-kernel-and-user-asids-when-performing-tlbi.patch
queue-4.9/arm64-mm-map-entry-trampoline-into-trampoline-and-kernel-page-tables.patch
queue-4.9/module-extend-rodata-off-boot-cmdline-parameter-to-module-mappings.patch
queue-4.9/arm64-kconfig-reword-unmap_kernel_at_el0-kconfig-entry.patch
queue-4.9/arm64-mm-move-asid-from-ttbr0-to-ttbr1.patch
queue-4.9/arm64-allow-checking-of-a-cpu-local-erratum.patch
queue-4.9/arm64-take-into-account-id_aa64pfr0_el1.csv3.patch
queue-4.9/arm64-kconfig-add-config_unmap_kernel_at_el0.patch
queue-4.9/arm64-idmap-use-awx-flags-for-.idmap.text-.pushsection-directives.patch
queue-4.9/arm64-factor-out-entry-stack-manipulation.patch
queue-4.9/arm64-entry-add-exception-trampoline-page-for-exceptions-from-el0.patch
queue-4.9/arm64-kpti-add-enable-callback-to-remap-swapper-using-ng-mappings.patch
queue-4.9/arm64-entry-add-fake-cpu-feature-for-unmapping-the-kernel-at-el0.patch
queue-4.9/arm64-cputype-add-midr-values-for-cavium-thunderx2-cpus.patch

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Patch "arm64: entry: Hook up entry trampoline to exception vectors" has been added to the 4.9-stable tree
  2018-04-03 11:09 ` [PATCH v4.9.y 11/27] arm64: entry: Hook up entry trampoline to exception vectors Mark Rutland
@ 2018-04-05 19:42   ` gregkh
  0 siblings, 0 replies; 63+ messages in thread
From: gregkh @ 2018-04-05 19:42 UTC (permalink / raw)
  To: mark.rutland, alex.shi, ghackmann, gregkh, labbott, shankerd,
	will.deacon
  Cc: stable, stable-commits


This is a note to let you know that I've just added the patch titled

    arm64: entry: Hook up entry trampoline to exception vectors

to the 4.9-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     arm64-entry-hook-up-entry-trampoline-to-exception-vectors.patch
and it can be found in the queue-4.9 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


>From foo@baz Thu Apr  5 21:39:27 CEST 2018
From: Mark Rutland <mark.rutland@arm.com>
Date: Tue,  3 Apr 2018 12:09:07 +0100
Subject: arm64: entry: Hook up entry trampoline to exception vectors
To: stable@vger.kernel.org
Cc: mark.brown@linaro.org, ard.biesheuvel@linaro.org, marc.zyngier@arm.com, will.deacon@arm.com
Message-ID: <20180403110923.43575-12-mark.rutland@arm.com>

From: Will Deacon <will.deacon@arm.com>

commit 4bf3286d29f3 upstream.

Hook up the entry trampoline to our exception vectors so that all
exceptions from and returns to EL0 go via the trampoline, which swizzles
the vector base register accordingly. Transitioning to and from the
kernel clobbers x30, so we use tpidrro_el0 and far_el1 as scratch
registers for native tasks.
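
The tramp_alias helper introduced below computes the fixmap alias of a
trampoline symbol, since the trampoline text is visible at TRAMP_VALIAS
in both the trampoline and kernel page tables. A sketch of what
"tramp_alias x30, tramp_exit_native" expands to, assuming the macro in
the hunk below:

	// hypothetical expansion of: tramp_alias x30, tramp_exit_native
	mov_q	x30, TRAMP_VALIAS
	add	x30, x30, #(tramp_exit_native - .entry.tramp.text)
	// ...after which the exit path does "br x30" via the alias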

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
Tested-by: Will Deacon <will.deacon@arm.com>
Tested-by: Greg Hackmann <ghackmann@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/kernel/entry.S |   39 ++++++++++++++++++++++++++++++++++++---
 1 file changed, 36 insertions(+), 3 deletions(-)

--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -72,10 +72,26 @@
 
 	.macro kernel_ventry, el, label, regsize = 64
 	.align 7
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+	.if	\el == 0
+	.if	\regsize == 64
+	mrs	x30, tpidrro_el0
+	msr	tpidrro_el0, xzr
+	.else
+	mov	x30, xzr
+	.endif
+	.endif
+#endif
+
 	sub	sp, sp, #S_FRAME_SIZE
 	b	el\()\el\()_\label
 	.endm
 
+	.macro tramp_alias, dst, sym
+	mov_q	\dst, TRAMP_VALIAS
+	add	\dst, \dst, #(\sym - .entry.tramp.text)
+	.endm
+
 	.macro	kernel_entry, el, regsize = 64
 	.if	\regsize == 32
 	mov	w0, w0				// zero upper 32 bits of x0
@@ -157,18 +173,20 @@
 	ct_user_enter
 	ldr	x23, [sp, #S_SP]		// load return stack pointer
 	msr	sp_el0, x23
+	tst	x22, #PSR_MODE32_BIT		// native task?
+	b.eq	3f
+
 #ifdef CONFIG_ARM64_ERRATUM_845719
 alternative_if ARM64_WORKAROUND_845719
-	tbz	x22, #4, 1f
 #ifdef CONFIG_PID_IN_CONTEXTIDR
 	mrs	x29, contextidr_el1
 	msr	contextidr_el1, x29
 #else
 	msr contextidr_el1, xzr
 #endif
-1:
 alternative_else_nop_endif
 #endif
+3:
 	.endif
 	msr	elr_el1, x21			// set up the return data
 	msr	spsr_el1, x22
@@ -189,7 +207,22 @@ alternative_else_nop_endif
 	ldp	x28, x29, [sp, #16 * 14]
 	ldr	lr, [sp, #S_LR]
 	add	sp, sp, #S_FRAME_SIZE		// restore sp
-	eret					// return to kernel
+
+#ifndef CONFIG_UNMAP_KERNEL_AT_EL0
+	eret
+#else
+	.if	\el == 0
+	bne	4f
+	msr	far_el1, x30
+	tramp_alias	x30, tramp_exit_native
+	br	x30
+4:
+	tramp_alias	x30, tramp_exit_compat
+	br	x30
+	.else
+	eret
+	.endif
+#endif
 	.endm
 
 	.macro	get_thread_info, rd


Patches currently in stable-queue which might be from mark.rutland@arm.com are

queue-4.9/arm64-mm-add-arm64_kernel_unmapped_at_el0-helper.patch
queue-4.9/arm64-entry-reword-comment-about-post_ttbr_update_workaround.patch
queue-4.9/arm64-kaslr-put-kernel-vectors-address-in-separate-data-page.patch
queue-4.9/arm64-turn-on-kpti-only-on-cpus-that-need-it.patch
queue-4.9/arm64-force-kpti-to-be-disabled-on-cavium-thunderx.patch
queue-4.9/arm64-mm-allocate-asids-in-pairs.patch
queue-4.9/arm64-tls-avoid-unconditional-zeroing-of-tpidrro_el0-for-native-tasks.patch
queue-4.9/arm64-use-ret-instruction-for-exiting-the-trampoline.patch
queue-4.9/arm64-entry-explicitly-pass-exception-level-to-kernel_ventry-macro.patch
queue-4.9/arm64-kpti-make-use-of-ng-dependent-on-arm64_kernel_unmapped_at_el0.patch
queue-4.9/arm64-mm-use-non-global-mappings-for-kernel-space.patch
queue-4.9/arm64-capabilities-handle-duplicate-entries-for-a-capability.patch
queue-4.9/arm64-entry-hook-up-entry-trampoline-to-exception-vectors.patch
queue-4.9/arm64-mm-invalidate-both-kernel-and-user-asids-when-performing-tlbi.patch
queue-4.9/arm64-mm-map-entry-trampoline-into-trampoline-and-kernel-page-tables.patch
queue-4.9/module-extend-rodata-off-boot-cmdline-parameter-to-module-mappings.patch
queue-4.9/arm64-kconfig-reword-unmap_kernel_at_el0-kconfig-entry.patch
queue-4.9/arm64-mm-move-asid-from-ttbr0-to-ttbr1.patch
queue-4.9/arm64-allow-checking-of-a-cpu-local-erratum.patch
queue-4.9/arm64-take-into-account-id_aa64pfr0_el1.csv3.patch
queue-4.9/arm64-kconfig-add-config_unmap_kernel_at_el0.patch
queue-4.9/arm64-idmap-use-awx-flags-for-.idmap.text-.pushsection-directives.patch
queue-4.9/arm64-factor-out-entry-stack-manipulation.patch
queue-4.9/arm64-entry-add-exception-trampoline-page-for-exceptions-from-el0.patch
queue-4.9/arm64-kpti-add-enable-callback-to-remap-swapper-using-ng-mappings.patch
queue-4.9/arm64-entry-add-fake-cpu-feature-for-unmapping-the-kernel-at-el0.patch
queue-4.9/arm64-cputype-add-midr-values-for-cavium-thunderx2-cpus.patch

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Patch "arm64: entry: Reword comment about post_ttbr_update_workaround" has been added to the 4.9-stable tree
  2018-04-03 11:09 ` [PATCH v4.9.y 26/27] arm64: entry: Reword comment about post_ttbr_update_workaround Mark Rutland
@ 2018-04-05 19:42   ` gregkh
  0 siblings, 0 replies; 63+ messages in thread
From: gregkh @ 2018-04-05 19:42 UTC (permalink / raw)
  To: mark.rutland, alex.shi, catalin.marinas, ghackmann, gregkh, will.deacon
  Cc: stable, stable-commits


This is a note to let you know that I've just added the patch titled

    arm64: entry: Reword comment about post_ttbr_update_workaround

to the 4.9-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     arm64-entry-reword-comment-about-post_ttbr_update_workaround.patch
and it can be found in the queue-4.9 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


>From foo@baz Thu Apr  5 21:39:28 CEST 2018
From: Mark Rutland <mark.rutland@arm.com>
Date: Tue,  3 Apr 2018 12:09:22 +0100
Subject: arm64: entry: Reword comment about post_ttbr_update_workaround
To: stable@vger.kernel.org
Cc: mark.brown@linaro.org, ard.biesheuvel@linaro.org, marc.zyngier@arm.com, will.deacon@arm.com
Message-ID: <20180403110923.43575-27-mark.rutland@arm.com>

From: Will Deacon <will.deacon@arm.com>

commit f167211a93ac upstream.

We don't fully understand the Cavium ThunderX erratum, but it appears
that mapping the kernel as nG can lead to horrible consequences such as
attempting to execute userspace from kernel context. Since kpti isn't
enabled for these CPUs anyway, simplify the comment justifying the lack
of post_ttbr_update_workaround in the exception trampoline.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
Tested-by: Will Deacon <will.deacon@arm.com>
Tested-by: Greg Hackmann <ghackmann@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/kernel/entry.S |   13 +++----------
 1 file changed, 3 insertions(+), 10 deletions(-)

--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -861,16 +861,9 @@ __ni_sys_trace:
 	orr	\tmp, \tmp, #USER_ASID_FLAG
 	msr	ttbr1_el1, \tmp
 	/*
-	 * We avoid running the post_ttbr_update_workaround here because the
-	 * user and kernel ASIDs don't have conflicting mappings, so any
-	 * "blessing" as described in:
-	 *
-	 *   http://lkml.kernel.org/r/56BB848A.6060603@caviumnetworks.com
-	 *
-	 * will not hurt correctness. Whilst this may partially defeat the
-	 * point of using split ASIDs in the first place, it avoids
-	 * the hit of invalidating the entire I-cache on every return to
-	 * userspace.
+	 * We avoid running the post_ttbr_update_workaround here because
+	 * it's only needed by Cavium ThunderX, which requires KPTI to be
+	 * disabled.
 	 */
 	.endm
 


Patches currently in stable-queue which might be from mark.rutland@arm.com are

queue-4.9/arm64-mm-add-arm64_kernel_unmapped_at_el0-helper.patch
queue-4.9/arm64-entry-reword-comment-about-post_ttbr_update_workaround.patch
queue-4.9/arm64-kaslr-put-kernel-vectors-address-in-separate-data-page.patch
queue-4.9/arm64-turn-on-kpti-only-on-cpus-that-need-it.patch
queue-4.9/arm64-force-kpti-to-be-disabled-on-cavium-thunderx.patch
queue-4.9/arm64-mm-allocate-asids-in-pairs.patch
queue-4.9/arm64-tls-avoid-unconditional-zeroing-of-tpidrro_el0-for-native-tasks.patch
queue-4.9/arm64-use-ret-instruction-for-exiting-the-trampoline.patch
queue-4.9/arm64-entry-explicitly-pass-exception-level-to-kernel_ventry-macro.patch
queue-4.9/arm64-kpti-make-use-of-ng-dependent-on-arm64_kernel_unmapped_at_el0.patch
queue-4.9/arm64-mm-use-non-global-mappings-for-kernel-space.patch
queue-4.9/arm64-capabilities-handle-duplicate-entries-for-a-capability.patch
queue-4.9/arm64-entry-hook-up-entry-trampoline-to-exception-vectors.patch
queue-4.9/arm64-mm-invalidate-both-kernel-and-user-asids-when-performing-tlbi.patch
queue-4.9/arm64-mm-map-entry-trampoline-into-trampoline-and-kernel-page-tables.patch
queue-4.9/module-extend-rodata-off-boot-cmdline-parameter-to-module-mappings.patch
queue-4.9/arm64-kconfig-reword-unmap_kernel_at_el0-kconfig-entry.patch
queue-4.9/arm64-mm-move-asid-from-ttbr0-to-ttbr1.patch
queue-4.9/arm64-allow-checking-of-a-cpu-local-erratum.patch
queue-4.9/arm64-take-into-account-id_aa64pfr0_el1.csv3.patch
queue-4.9/arm64-kconfig-add-config_unmap_kernel_at_el0.patch
queue-4.9/arm64-idmap-use-awx-flags-for-.idmap.text-.pushsection-directives.patch
queue-4.9/arm64-factor-out-entry-stack-manipulation.patch
queue-4.9/arm64-entry-add-exception-trampoline-page-for-exceptions-from-el0.patch
queue-4.9/arm64-kpti-add-enable-callback-to-remap-swapper-using-ng-mappings.patch
queue-4.9/arm64-entry-add-fake-cpu-feature-for-unmapping-the-kernel-at-el0.patch
queue-4.9/arm64-cputype-add-midr-values-for-cavium-thunderx2-cpus.patch

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Patch "arm64: Force KPTI to be disabled on Cavium ThunderX" has been added to the 4.9-stable tree
  2018-04-03 11:09 ` [PATCH v4.9.y 25/27] arm64: Force KPTI to be disabled on Cavium ThunderX Mark Rutland
@ 2018-04-05 19:42   ` gregkh
  0 siblings, 0 replies; 63+ messages in thread
From: gregkh @ 2018-04-05 19:42 UTC (permalink / raw)
  To: mark.rutland, alex.shi, catalin.marinas, ghackmann, gregkh,
	marc.zyngier, will.deacon
  Cc: stable, stable-commits


This is a note to let you know that I've just added the patch titled

    arm64: Force KPTI to be disabled on Cavium ThunderX

to the 4.9-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     arm64-force-kpti-to-be-disabled-on-cavium-thunderx.patch
and it can be found in the queue-4.9 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


>From foo@baz Thu Apr  5 21:39:28 CEST 2018
From: Mark Rutland <mark.rutland@arm.com>
Date: Tue,  3 Apr 2018 12:09:21 +0100
Subject: arm64: Force KPTI to be disabled on Cavium ThunderX
To: stable@vger.kernel.org
Cc: mark.brown@linaro.org, ard.biesheuvel@linaro.org, marc.zyngier@arm.com, will.deacon@arm.com
Message-ID: <20180403110923.43575-26-mark.rutland@arm.com>

From: Marc Zyngier <marc.zyngier@arm.com>

commit 6dc52b15c4a4 upstream.

Cavium ThunderX's erratum 27456 results in a corruption of icache
entries that are loaded from memory that is mapped as non-global
(i.e. ASID-tagged).

As KPTI is based on memory being mapped non-global, let's prevent
it from kicking in if this erratum is detected.
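
Note that __kpti_forced is otherwise driven by the kpti= early parameter
added by "arm64: Take into account ID_AA64PFR0_EL1.CSV3" earlier in this
series; this patch reuses the same variable, so the erratum behaves like
a forced "kpti=off". A minimal sketch of the resulting decision order,
assuming the hunk below:

	/* sketch only: the erratum check runs before the forced on/off check */
	if (cpus_have_cap(ARM64_WORKAROUND_CAVIUM_27456))
		__kpti_forced = -1;		/* force KPTI off */
	if (__kpti_forced)
		return __kpti_forced > 0;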

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
[will: Update comment]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
[Alex: use cpus_have_cap as cpus_have_const_cap doesn't exist in v4.9]
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
Tested-by: Will Deacon <will.deacon@arm.com>
Tested-by: Greg Hackmann <ghackmann@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/kernel/cpufeature.c |   17 ++++++++++++++---
 1 file changed, 14 insertions(+), 3 deletions(-)

--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -753,12 +753,23 @@ static int __kpti_forced; /* 0: not forc
 static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
 				int __unused)
 {
+	char const *str = "command line option";
 	u64 pfr0 = read_system_reg(SYS_ID_AA64PFR0_EL1);
 
-	/* Forced on command line? */
+	/*
+	 * For reasons that aren't entirely clear, enabling KPTI on Cavium
+	 * ThunderX leads to apparent I-cache corruption of kernel text, which
+	 * ends as well as you might imagine. Don't even try.
+	 */
+	if (cpus_have_cap(ARM64_WORKAROUND_CAVIUM_27456)) {
+		str = "ARM64_WORKAROUND_CAVIUM_27456";
+		__kpti_forced = -1;
+	}
+
+	/* Forced? */
 	if (__kpti_forced) {
-		pr_info_once("kernel page table isolation forced %s by command line option\n",
-			     __kpti_forced > 0 ? "ON" : "OFF");
+		pr_info_once("kernel page table isolation forced %s by %s\n",
+			     __kpti_forced > 0 ? "ON" : "OFF", str);
 		return __kpti_forced > 0;
 	}
 


Patches currently in stable-queue which might be from mark.rutland@arm.com are

queue-4.9/arm64-mm-add-arm64_kernel_unmapped_at_el0-helper.patch
queue-4.9/arm64-entry-reword-comment-about-post_ttbr_update_workaround.patch
queue-4.9/arm64-kaslr-put-kernel-vectors-address-in-separate-data-page.patch
queue-4.9/arm64-turn-on-kpti-only-on-cpus-that-need-it.patch
queue-4.9/arm64-force-kpti-to-be-disabled-on-cavium-thunderx.patch
queue-4.9/arm64-mm-allocate-asids-in-pairs.patch
queue-4.9/arm64-tls-avoid-unconditional-zeroing-of-tpidrro_el0-for-native-tasks.patch
queue-4.9/arm64-use-ret-instruction-for-exiting-the-trampoline.patch
queue-4.9/arm64-entry-explicitly-pass-exception-level-to-kernel_ventry-macro.patch
queue-4.9/arm64-kpti-make-use-of-ng-dependent-on-arm64_kernel_unmapped_at_el0.patch
queue-4.9/arm64-mm-use-non-global-mappings-for-kernel-space.patch
queue-4.9/arm64-capabilities-handle-duplicate-entries-for-a-capability.patch
queue-4.9/arm64-entry-hook-up-entry-trampoline-to-exception-vectors.patch
queue-4.9/arm64-mm-invalidate-both-kernel-and-user-asids-when-performing-tlbi.patch
queue-4.9/arm64-mm-map-entry-trampoline-into-trampoline-and-kernel-page-tables.patch
queue-4.9/module-extend-rodata-off-boot-cmdline-parameter-to-module-mappings.patch
queue-4.9/arm64-kconfig-reword-unmap_kernel_at_el0-kconfig-entry.patch
queue-4.9/arm64-mm-move-asid-from-ttbr0-to-ttbr1.patch
queue-4.9/arm64-allow-checking-of-a-cpu-local-erratum.patch
queue-4.9/arm64-take-into-account-id_aa64pfr0_el1.csv3.patch
queue-4.9/arm64-kconfig-add-config_unmap_kernel_at_el0.patch
queue-4.9/arm64-idmap-use-awx-flags-for-.idmap.text-.pushsection-directives.patch
queue-4.9/arm64-factor-out-entry-stack-manipulation.patch
queue-4.9/arm64-entry-add-exception-trampoline-page-for-exceptions-from-el0.patch
queue-4.9/arm64-kpti-add-enable-callback-to-remap-swapper-using-ng-mappings.patch
queue-4.9/arm64-entry-add-fake-cpu-feature-for-unmapping-the-kernel-at-el0.patch
queue-4.9/arm64-cputype-add-midr-values-for-cavium-thunderx2-cpus.patch

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Patch "arm64: factor out entry stack manipulation" has been added to the 4.9-stable tree
  2018-04-03 11:09 ` [PATCH v4.9.y 06/27] arm64: factor out entry stack manipulation Mark Rutland
@ 2018-04-05 19:42   ` gregkh
  0 siblings, 0 replies; 63+ messages in thread
From: gregkh @ 2018-04-05 19:42 UTC (permalink / raw)
  To: mark.rutland, alex.shi, ard.biesheuvel, catalin.marinas,
	ghackmann, gregkh, james.morse, labbott, will.deacon
  Cc: stable, stable-commits


This is a note to let you know that I've just added the patch titled

    arm64: factor out entry stack manipulation

to the 4.9-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     arm64-factor-out-entry-stack-manipulation.patch
and it can be found in the queue-4.9 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


>From foo@baz Thu Apr  5 21:39:27 CEST 2018
From: Mark Rutland <mark.rutland@arm.com>
Date: Tue,  3 Apr 2018 12:09:02 +0100
Subject: arm64: factor out entry stack manipulation
To: stable@vger.kernel.org
Cc: mark.brown@linaro.org, ard.biesheuvel@linaro.org, marc.zyngier@arm.com, will.deacon@arm.com
Message-ID: <20180403110923.43575-7-mark.rutland@arm.com>

From: Mark Rutland <mark.rutland@arm.com>

commit b11e5759bfac upstream.

In subsequent patches, we will detect stack overflow in our exception
entry code by verifying the SP after it has been decremented to make
space for the exception regs.

This verification code is small, and we can minimize its impact by
placing it directly in the vectors. To avoid redundant modification of
the SP, we also need to move the initial decrement of the SP into the
vectors.

As a preparatory step, this patch introduces kernel_ventry, which
performs this decrement, and updates the entry code accordingly.
Subsequent patches will fold SP verification into kernel_ventry.

There should be no functional change as a result of this patch.
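
For illustration, where the old ventry emitted only an aligned branch
(the SP decrement lived at the top of kernel_entry), the new macro emits
both; a minimal sketch of what "kernel_ventry el1_sync" expands to,
assuming the macro body in the hunk below:

	// hypothetical expansion of: kernel_ventry el1_sync
	.align 7
	sub	sp, sp, #S_FRAME_SIZE
	b	el1_sync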

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
[Mark: turn into prep patch, expand commit msg]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
Tested-by: Will Deacon <will.deacon@arm.com>
Tested-by: Greg Hackmann <ghackmann@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/kernel/entry.S |   51 +++++++++++++++++++++++++---------------------
 1 file changed, 28 insertions(+), 23 deletions(-)

--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -68,8 +68,13 @@
 #define BAD_FIQ		2
 #define BAD_ERROR	3
 
-	.macro	kernel_entry, el, regsize = 64
+	.macro kernel_ventry	label
+	.align 7
 	sub	sp, sp, #S_FRAME_SIZE
+	b	\label
+	.endm
+
+	.macro	kernel_entry, el, regsize = 64
 	.if	\regsize == 32
 	mov	w0, w0				// zero upper 32 bits of x0
 	.endif
@@ -257,31 +262,31 @@ tsk	.req	x28		// current thread_info
 
 	.align	11
 ENTRY(vectors)
-	ventry	el1_sync_invalid		// Synchronous EL1t
-	ventry	el1_irq_invalid			// IRQ EL1t
-	ventry	el1_fiq_invalid			// FIQ EL1t
-	ventry	el1_error_invalid		// Error EL1t
-
-	ventry	el1_sync			// Synchronous EL1h
-	ventry	el1_irq				// IRQ EL1h
-	ventry	el1_fiq_invalid			// FIQ EL1h
-	ventry	el1_error_invalid		// Error EL1h
-
-	ventry	el0_sync			// Synchronous 64-bit EL0
-	ventry	el0_irq				// IRQ 64-bit EL0
-	ventry	el0_fiq_invalid			// FIQ 64-bit EL0
-	ventry	el0_error_invalid		// Error 64-bit EL0
+	kernel_ventry	el1_sync_invalid		// Synchronous EL1t
+	kernel_ventry	el1_irq_invalid			// IRQ EL1t
+	kernel_ventry	el1_fiq_invalid			// FIQ EL1t
+	kernel_ventry	el1_error_invalid		// Error EL1t
+
+	kernel_ventry	el1_sync			// Synchronous EL1h
+	kernel_ventry	el1_irq				// IRQ EL1h
+	kernel_ventry	el1_fiq_invalid			// FIQ EL1h
+	kernel_ventry	el1_error_invalid		// Error EL1h
+
+	kernel_ventry	el0_sync			// Synchronous 64-bit EL0
+	kernel_ventry	el0_irq				// IRQ 64-bit EL0
+	kernel_ventry	el0_fiq_invalid			// FIQ 64-bit EL0
+	kernel_ventry	el0_error_invalid		// Error 64-bit EL0
 
 #ifdef CONFIG_COMPAT
-	ventry	el0_sync_compat			// Synchronous 32-bit EL0
-	ventry	el0_irq_compat			// IRQ 32-bit EL0
-	ventry	el0_fiq_invalid_compat		// FIQ 32-bit EL0
-	ventry	el0_error_invalid_compat	// Error 32-bit EL0
+	kernel_ventry	el0_sync_compat			// Synchronous 32-bit EL0
+	kernel_ventry	el0_irq_compat			// IRQ 32-bit EL0
+	kernel_ventry	el0_fiq_invalid_compat		// FIQ 32-bit EL0
+	kernel_ventry	el0_error_invalid_compat	// Error 32-bit EL0
 #else
-	ventry	el0_sync_invalid		// Synchronous 32-bit EL0
-	ventry	el0_irq_invalid			// IRQ 32-bit EL0
-	ventry	el0_fiq_invalid			// FIQ 32-bit EL0
-	ventry	el0_error_invalid		// Error 32-bit EL0
+	kernel_ventry	el0_sync_invalid		// Synchronous 32-bit EL0
+	kernel_ventry	el0_irq_invalid			// IRQ 32-bit EL0
+	kernel_ventry	el0_fiq_invalid			// FIQ 32-bit EL0
+	kernel_ventry	el0_error_invalid		// Error 32-bit EL0
 #endif
 END(vectors)
 


Patches currently in stable-queue which might be from mark.rutland@arm.com are

queue-4.9/arm64-mm-add-arm64_kernel_unmapped_at_el0-helper.patch
queue-4.9/arm64-entry-reword-comment-about-post_ttbr_update_workaround.patch
queue-4.9/arm64-kaslr-put-kernel-vectors-address-in-separate-data-page.patch
queue-4.9/arm64-turn-on-kpti-only-on-cpus-that-need-it.patch
queue-4.9/arm64-force-kpti-to-be-disabled-on-cavium-thunderx.patch
queue-4.9/arm64-mm-allocate-asids-in-pairs.patch
queue-4.9/arm64-tls-avoid-unconditional-zeroing-of-tpidrro_el0-for-native-tasks.patch
queue-4.9/arm64-use-ret-instruction-for-exiting-the-trampoline.patch
queue-4.9/arm64-entry-explicitly-pass-exception-level-to-kernel_ventry-macro.patch
queue-4.9/arm64-kpti-make-use-of-ng-dependent-on-arm64_kernel_unmapped_at_el0.patch
queue-4.9/arm64-mm-use-non-global-mappings-for-kernel-space.patch
queue-4.9/arm64-capabilities-handle-duplicate-entries-for-a-capability.patch
queue-4.9/arm64-entry-hook-up-entry-trampoline-to-exception-vectors.patch
queue-4.9/arm64-mm-invalidate-both-kernel-and-user-asids-when-performing-tlbi.patch
queue-4.9/arm64-mm-map-entry-trampoline-into-trampoline-and-kernel-page-tables.patch
queue-4.9/module-extend-rodata-off-boot-cmdline-parameter-to-module-mappings.patch
queue-4.9/arm64-kconfig-reword-unmap_kernel_at_el0-kconfig-entry.patch
queue-4.9/arm64-mm-move-asid-from-ttbr0-to-ttbr1.patch
queue-4.9/arm64-allow-checking-of-a-cpu-local-erratum.patch
queue-4.9/arm64-take-into-account-id_aa64pfr0_el1.csv3.patch
queue-4.9/arm64-kconfig-add-config_unmap_kernel_at_el0.patch
queue-4.9/arm64-idmap-use-awx-flags-for-.idmap.text-.pushsection-directives.patch
queue-4.9/arm64-factor-out-entry-stack-manipulation.patch
queue-4.9/arm64-entry-add-exception-trampoline-page-for-exceptions-from-el0.patch
queue-4.9/arm64-kpti-add-enable-callback-to-remap-swapper-using-ng-mappings.patch
queue-4.9/arm64-entry-add-fake-cpu-feature-for-unmapping-the-kernel-at-el0.patch
queue-4.9/arm64-cputype-add-midr-values-for-cavium-thunderx2-cpus.patch

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Patch "arm64: idmap: Use "awx" flags for .idmap.text .pushsection directives" has been added to the 4.9-stable tree
  2018-04-03 11:09 ` [PATCH v4.9.y 27/27] arm64: idmap: Use "awx" flags for .idmap.text .pushsection directives Mark Rutland
@ 2018-04-05 19:42   ` gregkh
  0 siblings, 0 replies; 63+ messages in thread
From: gregkh @ 2018-04-05 19:42 UTC (permalink / raw)
  To: mark.rutland, alex.shi, catalin.marinas, ghackmann, gregkh,
	marc.zyngier, will.deacon
  Cc: stable, stable-commits


This is a note to let you know that I've just added the patch titled

    arm64: idmap: Use "awx" flags for .idmap.text .pushsection directives

to the 4.9-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     arm64-idmap-use-awx-flags-for-.idmap.text-.pushsection-directives.patch
and it can be found in the queue-4.9 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


>From foo@baz Thu Apr  5 21:39:28 CEST 2018
From: Mark Rutland <mark.rutland@arm.com>
Date: Tue,  3 Apr 2018 12:09:23 +0100
Subject: arm64: idmap: Use "awx" flags for .idmap.text .pushsection directives
To: stable@vger.kernel.org
Cc: mark.brown@linaro.org, ard.biesheuvel@linaro.org, marc.zyngier@arm.com, will.deacon@arm.com
Message-ID: <20180403110923.43575-28-mark.rutland@arm.com>

From: Will Deacon <will.deacon@arm.com>

commit 439e70e27a51 upstream.

The identity map is mapped as both writeable and executable by the
SWAPPER_MM_MMUFLAGS, and this is relied upon by the kpti code to manage
a synchronisation flag. Update the .pushsection flags to reflect the
actual mapping attributes.
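
For reference, the flag letters are the standard GNU assembler section
flags: "a" (SHF_ALLOC), "w" (SHF_WRITE) and "x" (SHF_EXECINSTR), so the
change throughout is simply:

	/* "ax" claimed read-only text; "awx" matches the RWX idmap mapping */
	.pushsection ".idmap.text", "awx"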

Reported-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
Tested-by: Will Deacon <will.deacon@arm.com>
Tested-by: Greg Hackmann <ghackmann@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/kernel/cpu-reset.S |    2 +-
 arch/arm64/kernel/head.S      |    2 +-
 arch/arm64/kernel/sleep.S     |    2 +-
 arch/arm64/mm/proc.S          |    8 ++++----
 4 files changed, 7 insertions(+), 7 deletions(-)

--- a/arch/arm64/kernel/cpu-reset.S
+++ b/arch/arm64/kernel/cpu-reset.S
@@ -16,7 +16,7 @@
 #include <asm/virt.h>
 
 .text
-.pushsection    .idmap.text, "ax"
+.pushsection    .idmap.text, "awx"
 
 /*
  * __cpu_soft_restart(el2_switch, entry, arg0, arg1, arg2) - Helper for
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -473,7 +473,7 @@ ENDPROC(__primary_switched)
  * end early head section, begin head code that is also used for
  * hotplug and needs to have the same protections as the text region
  */
-	.section ".idmap.text","ax"
+	.section ".idmap.text","awx"
 
 ENTRY(kimage_vaddr)
 	.quad		_text - TEXT_OFFSET
--- a/arch/arm64/kernel/sleep.S
+++ b/arch/arm64/kernel/sleep.S
@@ -95,7 +95,7 @@ ENTRY(__cpu_suspend_enter)
 	ret
 ENDPROC(__cpu_suspend_enter)
 
-	.pushsection ".idmap.text", "ax"
+	.pushsection ".idmap.text", "awx"
 ENTRY(cpu_resume)
 	bl	el2_setup		// if in EL2 drop to EL1 cleanly
 	bl	__cpu_setup
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -83,7 +83,7 @@ ENDPROC(cpu_do_suspend)
  *
  * x0: Address of context pointer
  */
-	.pushsection ".idmap.text", "ax"
+	.pushsection ".idmap.text", "awx"
 ENTRY(cpu_do_resume)
 	ldp	x2, x3, [x0]
 	ldp	x4, x5, [x0, #16]
@@ -147,7 +147,7 @@ alternative_else_nop_endif
 	ret
 ENDPROC(cpu_do_switch_mm)
 
-	.pushsection ".idmap.text", "ax"
+	.pushsection ".idmap.text", "awx"
 
 .macro	__idmap_cpu_set_reserved_ttbr1, tmp1, tmp2
 	adrp	\tmp1, empty_zero_page
@@ -180,7 +180,7 @@ ENDPROC(idmap_cpu_replace_ttbr1)
 	.popsection
 
 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
-	.pushsection ".idmap.text", "ax"
+	.pushsection ".idmap.text", "awx"
 
 	.macro	__idmap_kpti_get_pgtable_ent, type
 	dc	cvac, cur_\()\type\()p		// Ensure any existing dirty
@@ -368,7 +368,7 @@ ENDPROC(idmap_kpti_install_ng_mappings)
  *	Initialise the processor for turning the MMU on.  Return in x0 the
  *	value of the SCTLR_EL1 register.
  */
-	.pushsection ".idmap.text", "ax"
+	.pushsection ".idmap.text", "awx"
 ENTRY(__cpu_setup)
 	tlbi	vmalle1				// Invalidate local TLB
 	dsb	nsh


Patches currently in stable-queue which might be from mark.rutland@arm.com are

queue-4.9/arm64-mm-add-arm64_kernel_unmapped_at_el0-helper.patch
queue-4.9/arm64-entry-reword-comment-about-post_ttbr_update_workaround.patch
queue-4.9/arm64-kaslr-put-kernel-vectors-address-in-separate-data-page.patch
queue-4.9/arm64-turn-on-kpti-only-on-cpus-that-need-it.patch
queue-4.9/arm64-force-kpti-to-be-disabled-on-cavium-thunderx.patch
queue-4.9/arm64-mm-allocate-asids-in-pairs.patch
queue-4.9/arm64-tls-avoid-unconditional-zeroing-of-tpidrro_el0-for-native-tasks.patch
queue-4.9/arm64-use-ret-instruction-for-exiting-the-trampoline.patch
queue-4.9/arm64-entry-explicitly-pass-exception-level-to-kernel_ventry-macro.patch
queue-4.9/arm64-kpti-make-use-of-ng-dependent-on-arm64_kernel_unmapped_at_el0.patch
queue-4.9/arm64-mm-use-non-global-mappings-for-kernel-space.patch
queue-4.9/arm64-capabilities-handle-duplicate-entries-for-a-capability.patch
queue-4.9/arm64-entry-hook-up-entry-trampoline-to-exception-vectors.patch
queue-4.9/arm64-mm-invalidate-both-kernel-and-user-asids-when-performing-tlbi.patch
queue-4.9/arm64-mm-map-entry-trampoline-into-trampoline-and-kernel-page-tables.patch
queue-4.9/module-extend-rodata-off-boot-cmdline-parameter-to-module-mappings.patch
queue-4.9/arm64-kconfig-reword-unmap_kernel_at_el0-kconfig-entry.patch
queue-4.9/arm64-mm-move-asid-from-ttbr0-to-ttbr1.patch
queue-4.9/arm64-allow-checking-of-a-cpu-local-erratum.patch
queue-4.9/arm64-take-into-account-id_aa64pfr0_el1.csv3.patch
queue-4.9/arm64-kconfig-add-config_unmap_kernel_at_el0.patch
queue-4.9/arm64-idmap-use-awx-flags-for-.idmap.text-.pushsection-directives.patch
queue-4.9/arm64-factor-out-entry-stack-manipulation.patch
queue-4.9/arm64-entry-add-exception-trampoline-page-for-exceptions-from-el0.patch
queue-4.9/arm64-kpti-add-enable-callback-to-remap-swapper-using-ng-mappings.patch
queue-4.9/arm64-entry-add-fake-cpu-feature-for-unmapping-the-kernel-at-el0.patch
queue-4.9/arm64-cputype-add-midr-values-for-cavium-thunderx2-cpus.patch

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Patch "arm64: Kconfig: Add CONFIG_UNMAP_KERNEL_AT_EL0" has been added to the 4.9-stable tree
  2018-04-03 11:09 ` [PATCH v4.9.y 16/27] arm64: Kconfig: Add CONFIG_UNMAP_KERNEL_AT_EL0 Mark Rutland
@ 2018-04-05 19:42   ` gregkh
  0 siblings, 0 replies; 63+ messages in thread
From: gregkh @ 2018-04-05 19:42 UTC (permalink / raw)
  To: mark.rutland, alex.shi, ghackmann, gregkh, labbott, shankerd,
	will.deacon
  Cc: stable, stable-commits


This is a note to let you know that I've just added the patch titled

    arm64: Kconfig: Add CONFIG_UNMAP_KERNEL_AT_EL0

to the 4.9-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     arm64-kconfig-add-config_unmap_kernel_at_el0.patch
and it can be found in the queue-4.9 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


>From foo@baz Thu Apr  5 21:39:28 CEST 2018
From: Mark Rutland <mark.rutland@arm.com>
Date: Tue,  3 Apr 2018 12:09:12 +0100
Subject: arm64: Kconfig: Add CONFIG_UNMAP_KERNEL_AT_EL0
To: stable@vger.kernel.org
Cc: mark.brown@linaro.org, ard.biesheuvel@linaro.org, marc.zyngier@arm.com, will.deacon@arm.com
Message-ID: <20180403110923.43575-17-mark.rutland@arm.com>

From: Will Deacon <will.deacon@arm.com>

commit 084eb77cd3a8 upstream.

Add a Kconfig entry to control use of the entry trampoline, which allows
us to unmap the kernel whilst running in userspace and improve the
robustness of KASLR.
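
Once this entry is merged, an arm64 defconfig build should end up with
the option enabled; a minimal .config fragment, assuming nothing later
in the series overrides the default:

	CONFIG_UNMAP_KERNEL_AT_EL0=y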

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
Tested-by: Will Deacon <will.deacon@arm.com>
Tested-by: Greg Hackmann <ghackmann@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/Kconfig |   13 +++++++++++++
 1 file changed, 13 insertions(+)

--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -733,6 +733,19 @@ config FORCE_MAX_ZONEORDER
 	  However for 4K, we choose a higher default value, 11 as opposed to 10, giving us
 	  4M allocations matching the default size used by generic code.
 
+config UNMAP_KERNEL_AT_EL0
+	bool "Unmap kernel when running in userspace (aka \"KAISER\")"
+	default y
+	help
+	  Some attacks against KASLR make use of the timing difference between
+	  a permission fault which could arise from a page table entry that is
+	  present in the TLB, and a translation fault which always requires a
+	  page table walk. This option defends against these attacks by unmapping
+	  the kernel whilst running in userspace, therefore forcing translation
+	  faults for all of kernel space.
+
+	  If unsure, say Y.
+
 menuconfig ARMV8_DEPRECATED
 	bool "Emulate deprecated/obsolete ARMv8 instructions"
 	depends on COMPAT


Patches currently in stable-queue which might be from mark.rutland@arm.com are

queue-4.9/arm64-mm-add-arm64_kernel_unmapped_at_el0-helper.patch
queue-4.9/arm64-entry-reword-comment-about-post_ttbr_update_workaround.patch
queue-4.9/arm64-kaslr-put-kernel-vectors-address-in-separate-data-page.patch
queue-4.9/arm64-turn-on-kpti-only-on-cpus-that-need-it.patch
queue-4.9/arm64-force-kpti-to-be-disabled-on-cavium-thunderx.patch
queue-4.9/arm64-mm-allocate-asids-in-pairs.patch
queue-4.9/arm64-tls-avoid-unconditional-zeroing-of-tpidrro_el0-for-native-tasks.patch
queue-4.9/arm64-use-ret-instruction-for-exiting-the-trampoline.patch
queue-4.9/arm64-entry-explicitly-pass-exception-level-to-kernel_ventry-macro.patch
queue-4.9/arm64-kpti-make-use-of-ng-dependent-on-arm64_kernel_unmapped_at_el0.patch
queue-4.9/arm64-mm-use-non-global-mappings-for-kernel-space.patch
queue-4.9/arm64-capabilities-handle-duplicate-entries-for-a-capability.patch
queue-4.9/arm64-entry-hook-up-entry-trampoline-to-exception-vectors.patch
queue-4.9/arm64-mm-invalidate-both-kernel-and-user-asids-when-performing-tlbi.patch
queue-4.9/arm64-mm-map-entry-trampoline-into-trampoline-and-kernel-page-tables.patch
queue-4.9/module-extend-rodata-off-boot-cmdline-parameter-to-module-mappings.patch
queue-4.9/arm64-kconfig-reword-unmap_kernel_at_el0-kconfig-entry.patch
queue-4.9/arm64-mm-move-asid-from-ttbr0-to-ttbr1.patch
queue-4.9/arm64-allow-checking-of-a-cpu-local-erratum.patch
queue-4.9/arm64-take-into-account-id_aa64pfr0_el1.csv3.patch
queue-4.9/arm64-kconfig-add-config_unmap_kernel_at_el0.patch
queue-4.9/arm64-idmap-use-awx-flags-for-.idmap.text-.pushsection-directives.patch
queue-4.9/arm64-factor-out-entry-stack-manipulation.patch
queue-4.9/arm64-entry-add-exception-trampoline-page-for-exceptions-from-el0.patch
queue-4.9/arm64-kpti-add-enable-callback-to-remap-swapper-using-ng-mappings.patch
queue-4.9/arm64-entry-add-fake-cpu-feature-for-unmapping-the-kernel-at-el0.patch
queue-4.9/arm64-cputype-add-midr-values-for-cavium-thunderx2-cpus.patch

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Patch "arm64: kaslr: Put kernel vectors address in separate data page" has been added to the 4.9-stable tree
  2018-04-03 11:09 ` [PATCH v4.9.y 14/27] arm64: kaslr: Put kernel vectors address in separate data page Mark Rutland
@ 2018-04-05 19:42   ` gregkh
  0 siblings, 0 replies; 63+ messages in thread
From: gregkh @ 2018-04-05 19:42 UTC (permalink / raw)
  To: mark.rutland, alex.shi, ard.biesheuvel, ghackmann, gregkh,
	labbott, shankerd, will.deacon
  Cc: stable, stable-commits


This is a note to let you know that I've just added the patch titled

    arm64: kaslr: Put kernel vectors address in separate data page

to the 4.9-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     arm64-kaslr-put-kernel-vectors-address-in-separate-data-page.patch
and it can be found in the queue-4.9 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


>From foo@baz Thu Apr  5 21:39:28 CEST 2018
From: Mark Rutland <mark.rutland@arm.com>
Date: Tue,  3 Apr 2018 12:09:10 +0100
Subject: arm64: kaslr: Put kernel vectors address in separate data page
To: stable@vger.kernel.org
Cc: mark.brown@linaro.org, ard.biesheuvel@linaro.org, marc.zyngier@arm.com, will.deacon@arm.com
Message-ID: <20180403110923.43575-15-mark.rutland@arm.com>

From: Will Deacon <will.deacon@arm.com>

commit 6c27c4082f4f upstream.

The literal pool entry used to locate the vectors base is the only
piece of information in the trampoline page that reveals the true
location of the kernel.

This patch moves it into a page-aligned region of the .rodata section
and maps this adjacent to the trampoline text via an additional fixmap
entry, which protects against any accidental leakage of the trampoline
contents.
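
This layout works because fixmap virtual addresses grow downwards as the
enum index grows, so declaring FIX_ENTRY_TRAMP_DATA immediately before
FIX_ENTRY_TRAMP_TEXT places the data page directly above the trampoline
text. A sketch of the addressing the trampoline relies on, assuming the
usual arm64 __fix_to_virt() definition:

	/* __fix_to_virt(n) == FIXADDR_TOP - (n << PAGE_SHIFT), so:
	 *   __fix_to_virt(FIX_ENTRY_TRAMP_TEXT) == TRAMP_VALIAS
	 *   __fix_to_virt(FIX_ENTRY_TRAMP_DATA) == TRAMP_VALIAS + PAGE_SIZE
	 * and "adr x30, tramp_vectors + PAGE_SIZE" reads the data page.
	 */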

Suggested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
[Alex: avoid ARM64_WORKAROUND_QCOM_FALKOR_E1003 dependency]
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
Tested-by: Will Deacon <will.deacon@arm.com>
Tested-by: Greg Hackmann <ghackmann@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/include/asm/fixmap.h |    1 +
 arch/arm64/kernel/entry.S       |   14 ++++++++++++++
 arch/arm64/kernel/vmlinux.lds.S |    5 ++++-
 arch/arm64/mm/mmu.c             |   10 +++++++++-
 4 files changed, 28 insertions(+), 2 deletions(-)

--- a/arch/arm64/include/asm/fixmap.h
+++ b/arch/arm64/include/asm/fixmap.h
@@ -53,6 +53,7 @@ enum fixed_addresses {
 	FIX_TEXT_POKE0,
 
 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+	FIX_ENTRY_TRAMP_DATA,
 	FIX_ENTRY_TRAMP_TEXT,
 #define TRAMP_VALIAS		(__fix_to_virt(FIX_ENTRY_TRAMP_TEXT))
 #endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -881,7 +881,13 @@ __ni_sys_trace:
 	msr	tpidrro_el0, x30	// Restored in kernel_ventry
 	.endif
 	tramp_map_kernel	x30
+#ifdef CONFIG_RANDOMIZE_BASE
+	adr	x30, tramp_vectors + PAGE_SIZE
+	isb
+	ldr	x30, [x30]
+#else
 	ldr	x30, =vectors
+#endif
 	prfm	plil1strm, [x30, #(1b - tramp_vectors)]
 	msr	vbar_el1, x30
 	add	x30, x30, #(1b - tramp_vectors)
@@ -924,6 +930,14 @@ END(tramp_exit_compat)
 
 	.ltorg
 	.popsection				// .entry.tramp.text
+#ifdef CONFIG_RANDOMIZE_BASE
+	.pushsection ".rodata", "a"
+	.align PAGE_SHIFT
+	.globl	__entry_tramp_data_start
+__entry_tramp_data_start:
+	.quad	vectors
+	.popsection				// .rodata
+#endif /* CONFIG_RANDOMIZE_BASE */
 #endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
 
 /*
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -252,7 +252,10 @@ ASSERT(__idmap_text_end - (__idmap_text_
 ASSERT(__hibernate_exit_text_end - (__hibernate_exit_text_start & ~(SZ_4K - 1))
 	<= SZ_4K, "Hibernate exit text too big or misaligned")
 #endif
-
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+ASSERT((__entry_tramp_text_end - __entry_tramp_text_start) == PAGE_SIZE,
+	"Entry trampoline text too big")
+#endif
 /*
  * If padding is applied before .head.text, virt<->phys conversions will fail.
  */
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -435,8 +435,16 @@ static int __init map_entry_trampoline(v
 	__create_pgd_mapping(tramp_pg_dir, pa_start, TRAMP_VALIAS, PAGE_SIZE,
 			     prot, pgd_pgtable_alloc, 0);
 
-	/* ...as well as the kernel page table */
+	/* Map both the text and data into the kernel page table */
 	__set_fixmap(FIX_ENTRY_TRAMP_TEXT, pa_start, prot);
+	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
+		extern char __entry_tramp_data_start[];
+
+		__set_fixmap(FIX_ENTRY_TRAMP_DATA,
+			     __pa_symbol(__entry_tramp_data_start),
+			     PAGE_KERNEL_RO);
+	}
+
 	return 0;
 }
 core_initcall(map_entry_trampoline);


Patches currently in stable-queue which might be from mark.rutland@arm.com are

queue-4.9/arm64-mm-add-arm64_kernel_unmapped_at_el0-helper.patch
queue-4.9/arm64-entry-reword-comment-about-post_ttbr_update_workaround.patch
queue-4.9/arm64-kaslr-put-kernel-vectors-address-in-separate-data-page.patch
queue-4.9/arm64-turn-on-kpti-only-on-cpus-that-need-it.patch
queue-4.9/arm64-force-kpti-to-be-disabled-on-cavium-thunderx.patch
queue-4.9/arm64-mm-allocate-asids-in-pairs.patch
queue-4.9/arm64-tls-avoid-unconditional-zeroing-of-tpidrro_el0-for-native-tasks.patch
queue-4.9/arm64-use-ret-instruction-for-exiting-the-trampoline.patch
queue-4.9/arm64-entry-explicitly-pass-exception-level-to-kernel_ventry-macro.patch
queue-4.9/arm64-kpti-make-use-of-ng-dependent-on-arm64_kernel_unmapped_at_el0.patch
queue-4.9/arm64-mm-use-non-global-mappings-for-kernel-space.patch
queue-4.9/arm64-capabilities-handle-duplicate-entries-for-a-capability.patch
queue-4.9/arm64-entry-hook-up-entry-trampoline-to-exception-vectors.patch
queue-4.9/arm64-mm-invalidate-both-kernel-and-user-asids-when-performing-tlbi.patch
queue-4.9/arm64-mm-map-entry-trampoline-into-trampoline-and-kernel-page-tables.patch
queue-4.9/module-extend-rodata-off-boot-cmdline-parameter-to-module-mappings.patch
queue-4.9/arm64-kconfig-reword-unmap_kernel_at_el0-kconfig-entry.patch
queue-4.9/arm64-mm-move-asid-from-ttbr0-to-ttbr1.patch
queue-4.9/arm64-allow-checking-of-a-cpu-local-erratum.patch
queue-4.9/arm64-take-into-account-id_aa64pfr0_el1.csv3.patch
queue-4.9/arm64-kconfig-add-config_unmap_kernel_at_el0.patch
queue-4.9/arm64-idmap-use-awx-flags-for-.idmap.text-.pushsection-directives.patch
queue-4.9/arm64-factor-out-entry-stack-manipulation.patch
queue-4.9/arm64-entry-add-exception-trampoline-page-for-exceptions-from-el0.patch
queue-4.9/arm64-kpti-add-enable-callback-to-remap-swapper-using-ng-mappings.patch
queue-4.9/arm64-entry-add-fake-cpu-feature-for-unmapping-the-kernel-at-el0.patch
queue-4.9/arm64-cputype-add-midr-values-for-cavium-thunderx2-cpus.patch

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Patch "arm64: Kconfig: Reword UNMAP_KERNEL_AT_EL0 kconfig entry" has been added to the 4.9-stable tree
  2018-04-03 11:09 ` [PATCH v4.9.y 17/27] arm64: Kconfig: Reword UNMAP_KERNEL_AT_EL0 kconfig entry Mark Rutland
@ 2018-04-05 19:42   ` gregkh
  0 siblings, 0 replies; 63+ messages in thread
From: gregkh @ 2018-04-05 19:42 UTC (permalink / raw)
  To: mark.rutland, alex.shi, catalin.marinas, ghackmann, gregkh, will.deacon
  Cc: stable, stable-commits


This is a note to let you know that I've just added the patch titled

    arm64: Kconfig: Reword UNMAP_KERNEL_AT_EL0 kconfig entry

to the 4.9-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     arm64-kconfig-reword-unmap_kernel_at_el0-kconfig-entry.patch
and it can be found in the queue-4.9 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


>From foo@baz Thu Apr  5 21:39:28 CEST 2018
From: Mark Rutland <mark.rutland@arm.com>
Date: Tue,  3 Apr 2018 12:09:13 +0100
Subject: arm64: Kconfig: Reword UNMAP_KERNEL_AT_EL0 kconfig entry
To: stable@vger.kernel.org
Cc: mark.brown@linaro.org, ard.biesheuvel@linaro.org, marc.zyngier@arm.com, will.deacon@arm.com
Message-ID: <20180403110923.43575-18-mark.rutland@arm.com>

From: Will Deacon <will.deacon@arm.com>

commit 0617052ddde3 upstream.

Although CONFIG_UNMAP_KERNEL_AT_EL0 does make KASLR more robust, it's
actually more useful as a mitigation against speculation attacks that
can leak arbitrary kernel data to userspace.

Reword the Kconfig help message to reflect this, and make the prompt
depend on EXPERT so that the option is enabled by default for the
majority of users.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
Tested-by: Will Deacon <will.deacon@arm.com>
Tested-by: Greg Hackmann <ghackmann@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/Kconfig |   13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -734,15 +734,14 @@ config FORCE_MAX_ZONEORDER
 	  4M allocations matching the default size used by generic code.
 
 config UNMAP_KERNEL_AT_EL0
-	bool "Unmap kernel when running in userspace (aka \"KAISER\")"
+	bool "Unmap kernel when running in userspace (aka \"KAISER\")" if EXPERT
 	default y
 	help
-	  Some attacks against KASLR make use of the timing difference between
-	  a permission fault which could arise from a page table entry that is
-	  present in the TLB, and a translation fault which always requires a
-	  page table walk. This option defends against these attacks by unmapping
-	  the kernel whilst running in userspace, therefore forcing translation
-	  faults for all of kernel space.
+	  Speculation attacks against some high-performance processors can
+	  be used to bypass MMU permission checks and leak kernel data to
+	  userspace. This can be defended against by unmapping the kernel
+	  when running in userspace, mapping it back in on exception entry
+	  via a trampoline page in the vector table.
 
 	  If unsure, say Y.
 


Patches currently in stable-queue which might be from mark.rutland@arm.com are

queue-4.9/arm64-mm-add-arm64_kernel_unmapped_at_el0-helper.patch
queue-4.9/arm64-entry-reword-comment-about-post_ttbr_update_workaround.patch
queue-4.9/arm64-kaslr-put-kernel-vectors-address-in-separate-data-page.patch
queue-4.9/arm64-turn-on-kpti-only-on-cpus-that-need-it.patch
queue-4.9/arm64-force-kpti-to-be-disabled-on-cavium-thunderx.patch
queue-4.9/arm64-mm-allocate-asids-in-pairs.patch
queue-4.9/arm64-tls-avoid-unconditional-zeroing-of-tpidrro_el0-for-native-tasks.patch
queue-4.9/arm64-use-ret-instruction-for-exiting-the-trampoline.patch
queue-4.9/arm64-entry-explicitly-pass-exception-level-to-kernel_ventry-macro.patch
queue-4.9/arm64-kpti-make-use-of-ng-dependent-on-arm64_kernel_unmapped_at_el0.patch
queue-4.9/arm64-mm-use-non-global-mappings-for-kernel-space.patch
queue-4.9/arm64-capabilities-handle-duplicate-entries-for-a-capability.patch
queue-4.9/arm64-entry-hook-up-entry-trampoline-to-exception-vectors.patch
queue-4.9/arm64-mm-invalidate-both-kernel-and-user-asids-when-performing-tlbi.patch
queue-4.9/arm64-mm-map-entry-trampoline-into-trampoline-and-kernel-page-tables.patch
queue-4.9/module-extend-rodata-off-boot-cmdline-parameter-to-module-mappings.patch
queue-4.9/arm64-kconfig-reword-unmap_kernel_at_el0-kconfig-entry.patch
queue-4.9/arm64-mm-move-asid-from-ttbr0-to-ttbr1.patch
queue-4.9/arm64-allow-checking-of-a-cpu-local-erratum.patch
queue-4.9/arm64-take-into-account-id_aa64pfr0_el1.csv3.patch
queue-4.9/arm64-kconfig-add-config_unmap_kernel_at_el0.patch
queue-4.9/arm64-idmap-use-awx-flags-for-.idmap.text-.pushsection-directives.patch
queue-4.9/arm64-factor-out-entry-stack-manipulation.patch
queue-4.9/arm64-entry-add-exception-trampoline-page-for-exceptions-from-el0.patch
queue-4.9/arm64-kpti-add-enable-callback-to-remap-swapper-using-ng-mappings.patch
queue-4.9/arm64-entry-add-fake-cpu-feature-for-unmapping-the-kernel-at-el0.patch
queue-4.9/arm64-cputype-add-midr-values-for-cavium-thunderx2-cpus.patch

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Patch "arm64: kpti: Make use of nG dependent on arm64_kernel_unmapped_at_el0()" has been added to the 4.9-stable tree
  2018-04-03 11:09 ` [PATCH v4.9.y 23/27] arm64: kpti: Make use of nG dependent on arm64_kernel_unmapped_at_el0() Mark Rutland
@ 2018-04-05 19:42   ` gregkh
  0 siblings, 0 replies; 63+ messages in thread
From: gregkh @ 2018-04-05 19:42 UTC (permalink / raw)
  To: mark.rutland, alex.shi, catalin.marinas, ghackmann, gregkh, will.deacon
  Cc: stable, stable-commits


This is a note to let you know that I've just added the patch titled

    arm64: kpti: Make use of nG dependent on arm64_kernel_unmapped_at_el0()

to the 4.9-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     arm64-kpti-make-use-of-ng-dependent-on-arm64_kernel_unmapped_at_el0.patch
and it can be found in the queue-4.9 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


>From foo@baz Thu Apr  5 21:39:28 CEST 2018
From: Mark Rutland <mark.rutland@arm.com>
Date: Tue,  3 Apr 2018 12:09:19 +0100
Subject: arm64: kpti: Make use of nG dependent on arm64_kernel_unmapped_at_el0()
To: stable@vger.kernel.org
Cc: mark.brown@linaro.org, ard.biesheuvel@linaro.org, marc.zyngier@arm.com, will.deacon@arm.com
Message-ID: <20180403110923.43575-24-mark.rutland@arm.com>

From: Will Deacon <will.deacon@arm.com>

commit 41acec624087 upstream.

To allow systems that do not require kpti to continue running with
global kernel mappings (which appears to be a requirement for Cavium
ThunderX due to a CPU erratum), make the use of nG in the kernel page
tables dependent on arm64_kernel_unmapped_at_el0(), which is resolved
at runtime.
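
The key pattern, taken from the pgtable-prot.h hunk below, folds the
runtime check into the page-attribute macros:

	/* nG is decided when page tables are written, not at build time */
	#define PTE_MAYBE_NG	(arm64_kernel_unmapped_at_el0() ? PTE_NG : 0)
	#define PROT_DEFAULT	(_PROT_DEFAULT | PTE_MAYBE_NG)

Note that the swapper flags lose their nG variants entirely here; the
initial page tables stay global, and the ->enable callback added by the
next patch in this series remaps swapper with nG when kpti is in use.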

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
Tested-by: Will Deacon <will.deacon@arm.com>
Tested-by: Greg Hackmann <ghackmann@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/include/asm/kernel-pgtable.h |   12 ++----------
 arch/arm64/include/asm/pgtable-prot.h   |   30 ++++++++++++++----------------
 2 files changed, 16 insertions(+), 26 deletions(-)

--- a/arch/arm64/include/asm/kernel-pgtable.h
+++ b/arch/arm64/include/asm/kernel-pgtable.h
@@ -71,16 +71,8 @@
 /*
  * Initial memory map attributes.
  */
-#define _SWAPPER_PTE_FLAGS	(PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
-#define _SWAPPER_PMD_FLAGS	(PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
-
-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
-#define SWAPPER_PTE_FLAGS	(_SWAPPER_PTE_FLAGS | PTE_NG)
-#define SWAPPER_PMD_FLAGS	(_SWAPPER_PMD_FLAGS | PMD_SECT_NG)
-#else
-#define SWAPPER_PTE_FLAGS	_SWAPPER_PTE_FLAGS
-#define SWAPPER_PMD_FLAGS	_SWAPPER_PMD_FLAGS
-#endif
+#define SWAPPER_PTE_FLAGS	(PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
+#define SWAPPER_PMD_FLAGS	(PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
 
 #if ARM64_SWAPPER_USES_SECTION_MAPS
 #define SWAPPER_MM_MMUFLAGS	(PMD_ATTRINDX(MT_NORMAL) | SWAPPER_PMD_FLAGS)
--- a/arch/arm64/include/asm/pgtable-prot.h
+++ b/arch/arm64/include/asm/pgtable-prot.h
@@ -37,13 +37,11 @@
 #define _PROT_DEFAULT		(PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
 #define _PROT_SECT_DEFAULT	(PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
 
-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
-#define PROT_DEFAULT		(_PROT_DEFAULT | PTE_NG)
-#define PROT_SECT_DEFAULT	(_PROT_SECT_DEFAULT | PMD_SECT_NG)
-#else
-#define PROT_DEFAULT		_PROT_DEFAULT
-#define PROT_SECT_DEFAULT	_PROT_SECT_DEFAULT
-#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
+#define PTE_MAYBE_NG		(arm64_kernel_unmapped_at_el0() ? PTE_NG : 0)
+#define PMD_MAYBE_NG		(arm64_kernel_unmapped_at_el0() ? PMD_SECT_NG : 0)
+
+#define PROT_DEFAULT		(_PROT_DEFAULT | PTE_MAYBE_NG)
+#define PROT_SECT_DEFAULT	(_PROT_SECT_DEFAULT | PMD_MAYBE_NG)
 
 #define PROT_DEVICE_nGnRnE	(PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_WRITE | PTE_ATTRINDX(MT_DEVICE_nGnRnE))
 #define PROT_DEVICE_nGnRE	(PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_WRITE | PTE_ATTRINDX(MT_DEVICE_nGnRE))
@@ -55,22 +53,22 @@
 #define PROT_SECT_NORMAL	(PROT_SECT_DEFAULT | PMD_SECT_PXN | PMD_SECT_UXN | PMD_ATTRINDX(MT_NORMAL))
 #define PROT_SECT_NORMAL_EXEC	(PROT_SECT_DEFAULT | PMD_SECT_UXN | PMD_ATTRINDX(MT_NORMAL))
 
-#define _PAGE_DEFAULT		(PROT_DEFAULT | PTE_ATTRINDX(MT_NORMAL))
-#define _HYP_PAGE_DEFAULT	(_PAGE_DEFAULT & ~PTE_NG)
+#define _PAGE_DEFAULT		(_PROT_DEFAULT | PTE_ATTRINDX(MT_NORMAL))
+#define _HYP_PAGE_DEFAULT	_PAGE_DEFAULT
 
-#define PAGE_KERNEL		__pgprot(_PAGE_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_WRITE)
-#define PAGE_KERNEL_RO		__pgprot(_PAGE_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_RDONLY)
-#define PAGE_KERNEL_ROX		__pgprot(_PAGE_DEFAULT | PTE_UXN | PTE_DIRTY | PTE_RDONLY)
-#define PAGE_KERNEL_EXEC	__pgprot(_PAGE_DEFAULT | PTE_UXN | PTE_DIRTY | PTE_WRITE)
-#define PAGE_KERNEL_EXEC_CONT	__pgprot(_PAGE_DEFAULT | PTE_UXN | PTE_DIRTY | PTE_WRITE | PTE_CONT)
+#define PAGE_KERNEL		__pgprot(PROT_NORMAL)
+#define PAGE_KERNEL_RO		__pgprot((PROT_NORMAL & ~PTE_WRITE) | PTE_RDONLY)
+#define PAGE_KERNEL_ROX		__pgprot((PROT_NORMAL & ~(PTE_WRITE | PTE_PXN)) | PTE_RDONLY)
+#define PAGE_KERNEL_EXEC	__pgprot(PROT_NORMAL & ~PTE_PXN)
+#define PAGE_KERNEL_EXEC_CONT	__pgprot((PROT_NORMAL & ~PTE_PXN) | PTE_CONT)
 
 #define PAGE_HYP		__pgprot(_HYP_PAGE_DEFAULT | PTE_HYP | PTE_HYP_XN)
 #define PAGE_HYP_EXEC		__pgprot(_HYP_PAGE_DEFAULT | PTE_HYP | PTE_RDONLY)
 #define PAGE_HYP_RO		__pgprot(_HYP_PAGE_DEFAULT | PTE_HYP | PTE_RDONLY | PTE_HYP_XN)
 #define PAGE_HYP_DEVICE		__pgprot(PROT_DEVICE_nGnRE | PTE_HYP)
 
-#define PAGE_S2			__pgprot(PROT_DEFAULT | PTE_S2_MEMATTR(MT_S2_NORMAL) | PTE_S2_RDONLY)
-#define PAGE_S2_DEVICE		__pgprot(PROT_DEFAULT | PTE_S2_MEMATTR(MT_S2_DEVICE_nGnRE) | PTE_S2_RDONLY | PTE_UXN)
+#define PAGE_S2			__pgprot(_PROT_DEFAULT | PTE_S2_MEMATTR(MT_S2_NORMAL) | PTE_S2_RDONLY)
+#define PAGE_S2_DEVICE		__pgprot(_PROT_DEFAULT | PTE_S2_MEMATTR(MT_S2_DEVICE_nGnRE) | PTE_S2_RDONLY | PTE_UXN)
 
 #define PAGE_NONE		__pgprot(((_PAGE_DEFAULT) & ~PTE_VALID) | PTE_PROT_NONE | PTE_NG | PTE_PXN | PTE_UXN)
 #define PAGE_SHARED		__pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_PXN | PTE_UXN | PTE_WRITE)
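
As a stand-alone illustration (not part of this patch), the PTE_MAYBE_NG idea
above can be modelled in plain C: the nG bit is no longer baked in at build
time, but folded into PROT_DEFAULT whenever a protection value is constructed,
based on a predicate resolved during boot. The predicate variable and demo
values below are stand-ins, not kernel code:

#include <stdbool.h>
#include <stdio.h>

#define PTE_NG	(1UL << 11)	/* nG bit of a stage-1 descriptor */

static bool kernel_unmapped_at_el0 = true;	/* decided once during boot */

#define PTE_MAYBE_NG	(kernel_unmapped_at_el0 ? PTE_NG : 0)

int main(void)
{
	/* 0x703 matches _PROT_DEFAULT's type/AF/shareability bits */
	unsigned long prot_default = 0x703UL | PTE_MAYBE_NG;

	printf("PROT_DEFAULT = %#lx\n", prot_default);
	return 0;
}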


Patches currently in stable-queue which might be from mark.rutland@arm.com are

queue-4.9/arm64-mm-add-arm64_kernel_unmapped_at_el0-helper.patch
queue-4.9/arm64-entry-reword-comment-about-post_ttbr_update_workaround.patch
queue-4.9/arm64-kaslr-put-kernel-vectors-address-in-separate-data-page.patch
queue-4.9/arm64-turn-on-kpti-only-on-cpus-that-need-it.patch
queue-4.9/arm64-force-kpti-to-be-disabled-on-cavium-thunderx.patch
queue-4.9/arm64-mm-allocate-asids-in-pairs.patch
queue-4.9/arm64-tls-avoid-unconditional-zeroing-of-tpidrro_el0-for-native-tasks.patch
queue-4.9/arm64-use-ret-instruction-for-exiting-the-trampoline.patch
queue-4.9/arm64-entry-explicitly-pass-exception-level-to-kernel_ventry-macro.patch
queue-4.9/arm64-kpti-make-use-of-ng-dependent-on-arm64_kernel_unmapped_at_el0.patch
queue-4.9/arm64-mm-use-non-global-mappings-for-kernel-space.patch
queue-4.9/arm64-capabilities-handle-duplicate-entries-for-a-capability.patch
queue-4.9/arm64-entry-hook-up-entry-trampoline-to-exception-vectors.patch
queue-4.9/arm64-mm-invalidate-both-kernel-and-user-asids-when-performing-tlbi.patch
queue-4.9/arm64-mm-map-entry-trampoline-into-trampoline-and-kernel-page-tables.patch
queue-4.9/module-extend-rodata-off-boot-cmdline-parameter-to-module-mappings.patch
queue-4.9/arm64-kconfig-reword-unmap_kernel_at_el0-kconfig-entry.patch
queue-4.9/arm64-mm-move-asid-from-ttbr0-to-ttbr1.patch
queue-4.9/arm64-allow-checking-of-a-cpu-local-erratum.patch
queue-4.9/arm64-take-into-account-id_aa64pfr0_el1.csv3.patch
queue-4.9/arm64-kconfig-add-config_unmap_kernel_at_el0.patch
queue-4.9/arm64-idmap-use-awx-flags-for-.idmap.text-.pushsection-directives.patch
queue-4.9/arm64-factor-out-entry-stack-manipulation.patch
queue-4.9/arm64-entry-add-exception-trampoline-page-for-exceptions-from-el0.patch
queue-4.9/arm64-kpti-add-enable-callback-to-remap-swapper-using-ng-mappings.patch
queue-4.9/arm64-entry-add-fake-cpu-feature-for-unmapping-the-kernel-at-el0.patch
queue-4.9/arm64-cputype-add-midr-values-for-cavium-thunderx2-cpus.patch

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Patch "arm64: kpti: Add ->enable callback to remap swapper using nG mappings" has been added to the 4.9-stable tree
  2018-04-03 11:09 ` [PATCH v4.9.y 24/27] arm64: kpti: Add ->enable callback to remap swapper using nG mappings Mark Rutland
@ 2018-04-05 19:42   ` gregkh
  0 siblings, 0 replies; 63+ messages in thread
From: gregkh @ 2018-04-05 19:42 UTC (permalink / raw)
  To: mark.rutland, alex.shi, ard.biesheuvel, catalin.marinas,
	ghackmann, gregkh, marc.zyngier, nicolas.dechesne, will.deacon
  Cc: stable, stable-commits


This is a note to let you know that I've just added the patch titled

    arm64: kpti: Add ->enable callback to remap swapper using nG mappings

to the 4.9-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     arm64-kpti-add-enable-callback-to-remap-swapper-using-ng-mappings.patch
and it can be found in the queue-4.9 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Thu Apr  5 21:39:28 CEST 2018
From: Mark Rutland <mark.rutland@arm.com>
Date: Tue,  3 Apr 2018 12:09:20 +0100
Subject: arm64: kpti: Add ->enable callback to remap swapper using nG mappings
To: stable@vger.kernel.org
Cc: mark.brown@linaro.org, ard.biesheuvel@linaro.org, marc.zyngier@arm.com, will.deacon@arm.com
Message-ID: <20180403110923.43575-25-mark.rutland@arm.com>

From: Will Deacon <will.deacon@arm.com>

commit f992b4dfd58b upstream.

Defaulting to global mappings for kernel space is generally good for
performance and appears to be necessary for Cavium ThunderX. If we
subsequently decide that we need to enable kpti, then we need to rewrite
our existing page table entries to be non-global. This is fiddly, and
made worse by the possible use of contiguous mappings, which require
a strict break-before-make sequence.

Since the enable callback runs on each online CPU from stop_machine
context, we can have all CPUs enter the idmap, where secondaries can
wait for the primary CPU to rewrite swapper with its MMU off. It's all
fairly horrible, but at least it only runs once.
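
As an aside, the rendezvous described above can be modelled in user space
with C11 atomics. This is an illustrative sketch of the flag handshake only
(the flag starts at 1, secondaries increment it, the primary waits for it to
reach the CPU count, does its work, then zeroes it to release everyone); it
is not the patch's code:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NUM_CPUS 4

static atomic_int flag = 1;	/* like __idmap_kpti_flag's initial .long 1 */

static void *secondary(void *arg)
{
	(void)arg;
	atomic_fetch_add(&flag, 1);		/* tell the primary we're in */
	while (atomic_load(&flag) != 0)		/* park until released */
		;
	return NULL;
}

int main(void)
{
	pthread_t t[NUM_CPUS - 1];

	for (int i = 0; i < NUM_CPUS - 1; i++)
		pthread_create(&t[i], NULL, secondary, NULL);

	while (atomic_load(&flag) != NUM_CPUS)	/* wait for all secondaries */
		;
	puts("primary: everyone parked, rewriting swapper with the MMU off");
	atomic_store(&flag, 0);			/* release the secondaries */

	for (int i = 0; i < NUM_CPUS - 1; i++)
		pthread_join(t[i], NULL);
	return 0;
}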

Nicolas Dechesne <nicolas.dechesne@linaro.org> found a bug in this commit
which caused boot failures on the db410c and similar boards. Ard Biesheuvel
found that the __idmap_cpu_set_reserved_ttbr1 macro was writing the wrong
content to ttbr1_el1, and fixed it by giving it the right content.

Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
[Alex: avoid dependency on 52-bit PA patches and TTBR/MMU erratum patches]
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
Tested-by: Will Deacon <will.deacon@arm.com>
Tested-by: Greg Hackmann <ghackmann@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/include/asm/assembler.h |    3 
 arch/arm64/kernel/cpufeature.c     |   25 ++++
 arch/arm64/mm/proc.S               |  201 +++++++++++++++++++++++++++++++++++--
 3 files changed, 222 insertions(+), 7 deletions(-)

--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -413,4 +413,7 @@ alternative_endif
 	movk	\reg, :abs_g0_nc:\val
 	.endm
 
+	.macro	pte_to_phys, phys, pte
+	and	\phys, \pte, #(((1 << (48 - PAGE_SHIFT)) - 1) << PAGE_SHIFT)
+	.endm
 #endif	/* __ASM_ASSEMBLER_H */
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -778,6 +778,30 @@ static bool unmap_kernel_at_el0(const st
 						     ID_AA64PFR0_CSV3_SHIFT);
 }
 
+static int kpti_install_ng_mappings(void *__unused)
+{
+	typedef void (kpti_remap_fn)(int, int, phys_addr_t);
+	extern kpti_remap_fn idmap_kpti_install_ng_mappings;
+	kpti_remap_fn *remap_fn;
+
+	static bool kpti_applied = false;
+	int cpu = smp_processor_id();
+
+	if (kpti_applied)
+		return 0;
+
+	remap_fn = (void *)__pa_symbol(idmap_kpti_install_ng_mappings);
+
+	cpu_install_idmap();
+	remap_fn(cpu, num_online_cpus(), __pa_symbol(swapper_pg_dir));
+	cpu_uninstall_idmap();
+
+	if (!cpu)
+		kpti_applied = true;
+
+	return 0;
+}
+
 static int __init parse_kpti(char *str)
 {
 	bool enabled;
@@ -881,6 +905,7 @@ static const struct arm64_cpu_capabiliti
 		.capability = ARM64_UNMAP_KERNEL_AT_EL0,
 		.def_scope = SCOPE_SYSTEM,
 		.matches = unmap_kernel_at_el0,
+		.enable = kpti_install_ng_mappings,
 	},
 #endif
 	{},
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -148,6 +148,16 @@ alternative_else_nop_endif
 ENDPROC(cpu_do_switch_mm)
 
 	.pushsection ".idmap.text", "ax"
+
+.macro	__idmap_cpu_set_reserved_ttbr1, tmp1, tmp2
+	adrp	\tmp1, empty_zero_page
+	msr	ttbr1_el1, \tmp1
+	isb
+	tlbi	vmalle1
+	dsb	nsh
+	isb
+.endm
+
 /*
  * void idmap_cpu_replace_ttbr1(phys_addr_t new_pgd)
  *
@@ -158,13 +168,7 @@ ENTRY(idmap_cpu_replace_ttbr1)
 	mrs	x2, daif
 	msr	daifset, #0xf
 
-	adrp	x1, empty_zero_page
-	msr	ttbr1_el1, x1
-	isb
-
-	tlbi	vmalle1
-	dsb	nsh
-	isb
+	__idmap_cpu_set_reserved_ttbr1 x1, x3
 
 	msr	ttbr1_el1, x0
 	isb
@@ -175,6 +179,189 @@ ENTRY(idmap_cpu_replace_ttbr1)
 ENDPROC(idmap_cpu_replace_ttbr1)
 	.popsection
 
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+	.pushsection ".idmap.text", "ax"
+
+	.macro	__idmap_kpti_get_pgtable_ent, type
+	dc	cvac, cur_\()\type\()p		// Ensure any existing dirty
+	dmb	sy				// lines are written back before
+	ldr	\type, [cur_\()\type\()p]	// loading the entry
+	tbz	\type, #0, next_\()\type	// Skip invalid entries
+	.endm
+
+	.macro __idmap_kpti_put_pgtable_ent_ng, type
+	orr	\type, \type, #PTE_NG		// Same bit for blocks and pages
+	str	\type, [cur_\()\type\()p]	// Update the entry and ensure it
+	dc	civac, cur_\()\type\()p		// is visible to all CPUs.
+	.endm
+
+/*
+ * void __kpti_install_ng_mappings(int cpu, int num_cpus, phys_addr_t swapper)
+ *
+ * Called exactly once from stop_machine context by each CPU found during boot.
+ */
+__idmap_kpti_flag:
+	.long	1
+ENTRY(idmap_kpti_install_ng_mappings)
+	cpu		.req	w0
+	num_cpus	.req	w1
+	swapper_pa	.req	x2
+	swapper_ttb	.req	x3
+	flag_ptr	.req	x4
+	cur_pgdp	.req	x5
+	end_pgdp	.req	x6
+	pgd		.req	x7
+	cur_pudp	.req	x8
+	end_pudp	.req	x9
+	pud		.req	x10
+	cur_pmdp	.req	x11
+	end_pmdp	.req	x12
+	pmd		.req	x13
+	cur_ptep	.req	x14
+	end_ptep	.req	x15
+	pte		.req	x16
+
+	mrs	swapper_ttb, ttbr1_el1
+	adr	flag_ptr, __idmap_kpti_flag
+
+	cbnz	cpu, __idmap_kpti_secondary
+
+	/* We're the boot CPU. Wait for the others to catch up */
+	sevl
+1:	wfe
+	ldaxr	w18, [flag_ptr]
+	eor	w18, w18, num_cpus
+	cbnz	w18, 1b
+
+	/* We need to walk swapper, so turn off the MMU. */
+	mrs	x18, sctlr_el1
+	bic	x18, x18, #SCTLR_ELx_M
+	msr	sctlr_el1, x18
+	isb
+
+	/* Everybody is enjoying the idmap, so we can rewrite swapper. */
+	/* PGD */
+	mov	cur_pgdp, swapper_pa
+	add	end_pgdp, cur_pgdp, #(PTRS_PER_PGD * 8)
+do_pgd:	__idmap_kpti_get_pgtable_ent	pgd
+	tbnz	pgd, #1, walk_puds
+	__idmap_kpti_put_pgtable_ent_ng	pgd
+next_pgd:
+	add	cur_pgdp, cur_pgdp, #8
+	cmp	cur_pgdp, end_pgdp
+	b.ne	do_pgd
+
+	/* Publish the updated tables and nuke all the TLBs */
+	dsb	sy
+	tlbi	vmalle1is
+	dsb	ish
+	isb
+
+	/* We're done: fire up the MMU again */
+	mrs	x18, sctlr_el1
+	orr	x18, x18, #SCTLR_ELx_M
+	msr	sctlr_el1, x18
+	isb
+
+	/* Set the flag to zero to indicate that we're all done */
+	str	wzr, [flag_ptr]
+	ret
+
+	/* PUD */
+walk_puds:
+	.if CONFIG_PGTABLE_LEVELS > 3
+	pte_to_phys	cur_pudp, pgd
+	add	end_pudp, cur_pudp, #(PTRS_PER_PUD * 8)
+do_pud:	__idmap_kpti_get_pgtable_ent	pud
+	tbnz	pud, #1, walk_pmds
+	__idmap_kpti_put_pgtable_ent_ng	pud
+next_pud:
+	add	cur_pudp, cur_pudp, 8
+	cmp	cur_pudp, end_pudp
+	b.ne	do_pud
+	b	next_pgd
+	.else /* CONFIG_PGTABLE_LEVELS <= 3 */
+	mov	pud, pgd
+	b	walk_pmds
+next_pud:
+	b	next_pgd
+	.endif
+
+	/* PMD */
+walk_pmds:
+	.if CONFIG_PGTABLE_LEVELS > 2
+	pte_to_phys	cur_pmdp, pud
+	add	end_pmdp, cur_pmdp, #(PTRS_PER_PMD * 8)
+do_pmd:	__idmap_kpti_get_pgtable_ent	pmd
+	tbnz	pmd, #1, walk_ptes
+	__idmap_kpti_put_pgtable_ent_ng	pmd
+next_pmd:
+	add	cur_pmdp, cur_pmdp, #8
+	cmp	cur_pmdp, end_pmdp
+	b.ne	do_pmd
+	b	next_pud
+	.else /* CONFIG_PGTABLE_LEVELS <= 2 */
+	mov	pmd, pud
+	b	walk_ptes
+next_pmd:
+	b	next_pud
+	.endif
+
+	/* PTE */
+walk_ptes:
+	pte_to_phys	cur_ptep, pmd
+	add	end_ptep, cur_ptep, #(PTRS_PER_PTE * 8)
+do_pte:	__idmap_kpti_get_pgtable_ent	pte
+	__idmap_kpti_put_pgtable_ent_ng	pte
+next_pte:
+	add	cur_ptep, cur_ptep, #8
+	cmp	cur_ptep, end_ptep
+	b.ne	do_pte
+	b	next_pmd
+
+	/* Secondary CPUs end up here */
+__idmap_kpti_secondary:
+	/* Uninstall swapper before surgery begins */
+	__idmap_cpu_set_reserved_ttbr1 x18, x17
+
+	/* Increment the flag to let the boot CPU know we're ready */
+1:	ldxr	w18, [flag_ptr]
+	add	w18, w18, #1
+	stxr	w17, w18, [flag_ptr]
+	cbnz	w17, 1b
+
+	/* Wait for the boot CPU to finish messing around with swapper */
+	sevl
+1:	wfe
+	ldxr	w18, [flag_ptr]
+	cbnz	w18, 1b
+
+	/* All done, act like nothing happened */
+	msr	ttbr1_el1, swapper_ttb
+	isb
+	ret
+
+	.unreq	cpu
+	.unreq	num_cpus
+	.unreq	swapper_pa
+	.unreq	swapper_ttb
+	.unreq	flag_ptr
+	.unreq	cur_pgdp
+	.unreq	end_pgdp
+	.unreq	pgd
+	.unreq	cur_pudp
+	.unreq	end_pudp
+	.unreq	pud
+	.unreq	cur_pmdp
+	.unreq	end_pmdp
+	.unreq	pmd
+	.unreq	cur_ptep
+	.unreq	end_ptep
+	.unreq	pte
+ENDPROC(idmap_kpti_install_ng_mappings)
+	.popsection
+#endif
+
 /*
  *	__cpu_setup
  *


^ permalink raw reply	[flat|nested] 63+ messages in thread

* Patch "arm64: mm: Add arm64_kernel_unmapped_at_el0 helper" has been added to the 4.9-stable tree
  2018-04-03 11:09 ` [PATCH v4.9.y 04/27] arm64: mm: Add arm64_kernel_unmapped_at_el0 helper Mark Rutland
@ 2018-04-05 19:42   ` gregkh
  0 siblings, 0 replies; 63+ messages in thread
From: gregkh @ 2018-04-05 19:42 UTC (permalink / raw)
  To: mark.rutland, alex.shi, ghackmann, gregkh, labbott, shankerd,
	will.deacon
  Cc: stable, stable-commits


This is a note to let you know that I've just added the patch titled

    arm64: mm: Add arm64_kernel_unmapped_at_el0 helper

to the 4.9-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     arm64-mm-add-arm64_kernel_unmapped_at_el0-helper.patch
and it can be found in the queue-4.9 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Thu Apr  5 21:39:27 CEST 2018
From: Mark Rutland <mark.rutland@arm.com>
Date: Tue,  3 Apr 2018 12:09:00 +0100
Subject: arm64: mm: Add arm64_kernel_unmapped_at_el0 helper
To: stable@vger.kernel.org
Cc: mark.brown@linaro.org, ard.biesheuvel@linaro.org, marc.zyngier@arm.com, will.deacon@arm.com
Message-ID: <20180403110923.43575-5-mark.rutland@arm.com>

From: Will Deacon <will.deacon@arm.com>

commit fc0e1299da54 upstream.

In order for code such as TLB invalidation to operate efficiently when
the decision to map the kernel at EL0 is determined at runtime, this
patch introduces a helper function, arm64_kernel_unmapped_at_el0, to
determine whether or not the kernel is mapped whilst running in userspace.

Currently, this just reports the value of CONFIG_UNMAP_KERNEL_AT_EL0,
but will later be hooked up to a fake CPU capability using a static key.
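
A minimal stand-alone model of the helper and a guarded caller, mirroring how
later patches wrap user-ASID maintenance behind it (the config value and the
maintenance function are stand-ins for illustration, not kernel code):

#include <stdbool.h>
#include <stdio.h>

#define CONFIG_UNMAP_KERNEL_AT_EL0 1	/* assumed configuration */
#define IS_ENABLED(option)	(option)

static inline bool arm64_kernel_unmapped_at_el0(void)
{
	return IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0);
}

static void flush_user_asid(void)	/* hypothetical extra maintenance */
{
	puts("also flushing the userspace ASID");
}

int main(void)
{
	if (arm64_kernel_unmapped_at_el0())
		flush_user_asid();
	return 0;
}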

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
Tested-by: Will Deacon <will.deacon@arm.com>
Tested-by: Greg Hackmann <ghackmann@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/include/asm/mmu.h |    8 ++++++++
 1 file changed, 8 insertions(+)

--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -18,6 +18,8 @@
 
 #define USER_ASID_FLAG	(UL(1) << 48)
 
+#ifndef __ASSEMBLY__
+
 typedef struct {
 	atomic64_t	id;
 	void		*vdso;
@@ -30,6 +32,11 @@ typedef struct {
  */
 #define ASID(mm)	((mm)->context.id.counter & 0xffff)
 
+static inline bool arm64_kernel_unmapped_at_el0(void)
+{
+	return IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0);
+}
+
 extern void paging_init(void);
 extern void bootmem_init(void);
 extern void __iomem *early_io_map(phys_addr_t phys, unsigned long virt);
@@ -39,4 +46,5 @@ extern void create_pgd_mapping(struct mm
 			       pgprot_t prot, bool allow_block_mappings);
 extern void *fixmap_remap_fdt(phys_addr_t dt_phys);
 
+#endif	/* !__ASSEMBLY__ */
 #endif


^ permalink raw reply	[flat|nested] 63+ messages in thread

* Patch "arm64: mm: Allocate ASIDs in pairs" has been added to the 4.9-stable tree
  2018-04-03 11:08 ` [PATCH v4.9.y 03/27] arm64: mm: Allocate ASIDs in pairs Mark Rutland
@ 2018-04-05 19:42   ` gregkh
  0 siblings, 0 replies; 63+ messages in thread
From: gregkh @ 2018-04-05 19:42 UTC (permalink / raw)
  To: mark.rutland, alex.shi, ghackmann, gregkh, labbott, shankerd,
	will.deacon
  Cc: stable, stable-commits


This is a note to let you know that I've just added the patch titled

    arm64: mm: Allocate ASIDs in pairs

to the 4.9-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     arm64-mm-allocate-asids-in-pairs.patch
and it can be found in the queue-4.9 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Thu Apr  5 21:39:27 CEST 2018
From: Mark Rutland <mark.rutland@arm.com>
Date: Tue,  3 Apr 2018 12:08:59 +0100
Subject: arm64: mm: Allocate ASIDs in pairs
To: stable@vger.kernel.org
Cc: mark.brown@linaro.org, ard.biesheuvel@linaro.org, marc.zyngier@arm.com, will.deacon@arm.com
Message-ID: <20180403110923.43575-4-mark.rutland@arm.com>

From: Will Deacon <will.deacon@arm.com>

commit 0c8ea531b774 upstream.

In preparation for separate kernel/user ASIDs, allocate them in pairs
for each mm_struct. The bottom bit distinguishes the two: if it is set,
then the ASID will map only userspace.
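
The pairing arithmetic can be sketched in isolation (generation masking
omitted; 16-bit ASIDs assumed): bitmap indices map to even ASIDs for the
kernel, and the odd partner is reserved for userspace. Illustration only:

#include <stdio.h>

#define ASID_BITS	16
#define NUM_USER_ASIDS	(1UL << (ASID_BITS - 1))	/* half the raw space */

#define idx2asid(idx)	((idx) << 1)	/* even ASIDs go to the kernel side */
#define asid2idx(asid)	((asid) >> 1)

int main(void)
{
	for (unsigned long idx = 1; idx <= 3; idx++) {
		unsigned long kasid = idx2asid(idx);

		printf("bitmap idx %lu -> kernel ASID %lu, user ASID %lu\n",
		       idx, kasid, kasid | 1);
	}
	return 0;
}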

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
Tested-by: Will Deacon <will.deacon@arm.com>
Tested-by: Greg Hackmann <ghackmann@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/include/asm/mmu.h |    2 ++
 arch/arm64/mm/context.c      |   25 +++++++++++++++++--------
 2 files changed, 19 insertions(+), 8 deletions(-)

--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -16,6 +16,8 @@
 #ifndef __ASM_MMU_H
 #define __ASM_MMU_H
 
+#define USER_ASID_FLAG	(UL(1) << 48)
+
 typedef struct {
 	atomic64_t	id;
 	void		*vdso;
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -39,7 +39,16 @@ static cpumask_t tlb_flush_pending;
 
 #define ASID_MASK		(~GENMASK(asid_bits - 1, 0))
 #define ASID_FIRST_VERSION	(1UL << asid_bits)
-#define NUM_USER_ASIDS		ASID_FIRST_VERSION
+
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+#define NUM_USER_ASIDS		(ASID_FIRST_VERSION >> 1)
+#define asid2idx(asid)		(((asid) & ~ASID_MASK) >> 1)
+#define idx2asid(idx)		(((idx) << 1) & ~ASID_MASK)
+#else
+#define NUM_USER_ASIDS		(ASID_FIRST_VERSION)
+#define asid2idx(asid)		((asid) & ~ASID_MASK)
+#define idx2asid(idx)		asid2idx(idx)
+#endif
 
 /* Get the ASIDBits supported by the current CPU */
 static u32 get_cpu_asid_bits(void)
@@ -104,7 +113,7 @@ static void flush_context(unsigned int c
 		 */
 		if (asid == 0)
 			asid = per_cpu(reserved_asids, i);
-		__set_bit(asid & ~ASID_MASK, asid_map);
+		__set_bit(asid2idx(asid), asid_map);
 		per_cpu(reserved_asids, i) = asid;
 	}
 
@@ -159,16 +168,16 @@ static u64 new_context(struct mm_struct
 		 * We had a valid ASID in a previous life, so try to re-use
 		 * it if possible.
 		 */
-		asid &= ~ASID_MASK;
-		if (!__test_and_set_bit(asid, asid_map))
+		if (!__test_and_set_bit(asid2idx(asid), asid_map))
 			return newasid;
 	}
 
 	/*
 	 * Allocate a free ASID. If we can't find one, take a note of the
-	 * currently active ASIDs and mark the TLBs as requiring flushes.
-	 * We always count from ASID #1, as we use ASID #0 when setting a
-	 * reserved TTBR0 for the init_mm.
+	 * currently active ASIDs and mark the TLBs as requiring flushes.  We
+	 * always count from ASID #2 (index 1), as we use ASID #0 when setting
+	 * a reserved TTBR0 for the init_mm and we allocate ASIDs in even/odd
+	 * pairs.
 	 */
 	asid = find_next_zero_bit(asid_map, NUM_USER_ASIDS, cur_idx);
 	if (asid != NUM_USER_ASIDS)
@@ -185,7 +194,7 @@ static u64 new_context(struct mm_struct
 set_asid:
 	__set_bit(asid, asid_map);
 	cur_idx = asid;
-	return asid | generation;
+	return idx2asid(asid) | generation;
 }
 
 void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)


^ permalink raw reply	[flat|nested] 63+ messages in thread

* Patch "arm64: mm: Invalidate both kernel and user ASIDs when performing TLBI" has been added to the 4.9-stable tree
  2018-04-03 11:09 ` [PATCH v4.9.y 05/27] arm64: mm: Invalidate both kernel and user ASIDs when performing TLBI Mark Rutland
@ 2018-04-05 19:42   ` gregkh
  0 siblings, 0 replies; 63+ messages in thread
From: gregkh @ 2018-04-05 19:42 UTC (permalink / raw)
  To: mark.rutland, alex.shi, ghackmann, gregkh, labbott, shankerd,
	will.deacon
  Cc: stable, stable-commits


This is a note to let you know that I've just added the patch titled

    arm64: mm: Invalidate both kernel and user ASIDs when performing TLBI

to the 4.9-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     arm64-mm-invalidate-both-kernel-and-user-asids-when-performing-tlbi.patch
and it can be found in the queue-4.9 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Thu Apr  5 21:39:27 CEST 2018
From: Mark Rutland <mark.rutland@arm.com>
Date: Tue,  3 Apr 2018 12:09:01 +0100
Subject: arm64: mm: Invalidate both kernel and user ASIDs when performing TLBI
To: stable@vger.kernel.org
Cc: mark.brown@linaro.org, ard.biesheuvel@linaro.org, marc.zyngier@arm.com, will.deacon@arm.com
Message-ID: <20180403110923.43575-6-mark.rutland@arm.com>

From: Will Deacon <will.deacon@arm.com>

commit 9b0de864b5bc upstream.

Since an mm has both a kernel and a user ASID, we need to ensure that
broadcast TLB maintenance targets both address spaces so that things
like CoW continue to work with the uaccess primitives in the kernel.
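
For illustration, this stand-alone sketch shows how the user variant of a
TLBI argument is formed: the ASID sits at bits 63:48 of the argument, so
ORing in USER_ASID_FLAG (bit 48) flips bit 0 of the ASID field and selects
the odd, userspace ASID. All values below are made up:

#include <stdio.h>

#define USER_ASID_FLAG	(1UL << 48)	/* bit 0 of the ASID field */

int main(void)
{
	unsigned long asid = 0x42UL;			  /* even kernel ASID */
	unsigned long addr = 0x0000ffff12345000UL >> 12;  /* page number */
	unsigned long arg  = addr | (asid << 48);

	printf("kernel-ASID TLBI arg: %#018lx\n", arg);
	printf("user-ASID   TLBI arg: %#018lx\n", arg | USER_ASID_FLAG);
	return 0;
}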

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
Tested-by: Will Deacon <will.deacon@arm.com>
Tested-by: Greg Hackmann <ghackmann@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/include/asm/tlbflush.h |   16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)

--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -23,6 +23,7 @@
 
 #include <linux/sched.h>
 #include <asm/cputype.h>
+#include <asm/mmu.h>
 
 /*
  * Raw TLBI operations.
@@ -42,6 +43,11 @@
 
 #define __tlbi(op, ...)		__TLBI_N(op, ##__VA_ARGS__, 1, 0)
 
+#define __tlbi_user(op, arg) do {						\
+	if (arm64_kernel_unmapped_at_el0())					\
+		__tlbi(op, (arg) | USER_ASID_FLAG);				\
+} while (0)
+
 /*
  *	TLB Management
  *	==============
@@ -103,6 +109,7 @@ static inline void flush_tlb_mm(struct m
 
 	dsb(ishst);
 	__tlbi(aside1is, asid);
+	__tlbi_user(aside1is, asid);
 	dsb(ish);
 }
 
@@ -113,6 +120,7 @@ static inline void flush_tlb_page(struct
 
 	dsb(ishst);
 	__tlbi(vale1is, addr);
+	__tlbi_user(vale1is, addr);
 	dsb(ish);
 }
 
@@ -139,10 +147,13 @@ static inline void __flush_tlb_range(str
 
 	dsb(ishst);
 	for (addr = start; addr < end; addr += 1 << (PAGE_SHIFT - 12)) {
-		if (last_level)
+		if (last_level) {
 			__tlbi(vale1is, addr);
-		else
+			__tlbi_user(vale1is, addr);
+		} else {
 			__tlbi(vae1is, addr);
+			__tlbi_user(vae1is, addr);
+		}
 	}
 	dsb(ish);
 }
@@ -182,6 +193,7 @@ static inline void __flush_tlb_pgtable(s
 	unsigned long addr = uaddr >> 12 | (ASID(mm) << 48);
 
 	__tlbi(vae1is, addr);
+	__tlbi_user(vae1is, addr);
 	dsb(ish);
 }
 


^ permalink raw reply	[flat|nested] 63+ messages in thread

* Patch "arm64: mm: Map entry trampoline into trampoline and kernel page tables" has been added to the 4.9-stable tree
  2018-04-03 11:09 ` [PATCH v4.9.y 09/27] arm64: mm: Map entry trampoline into trampoline and kernel page tables Mark Rutland
  2018-04-03 11:15   ` Mark Rutland
@ 2018-04-05 19:42   ` gregkh
  1 sibling, 0 replies; 63+ messages in thread
From: gregkh @ 2018-04-05 19:42 UTC (permalink / raw)
  To: mark.rutland, alex.shi, ghackmann, gregkh, labbott, shankerd,
	will.deacon
  Cc: stable, stable-commits


This is a note to let you know that I've just added the patch titled

    arm64: mm: Map entry trampoline into trampoline and kernel page tables

to the 4.9-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     arm64-mm-map-entry-trampoline-into-trampoline-and-kernel-page-tables.patch
and it can be found in the queue-4.9 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Thu Apr  5 21:39:27 CEST 2018
From: Mark Rutland <mark.rutland@arm.com>
Date: Tue,  3 Apr 2018 12:09:05 +0100
Subject: arm64: mm: Map entry trampoline into trampoline and kernel page tables
To: stable@vger.kernel.org
Cc: mark.brown@linaro.org, ard.biesheuvel@linaro.org, marc.zyngier@arm.com, will.deacon@arm.com
Message-ID: <20180403110923.43575-10-mark.rutland@arm.com>

From: Will Deacon <will.deacon@arm.com>

commit 51a0048beb44 upstream.

The exception entry trampoline needs to be mapped at the same virtual
address in both the trampoline page table (which maps nothing else)
and also the kernel page table, so that we can swizzle TTBR1_EL1 on
exceptions from and return to EL0.

This patch maps the trampoline at a fixed virtual address in the fixmap
area of the kernel virtual address space, which allows the kernel proper
to be randomized with respect to the trampoline when KASLR is enabled.
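
The fixmap arithmetic behind TRAMP_VALIAS can be sketched stand-alone:
fixmap slots are allocated downwards from FIXADDR_TOP, one page per slot.
The FIXADDR_TOP value below is assumed for illustration and varies with
configuration; only the formula mirrors the kernel's __fix_to_virt():

#include <stdio.h>

#define PAGE_SHIFT	12
#define FIXADDR_TOP	0xffffffbffe7fd000UL	/* assumed, config-dependent */
#define __fix_to_virt(x)	(FIXADDR_TOP - ((x) << PAGE_SHIFT))

enum fixed_addresses {
	FIX_HOLE,
	FIX_EARLYCON_MEM_BASE,
	FIX_TEXT_POKE0,
	FIX_ENTRY_TRAMP_TEXT,		/* the new slot from this patch */
};

int main(void)
{
	printf("TRAMP_VALIAS = %#lx\n",
	       (unsigned long)__fix_to_virt(FIX_ENTRY_TRAMP_TEXT));
	return 0;
}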

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
Tested-by: Will Deacon <will.deacon@arm.com>
Tested-by: Greg Hackmann <ghackmann@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/include/asm/fixmap.h  |    5 +++++
 arch/arm64/include/asm/pgtable.h |    1 +
 arch/arm64/kernel/asm-offsets.c  |    6 +++++-
 arch/arm64/mm/mmu.c              |   23 +++++++++++++++++++++++
 4 files changed, 34 insertions(+), 1 deletion(-)

--- a/arch/arm64/include/asm/fixmap.h
+++ b/arch/arm64/include/asm/fixmap.h
@@ -51,6 +51,11 @@ enum fixed_addresses {
 
 	FIX_EARLYCON_MEM_BASE,
 	FIX_TEXT_POKE0,
+
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+	FIX_ENTRY_TRAMP_TEXT,
+#define TRAMP_VALIAS		(__fix_to_virt(FIX_ENTRY_TRAMP_TEXT))
+#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
 	__end_of_permanent_fixed_addresses,
 
 	/*
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -692,6 +692,7 @@ static inline void pmdp_set_wrprotect(st
 
 extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
 extern pgd_t idmap_pg_dir[PTRS_PER_PGD];
+extern pgd_t tramp_pg_dir[PTRS_PER_PGD];
 
 /*
  * Encode and decode a swap entry:
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -24,6 +24,7 @@
 #include <linux/kvm_host.h>
 #include <linux/suspend.h>
 #include <asm/cpufeature.h>
+#include <asm/fixmap.h>
 #include <asm/thread_info.h>
 #include <asm/memory.h>
 #include <asm/smp_plat.h>
@@ -144,11 +145,14 @@ int main(void)
   DEFINE(ARM_SMCCC_RES_X2_OFFS,		offsetof(struct arm_smccc_res, a2));
   DEFINE(ARM_SMCCC_QUIRK_ID_OFFS,	offsetof(struct arm_smccc_quirk, id));
   DEFINE(ARM_SMCCC_QUIRK_STATE_OFFS,	offsetof(struct arm_smccc_quirk, state));
-
   BLANK();
   DEFINE(HIBERN_PBE_ORIG,	offsetof(struct pbe, orig_address));
   DEFINE(HIBERN_PBE_ADDR,	offsetof(struct pbe, address));
   DEFINE(HIBERN_PBE_NEXT,	offsetof(struct pbe, next));
   DEFINE(ARM64_FTR_SYSVAL,	offsetof(struct arm64_ftr_reg, sys_val));
+  BLANK();
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+  DEFINE(TRAMP_VALIAS,		TRAMP_VALIAS);
+#endif
   return 0;
 }
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -419,6 +419,29 @@ static void __init map_kernel_segment(pg
 	vm_area_add_early(vma);
 }
 
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+static int __init map_entry_trampoline(void)
+{
+	extern char __entry_tramp_text_start[];
+
+	pgprot_t prot = rodata_enabled ? PAGE_KERNEL_ROX : PAGE_KERNEL_EXEC;
+	phys_addr_t pa_start = __pa_symbol(__entry_tramp_text_start);
+
+	/* The trampoline is always mapped and can therefore be global */
+	pgprot_val(prot) &= ~PTE_NG;
+
+	/* Map only the text into the trampoline page table */
+	memset(tramp_pg_dir, 0, PGD_SIZE);
+	__create_pgd_mapping(tramp_pg_dir, pa_start, TRAMP_VALIAS, PAGE_SIZE,
+			     prot, pgd_pgtable_alloc, 0);
+
+	/* ...as well as the kernel page table */
+	__set_fixmap(FIX_ENTRY_TRAMP_TEXT, pa_start, prot);
+	return 0;
+}
+core_initcall(map_entry_trampoline);
+#endif
+
 /*
  * Create fine-grained mappings for the kernel.
  */


^ permalink raw reply	[flat|nested] 63+ messages in thread

* Patch "arm64: mm: Move ASID from TTBR0 to TTBR1" has been added to the 4.9-stable tree
  2018-04-03 11:08 ` [PATCH v4.9.y 02/27] arm64: mm: Move ASID from TTBR0 to TTBR1 Mark Rutland
@ 2018-04-05 19:42   ` gregkh
  0 siblings, 0 replies; 63+ messages in thread
From: gregkh @ 2018-04-05 19:42 UTC (permalink / raw)
  To: mark.rutland, alex.shi, ghackmann, gregkh, labbott, shankerd,
	will.deacon
  Cc: stable, stable-commits


This is a note to let you know that I've just added the patch titled

    arm64: mm: Move ASID from TTBR0 to TTBR1

to the 4.9-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     arm64-mm-move-asid-from-ttbr0-to-ttbr1.patch
and it can be found in the queue-4.9 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Thu Apr  5 21:39:27 CEST 2018
From: Mark Rutland <mark.rutland@arm.com>
Date: Tue,  3 Apr 2018 12:08:58 +0100
Subject: arm64: mm: Move ASID from TTBR0 to TTBR1
To: stable@vger.kernel.org
Cc: mark.brown@linaro.org, ard.biesheuvel@linaro.org, marc.zyngier@arm.com, will.deacon@arm.com
Message-ID: <20180403110923.43575-3-mark.rutland@arm.com>

From: Will Deacon <will.deacon@arm.com>

commit 7655abb95386 upstream.

In preparation for mapping kernelspace and userspace with different
ASIDs, move the ASID to TTBR1 and update switch_mm to context-switch
TTBR0 via an invalid mapping (the zero page).
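
The bitfield insert performed by the new cpu_do_switch_mm can be modelled
stand-alone: with TCR.A1 set, the hardware takes the ASID from TTBR1[63:48],
so the switch patches the ASID into TTBR1 and leaves only the table base in
TTBR0. Values below are made up for illustration:

#include <stdio.h>

static unsigned long bfi(unsigned long reg, unsigned long val,
			 unsigned int lsb, unsigned int width)
{
	unsigned long mask = ((1UL << width) - 1) << lsb;

	return (reg & ~mask) | ((val << lsb) & mask);	/* like the bfi insn */
}

int main(void)
{
	unsigned long ttbr1 = 0x40080000UL;	/* made-up table address */
	unsigned long asid  = 0x42UL;

	/* bfi x2, x1, #48, #16: insert the ASID at bits 63:48 of TTBR1 */
	printf("ttbr1_el1 = %#lx\n", bfi(ttbr1, asid, 48, 16));
	return 0;
}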

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
Tested-by: Will Deacon <will.deacon@arm.com>
Tested-by: Greg Hackmann <ghackmann@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/include/asm/mmu_context.h   |    7 +++++++
 arch/arm64/include/asm/pgtable-hwdef.h |    1 +
 arch/arm64/include/asm/proc-fns.h      |    6 ------
 arch/arm64/mm/proc.S                   |    9 ++++++---
 4 files changed, 14 insertions(+), 9 deletions(-)

--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -50,6 +50,13 @@ static inline void cpu_set_reserved_ttbr
 	isb();
 }
 
+static inline void cpu_switch_mm(pgd_t *pgd, struct mm_struct *mm)
+{
+	BUG_ON(pgd == swapper_pg_dir);
+	cpu_set_reserved_ttbr0();
+	cpu_do_switch_mm(virt_to_phys(pgd),mm);
+}
+
 /*
  * TCR.T0SZ value to use when the ID map is active. Usually equals
  * TCR_T0SZ(VA_BITS), unless system RAM is positioned very high in
--- a/arch/arm64/include/asm/pgtable-hwdef.h
+++ b/arch/arm64/include/asm/pgtable-hwdef.h
@@ -272,6 +272,7 @@
 #define TCR_TG1_4K		(UL(2) << TCR_TG1_SHIFT)
 #define TCR_TG1_64K		(UL(3) << TCR_TG1_SHIFT)
 
+#define TCR_A1			(UL(1) << 22)
 #define TCR_ASID16		(UL(1) << 36)
 #define TCR_TBI0		(UL(1) << 37)
 #define TCR_HA			(UL(1) << 39)
--- a/arch/arm64/include/asm/proc-fns.h
+++ b/arch/arm64/include/asm/proc-fns.h
@@ -35,12 +35,6 @@ extern u64 cpu_do_resume(phys_addr_t ptr
 
 #include <asm/memory.h>
 
-#define cpu_switch_mm(pgd,mm)				\
-do {							\
-	BUG_ON(pgd == swapper_pg_dir);			\
-	cpu_do_switch_mm(virt_to_phys(pgd),mm);		\
-} while (0)
-
 #endif /* __ASSEMBLY__ */
 #endif /* __KERNEL__ */
 #endif /* __ASM_PROCFNS_H */
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -132,9 +132,12 @@ ENDPROC(cpu_do_resume)
  *	- pgd_phys - physical address of new TTB
  */
 ENTRY(cpu_do_switch_mm)
+	mrs	x2, ttbr1_el1
 	mmid	x1, x1				// get mm->context.id
-	bfi	x0, x1, #48, #16		// set the ASID
-	msr	ttbr0_el1, x0			// set TTBR0
+	bfi	x2, x1, #48, #16		// set the ASID
+	msr	ttbr1_el1, x2			// in TTBR1 (since TCR.A1 is set)
+	isb
+	msr	ttbr0_el1, x0			// now update TTBR0
 	isb
 alternative_if ARM64_WORKAROUND_CAVIUM_27456
 	ic	iallu
@@ -222,7 +225,7 @@ ENTRY(__cpu_setup)
 	 * both user and kernel.
 	 */
 	ldr	x10, =TCR_TxSZ(VA_BITS) | TCR_CACHE_FLAGS | TCR_SMP_FLAGS | \
-			TCR_TG_FLAGS | TCR_ASID16 | TCR_TBI0
+			TCR_TG_FLAGS | TCR_ASID16 | TCR_TBI0 | TCR_A1
 	tcr_set_idmap_t0sz	x10, x9
 
 	/*


^ permalink raw reply	[flat|nested] 63+ messages in thread

* Patch "arm64: Take into account ID_AA64PFR0_EL1.CSV3" has been added to the 4.9-stable tree
  2018-04-03 11:09 ` [PATCH v4.9.y 18/27] arm64: Take into account ID_AA64PFR0_EL1.CSV3 Mark Rutland
@ 2018-04-05 19:42   ` gregkh
  0 siblings, 0 replies; 63+ messages in thread
From: gregkh @ 2018-04-05 19:42 UTC (permalink / raw)
  To: mark.rutland, alex.shi, catalin.marinas, ghackmann, gregkh,
	suzuki.poulose, will.deacon
  Cc: stable, stable-commits


This is a note to let you know that I've just added the patch titled

    arm64: Take into account ID_AA64PFR0_EL1.CSV3

to the 4.9-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     arm64-take-into-account-id_aa64pfr0_el1.csv3.patch
and it can be found in the queue-4.9 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Thu Apr  5 21:39:28 CEST 2018
From: Mark Rutland <mark.rutland@arm.com>
Date: Tue,  3 Apr 2018 12:09:14 +0100
Subject: arm64: Take into account ID_AA64PFR0_EL1.CSV3
To: stable@vger.kernel.org
Cc: mark.brown@linaro.org, ard.biesheuvel@linaro.org, marc.zyngier@arm.com, will.deacon@arm.com
Message-ID: <20180403110923.43575-19-mark.rutland@arm.com>

From: Will Deacon <will.deacon@arm.com>

commit 179a56f6f9fb upstream.

For non-KASLR kernels where the KPTI behaviour has not been overridden
on the command line we can use ID_AA64PFR0_EL1.CSV3 to determine whether
or not we should unmap the kernel whilst running at EL0.
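
The field extraction can be sketched stand-alone: CSV3 is the unsigned 4-bit
field at bits 63:60 of ID_AA64PFR0_EL1, and a non-zero value documents the
CPU as not needing KPTI. The register value below is made up for
illustration:

#include <stdio.h>

#define ID_AA64PFR0_CSV3_SHIFT	60

static unsigned int extract_unsigned_field(unsigned long reg,
					   unsigned int shift)
{
	return (reg >> shift) & 0xf;	/* ID register fields are 4 bits */
}

int main(void)
{
	unsigned long pfr0 = 0x1000000000000011UL;	/* made-up value */
	unsigned int csv3 = extract_unsigned_field(pfr0,
						   ID_AA64PFR0_CSV3_SHIFT);

	printf("CSV3 = %u -> %s\n", csv3,
	       csv3 ? "KPTI not needed" : "enable KPTI");
	return 0;
}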

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
[Alex: s/read_sanitised_ftr_reg/read_system_reg/ to match v4.9 naming]
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
[Mark: correct zero bits in ftr_id_aa64pfr0 to account for CSV3]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
Tested-by: Will Deacon <will.deacon@arm.com>
Tested-by: Greg Hackmann <ghackmann@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/include/asm/sysreg.h |    1 +
 arch/arm64/kernel/cpufeature.c  |   10 ++++++++--
 2 files changed, 9 insertions(+), 2 deletions(-)

--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -117,6 +117,7 @@
 #define ID_AA64ISAR0_AES_SHIFT		4
 
 /* id_aa64pfr0 */
+#define ID_AA64PFR0_CSV3_SHIFT		60
 #define ID_AA64PFR0_GIC_SHIFT		24
 #define ID_AA64PFR0_ASIMD_SHIFT		20
 #define ID_AA64PFR0_FP_SHIFT		16
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -93,7 +93,8 @@ static const struct arm64_ftr_bits ftr_i
 };
 
 static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
-	ARM64_FTR_BITS(FTR_STRICT, FTR_EXACT, 32, 32, 0),
+	ARM64_FTR_BITS(FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV3_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_STRICT, FTR_EXACT, 32, 28, 0),
 	ARM64_FTR_BITS(FTR_STRICT, FTR_EXACT, 28, 4, 0),
 	ARM64_FTR_BITS(FTR_STRICT, FTR_EXACT, ID_AA64PFR0_GIC_SHIFT, 4, 0),
 	S_ARM64_FTR_BITS(FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_ASIMD_SHIFT, 4, ID_AA64PFR0_ASIMD_NI),
@@ -752,6 +753,8 @@ static int __kpti_forced; /* 0: not forc
 static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
 				int __unused)
 {
+	u64 pfr0 = read_system_reg(SYS_ID_AA64PFR0_EL1);
+
 	/* Forced on command line? */
 	if (__kpti_forced) {
 		pr_info_once("kernel page table isolation forced %s by command line option\n",
@@ -763,7 +766,9 @@ static bool unmap_kernel_at_el0(const st
 	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE))
 		return true;
 
-	return false;
+	/* Defer to CPU feature registers */
+	return !cpuid_feature_extract_unsigned_field(pfr0,
+						     ID_AA64PFR0_CSV3_SHIFT);
 }
 
 static int __init parse_kpti(char *str)
@@ -865,6 +870,7 @@ static const struct arm64_cpu_capabiliti
 	},
 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 	{
+		.desc = "Kernel page table isolation (KPTI)",
 		.capability = ARM64_UNMAP_KERNEL_AT_EL0,
 		.def_scope = SCOPE_SYSTEM,
 		.matches = unmap_kernel_at_el0,


^ permalink raw reply	[flat|nested] 63+ messages in thread

* Patch "arm64: mm: Use non-global mappings for kernel space" has been added to the 4.9-stable tree
  2018-04-03 11:08 ` [PATCH v4.9.y 01/27] arm64: mm: Use non-global mappings for kernel space Mark Rutland
@ 2018-04-05 19:42   ` gregkh
  0 siblings, 0 replies; 63+ messages in thread
From: gregkh @ 2018-04-05 19:42 UTC (permalink / raw)
  To: mark.rutland, alex.shi, ghackmann, gregkh, labbott, shankerd,
	will.deacon
  Cc: stable, stable-commits


This is a note to let you know that I've just added the patch titled

    arm64: mm: Use non-global mappings for kernel space

to the 4.9-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     arm64-mm-use-non-global-mappings-for-kernel-space.patch
and it can be found in the queue-4.9 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Thu Apr  5 21:39:27 CEST 2018
From: Mark Rutland <mark.rutland@arm.com>
Date: Tue,  3 Apr 2018 12:08:57 +0100
Subject: arm64: mm: Use non-global mappings for kernel space
To: stable@vger.kernel.org
Cc: mark.brown@linaro.org, ard.biesheuvel@linaro.org, marc.zyngier@arm.com, will.deacon@arm.com
Message-ID: <20180403110923.43575-2-mark.rutland@arm.com>

From: Will Deacon <will.deacon@arm.com>

commit e046eb0c9bf2 upstream.

In preparation for unmapping the kernel whilst running in userspace,
make the kernel mappings non-global so we can avoid expensive TLB
invalidation on kernel exit to userspace.
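
For illustration, a stand-alone sketch of what the patch changes: with KPTI
configured in, the default protection bits gain the nG bit (bit 11 of a
stage-1 descriptor), so kernel TLB entries are tagged with an ASID instead
of being global. Bit positions follow the ARMv8 VMSA; this is not kernel
code:

#include <stdio.h>

#define PTE_TYPE_PAGE	(3UL << 0)
#define PTE_SHARED	(3UL << 8)
#define PTE_AF		(1UL << 10)
#define PTE_NG		(1UL << 11)

int main(void)
{
	unsigned long prot = PTE_TYPE_PAGE | PTE_AF | PTE_SHARED;

	printf("global PROT_DEFAULT:     %#lx\n", prot);	   /* 0x703 */
	printf("non-global PROT_DEFAULT: %#lx\n", prot | PTE_NG); /* 0xf03 */
	return 0;
}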

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
Tested-by: Will Deacon <will.deacon@arm.com>
Tested-by: Greg Hackmann <ghackmann@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/include/asm/kernel-pgtable.h |   12 ++++++++++--
 arch/arm64/include/asm/pgtable-prot.h   |   21 +++++++++++++++------
 2 files changed, 25 insertions(+), 8 deletions(-)

--- a/arch/arm64/include/asm/kernel-pgtable.h
+++ b/arch/arm64/include/asm/kernel-pgtable.h
@@ -71,8 +71,16 @@
 /*
  * Initial memory map attributes.
  */
-#define SWAPPER_PTE_FLAGS	(PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
-#define SWAPPER_PMD_FLAGS	(PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
+#define _SWAPPER_PTE_FLAGS	(PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
+#define _SWAPPER_PMD_FLAGS	(PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
+
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+#define SWAPPER_PTE_FLAGS	(_SWAPPER_PTE_FLAGS | PTE_NG)
+#define SWAPPER_PMD_FLAGS	(_SWAPPER_PMD_FLAGS | PMD_SECT_NG)
+#else
+#define SWAPPER_PTE_FLAGS	_SWAPPER_PTE_FLAGS
+#define SWAPPER_PMD_FLAGS	_SWAPPER_PMD_FLAGS
+#endif
 
 #if ARM64_SWAPPER_USES_SECTION_MAPS
 #define SWAPPER_MM_MMUFLAGS	(PMD_ATTRINDX(MT_NORMAL) | SWAPPER_PMD_FLAGS)
--- a/arch/arm64/include/asm/pgtable-prot.h
+++ b/arch/arm64/include/asm/pgtable-prot.h
@@ -34,8 +34,16 @@
 
 #include <asm/pgtable-types.h>
 
-#define PROT_DEFAULT		(PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
-#define PROT_SECT_DEFAULT	(PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
+#define _PROT_DEFAULT		(PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
+#define _PROT_SECT_DEFAULT	(PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
+
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+#define PROT_DEFAULT		(_PROT_DEFAULT | PTE_NG)
+#define PROT_SECT_DEFAULT	(_PROT_SECT_DEFAULT | PMD_SECT_NG)
+#else
+#define PROT_DEFAULT		_PROT_DEFAULT
+#define PROT_SECT_DEFAULT	_PROT_SECT_DEFAULT
+#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
 
 #define PROT_DEVICE_nGnRnE	(PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_WRITE | PTE_ATTRINDX(MT_DEVICE_nGnRnE))
 #define PROT_DEVICE_nGnRE	(PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_WRITE | PTE_ATTRINDX(MT_DEVICE_nGnRE))
@@ -48,6 +56,7 @@
 #define PROT_SECT_NORMAL_EXEC	(PROT_SECT_DEFAULT | PMD_SECT_UXN | PMD_ATTRINDX(MT_NORMAL))
 
 #define _PAGE_DEFAULT		(PROT_DEFAULT | PTE_ATTRINDX(MT_NORMAL))
+#define _HYP_PAGE_DEFAULT	(_PAGE_DEFAULT & ~PTE_NG)
 
 #define PAGE_KERNEL		__pgprot(_PAGE_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_WRITE)
 #define PAGE_KERNEL_RO		__pgprot(_PAGE_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_RDONLY)
@@ -55,15 +64,15 @@
 #define PAGE_KERNEL_EXEC	__pgprot(_PAGE_DEFAULT | PTE_UXN | PTE_DIRTY | PTE_WRITE)
 #define PAGE_KERNEL_EXEC_CONT	__pgprot(_PAGE_DEFAULT | PTE_UXN | PTE_DIRTY | PTE_WRITE | PTE_CONT)
 
-#define PAGE_HYP		__pgprot(_PAGE_DEFAULT | PTE_HYP | PTE_HYP_XN)
-#define PAGE_HYP_EXEC		__pgprot(_PAGE_DEFAULT | PTE_HYP | PTE_RDONLY)
-#define PAGE_HYP_RO		__pgprot(_PAGE_DEFAULT | PTE_HYP | PTE_RDONLY | PTE_HYP_XN)
+#define PAGE_HYP		__pgprot(_HYP_PAGE_DEFAULT | PTE_HYP | PTE_HYP_XN)
+#define PAGE_HYP_EXEC		__pgprot(_HYP_PAGE_DEFAULT | PTE_HYP | PTE_RDONLY)
+#define PAGE_HYP_RO		__pgprot(_HYP_PAGE_DEFAULT | PTE_HYP | PTE_RDONLY | PTE_HYP_XN)
 #define PAGE_HYP_DEVICE		__pgprot(PROT_DEVICE_nGnRE | PTE_HYP)
 
 #define PAGE_S2			__pgprot(PROT_DEFAULT | PTE_S2_MEMATTR(MT_S2_NORMAL) | PTE_S2_RDONLY)
 #define PAGE_S2_DEVICE		__pgprot(PROT_DEFAULT | PTE_S2_MEMATTR(MT_S2_DEVICE_nGnRE) | PTE_S2_RDONLY | PTE_UXN)
 
-#define PAGE_NONE		__pgprot(((_PAGE_DEFAULT) & ~PTE_VALID) | PTE_PROT_NONE | PTE_PXN | PTE_UXN)
+#define PAGE_NONE		__pgprot(((_PAGE_DEFAULT) & ~PTE_VALID) | PTE_PROT_NONE | PTE_NG | PTE_PXN | PTE_UXN)
 #define PAGE_SHARED		__pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_PXN | PTE_UXN | PTE_WRITE)
 #define PAGE_SHARED_EXEC	__pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_PXN | PTE_WRITE)
 #define PAGE_COPY		__pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_PXN | PTE_UXN)
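
To see why the nG bit matters here: a global TLB entry matches under any
ASID, so it would have to be invalidated on every exit to userspace once the
kernel is unmapped, whereas a non-global entry is tagged with an ASID and
simply stops matching when the ASID changes. A minimal user-space C sketch of
that matching rule (purely illustrative; the struct and values are invented,
not kernel code):

#include <stdbool.h>
#include <stdio.h>

struct tlb_entry {
	unsigned long va;	/* virtual address the entry translates */
	unsigned int asid;	/* ignored when the entry is global */
	bool global;		/* nG bit clear => global */
};

static bool tlb_hit(const struct tlb_entry *e, unsigned long va,
		    unsigned int current_asid)
{
	if (e->va != va)
		return false;
	return e->global || e->asid == current_asid;
}

int main(void)
{
	unsigned long kva = 0xffff000000000000UL;
	struct tlb_entry global_map = { kva, 0, true };
	struct tlb_entry ng_map     = { kva, 1, false };

	/* A global entry still hits under a user ASID (2): */
	printf("global: %d\n", tlb_hit(&global_map, kva, 2));	/* 1 */
	/* A non-global entry tagged with ASID 1 does not: */
	printf("nG:     %d\n", tlb_hit(&ng_map, kva, 2));	/* 0 */
	return 0;
}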


Patches currently in stable-queue which might be from mark.rutland@arm.com are

queue-4.9/arm64-mm-add-arm64_kernel_unmapped_at_el0-helper.patch
queue-4.9/arm64-entry-reword-comment-about-post_ttbr_update_workaround.patch
queue-4.9/arm64-kaslr-put-kernel-vectors-address-in-separate-data-page.patch
queue-4.9/arm64-turn-on-kpti-only-on-cpus-that-need-it.patch
queue-4.9/arm64-force-kpti-to-be-disabled-on-cavium-thunderx.patch
queue-4.9/arm64-mm-allocate-asids-in-pairs.patch
queue-4.9/arm64-tls-avoid-unconditional-zeroing-of-tpidrro_el0-for-native-tasks.patch
queue-4.9/arm64-use-ret-instruction-for-exiting-the-trampoline.patch
queue-4.9/arm64-entry-explicitly-pass-exception-level-to-kernel_ventry-macro.patch
queue-4.9/arm64-kpti-make-use-of-ng-dependent-on-arm64_kernel_unmapped_at_el0.patch
queue-4.9/arm64-mm-use-non-global-mappings-for-kernel-space.patch
queue-4.9/arm64-capabilities-handle-duplicate-entries-for-a-capability.patch
queue-4.9/arm64-entry-hook-up-entry-trampoline-to-exception-vectors.patch
queue-4.9/arm64-mm-invalidate-both-kernel-and-user-asids-when-performing-tlbi.patch
queue-4.9/arm64-mm-map-entry-trampoline-into-trampoline-and-kernel-page-tables.patch
queue-4.9/module-extend-rodata-off-boot-cmdline-parameter-to-module-mappings.patch
queue-4.9/arm64-kconfig-reword-unmap_kernel_at_el0-kconfig-entry.patch
queue-4.9/arm64-mm-move-asid-from-ttbr0-to-ttbr1.patch
queue-4.9/arm64-allow-checking-of-a-cpu-local-erratum.patch
queue-4.9/arm64-take-into-account-id_aa64pfr0_el1.csv3.patch
queue-4.9/arm64-kconfig-add-config_unmap_kernel_at_el0.patch
queue-4.9/arm64-idmap-use-awx-flags-for-.idmap.text-.pushsection-directives.patch
queue-4.9/arm64-factor-out-entry-stack-manipulation.patch
queue-4.9/arm64-entry-add-exception-trampoline-page-for-exceptions-from-el0.patch
queue-4.9/arm64-kpti-add-enable-callback-to-remap-swapper-using-ng-mappings.patch
queue-4.9/arm64-entry-add-fake-cpu-feature-for-unmapping-the-kernel-at-el0.patch
queue-4.9/arm64-cputype-add-midr-values-for-cavium-thunderx2-cpus.patch


* Patch "arm64: tls: Avoid unconditional zeroing of tpidrro_el0 for native tasks" has been added to the 4.9-stable tree
  2018-04-03 11:09 ` [PATCH v4.9.y 12/27] arm64: tls: Avoid unconditional zeroing of tpidrro_el0 for native tasks Mark Rutland
@ 2018-04-05 19:42   ` gregkh
  0 siblings, 0 replies; 63+ messages in thread
From: gregkh @ 2018-04-05 19:42 UTC (permalink / raw)
  To: mark.rutland, alex.shi, ghackmann, gregkh, labbott, shankerd,
	will.deacon
  Cc: stable, stable-commits


This is a note to let you know that I've just added the patch titled

    arm64: tls: Avoid unconditional zeroing of tpidrro_el0 for native tasks

to the 4.9-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     arm64-tls-avoid-unconditional-zeroing-of-tpidrro_el0-for-native-tasks.patch
and it can be found in the queue-4.9 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Thu Apr  5 21:39:27 CEST 2018
From: Mark Rutland <mark.rutland@arm.com>
Date: Tue,  3 Apr 2018 12:09:08 +0100
Subject: arm64: tls: Avoid unconditional zeroing of tpidrro_el0 for native tasks
To: stable@vger.kernel.org
Cc: mark.brown@linaro.org, ard.biesheuvel@linaro.org, marc.zyngier@arm.com, will.deacon@arm.com
Message-ID: <20180403110923.43575-13-mark.rutland@arm.com>

From: Will Deacon <will.deacon@arm.com>

commit 18011eac28c7 upstream.

When unmapping the kernel at EL0, we use tpidrro_el0 as a scratch register
during exception entry from native tasks and subsequently zero it in
the kernel_ventry macro. We can therefore avoid zeroing tpidrro_el0
in the context-switch path for native tasks using the entry trampoline.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
Tested-by: Will Deacon <will.deacon@arm.com>
Tested-by: Greg Hackmann <ghackmann@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/kernel/process.c |   12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -306,17 +306,17 @@ int copy_thread(unsigned long clone_flag
 
 static void tls_thread_switch(struct task_struct *next)
 {
-	unsigned long tpidr, tpidrro;
+	unsigned long tpidr;
 
 	tpidr = read_sysreg(tpidr_el0);
 	*task_user_tls(current) = tpidr;
 
-	tpidr = *task_user_tls(next);
-	tpidrro = is_compat_thread(task_thread_info(next)) ?
-		  next->thread.tp_value : 0;
+	if (is_compat_thread(task_thread_info(next)))
+		write_sysreg(next->thread.tp_value, tpidrro_el0);
+	else if (!arm64_kernel_unmapped_at_el0())
+		write_sysreg(0, tpidrro_el0);
 
-	write_sysreg(tpidr, tpidr_el0);
-	write_sysreg(tpidrro, tpidrro_el0);
+	write_sysreg(*task_user_tls(next), tpidr_el0);
 }
 
 /* Restore the UAO state depending on next's addr_limit */
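
The resulting policy is easiest to see outside the diff. A stand-alone C
sketch of the three cases (the struct and the kpti flag are invented
stand-ins for kernel state, not the real API):

#include <stdbool.h>
#include <stdio.h>

struct task { bool compat; unsigned long tp_value; };

static unsigned long tpidrro_el0 = 0xdeadbeef;	/* stale scratch value */

static void tls_switch_sketch(const struct task *next, bool kpti)
{
	if (next->compat)
		tpidrro_el0 = next->tp_value;	/* compat tasks read TLS here */
	else if (!kpti)
		tpidrro_el0 = 0;		/* old unconditional zeroing */
	/*
	 * else: skip the write. With KPTI, kernel_ventry zeroes
	 * tpidrro_el0 on every exception from EL0, so a native task can
	 * never observe the stale value left by exception entry.
	 */
}

int main(void)
{
	struct task native = { false, 0 };

	tls_switch_sketch(&native, true);
	printf("tpidrro_el0 = %#lx (zeroed later by the trampoline)\n",
	       tpidrro_el0);
	return 0;
}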



* Patch "arm64: Turn on KPTI only on CPUs that need it" has been added to the 4.9-stable tree
  2018-04-03 11:09 ` [PATCH v4.9.y 22/27] arm64: Turn on KPTI only on CPUs that need it Mark Rutland
@ 2018-04-05 19:42   ` gregkh
  0 siblings, 0 replies; 63+ messages in thread
From: gregkh @ 2018-04-05 19:42 UTC (permalink / raw)
  To: mark.rutland, alex.shi, catalin.marinas, ghackmann, gregkh,
	jnair, will.deacon
  Cc: stable, stable-commits


This is a note to let you know that I've just added the patch titled

    arm64: Turn on KPTI only on CPUs that need it

to the 4.9-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     arm64-turn-on-kpti-only-on-cpus-that-need-it.patch
and it can be found in the queue-4.9 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Thu Apr  5 21:39:28 CEST 2018
From: Mark Rutland <mark.rutland@arm.com>
Date: Tue,  3 Apr 2018 12:09:18 +0100
Subject: arm64: Turn on KPTI only on CPUs that need it
To: stable@vger.kernel.org
Cc: mark.brown@linaro.org, ard.biesheuvel@linaro.org, marc.zyngier@arm.com, will.deacon@arm.com
Message-ID: <20180403110923.43575-23-mark.rutland@arm.com>

From: Jayachandran C <jnair@caviumnetworks.com>

commit 0ba2e29c7fc1 upstream.

Whitelist Broadcom Vulcan/Cavium ThunderX2 processors in
unmap_kernel_at_el0(). These CPUs are not vulnerable to
CVE-2017-5754 and do not need KPTI when KASLR is off.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Jayachandran C <jnair@caviumnetworks.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
Tested-by: Will Deacon <will.deacon@arm.com>
Tested-by: Greg Hackmann <ghackmann@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/kernel/cpufeature.c |    7 +++++++
 1 file changed, 7 insertions(+)

--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -766,6 +766,13 @@ static bool unmap_kernel_at_el0(const st
 	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE))
 		return true;
 
+	/* Don't force KPTI for CPUs that are not vulnerable */
+	switch (read_cpuid_id() & MIDR_CPU_MODEL_MASK) {
+	case MIDR_CAVIUM_THUNDERX2:
+	case MIDR_BRCM_VULCAN:
+		return false;
+	}
+
 	/* Defer to CPU feature registers */
 	return !cpuid_feature_extract_unsigned_field(pfr0,
 						     ID_AA64PFR0_CSV3_SHIFT);
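
The whitelist keys off the MIDR with the variant and revision fields masked
out, so all steppings of a part match. A user-space C sketch of that check
(the MIDR encodings below are illustrative reconstructions, not copied from
cputype.h):

#include <stdio.h>

#define MIDR_CPU_MODEL_MASK	0xff0ffff0u	/* implementer, arch, partnum */
#define MIDR_CAVIUM_THUNDERX2	0x430f0af0u	/* assumed encoding */
#define MIDR_BRCM_VULCAN	0x420f5160u	/* assumed encoding */

static int needs_kpti(unsigned int midr)
{
	/* Mask off variant [23:20] and revision [3:0] before comparing */
	switch (midr & MIDR_CPU_MODEL_MASK) {
	case MIDR_CAVIUM_THUNDERX2:
	case MIDR_BRCM_VULCAN:
		return 0;	/* whitelisted: not vulnerable */
	}
	return 1;	/* otherwise defer to ID_AA64PFR0_EL1.CSV3 */
}

int main(void)
{
	/* A ThunderX2 at revision 5 still matches after masking: */
	printf("needs_kpti = %d\n", needs_kpti(MIDR_CAVIUM_THUNDERX2 | 0x5));
	return 0;
}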



* Patch "arm64: use RET instruction for exiting the trampoline" has been added to the 4.9-stable tree
  2018-04-03 11:09 ` [PATCH v4.9.y 15/27] arm64: use RET instruction for exiting the trampoline Mark Rutland
@ 2018-04-05 19:42   ` gregkh
  0 siblings, 0 replies; 63+ messages in thread
From: gregkh @ 2018-04-05 19:42 UTC (permalink / raw)
  To: mark.rutland, alex.shi, catalin.marinas, ghackmann, gregkh, will.deacon
  Cc: stable, stable-commits


This is a note to let you know that I've just added the patch titled

    arm64: use RET instruction for exiting the trampoline

to the 4.9-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     arm64-use-ret-instruction-for-exiting-the-trampoline.patch
and it can be found in the queue-4.9 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Thu Apr  5 21:39:28 CEST 2018
From: Mark Rutland <mark.rutland@arm.com>
Date: Tue,  3 Apr 2018 12:09:11 +0100
Subject: arm64: use RET instruction for exiting the trampoline
To: stable@vger.kernel.org
Cc: mark.brown@linaro.org, ard.biesheuvel@linaro.org, marc.zyngier@arm.com, will.deacon@arm.com
Message-ID: <20180403110923.43575-16-mark.rutland@arm.com>

From: Will Deacon <will.deacon@arm.com>

commit be04a6d1126b upstream.

Speculation attacks against the entry trampoline can potentially resteer
the speculative instruction stream through the indirect branch and into
arbitrary gadgets within the kernel.

This patch defends against these attacks by forcing a misprediction
through the return stack: a dummy BL instruction loads an entry into
the stack, so that the predicted program flow of the subsequent RET
instruction is to a branch-to-self instruction which is finally resolved
as a branch to the kernel vectors with speculation suppressed.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
Tested-by: Will Deacon <will.deacon@arm.com>
Tested-by: Greg Hackmann <ghackmann@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/kernel/entry.S |   10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -880,6 +880,14 @@ __ni_sys_trace:
 	.if	\regsize == 64
 	msr	tpidrro_el0, x30	// Restored in kernel_ventry
 	.endif
+	/*
+	 * Defend against branch aliasing attacks by pushing a dummy
+	 * entry onto the return stack and using a RET instruction to
+	 * enter the full-fat kernel vectors.
+	 */
+	bl	2f
+	b	.
+2:
 	tramp_map_kernel	x30
 #ifdef CONFIG_RANDOMIZE_BASE
 	adr	x30, tramp_vectors + PAGE_SIZE
@@ -892,7 +900,7 @@ __ni_sys_trace:
 	msr	vbar_el1, x30
 	add	x30, x30, #(1b - tramp_vectors)
 	isb
-	br	x30
+	ret
 	.endm
 
 	.macro tramp_exit, regsize = 64
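
The trick relies on the return address stack (RAS) predictor: BL pushes a
predicted return address, RET pops one. A toy C model of why the dummy BL
makes the RET's predicted target harmless (addresses and stack depth are made
up for illustration):

#include <stdio.h>

#define RAS_DEPTH 8

static unsigned long ras[RAS_DEPTH];	/* model of the return address stack */
static int ras_top;

static void bl(unsigned long return_addr)
{
	ras[ras_top++ % RAS_DEPTH] = return_addr;	/* predictor push */
}

static unsigned long ret_predicted(void)
{
	return ras[--ras_top % RAS_DEPTH];		/* predictor pop */
}

int main(void)
{
	unsigned long branch_to_self = 0x1000;	/* the "b ." after the BL */
	unsigned long kernel_vectors = 0x2000;	/* actual x30 at the RET */

	bl(branch_to_self);	/* the dummy "bl 2f" in the trampoline */

	/*
	 * Speculation follows the RAS entry (the benign branch-to-self),
	 * not the architectural target held in x30:
	 */
	printf("predicted %#lx, architectural %#lx\n",
	       ret_predicted(), kernel_vectors);
	return 0;
}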



* Patch "module: extend 'rodata=off' boot cmdline parameter to module mappings" has been added to the 4.9-stable tree
  2018-04-03 11:09 ` [PATCH v4.9.y 07/27] module: extend 'rodata=off' boot cmdline parameter to module mappings Mark Rutland
@ 2018-04-05 19:42   ` gregkh
  0 siblings, 0 replies; 63+ messages in thread
From: gregkh @ 2018-04-05 19:42 UTC (permalink / raw)
  To: mark.rutland, alex.shi, ghackmann, gregkh, jeyu, keescook, rusty,
	takahiro.akashi, will.deacon
  Cc: stable, stable-commits


This is a note to let you know that I've just added the patch titled

    module: extend 'rodata=off' boot cmdline parameter to module mappings

to the 4.9-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     module-extend-rodata-off-boot-cmdline-parameter-to-module-mappings.patch
and it can be found in the queue-4.9 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From foo@baz Thu Apr  5 21:39:27 CEST 2018
From: Mark Rutland <mark.rutland@arm.com>
Date: Tue,  3 Apr 2018 12:09:03 +0100
Subject: module: extend 'rodata=off' boot cmdline parameter to module mappings
To: stable@vger.kernel.org
Cc: mark.brown@linaro.org, ard.biesheuvel@linaro.org, marc.zyngier@arm.com, will.deacon@arm.com
Message-ID: <20180403110923.43575-8-mark.rutland@arm.com>

From: AKASHI Takahiro <takahiro.akashi@linaro.org>

commit 39290b389ea upstream.

The current "rodata=off" parameter disables read-only kernel mappings
under CONFIG_DEBUG_RODATA:
    commit d2aa1acad22f ("mm/init: Add 'rodata=off' boot cmdline parameter
    to disable read-only kernel mappings")

This patch logically extends that to module mappings, i.e. the read-only
mappings applied at module loading can be disabled even if
CONFIG_DEBUG_SET_MODULE_RONX is enabled (mainly for debug use). Note,
however, that it only affects RO/RW permissions; NX remains set.

This is the first step towards making CONFIG_DEBUG_SET_MODULE_RONX mandatory
(always-on) in the future, as CONFIG_DEBUG_RODATA already is on x86 and arm64.

Suggested-by: and Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Rusty Russell <rusty@rustcorp.com.au>
Link: http://lkml.kernel.org/r/20161114061505.15238-1-takahiro.akashi@linaro.org
Signed-off-by: Jessica Yu <jeyu@redhat.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org> [v4.9 backport]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
Tested-by: Will Deacon <will.deacon@arm.com>
Tested-by: Greg Hackmann <ghackmann@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 include/linux/init.h |    3 +++
 init/main.c          |    7 +++++--
 kernel/module.c      |   20 +++++++++++++++++---
 3 files changed, 25 insertions(+), 5 deletions(-)

--- a/include/linux/init.h
+++ b/include/linux/init.h
@@ -133,6 +133,9 @@ void prepare_namespace(void);
 void __init load_default_modules(void);
 int __init init_rootfs(void);
 
+#if defined(CONFIG_DEBUG_RODATA) || defined(CONFIG_DEBUG_SET_MODULE_RONX)
+extern bool rodata_enabled;
+#endif
 #ifdef CONFIG_DEBUG_RODATA
 void mark_rodata_ro(void);
 #endif
--- a/init/main.c
+++ b/init/main.c
@@ -81,6 +81,7 @@
 #include <linux/proc_ns.h>
 #include <linux/io.h>
 #include <linux/kaiser.h>
+#include <linux/cache.h>
 
 #include <asm/io.h>
 #include <asm/bugs.h>
@@ -914,14 +915,16 @@ static int try_to_run_init_process(const
 
 static noinline void __init kernel_init_freeable(void);
 
-#ifdef CONFIG_DEBUG_RODATA
-static bool rodata_enabled = true;
+#if defined(CONFIG_DEBUG_RODATA) || defined(CONFIG_DEBUG_SET_MODULE_RONX)
+bool rodata_enabled __ro_after_init = true;
 static int __init set_debug_rodata(char *str)
 {
 	return strtobool(str, &rodata_enabled);
 }
 __setup("rodata=", set_debug_rodata);
+#endif
 
+#ifdef CONFIG_DEBUG_RODATA
 static void mark_readonly(void)
 {
 	if (rodata_enabled)
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -1911,6 +1911,9 @@ static void frob_writable_data(const str
 /* livepatching wants to disable read-only so it can frob module. */
 void module_disable_ro(const struct module *mod)
 {
+	if (!rodata_enabled)
+		return;
+
 	frob_text(&mod->core_layout, set_memory_rw);
 	frob_rodata(&mod->core_layout, set_memory_rw);
 	frob_ro_after_init(&mod->core_layout, set_memory_rw);
@@ -1920,6 +1923,9 @@ void module_disable_ro(const struct modu
 
 void module_enable_ro(const struct module *mod, bool after_init)
 {
+	if (!rodata_enabled)
+		return;
+
 	frob_text(&mod->core_layout, set_memory_ro);
 	frob_rodata(&mod->core_layout, set_memory_ro);
 	frob_text(&mod->init_layout, set_memory_ro);
@@ -1952,6 +1958,9 @@ void set_all_modules_text_rw(void)
 {
 	struct module *mod;
 
+	if (!rodata_enabled)
+		return;
+
 	mutex_lock(&module_mutex);
 	list_for_each_entry_rcu(mod, &modules, list) {
 		if (mod->state == MODULE_STATE_UNFORMED)
@@ -1968,6 +1977,9 @@ void set_all_modules_text_ro(void)
 {
 	struct module *mod;
 
+	if (!rodata_enabled)
+		return;
+
 	mutex_lock(&module_mutex);
 	list_for_each_entry_rcu(mod, &modules, list) {
 		if (mod->state == MODULE_STATE_UNFORMED)
@@ -1981,10 +1993,12 @@ void set_all_modules_text_ro(void)
 
 static void disable_ro_nx(const struct module_layout *layout)
 {
-	frob_text(layout, set_memory_rw);
-	frob_rodata(layout, set_memory_rw);
+	if (rodata_enabled) {
+		frob_text(layout, set_memory_rw);
+		frob_rodata(layout, set_memory_rw);
+		frob_ro_after_init(layout, set_memory_rw);
+	}
 	frob_rodata(layout, set_memory_x);
-	frob_ro_after_init(layout, set_memory_rw);
 	frob_ro_after_init(layout, set_memory_x);
 	frob_writable_data(layout, set_memory_x);
 }
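
For reference, the parameter plumbing is just a boolean setup hook. A
self-contained C sketch of the idea (the parsing below is a crude stand-in
for the kernel's strtobool(), and the names are only illustrative):

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

static bool rodata_enabled = true;	/* default: keep mappings read-only */

static int set_debug_rodata(const char *val)
{
	/* crude stand-in for strtobool(): accept on/off/1/0 */
	if (!strcmp(val, "on") || !strcmp(val, "1"))
		rodata_enabled = true;
	else if (!strcmp(val, "off") || !strcmp(val, "0"))
		rodata_enabled = false;
	else
		return -1;
	return 0;
}

int main(void)
{
	set_debug_rodata("off");	/* as if booted with rodata=off */
	printf("module RO/RW frobbing %s\n",
	       rodata_enabled ? "enabled" : "skipped");
	return 0;
}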



end of thread

Thread overview: 63+ messages
2018-04-03 11:08 [PATCH v4.9.y 00/27] arm64 meltdown patches Mark Rutland
2018-04-03 11:08 ` [PATCH v4.9.y 01/27] arm64: mm: Use non-global mappings for kernel space Mark Rutland
2018-04-05 19:42   ` Patch "arm64: mm: Use non-global mappings for kernel space" has been added to the 4.9-stable tree gregkh
2018-04-03 11:08 ` [PATCH v4.9.y 02/27] arm64: mm: Move ASID from TTBR0 to TTBR1 Mark Rutland
2018-04-05 19:42   ` Patch "arm64: mm: Move ASID from TTBR0 to TTBR1" has been added to the 4.9-stable tree gregkh
2018-04-03 11:08 ` [PATCH v4.9.y 03/27] arm64: mm: Allocate ASIDs in pairs Mark Rutland
2018-04-05 19:42   ` Patch "arm64: mm: Allocate ASIDs in pairs" has been added to the 4.9-stable tree gregkh
2018-04-03 11:09 ` [PATCH v4.9.y 04/27] arm64: mm: Add arm64_kernel_unmapped_at_el0 helper Mark Rutland
2018-04-05 19:42   ` Patch "arm64: mm: Add arm64_kernel_unmapped_at_el0 helper" has been added to the 4.9-stable tree gregkh
2018-04-03 11:09 ` [PATCH v4.9.y 05/27] arm64: mm: Invalidate both kernel and user ASIDs when performing TLBI Mark Rutland
2018-04-05 19:42   ` Patch "arm64: mm: Invalidate both kernel and user ASIDs when performing TLBI" has been added to the 4.9-stable tree gregkh
2018-04-03 11:09 ` [PATCH v4.9.y 06/27] arm64: factor out entry stack manipulation Mark Rutland
2018-04-05 19:42   ` Patch "arm64: factor out entry stack manipulation" has been added to the 4.9-stable tree gregkh
2018-04-03 11:09 ` [PATCH v4.9.y 07/27] module: extend 'rodata=off' boot cmdline parameter to module mappings Mark Rutland
2018-04-05 19:42   ` Patch "module: extend 'rodata=off' boot cmdline parameter to module mappings" has been added to the 4.9-stable tree gregkh
2018-04-03 11:09 ` [PATCH v4.9.y 08/27] arm64: entry: Add exception trampoline page for exceptions from EL0 Mark Rutland
2018-04-05 19:42   ` Patch "arm64: entry: Add exception trampoline page for exceptions from EL0" has been added to the 4.9-stable tree gregkh
2018-04-03 11:09 ` [PATCH v4.9.y 09/27] arm64: mm: Map entry trampoline into trampoline and kernel page tables Mark Rutland
2018-04-03 11:15   ` Mark Rutland
2018-04-05 19:33     ` Greg KH
2018-04-05 19:42   ` Patch "arm64: mm: Map entry trampoline into trampoline and kernel page tables" has been added to the 4.9-stable tree gregkh
2018-04-03 11:09 ` [PATCH v4.9.y 10/27] arm64: entry: Explicitly pass exception level to kernel_ventry macro Mark Rutland
2018-04-05 19:42   ` Patch "arm64: entry: Explicitly pass exception level to kernel_ventry macro" has been added to the 4.9-stable tree gregkh
2018-04-03 11:09 ` [PATCH v4.9.y 11/27] arm64: entry: Hook up entry trampoline to exception vectors Mark Rutland
2018-04-05 19:42   ` Patch "arm64: entry: Hook up entry trampoline to exception vectors" has been added to the 4.9-stable tree gregkh
2018-04-03 11:09 ` [PATCH v4.9.y 12/27] arm64: tls: Avoid unconditional zeroing of tpidrro_el0 for native tasks Mark Rutland
2018-04-05 19:42   ` Patch "arm64: tls: Avoid unconditional zeroing of tpidrro_el0 for native tasks" has been added to the 4.9-stable tree gregkh
2018-04-03 11:09 ` [PATCH v4.9.y 13/27] arm64: entry: Add fake CPU feature for unmapping the kernel at EL0 Mark Rutland
2018-04-05 19:42   ` Patch "arm64: entry: Add fake CPU feature for unmapping the kernel at EL0" has been added to the 4.9-stable tree gregkh
2018-04-03 11:09 ` [PATCH v4.9.y 14/27] arm64: kaslr: Put kernel vectors address in separate data page Mark Rutland
2018-04-05 19:42   ` Patch "arm64: kaslr: Put kernel vectors address in separate data page" has been added to the 4.9-stable tree gregkh
2018-04-03 11:09 ` [PATCH v4.9.y 15/27] arm64: use RET instruction for exiting the trampoline Mark Rutland
2018-04-05 19:42   ` Patch "arm64: use RET instruction for exiting the trampoline" has been added to the 4.9-stable tree gregkh
2018-04-03 11:09 ` [PATCH v4.9.y 16/27] arm64: Kconfig: Add CONFIG_UNMAP_KERNEL_AT_EL0 Mark Rutland
2018-04-05 19:42   ` Patch "arm64: Kconfig: Add CONFIG_UNMAP_KERNEL_AT_EL0" has been added to the 4.9-stable tree gregkh
2018-04-03 11:09 ` [PATCH v4.9.y 17/27] arm64: Kconfig: Reword UNMAP_KERNEL_AT_EL0 kconfig entry Mark Rutland
2018-04-05 19:42   ` Patch "arm64: Kconfig: Reword UNMAP_KERNEL_AT_EL0 kconfig entry" has been added to the 4.9-stable tree gregkh
2018-04-03 11:09 ` [PATCH v4.9.y 18/27] arm64: Take into account ID_AA64PFR0_EL1.CSV3 Mark Rutland
2018-04-05 19:42   ` Patch "arm64: Take into account ID_AA64PFR0_EL1.CSV3" has been added to the 4.9-stable tree gregkh
2018-04-03 11:09 ` [PATCH v4.9.y 19/27] arm64: Allow checking of a CPU-local erratum Mark Rutland
2018-04-05 19:42   ` Patch "arm64: Allow checking of a CPU-local erratum" has been added to the 4.9-stable tree gregkh
2018-04-03 11:09 ` [PATCH v4.9.y 20/27] arm64: capabilities: Handle duplicate entries for a capability Mark Rutland
2018-04-05 19:42   ` Patch "arm64: capabilities: Handle duplicate entries for a capability" has been added to the 4.9-stable tree gregkh
2018-04-03 11:09 ` [PATCH v4.9.y 21/27] arm64: cputype: Add MIDR values for Cavium ThunderX2 CPUs Mark Rutland
2018-04-05 19:42   ` Patch "arm64: cputype: Add MIDR values for Cavium ThunderX2 CPUs" has been added to the 4.9-stable tree gregkh
2018-04-03 11:09 ` [PATCH v4.9.y 22/27] arm64: Turn on KPTI only on CPUs that need it Mark Rutland
2018-04-05 19:42   ` Patch "arm64: Turn on KPTI only on CPUs that need it" has been added to the 4.9-stable tree gregkh
2018-04-03 11:09 ` [PATCH v4.9.y 23/27] arm64: kpti: Make use of nG dependent on arm64_kernel_unmapped_at_el0() Mark Rutland
2018-04-05 19:42   ` Patch "arm64: kpti: Make use of nG dependent on arm64_kernel_unmapped_at_el0()" has been added to the 4.9-stable tree gregkh
2018-04-03 11:09 ` [PATCH v4.9.y 24/27] arm64: kpti: Add ->enable callback to remap swapper using nG mappings Mark Rutland
2018-04-05 19:42   ` Patch "arm64: kpti: Add ->enable callback to remap swapper using nG mappings" has been added to the 4.9-stable tree gregkh
2018-04-03 11:09 ` [PATCH v4.9.y 25/27] arm64: Force KPTI to be disabled on Cavium ThunderX Mark Rutland
2018-04-05 19:42   ` Patch "arm64: Force KPTI to be disabled on Cavium ThunderX" has been added to the 4.9-stable tree gregkh
2018-04-03 11:09 ` [PATCH v4.9.y 26/27] arm64: entry: Reword comment about post_ttbr_update_workaround Mark Rutland
2018-04-05 19:42   ` Patch "arm64: entry: Reword comment about post_ttbr_update_workaround" has been added to the 4.9-stable tree gregkh
2018-04-03 11:09 ` [PATCH v4.9.y 27/27] arm64: idmap: Use "awx" flags for .idmap.text .pushsection directives Mark Rutland
2018-04-05 19:42   ` Patch "arm64: idmap: Use "awx" flags for .idmap.text .pushsection directives" has been added to the 4.9-stable tree gregkh
2018-04-04 15:07 ` [PATCH v4.9.y 00/27] arm64 meltdown patches Greg KH
2018-04-05 10:04   ` Will Deacon
2018-04-05 10:15     ` Mark Rutland
2018-04-05 11:46   ` Will Deacon
2018-04-05 17:34 ` Greg Hackmann
2018-04-05 19:15   ` Greg KH
