* [PATCH v2 00/18] arm64: Unmap the kernel whilst running in userspace (KAISER)
From: Will Deacon @ 2017-11-30 16:39 UTC
  To: linux-arm-kernel
  Cc: linux-kernel, catalin.marinas, mark.rutland, ard.biesheuvel,
	sboyd, dave.hansen, keescook, msalter, labbott, tglx,
	Will Deacon

Hi again,

This is version two of the patches previously posted here:

  http://lists.infradead.org/pipermail/linux-arm-kernel/2017-November/542751.html

Changes since v1 include:

  * Based on v4.15-rc1
  * Trampoline moved into FIXMAP area
  * Explicit static key replaced by cpu cap
  * Disable SPE for userspace profiling if kernel unmapped at EL0
  * Changed polarity of cpu feature to match config option
  * Changed command-line option so we can force on in future if necessary
  * Changed Falkor workaround to invalidate different page within 2MB region
  * Reworked alternative sequences in entry.S, since the NOP slides with
    kaiser=off were measurable

I experimented with leaving the vbar set to point at the kaiser vectors,
but I couldn't measure any performance improvement from that and it made
the code slightly more complicated, so I've left it as-is.

The patches are based on v4.15-rc1 and have also been pushed here:

  git://git.kernel.org/pub/scm/linux/kernel/git/will/linux.git kaiser

Feedback welcome, particularly on a better name for the command-line option.

Will

--->8

Will Deacon (18):
  arm64: mm: Use non-global mappings for kernel space
  arm64: mm: Temporarily disable ARM64_SW_TTBR0_PAN
  arm64: mm: Move ASID from TTBR0 to TTBR1
  arm64: mm: Remove pre_ttbr0_update_workaround for Falkor erratum
    #E1003
  arm64: mm: Rename post_ttbr0_update_workaround
  arm64: mm: Fix and re-enable ARM64_SW_TTBR0_PAN
  arm64: mm: Allocate ASIDs in pairs
  arm64: mm: Add arm64_kernel_unmapped_at_el0 helper
  arm64: mm: Invalidate both kernel and user ASIDs when performing TLBI
  arm64: entry: Add exception trampoline page for exceptions from EL0
  arm64: mm: Map entry trampoline into trampoline and kernel page tables
  arm64: entry: Explicitly pass exception level to kernel_ventry macro
  arm64: entry: Hook up entry trampoline to exception vectors
  arm64: erratum: Work around Falkor erratum #E1003 in trampoline code
  arm64: tls: Avoid unconditional zeroing of tpidrro_el0 for native
    tasks
  arm64: entry: Add fake CPU feature for unmapping the kernel at EL0
  arm64: Kconfig: Add CONFIG_UNMAP_KERNEL_AT_EL0
  perf: arm_spe: Disallow userspace profiling when
    arm64_kernel_unmapped_at_el0()

 arch/arm64/Kconfig                      |  30 +++--
 arch/arm64/include/asm/asm-uaccess.h    |  25 +++--
 arch/arm64/include/asm/assembler.h      |  27 +----
 arch/arm64/include/asm/cpucaps.h        |   3 +-
 arch/arm64/include/asm/fixmap.h         |   4 +
 arch/arm64/include/asm/kernel-pgtable.h |  12 +-
 arch/arm64/include/asm/mmu.h            |  10 ++
 arch/arm64/include/asm/mmu_context.h    |   9 +-
 arch/arm64/include/asm/pgtable-hwdef.h  |   1 +
 arch/arm64/include/asm/pgtable-prot.h   |  21 +++-
 arch/arm64/include/asm/pgtable.h        |   1 +
 arch/arm64/include/asm/proc-fns.h       |   6 -
 arch/arm64/include/asm/tlbflush.h       |  16 ++-
 arch/arm64/include/asm/uaccess.h        |  21 +++-
 arch/arm64/kernel/asm-offsets.c         |   6 +-
 arch/arm64/kernel/cpufeature.c          |  41 +++++++
 arch/arm64/kernel/entry.S               | 190 +++++++++++++++++++++++++++-----
 arch/arm64/kernel/process.c             |  12 +-
 arch/arm64/kernel/vmlinux.lds.S         |  17 +++
 arch/arm64/lib/clear_user.S             |   2 +-
 arch/arm64/lib/copy_from_user.S         |   2 +-
 arch/arm64/lib/copy_in_user.S           |   2 +-
 arch/arm64/lib/copy_to_user.S           |   2 +-
 arch/arm64/mm/cache.S                   |   2 +-
 arch/arm64/mm/context.c                 |  36 +++---
 arch/arm64/mm/mmu.c                     |  23 ++++
 arch/arm64/mm/proc.S                    |  12 +-
 arch/arm64/xen/hypercall.S              |   2 +-
 drivers/perf/arm_spe_pmu.c              |   7 ++
 29 files changed, 407 insertions(+), 135 deletions(-)

-- 
2.1.4

* [PATCH v2 01/18] arm64: mm: Use non-global mappings for kernel space
From: Will Deacon @ 2017-11-30 16:39 UTC
  To: linux-arm-kernel
  Cc: linux-kernel, catalin.marinas, mark.rutland, ard.biesheuvel,
	sboyd, dave.hansen, keescook, msalter, labbott, tglx,
	Will Deacon

In preparation for unmapping the kernel whilst running in userspace,
make the kernel mappings non-global so we can avoid expensive TLB
invalidation on kernel exit to userspace.
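
For illustration, here is a standalone C sketch of the net effect on
the swapper flags (macro names and bit positions match
arch/arm64/include/asm/pgtable-hwdef.h; the program itself is just a
sketch, not part of the patch):

  #include <stdio.h>

  #define PTE_TYPE_PAGE (3UL << 0)   /* valid page descriptor */
  #define PTE_SHARED    (3UL << 8)   /* SH[1:0]: inner shareable */
  #define PTE_AF        (1UL << 10)  /* access flag */
  #define PTE_NG        (1UL << 11)  /* nG: TLB entry keyed by ASID */

  int main(void)
  {
          unsigned long flags = PTE_TYPE_PAGE | PTE_AF | PTE_SHARED;

          /* with CONFIG_UNMAP_KERNEL_AT_EL0, kernel mappings gain nG */
          flags |= PTE_NG;
          printf("swapper PTE flags: %#lx\n", flags);  /* 0xf03 */
          return 0;
  }

A non-global (nG) entry only hits in the TLB when the current ASID
matches, which is what allows us to switch ASIDs on kernel entry/exit
rather than invalidating global kernel entries.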

Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/kernel-pgtable.h | 12 ++++++++++--
 arch/arm64/include/asm/pgtable-prot.h   | 21 +++++++++++++++------
 2 files changed, 25 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h
index 7803343e5881..77a27af01371 100644
--- a/arch/arm64/include/asm/kernel-pgtable.h
+++ b/arch/arm64/include/asm/kernel-pgtable.h
@@ -78,8 +78,16 @@
 /*
  * Initial memory map attributes.
  */
-#define SWAPPER_PTE_FLAGS	(PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
-#define SWAPPER_PMD_FLAGS	(PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
+#define _SWAPPER_PTE_FLAGS	(PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
+#define _SWAPPER_PMD_FLAGS	(PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
+
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+#define SWAPPER_PTE_FLAGS	(_SWAPPER_PTE_FLAGS | PTE_NG)
+#define SWAPPER_PMD_FLAGS	(_SWAPPER_PMD_FLAGS | PMD_SECT_NG)
+#else
+#define SWAPPER_PTE_FLAGS	_SWAPPER_PTE_FLAGS
+#define SWAPPER_PMD_FLAGS	_SWAPPER_PMD_FLAGS
+#endif
 
 #if ARM64_SWAPPER_USES_SECTION_MAPS
 #define SWAPPER_MM_MMUFLAGS	(PMD_ATTRINDX(MT_NORMAL) | SWAPPER_PMD_FLAGS)
diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
index 0a5635fb0ef9..22a926825e3f 100644
--- a/arch/arm64/include/asm/pgtable-prot.h
+++ b/arch/arm64/include/asm/pgtable-prot.h
@@ -34,8 +34,16 @@
 
 #include <asm/pgtable-types.h>
 
-#define PROT_DEFAULT		(PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
-#define PROT_SECT_DEFAULT	(PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
+#define _PROT_DEFAULT		(PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
+#define _PROT_SECT_DEFAULT	(PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
+
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+#define PROT_DEFAULT		(_PROT_DEFAULT | PTE_NG)
+#define PROT_SECT_DEFAULT	(_PROT_SECT_DEFAULT | PMD_SECT_NG)
+#else
+#define PROT_DEFAULT		_PROT_DEFAULT
+#define PROT_SECT_DEFAULT	_PROT_SECT_DEFAULT
+#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
 
 #define PROT_DEVICE_nGnRnE	(PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_WRITE | PTE_ATTRINDX(MT_DEVICE_nGnRnE))
 #define PROT_DEVICE_nGnRE	(PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_WRITE | PTE_ATTRINDX(MT_DEVICE_nGnRE))
@@ -48,6 +56,7 @@
 #define PROT_SECT_NORMAL_EXEC	(PROT_SECT_DEFAULT | PMD_SECT_UXN | PMD_ATTRINDX(MT_NORMAL))
 
 #define _PAGE_DEFAULT		(PROT_DEFAULT | PTE_ATTRINDX(MT_NORMAL))
+#define _HYP_PAGE_DEFAULT	(_PAGE_DEFAULT & ~PTE_NG)
 
 #define PAGE_KERNEL		__pgprot(_PAGE_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_WRITE)
 #define PAGE_KERNEL_RO		__pgprot(_PAGE_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_RDONLY)
@@ -55,15 +64,15 @@
 #define PAGE_KERNEL_EXEC	__pgprot(_PAGE_DEFAULT | PTE_UXN | PTE_DIRTY | PTE_WRITE)
 #define PAGE_KERNEL_EXEC_CONT	__pgprot(_PAGE_DEFAULT | PTE_UXN | PTE_DIRTY | PTE_WRITE | PTE_CONT)
 
-#define PAGE_HYP		__pgprot(_PAGE_DEFAULT | PTE_HYP | PTE_HYP_XN)
-#define PAGE_HYP_EXEC		__pgprot(_PAGE_DEFAULT | PTE_HYP | PTE_RDONLY)
-#define PAGE_HYP_RO		__pgprot(_PAGE_DEFAULT | PTE_HYP | PTE_RDONLY | PTE_HYP_XN)
+#define PAGE_HYP		__pgprot(_HYP_PAGE_DEFAULT | PTE_HYP | PTE_HYP_XN)
+#define PAGE_HYP_EXEC		__pgprot(_HYP_PAGE_DEFAULT | PTE_HYP | PTE_RDONLY)
+#define PAGE_HYP_RO		__pgprot(_HYP_PAGE_DEFAULT | PTE_HYP | PTE_RDONLY | PTE_HYP_XN)
 #define PAGE_HYP_DEVICE		__pgprot(PROT_DEVICE_nGnRE | PTE_HYP)
 
 #define PAGE_S2			__pgprot(PROT_DEFAULT | PTE_S2_MEMATTR(MT_S2_NORMAL) | PTE_S2_RDONLY)
 #define PAGE_S2_DEVICE		__pgprot(PROT_DEFAULT | PTE_S2_MEMATTR(MT_S2_DEVICE_nGnRE) | PTE_S2_RDONLY | PTE_UXN)
 
-#define PAGE_NONE		__pgprot(((_PAGE_DEFAULT) & ~PTE_VALID) | PTE_PROT_NONE | PTE_RDONLY | PTE_PXN | PTE_UXN)
+#define PAGE_NONE		__pgprot(((_PAGE_DEFAULT) & ~PTE_VALID) | PTE_PROT_NONE | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_UXN)
 #define PAGE_SHARED		__pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_PXN | PTE_UXN | PTE_WRITE)
 #define PAGE_SHARED_EXEC	__pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_PXN | PTE_WRITE)
 #define PAGE_READONLY		__pgprot(_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_UXN)
-- 
2.1.4

* [PATCH v2 02/18] arm64: mm: Temporarily disable ARM64_SW_TTBR0_PAN
From: Will Deacon @ 2017-11-30 16:39 UTC
  To: linux-arm-kernel
  Cc: linux-kernel, catalin.marinas, mark.rutland, ard.biesheuvel,
	sboyd, dave.hansen, keescook, msalter, labbott, tglx,
	Will Deacon

We're about to rework the way ASIDs are allocated, switch_mm is
implemented and low-level kernel entry/exit is handled, so keep the
ARM64_SW_TTBR0_PAN code out of the way whilst we do the heavy lifting.

It will be re-enabled in a subsequent patch.

Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index a93339f5178f..7e7d7fd152c4 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -910,6 +910,7 @@ endif
 
 config ARM64_SW_TTBR0_PAN
 	bool "Emulate Privileged Access Never using TTBR0_EL1 switching"
+	depends on BROKEN       # Temporary while switch_mm is reworked
 	help
 	  Enabling this option prevents the kernel from accessing
 	  user-space memory directly by pointing TTBR0_EL1 to a reserved
-- 
2.1.4

* [PATCH v2 03/18] arm64: mm: Move ASID from TTBR0 to TTBR1
From: Will Deacon @ 2017-11-30 16:39 UTC
  To: linux-arm-kernel
  Cc: linux-kernel, catalin.marinas, mark.rutland, ard.biesheuvel,
	sboyd, dave.hansen, keescook, msalter, labbott, tglx,
	Will Deacon

In preparation for mapping kernelspace and userspace with different
ASIDs, move the ASID to TTBR1 and update switch_mm to context-switch
TTBR0 via an invalid mapping (the zero page).
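
Since TCR_EL1.A1 selects the ASID from TTBR1_EL1, the ASID now lives in
bits 63:48 of TTBR1. As a rough C sketch of the field update that the
bfi instruction below performs (ttbr_set_asid is a hypothetical helper,
not something this patch adds):

  #include <stdint.h>

  /* ASID occupies TTBRx_EL1[63:48] when TCR_ASID16 is set */
  static uint64_t ttbr_set_asid(uint64_t ttbr, uint16_t asid)
  {
          ttbr &= ~(0xffffULL << 48);            /* clear the old ASID */
          return ttbr | ((uint64_t)asid << 48);  /* insert the new one */
  }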

Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/mmu_context.h   | 7 +++++++
 arch/arm64/include/asm/pgtable-hwdef.h | 1 +
 arch/arm64/include/asm/proc-fns.h      | 6 ------
 arch/arm64/mm/proc.S                   | 9 ++++++---
 4 files changed, 14 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index 3257895a9b5e..56723bcbfaaa 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -37,6 +37,13 @@
 #include <asm/sysreg.h>
 #include <asm/tlbflush.h>
 
+#define cpu_switch_mm(pgd,mm)				\
+do {							\
+	BUG_ON(pgd == swapper_pg_dir);			\
+	cpu_set_reserved_ttbr0();			\
+	cpu_do_switch_mm(virt_to_phys(pgd),mm);		\
+} while (0)
+
 static inline void contextidr_thread_switch(struct task_struct *next)
 {
 	if (!IS_ENABLED(CONFIG_PID_IN_CONTEXTIDR))
diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
index eb0c2bd90de9..8df4cb6ac6f7 100644
--- a/arch/arm64/include/asm/pgtable-hwdef.h
+++ b/arch/arm64/include/asm/pgtable-hwdef.h
@@ -272,6 +272,7 @@
 #define TCR_TG1_4K		(UL(2) << TCR_TG1_SHIFT)
 #define TCR_TG1_64K		(UL(3) << TCR_TG1_SHIFT)
 
+#define TCR_A1			(UL(1) << 22)
 #define TCR_ASID16		(UL(1) << 36)
 #define TCR_TBI0		(UL(1) << 37)
 #define TCR_HA			(UL(1) << 39)
diff --git a/arch/arm64/include/asm/proc-fns.h b/arch/arm64/include/asm/proc-fns.h
index 14ad6e4e87d1..16cef2e8449e 100644
--- a/arch/arm64/include/asm/proc-fns.h
+++ b/arch/arm64/include/asm/proc-fns.h
@@ -35,12 +35,6 @@ extern u64 cpu_do_resume(phys_addr_t ptr, u64 idmap_ttbr);
 
 #include <asm/memory.h>
 
-#define cpu_switch_mm(pgd,mm)				\
-do {							\
-	BUG_ON(pgd == swapper_pg_dir);			\
-	cpu_do_switch_mm(virt_to_phys(pgd),mm);		\
-} while (0)
-
 #endif /* __ASSEMBLY__ */
 #endif /* __KERNEL__ */
 #endif /* __ASM_PROCFNS_H */
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 95233dfc4c39..a8a64898a2aa 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -139,9 +139,12 @@ ENDPROC(cpu_do_resume)
  */
 ENTRY(cpu_do_switch_mm)
 	pre_ttbr0_update_workaround x0, x2, x3
+	mrs	x2, ttbr1_el1
 	mmid	x1, x1				// get mm->context.id
-	bfi	x0, x1, #48, #16		// set the ASID
-	msr	ttbr0_el1, x0			// set TTBR0
+	bfi	x2, x1, #48, #16		// set the ASID
+	msr	ttbr1_el1, x2			// in TTBR1 (since TCR.A1 is set)
+	isb
+	msr	ttbr0_el1, x0			// now update TTBR0
 	isb
 	post_ttbr0_update_workaround
 	ret
@@ -224,7 +227,7 @@ ENTRY(__cpu_setup)
 	 * both user and kernel.
 	 */
 	ldr	x10, =TCR_TxSZ(VA_BITS) | TCR_CACHE_FLAGS | TCR_SMP_FLAGS | \
-			TCR_TG_FLAGS | TCR_ASID16 | TCR_TBI0
+			TCR_TG_FLAGS | TCR_ASID16 | TCR_TBI0 | TCR_A1
 	tcr_set_idmap_t0sz	x10, x9
 
 	/*
-- 
2.1.4

* [PATCH v2 04/18] arm64: mm: Remove pre_ttbr0_update_workaround for Falkor erratum #E1003
From: Will Deacon @ 2017-11-30 16:39 UTC
  To: linux-arm-kernel
  Cc: linux-kernel, catalin.marinas, mark.rutland, ard.biesheuvel,
	sboyd, dave.hansen, keescook, msalter, labbott, tglx,
	Will Deacon

The pre_ttbr0_update_workaround hook is called prior to context-switching
TTBR0 because Falkor erratum E1003 can cause TLB allocation with the wrong
ASID if both the ASID and the base address of the TTBR are updated at
the same time.

With the ASID sitting safely in TTBR1, we no longer update things
atomically, so we can remove the pre_ttbr0_update_workaround macro as
it's no longer required. The erratum infrastructure and documentation
are left in place for #E1003, as they will be required by the entry
trampoline code in a future patch.
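
In C-like pseudocode, the switch performed by cpu_do_switch_mm now has
the following shape (a kernel-context sketch, not the actual
implementation; read_sysreg/write_sysreg/isb are the usual arm64
helpers):

  #include <linux/types.h>
  #include <asm/barrier.h>
  #include <asm/sysreg.h>

  static void switch_mm_sketch(u64 pgd_phys, u64 asid)
  {
          u64 ttbr1 = read_sysreg(ttbr1_el1);

          ttbr1 &= ~(0xffffUL << 48);             /* only the ASID...    */
          ttbr1 |= asid << 48;
          write_sysreg(ttbr1, ttbr1_el1);         /* ...changes here     */
          isb();
          write_sysreg(pgd_phys, ttbr0_el1);      /* only the BADDR here */
          isb();
  }

Each system register write changes either the ASID or the BADDR, never
both at once, so the E1003 condition cannot be triggered on this path.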

Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/assembler.h   | 22 ----------------------
 arch/arm64/include/asm/mmu_context.h |  2 --
 arch/arm64/mm/context.c              | 11 -----------
 arch/arm64/mm/proc.S                 |  1 -
 4 files changed, 36 deletions(-)

diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index aef72d886677..e1fa5db858b7 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -26,7 +26,6 @@
 #include <asm/asm-offsets.h>
 #include <asm/cpufeature.h>
 #include <asm/debug-monitors.h>
-#include <asm/mmu_context.h>
 #include <asm/page.h>
 #include <asm/pgtable-hwdef.h>
 #include <asm/ptrace.h>
@@ -478,27 +477,6 @@ alternative_endif
 	.endm
 
 /*
- * Errata workaround prior to TTBR0_EL1 update
- *
- * 	val:	TTBR value with new BADDR, preserved
- * 	tmp0:	temporary register, clobbered
- * 	tmp1:	other temporary register, clobbered
- */
-	.macro	pre_ttbr0_update_workaround, val, tmp0, tmp1
-#ifdef CONFIG_QCOM_FALKOR_ERRATUM_1003
-alternative_if ARM64_WORKAROUND_QCOM_FALKOR_E1003
-	mrs	\tmp0, ttbr0_el1
-	mov	\tmp1, #FALKOR_RESERVED_ASID
-	bfi	\tmp0, \tmp1, #48, #16		// reserved ASID + old BADDR
-	msr	ttbr0_el1, \tmp0
-	isb
-	bfi	\tmp0, \val, #0, #48		// reserved ASID + new BADDR
-	msr	ttbr0_el1, \tmp0
-	isb
-alternative_else_nop_endif
-#endif
-	.endm
-
 /*
  * Errata workaround post TTBR0_EL1 update.
  */
diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index 56723bcbfaaa..6d93bd545906 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -19,8 +19,6 @@
 #ifndef __ASM_MMU_CONTEXT_H
 #define __ASM_MMU_CONTEXT_H
 
-#define FALKOR_RESERVED_ASID	1
-
 #ifndef __ASSEMBLY__
 
 #include <linux/compiler.h>
diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index ab9f5f0fb2c7..78816e476491 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -79,13 +79,6 @@ void verify_cpu_asid_bits(void)
 	}
 }
 
-static void set_reserved_asid_bits(void)
-{
-	if (IS_ENABLED(CONFIG_QCOM_FALKOR_ERRATUM_1003) &&
-	    cpus_have_const_cap(ARM64_WORKAROUND_QCOM_FALKOR_E1003))
-		__set_bit(FALKOR_RESERVED_ASID, asid_map);
-}
-
 static void flush_context(unsigned int cpu)
 {
 	int i;
@@ -94,8 +87,6 @@ static void flush_context(unsigned int cpu)
 	/* Update the list of reserved ASIDs and the ASID bitmap. */
 	bitmap_clear(asid_map, 0, NUM_USER_ASIDS);
 
-	set_reserved_asid_bits();
-
 	/*
 	 * Ensure the generation bump is observed before we xchg the
 	 * active_asids.
@@ -250,8 +241,6 @@ static int asids_init(void)
 		panic("Failed to allocate bitmap for %lu ASIDs\n",
 		      NUM_USER_ASIDS);
 
-	set_reserved_asid_bits();
-
 	pr_info("ASID allocator initialised with %lu entries\n", NUM_USER_ASIDS);
 	return 0;
 }
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index a8a64898a2aa..f2ff0837577c 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -138,7 +138,6 @@ ENDPROC(cpu_do_resume)
  *	- pgd_phys - physical address of new TTB
  */
 ENTRY(cpu_do_switch_mm)
-	pre_ttbr0_update_workaround x0, x2, x3
 	mrs	x2, ttbr1_el1
 	mmid	x1, x1				// get mm->context.id
 	bfi	x2, x1, #48, #16		// set the ASID
-- 
2.1.4

* [PATCH v2 05/18] arm64: mm: Rename post_ttbr0_update_workaround
From: Will Deacon @ 2017-11-30 16:39 UTC
  To: linux-arm-kernel
  Cc: linux-kernel, catalin.marinas, mark.rutland, ard.biesheuvel,
	sboyd, dave.hansen, keescook, msalter, labbott, tglx,
	Will Deacon

The post_ttbr0_update_workaround hook applies to any change to TTBRx_EL1.
Since we're using TTBR1 for the ASID, rename the hook to make it clearer
what it's doing.

Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/assembler.h | 5 ++---
 arch/arm64/kernel/entry.S          | 2 +-
 arch/arm64/mm/proc.S               | 2 +-
 3 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index e1fa5db858b7..c45bc94f15d0 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -477,10 +477,9 @@ alternative_endif
 	.endm
 
 /*
-/*
- * Errata workaround post TTBR0_EL1 update.
+ * Errata workaround post TTBRx_EL1 update.
  */
-	.macro	post_ttbr0_update_workaround
+	.macro	post_ttbr_update_workaround
 #ifdef CONFIG_CAVIUM_ERRATUM_27456
 alternative_if ARM64_WORKAROUND_CAVIUM_27456
 	ic	iallu
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 6d14b8f29b5f..804e43c9cb0b 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -257,7 +257,7 @@ alternative_else_nop_endif
 	 * Cavium erratum 27456 (broadcast TLBI instructions may cause I-cache
 	 * corruption).
 	 */
-	post_ttbr0_update_workaround
+	post_ttbr_update_workaround
 	.endif
 1:
 	.if	\el != 0
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index f2ff0837577c..3146dc96f05b 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -145,7 +145,7 @@ ENTRY(cpu_do_switch_mm)
 	isb
 	msr	ttbr0_el1, x0			// now update TTBR0
 	isb
-	post_ttbr0_update_workaround
+	post_ttbr_update_workaround
 	ret
 ENDPROC(cpu_do_switch_mm)
 
-- 
2.1.4

* [PATCH v2 06/18] arm64: mm: Fix and re-enable ARM64_SW_TTBR0_PAN
From: Will Deacon @ 2017-11-30 16:39 UTC
  To: linux-arm-kernel
  Cc: linux-kernel, catalin.marinas, mark.rutland, ard.biesheuvel,
	sboyd, dave.hansen, keescook, msalter, labbott, tglx,
	Will Deacon

With the ASID now installed in TTBR1, we can re-enable ARM64_SW_TTBR0_PAN
by ensuring that we switch to a reserved ASID of zero when disabling
user access and restore the active user ASID on the uaccess enable path.

Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/Kconfig                   |  1 -
 arch/arm64/include/asm/asm-uaccess.h | 25 +++++++++++++++++--------
 arch/arm64/include/asm/uaccess.h     | 21 +++++++++++++++++----
 arch/arm64/kernel/entry.S            |  4 ++--
 arch/arm64/lib/clear_user.S          |  2 +-
 arch/arm64/lib/copy_from_user.S      |  2 +-
 arch/arm64/lib/copy_in_user.S        |  2 +-
 arch/arm64/lib/copy_to_user.S        |  2 +-
 arch/arm64/mm/cache.S                |  2 +-
 arch/arm64/xen/hypercall.S           |  2 +-
 10 files changed, 42 insertions(+), 21 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 7e7d7fd152c4..a93339f5178f 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -910,7 +910,6 @@ endif
 
 config ARM64_SW_TTBR0_PAN
 	bool "Emulate Privileged Access Never using TTBR0_EL1 switching"
-	depends on BROKEN       # Temporary while switch_mm is reworked
 	help
 	  Enabling this option prevents the kernel from accessing
 	  user-space memory directly by pointing TTBR0_EL1 to a reserved
diff --git a/arch/arm64/include/asm/asm-uaccess.h b/arch/arm64/include/asm/asm-uaccess.h
index b3da6c886835..21b8cf304028 100644
--- a/arch/arm64/include/asm/asm-uaccess.h
+++ b/arch/arm64/include/asm/asm-uaccess.h
@@ -16,11 +16,20 @@
 	add	\tmp1, \tmp1, #SWAPPER_DIR_SIZE	// reserved_ttbr0 at the end of swapper_pg_dir
 	msr	ttbr0_el1, \tmp1		// set reserved TTBR0_EL1
 	isb
+	sub	\tmp1, \tmp1, #SWAPPER_DIR_SIZE
+	bic	\tmp1, \tmp1, #(0xffff << 48)
+	msr	ttbr1_el1, \tmp1		// set reserved ASID
+	isb
 	.endm
 
-	.macro	__uaccess_ttbr0_enable, tmp1
+	.macro	__uaccess_ttbr0_enable, tmp1, tmp2
 	get_thread_info \tmp1
 	ldr	\tmp1, [\tmp1, #TSK_TI_TTBR0]	// load saved TTBR0_EL1
+	mrs	\tmp2, ttbr1_el1
+	extr    \tmp2, \tmp2, \tmp1, #48
+	ror     \tmp2, \tmp2, #16
+	msr	ttbr1_el1, \tmp2		// set the active ASID
+	isb
 	msr	ttbr0_el1, \tmp1		// set the non-PAN TTBR0_EL1
 	isb
 	.endm
@@ -31,18 +40,18 @@ alternative_if_not ARM64_HAS_PAN
 alternative_else_nop_endif
 	.endm
 
-	.macro	uaccess_ttbr0_enable, tmp1, tmp2
+	.macro	uaccess_ttbr0_enable, tmp1, tmp2, tmp3
 alternative_if_not ARM64_HAS_PAN
-	save_and_disable_irq \tmp2		// avoid preemption
-	__uaccess_ttbr0_enable \tmp1
-	restore_irq \tmp2
+	save_and_disable_irq \tmp3		// avoid preemption
+	__uaccess_ttbr0_enable \tmp1, \tmp2
+	restore_irq \tmp3
 alternative_else_nop_endif
 	.endm
 #else
 	.macro	uaccess_ttbr0_disable, tmp1
 	.endm
 
-	.macro	uaccess_ttbr0_enable, tmp1, tmp2
+	.macro	uaccess_ttbr0_enable, tmp1, tmp2, tmp3
 	.endm
 #endif
 
@@ -56,8 +65,8 @@ alternative_if ARM64_ALT_PAN_NOT_UAO
 alternative_else_nop_endif
 	.endm
 
-	.macro	uaccess_enable_not_uao, tmp1, tmp2
-	uaccess_ttbr0_enable \tmp1, \tmp2
+	.macro	uaccess_enable_not_uao, tmp1, tmp2, tmp3
+	uaccess_ttbr0_enable \tmp1, \tmp2, \tmp3
 alternative_if ARM64_ALT_PAN_NOT_UAO
 	SET_PSTATE_PAN(0)
 alternative_else_nop_endif
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index fc0f9eb66039..750a3b76a01c 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -107,15 +107,19 @@ static inline void __uaccess_ttbr0_disable(void)
 {
 	unsigned long ttbr;
 
+	ttbr = read_sysreg(ttbr1_el1);
 	/* reserved_ttbr0 placed at the end of swapper_pg_dir */
-	ttbr = read_sysreg(ttbr1_el1) + SWAPPER_DIR_SIZE;
-	write_sysreg(ttbr, ttbr0_el1);
+	write_sysreg(ttbr + SWAPPER_DIR_SIZE, ttbr0_el1);
+	isb();
+	/* Set reserved ASID */
+	ttbr &= ~(0xffffUL << 48);
+	write_sysreg(ttbr, ttbr1_el1);
 	isb();
 }
 
 static inline void __uaccess_ttbr0_enable(void)
 {
-	unsigned long flags;
+	unsigned long flags, ttbr0, ttbr1;
 
 	/*
 	 * Disable interrupts to avoid preemption between reading the 'ttbr0'
@@ -123,7 +127,16 @@ static inline void __uaccess_ttbr0_enable(void)
 	 * roll-over and an update of 'ttbr0'.
 	 */
 	local_irq_save(flags);
-	write_sysreg(current_thread_info()->ttbr0, ttbr0_el1);
+	ttbr0 = current_thread_info()->ttbr0;
+
+	/* Restore active ASID */
+	ttbr1 = read_sysreg(ttbr1_el1);
+	ttbr1 |= ttbr0 & (0xffffUL << 48);
+	write_sysreg(ttbr1, ttbr1_el1);
+	isb();
+
+	/* Restore user page table */
+	write_sysreg(ttbr0, ttbr0_el1);
 	isb();
 	local_irq_restore(flags);
 }
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 804e43c9cb0b..d454d8ed45e4 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -184,7 +184,7 @@ alternative_if ARM64_HAS_PAN
 alternative_else_nop_endif
 
 	.if	\el != 0
-	mrs	x21, ttbr0_el1
+	mrs	x21, ttbr1_el1
 	tst	x21, #0xffff << 48		// Check for the reserved ASID
 	orr	x23, x23, #PSR_PAN_BIT		// Set the emulated PAN in the saved SPSR
 	b.eq	1f				// TTBR0 access already disabled
@@ -248,7 +248,7 @@ alternative_else_nop_endif
 	tbnz	x22, #22, 1f			// Skip re-enabling TTBR0 access if the PSR_PAN_BIT is set
 	.endif
 
-	__uaccess_ttbr0_enable x0
+	__uaccess_ttbr0_enable x0, x1
 
 	.if	\el == 0
 	/*
diff --git a/arch/arm64/lib/clear_user.S b/arch/arm64/lib/clear_user.S
index e88fb99c1561..8f9c4641e706 100644
--- a/arch/arm64/lib/clear_user.S
+++ b/arch/arm64/lib/clear_user.S
@@ -30,7 +30,7 @@
  * Alignment fixed up by hardware.
  */
 ENTRY(__clear_user)
-	uaccess_enable_not_uao x2, x3
+	uaccess_enable_not_uao x2, x3, x4
 	mov	x2, x1			// save the size for fixup return
 	subs	x1, x1, #8
 	b.mi	2f
diff --git a/arch/arm64/lib/copy_from_user.S b/arch/arm64/lib/copy_from_user.S
index 4b5d826895ff..69d86a80f3e2 100644
--- a/arch/arm64/lib/copy_from_user.S
+++ b/arch/arm64/lib/copy_from_user.S
@@ -64,7 +64,7 @@
 
 end	.req	x5
 ENTRY(__arch_copy_from_user)
-	uaccess_enable_not_uao x3, x4
+	uaccess_enable_not_uao x3, x4, x5
 	add	end, x0, x2
 #include "copy_template.S"
 	uaccess_disable_not_uao x3
diff --git a/arch/arm64/lib/copy_in_user.S b/arch/arm64/lib/copy_in_user.S
index b24a830419ad..e442b531252a 100644
--- a/arch/arm64/lib/copy_in_user.S
+++ b/arch/arm64/lib/copy_in_user.S
@@ -65,7 +65,7 @@
 
 end	.req	x5
 ENTRY(raw_copy_in_user)
-	uaccess_enable_not_uao x3, x4
+	uaccess_enable_not_uao x3, x4, x5
 	add	end, x0, x2
 #include "copy_template.S"
 	uaccess_disable_not_uao x3
diff --git a/arch/arm64/lib/copy_to_user.S b/arch/arm64/lib/copy_to_user.S
index 351f0766f7a6..318f15d5c336 100644
--- a/arch/arm64/lib/copy_to_user.S
+++ b/arch/arm64/lib/copy_to_user.S
@@ -63,7 +63,7 @@
 
 end	.req	x5
 ENTRY(__arch_copy_to_user)
-	uaccess_enable_not_uao x3, x4
+	uaccess_enable_not_uao x3, x4, x5
 	add	end, x0, x2
 #include "copy_template.S"
 	uaccess_disable_not_uao x3
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index 7f1dbe962cf5..6cd20a8c0952 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -49,7 +49,7 @@ ENTRY(flush_icache_range)
  *	- end     - virtual end address of region
  */
 ENTRY(__flush_cache_user_range)
-	uaccess_ttbr0_enable x2, x3
+	uaccess_ttbr0_enable x2, x3, x4
 	dcache_line_size x2, x3
 	sub	x3, x2, #1
 	bic	x4, x0, x3
diff --git a/arch/arm64/xen/hypercall.S b/arch/arm64/xen/hypercall.S
index 401ceb71540c..acdbd2c9e899 100644
--- a/arch/arm64/xen/hypercall.S
+++ b/arch/arm64/xen/hypercall.S
@@ -101,7 +101,7 @@ ENTRY(privcmd_call)
 	 * need the explicit uaccess_enable/disable if the TTBR0 PAN emulation
 	 * is enabled (it implies that hardware UAO and PAN disabled).
 	 */
-	uaccess_ttbr0_enable x6, x7
+	uaccess_ttbr0_enable x6, x7, x8
 	hvc XEN_IMM
 
 	/*
-- 
2.1.4

* [PATCH v2 07/18] arm64: mm: Allocate ASIDs in pairs
From: Will Deacon @ 2017-11-30 16:39 UTC
  To: linux-arm-kernel
  Cc: linux-kernel, catalin.marinas, mark.rutland, ard.biesheuvel,
	sboyd, dave.hansen, keescook, msalter, labbott, tglx,
	Will Deacon

In preparation for separate kernel/user ASIDs, allocate them in pairs
for each mm_struct. The bottom bit distinguishes the two: if it is set,
then the ASID will map only userspace.
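
A standalone sketch of the resulting even/odd pairing (assuming 16-bit
ASIDs; asid2idx/idx2asid mirror the macros added below):

  #include <stdint.h>
  #include <stdio.h>

  #define ASID_BITS 16
  #define ASID_MASK (~0ULL << ASID_BITS)  /* generation bits */

  /* allocator index n <-> kernel ASID 2n; the user ASID is 2n + 1 */
  static uint64_t idx2asid(uint64_t idx)  { return (idx << 1) & ~ASID_MASK; }
  static uint64_t asid2idx(uint64_t asid) { return (asid & ~ASID_MASK) >> 1; }

  int main(void)
  {
          uint64_t kasid = idx2asid(42);

          printf("index 42 -> kernel ASID %llu, user ASID %llu\n",
                 (unsigned long long)kasid,
                 (unsigned long long)(kasid | 1));      /* 84 and 85 */
          printf("back to index %llu\n",
                 (unsigned long long)asid2idx(kasid));  /* 42 */
          return 0;
  }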

Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/mmu.h |  1 +
 arch/arm64/mm/context.c      | 25 +++++++++++++++++--------
 2 files changed, 18 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 0d34bf0a89c7..01bfb184f2a8 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -17,6 +17,7 @@
 #define __ASM_MMU_H
 
 #define MMCF_AARCH32	0x1	/* mm context flag for AArch32 executables */
+#define USER_ASID_FLAG	(UL(1) << 48)
 
 typedef struct {
 	atomic64_t	id;
diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index 78816e476491..db28958d9e4f 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -39,7 +39,16 @@ static cpumask_t tlb_flush_pending;
 
 #define ASID_MASK		(~GENMASK(asid_bits - 1, 0))
 #define ASID_FIRST_VERSION	(1UL << asid_bits)
-#define NUM_USER_ASIDS		ASID_FIRST_VERSION
+
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+#define NUM_USER_ASIDS		(ASID_FIRST_VERSION >> 1)
+#define asid2idx(asid)		(((asid) & ~ASID_MASK) >> 1)
+#define idx2asid(idx)		(((idx) << 1) & ~ASID_MASK)
+#else
+#define NUM_USER_ASIDS		(ASID_FIRST_VERSION)
+#define asid2idx(asid)		((asid) & ~ASID_MASK)
+#define idx2asid(idx)		asid2idx(idx)
+#endif
 
 /* Get the ASIDBits supported by the current CPU */
 static u32 get_cpu_asid_bits(void)
@@ -104,7 +113,7 @@ static void flush_context(unsigned int cpu)
 		 */
 		if (asid == 0)
 			asid = per_cpu(reserved_asids, i);
-		__set_bit(asid & ~ASID_MASK, asid_map);
+		__set_bit(asid2idx(asid), asid_map);
 		per_cpu(reserved_asids, i) = asid;
 	}
 
@@ -156,16 +165,16 @@ static u64 new_context(struct mm_struct *mm, unsigned int cpu)
 		 * We had a valid ASID in a previous life, so try to re-use
 		 * it if possible.
 		 */
-		asid &= ~ASID_MASK;
-		if (!__test_and_set_bit(asid, asid_map))
+		if (!__test_and_set_bit(asid2idx(asid), asid_map))
 			return newasid;
 	}
 
 	/*
 	 * Allocate a free ASID. If we can't find one, take a note of the
-	 * currently active ASIDs and mark the TLBs as requiring flushes.
-	 * We always count from ASID #1, as we use ASID #0 when setting a
-	 * reserved TTBR0 for the init_mm.
+	 * currently active ASIDs and mark the TLBs as requiring flushes.  We
+	 * always count from ASID #2 (index 1), as we use ASID #0 when setting
+	 * a reserved TTBR0 for the init_mm and we allocate ASIDs in even/odd
+	 * pairs.
 	 */
 	asid = find_next_zero_bit(asid_map, NUM_USER_ASIDS, cur_idx);
 	if (asid != NUM_USER_ASIDS)
@@ -182,7 +191,7 @@ static u64 new_context(struct mm_struct *mm, unsigned int cpu)
 set_asid:
 	__set_bit(asid, asid_map);
 	cur_idx = asid;
-	return asid | generation;
+	return idx2asid(asid) | generation;
 }
 
 void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
-- 
2.1.4

* [PATCH v2 08/18] arm64: mm: Add arm64_kernel_unmapped_at_el0 helper
From: Will Deacon @ 2017-11-30 16:39 UTC
  To: linux-arm-kernel
  Cc: linux-kernel, catalin.marinas, mark.rutland, ard.biesheuvel,
	sboyd, dave.hansen, keescook, msalter, labbott, tglx,
	Will Deacon

In order for code such as TLB invalidation to operate efficiently when
the decision whether to map the kernel at EL0 is made at runtime, this
patch introduces a helper function, arm64_kernel_unmapped_at_el0, which
reports whether or not the kernel is mapped whilst running in userspace.

Currently, this just reports the value of CONFIG_UNMAP_KERNEL_AT_EL0,
but will later be hooked up to a fake CPU capability using a static key.
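
For example, the TLB invalidation code later in this series keys off
the helper so that the extra TLBI for the user ASID is only issued when
the kernel is actually unmapped (this macro is lifted from patch 09,
shown here for context):

  #define __tlbi_user(op, arg) do {                               \
          if (arm64_kernel_unmapped_at_el0())                     \
                  __tlbi(op, (arg) | USER_ASID_FLAG);             \
  } while (0)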

Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/mmu.h | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 01bfb184f2a8..c07954638658 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -19,6 +19,8 @@
 #define MMCF_AARCH32	0x1	/* mm context flag for AArch32 executables */
 #define USER_ASID_FLAG	(UL(1) << 48)
 
+#ifndef __ASSEMBLY__
+
 typedef struct {
 	atomic64_t	id;
 	void		*vdso;
@@ -32,6 +34,11 @@ typedef struct {
  */
 #define ASID(mm)	((mm)->context.id.counter & 0xffff)
 
+static inline bool arm64_kernel_unmapped_at_el0(void)
+{
+	return IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0);
+}
+
 extern void paging_init(void);
 extern void bootmem_init(void);
 extern void __iomem *early_io_map(phys_addr_t phys, unsigned long virt);
@@ -42,4 +49,5 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
 extern void *fixmap_remap_fdt(phys_addr_t dt_phys);
 extern void mark_linear_text_alias_ro(void);
 
+#endif	/* !__ASSEMBLY__ */
 #endif
-- 
2.1.4

* [PATCH v2 09/18] arm64: mm: Invalidate both kernel and user ASIDs when performing TLBI
From: Will Deacon @ 2017-11-30 16:39 UTC
  To: linux-arm-kernel
  Cc: linux-kernel, catalin.marinas, mark.rutland, ard.biesheuvel,
	sboyd, dave.hansen, keescook, msalter, labbott, tglx,
	Will Deacon

Since an mm now has both a kernel and a user ASID, we need to ensure
that broadcast TLB maintenance targets both address spaces so that
things like CoW continue to work with the uaccess primitives in the
kernel. Because uaccess walks the user page tables whilst the kernel
ASID is live in TTBR1, TLB entries for a user address can exist under
either ASID of the pair, and both must be invalidated.

Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/tlbflush.h | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index af1c76981911..9e82dd79c7db 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -23,6 +23,7 @@
 
 #include <linux/sched.h>
 #include <asm/cputype.h>
+#include <asm/mmu.h>
 
 /*
  * Raw TLBI operations.
@@ -54,6 +55,11 @@
 
 #define __tlbi(op, ...)		__TLBI_N(op, ##__VA_ARGS__, 1, 0)
 
+#define __tlbi_user(op, arg) do {						\
+	if (arm64_kernel_unmapped_at_el0())					\
+		__tlbi(op, (arg) | USER_ASID_FLAG);				\
+} while (0)
+
 /*
  *	TLB Management
  *	==============
@@ -115,6 +121,7 @@ static inline void flush_tlb_mm(struct mm_struct *mm)
 
 	dsb(ishst);
 	__tlbi(aside1is, asid);
+	__tlbi_user(aside1is, asid);
 	dsb(ish);
 }
 
@@ -125,6 +132,7 @@ static inline void flush_tlb_page(struct vm_area_struct *vma,
 
 	dsb(ishst);
 	__tlbi(vale1is, addr);
+	__tlbi_user(vale1is, addr);
 	dsb(ish);
 }
 
@@ -151,10 +159,13 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
 
 	dsb(ishst);
 	for (addr = start; addr < end; addr += 1 << (PAGE_SHIFT - 12)) {
-		if (last_level)
+		if (last_level) {
 			__tlbi(vale1is, addr);
-		else
+			__tlbi_user(vale1is, addr);
+		} else {
 			__tlbi(vae1is, addr);
+			__tlbi_user(vae1is, addr);
+		}
 	}
 	dsb(ish);
 }
@@ -194,6 +205,7 @@ static inline void __flush_tlb_pgtable(struct mm_struct *mm,
 	unsigned long addr = uaddr >> 12 | (ASID(mm) << 48);
 
 	__tlbi(vae1is, addr);
+	__tlbi_user(vae1is, addr);
 	dsb(ish);
 }
 
-- 
2.1.4

* [PATCH v2 10/18] arm64: entry: Add exception trampoline page for exceptions from EL0
From: Will Deacon @ 2017-11-30 16:39 UTC
  To: linux-arm-kernel
  Cc: linux-kernel, catalin.marinas, mark.rutland, ard.biesheuvel,
	sboyd, dave.hansen, keescook, msalter, labbott, tglx,
	Will Deacon

To allow unmapping of the kernel whilst running at EL0, we need to
point the exception vectors at an entry trampoline that can map/unmap
the kernel on entry/exit respectively.

This patch adds the trampoline page, although it is not yet plugged
into the vector table and is therefore unused.
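
Because tramp_pg_dir is laid out at a fixed offset from swapper_pg_dir
(see the vmlinux.lds.S hunk below), the map/unmap is pure arithmetic on
TTBR1_EL1. A C-like sketch of what tramp_map_kernel does
(tramp_unmap_kernel is the inverse: add the offset back and set the
user ASID bit):

  static void tramp_map_kernel_sketch(void)
  {
          u64 ttbr1 = read_sysreg(ttbr1_el1);     /* -> tramp_pg_dir */

          ttbr1 -= SWAPPER_DIR_SIZE + RESERVED_TTBR0_SIZE; /* -> swapper_pg_dir */
          ttbr1 &= ~USER_ASID_FLAG;               /* kernel (even) ASID */
          write_sysreg(ttbr1, ttbr1_el1);
  }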

Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/kernel/entry.S       | 86 +++++++++++++++++++++++++++++++++++++++++
 arch/arm64/kernel/vmlinux.lds.S | 17 ++++++++
 2 files changed, 103 insertions(+)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index d454d8ed45e4..dea196f287a0 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -28,6 +28,8 @@
 #include <asm/errno.h>
 #include <asm/esr.h>
 #include <asm/irq.h>
+#include <asm/memory.h>
+#include <asm/mmu.h>
 #include <asm/processor.h>
 #include <asm/ptrace.h>
 #include <asm/thread_info.h>
@@ -943,6 +945,90 @@ __ni_sys_trace:
 
 	.popsection				// .entry.text
 
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+/*
+ * Exception vectors trampoline.
+ */
+	.pushsection ".entry.tramp.text", "ax"
+
+	.macro tramp_map_kernel, tmp
+	mrs	\tmp, ttbr1_el1
+	sub	\tmp, \tmp, #(SWAPPER_DIR_SIZE + RESERVED_TTBR0_SIZE)
+	bic	\tmp, \tmp, #USER_ASID_FLAG
+	msr	ttbr1_el1, \tmp
+	.endm
+
+	.macro tramp_unmap_kernel, tmp
+	mrs	\tmp, ttbr1_el1
+	add	\tmp, \tmp, #(SWAPPER_DIR_SIZE + RESERVED_TTBR0_SIZE)
+	orr	\tmp, \tmp, #USER_ASID_FLAG
+	msr	ttbr1_el1, \tmp
+	/*
+	 * We avoid running the post_ttbr_update_workaround here because the
+	 * user and kernel ASIDs don't have conflicting mappings, so any
+	 * "blessing" as described in:
+	 *
+	 *   http://lkml.kernel.org/r/56BB848A.6060603@caviumnetworks.com
+	 *
+	 * will not hurt correctness. Whilst this may partially defeat the
+	 * point of using split ASIDs in the first place, it avoids
+	 * the hit of invalidating the entire I-cache on every return to
+	 * userspace.
+	 */
+	.endm
+
+	.macro tramp_ventry, regsize = 64
+	.align	7
+1:
+	.if	\regsize == 64
+	msr	tpidrro_el0, x30
+	.endif
+	tramp_map_kernel	x30
+	ldr	x30, =vectors
+	prfm	plil1strm, [x30, #(1b - tramp_vectors)]
+	msr	vbar_el1, x30
+	add	x30, x30, #(1b - tramp_vectors)
+	isb
+	br	x30
+	.endm
+
+	.macro tramp_exit, regsize = 64
+	adr	x30, tramp_vectors
+	msr	vbar_el1, x30
+	tramp_unmap_kernel	x30
+	.if	\regsize == 64
+	mrs	x30, far_el1
+	.endif
+	eret
+	.endm
+
+	.align	11
+ENTRY(tramp_vectors)
+	.space	0x400
+
+	tramp_ventry
+	tramp_ventry
+	tramp_ventry
+	tramp_ventry
+
+	tramp_ventry	32
+	tramp_ventry	32
+	tramp_ventry	32
+	tramp_ventry	32
+END(tramp_vectors)
+
+ENTRY(tramp_exit_native)
+	tramp_exit
+END(tramp_exit_native)
+
+ENTRY(tramp_exit_compat)
+	tramp_exit	32
+END(tramp_exit_compat)
+
+	.ltorg
+	.popsection				// .entry.tramp.text
+#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
+
 /*
  * Special system call wrappers.
  */
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 7da3e5c366a0..6b4260f22aab 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -57,6 +57,17 @@ jiffies = jiffies_64;
 #define HIBERNATE_TEXT
 #endif
 
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+#define TRAMP_TEXT					\
+	. = ALIGN(PAGE_SIZE);				\
+	VMLINUX_SYMBOL(__entry_tramp_text_start) = .;	\
+	*(.entry.tramp.text)				\
+	. = ALIGN(PAGE_SIZE);				\
+	VMLINUX_SYMBOL(__entry_tramp_text_end) = .;
+#else
+#define TRAMP_TEXT
+#endif
+
 /*
  * The size of the PE/COFF section that covers the kernel image, which
  * runs from stext to _edata, must be a round multiple of the PE/COFF
@@ -113,6 +124,7 @@ SECTIONS
 			HYPERVISOR_TEXT
 			IDMAP_TEXT
 			HIBERNATE_TEXT
+			TRAMP_TEXT
 			*(.fixup)
 			*(.gnu.warning)
 		. = ALIGN(16);
@@ -214,6 +226,11 @@ SECTIONS
 	. += RESERVED_TTBR0_SIZE;
 #endif
 
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+	tramp_pg_dir = .;
+	. += PAGE_SIZE;
+#endif
+
 	__pecoff_data_size = ABSOLUTE(. - __initdata_begin);
 	_end = .;
 
-- 
2.1.4

* [PATCH v2 11/18] arm64: mm: Map entry trampoline into trampoline and kernel page tables
From: Will Deacon @ 2017-11-30 16:39 UTC
  To: linux-arm-kernel
  Cc: linux-kernel, catalin.marinas, mark.rutland, ard.biesheuvel,
	sboyd, dave.hansen, keescook, msalter, labbott, tglx,
	Will Deacon

The exception entry trampoline needs to be mapped at the same virtual
address in both the trampoline page table (which maps nothing else)
and also the kernel page table, so that we can swizzle TTBR1_EL1 on
exceptions from and return to EL0.

This patch maps the trampoline at a fixed virtual address in the fixmap
area of the kernel virtual address space, which allows the kernel proper
to be randomized with respect to the trampoline when KASLR is enabled.
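
Fixmap slots sit at build-time-constant virtual addresses (they grow
down from FIXADDR_TOP), so the trampoline ends up at the same VA no
matter where KASLR places the kernel image. Roughly, following the
generic fixmap convention (sketch):

  #define __fix_to_virt(x)  (FIXADDR_TOP - ((x) << PAGE_SHIFT))
  #define TRAMP_VALIAS      __fix_to_virt(FIX_ENTRY_TRAMP_TEXT)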

Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/fixmap.h  |  4 ++++
 arch/arm64/include/asm/pgtable.h |  1 +
 arch/arm64/kernel/asm-offsets.c  |  6 +++++-
 arch/arm64/mm/mmu.c              | 23 +++++++++++++++++++++++
 4 files changed, 33 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/fixmap.h b/arch/arm64/include/asm/fixmap.h
index 4052ec39e8db..8119b49be98d 100644
--- a/arch/arm64/include/asm/fixmap.h
+++ b/arch/arm64/include/asm/fixmap.h
@@ -58,6 +58,10 @@ enum fixed_addresses {
 	FIX_APEI_GHES_NMI,
 #endif /* CONFIG_ACPI_APEI_GHES */
 
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+	FIX_ENTRY_TRAMP_TEXT,
+#define TRAMP_VALIAS		(__fix_to_virt(FIX_ENTRY_TRAMP_TEXT))
+#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
 	__end_of_permanent_fixed_addresses,
 
 	/*
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index c9530b5b5ca8..c8f56b2ca414 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -681,6 +681,7 @@ static inline void pmdp_set_wrprotect(struct mm_struct *mm,
 
 extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
 extern pgd_t idmap_pg_dir[PTRS_PER_PGD];
+extern pgd_t tramp_pg_dir[PTRS_PER_PGD];
 
 /*
  * Encode and decode a swap entry:
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 71bf088f1e4b..af247d10252f 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -24,6 +24,7 @@
 #include <linux/kvm_host.h>
 #include <linux/suspend.h>
 #include <asm/cpufeature.h>
+#include <asm/fixmap.h>
 #include <asm/thread_info.h>
 #include <asm/memory.h>
 #include <asm/smp_plat.h>
@@ -148,11 +149,14 @@ int main(void)
   DEFINE(ARM_SMCCC_RES_X2_OFFS,		offsetof(struct arm_smccc_res, a2));
   DEFINE(ARM_SMCCC_QUIRK_ID_OFFS,	offsetof(struct arm_smccc_quirk, id));
   DEFINE(ARM_SMCCC_QUIRK_STATE_OFFS,	offsetof(struct arm_smccc_quirk, state));
-
   BLANK();
   DEFINE(HIBERN_PBE_ORIG,	offsetof(struct pbe, orig_address));
   DEFINE(HIBERN_PBE_ADDR,	offsetof(struct pbe, address));
   DEFINE(HIBERN_PBE_NEXT,	offsetof(struct pbe, next));
   DEFINE(ARM64_FTR_SYSVAL,	offsetof(struct arm64_ftr_reg, sys_val));
+  BLANK();
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+  DEFINE(TRAMP_VALIAS,		TRAMP_VALIAS);
+#endif
   return 0;
 }
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 267d2b79d52d..c2622525c4d6 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -525,6 +525,29 @@ static int __init parse_rodata(char *arg)
 }
 early_param("rodata", parse_rodata);
 
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+static int __init map_entry_trampoline(void)
+{
+	extern char __entry_tramp_text_start[];
+
+	pgprot_t prot = rodata_enabled ? PAGE_KERNEL_ROX : PAGE_KERNEL_EXEC;
+	phys_addr_t pa_start = __pa_symbol(__entry_tramp_text_start);
+
+	/* The trampoline is always mapped and can therefore be global */
+	pgprot_val(prot) &= ~PTE_NG;
+
+	/* Map only the text into the trampoline page table */
+	memset((char *)tramp_pg_dir, 0, PGD_SIZE);
+	__create_pgd_mapping(tramp_pg_dir, pa_start, TRAMP_VALIAS, PAGE_SIZE,
+			     prot, pgd_pgtable_alloc, 0);
+
+	/* ...as well as the kernel page table */
+	__set_fixmap(FIX_ENTRY_TRAMP_TEXT, pa_start, prot);
+	return 0;
+}
+core_initcall(map_entry_trampoline);
+#endif
+
 /*
  * Create fine-grained mappings for the kernel.
  */
-- 
2.1.4

* [PATCH v2 12/18] arm64: entry: Explicitly pass exception level to kernel_ventry macro
From: Will Deacon @ 2017-11-30 16:39 UTC
  To: linux-arm-kernel
  Cc: linux-kernel, catalin.marinas, mark.rutland, ard.biesheuvel,
	sboyd, dave.hansen, keescook, msalter, labbott, tglx,
	Will Deacon

We will need to treat exceptions from EL0 differently in kernel_ventry,
so rework the macro to take the exception level as an argument and
construct the branch target using that. For example, "kernel_ventry 0,
sync" now branches to el0_sync.

Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/kernel/entry.S | 46 +++++++++++++++++++++++-----------------------
 1 file changed, 23 insertions(+), 23 deletions(-)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index dea196f287a0..688e52f65a8d 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -71,7 +71,7 @@
 #define BAD_FIQ		2
 #define BAD_ERROR	3
 
-	.macro kernel_ventry	label
+	.macro kernel_ventry, el, label, regsize = 64
 	.align 7
 	sub	sp, sp, #S_FRAME_SIZE
 #ifdef CONFIG_VMAP_STACK
@@ -84,7 +84,7 @@
 	tbnz	x0, #THREAD_SHIFT, 0f
 	sub	x0, sp, x0			// x0'' = sp' - x0' = (sp + x0) - sp = x0
 	sub	sp, sp, x0			// sp'' = sp' - x0 = (sp + x0) - x0 = sp
-	b	\label
+	b	el\()\el\()_\label
 
 0:
 	/*
@@ -116,7 +116,7 @@
 	sub	sp, sp, x0
 	mrs	x0, tpidrro_el0
 #endif
-	b	\label
+	b	el\()\el\()_\label
 	.endm
 
 	.macro	kernel_entry, el, regsize = 64
@@ -369,31 +369,31 @@ tsk	.req	x28		// current thread_info
 
 	.align	11
 ENTRY(vectors)
-	kernel_ventry	el1_sync_invalid		// Synchronous EL1t
-	kernel_ventry	el1_irq_invalid			// IRQ EL1t
-	kernel_ventry	el1_fiq_invalid			// FIQ EL1t
-	kernel_ventry	el1_error_invalid		// Error EL1t
+	kernel_ventry	1, sync_invalid			// Synchronous EL1t
+	kernel_ventry	1, irq_invalid			// IRQ EL1t
+	kernel_ventry	1, fiq_invalid			// FIQ EL1t
+	kernel_ventry	1, error_invalid		// Error EL1t
 
-	kernel_ventry	el1_sync			// Synchronous EL1h
-	kernel_ventry	el1_irq				// IRQ EL1h
-	kernel_ventry	el1_fiq_invalid			// FIQ EL1h
-	kernel_ventry	el1_error			// Error EL1h
+	kernel_ventry	1, sync				// Synchronous EL1h
+	kernel_ventry	1, irq				// IRQ EL1h
+	kernel_ventry	1, fiq_invalid			// FIQ EL1h
+	kernel_ventry	1, error			// Error EL1h
 
-	kernel_ventry	el0_sync			// Synchronous 64-bit EL0
-	kernel_ventry	el0_irq				// IRQ 64-bit EL0
-	kernel_ventry	el0_fiq_invalid			// FIQ 64-bit EL0
-	kernel_ventry	el0_error			// Error 64-bit EL0
+	kernel_ventry	0, sync				// Synchronous 64-bit EL0
+	kernel_ventry	0, irq				// IRQ 64-bit EL0
+	kernel_ventry	0, fiq_invalid			// FIQ 64-bit EL0
+	kernel_ventry	0, error			// Error 64-bit EL0
 
 #ifdef CONFIG_COMPAT
-	kernel_ventry	el0_sync_compat			// Synchronous 32-bit EL0
-	kernel_ventry	el0_irq_compat			// IRQ 32-bit EL0
-	kernel_ventry	el0_fiq_invalid_compat		// FIQ 32-bit EL0
-	kernel_ventry	el0_error_compat		// Error 32-bit EL0
+	kernel_ventry	0, sync_compat, 32		// Synchronous 32-bit EL0
+	kernel_ventry	0, irq_compat, 32		// IRQ 32-bit EL0
+	kernel_ventry	0, fiq_invalid_compat, 32	// FIQ 32-bit EL0
+	kernel_ventry	0, error_compat, 32		// Error 32-bit EL0
 #else
-	kernel_ventry	el0_sync_invalid		// Synchronous 32-bit EL0
-	kernel_ventry	el0_irq_invalid			// IRQ 32-bit EL0
-	kernel_ventry	el0_fiq_invalid			// FIQ 32-bit EL0
-	kernel_ventry	el0_error_invalid		// Error 32-bit EL0
+	kernel_ventry	0, sync_invalid, 32		// Synchronous 32-bit EL0
+	kernel_ventry	0, irq_invalid, 32		// IRQ 32-bit EL0
+	kernel_ventry	0, fiq_invalid, 32		// FIQ 32-bit EL0
+	kernel_ventry	0, error_invalid, 32		// Error 32-bit EL0
 #endif
 END(vectors)
 
-- 
2.1.4

* [PATCH v2 13/18] arm64: entry: Hook up entry trampoline to exception vectors
From: Will Deacon @ 2017-11-30 16:39 UTC
  To: linux-arm-kernel
  Cc: linux-kernel, catalin.marinas, mark.rutland, ard.biesheuvel,
	sboyd, dave.hansen, keescook, msalter, labbott, tglx,
	Will Deacon

Hook up the entry trampoline to our exception vectors so that all
exceptions from and returns to EL0 go via the trampoline, which swizzles
the vector base register accordingly. Transitioning to and from the
kernel clobbers x30, so we use tpidrro_el0 and far_el1 as scratch
registers for native tasks.

Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/kernel/entry.S | 39 ++++++++++++++++++++++++++++++++++++---
 1 file changed, 36 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 688e52f65a8d..99d105048663 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -73,6 +73,17 @@
 
 	.macro kernel_ventry, el, label, regsize = 64
 	.align 7
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+	.if	\el == 0
+	.if	\regsize == 64
+	mrs	x30, tpidrro_el0
+	msr	tpidrro_el0, xzr
+	.else
+	mov	x30, xzr
+	.endif
+	.endif
+#endif
+
 	sub	sp, sp, #S_FRAME_SIZE
 #ifdef CONFIG_VMAP_STACK
 	/*
@@ -119,6 +130,11 @@
 	b	el\()\el\()_\label
 	.endm
 
+	.macro tramp_alias, dst, sym
+	mov_q	\dst, TRAMP_VALIAS
+	add	\dst, \dst, #(\sym - .entry.tramp.text)
+	.endm
+
 	.macro	kernel_entry, el, regsize = 64
 	.if	\regsize == 32
 	mov	w0, w0				// zero upper 32 bits of x0
@@ -271,18 +287,20 @@ alternative_else_nop_endif
 	.if	\el == 0
 	ldr	x23, [sp, #S_SP]		// load return stack pointer
 	msr	sp_el0, x23
+	tst	x22, #PSR_MODE32_BIT		// native task?
+	b.eq	3f
+
 #ifdef CONFIG_ARM64_ERRATUM_845719
 alternative_if ARM64_WORKAROUND_845719
-	tbz	x22, #4, 1f
 #ifdef CONFIG_PID_IN_CONTEXTIDR
 	mrs	x29, contextidr_el1
 	msr	contextidr_el1, x29
 #else
 	msr contextidr_el1, xzr
 #endif
-1:
 alternative_else_nop_endif
 #endif
+3:
 	.endif
 
 	msr	elr_el1, x21			// set up the return data
@@ -304,7 +322,22 @@ alternative_else_nop_endif
 	ldp	x28, x29, [sp, #16 * 14]
 	ldr	lr, [sp, #S_LR]
 	add	sp, sp, #S_FRAME_SIZE		// restore sp
-	eret					// return to kernel
+
+#ifndef CONFIG_UNMAP_KERNEL_AT_EL0
+	eret
+#else
+	.if	\el == 0
+	bne	4f
+	msr	far_el1, x30
+	tramp_alias	x30, tramp_exit_native
+	br	x30
+4:
+	tramp_alias	x30, tramp_exit_compat
+	br	x30
+	.else
+	eret
+	.endif
+#endif
 	.endm
 
 	.macro	irq_stack_entry
-- 
2.1.4

* [PATCH v2 14/18] arm64: erratum: Work around Falkor erratum #E1003 in trampoline code
From: Will Deacon @ 2017-11-30 16:39 UTC
  To: linux-arm-kernel
  Cc: linux-kernel, catalin.marinas, mark.rutland, ard.biesheuvel,
	sboyd, dave.hansen, keescook, msalter, labbott, tglx,
	Will Deacon

We rely on an atomic swizzling of TTBR1 when transitioning from the entry
trampoline to the kernel proper on an exception. We can't rely on this
atomicity in the face of Falkor erratum #E1003, so on affected cores we
can issue a TLB invalidation to invalidate the walk cache prior to
jumping into the kernel. There is still the possibility of a TLB conflict
here due to conflicting walk cache entries prior to the invalidation, but
this doesn't appear to be the case on these CPUs in practice.
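
Conceptually, the sequence added to tramp_map_kernel below amounts to
the following (a C sketch for exposition only; tlbi_vae1() and
dsb_nsh() are assumed stand-ins for the TLBI VAE1 and DSB NSH
instructions, and vaddr is a page within the 2MB region covering the
trampoline):

	static void falkor_e1003_workaround(unsigned long ttbr,
					    unsigned long vaddr)
	{
		unsigned long op;

		/* The ASID stays in bits [63:48] of the TLBI operand. */
		op = ttbr & (0xffffUL << 48);

		/* VA[55:12] of the page to invalidate goes in the low
		 * bits, per the TLBI VAE1 operand encoding. */
		op |= (vaddr >> 12) & ((1UL << 44) - 1);

		isb();		/* complete the TTBR1_EL1 write first */
		tlbi_vae1(op);	/* invalidate by VA at EL1 */
		dsb_nsh();	/* wait for the invalidation */
	}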

Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/Kconfig        | 17 +++++------------
 arch/arm64/kernel/entry.S | 10 ++++++++++
 2 files changed, 15 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index a93339f5178f..fdcc7b9bb15d 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -522,20 +522,13 @@ config CAVIUM_ERRATUM_30115
 config QCOM_FALKOR_ERRATUM_1003
 	bool "Falkor E1003: Incorrect translation due to ASID change"
 	default y
-	select ARM64_PAN if ARM64_SW_TTBR0_PAN
 	help
 	  On Falkor v1, an incorrect ASID may be cached in the TLB when ASID
-	  and BADDR are changed together in TTBRx_EL1. The workaround for this
-	  issue is to use a reserved ASID in cpu_do_switch_mm() before
-	  switching to the new ASID. Saying Y here selects ARM64_PAN if
-	  ARM64_SW_TTBR0_PAN is selected. This is done because implementing and
-	  maintaining the E1003 workaround in the software PAN emulation code
-	  would be an unnecessary complication. The affected Falkor v1 CPU
-	  implements ARMv8.1 hardware PAN support and using hardware PAN
-	  support versus software PAN emulation is mutually exclusive at
-	  runtime.
-
-	  If unsure, say Y.
+	  and BADDR are changed together in TTBRx_EL1. Since we keep the ASID
+	  in TTBR1_EL1, this situation only occurs in the entry trampoline and
+	  then only for entries in the walk cache, since the leaf translation
+	  is unchanged. Work around the erratum by invalidating the walk cache
+	  entries for the trampoline before entering the kernel proper.
 
 config QCOM_FALKOR_ERRATUM_1009
 	bool "Falkor E1009: Prematurely complete a DSB after a TLBI"
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 99d105048663..a5ec6ab5c711 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -989,6 +989,16 @@ __ni_sys_trace:
 	sub	\tmp, \tmp, #(SWAPPER_DIR_SIZE + RESERVED_TTBR0_SIZE)
 	bic	\tmp, \tmp, #USER_ASID_FLAG
 	msr	ttbr1_el1, \tmp
+#ifdef CONFIG_QCOM_FALKOR_ERRATUM_1003
+alternative_if ARM64_WORKAROUND_QCOM_FALKOR_E1003
+	movk	\tmp, #:abs_g2_nc:(TRAMP_VALIAS >> 12)
+	movk	\tmp, #:abs_g1_nc:(TRAMP_VALIAS >> 12)
+	movk	\tmp, #:abs_g0_nc:((TRAMP_VALIAS & (SZ_2M - 1)) >> 12)
+	isb
+	tlbi	vae1, \tmp
+	dsb	nsh
+alternative_else_nop_endif
+#endif /* CONFIG_QCOM_FALKOR_ERRATUM_1003 */
 	.endm
 
 	.macro tramp_unmap_kernel, tmp
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH v2 15/18] arm64: tls: Avoid unconditional zeroing of tpidrro_el0 for native tasks
  2017-11-30 16:39 [PATCH v2 00/18] arm64: Unmap the kernel whilst running in userspace (KAISER) Will Deacon
                   ` (13 preceding siblings ...)
  2017-11-30 16:39 ` [PATCH v2 14/18] arm64: erratum: Work around Falkor erratum #E1003 in trampoline code Will Deacon
@ 2017-11-30 16:39 ` Will Deacon
  2017-11-30 16:39 ` [PATCH v2 16/18] arm64: entry: Add fake CPU feature for unmapping the kernel at EL0 Will Deacon
                   ` (4 subsequent siblings)
  19 siblings, 0 replies; 44+ messages in thread
From: Will Deacon @ 2017-11-30 16:39 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: linux-kernel, catalin.marinas, mark.rutland, ard.biesheuvel,
	sboyd, dave.hansen, keescook, msalter, labbott, tglx,
	Will Deacon

When unmapping the kernel at EL0, we use tpidrro_el0 as a scratch register
during exception entry from native tasks and subsequently zero it in
the kernel_ventry macro. We can therefore avoid zeroing tpidrro_el0
in the context-switch path for native tasks using the entry trampoline.

Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/kernel/process.c | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index b2adcce7bc18..aba3a1fb492d 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -361,16 +361,14 @@ void tls_preserve_current_state(void)
 
 static void tls_thread_switch(struct task_struct *next)
 {
-	unsigned long tpidr, tpidrro;
-
 	tls_preserve_current_state();
 
-	tpidr = *task_user_tls(next);
-	tpidrro = is_compat_thread(task_thread_info(next)) ?
-		  next->thread.tp_value : 0;
+	if (is_compat_thread(task_thread_info(next)))
+		write_sysreg(next->thread.tp_value, tpidrro_el0);
+	else if (!arm64_kernel_unmapped_at_el0())
+		write_sysreg(0, tpidrro_el0);
 
-	write_sysreg(tpidr, tpidr_el0);
-	write_sysreg(tpidrro, tpidrro_el0);
+	write_sysreg(*task_user_tls(next), tpidr_el0);
 }
 
 /* Restore the UAO state depending on next's addr_limit */
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH v2 16/18] arm64: entry: Add fake CPU feature for unmapping the kernel at EL0
  2017-11-30 16:39 [PATCH v2 00/18] arm64: Unmap the kernel whilst running in userspace (KAISER) Will Deacon
                   ` (14 preceding siblings ...)
  2017-11-30 16:39 ` [PATCH v2 15/18] arm64: tls: Avoid unconditional zeroing of tpidrro_el0 for native tasks Will Deacon
@ 2017-11-30 16:39 ` Will Deacon
  2017-12-01 13:55   ` Mark Rutland
  2017-11-30 16:39 ` [PATCH v2 17/18] arm64: Kconfig: Add CONFIG_UNMAP_KERNEL_AT_EL0 Will Deacon
                   ` (3 subsequent siblings)
  19 siblings, 1 reply; 44+ messages in thread
From: Will Deacon @ 2017-11-30 16:39 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: linux-kernel, catalin.marinas, mark.rutland, ard.biesheuvel,
	sboyd, dave.hansen, keescook, msalter, labbott, tglx,
	Will Deacon

Allow explicit disabling of the entry trampoline on the kernel command
line (kaiser=off) by adding a fake CPU feature (ARM64_UNMAP_KERNEL_AT_EL0)
that can be used to toggle the alternative sequences in our entry code and
avoid use of the trampoline altogether if desired. This also allows us to
make use of a static key in arm64_kernel_unmapped_at_el0().
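
For example (assuming the option keeps this name), booting with:

	kaiser=off

forces the capability off even when CONFIG_RANDOMIZE_BASE would
otherwise enable it, whilst kaiser=on forces it on regardless of
KASLR; any value accepted by strtobool() works.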

Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/cpucaps.h |  3 ++-
 arch/arm64/include/asm/mmu.h     |  3 ++-
 arch/arm64/kernel/cpufeature.c   | 41 ++++++++++++++++++++++++++++++++++++++++
 arch/arm64/kernel/entry.S        | 11 +++++++----
 4 files changed, 52 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index 2ff7c5e8efab..b4537ffd1018 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -41,7 +41,8 @@
 #define ARM64_WORKAROUND_CAVIUM_30115		20
 #define ARM64_HAS_DCPOP				21
 #define ARM64_SVE				22
+#define ARM64_UNMAP_KERNEL_AT_EL0		23
 
-#define ARM64_NCAPS				23
+#define ARM64_NCAPS				24
 
 #endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index c07954638658..da6f12e40714 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -36,7 +36,8 @@ typedef struct {
 
 static inline bool arm64_kernel_unmapped_at_el0(void)
 {
-	return IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0);
+	return IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0) &&
+	       cpus_have_const_cap(ARM64_UNMAP_KERNEL_AT_EL0);
 }
 
 extern void paging_init(void);
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index c5ba0097887f..72fc55d22ddb 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -845,6 +845,40 @@ static bool has_no_fpsimd(const struct arm64_cpu_capabilities *entry, int __unus
 					ID_AA64PFR0_FP_SHIFT) < 0;
 }
 
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+static int __kaiser_forced; /* 0: not forced, >0: forced on, <0: forced off */
+
+static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
+				int __unused)
+{
+	/* Forced on command line? */
+	if (__kaiser_forced) {
+		pr_info("KAISER forced %s by command line option\n",
+			__kaiser_forced > 0 ? "ON" : "OFF");
+		return __kaiser_forced > 0;
+	}
+
+	/* Useful for KASLR robustness */
+	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE))
+		return true;
+
+	return false;
+}
+
+static int __init parse_kaiser(char *str)
+{
+	bool enabled;
+	int ret = strtobool(str, &enabled);
+
+	if (ret)
+		return ret;
+
+	__kaiser_forced = enabled ? 1 : -1;
+	return 0;
+}
+__setup("kaiser=", parse_kaiser);
+#endif	/* CONFIG_UNMAP_KERNEL_AT_EL0 */
+
 static const struct arm64_cpu_capabilities arm64_features[] = {
 	{
 		.desc = "GIC system register CPU interface",
@@ -931,6 +965,13 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.def_scope = SCOPE_SYSTEM,
 		.matches = hyp_offset_low,
 	},
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+	{
+		.capability = ARM64_UNMAP_KERNEL_AT_EL0,
+		.def_scope = SCOPE_SYSTEM,
+		.matches = unmap_kernel_at_el0,
+	},
+#endif
 	{
 		/* FP/SIMD is not implemented */
 		.capability = ARM64_HAS_NO_FPSIMD,
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index a5ec6ab5c711..d8775f55e930 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -74,6 +74,7 @@
 	.macro kernel_ventry, el, label, regsize = 64
 	.align 7
 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+alternative_if ARM64_UNMAP_KERNEL_AT_EL0
 	.if	\el == 0
 	.if	\regsize == 64
 	mrs	x30, tpidrro_el0
@@ -82,6 +83,7 @@
 	mov	x30, xzr
 	.endif
 	.endif
+alternative_else_nop_endif
 #endif
 
 	sub	sp, sp, #S_FRAME_SIZE
@@ -323,10 +325,10 @@ alternative_else_nop_endif
 	ldr	lr, [sp, #S_LR]
 	add	sp, sp, #S_FRAME_SIZE		// restore sp
 
-#ifndef CONFIG_UNMAP_KERNEL_AT_EL0
-	eret
-#else
 	.if	\el == 0
+alternative_insn eret, nop, ARM64_UNMAP_KERNEL_AT_EL0
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+alternative_if ARM64_UNMAP_KERNEL_AT_EL0
 	bne	4f
 	msr	far_el1, x30
 	tramp_alias	x30, tramp_exit_native
@@ -334,10 +336,11 @@ alternative_else_nop_endif
 4:
 	tramp_alias	x30, tramp_exit_compat
 	br	x30
+alternative_else_nop_endif
+#endif
 	.else
 	eret
 	.endif
-#endif
 	.endm
 
 	.macro	irq_stack_entry
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH v2 17/18] arm64: Kconfig: Add CONFIG_UNMAP_KERNEL_AT_EL0
  2017-11-30 16:39 [PATCH v2 00/18] arm64: Unmap the kernel whilst running in userspace (KAISER) Will Deacon
                   ` (15 preceding siblings ...)
  2017-11-30 16:39 ` [PATCH v2 16/18] arm64: entry: Add fake CPU feature for unmapping the kernel at EL0 Will Deacon
@ 2017-11-30 16:39 ` Will Deacon
  2017-12-12  8:44   ` Geert Uytterhoeven
  2017-11-30 16:39 ` [PATCH v2 18/18] perf: arm_spe: Disallow userspace profiling when arm_kernel_unmapped_at_el0() Will Deacon
                   ` (2 subsequent siblings)
  19 siblings, 1 reply; 44+ messages in thread
From: Will Deacon @ 2017-11-30 16:39 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: linux-kernel, catalin.marinas, mark.rutland, ard.biesheuvel,
	sboyd, dave.hansen, keescook, msalter, labbott, tglx,
	Will Deacon

Add a Kconfig entry to control use of the entry trampoline, which allows
us to unmap the kernel whilst running in userspace and improve the
robustness of KASLR.

Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/Kconfig | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index fdcc7b9bb15d..3af1657fcac3 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -833,6 +833,19 @@ config FORCE_MAX_ZONEORDER
 	  However for 4K, we choose a higher default value, 11 as opposed to 10, giving us
 	  4M allocations matching the default size used by generic code.
 
+config UNMAP_KERNEL_AT_EL0
+	bool "Unmap kernel when running in userspace (aka \"KAISER\")"
+	default y
+	help
+	  Some attacks against KASLR make use of the timing difference between
+	  a permission fault which could arise from a page table entry that is
+	  present in the TLB, and a translation fault which always requires a
+	  page table walk. This option defends against these attacks by unmapping
+	  the kernel whilst running in userspace, therefore forcing translation
+	  faults for all of kernel space.
+
+	  If unsure, say Y.
+
 menuconfig ARMV8_DEPRECATED
 	bool "Emulate deprecated/obsolete ARMv8 instructions"
 	depends on COMPAT
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH v2 18/18] perf: arm_spe: Disallow userspace profiling when arm_kernel_unmapped_at_el0()
  2017-11-30 16:39 [PATCH v2 00/18] arm64: Unmap the kernel whilst running in userspace (KAISER) Will Deacon
                   ` (16 preceding siblings ...)
  2017-11-30 16:39 ` [PATCH v2 17/18] arm64: Kconfig: Add CONFIG_UNMAP_KERNEL_AT_EL0 Will Deacon
@ 2017-11-30 16:39 ` Will Deacon
  2017-12-01 12:15   ` Mark Rutland
  2017-12-01 16:26   ` Stephen Boyd
  2017-12-01 14:04 ` [PATCH v2 00/18] arm64: Unmap the kernel whilst running in userspace (KAISER) Mark Rutland
  2017-12-04 23:47 ` Laura Abbott
  19 siblings, 2 replies; 44+ messages in thread
From: Will Deacon @ 2017-11-30 16:39 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: linux-kernel, catalin.marinas, mark.rutland, ard.biesheuvel,
	sboyd, dave.hansen, keescook, msalter, labbott, tglx,
	Will Deacon

When running with the kernel unmapped whilst at EL0, the virtually-addressed
SPE buffer is also unmapped, which can lead to buffer faults if userspace
profiling is enabled.

This patch prohibits SPE profiling of userspace when
arm_kernel_unmapped_at_el0().
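
For example, a userspace-only session such as the following (assuming
the PMU probes as arm_spe_0) leaves attr->exclude_user clear and so
hits the new check whenever the kernel is unmapped at EL0:

	perf record -e arm_spe_0//u -- ./workload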

Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 drivers/perf/arm_spe_pmu.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/perf/arm_spe_pmu.c b/drivers/perf/arm_spe_pmu.c
index 8ce262fc2561..c028db8973a4 100644
--- a/drivers/perf/arm_spe_pmu.c
+++ b/drivers/perf/arm_spe_pmu.c
@@ -675,6 +675,13 @@ static int arm_spe_pmu_event_init(struct perf_event *event)
 		return -EOPNOTSUPP;
 
 	/*
+	 * If kernelspace is unmapped when running at EL0, then the SPE
+	 * buffer will fault and prematurely terminate the AUX session.
+	 */
+	if (arm64_kernel_unmapped_at_el0() && !attr->exclude_user)
+		dev_warn_once(&spe_pmu->pdev->dev, "unable to write to profiling buffer from EL0. Try passing \"kaiser=off\" on the kernel command line");
+
+	/*
 	 * Feedback-directed frequency throttling doesn't work when we
 	 * have a buffer of samples. We'd need to manually count the
 	 * samples in the buffer when it fills up and adjust the event
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 44+ messages in thread

* Re: [PATCH v2 14/18] arm64: erratum: Work around Falkor erratum #E1003 in trampoline code
  2017-11-30 16:39 ` [PATCH v2 14/18] arm64: erratum: Work around Falkor erratum #E1003 in trampoline code Will Deacon
@ 2017-11-30 17:06   ` Robin Murphy
  2017-11-30 17:19     ` Will Deacon
  0 siblings, 1 reply; 44+ messages in thread
From: Robin Murphy @ 2017-11-30 17:06 UTC (permalink / raw)
  To: Will Deacon, linux-arm-kernel
  Cc: mark.rutland, keescook, ard.biesheuvel, catalin.marinas,
	dave.hansen, sboyd, linux-kernel, msalter, tglx, labbott

Hi Will,

On 30/11/17 16:39, Will Deacon wrote:
> We rely on an atomic swizzling of TTBR1 when transitioning from the entry
> trampoline to the kernel proper on an exception. We can't rely on this
> atomicity in the face of Falkor erratum #E1003, so on affected cores we
> can issue a TLB invalidation to invalidate the walk cache prior to
> jumping into the kernel. There is still the possibility of a TLB conflict
> here due to conflicting walk cache entries prior to the invalidation, but
> this doesn't appear to be the case on these CPUs in practice.
> 
> Signed-off-by: Will Deacon <will.deacon@arm.com>
> ---
>   arch/arm64/Kconfig        | 17 +++++------------
>   arch/arm64/kernel/entry.S | 10 ++++++++++
>   2 files changed, 15 insertions(+), 12 deletions(-)
> 
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index a93339f5178f..fdcc7b9bb15d 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -522,20 +522,13 @@ config CAVIUM_ERRATUM_30115
>   config QCOM_FALKOR_ERRATUM_1003
>   	bool "Falkor E1003: Incorrect translation due to ASID change"
>   	default y
> -	select ARM64_PAN if ARM64_SW_TTBR0_PAN
>   	help
>   	  On Falkor v1, an incorrect ASID may be cached in the TLB when ASID
> -	  and BADDR are changed together in TTBRx_EL1. The workaround for this
> -	  issue is to use a reserved ASID in cpu_do_switch_mm() before
> -	  switching to the new ASID. Saying Y here selects ARM64_PAN if
> -	  ARM64_SW_TTBR0_PAN is selected. This is done because implementing and
> -	  maintaining the E1003 workaround in the software PAN emulation code
> -	  would be an unnecessary complication. The affected Falkor v1 CPU
> -	  implements ARMv8.1 hardware PAN support and using hardware PAN
> -	  support versus software PAN emulation is mutually exclusive at
> -	  runtime.
> -
> -	  If unsure, say Y.
> +	  and BADDR are changed together in TTBRx_EL1. Since we keep the ASID
> +	  in TTBR1_EL1, this situation only occurs in the entry trampoline and
> +	  then only for entries in the walk cache, since the leaf translation
> +	  is unchanged. Work around the erratum by invalidating the walk cache
> +	  entries for the trampoline before entering the kernel proper.
>   
>   config QCOM_FALKOR_ERRATUM_1009
>   	bool "Falkor E1009: Prematurely complete a DSB after a TLBI"
> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index 99d105048663..a5ec6ab5c711 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -989,6 +989,16 @@ __ni_sys_trace:
>   	sub	\tmp, \tmp, #(SWAPPER_DIR_SIZE + RESERVED_TTBR0_SIZE)
>   	bic	\tmp, \tmp, #USER_ASID_FLAG
>   	msr	ttbr1_el1, \tmp
> +#ifdef CONFIG_QCOM_FALKOR_ERRATUM_1003
> +alternative_if ARM64_WORKAROUND_QCOM_FALKOR_E1003
> +	movk	\tmp, #:abs_g2_nc:(TRAMP_VALIAS >> 12)
> +	movk	\tmp, #:abs_g1_nc:(TRAMP_VALIAS >> 12)
> +	movk	\tmp, #:abs_g0_nc:((TRAMP_VALIAS & (SZ_2M - 1)) >> 12)

What's the deal with effectively zeroing bits 27:22 of the TRAMP_VALIAS 
address here? Is this an attempt to round down to section granularity 
gone awry, or something else subtle which probably warrants documenting?

Robin.

> +	isb
> +	tlbi	vae1, \tmp
> +	dsb	nsh
> +alternative_else_nop_endif
> +#endif /* CONFIG_QCOM_FALKOR_ERRATUM_1003 */
>   	.endm
>   
>   	.macro tramp_unmap_kernel, tmp
> 

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v2 14/18] arm64: erratum: Work around Falkor erratum #E1003 in trampoline code
  2017-11-30 17:06   ` Robin Murphy
@ 2017-11-30 17:19     ` Will Deacon
  0 siblings, 0 replies; 44+ messages in thread
From: Will Deacon @ 2017-11-30 17:19 UTC (permalink / raw)
  To: Robin Murphy
  Cc: linux-arm-kernel, mark.rutland, keescook, ard.biesheuvel,
	catalin.marinas, dave.hansen, sboyd, linux-kernel, msalter, tglx,
	labbott

Hi Robin,

On Thu, Nov 30, 2017 at 05:06:48PM +0000, Robin Murphy wrote:
> On 30/11/17 16:39, Will Deacon wrote:
> >We rely on an atomic swizzling of TTBR1 when transitioning from the entry
> >trampoline to the kernel proper on an exception. We can't rely on this
> >atomicity in the face of Falkor erratum #E1003, so on affected cores we
> >can issue a TLB invalidation to invalidate the walk cache prior to
> >jumping into the kernel. There is still the possibility of a TLB conflict
> >here due to conflicting walk cache entries prior to the invalidation, but
> >this doesn't appear to be the case on these CPUs in practice.
> >
> >Signed-off-by: Will Deacon <will.deacon@arm.com>
> >---
> >  arch/arm64/Kconfig        | 17 +++++------------
> >  arch/arm64/kernel/entry.S | 10 ++++++++++
> >  2 files changed, 15 insertions(+), 12 deletions(-)
> >
> >diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> >index a93339f5178f..fdcc7b9bb15d 100644
> >--- a/arch/arm64/Kconfig
> >+++ b/arch/arm64/Kconfig
> >@@ -522,20 +522,13 @@ config CAVIUM_ERRATUM_30115
> >  config QCOM_FALKOR_ERRATUM_1003
> >  	bool "Falkor E1003: Incorrect translation due to ASID change"
> >  	default y
> >-	select ARM64_PAN if ARM64_SW_TTBR0_PAN
> >  	help
> >  	  On Falkor v1, an incorrect ASID may be cached in the TLB when ASID
> >-	  and BADDR are changed together in TTBRx_EL1. The workaround for this
> >-	  issue is to use a reserved ASID in cpu_do_switch_mm() before
> >-	  switching to the new ASID. Saying Y here selects ARM64_PAN if
> >-	  ARM64_SW_TTBR0_PAN is selected. This is done because implementing and
> >-	  maintaining the E1003 workaround in the software PAN emulation code
> >-	  would be an unnecessary complication. The affected Falkor v1 CPU
> >-	  implements ARMv8.1 hardware PAN support and using hardware PAN
> >-	  support versus software PAN emulation is mutually exclusive at
> >-	  runtime.
> >-
> >-	  If unsure, say Y.
> >+	  and BADDR are changed together in TTBRx_EL1. Since we keep the ASID
> >+	  in TTBR1_EL1, this situation only occurs in the entry trampoline and
> >+	  then only for entries in the walk cache, since the leaf translation
> >+	  is unchanged. Work around the erratum by invalidating the walk cache
> >+	  entries for the trampoline before entering the kernel proper.
> >  config QCOM_FALKOR_ERRATUM_1009
> >  	bool "Falkor E1009: Prematurely complete a DSB after a TLBI"
> >diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> >index 99d105048663..a5ec6ab5c711 100644
> >--- a/arch/arm64/kernel/entry.S
> >+++ b/arch/arm64/kernel/entry.S
> >@@ -989,6 +989,16 @@ __ni_sys_trace:
> >  	sub	\tmp, \tmp, #(SWAPPER_DIR_SIZE + RESERVED_TTBR0_SIZE)
> >  	bic	\tmp, \tmp, #USER_ASID_FLAG
> >  	msr	ttbr1_el1, \tmp
> >+#ifdef CONFIG_QCOM_FALKOR_ERRATUM_1003
> >+alternative_if ARM64_WORKAROUND_QCOM_FALKOR_E1003
> >+	movk	\tmp, #:abs_g2_nc:(TRAMP_VALIAS >> 12)
> >+	movk	\tmp, #:abs_g1_nc:(TRAMP_VALIAS >> 12)
> >+	movk	\tmp, #:abs_g0_nc:((TRAMP_VALIAS & (SZ_2M - 1)) >> 12)
> 
> What's the deal with effectively zeroing bits 27:22 of the TRAMP_VALIAS
> address here? Is this an attempt to round down to section granularity gone
> awry, or something else subtle which probably warrants documenting?

Bugger, missing a '~'. I wish I had a good way to test this stuff :(
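
For the record, with SZ_2M - 1 == 0x1fffff:

	TRAMP_VALIAS &  (SZ_2M - 1)	/* offset within the 2MB region */
	TRAMP_VALIAS & ~(SZ_2M - 1)	/* base of the 2MB region (intended) */

so, as posted, the g0 movk plugs in the page offset rather than the
rounded-down region base, zeroing the upper bits you spotted.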

Thanks,

Will

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v2 03/18] arm64: mm: Move ASID from TTBR0 to TTBR1
  2017-11-30 16:39 ` [PATCH v2 03/18] arm64: mm: Move ASID from TTBR0 to TTBR1 Will Deacon
@ 2017-11-30 17:36   ` Mark Rutland
  0 siblings, 0 replies; 44+ messages in thread
From: Mark Rutland @ 2017-11-30 17:36 UTC (permalink / raw)
  To: Will Deacon
  Cc: linux-arm-kernel, linux-kernel, catalin.marinas, ard.biesheuvel,
	sboyd, dave.hansen, keescook, msalter, labbott, tglx

Hi Will,

On Thu, Nov 30, 2017 at 04:39:31PM +0000, Will Deacon wrote:
> In preparation for mapping kernelspace and userspace with different
> ASIDs, move the ASID to TTBR1 and update switch_mm to context-switch
> TTBR0 via an invalid mapping (the zero page).
> 
> Signed-off-by: Will Deacon <will.deacon@arm.com>

[...]

> +#define cpu_switch_mm(pgd,mm)				\
> +do {							\
> +	BUG_ON(pgd == swapper_pg_dir);			\
> +	cpu_set_reserved_ttbr0();			\
> +	cpu_do_switch_mm(virt_to_phys(pgd),mm);		\
> +} while (0)

A minor thing, but could we please fix the spacing for the
cpu_do_switch_mm() arguments while we move this?

AFAICT, there's no reason this needs to be a macro, and the following
works with v4.15-rc1 defconfig:

static inline void cpu_switch_mm(pgd_t *pgd, struct mm_struct *mm)
{
	BUG_ON(pgd == swapper_pg_dir);
	cpu_set_reserved_ttbr0();
	cpu_do_switch_mm(virt_to_phys(pgd), mm);
}

Otherwise, the patch looks good to me.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v2 11/18] arm64: mm: Map entry trampoline into trampoline and kernel page tables
  2017-11-30 16:39 ` [PATCH v2 11/18] arm64: mm: Map entry trampoline into trampoline and kernel page tables Will Deacon
@ 2017-11-30 18:29   ` Mark Rutland
  0 siblings, 0 replies; 44+ messages in thread
From: Mark Rutland @ 2017-11-30 18:29 UTC (permalink / raw)
  To: Will Deacon
  Cc: linux-arm-kernel, linux-kernel, catalin.marinas, ard.biesheuvel,
	sboyd, dave.hansen, keescook, msalter, labbott, tglx

Hi Will,

On Thu, Nov 30, 2017 at 04:39:39PM +0000, Will Deacon wrote:
> diff --git a/arch/arm64/include/asm/fixmap.h b/arch/arm64/include/asm/fixmap.h
> index 4052ec39e8db..8119b49be98d 100644
> --- a/arch/arm64/include/asm/fixmap.h
> +++ b/arch/arm64/include/asm/fixmap.h
> @@ -58,6 +58,10 @@ enum fixed_addresses {
>  	FIX_APEI_GHES_NMI,
>  #endif /* CONFIG_ACPI_APEI_GHES */
>  
> +#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
> +	FIX_ENTRY_TRAMP_TEXT,
> +#define TRAMP_VALIAS		(__fix_to_virt(FIX_ENTRY_TRAMP_TEXT))
> +#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
>  	__end_of_permanent_fixed_addresses,

Defining TRAMP_VALIAS here is a little surprising, especially given we
reuse the name in asm-offsets:

> +  DEFINE(TRAMP_VALIAS,		TRAMP_VALIAS);

Can't we have asm-offsets do:

  DEFINE(TRAMP_VALIAS, __fix_to_virt(FIX_ENTRY_TRAMP_TEXT));

... and rely on the asm-offsets TRAMP_VALIAS definition everywhere?

[...]

> +#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
> +static int __init map_entry_trampoline(void)
> +{
> +	extern char __entry_tramp_text_start[];
> +
> +	pgprot_t prot = rodata_enabled ? PAGE_KERNEL_ROX : PAGE_KERNEL_EXEC;
> +	phys_addr_t pa_start = __pa_symbol(__entry_tramp_text_start);
> +
> +	/* The trampoline is always mapped and can therefore be global */
> +	pgprot_val(prot) &= ~PTE_NG;
> +
> +	/* Map only the text into the trampoline page table */
> +	memset((char *)tramp_pg_dir, 0, PGD_SIZE);

The (char *) cast can go; memset() takes a void pointer and we don't do
similar casts for other memset instances.

> +	__create_pgd_mapping(tramp_pg_dir, pa_start, TRAMP_VALIAS, PAGE_SIZE,
> +			     prot, pgd_pgtable_alloc, 0);
> +
> +	/* ...as well as the kernel page table */

This might be clearer as:

	/* map the text in the kernel page table, too */

Otherwise, this looks good to me.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v2 06/18] arm64: mm: Fix and re-enable ARM64_SW_TTBR0_PAN
  2017-11-30 16:39 ` [PATCH v2 06/18] arm64: mm: Fix and re-enable ARM64_SW_TTBR0_PAN Will Deacon
@ 2017-12-01 11:48   ` Mark Rutland
  0 siblings, 0 replies; 44+ messages in thread
From: Mark Rutland @ 2017-12-01 11:48 UTC (permalink / raw)
  To: Will Deacon
  Cc: linux-arm-kernel, linux-kernel, catalin.marinas, ard.biesheuvel,
	sboyd, dave.hansen, keescook, msalter, labbott, tglx

On Thu, Nov 30, 2017 at 04:39:34PM +0000, Will Deacon wrote:
> With the ASID now installed in TTBR1, we can re-enable ARM64_SW_TTBR0_PAN
> by ensuring that we switch to a reserved ASID of zero when disabling
> user access and restore the active user ASID on the uaccess enable path.
> 
> Signed-off-by: Will Deacon <will.deacon@arm.com>

[...]

> diff --git a/arch/arm64/include/asm/asm-uaccess.h b/arch/arm64/include/asm/asm-uaccess.h
> index b3da6c886835..21b8cf304028 100644
> --- a/arch/arm64/include/asm/asm-uaccess.h
> +++ b/arch/arm64/include/asm/asm-uaccess.h
> @@ -16,11 +16,20 @@
>  	add	\tmp1, \tmp1, #SWAPPER_DIR_SIZE	// reserved_ttbr0 at the end of swapper_pg_dir
>  	msr	ttbr0_el1, \tmp1		// set reserved TTBR0_EL1
>  	isb
> +	sub	\tmp1, \tmp1, #SWAPPER_DIR_SIZE
> +	bic	\tmp1, \tmp1, #(0xffff << 48)
> +	msr	ttbr1_el1, \tmp1		// set reserved ASID
> +	isb
>  	.endm
>  
> -	.macro	__uaccess_ttbr0_enable, tmp1
> +	.macro	__uaccess_ttbr0_enable, tmp1, tmp2
>  	get_thread_info \tmp1
>  	ldr	\tmp1, [\tmp1, #TSK_TI_TTBR0]	// load saved TTBR0_EL1
> +	mrs	\tmp2, ttbr1_el1
> +	extr    \tmp2, \tmp2, \tmp1, #48
> +	ror     \tmp2, \tmp2, #16

It took me a while to figure out what was going on here, as I confused
EXTR with BFX.

I also didn't realise that thread_info::ttbr0 still had the ASID
orred-in. I guess it doesn't matter if we write that into TTBR0_EL1, as
it should be ignored by HW.
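
For anyone else decoding it: the EXTR+ROR pair splices the ASID from
the saved TTBR0 value into the live TTBR1 value, roughly (a C sketch;
tmp1 holds thread_info::ttbr0, with the ASID in bits [63:48]):

	static unsigned long splice_asid(unsigned long ttbr1,
					 unsigned long tmp1)
	{
		/* EXTR #48: bits [111:48] of the ttbr1:tmp1 pair. */
		unsigned long tmp2 = (ttbr1 << 16) | (tmp1 >> 48);

		/* ROR #16: rotate the ASID back up into bits [63:48]. */
		return (tmp2 >> 16) | (tmp2 << 48);
	}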

> diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
> index fc0f9eb66039..750a3b76a01c 100644
> --- a/arch/arm64/include/asm/uaccess.h
> +++ b/arch/arm64/include/asm/uaccess.h
> @@ -107,15 +107,19 @@ static inline void __uaccess_ttbr0_disable(void)
>  {
>  	unsigned long ttbr;
>  
> +	ttbr = read_sysreg(ttbr1_el1);
>  	/* reserved_ttbr0 placed at the end of swapper_pg_dir */
> -	ttbr = read_sysreg(ttbr1_el1) + SWAPPER_DIR_SIZE;
> -	write_sysreg(ttbr, ttbr0_el1);
> +	write_sysreg(ttbr + SWAPPER_DIR_SIZE, ttbr0_el1);
> +	isb();
> +	/* Set reserved ASID */
> +	ttbr &= ~(0xffffUL << 48);

Given we have this constant open-coded in a few places, maybe we should
have something like:

#define TTBR_ASID_MASK	(UL(0xffff) << 48)

... in a header somewhere.
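
... with which the disable path above could read, say:

	/* Switch to the reserved ASID for the uaccess-disabled window */
	ttbr &= ~TTBR_ASID_MASK;
	write_sysreg(ttbr, ttbr1_el1);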

Otherwise, looks good to me.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v2 12/18] arm64: entry: Explicitly pass exception level to kernel_ventry macro
  2017-11-30 16:39 ` [PATCH v2 12/18] arm64: entry: Explicitly pass exception level to kernel_ventry macro Will Deacon
@ 2017-12-01 11:58   ` Mark Rutland
  2017-12-01 17:51     ` Will Deacon
  0 siblings, 1 reply; 44+ messages in thread
From: Mark Rutland @ 2017-12-01 11:58 UTC (permalink / raw)
  To: Will Deacon
  Cc: linux-arm-kernel, linux-kernel, catalin.marinas, ard.biesheuvel,
	sboyd, dave.hansen, keescook, msalter, labbott, tglx

On Thu, Nov 30, 2017 at 04:39:40PM +0000, Will Deacon wrote:
> We will need to treat exceptions from EL0 differently in kernel_ventry,
> so rework the macro to take the exception level as an argument and
> construct the branch target using that.
> 
> Signed-off-by: Will Deacon <will.deacon@arm.com>
> ---
>  arch/arm64/kernel/entry.S | 46 +++++++++++++++++++++++-----------------------
>  1 file changed, 23 insertions(+), 23 deletions(-)
> 
> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index dea196f287a0..688e52f65a8d 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -71,7 +71,7 @@
>  #define BAD_FIQ		2
>  #define BAD_ERROR	3
>  
> -	.macro kernel_ventry	label
> +	.macro kernel_ventry, el, label, regsize = 64
>  	.align 7
>  	sub	sp, sp, #S_FRAME_SIZE
>  #ifdef CONFIG_VMAP_STACK
> @@ -84,7 +84,7 @@
>  	tbnz	x0, #THREAD_SHIFT, 0f
>  	sub	x0, sp, x0			// x0'' = sp' - x0' = (sp + x0) - sp = x0
>  	sub	sp, sp, x0			// sp'' = sp' - x0 = (sp + x0) - x0 = sp
> -	b	\label
> +	b	el\()\el\()_\label
>  
>  0:
>  	/*
> @@ -116,7 +116,7 @@
>  	sub	sp, sp, x0
>  	mrs	x0, tpidrro_el0
>  #endif
> -	b	\label
> +	b	el\()\el\()_\label
>  	.endm
>  
>  	.macro	kernel_entry, el, regsize = 64
> @@ -369,31 +369,31 @@ tsk	.req	x28		// current thread_info
>  
>  	.align	11
>  ENTRY(vectors)
> -	kernel_ventry	el1_sync_invalid		// Synchronous EL1t
> -	kernel_ventry	el1_irq_invalid			// IRQ EL1t
> -	kernel_ventry	el1_fiq_invalid			// FIQ EL1t
> -	kernel_ventry	el1_error_invalid		// Error EL1t
> +	kernel_ventry	1, sync_invalid			// Synchronous EL1t
> +	kernel_ventry	1, irq_invalid			// IRQ EL1t
> +	kernel_ventry	1, fiq_invalid			// FIQ EL1t
> +	kernel_ventry	1, error_invalid		// Error EL1t

Using the el parameter to build the branch name has the unfortunate
property of obscuring the branch name. For example, that makes it
difficult to jump around the entry asm with ctags, which is somewhat
painful.

Could we leave the full branch name in place, e.g.

	kernel_ventry	1, el1_sync_invalid		// Synchronous EL1t
	kernel_ventry	1, el1_irq_invalid		// IRQ EL1t
	kernel_ventry	1, el1_fiq_invalid		// FIQ EL1t
	kernel_ventry	1, el1_error_invalid		// Error EL1t

... or have separate kernel_ventry and user_ventry macros that
implicitly encoded the source EL, also leaving the label name as-is.

Otherwise, this looks fine to me.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v2 18/18] perf: arm_spe: Disallow userspace profiling when arm_kernel_unmapped_at_el0()
  2017-11-30 16:39 ` [PATCH v2 18/18] perf: arm_spe: Disallow userspace profiling when arm_kernel_unmapped_at_el0() Will Deacon
@ 2017-12-01 12:15   ` Mark Rutland
  2017-12-01 16:49     ` Will Deacon
  2017-12-01 16:26   ` Stephen Boyd
  1 sibling, 1 reply; 44+ messages in thread
From: Mark Rutland @ 2017-12-01 12:15 UTC (permalink / raw)
  To: Will Deacon
  Cc: linux-arm-kernel, linux-kernel, catalin.marinas, ard.biesheuvel,
	sboyd, dave.hansen, keescook, msalter, labbott, tglx

On Thu, Nov 30, 2017 at 04:39:46PM +0000, Will Deacon wrote:
> When running with the kernel unmapped whilst at EL0, the virtually-addressed
> SPE buffer is also unmapped, which can lead to buffer faults if userspace
> profiling is enabled.
> 
> This patch prohibits SPE profiling of userspace when
> arm_kernel_unmapped_at_el0().
> 
> Signed-off-by: Will Deacon <will.deacon@arm.com>
> ---
>  drivers/perf/arm_spe_pmu.c | 7 +++++++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/drivers/perf/arm_spe_pmu.c b/drivers/perf/arm_spe_pmu.c
> index 8ce262fc2561..c028db8973a4 100644
> --- a/drivers/perf/arm_spe_pmu.c
> +++ b/drivers/perf/arm_spe_pmu.c
> @@ -675,6 +675,13 @@ static int arm_spe_pmu_event_init(struct perf_event *event)
>  		return -EOPNOTSUPP;
>  
>  	/*
> +	 * If kernelspace is unmapped when running at EL0, then the SPE
> +	 * buffer will fault and prematurely terminate the AUX session.
> +	 */
> +	if (arm64_kernel_unmapped_at_el0() && !attr->exclude_user)
> +		dev_warn_once(&spe_pmu->pdev->dev, "unable to write to profiling buffer from EL0. Try passing \"kaiser=off\" on the kernel command line");

The commit message says this prohibits profiling, but we simply log a
message.

I take it you meant to return an error code, too?

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v2 10/18] arm64: entry: Add exception trampoline page for exceptions from EL0
  2017-11-30 16:39 ` [PATCH v2 10/18] arm64: entry: Add exception trampoline page for exceptions from EL0 Will Deacon
@ 2017-12-01 13:31   ` Mark Rutland
  2017-12-06 10:25   ` Ard Biesheuvel
  1 sibling, 0 replies; 44+ messages in thread
From: Mark Rutland @ 2017-12-01 13:31 UTC (permalink / raw)
  To: Will Deacon
  Cc: linux-arm-kernel, linux-kernel, catalin.marinas, ard.biesheuvel,
	sboyd, dave.hansen, keescook, msalter, labbott, tglx

On Thu, Nov 30, 2017 at 04:39:38PM +0000, Will Deacon wrote:
> +	.macro tramp_ventry, regsize = 64
> +	.align	7
> +1:
> +	.if	\regsize == 64
> +	msr	tpidrro_el0, x30
> +	.endif
> +	tramp_map_kernel	x30
> +	ldr	x30, =vectors
> +	prfm	plil1strm, [x30, #(1b - tramp_vectors)]
> +	msr	vbar_el1, x30
> +	add	x30, x30, #(1b - tramp_vectors)
> +	isb
> +	br	x30
> +	.endm

It might be worth a comment that the real vectors will restore x30 from
tpidrro_el0, since as-is, it looks like we're corrupting the value.

Otherwise, this looks good to me.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v2 13/18] arm64: entry: Hook up entry trampoline to exception vectors
  2017-11-30 16:39 ` [PATCH v2 13/18] arm64: entry: Hook up entry trampoline to exception vectors Will Deacon
@ 2017-12-01 13:53   ` Mark Rutland
  2017-12-01 17:40     ` Will Deacon
  0 siblings, 1 reply; 44+ messages in thread
From: Mark Rutland @ 2017-12-01 13:53 UTC (permalink / raw)
  To: Will Deacon
  Cc: linux-arm-kernel, linux-kernel, catalin.marinas, ard.biesheuvel,
	sboyd, dave.hansen, keescook, msalter, labbott, tglx

On Thu, Nov 30, 2017 at 04:39:41PM +0000, Will Deacon wrote:
>  	.macro kernel_ventry, el, label, regsize = 64
>  	.align 7
> +#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
> +	.if	\el == 0
> +	.if	\regsize == 64
> +	mrs	x30, tpidrro_el0
> +	msr	tpidrro_el0, xzr
> +	.else
> +	mov	x30, xzr

> I guess that's just to prevent accidental leaks if we dump registers
somewhere, since we used x30 as a scratch register?

> +	.macro tramp_alias, dst, sym
> +	mov_q	\dst, TRAMP_VALIAS
> +	add	\dst, \dst, #(\sym - .entry.tramp.text)
> +	.endm

I didn't realise you could refer to sections like this; neat!

Otherwise, looks fine to me.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v2 16/18] arm64: entry: Add fake CPU feature for unmapping the kernel at EL0
  2017-11-30 16:39 ` [PATCH v2 16/18] arm64: entry: Add fake CPU feature for unmapping the kernel at EL0 Will Deacon
@ 2017-12-01 13:55   ` Mark Rutland
  0 siblings, 0 replies; 44+ messages in thread
From: Mark Rutland @ 2017-12-01 13:55 UTC (permalink / raw)
  To: Will Deacon
  Cc: linux-arm-kernel, linux-kernel, catalin.marinas, ard.biesheuvel,
	sboyd, dave.hansen, keescook, msalter, labbott, tglx

On Thu, Nov 30, 2017 at 04:39:44PM +0000, Will Deacon wrote:
> -#ifndef CONFIG_UNMAP_KERNEL_AT_EL0
> -	eret
> -#else
>  	.if	\el == 0
> +alternative_insn eret, nop, ARM64_UNMAP_KERNEL_AT_EL0

Since we patch this eret ...

> +#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
> +alternative_if ARM64_UNMAP_KERNEL_AT_EL0
>  	bne	4f
>  	msr	far_el1, x30
>  	tramp_alias	x30, tramp_exit_native
> @@ -334,10 +336,11 @@ alternative_else_nop_endif
>  4:
>  	tramp_alias	x30, tramp_exit_compat
>  	br	x30
> +alternative_else_nop_endif
> +#endif

... we don't need the alternative here. This code won't be executed when
not needed, and the alternative just bloats the kernel.

We can/should keep the ifdef, though.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v2 00/18] arm64: Unmap the kernel whilst running in userspace (KAISER)
  2017-11-30 16:39 [PATCH v2 00/18] arm64: Unmap the kernel whilst running in userspace (KAISER) Will Deacon
                   ` (17 preceding siblings ...)
  2017-11-30 16:39 ` [PATCH v2 18/18] perf: arm_spe: Disallow userspace profiling when arm_kernel_unmapped_at_el0() Will Deacon
@ 2017-12-01 14:04 ` Mark Rutland
  2017-12-01 17:50   ` Will Deacon
  2017-12-04 23:47 ` Laura Abbott
  19 siblings, 1 reply; 44+ messages in thread
From: Mark Rutland @ 2017-12-01 14:04 UTC (permalink / raw)
  To: Will Deacon
  Cc: linux-arm-kernel, linux-kernel, catalin.marinas, ard.biesheuvel,
	sboyd, dave.hansen, keescook, msalter, labbott, tglx

Hi Will,

On Thu, Nov 30, 2017 at 04:39:28PM +0000, Will Deacon wrote:
> Hi again,
> 
> This is version two of the patches previously posted here:
> 
>   http://lists.infradead.org/pipermail/linux-arm-kernel/2017-November/542751.html
> 
> Changes since v1 include:
> 
>   * Based on v4.15-rc1
>   * Trampoline moved into FIXMAP area
>   * Explicit static key replaced by cpu cap
>   * Disable SPE for userspace profiling if kernel unmapped at EL0
>   * Changed polarity of cpu feature to match config option
>   * Changed command-line option so we can force on in future if necessary
>   * Changed Falkor workaround to invalidate different page within 2MB region
>   * Reworked alternative sequences in entry.S, since the NOP slides with
>     kaiser=off were measurable

This generally looks good to me.

For patches 1-10, 13-15, and 17, feel free to add:

Reviewed-by: Mark Rutland <mark.rutland@arm.com>

(assuming you fix up the issue Robin spotted on patch 14).

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v2 18/18] perf: arm_spe: Disallow userspace profiling when arm_kernel_unmapped_at_el0()
  2017-11-30 16:39 ` [PATCH v2 18/18] perf: arm_spe: Disallow userspace profiling when arm_kernel_unmapped_at_el0() Will Deacon
  2017-12-01 12:15   ` Mark Rutland
@ 2017-12-01 16:26   ` Stephen Boyd
  1 sibling, 0 replies; 44+ messages in thread
From: Stephen Boyd @ 2017-12-01 16:26 UTC (permalink / raw)
  To: Will Deacon
  Cc: linux-arm-kernel, linux-kernel, catalin.marinas, mark.rutland,
	ard.biesheuvel, dave.hansen, keescook, msalter, labbott, tglx

On 11/30, Will Deacon wrote:
> When running with the kernel unmapped whilst at EL0, the virtually-addressed
> SPE buffer is also unmapped, which can lead to buffer faults if userspace
> profiling is enabled.
> 
> This patch prohibits SPE profiling of userspace when
> arm_kernel_unmapped_at_el0().
> 
> Signed-off-by: Will Deacon <will.deacon@arm.com>
> ---
>  drivers/perf/arm_spe_pmu.c | 7 +++++++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/drivers/perf/arm_spe_pmu.c b/drivers/perf/arm_spe_pmu.c
> index 8ce262fc2561..c028db8973a4 100644
> --- a/drivers/perf/arm_spe_pmu.c
> +++ b/drivers/perf/arm_spe_pmu.c
> @@ -675,6 +675,13 @@ static int arm_spe_pmu_event_init(struct perf_event *event)
>  		return -EOPNOTSUPP;
>  
>  	/*
> +	 * If kernelspace is unmapped when running at EL0, then the SPE
> +	 * buffer will fault and prematurely terminate the AUX session.
> +	 */
> +	if (arm64_kernel_unmapped_at_el0() && !attr->exclude_user)
> +		dev_warn_once(&spe_pmu->pdev->dev, "unable to write to profiling buffer from EL0. Try passing \"kaiser=off\" on the kernel command line");

Missing newline on that print?

-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v2 18/18] perf: arm_spe: Disallow userspace profiling when arm_kernel_unmapped_at_el0()
  2017-12-01 12:15   ` Mark Rutland
@ 2017-12-01 16:49     ` Will Deacon
  0 siblings, 0 replies; 44+ messages in thread
From: Will Deacon @ 2017-12-01 16:49 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, linux-kernel, catalin.marinas, ard.biesheuvel,
	sboyd, dave.hansen, keescook, msalter, labbott, tglx

On Fri, Dec 01, 2017 at 12:15:06PM +0000, Mark Rutland wrote:
> On Thu, Nov 30, 2017 at 04:39:46PM +0000, Will Deacon wrote:
> > When running with the kernel unmapped whilst at EL0, the virtually-addressed
> > SPE buffer is also unmapped, which can lead to buffer faults if userspace
> > profiling is enabled.
> > 
> > This patch prohibits SPE profiling of userspace when
> > arm_kernel_unmapped_at_el0().
> > 
> > Signed-off-by: Will Deacon <will.deacon@arm.com>
> > ---
> >  drivers/perf/arm_spe_pmu.c | 7 +++++++
> >  1 file changed, 7 insertions(+)
> > 
> > diff --git a/drivers/perf/arm_spe_pmu.c b/drivers/perf/arm_spe_pmu.c
> > index 8ce262fc2561..c028db8973a4 100644
> > --- a/drivers/perf/arm_spe_pmu.c
> > +++ b/drivers/perf/arm_spe_pmu.c
> > @@ -675,6 +675,13 @@ static int arm_spe_pmu_event_init(struct perf_event *event)
> >  		return -EOPNOTSUPP;
> >  
> >  	/*
> > +	 * If kernelspace is unmapped when running at EL0, then the SPE
> > +	 * buffer will fault and prematurely terminate the AUX session.
> > +	 */
> > +	if (arm64_kernel_unmapped_at_el0() && !attr->exclude_user)
> > +		dev_warn_once(&spe_pmu->pdev->dev, "unable to write to profiling buffer from EL0. Try passing \"kaiser=off\" on the kernel command line");
> 
> The commit message says this prohibits profiling, but we simply log a
> message.
> 
> I take it you meant to return an error code, too?

To be honest with you, I've been changing my mind a lot about what to do
here and the code has ended up being a bit of a mess after I've butchered
it repeatedly.

The fact remains that there aren't any SPE-capable CPUs shipping at the
moment, so I'm inclined just to fail the probe for now and we can look
at whether or not we can do better when we've got some hardware to play
with.

And I'll add the missing newline.

Thanks,

Will

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v2 13/18] arm64: entry: Hook up entry trampoline to exception vectors
  2017-12-01 13:53   ` Mark Rutland
@ 2017-12-01 17:40     ` Will Deacon
  0 siblings, 0 replies; 44+ messages in thread
From: Will Deacon @ 2017-12-01 17:40 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, linux-kernel, catalin.marinas, ard.biesheuvel,
	sboyd, dave.hansen, keescook, msalter, labbott, tglx

On Fri, Dec 01, 2017 at 01:53:01PM +0000, Mark Rutland wrote:
> On Thu, Nov 30, 2017 at 04:39:41PM +0000, Will Deacon wrote:
> >  	.macro kernel_ventry, el, label, regsize = 64
> >  	.align 7
> > +#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
> > +	.if	\el == 0
> > +	.if	\regsize == 64
> > +	mrs	x30, tpidrro_el0
> > +	msr	tpidrro_el0, xzr
> > +	.else
> > +	mov	x30, xzr
> 
> I guess that's just to prevent acccidental leaks if we dump registers
> somewhere, since we used x30 as a scratch register?

Indeed. I don't have a concrete example, but I was worried about things
like perf and ptrace, which might allow you to get at the AArch64 register
state for a compat task, so it felt like a good idea to zero this.

Will

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v2 00/18] arm64: Unmap the kernel whilst running in userspace (KAISER)
  2017-12-01 14:04 ` [PATCH v2 00/18] arm64: Unmap the kernel whilst running in userspace (KAISER) Mark Rutland
@ 2017-12-01 17:50   ` Will Deacon
  2017-12-01 17:58     ` Mark Rutland
  0 siblings, 1 reply; 44+ messages in thread
From: Will Deacon @ 2017-12-01 17:50 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, linux-kernel, catalin.marinas, ard.biesheuvel,
	sboyd, dave.hansen, keescook, msalter, labbott, tglx

Hi Mark,

On Fri, Dec 01, 2017 at 02:04:06PM +0000, Mark Rutland wrote:
> On Thu, Nov 30, 2017 at 04:39:28PM +0000, Will Deacon wrote:
> > Hi again,
> > 
> > This is version two of the patches previously posted here:
> > 
> >   http://lists.infradead.org/pipermail/linux-arm-kernel/2017-November/542751.html
> > 
> > Changes since v1 include:
> > 
> >   * Based on v4.15-rc1
> >   * Trampoline moved into FIXMAP area
> >   * Explicit static key replaced by cpu cap
> >   * Disable SPE for userspace profiling if kernel unmapped at EL0
> >   * Changed polarity of cpu feature to match config option
> >   * Changed command-line option so we can force on in future if necessary
> >   * Changed Falkor workaround to invalidate different page within 2MB region
> >   * Reworked alternative sequences in entry.S, since the NOP slides with
> >     kaiser=off were measurable
> 
> This generally looks good to me.
> 
> For patches 1-10, 13-15, and 17, feel free to add:
> 
> Reviewed-by: Mark Rutland <mark.rutland@arm.com>

Thanks for going through this. Do you have any ideas about what we could
rename the command-line option to? I'll get us started:

  - kaiser=
  - hidekernel=
  - unmapkernel=
  - hardenkaslr=
  - swuan=

...

Will

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v2 12/18] arm64: entry: Explicitly pass exception level to kernel_ventry macro
  2017-12-01 11:58   ` Mark Rutland
@ 2017-12-01 17:51     ` Will Deacon
  2017-12-01 18:00       ` Mark Rutland
  0 siblings, 1 reply; 44+ messages in thread
From: Will Deacon @ 2017-12-01 17:51 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, linux-kernel, catalin.marinas, ard.biesheuvel,
	sboyd, dave.hansen, keescook, msalter, labbott, tglx

On Fri, Dec 01, 2017 at 11:58:36AM +0000, Mark Rutland wrote:
> On Thu, Nov 30, 2017 at 04:39:40PM +0000, Will Deacon wrote:
> > We will need to treat exceptions from EL0 differently in kernel_ventry,
> > so rework the macro to take the exception level as an argument and
> > construct the branch target using that.
> > 
> > Signed-off-by: Will Deacon <will.deacon@arm.com>
> > ---
> >  arch/arm64/kernel/entry.S | 46 +++++++++++++++++++++++-----------------------
> >  1 file changed, 23 insertions(+), 23 deletions(-)
> > 
> > diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> > index dea196f287a0..688e52f65a8d 100644
> > --- a/arch/arm64/kernel/entry.S
> > +++ b/arch/arm64/kernel/entry.S
> > @@ -71,7 +71,7 @@
> >  #define BAD_FIQ		2
> >  #define BAD_ERROR	3
> >  
> > -	.macro kernel_ventry	label
> > +	.macro kernel_ventry, el, label, regsize = 64
> >  	.align 7
> >  	sub	sp, sp, #S_FRAME_SIZE
> >  #ifdef CONFIG_VMAP_STACK
> > @@ -84,7 +84,7 @@
> >  	tbnz	x0, #THREAD_SHIFT, 0f
> >  	sub	x0, sp, x0			// x0'' = sp' - x0' = (sp + x0) - sp = x0
> >  	sub	sp, sp, x0			// sp'' = sp' - x0 = (sp + x0) - x0 = sp
> > -	b	\label
> > +	b	el\()\el\()_\label
> >  
> >  0:
> >  	/*
> > @@ -116,7 +116,7 @@
> >  	sub	sp, sp, x0
> >  	mrs	x0, tpidrro_el0
> >  #endif
> > -	b	\label
> > +	b	el\()\el\()_\label
> >  	.endm
> >  
> >  	.macro	kernel_entry, el, regsize = 64
> > @@ -369,31 +369,31 @@ tsk	.req	x28		// current thread_info
> >  
> >  	.align	11
> >  ENTRY(vectors)
> > -	kernel_ventry	el1_sync_invalid		// Synchronous EL1t
> > -	kernel_ventry	el1_irq_invalid			// IRQ EL1t
> > -	kernel_ventry	el1_fiq_invalid			// FIQ EL1t
> > -	kernel_ventry	el1_error_invalid		// Error EL1t
> > +	kernel_ventry	1, sync_invalid			// Synchronous EL1t
> > +	kernel_ventry	1, irq_invalid			// IRQ EL1t
> > +	kernel_ventry	1, fiq_invalid			// FIQ EL1t
> > +	kernel_ventry	1, error_invalid		// Error EL1t
> 
> Using the el parameter to build the branch name has the unfortunate
> property of obscuring the branch name. For example, that makes it
> difficult to jump around the entry asm with ctags, which is somewhat
> painful.
> 
> Could we leave the full branch name in place, e.g.
> 
> 	kernel_ventry	1, el1_sync_invalid		// Synchronous EL1t
> 	kernel_ventry	1, el1_irq_invalid		// IRQ EL1t
> 	kernel_ventry	1, el1_fiq_invalid		// FIQ EL1t
> 	kernel_ventry	1, el1_error_invalid		// Error EL1t
> 
> ... or have separate kernel_ventry and user_ventry macros that
> implicitly encoded the source EL, also leaving the label name as-is.

The downside of doing that is that it makes it possible to say things like:

	kernel_ventry	0, el1_sync

which I don't want to be expressible.

Given that ctags already chokes on lots of entry.S (for example, any macro
that is defined outside of the file) *and* that you can easily search for
things like el1_sync_invalid within the file, I'm inclined to leave this
patch as-is, but I'll note your objection and buy you a pint.

Will

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v2 00/18] arm64: Unmap the kernel whilst running in userspace (KAISER)
  2017-12-01 17:50   ` Will Deacon
@ 2017-12-01 17:58     ` Mark Rutland
  2017-12-01 18:02       ` Dave Hansen
  0 siblings, 1 reply; 44+ messages in thread
From: Mark Rutland @ 2017-12-01 17:58 UTC (permalink / raw)
  To: Will Deacon
  Cc: linux-arm-kernel, linux-kernel, catalin.marinas, ard.biesheuvel,
	sboyd, dave.hansen, keescook, msalter, labbott, tglx

On Fri, Dec 01, 2017 at 05:50:26PM +0000, Will Deacon wrote:
> On Fri, Dec 01, 2017 at 02:04:06PM +0000, Mark Rutland wrote:
> > On Thu, Nov 30, 2017 at 04:39:28PM +0000, Will Deacon wrote:
> Thanks for going through this. Do you have any ideas about what we could
> rename the command-line option to? I'll get us started:
> 
>   - kaiser=
>   - hidekernel=
>   - unmapkernel=
>   - hardenkaslr=
>   - swuan=

Of all of these, I think "unmapkernel" is the clear winner, since it
says what it does on the tin (even if it misses the when).

I'll have a think over the weekend.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v2 12/18] arm64: entry: Explicitly pass exception level to kernel_ventry macro
  2017-12-01 17:51     ` Will Deacon
@ 2017-12-01 18:00       ` Mark Rutland
  0 siblings, 0 replies; 44+ messages in thread
From: Mark Rutland @ 2017-12-01 18:00 UTC (permalink / raw)
  To: Will Deacon
  Cc: linux-arm-kernel, linux-kernel, catalin.marinas, ard.biesheuvel,
	sboyd, dave.hansen, keescook, msalter, labbott, tglx

On Fri, Dec 01, 2017 at 05:51:44PM +0000, Will Deacon wrote:
> On Fri, Dec 01, 2017 at 11:58:36AM +0000, Mark Rutland wrote:
> > On Thu, Nov 30, 2017 at 04:39:40PM +0000, Will Deacon wrote:

> > > +	.macro kernel_ventry, el, label, regsize = 64

> > > +	b	el\()\el\()_\label

> > > -	kernel_ventry	el1_sync_invalid		// Synchronous EL1t

> > > +	kernel_ventry	1, sync_invalid			// Synchronous EL1t

> > Using the el parameter to build the branch name has the unfortunate
> > property of obscuring the branch name. For example, that makes it
> > difficult to jump around the entry asm with ctags, which is somewhat
> > painful.
> > 
> > Could we leave the full branch name in place, e.g.
> > 
> > 	kernel_ventry	1, el1_sync_invalid		// Synchronous EL1t
> > 	kernel_ventry	1, el1_irq_invalid		// IRQ EL1t
> > 	kernel_ventry	1, el1_fiq_invalid		// FIQ EL1t
> > 	kernel_ventry	1, el1_error_invalid		// Error EL1t
> > 
> > ... or have separate kernel_ventry and user_ventry macros that
> > implicitly encoded the source EL, also leaving the label name as-is.
> 
> The downside of doing that is that it makes it possible to say things like:
> 
> 	kernel_ventry	0, el1_sync
> 
> which I don't want to be expressible.
> 
> Given that ctags already chokes on lots of entry.S (for example, any macro
> that is defined outside of the file) *and* that you can easily search for
> things like el1_sync_invalid within the file, I'm inclined to leave this
> patch as-is, but I'll note your objection and buy you a pint.

I guess I'll live with it, then. ;)

Assuming I can't twist your arm, feel free to take my Reviewed-by here
too.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v2 00/18] arm64: Unmap the kernel whilst running in userspace (KAISER)
  2017-12-01 17:58     ` Mark Rutland
@ 2017-12-01 18:02       ` Dave Hansen
  2017-12-01 18:14         ` Will Deacon
  0 siblings, 1 reply; 44+ messages in thread
From: Dave Hansen @ 2017-12-01 18:02 UTC (permalink / raw)
  To: Mark Rutland, Will Deacon
  Cc: linux-arm-kernel, linux-kernel, catalin.marinas, ard.biesheuvel,
	sboyd, keescook, msalter, labbott, tglx

On 12/01/2017 09:58 AM, Mark Rutland wrote:
> On Fri, Dec 01, 2017 at 05:50:26PM +0000, Will Deacon wrote:
>> On Fri, Dec 01, 2017 at 02:04:06PM +0000, Mark Rutland wrote:
>>> On Thu, Nov 30, 2017 at 04:39:28PM +0000, Will Deacon wrote:
>> Thanks for going through this. Do you have any ideas about what we could
>> rename the command-line option to? I'll get us started:
>>
>>   - kaiser=
>>   - hidekernel=
>>   - unmapkernel=
>>   - hardenkaslr=
>>   - swuan=
> Of all of these, I think "unmapkernel" is the clear winner, since it
> says what it does on the tin (even if it misses the when).
> 
> I'll have a think over the weekend.

On the x86 side we've been leaning toward renaming kaiser to something
like "user pagetable isolation", so the boot parameter is something like
"noupti".

But I think the consensus is definitely to get rid of "kaiser".

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v2 00/18] arm64: Unmap the kernel whilst running in userspace (KAISER)
  2017-12-01 18:02       ` Dave Hansen
@ 2017-12-01 18:14         ` Will Deacon
  2017-12-11  2:24           ` Shanker Donthineni
  0 siblings, 1 reply; 44+ messages in thread
From: Will Deacon @ 2017-12-01 18:14 UTC (permalink / raw)
  To: Dave Hansen
  Cc: Mark Rutland, linux-arm-kernel, linux-kernel, catalin.marinas,
	ard.biesheuvel, sboyd, keescook, msalter, labbott, tglx

On Fri, Dec 01, 2017 at 10:02:43AM -0800, Dave Hansen wrote:
> On 12/01/2017 09:58 AM, Mark Rutland wrote:
> > On Fri, Dec 01, 2017 at 05:50:26PM +0000, Will Deacon wrote:
> >> On Fri, Dec 01, 2017 at 02:04:06PM +0000, Mark Rutland wrote:
> >>> On Thu, Nov 30, 2017 at 04:39:28PM +0000, Will Deacon wrote:
> >> Thanks for going through this. Do you have any ideas about what we could
> >> rename the command-line option to? I'll get us started:
> >>
> >>   - kaiser=
> >>   - hidekernel=
> >>   - unmapkernel=
> >>   - hardenkaslr=
> >>   - swuan=
> > Of all of these, I think "unmapkernel" is the clear winner, since it
> > says what it does on the tin (even if it misses the when).
> > 
> > I'll have a think over the weekend.
> 
> On the x86 side we've been leaning toward renaming kaiser to something
> like "user pagetable isolation", so the boot parameter is something like
> "noupti".
> 
> But I think the consensus is definitely to get rid of "kaiser".

Ok, good. I'm happy to follow your lead on the name if it's likely to be
resolved in the next week or so.

Will

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v2 00/18] arm64: Unmap the kernel whilst running in userspace (KAISER)
  2017-11-30 16:39 [PATCH v2 00/18] arm64: Unmap the kernel whilst running in userspace (KAISER) Will Deacon
                   ` (18 preceding siblings ...)
  2017-12-01 14:04 ` [PATCH v2 00/18] arm64: Unmap the kernel whilst running in userspace (KAISER) Mark Rutland
@ 2017-12-04 23:47 ` Laura Abbott
  19 siblings, 0 replies; 44+ messages in thread
From: Laura Abbott @ 2017-12-04 23:47 UTC (permalink / raw)
  To: Will Deacon, linux-arm-kernel
  Cc: linux-kernel, catalin.marinas, mark.rutland, ard.biesheuvel,
	sboyd, dave.hansen, keescook, msalter, tglx

On 11/30/2017 08:39 AM, Will Deacon wrote:
> Hi again,
> 
> This is version two of the patches previously posted here:
> 
>    http://lists.infradead.org/pipermail/linux-arm-kernel/2017-November/542751.html
> 
> Changes since v1 include:
> 
>    * Based on v4.15-rc1
>    * Trampoline moved into FIXMAP area
>    * Explicit static key replaced by cpu cap
>    * Disable SPE for userspace profiling if kernel unmapped at EL0
>    * Changed polarity of cpu feature to match config option
>    * Changed command-line option so we can force on in future if necessary
>    * Changed Falkor workaround to invalidate different page within 2MB region
>    * Reworked alternative sequences in entry.S, since the NOP slides with
>      kaiser=off were measurable
> 
> I experimented with leaving the vbar set to point at the kaiser vectors,
> but I couldn't measure any performance improvement from that and it made
> the code slightly more complicated, so I've left it as-is.
> 
> Patches based on 4.15-rc1 and also pushed here:
> 
>    git://git.kernel.org/pub/scm/linux/kernel/git/will/linux.git kaiser
> 
> Feedback welcome, particularly on a better name for the command-line option.
> 

I ran this with one of the LTP mmap tests over the weekend. The mmap
test completed successfully, but later the machine was spewing I/O
errors. I suspect this is down to the hardware rather than the patches,
so I'm running the test again for good measure.
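
For reference, a run of that sort looks roughly like the following
(illustrative only; it assumes a standard LTP install under /opt/ltp):

    cd /opt/ltp
    ./runltp -f syscalls -s mmap    # run just the mmap syscall tests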

> Will
> 
> --->8
> 
> Will Deacon (18):
>    arm64: mm: Use non-global mappings for kernel space
>    arm64: mm: Temporarily disable ARM64_SW_TTBR0_PAN
>    arm64: mm: Move ASID from TTBR0 to TTBR1
>    arm64: mm: Remove pre_ttbr0_update_workaround for Falkor erratum
>      #E1003
>    arm64: mm: Rename post_ttbr0_update_workaround
>    arm64: mm: Fix and re-enable ARM64_SW_TTBR0_PAN
>    arm64: mm: Allocate ASIDs in pairs
>    arm64: mm: Add arm64_kernel_unmapped_at_el0 helper
>    arm64: mm: Invalidate both kernel and user ASIDs when performing TLBI
>    arm64: entry: Add exception trampoline page for exceptions from EL0
>    arm64: mm: Map entry trampoline into trampoline and kernel page tables
>    arm64: entry: Explicitly pass exception level to kernel_ventry macro
>    arm64: entry: Hook up entry trampoline to exception vectors
>    arm64: erratum: Work around Falkor erratum #E1003 in trampoline code
>    arm64: tls: Avoid unconditional zeroing of tpidrro_el0 for native
>      tasks
>    arm64: entry: Add fake CPU feature for unmapping the kernel at EL0
>    arm64: Kconfig: Add CONFIG_UNMAP_KERNEL_AT_EL0
>    perf: arm_spe: Disallow userspace profiling when
>      arm_kernel_unmapped_at_el0()
> 
>   arch/arm64/Kconfig                      |  30 +++--
>   arch/arm64/include/asm/asm-uaccess.h    |  25 +++--
>   arch/arm64/include/asm/assembler.h      |  27 +----
>   arch/arm64/include/asm/cpucaps.h        |   3 +-
>   arch/arm64/include/asm/fixmap.h         |   4 +
>   arch/arm64/include/asm/kernel-pgtable.h |  12 +-
>   arch/arm64/include/asm/mmu.h            |  10 ++
>   arch/arm64/include/asm/mmu_context.h    |   9 +-
>   arch/arm64/include/asm/pgtable-hwdef.h  |   1 +
>   arch/arm64/include/asm/pgtable-prot.h   |  21 +++-
>   arch/arm64/include/asm/pgtable.h        |   1 +
>   arch/arm64/include/asm/proc-fns.h       |   6 -
>   arch/arm64/include/asm/tlbflush.h       |  16 ++-
>   arch/arm64/include/asm/uaccess.h        |  21 +++-
>   arch/arm64/kernel/asm-offsets.c         |   6 +-
>   arch/arm64/kernel/cpufeature.c          |  41 +++++++
>   arch/arm64/kernel/entry.S               | 190 +++++++++++++++++++++++++++-----
>   arch/arm64/kernel/process.c             |  12 +-
>   arch/arm64/kernel/vmlinux.lds.S         |  17 +++
>   arch/arm64/lib/clear_user.S             |   2 +-
>   arch/arm64/lib/copy_from_user.S         |   2 +-
>   arch/arm64/lib/copy_in_user.S           |   2 +-
>   arch/arm64/lib/copy_to_user.S           |   2 +-
>   arch/arm64/mm/cache.S                   |   2 +-
>   arch/arm64/mm/context.c                 |  36 +++---
>   arch/arm64/mm/mmu.c                     |  23 ++++
>   arch/arm64/mm/proc.S                    |  12 +-
>   arch/arm64/xen/hypercall.S              |   2 +-
>   drivers/perf/arm_spe_pmu.c              |   7 ++
>   29 files changed, 407 insertions(+), 135 deletions(-)
> 


* Re: [PATCH v2 10/18] arm64: entry: Add exception trampoline page for exceptions from EL0
  2017-11-30 16:39 ` [PATCH v2 10/18] arm64: entry: Add exception trampoline page for exceptions from EL0 Will Deacon
  2017-12-01 13:31   ` Mark Rutland
@ 2017-12-06 10:25   ` Ard Biesheuvel
  1 sibling, 0 replies; 44+ messages in thread
From: Ard Biesheuvel @ 2017-12-06 10:25 UTC (permalink / raw)
  To: Will Deacon
  Cc: linux-arm-kernel, linux-kernel, Catalin Marinas, Mark Rutland,
	Stephen Boyd, Dave Hansen, Kees Cook, Mark Salter, Laura Abbott,
	tglx

On 30 November 2017 at 16:39, Will Deacon <will.deacon@arm.com> wrote:
> To allow unmapping of the kernel whilst running at EL0, we need to
> point the exception vectors at an entry trampoline that can map/unmap
> the kernel on entry/exit respectively.
>
> This patch adds the trampoline page, although it is not yet plugged
> into the vector table and is therefore unused.
>
> Signed-off-by: Will Deacon <will.deacon@arm.com>
> ---
>  arch/arm64/kernel/entry.S       | 86 +++++++++++++++++++++++++++++++++++++++++
>  arch/arm64/kernel/vmlinux.lds.S | 17 ++++++++
>  2 files changed, 103 insertions(+)
>
> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index d454d8ed45e4..dea196f287a0 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -28,6 +28,8 @@
>  #include <asm/errno.h>
>  #include <asm/esr.h>
>  #include <asm/irq.h>
> +#include <asm/memory.h>
> +#include <asm/mmu.h>
>  #include <asm/processor.h>
>  #include <asm/ptrace.h>
>  #include <asm/thread_info.h>
> @@ -943,6 +945,90 @@ __ni_sys_trace:
>
>         .popsection                             // .entry.text
>
> +#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
> +/*
> + * Exception vectors trampoline.
> + */
> +       .pushsection ".entry.tramp.text", "ax"
> +
> +       .macro tramp_map_kernel, tmp
> +       mrs     \tmp, ttbr1_el1
> +       sub     \tmp, \tmp, #(SWAPPER_DIR_SIZE + RESERVED_TTBR0_SIZE)
> +       bic     \tmp, \tmp, #USER_ASID_FLAG
> +       msr     ttbr1_el1, \tmp
> +       .endm
> +
> +       .macro tramp_unmap_kernel, tmp
> +       mrs     \tmp, ttbr1_el1
> +       add     \tmp, \tmp, #(SWAPPER_DIR_SIZE + RESERVED_TTBR0_SIZE)
> +       orr     \tmp, \tmp, #USER_ASID_FLAG
> +       msr     ttbr1_el1, \tmp
> +       /*
> +        * We avoid running the post_ttbr_update_workaround here because the
> +        * user and kernel ASIDs don't have conflicting mappings, so any
> +        * "blessing" as described in:
> +        *
> +        *   http://lkml.kernel.org/r/56BB848A.6060603@caviumnetworks.com
> +        *
> +        * will not hurt correctness. Whilst this may partially defeat the
> +        * point of using split ASIDs in the first place, it avoids
> +        * the hit of invalidating the entire I-cache on every return to
> +        * userspace.
> +        */
> +       .endm
> +
> +       .macro tramp_ventry, regsize = 64
> +       .align  7
> +1:
> +       .if     \regsize == 64
> +       msr     tpidrro_el0, x30
> +       .endif
> +       tramp_map_kernel        x30
> +       ldr     x30, =vectors

Could we move this literal into the next page, and only map that in
the kernel page tables? It's the only piece of information in the
trampoline page that can reveal the true location of the kernel, and
moving it out is trivial to implement on top of the changes you are
already making to harden KASLR.
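
Something like the following, as a rough sketch (the section name and
the assumption that the data page sits immediately after the trampoline
page in the fixmap are mine, not part of this series):

        // Hypothetical page holding the one literal the trampoline
        // needs; only ever mapped in the kernel page tables.
        .pushsection ".entry.tramp.data", "a"
        .align  12                      // PAGE_SHIFT (4K pages assumed)
__entry_tramp_data_start:
        .quad   vectors
        .popsection

and then in tramp_ventry, instead of the literal load:

        adr     x30, tramp_vectors + PAGE_SIZE  // the data page
        ldr     x30, [x30]                      // real address of vectors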

> +       prfm    plil1strm, [x30, #(1b - tramp_vectors)]
> +       msr     vbar_el1, x30
> +       add     x30, x30, #(1b - tramp_vectors)
> +       isb
> +       br      x30
> +       .endm
> +
> +       .macro tramp_exit, regsize = 64
> +       adr     x30, tramp_vectors
> +       msr     vbar_el1, x30
> +       tramp_unmap_kernel      x30
> +       .if     \regsize == 64
> +       mrs     x30, far_el1
> +       .endif
> +       eret
> +       .endm
> +
> +       .align  11
> +ENTRY(tramp_vectors)
> +       .space  0x400
> +
> +       tramp_ventry
> +       tramp_ventry
> +       tramp_ventry
> +       tramp_ventry
> +
> +       tramp_ventry    32
> +       tramp_ventry    32
> +       tramp_ventry    32
> +       tramp_ventry    32
> +END(tramp_vectors)
> +
> +ENTRY(tramp_exit_native)
> +       tramp_exit
> +END(tramp_exit_native)
> +
> +ENTRY(tramp_exit_compat)
> +       tramp_exit      32
> +END(tramp_exit_compat)
> +
> +       .ltorg
> +       .popsection                             // .entry.tramp.text
> +#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
> +
>  /*
>   * Special system call wrappers.
>   */
> diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
> index 7da3e5c366a0..6b4260f22aab 100644
> --- a/arch/arm64/kernel/vmlinux.lds.S
> +++ b/arch/arm64/kernel/vmlinux.lds.S
> @@ -57,6 +57,17 @@ jiffies = jiffies_64;
>  #define HIBERNATE_TEXT
>  #endif
>
> +#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
> +#define TRAMP_TEXT                                     \
> +       . = ALIGN(PAGE_SIZE);                           \
> +       VMLINUX_SYMBOL(__entry_tramp_text_start) = .;   \
> +       *(.entry.tramp.text)                            \
> +       . = ALIGN(PAGE_SIZE);                           \
> +       VMLINUX_SYMBOL(__entry_tramp_text_end) = .;
> +#else
> +#define TRAMP_TEXT
> +#endif
> +
>  /*
>   * The size of the PE/COFF section that covers the kernel image, which
>   * runs from stext to _edata, must be a round multiple of the PE/COFF
> @@ -113,6 +124,7 @@ SECTIONS
>                         HYPERVISOR_TEXT
>                         IDMAP_TEXT
>                         HIBERNATE_TEXT
> +                       TRAMP_TEXT
>                         *(.fixup)
>                         *(.gnu.warning)
>                 . = ALIGN(16);
> @@ -214,6 +226,11 @@ SECTIONS
>         . += RESERVED_TTBR0_SIZE;
>  #endif
>
> +#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
> +       tramp_pg_dir = .;
> +       . += PAGE_SIZE;
> +#endif
> +
>         __pecoff_data_size = ABSOLUTE(. - __initdata_begin);
>         _end = .;
>
> --
> 2.1.4
>
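
As an aside, for readers puzzling over the TTBR1 arithmetic above: the
macros rely on tramp_pg_dir being placed exactly
SWAPPER_DIR_SIZE + RESERVED_TTBR0_SIZE bytes after swapper_pg_dir (see
the vmlinux.lds.S hunk), and on the even/odd ASID pairing from earlier
in the series, where TTBR bit 48 is ASID bit 0. A C model of the two
macros, purely illustrative (the helper names and placeholder sizes are
made up; the real constants are configuration-dependent):

    #include <stdint.h>

    /* Placeholder sizes; the real values depend on the kernel config. */
    #define SWAPPER_DIR_SIZE    (3 * 4096ULL)
    #define RESERVED_TTBR0_SIZE 4096ULL

    /* TTBR1_EL1 bits 63:48 hold the ASID; bit 48 selects the odd
     * (user) ASID of each kernel/user pair. */
    #define USER_ASID_FLAG      (1ULL << 48)

    #define TRAMP_OFFSET        (SWAPPER_DIR_SIZE + RESERVED_TTBR0_SIZE)

    static inline uint64_t tramp_map_kernel(uint64_t ttbr1)
    {
        /* tramp_pg_dir -> swapper_pg_dir, even (kernel) ASID */
        return (ttbr1 - TRAMP_OFFSET) & ~USER_ASID_FLAG;
    }

    static inline uint64_t tramp_unmap_kernel(uint64_t ttbr1)
    {
        /* swapper_pg_dir -> tramp_pg_dir, odd (user) ASID */
        return (ttbr1 + TRAMP_OFFSET) | USER_ASID_FLAG;
    }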


* Re: [PATCH v2 00/18] arm64: Unmap the kernel whilst running in userspace (KAISER)
  2017-12-01 18:14         ` Will Deacon
@ 2017-12-11  2:24           ` Shanker Donthineni
  0 siblings, 0 replies; 44+ messages in thread
From: Shanker Donthineni @ 2017-12-11  2:24 UTC (permalink / raw)
  To: Will Deacon, Dave Hansen
  Cc: Mark Rutland, keescook, ard.biesheuvel, catalin.marinas, sboyd,
	linux-kernel, msalter, tglx, labbott, linux-arm-kernel


Hi Will,

I tested the v2 patch series successfully on a Centriq 2400 server platform,
with no regressions so far. We also applied our internal patches on top of
the "kpti" branch and verified the KAISER feature.

Tested-by: Shanker Donthineni <shankerd@codeaurora.org>


-- 
Shanker Donthineni
Qualcomm Datacenter Technologies, Inc. as an affiliate of Qualcomm Technologies, Inc.
Qualcomm Technologies, Inc. is a member of the Code Aurora Forum, a Linux Foundation Collaborative Project.


* Re: [PATCH v2 17/18] arm64: Kconfig: Add CONFIG_UNMAP_KERNEL_AT_EL0
  2017-11-30 16:39 ` [PATCH v2 17/18] arm64: Kconfig: Add CONFIG_UNMAP_KERNEL_AT_EL0 Will Deacon
@ 2017-12-12  8:44   ` Geert Uytterhoeven
  2017-12-12 10:28     ` Will Deacon
  0 siblings, 1 reply; 44+ messages in thread
From: Geert Uytterhoeven @ 2017-12-12  8:44 UTC (permalink / raw)
  To: Will Deacon
  Cc: linux-arm-kernel, linux-kernel, Catalin Marinas, Mark Rutland,
	Ard Biesheuvel, Stephen Boyd, Dave Hansen, Kees Cook,
	Mark Salter, Laura Abbott, Thomas Gleixner

Hi Will,

On Thu, Nov 30, 2017 at 5:39 PM, Will Deacon <will.deacon@arm.com> wrote:
> Add a Kconfig entry to control use of the entry trampoline, which allows
> us to unmap the kernel whilst running in userspace and improve the
> robustness of KASLR.
>
> Signed-off-by: Will Deacon <will.deacon@arm.com>

This is now commit 084eb77cd3a81134 in arm64/for-next/core.

> ---
>  arch/arm64/Kconfig | 13 +++++++++++++
>  1 file changed, 13 insertions(+)
>
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index fdcc7b9bb15d..3af1657fcac3 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -833,6 +833,19 @@ config FORCE_MAX_ZONEORDER
>           However for 4K, we choose a higher default value, 11 as opposed to 10, giving us
>           4M allocations matching the default size used by generic code.
>
> +config UNMAP_KERNEL_AT_EL0
> +       bool "Unmap kernel when running in userspace (aka \"KAISER\")"

But I believe this is no longer called KAISER?

Gr{oetje,eeting}s,

                        Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds


* Re: [PATCH v2 17/18] arm64: Kconfig: Add CONFIG_UNMAP_KERNEL_AT_EL0
  2017-12-12  8:44   ` Geert Uytterhoeven
@ 2017-12-12 10:28     ` Will Deacon
  0 siblings, 0 replies; 44+ messages in thread
From: Will Deacon @ 2017-12-12 10:28 UTC (permalink / raw)
  To: Geert Uytterhoeven
  Cc: linux-arm-kernel, linux-kernel, Catalin Marinas, Mark Rutland,
	Ard Biesheuvel, Stephen Boyd, Dave Hansen, Kees Cook,
	Mark Salter, Laura Abbott, Thomas Gleixner

On Tue, Dec 12, 2017 at 09:44:09AM +0100, Geert Uytterhoeven wrote:
> Hi Will,
> 
> On Thu, Nov 30, 2017 at 5:39 PM, Will Deacon <will.deacon@arm.com> wrote:
> > Add a Kconfig entry to control use of the entry trampoline, which allows
> > us to unmap the kernel whilst running in userspace and improve the
> > robustness of KASLR.
> >
> > Signed-off-by: Will Deacon <will.deacon@arm.com>
> 
> This is now commit 084eb77cd3a81134 in arm64/for-next/core.
> 
> > ---
> >  arch/arm64/Kconfig | 13 +++++++++++++
> >  1 file changed, 13 insertions(+)
> >
> > diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> > index fdcc7b9bb15d..3af1657fcac3 100644
> > --- a/arch/arm64/Kconfig
> > +++ b/arch/arm64/Kconfig
> > @@ -833,6 +833,19 @@ config FORCE_MAX_ZONEORDER
> >           However for 4K, we choose a higher default value, 11 as opposed to 10, giving us
> >           4M allocations matching the default size used by generic code.
> >
> > +config UNMAP_KERNEL_AT_EL0
> > +       bool "Unmap kernel when running in userspace (aka \"KAISER\")"
> 
> But I believe this is no longer called KAISER?

That's right, but KAISER is the name used in the original paper, so I
figured it was worth mentioning here to help people identify what this
feature is. The command-line option is "kpti", to align with x86.
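
In practical terms, forcing it off at boot then looks something like
this (illustrative; assuming the option takes strtobool-style values,
so "kpti=0" and "kpti=off" would be equivalent):

    # appended to the kernel command line in the bootloader configuration
    console=ttyAMA0 root=/dev/vda2 kpti=0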

Will


end of thread

Thread overview: 44+ messages
2017-11-30 16:39 [PATCH v2 00/18] arm64: Unmap the kernel whilst running in userspace (KAISER) Will Deacon
2017-11-30 16:39 ` [PATCH v2 01/18] arm64: mm: Use non-global mappings for kernel space Will Deacon
2017-11-30 16:39 ` [PATCH v2 02/18] arm64: mm: Temporarily disable ARM64_SW_TTBR0_PAN Will Deacon
2017-11-30 16:39 ` [PATCH v2 03/18] arm64: mm: Move ASID from TTBR0 to TTBR1 Will Deacon
2017-11-30 17:36   ` Mark Rutland
2017-11-30 16:39 ` [PATCH v2 04/18] arm64: mm: Remove pre_ttbr0_update_workaround for Falkor erratum #E1003 Will Deacon
2017-11-30 16:39 ` [PATCH v2 05/18] arm64: mm: Rename post_ttbr0_update_workaround Will Deacon
2017-11-30 16:39 ` [PATCH v2 06/18] arm64: mm: Fix and re-enable ARM64_SW_TTBR0_PAN Will Deacon
2017-12-01 11:48   ` Mark Rutland
2017-11-30 16:39 ` [PATCH v2 07/18] arm64: mm: Allocate ASIDs in pairs Will Deacon
2017-11-30 16:39 ` [PATCH v2 08/18] arm64: mm: Add arm64_kernel_unmapped_at_el0 helper Will Deacon
2017-11-30 16:39 ` [PATCH v2 09/18] arm64: mm: Invalidate both kernel and user ASIDs when performing TLBI Will Deacon
2017-11-30 16:39 ` [PATCH v2 10/18] arm64: entry: Add exception trampoline page for exceptions from EL0 Will Deacon
2017-12-01 13:31   ` Mark Rutland
2017-12-06 10:25   ` Ard Biesheuvel
2017-11-30 16:39 ` [PATCH v2 11/18] arm64: mm: Map entry trampoline into trampoline and kernel page tables Will Deacon
2017-11-30 18:29   ` Mark Rutland
2017-11-30 16:39 ` [PATCH v2 12/18] arm64: entry: Explicitly pass exception level to kernel_ventry macro Will Deacon
2017-12-01 11:58   ` Mark Rutland
2017-12-01 17:51     ` Will Deacon
2017-12-01 18:00       ` Mark Rutland
2017-11-30 16:39 ` [PATCH v2 13/18] arm64: entry: Hook up entry trampoline to exception vectors Will Deacon
2017-12-01 13:53   ` Mark Rutland
2017-12-01 17:40     ` Will Deacon
2017-11-30 16:39 ` [PATCH v2 14/18] arm64: erratum: Work around Falkor erratum #E1003 in trampoline code Will Deacon
2017-11-30 17:06   ` Robin Murphy
2017-11-30 17:19     ` Will Deacon
2017-11-30 16:39 ` [PATCH v2 15/18] arm64: tls: Avoid unconditional zeroing of tpidrro_el0 for native tasks Will Deacon
2017-11-30 16:39 ` [PATCH v2 16/18] arm64: entry: Add fake CPU feature for unmapping the kernel at EL0 Will Deacon
2017-12-01 13:55   ` Mark Rutland
2017-11-30 16:39 ` [PATCH v2 17/18] arm64: Kconfig: Add CONFIG_UNMAP_KERNEL_AT_EL0 Will Deacon
2017-12-12  8:44   ` Geert Uytterhoeven
2017-12-12 10:28     ` Will Deacon
2017-11-30 16:39 ` [PATCH v2 18/18] perf: arm_spe: Disallow userspace profiling when arm_kernel_unmapped_at_el0() Will Deacon
2017-12-01 12:15   ` Mark Rutland
2017-12-01 16:49     ` Will Deacon
2017-12-01 16:26   ` Stephen Boyd
2017-12-01 14:04 ` [PATCH v2 00/18] arm64: Unmap the kernel whilst running in userspace (KAISER) Mark Rutland
2017-12-01 17:50   ` Will Deacon
2017-12-01 17:58     ` Mark Rutland
2017-12-01 18:02       ` Dave Hansen
2017-12-01 18:14         ` Will Deacon
2017-12-11  2:24           ` Shanker Donthineni
2017-12-04 23:47 ` Laura Abbott
