linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/3] x86/pti: Get rid of entry trampolines and add some docs
@ 2018-08-31 22:21 Andy Lutomirski
  2018-08-31 22:21 ` [PATCH 1/3] x86/entry/64: Document idtentry Andy Lutomirski
                   ` (3 more replies)
  0 siblings, 4 replies; 7+ messages in thread
From: Andy Lutomirski @ 2018-08-31 22:21 UTC (permalink / raw)
  To: x86
  Cc: Borislav Petkov, LKML, Dave Hansen, Adrian Hunter,
	Alexander Shishkin, Arnaldo Carvalho de Melo, Linus Torvalds,
	Josh Poimboeuf, Joerg Roedel, Jiri Olsa, Andi Kleen,
	Peter Zijlstra, Andy Lutomirski

This gets rid of entry trampolines.  It's more or less the same as
the RFC version, except that I rebased it to v4.19-rc1 due to
massive conflicts with some perf changes.  I have *not* reverted all
of the perf support for entry trampolines -- I leave that to the
perf crew, if needed.

Andy Lutomirski (3):
  x86/entry/64: Document idtentry
  x86/entry/64: Use the TSS sp2 slot for rsp_scratch
  x86/pti/64: Remove the SYSCALL64 entry trampoline

 arch/x86/entry/entry_64.S             | 101 +++++++++-----------------
 arch/x86/include/asm/cpu_entry_area.h |   2 -
 arch/x86/include/asm/processor.h      |   6 ++
 arch/x86/include/asm/sections.h       |   1 -
 arch/x86/include/asm/thread_info.h    |   1 +
 arch/x86/kernel/asm-offsets.c         |   5 +-
 arch/x86/kernel/cpu/common.c          |  11 +--
 arch/x86/kernel/kprobes/core.c        |  10 +--
 arch/x86/kernel/process_64.c          |   2 -
 arch/x86/kernel/traps.c               |   4 +
 arch/x86/kernel/vmlinux.lds.S         |  10 ---
 arch/x86/mm/cpu_entry_area.c          |  36 ---------
 arch/x86/mm/pti.c                     |  33 ++++++++-
 13 files changed, 83 insertions(+), 139 deletions(-)

-- 
2.17.1



* [PATCH 1/3] x86/entry/64: Document idtentry
  2018-08-31 22:21 [PATCH 0/3] x86/pti: Get rid of entry trampolines and add some docs Andy Lutomirski
@ 2018-08-31 22:21 ` Andy Lutomirski
  2018-08-31 22:21 ` [PATCH 2/3] x86/entry/64: Use the TSS sp2 slot for rsp_scratch Andy Lutomirski
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 7+ messages in thread
From: Andy Lutomirski @ 2018-08-31 22:21 UTC (permalink / raw)
  To: x86
  Cc: Borislav Petkov, LKML, Dave Hansen, Adrian Hunter,
	Alexander Shishkin, Arnaldo Carvalho de Melo, Linus Torvalds,
	Josh Poimboeuf, Joerg Roedel, Jiri Olsa, Andi Kleen,
	Peter Zijlstra, Andy Lutomirski

The idtentry macro is complicated and magical.  Document what it
does to help future readers and to allow future patches to adjust
the code and docs at the same time.
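
For context, idtentry is invoked once per vector elsewhere in
entry_64.S; a few representative invocations (paraphrased here; the
authoritative list is in the file itself) look like this:

	idtentry divide_error		do_divide_error		has_error_code=0
	idtentry general_protection	do_general_protection	has_error_code=1
	idtentry debug			do_debug		has_error_code=0	paranoid=1 shift_ist=DEBUG_STACK
	idtentry double_fault		do_double_fault		has_error_code=1	paranoid=2

That is, #GP takes the default @paranoid == 0 path, #DB shifts its IST
stack, and #DF never switches stacks.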

Signed-off-by: Andy Lutomirski <luto@kernel.org>
---
 arch/x86/entry/entry_64.S | 35 +++++++++++++++++++++++++++++++++++
 arch/x86/kernel/traps.c   |  4 ++++
 2 files changed, 39 insertions(+)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 957dfb693ecc..1796c42e08af 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -900,6 +900,41 @@ apicinterrupt IRQ_WORK_VECTOR			irq_work_interrupt		smp_irq_work_interrupt
  */
 #define CPU_TSS_IST(x) PER_CPU_VAR(cpu_tss_rw) + (TSS_ist + ((x) - 1) * 8)
 
+/**
+ * idtentry - Generate an IDT entry stub
+ * @sym: Name of the generated entry point
+ * @do_sym: C function to be called
+ * @has_error_code: True if this IDT vector has an error code on the stack
+ * @paranoid: non-zero means that this vector may be invoked from kernel
+ *            mode with user GSBASE and/or user CR3.  2 is special -- see below.
+ * @shift_ist: Set to an IST index if entries from kernel mode should
+ *             decrement the IST stack so that nested entries get a fresh
+ *             stack.  (This is for #DB, which has a nasty habit of
+ *             recursing.)
+ *
+ * idtentry generates an IDT stub that sets up a usable kernel context,
+ * creates struct pt_regs, and calls @do_sym.  The stub has the following
+ * special behaviors:
+ *
+ * On an entry from user mode, the stub switches from the trampoline or
+ * IST stack to the normal thread stack.  On an exit to user mode, the
+ * normal exit-to-usermode path is invoked.
+ *
+ * On an exit to kernel mode, if paranoid == 0, we check for preemption,
+ * whereas we omit the preemption check if @paranoid != 0.  This is purely
+ * because the implementation is simpler this way.  The kernel only needs
+ * to check for asynchronous kernel preemption when IRQ handlers return.
+ *
+ * If @paranoid == 0, then the stub will handle IRET faults by pretending
+ * that the fault came from user mode.  It will handle gs_change faults by
+ * pretending that the fault happened with kernel GSBASE.  Since this handling
+ * is omitted for @paranoid != 0, the #GP, #SS, and #NP stubs must have
+ * @paranoid == 0.  This special handling will do the wrong thing for
+ * espfix-induced #DF on IRET, so #DF must not use @paranoid == 0.
+ *
+ * @paranoid == 2 is special: the stub will never switch stacks.  This is for
+ * #DF: if the thread stack is somehow unusable, we'll still get a useful OOPS.
+ */
 .macro idtentry sym do_sym has_error_code:req paranoid=0 shift_ist=-1
 ENTRY(\sym)
 	UNWIND_HINT_IRET_REGS offset=\has_error_code*8
diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
index e6db475164ed..1a90821c0b74 100644
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -383,6 +383,10 @@ dotraplinkage void do_double_fault(struct pt_regs *regs, long error_code)
 		 * we won't enable interupts or schedule before we invoke
 		 * general_protection, so nothing will clobber the stack
 		 * frame we just set up.
+		 *
+		 * We will enter general_protection with kernel GSBASE,
+		 * which is what the stub expects, given that the faulting
+		 * RIP will be the IRET instruction.
 		 */
 		regs->ip = (unsigned long)general_protection;
 		regs->sp = (unsigned long)&gpregs->orig_ax;
-- 
2.17.1



* [PATCH 2/3] x86/entry/64: Use the TSS sp2 slot for rsp_scratch
  2018-08-31 22:21 [PATCH 0/3] x86/pti: Get rid of entry trampolines and add some docs Andy Lutomirski
  2018-08-31 22:21 ` [PATCH 1/3] x86/entry/64: Document idtentry Andy Lutomirski
@ 2018-08-31 22:21 ` Andy Lutomirski
  2018-09-01 16:33   ` Linus Torvalds
  2018-08-31 22:21 ` [PATCH 3/3] x86/pti/64: Remove the SYSCALL64 entry trampoline Andy Lutomirski
  2018-09-01 16:34 ` [PATCH 0/3] x86/pti: Get rid of entry trampolines and add some docs Linus Torvalds
  3 siblings, 1 reply; 7+ messages in thread
From: Andy Lutomirski @ 2018-08-31 22:21 UTC (permalink / raw)
  To: x86
  Cc: Borislav Petkov, LKML, Dave Hansen, Adrian Hunter,
	Alexander Shishkin, Arnaldo Carvalho de Melo, Linus Torvalds,
	Josh Poimboeuf, Joerg Roedel, Jiri Olsa, Andi Kleen,
	Peter Zijlstra, Andy Lutomirski

In the non-trampoline SYSCALL64 path, we use a percpu variable to
temporarily store the user RSP value.  Instead of a separate
variable, use the otherwise unused sp2 slot in the TSS.  This will
improve cache locality, as the sp1 slot is already used in the same
code to find the kernel stack.  It will also simplify a future
change to make the non-trampoline path work in PTI mode.
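
Concretely, the SYSCALL64 prologue then stashes the user stack pointer
in the TSS rather than in a standalone percpu symbol.  A minimal
sketch, using the TSS_sp2 asm-offset and the defines added below:

	swapgs
	/* user RSP goes into the otherwise-unused ring 2 stack slot */
	movq	%rsp, PER_CPU_VAR(rsp_scratch)	/* == cpu_tss_rw + TSS_sp2 */
	/* kernel stack top comes from the adjacent sp1 slot */
	movq	PER_CPU_VAR(cpu_current_top_of_stack), %rsp	/* == cpu_tss_rw + TSS_sp1 */

Both accesses now land in neighboring fields of cpu_tss_rw, which is
where the cache-locality win comes from.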

Signed-off-by: Andy Lutomirski <luto@kernel.org>
---
 arch/x86/include/asm/processor.h   | 6 ++++++
 arch/x86/include/asm/thread_info.h | 1 +
 arch/x86/kernel/asm-offsets.c      | 3 ++-
 arch/x86/kernel/process_64.c       | 2 --
 4 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index c24297268ebc..8433d76bc37b 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -313,7 +313,13 @@ struct x86_hw_tss {
 	 */
 	u64			sp1;
 
+	/*
+	 * Since Linux does not use ring 2, the 'sp2' slot is unused by
+	 * hardware.  entry_SYSCALL_64 uses it as scratch space to stash
+	 * the user RSP value.
+	 */
 	u64			sp2;
+
 	u64			reserved2;
 	u64			ist[7];
 	u32			reserved3;
diff --git a/arch/x86/include/asm/thread_info.h b/arch/x86/include/asm/thread_info.h
index 2ff2a30a264f..9a2f84233e39 100644
--- a/arch/x86/include/asm/thread_info.h
+++ b/arch/x86/include/asm/thread_info.h
@@ -209,6 +209,7 @@ static inline int arch_within_stack_frames(const void * const stack,
 
 #ifdef CONFIG_X86_64
 # define cpu_current_top_of_stack (cpu_tss_rw + TSS_sp1)
+# define rsp_scratch (cpu_tss_rw + TSS_sp2)
 #endif
 
 #endif
diff --git a/arch/x86/kernel/asm-offsets.c b/arch/x86/kernel/asm-offsets.c
index 01de31db300d..fc2e90d3429a 100644
--- a/arch/x86/kernel/asm-offsets.c
+++ b/arch/x86/kernel/asm-offsets.c
@@ -105,7 +105,8 @@ void common(void) {
 	DEFINE(SIZEOF_entry_stack, sizeof(struct entry_stack));
 	DEFINE(MASK_entry_stack, (~(sizeof(struct entry_stack) - 1)));
 
-	/* Offset for sp0 and sp1 into the tss_struct */
+	/* Offset for fields in tss_struct */
 	OFFSET(TSS_sp0, tss_struct, x86_tss.sp0);
 	OFFSET(TSS_sp1, tss_struct, x86_tss.sp1);
+	OFFSET(TSS_sp2, tss_struct, x86_tss.sp2);
 }
diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
index a451bc374b9b..0fa7aa19f09e 100644
--- a/arch/x86/kernel/process_64.c
+++ b/arch/x86/kernel/process_64.c
@@ -59,8 +59,6 @@
 #include <asm/unistd_32_ia32.h>
 #endif
 
-__visible DEFINE_PER_CPU(unsigned long, rsp_scratch);
-
 /* Prints also some state that isn't saved in the pt_regs */
 void __show_regs(struct pt_regs *regs, int all)
 {
-- 
2.17.1



* [PATCH 3/3] x86/pti/64: Remove the SYSCALL64 entry trampoline
  2018-08-31 22:21 [PATCH 0/3] x86/pti: Get rid of entry trampolines and add some docs Andy Lutomirski
  2018-08-31 22:21 ` [PATCH 1/3] x86/entry/64: Document idtentry Andy Lutomirski
  2018-08-31 22:21 ` [PATCH 2/3] x86/entry/64: Use the TSS sp2 slot for rsp_scratch Andy Lutomirski
@ 2018-08-31 22:21 ` Andy Lutomirski
  2018-09-01 16:34 ` [PATCH 0/3] x86/pti: Get rid of entry trampolines and add some docs Linus Torvalds
  3 siblings, 0 replies; 7+ messages in thread
From: Andy Lutomirski @ 2018-08-31 22:21 UTC (permalink / raw)
  To: x86
  Cc: Borislav Petkov, LKML, Dave Hansen, Adrian Hunter,
	Alexander Shishkin, Arnaldo Carvalho de Melo, Linus Torvalds,
	Josh Poimboeuf, Joerg Roedel, Jiri Olsa, Andi Kleen,
	Peter Zijlstra, Andy Lutomirski

The SYSCALL64 trampoline has a couple of nice properties:

 - The usual sequence of SWAPGS followed by two GS-relative accesses to
   set up RSP is somewhat slow because the GS-relative accesses need
   to wait for SWAPGS to finish.  The trampoline approach allows
   RIP-relative accesses to set up RSP, which avoids the stall.

 - The trampoline avoids any percpu access before CR3 is set up,
   which means that no percpu memory needs to be mapped in the user
   page tables.  This prevents using Meltdown to read any percpu memory
   outside the cpu_entry_area and prevents using timing leaks
   to directly locate the percpu areas.

The downsides of using a trampoline may outweigh the upsides, however.
It adds an extra non-contiguous I$ cache line to system calls, and it
forces an indirect jump to transfer control back to the normal kernel
text after CR3 is set up.  The latter is because x86 lacks a 64-bit
direct jump instruction that could jump from the trampoline to the entry
text.  With retpolines enabled, the indirect jump is extremely slow.
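
Comparing the two prologues makes the trade-off concrete; a rough
sketch, condensed from the code removed and kept below:

	/* Trampoline path: RIP-relative setup, but an indirect jump back. */
	swapgs
	movq	%rsp, RSP_SCRATCH		/* RIP-relative scratch slot */
	SWITCH_TO_KERNEL_CR3 scratch_reg=%rsp
	...
	JMP_NOSPEC %rdi				/* painful with retpolines */

	/* Direct path: GS-relative accesses that wait on SWAPGS, no jump. */
	swapgs
	movq	%rsp, PER_CPU_VAR(rsp_scratch)
	SWITCH_TO_KERNEL_CR3 scratch_reg=%rsp
	movq	PER_CPU_VAR(cpu_current_top_of_stack), %rsp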

This patch changes the code to map the percpu TSS into the user page
tables to allow the non-trampoline SYSCALL64 path to work under PTI.
This does not add a new direct information leak, since the TSS is
readable by Meltdown from the cpu_entry_area alias regardless.  It
does allow a timing attack to locate the percpu area, but KASLR is
more or less a lost cause against local attack on CPUs vulnerable to
Meltdown regardless.  As far as I'm concerned, on current hardware,
KASLR is only useful to mitigate remote attacks that try to attack
the kernel without first gaining RCE against a vulnerable user
process.

On Skylake, with CONFIG_RETPOLINE=y and KPTI on, this reduces
syscall overhead from ~237ns to ~228ns.

There is a possible alternative approach: we could instead move the
trampoline within 2G of the entry text and make a separate copy for
each CPU.  Then we could use a direct jump to rejoin the normal
entry path.
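
A hypothetical sketch of that alternative: with a per-CPU copy of the
stub mapped within 2G of the entry text, the two-stage indirect tail
of the current trampoline could collapse into a single direct jump
(names as in the code removed below):

	pushq	%rcx				/* pt_regs->ip */
	jmp	entry_SYSCALL_64_after_hwframe	/* direct jump, no retpoline */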

Signed-off-by: Andy Lutomirski <luto@kernel.org>
---
 arch/x86/entry/entry_64.S             | 66 +--------------------------
 arch/x86/include/asm/cpu_entry_area.h |  2 -
 arch/x86/include/asm/sections.h       |  1 -
 arch/x86/kernel/asm-offsets.c         |  2 -
 arch/x86/kernel/cpu/common.c          | 11 +----
 arch/x86/kernel/kprobes/core.c        | 10 +---
 arch/x86/kernel/vmlinux.lds.S         | 10 ----
 arch/x86/mm/cpu_entry_area.c          | 36 ---------------
 arch/x86/mm/pti.c                     | 33 +++++++++++++-
 9 files changed, 35 insertions(+), 136 deletions(-)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 1796c42e08af..19927211b175 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -142,67 +142,6 @@ END(native_usergs_sysret64)
  * with them due to bugs in both AMD and Intel CPUs.
  */
 
-	.pushsection .entry_trampoline, "ax"
-
-/*
- * The code in here gets remapped into cpu_entry_area's trampoline.  This means
- * that the assembler and linker have the wrong idea as to where this code
- * lives (and, in fact, it's mapped more than once, so it's not even at a
- * fixed address).  So we can't reference any symbols outside the entry
- * trampoline and expect it to work.
- *
- * Instead, we carefully abuse %rip-relative addressing.
- * _entry_trampoline(%rip) refers to the start of the remapped) entry
- * trampoline.  We can thus find cpu_entry_area with this macro:
- */
-
-#define CPU_ENTRY_AREA \
-	_entry_trampoline - CPU_ENTRY_AREA_entry_trampoline(%rip)
-
-/* The top word of the SYSENTER stack is hot and is usable as scratch space. */
-#define RSP_SCRATCH	CPU_ENTRY_AREA_entry_stack + \
-			SIZEOF_entry_stack - 8 + CPU_ENTRY_AREA
-
-ENTRY(entry_SYSCALL_64_trampoline)
-	UNWIND_HINT_EMPTY
-	swapgs
-
-	/* Stash the user RSP. */
-	movq	%rsp, RSP_SCRATCH
-
-	/* Note: using %rsp as a scratch reg. */
-	SWITCH_TO_KERNEL_CR3 scratch_reg=%rsp
-
-	/* Load the top of the task stack into RSP */
-	movq	CPU_ENTRY_AREA_tss + TSS_sp1 + CPU_ENTRY_AREA, %rsp
-
-	/* Start building the simulated IRET frame. */
-	pushq	$__USER_DS			/* pt_regs->ss */
-	pushq	RSP_SCRATCH			/* pt_regs->sp */
-	pushq	%r11				/* pt_regs->flags */
-	pushq	$__USER_CS			/* pt_regs->cs */
-	pushq	%rcx				/* pt_regs->ip */
-
-	/*
-	 * x86 lacks a near absolute jump, and we can't jump to the real
-	 * entry text with a relative jump.  We could push the target
-	 * address and then use retq, but this destroys the pipeline on
-	 * many CPUs (wasting over 20 cycles on Sandy Bridge).  Instead,
-	 * spill RDI and restore it in a second-stage trampoline.
-	 */
-	pushq	%rdi
-	movq	$entry_SYSCALL_64_stage2, %rdi
-	JMP_NOSPEC %rdi
-END(entry_SYSCALL_64_trampoline)
-
-	.popsection
-
-ENTRY(entry_SYSCALL_64_stage2)
-	UNWIND_HINT_EMPTY
-	popq	%rdi
-	jmp	entry_SYSCALL_64_after_hwframe
-END(entry_SYSCALL_64_stage2)
-
 ENTRY(entry_SYSCALL_64)
 	UNWIND_HINT_EMPTY
 	/*
@@ -212,11 +151,8 @@ ENTRY(entry_SYSCALL_64)
 	 */
 
 	swapgs
-	/*
-	 * This path is only taken when PAGE_TABLE_ISOLATION is disabled so it
-	 * is not required to switch CR3.
-	 */
 	movq	%rsp, PER_CPU_VAR(rsp_scratch)
+	SWITCH_TO_KERNEL_CR3 scratch_reg=%rsp
 	movq	PER_CPU_VAR(cpu_current_top_of_stack), %rsp
 
 	/* Construct struct pt_regs on stack */
diff --git a/arch/x86/include/asm/cpu_entry_area.h b/arch/x86/include/asm/cpu_entry_area.h
index 4a7884b8dca5..29c706415443 100644
--- a/arch/x86/include/asm/cpu_entry_area.h
+++ b/arch/x86/include/asm/cpu_entry_area.h
@@ -30,8 +30,6 @@ struct cpu_entry_area {
 	 */
 	struct tss_struct tss;
 
-	char entry_trampoline[PAGE_SIZE];
-
 #ifdef CONFIG_X86_64
 	/*
 	 * Exception stacks used for IST entries.
diff --git a/arch/x86/include/asm/sections.h b/arch/x86/include/asm/sections.h
index 4a911a382ade..8ea1cfdbeabc 100644
--- a/arch/x86/include/asm/sections.h
+++ b/arch/x86/include/asm/sections.h
@@ -11,7 +11,6 @@ extern char __end_rodata_aligned[];
 
 #if defined(CONFIG_X86_64)
 extern char __end_rodata_hpage_align[];
-extern char __entry_trampoline_start[], __entry_trampoline_end[];
 #endif
 
 #endif	/* _ASM_X86_SECTIONS_H */
diff --git a/arch/x86/kernel/asm-offsets.c b/arch/x86/kernel/asm-offsets.c
index fc2e90d3429a..083c01309027 100644
--- a/arch/x86/kernel/asm-offsets.c
+++ b/arch/x86/kernel/asm-offsets.c
@@ -99,8 +99,6 @@ void common(void) {
 	OFFSET(TLB_STATE_user_pcid_flush_mask, tlb_state, user_pcid_flush_mask);
 
 	/* Layout info for cpu_entry_area */
-	OFFSET(CPU_ENTRY_AREA_tss, cpu_entry_area, tss);
-	OFFSET(CPU_ENTRY_AREA_entry_trampoline, cpu_entry_area, entry_trampoline);
 	OFFSET(CPU_ENTRY_AREA_entry_stack, cpu_entry_area, entry_stack_page);
 	DEFINE(SIZEOF_entry_stack, sizeof(struct entry_stack));
 	DEFINE(MASK_entry_stack, (~(sizeof(struct entry_stack) - 1)));
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 84dee5ab745a..83068258c856 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1530,19 +1530,10 @@ EXPORT_PER_CPU_SYMBOL(__preempt_count);
 /* May not be marked __init: used by software suspend */
 void syscall_init(void)
 {
-	extern char _entry_trampoline[];
-	extern char entry_SYSCALL_64_trampoline[];
-
 	int cpu = smp_processor_id();
-	unsigned long SYSCALL64_entry_trampoline =
-		(unsigned long)get_cpu_entry_area(cpu)->entry_trampoline +
-		(entry_SYSCALL_64_trampoline - _entry_trampoline);
 
 	wrmsr(MSR_STAR, 0, (__USER32_CS << 16) | __KERNEL_CS);
-	if (static_cpu_has(X86_FEATURE_PTI))
-		wrmsrl(MSR_LSTAR, SYSCALL64_entry_trampoline);
-	else
-		wrmsrl(MSR_LSTAR, (unsigned long)entry_SYSCALL_64);
+	wrmsrl(MSR_LSTAR, (unsigned long)entry_SYSCALL_64);
 
 #ifdef CONFIG_IA32_EMULATION
 	wrmsrl(MSR_CSTAR, (unsigned long)entry_SYSCALL_compat);
diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index b0d1e81c96bb..f802cf5b4478 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -1066,18 +1066,10 @@ NOKPROBE_SYMBOL(kprobe_exceptions_notify);
 
 bool arch_within_kprobe_blacklist(unsigned long addr)
 {
-	bool is_in_entry_trampoline_section = false;
-
-#ifdef CONFIG_X86_64
-	is_in_entry_trampoline_section =
-		(addr >= (unsigned long)__entry_trampoline_start &&
-		 addr < (unsigned long)__entry_trampoline_end);
-#endif
 	return  (addr >= (unsigned long)__kprobes_text_start &&
 		 addr < (unsigned long)__kprobes_text_end) ||
 		(addr >= (unsigned long)__entry_text_start &&
-		 addr < (unsigned long)__entry_text_end) ||
-		is_in_entry_trampoline_section;
+		 addr < (unsigned long)__entry_text_end);
 }
 
 int __init arch_init_kprobes(void)
diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index 8bde0a419f86..9c77d2df9c27 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -118,16 +118,6 @@ SECTIONS
 		*(.fixup)
 		*(.gnu.warning)
 
-#ifdef CONFIG_X86_64
-		. = ALIGN(PAGE_SIZE);
-		__entry_trampoline_start = .;
-		_entry_trampoline = .;
-		*(.entry_trampoline)
-		. = ALIGN(PAGE_SIZE);
-		__entry_trampoline_end = .;
-		ASSERT(. - _entry_trampoline == PAGE_SIZE, "entry trampoline is too big");
-#endif
-
 #ifdef CONFIG_RETPOLINE
 		__indirect_thunk_start = .;
 		*(.text.__x86.indirect_thunk)
diff --git a/arch/x86/mm/cpu_entry_area.c b/arch/x86/mm/cpu_entry_area.c
index 076ebdce9bd4..12d7e7fb4efd 100644
--- a/arch/x86/mm/cpu_entry_area.c
+++ b/arch/x86/mm/cpu_entry_area.c
@@ -15,7 +15,6 @@ static DEFINE_PER_CPU_PAGE_ALIGNED(struct entry_stack_page, entry_stack_storage)
 #ifdef CONFIG_X86_64
 static DEFINE_PER_CPU_PAGE_ALIGNED(char, exception_stacks
 	[(N_EXCEPTION_STACKS - 1) * EXCEPTION_STKSZ + DEBUG_STKSZ]);
-static DEFINE_PER_CPU(struct kcore_list, kcore_entry_trampoline);
 #endif
 
 struct cpu_entry_area *get_cpu_entry_area(int cpu)
@@ -83,8 +82,6 @@ static void percpu_setup_debug_store(int cpu)
 static void __init setup_cpu_entry_area(int cpu)
 {
 #ifdef CONFIG_X86_64
-	extern char _entry_trampoline[];
-
 	/* On 64-bit systems, we use a read-only fixmap GDT and TSS. */
 	pgprot_t gdt_prot = PAGE_KERNEL_RO;
 	pgprot_t tss_prot = PAGE_KERNEL_RO;
@@ -146,43 +143,10 @@ static void __init setup_cpu_entry_area(int cpu)
 	cea_map_percpu_pages(&get_cpu_entry_area(cpu)->exception_stacks,
 			     &per_cpu(exception_stacks, cpu),
 			     sizeof(exception_stacks) / PAGE_SIZE, PAGE_KERNEL);
-
-	cea_set_pte(&get_cpu_entry_area(cpu)->entry_trampoline,
-		     __pa_symbol(_entry_trampoline), PAGE_KERNEL_RX);
-	/*
-	 * The cpu_entry_area alias addresses are not in the kernel binary
-	 * so they do not show up in /proc/kcore normally.  This adds entries
-	 * for them manually.
-	 */
-	kclist_add_remap(&per_cpu(kcore_entry_trampoline, cpu),
-			 _entry_trampoline,
-			 &get_cpu_entry_area(cpu)->entry_trampoline, PAGE_SIZE);
 #endif
 	percpu_setup_debug_store(cpu);
 }
 
-#ifdef CONFIG_X86_64
-int arch_get_kallsym(unsigned int symnum, unsigned long *value, char *type,
-		     char *name)
-{
-	unsigned int cpu, ncpu = 0;
-
-	if (symnum >= num_possible_cpus())
-		return -EINVAL;
-
-	for_each_possible_cpu(cpu) {
-		if (ncpu++ >= symnum)
-			break;
-	}
-
-	*value = (unsigned long)&get_cpu_entry_area(cpu)->entry_trampoline;
-	*type = 't';
-	strlcpy(name, "__entry_SYSCALL_64_trampoline", KSYM_NAME_LEN);
-
-	return 0;
-}
-#endif
-
 static __init void setup_cpu_entry_area_ptes(void)
 {
 #ifdef CONFIG_X86_32
diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c
index 31341ae7309f..7e79154846c8 100644
--- a/arch/x86/mm/pti.c
+++ b/arch/x86/mm/pti.c
@@ -434,11 +434,42 @@ static void __init pti_clone_p4d(unsigned long addr)
 }
 
 /*
- * Clone the CPU_ENTRY_AREA into the user space visible page table.
+ * Clone the CPU_ENTRY_AREA and associated data into the user space visible
+ * page table.
  */
 static void __init pti_clone_user_shared(void)
 {
+	unsigned cpu;
+
 	pti_clone_p4d(CPU_ENTRY_AREA_BASE);
+
+	for_each_possible_cpu(cpu) {
+		/*
+		 * The SYSCALL64 entry code needs to be able to find the
+		 * thread stack and needs one word of scratch space in which
+		 * to spill a register.  All of this lives in the TSS, in
+		 * the sp1 and sp2 slots.
+		 *
+		 * This is done for all possible CPUs during boot to ensure
+		 * that it's propagated to all mms.  If we were to add one of
+		 * these mappings during CPU hotplug, we would need to take
+		 * some measure to make sure that every mm that subsequently
+		 * ran on that CPU would have the relevant PGD entry in its
+		 * pagetables.  The usual vmalloc_fault() mechanism would not
+		 * work for page faults taken in entry_SYSCALL_64 before RSP
+		 * is set up.
+		 */
+
+		unsigned long va = (unsigned long)&per_cpu(cpu_tss_rw, cpu);
+		phys_addr_t pa = per_cpu_ptr_to_phys((void *)va);
+		pte_t *target_pte;
+
+		target_pte = pti_user_pagetable_walk_pte(va);
+		if (WARN_ON(!target_pte))
+			return;
+
+		*target_pte = pfn_pte(pa >> PAGE_SHIFT, PAGE_KERNEL);
+	}
 }
 
 #else /* CONFIG_X86_64 */
-- 
2.17.1



* Re: [PATCH 2/3] x86/entry/64: Use the TSS sp2 slot for rsp_scratch
  2018-08-31 22:21 ` [PATCH 2/3] x86/entry/64: Use the TSS sp2 slot for rsp_scratch Andy Lutomirski
@ 2018-09-01 16:33   ` Linus Torvalds
  2018-09-01 17:29     ` Andy Lutomirski
  0 siblings, 1 reply; 7+ messages in thread
From: Linus Torvalds @ 2018-09-01 16:33 UTC (permalink / raw)
  To: Andrew Lutomirski
  Cc: the arch/x86 maintainers, Borislav Petkov,
	Linux Kernel Mailing List, Dave Hansen, Adrian Hunter,
	Alexander Shishkin, Arnaldo Carvalho de Melo, Josh Poimboeuf,
	Joerg Roedel, Jiri Olsa, Andi Kleen, Peter Zijlstra

On Fri, Aug 31, 2018 at 3:21 PM Andy Lutomirski <luto@kernel.org> wrote:
>
>  #ifdef CONFIG_X86_64
>  # define cpu_current_top_of_stack (cpu_tss_rw + TSS_sp1)
> +# define rsp_scratch (cpu_tss_rw + TSS_sp2)
>  #endif

Ugh. The above gets used by *assembler* code. I was really confused by how this:


> --- a/arch/x86/kernel/process_64.c
> +++ b/arch/x86/kernel/process_64.c
> @@ -59,8 +59,6 @@
>  #include <asm/unistd_32_ia32.h>
>  #endif
>
> -__visible DEFINE_PER_CPU(unsigned long, rsp_scratch);
> -

could continue to work despite the accesses to "rsp_scratch" still
remaining in the asm files.

Can you humor me, and just not do something quite that subtle? I must
have missed this the first time around.

Please get rid of the define, and just make the asm code spell out
what it actually does.

We already do that for TSS_sp0 for the normal case:

      movq    PER_CPU_VAR(cpu_tss_rw + TSS_sp0), %rsp

so I think this should just change

-     movq    %rsp, PER_CPU_VAR(rsp_scratch)
+     movq    %rsp, PER_CPU_VAR(cpu_tss_rw + TSS_sp2)

instead of having that subtle rsp_scratch thing.

And honestly, I think we should strive to do the same thing with
cpu_current_top_of_stack. There at least the #define currently makes
sense (because on 32-bit, it's actually a percpu variable, on 64-bit
it's that sp1 field).
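
(For 64-bit, spelling that one out would just mean writing the sp1
slot explicitly, e.g. a sketch based on what the define expands to
today:

	movq	PER_CPU_VAR(cpu_tss_rw + TSS_sp1), %rsp

whereas 32-bit currently has a genuine percpu variable behind the
same name.)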

But wouldn't it be nice to just unify 32-bit and 64-bit in this
respect, and get rid of that subtle difference?

But regardless of whether we eventually do that kind of unification
change, the cpu_current_top_of_stack #define has a _reason_ for it as
things stand now.

The new rsp_scratch thing does not. Just spell out what you're doing.

                  Linus


* Re: [PATCH 0/3] x86/pti: Get rid of entry trampolines and add some docs
  2018-08-31 22:21 [PATCH 0/3] x86/pti: Get rid of entry trampolines and add some docs Andy Lutomirski
                   ` (2 preceding siblings ...)
  2018-08-31 22:21 ` [PATCH 3/3] x86/pti/64: Remove the SYSCALL64 entry trampoline Andy Lutomirski
@ 2018-09-01 16:34 ` Linus Torvalds
  3 siblings, 0 replies; 7+ messages in thread
From: Linus Torvalds @ 2018-09-01 16:34 UTC (permalink / raw)
  To: Andrew Lutomirski
  Cc: the arch/x86 maintainers, Borislav Petkov,
	Linux Kernel Mailing List, Dave Hansen, Adrian Hunter,
	Alexander Shishkin, Arnaldo Carvalho de Melo, Josh Poimboeuf,
	Joerg Roedel, Jiri Olsa, Andi Kleen, Peter Zijlstra

On Fri, Aug 31, 2018 at 3:21 PM Andy Lutomirski <luto@kernel.org> wrote:
>
> This gets rid of entry trampolines.

Despite my syntactic comment on 2/3, I'd love for this to go in. The
extra indirection through the trampoline is confusing, I think, in
addition to being a performance issue.

              Linus


* Re: [PATCH 2/3] x86/entry/64: Use the TSS sp2 slot for rsp_scratch
  2018-09-01 16:33   ` Linus Torvalds
@ 2018-09-01 17:29     ` Andy Lutomirski
  0 siblings, 0 replies; 7+ messages in thread
From: Andy Lutomirski @ 2018-09-01 17:29 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Andrew Lutomirski, the arch/x86 maintainers, Borislav Petkov,
	Linux Kernel Mailing List, Dave Hansen, Adrian Hunter,
	Alexander Shishkin, Arnaldo Carvalho de Melo, Josh Poimboeuf,
	Joerg Roedel, Jiri Olsa, Andi Kleen, Peter Zijlstra

On Sat, Sep 1, 2018 at 9:33 AM, Linus Torvalds
<torvalds@linux-foundation.org> wrote:
> On Fri, Aug 31, 2018 at 3:21 PM Andy Lutomirski <luto@kernel.org> wrote:
>>
>>  #ifdef CONFIG_X86_64
>>  # define cpu_current_top_of_stack (cpu_tss_rw + TSS_sp1)
>> +# define rsp_scratch (cpu_tss_rw + TSS_sp2)
>>  #endif
>
> Ugh. The above gets used by *assembler* code. I was really confused by how this:
>
>
>> --- a/arch/x86/kernel/process_64.c
>> +++ b/arch/x86/kernel/process_64.c
>> @@ -59,8 +59,6 @@
>>  #include <asm/unistd_32_ia32.h>
>>  #endif
>>
>> -__visible DEFINE_PER_CPU(unsigned long, rsp_scratch);
>> -
>
> could continue to work despite the accesses to "rsp_scratch" still
> remaining in the asm files.
>
> Can you humor me, and just not do something quite that subtle? I must
> have missed this the first time around.
>
> Please get rid of the define, and just make the asm code spell out
> what it actually does.

Done for v2.

>
> We already do that for TSS_sp0 for the normal case:
>
>       movq    PER_CPU_VAR(cpu_tss_rw + TSS_sp0), %rsp
>
> so I think this should just change
>
> -     movq    %rsp, PER_CPU_VAR(rsp_scratch)
> +     movq    %rsp, PER_CPU_VAR(cpu_tss_rw + TSS_sp2)
>
> instead of having that subtle rsp_scratch thing.
>
> And honestly, I think we should strive to do the same thing with
> cpu_current_top_of_stack. There at least the #define currently makes
> sense (because on 32-bit, it's actually a percpu variable, on 64-bit
> it's that sp1 field).
>
> But wouldn't it be nice to just unify 32-bit and 64-bit in this
> respect, and get rid of that subtle difference?
>

Yes.  But ugh, the way that thing has worked has changed so many times
on 32-bit and 64-bit that I've lost track a little bit.  I'll put it
on my long list of things to clean up.


end of thread, other threads:[~2018-09-01 17:37 UTC | newest]

Thread overview: 7+ messages
2018-08-31 22:21 [PATCH 0/3] x86/pti: Get rid of entry trampolines and add some docs Andy Lutomirski
2018-08-31 22:21 ` [PATCH 1/3] x86/entry/64: Document idtentry Andy Lutomirski
2018-08-31 22:21 ` [PATCH 2/3] x86/entry/64: Use the TSS sp2 slot for rsp_scratch Andy Lutomirski
2018-09-01 16:33   ` Linus Torvalds
2018-09-01 17:29     ` Andy Lutomirski
2018-08-31 22:21 ` [PATCH 3/3] x86/pti/64: Remove the SYSCALL64 entry trampoline Andy Lutomirski
2018-09-01 16:34 ` [PATCH 0/3] x86/pti: Get rid of entry trampolines and add some docs Linus Torvalds
