LKML Archive on lore.kernel.org
* [PATCH 0/8] x86: A round of x86 segmentation improvements
@ 2016-04-26 19:23 Andy Lutomirski
  2016-04-26 19:23 ` [PATCH 1/8] x86/asm: Stop depending on ptrace.h in alternative.h Andy Lutomirski
                   ` (7 more replies)
  0 siblings, 8 replies; 19+ messages in thread
From: Andy Lutomirski @ 2016-04-26 19:23 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Brian Gerst, Andi Kleen, Andy Lutomirski

Hi all-

This is preparation for enabling FSGSBASE and for fixing FS/GS
switching more robustly than my last attempt did.

It cleans up the code, fixes two more holes in X86_BUG_NULL_SEG
handling (loadsegment(fs, val) and load_gs_index(val) failure
handling), and makes set_thread_area behave in a well-defined
manner.

With these patches applied, modify_ldt should be the last remaining
way for a user thread to get its cached descriptors out of sync with
memory, except in the 64-bit FS/GS < 3 case.  I'll fix modify_ldt as
well, but those patches will come later.

I'm sending these now because they seem to be good improvements on
their own and it'll help avoid a monster patch series down the road.

Tested on x86_32 with and without lazy GS and on x86_64.

Andy Lutomirski (8):
  x86/asm: Stop depending on ptrace.h in alternative.h
  x86/asm: Make asm/alternative.h safe from assembly
  x86/segments/64: When loadsegment(fs, ...) fails, clear the base
  x86/segments/64: When load_gs_index fails, clear the base
  x86/arch_prctl/64: Remove FSBASE/GSBASE < 4G optimization
  x86/asm/64: Rename thread_struct's fs and gs to fsbase and gsbase
  x86/tls: Synchronize segment registers in set_thread_area
  selftests/x86/ldt_gdt: Test set_thread_area deletion of an active
    segment

 arch/x86/entry/entry_64.S             |   6 +
 arch/x86/include/asm/alternative.h    |  35 +----
 arch/x86/include/asm/elf.h            |   6 +-
 arch/x86/include/asm/kgdb.h           |   2 +
 arch/x86/include/asm/processor.h      |  11 +-
 arch/x86/include/asm/segment.h        |  49 +++++--
 arch/x86/include/asm/setup.h          |   1 +
 arch/x86/include/asm/text-patching.h  |  40 ++++++
 arch/x86/kernel/alternative.c         |   1 +
 arch/x86/kernel/cpu/common.c          |   2 +-
 arch/x86/kernel/jump_label.c          |   1 +
 arch/x86/kernel/kgdb.c                |   1 +
 arch/x86/kernel/kprobes/core.c        |   1 +
 arch/x86/kernel/kprobes/opt.c         |   1 +
 arch/x86/kernel/module.c              |   1 +
 arch/x86/kernel/process_64.c          |  97 ++++---------
 arch/x86/kernel/ptrace.c              |  48 ++-----
 arch/x86/kernel/tls.c                 |  42 ++++++
 arch/x86/kernel/traps.c               |   1 +
 arch/x86/kvm/svm.c                    |   2 +-
 arch/x86/mm/extable.c                 |  10 ++
 tools/testing/selftests/x86/ldt_gdt.c | 250 ++++++++++++++++++++++++++++++++++
 22 files changed, 446 insertions(+), 162 deletions(-)
 create mode 100644 arch/x86/include/asm/text-patching.h

-- 
2.5.5


* [PATCH 1/8] x86/asm: Stop depending on ptrace.h in alternative.h
  2016-04-26 19:23 [PATCH 0/8] x86: A round of x86 segmentation improvements Andy Lutomirski
@ 2016-04-26 19:23 ` Andy Lutomirski
  2016-04-29 10:48   ` [tip:x86/asm] " tip-bot for Andy Lutomirski
  2016-04-26 19:23 ` [PATCH 2/8] x86/asm: Make asm/alternative.h safe from assembly Andy Lutomirski
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 19+ messages in thread
From: Andy Lutomirski @ 2016-04-26 19:23 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Brian Gerst, Andi Kleen, Andy Lutomirski

alternative.h pulls in ptrace.h, which means that alternatives can't
be used in anything referenced from ptrace.h, which is a mess.

Break the dependency by pulling text patching helpers into their own
header.
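
As a usage sketch, a hypothetical caller of the relocated API might look
like this (the function below is invented for illustration; only the
text_poke_bp() declaration comes from this header):

	#include <linux/string.h>
	#include <asm/text-patching.h>

	/*
	 * Hypothetical example: atomically replace a 5-byte NOP with a
	 * CALL rel32 via the int3-based helper.  A CPU that hits the
	 * transient int3 while the site is being patched resumes after
	 * the site, i.e. the old NOP behavior.
	 */
	static void example_patch_call(void *site, void *target)
	{
		unsigned char insn[5];
		s32 disp = (s32)((long)target - ((long)site + 5));

		insn[0] = 0xe8;			/* CALL rel32 */
		memcpy(&insn[1], &disp, sizeof(disp));
		text_poke_bp(site, insn, sizeof(insn),
			     (char *)site + sizeof(insn));
	}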

Signed-off-by: Andy Lutomirski <luto@kernel.org>
---
 arch/x86/include/asm/alternative.h   | 33 -----------------------------
 arch/x86/include/asm/kgdb.h          |  2 ++
 arch/x86/include/asm/setup.h         |  1 +
 arch/x86/include/asm/text-patching.h | 40 ++++++++++++++++++++++++++++++++++++
 arch/x86/kernel/alternative.c        |  1 +
 arch/x86/kernel/jump_label.c         |  1 +
 arch/x86/kernel/kgdb.c               |  1 +
 arch/x86/kernel/kprobes/core.c       |  1 +
 arch/x86/kernel/kprobes/opt.c        |  1 +
 arch/x86/kernel/module.c             |  1 +
 arch/x86/kernel/traps.c              |  1 +
 11 files changed, 50 insertions(+), 33 deletions(-)
 create mode 100644 arch/x86/include/asm/text-patching.h

diff --git a/arch/x86/include/asm/alternative.h b/arch/x86/include/asm/alternative.h
index 99afb665a004..be4496c961db 100644
--- a/arch/x86/include/asm/alternative.h
+++ b/arch/x86/include/asm/alternative.h
@@ -5,7 +5,6 @@
 #include <linux/stddef.h>
 #include <linux/stringify.h>
 #include <asm/asm.h>
-#include <asm/ptrace.h>
 
 /*
  * Alternative inline assembly for SMP.
@@ -233,36 +232,4 @@ static inline int alternatives_text_reserved(void *start, void *end)
  */
 #define ASM_NO_INPUT_CLOBBER(clbr...) "i" (0) : clbr
 
-struct paravirt_patch_site;
-#ifdef CONFIG_PARAVIRT
-void apply_paravirt(struct paravirt_patch_site *start,
-		    struct paravirt_patch_site *end);
-#else
-static inline void apply_paravirt(struct paravirt_patch_site *start,
-				  struct paravirt_patch_site *end)
-{}
-#define __parainstructions	NULL
-#define __parainstructions_end	NULL
-#endif
-
-extern void *text_poke_early(void *addr, const void *opcode, size_t len);
-
-/*
- * Clear and restore the kernel write-protection flag on the local CPU.
- * Allows the kernel to edit read-only pages.
- * Side-effect: any interrupt handler running between save and restore will have
- * the ability to write to read-only pages.
- *
- * Warning:
- * Code patching in the UP case is safe if NMIs and MCE handlers are stopped and
- * no thread can be preempted in the instructions being modified (no iret to an
- * invalid instruction possible) or if the instructions are changed from a
- * consistent state to another consistent state atomically.
- * On the local CPU you need to be protected again NMI or MCE handlers seeing an
- * inconsistent instruction while you patch.
- */
-extern void *text_poke(void *addr, const void *opcode, size_t len);
-extern int poke_int3_handler(struct pt_regs *regs);
-extern void *text_poke_bp(void *addr, const void *opcode, size_t len, void *handler);
-
 #endif /* _ASM_X86_ALTERNATIVE_H */
diff --git a/arch/x86/include/asm/kgdb.h b/arch/x86/include/asm/kgdb.h
index 332f98c9111f..22a8537eb780 100644
--- a/arch/x86/include/asm/kgdb.h
+++ b/arch/x86/include/asm/kgdb.h
@@ -6,6 +6,8 @@
  * Copyright (C) 2008 Wind River Systems, Inc.
  */
 
+#include <asm/ptrace.h>
+
 /*
  * BUFMAX defines the maximum number of characters in inbound/outbound
  * buffers at least NUMREGBYTES*2 are needed for register packets
diff --git a/arch/x86/include/asm/setup.h b/arch/x86/include/asm/setup.h
index 11af24e09c8a..ac1d5da14734 100644
--- a/arch/x86/include/asm/setup.h
+++ b/arch/x86/include/asm/setup.h
@@ -6,6 +6,7 @@
 #define COMMAND_LINE_SIZE 2048
 
 #include <linux/linkage.h>
+#include <asm/page_types.h>
 
 #ifdef __i386__
 
diff --git a/arch/x86/include/asm/text-patching.h b/arch/x86/include/asm/text-patching.h
new file mode 100644
index 000000000000..90395063383c
--- /dev/null
+++ b/arch/x86/include/asm/text-patching.h
@@ -0,0 +1,40 @@
+#ifndef _ASM_X86_TEXT_PATCHING_H
+#define _ASM_X86_TEXT_PATCHING_H
+
+#include <linux/types.h>
+#include <linux/stddef.h>
+#include <asm/ptrace.h>
+
+struct paravirt_patch_site;
+#ifdef CONFIG_PARAVIRT
+void apply_paravirt(struct paravirt_patch_site *start,
+		    struct paravirt_patch_site *end);
+#else
+static inline void apply_paravirt(struct paravirt_patch_site *start,
+				  struct paravirt_patch_site *end)
+{}
+#define __parainstructions	NULL
+#define __parainstructions_end	NULL
+#endif
+
+extern void *text_poke_early(void *addr, const void *opcode, size_t len);
+
+/*
+ * Clear and restore the kernel write-protection flag on the local CPU.
+ * Allows the kernel to edit read-only pages.
+ * Side-effect: any interrupt handler running between save and restore will have
+ * the ability to write to read-only pages.
+ *
+ * Warning:
+ * Code patching in the UP case is safe if NMIs and MCE handlers are stopped and
+ * no thread can be preempted in the instructions being modified (no iret to an
+ * invalid instruction possible) or if the instructions are changed from a
+ * consistent state to another consistent state atomically.
+ * On the local CPU you need to be protected against NMI or MCE handlers seeing an
+ * inconsistent instruction while you patch.
+ */
+extern void *text_poke(void *addr, const void *opcode, size_t len);
+extern int poke_int3_handler(struct pt_regs *regs);
+extern void *text_poke_bp(void *addr, const void *opcode, size_t len, void *handler);
+
+#endif /* _ASM_X86_TEXT_PATCHING_H */
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 25f909362b7a..5cb272a7a5a3 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -11,6 +11,7 @@
 #include <linux/stop_machine.h>
 #include <linux/slab.h>
 #include <linux/kdebug.h>
+#include <asm/text-patching.h>
 #include <asm/alternative.h>
 #include <asm/sections.h>
 #include <asm/pgtable.h>
diff --git a/arch/x86/kernel/jump_label.c b/arch/x86/kernel/jump_label.c
index e565e0e4d216..fc25f698d792 100644
--- a/arch/x86/kernel/jump_label.c
+++ b/arch/x86/kernel/jump_label.c
@@ -13,6 +13,7 @@
 #include <linux/cpu.h>
 #include <asm/kprobes.h>
 #include <asm/alternative.h>
+#include <asm/text-patching.h>
 
 #ifdef HAVE_JUMP_LABEL
 
diff --git a/arch/x86/kernel/kgdb.c b/arch/x86/kernel/kgdb.c
index 2da6ee9ae69b..04cde527d728 100644
--- a/arch/x86/kernel/kgdb.c
+++ b/arch/x86/kernel/kgdb.c
@@ -45,6 +45,7 @@
 #include <linux/uaccess.h>
 #include <linux/memory.h>
 
+#include <asm/text-patching.h>
 #include <asm/debugreg.h>
 #include <asm/apicdef.h>
 #include <asm/apic.h>
diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index ae703acb85c1..38cf7a741250 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -51,6 +51,7 @@
 #include <linux/ftrace.h>
 #include <linux/frame.h>
 
+#include <asm/text-patching.h>
 #include <asm/cacheflush.h>
 #include <asm/desc.h>
 #include <asm/pgtable.h>
diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
index 7b3b9d15c47a..4425f593f0ec 100644
--- a/arch/x86/kernel/kprobes/opt.c
+++ b/arch/x86/kernel/kprobes/opt.c
@@ -29,6 +29,7 @@
 #include <linux/kallsyms.h>
 #include <linux/ftrace.h>
 
+#include <asm/text-patching.h>
 #include <asm/cacheflush.h>
 #include <asm/desc.h>
 #include <asm/pgtable.h>
diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
index 005c03e93fc5..477ae806c2fa 100644
--- a/arch/x86/kernel/module.c
+++ b/arch/x86/kernel/module.c
@@ -31,6 +31,7 @@
 #include <linux/jump_label.h>
 #include <linux/random.h>
 
+#include <asm/text-patching.h>
 #include <asm/page.h>
 #include <asm/pgtable.h>
 #include <asm/setup.h>
diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
index 06cbe25861f1..d1590486204a 100644
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -51,6 +51,7 @@
 #include <asm/processor.h>
 #include <asm/debugreg.h>
 #include <linux/atomic.h>
+#include <asm/text-patching.h>
 #include <asm/ftrace.h>
 #include <asm/traps.h>
 #include <asm/desc.h>
-- 
2.5.5


* [PATCH 2/8] x86/asm: Make asm/alternative.h safe from assembly
  2016-04-26 19:23 [PATCH 0/8] x86: A round of x86 segmentation improvements Andy Lutomirski
  2016-04-26 19:23 ` [PATCH 1/8] x86/asm: Stop depending on ptrace.h in alternative.h Andy Lutomirski
@ 2016-04-26 19:23 ` Andy Lutomirski
  2016-04-29 10:49   ` [tip:x86/asm] " tip-bot for Andy Lutomirski
  2016-04-26 19:23 ` [PATCH 3/8] x86/segments/64: When loadsegment(fs, ...) fails, clear the base Andy Lutomirski
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 19+ messages in thread
From: Andy Lutomirski @ 2016-04-26 19:23 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Brian Gerst, Andi Kleen, Andy Lutomirski

asm/alternative.h isn't directly useful from assembly, but it
shouldn't break the build.
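
As a sketch, an assembly file can now do the following without pulling a
pile of C declarations into the assembler (hypothetical file name):

	/* some_file.S -- hypothetical */
	#include <asm/alternative.h>	/* expands to nothing under __ASSEMBLY__ */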

Signed-off-by: Andy Lutomirski <luto@kernel.org>
---
 arch/x86/include/asm/alternative.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/x86/include/asm/alternative.h b/arch/x86/include/asm/alternative.h
index be4496c961db..e77a6443104f 100644
--- a/arch/x86/include/asm/alternative.h
+++ b/arch/x86/include/asm/alternative.h
@@ -1,6 +1,8 @@
 #ifndef _ASM_X86_ALTERNATIVE_H
 #define _ASM_X86_ALTERNATIVE_H
 
+#ifndef __ASSEMBLY__
+
 #include <linux/types.h>
 #include <linux/stddef.h>
 #include <linux/stringify.h>
@@ -232,4 +234,6 @@ static inline int alternatives_text_reserved(void *start, void *end)
  */
 #define ASM_NO_INPUT_CLOBBER(clbr...) "i" (0) : clbr
 
+#endif /* __ASSEMBLY__ */
+
 #endif /* _ASM_X86_ALTERNATIVE_H */
-- 
2.5.5


* [PATCH 3/8] x86/segments/64: When loadsegment(fs, ...) fails, clear the base
  2016-04-26 19:23 [PATCH 0/8] x86: A round of x86 segmentation improvements Andy Lutomirski
  2016-04-26 19:23 ` [PATCH 1/8] x86/asm: Stop depending on ptrace.h in alternative.h Andy Lutomirski
  2016-04-26 19:23 ` [PATCH 2/8] x86/asm: Make asm/alternative.h safe from assembly Andy Lutomirski
@ 2016-04-26 19:23 ` Andy Lutomirski
  2016-04-29 10:49   ` [tip:x86/asm] " tip-bot for Andy Lutomirski
  2016-04-26 19:23 ` [PATCH 4/8] x86/segments/64: When load_gs_index " Andy Lutomirski
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 19+ messages in thread
From: Andy Lutomirski @ 2016-04-26 19:23 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Brian Gerst, Andi Kleen, Andy Lutomirski

On AMD CPUs, a failed loadsegment currently may not clear the FS
base.  Fix it.

While we're at it, prevent loadsegment(gs, xyz) from even compiling
on 64-bit kernels.  It shouldn't be used.
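
To make the resulting semantics concrete, here is an illustrative sketch
of what 64-bit callers see after this patch (the real code is in the diff
below):

	loadsegment(fs, sel);	/* dispatches to __loadsegment_fs(sel) */
	/*
	 * If 'sel' is invalid, the #GP is fixed up by ex_handler_clear_fs():
	 * on X86_BUG_NULL_SEG CPUs it loads __USER_DS first to force the
	 * cached base to zero, then loads the NULL selector, so both the
	 * selector and the base read back as zero afterwards.
	 */

	/* loadsegment(gs, sel) no longer compiles on 64-bit; use load_gs_index(). */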

Signed-off-by: Andy Lutomirski <luto@kernel.org>
---
 arch/x86/include/asm/segment.h | 42 +++++++++++++++++++++++++++++++++++++++---
 arch/x86/kernel/cpu/common.c   |  2 +-
 arch/x86/mm/extable.c          | 10 ++++++++++
 3 files changed, 50 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/segment.h b/arch/x86/include/asm/segment.h
index 7d5a1929d76b..e1a4afd20223 100644
--- a/arch/x86/include/asm/segment.h
+++ b/arch/x86/include/asm/segment.h
@@ -2,6 +2,7 @@
 #define _ASM_X86_SEGMENT_H
 
 #include <linux/const.h>
+#include <asm/alternative.h>
 
 /*
  * Constructor for a conventional segment GDT (or LDT) entry.
@@ -249,10 +250,13 @@ extern const char early_idt_handler_array[NUM_EXCEPTION_VECTORS][EARLY_IDT_HANDL
 #endif
 
 /*
- * Load a segment. Fall back on loading the zero
- * segment if something goes wrong..
+ * Load a segment. Fall back on loading the zero segment if something goes
+ * wrong.  This variant assumes that loading zero fully clears the segment.
+ * This is always the case on Intel CPUs and, even on 64-bit AMD CPUs, any
+ * failure to fully clear the cached descriptor is only observable for
+ * FS and GS.
  */
-#define loadsegment(seg, value)						\
+#define __loadsegment_simple(seg, value)				\
 do {									\
 	unsigned short __val = (value);					\
 									\
@@ -269,6 +273,38 @@ do {									\
 		     : "+r" (__val) : : "memory");			\
 } while (0)
 
+#define __loadsegment_ss(value) __loadsegment_simple(ss, (value))
+#define __loadsegment_ds(value) __loadsegment_simple(ds, (value))
+#define __loadsegment_es(value) __loadsegment_simple(es, (value))
+
+#ifdef CONFIG_X86_32
+
+/*
+ * On 32-bit systems, the hidden parts of FS and GS are unobservable if
+ * the selector is NULL, so there's no funny business here.
+ */
+#define __loadsegment_fs(value) __loadsegment_simple(fs, (value))
+#define __loadsegment_gs(value) __loadsegment_simple(gs, (value))
+
+#else
+
+static inline void __loadsegment_fs(unsigned short value)
+{
+	asm volatile("						\n"
+		     "1:	movw %0, %%fs			\n"
+		     "2:					\n"
+
+		     _ASM_EXTABLE_HANDLE(1b, 2b, ex_handler_clear_fs)
+
+		     : : "rm" (value) : "memory");
+}
+
+/* __loadsegment_gs is intentionally undefined.  Use load_gs_index instead. */
+
+#endif
+
+#define loadsegment(seg, value) __loadsegment_ ## seg (value)
+
 /*
  * Save a segment register away:
  */
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 6bfa36de6d9f..088106140c4b 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -430,7 +430,7 @@ void load_percpu_segment(int cpu)
 #ifdef CONFIG_X86_32
 	loadsegment(fs, __KERNEL_PERCPU);
 #else
-	loadsegment(gs, 0);
+	__loadsegment_simple(gs, 0);
 	wrmsrl(MSR_GS_BASE, (unsigned long)per_cpu(irq_stack_union.gs_base, cpu));
 #endif
 	load_stack_canary_segment();
diff --git a/arch/x86/mm/extable.c b/arch/x86/mm/extable.c
index aaeda3ffaafe..4bb53b89f3c5 100644
--- a/arch/x86/mm/extable.c
+++ b/arch/x86/mm/extable.c
@@ -70,6 +70,16 @@ bool ex_handler_wrmsr_unsafe(const struct exception_table_entry *fixup,
 }
 EXPORT_SYMBOL(ex_handler_wrmsr_unsafe);
 
+bool ex_handler_clear_fs(const struct exception_table_entry *fixup,
+			 struct pt_regs *regs, int trapnr)
+{
+	if (static_cpu_has(X86_BUG_NULL_SEG))
+		asm volatile ("mov %0, %%fs" : : "rm" (__USER_DS));
+	asm volatile ("mov %0, %%fs" : : "rm" (0));
+	return ex_handler_default(fixup, regs, trapnr);
+}
+EXPORT_SYMBOL(ex_handler_clear_fs);
+
 bool ex_has_fault_handler(unsigned long ip)
 {
 	const struct exception_table_entry *e;
-- 
2.5.5


* [PATCH 4/8] x86/segments/64: When load_gs_index fails, clear the base
  2016-04-26 19:23 [PATCH 0/8] x86: A round of x86 segmentation improvements Andy Lutomirski
                   ` (2 preceding siblings ...)
  2016-04-26 19:23 ` [PATCH 3/8] x86/segments/64: When loadsegment(fs, ...) fails, clear the base Andy Lutomirski
@ 2016-04-26 19:23 ` " Andy Lutomirski
  2016-04-29 10:49   ` [tip:x86/asm] " tip-bot for Andy Lutomirski
  2016-04-26 19:23 ` [PATCH 5/8] x86/arch_prctl/64: Remove FSBASE/GSBASE < 4G optimization Andy Lutomirski
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 19+ messages in thread
From: Andy Lutomirski @ 2016-04-26 19:23 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Brian Gerst, Andi Kleen, Andy Lutomirski

On AMD CPUs, a failed load_gs_index currently may not clear the GS
base.  Fix it.
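
In C terms, the fixup below does roughly the following (an illustrative
sketch only; the real change is the asm alternative in the diff, which
runs after SWAPGS has switched back to the user GS):

	if (static_cpu_has_bug(X86_BUG_NULL_SEG)) {
		/*
		 * Loading NULL does not clear the cached base on these
		 * CPUs, so load a valid flat selector first to force the
		 * base to zero...
		 */
		asm volatile("movl %0, %%gs" : : "r" (__USER_DS));
	}
	/* ...then load NULL, which is what userspace will observe. */
	asm volatile("movl %0, %%gs" : : "r" (0));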

Signed-off-by: Andy Lutomirski <luto@kernel.org>
---
 arch/x86/entry/entry_64.S | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 1693c17dbf81..6344629ae1ce 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -794,6 +794,12 @@ END(native_load_gs_index)
 	/* running with kernelgs */
 bad_gs:
 	SWAPGS					/* switch back to user gs */
+.macro ZAP_GS
+	/* This can't be a string because the preprocessor needs to see it. */
+	movl $__USER_DS, %eax
+	movl %eax, %gs
+.endm
+	ALTERNATIVE "", "ZAP_GS", X86_BUG_NULL_SEG
 	xorl	%eax, %eax
 	movl	%eax, %gs
 	jmp	2b
-- 
2.5.5


* [PATCH 5/8] x86/arch_prctl/64: Remove FSBASE/GSBASE < 4G optimization
  2016-04-26 19:23 [PATCH 0/8] x86: A round of x86 segmentation improvements Andy Lutomirski
                   ` (3 preceding siblings ...)
  2016-04-26 19:23 ` [PATCH 4/8] x86/segments/64: When load_gs_index " Andy Lutomirski
@ 2016-04-26 19:23 ` Andy Lutomirski
  2016-04-26 20:50   ` Andi Kleen
  2016-04-29 10:50   ` [tip:x86/asm] " tip-bot for Andy Lutomirski
  2016-04-26 19:23 ` [PATCH 6/8] x86/asm/64: Rename thread_struct's fs and gs to fsbase and gsbase Andy Lutomirski
                   ` (2 subsequent siblings)
  7 siblings, 2 replies; 19+ messages in thread
From: Andy Lutomirski @ 2016-04-26 19:23 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Brian Gerst, Andi Kleen, Andy Lutomirski

As far as I know, the optimization doesn't work on any modern distro
because modern distros use high addresses for ASLR.  Remove it.

The ptrace code was either wrong or very strange, but the behavior
with this patch should be essentially identical to the behavior
without this patch unless user code goes out of its way to mislead
ptrace.

On newer CPUs, once the FSGSBASE instructions are enabled, we won't
want to use the optimized variant anyway.
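
For reference, the user-facing entry point whose fast path goes away (a
minimal user-space sketch; error handling omitted):

	#include <asm/prctl.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	/*
	 * After this patch, a base below 4G takes the same path as any
	 * other: the GS selector is cleared and the base is written to
	 * MSR_KERNEL_GS_BASE, instead of being routed through a GDT TLS
	 * slot.
	 */
	static long set_gs_base(unsigned long base)
	{
		return syscall(SYS_arch_prctl, ARCH_SET_GS, base);
	}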

Signed-off-by: Andy Lutomirski <luto@kernel.org>
---
 arch/x86/include/asm/segment.h |  7 -----
 arch/x86/kernel/process_64.c   | 71 +++++++-----------------------------------
 arch/x86/kernel/ptrace.c       | 44 ++++----------------------
 3 files changed, 17 insertions(+), 105 deletions(-)

diff --git a/arch/x86/include/asm/segment.h b/arch/x86/include/asm/segment.h
index e1a4afd20223..1549caa098f0 100644
--- a/arch/x86/include/asm/segment.h
+++ b/arch/x86/include/asm/segment.h
@@ -208,13 +208,6 @@
 #define __USER_CS			(GDT_ENTRY_DEFAULT_USER_CS*8 + 3)
 #define __PER_CPU_SEG			(GDT_ENTRY_PER_CPU*8 + 3)
 
-/* TLS indexes for 64-bit - hardcoded in arch_prctl(): */
-#define FS_TLS				0
-#define GS_TLS				1
-
-#define GS_TLS_SEL			((GDT_ENTRY_TLS_MIN+GS_TLS)*8 + 3)
-#define FS_TLS_SEL			((GDT_ENTRY_TLS_MIN+FS_TLS)*8 + 3)
-
 #endif
 
 #ifndef CONFIG_PARAVIRT
diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
index 50337eac1ca2..556a19672ea2 100644
--- a/arch/x86/kernel/process_64.c
+++ b/arch/x86/kernel/process_64.c
@@ -136,25 +136,6 @@ void release_thread(struct task_struct *dead_task)
 	}
 }
 
-static inline void set_32bit_tls(struct task_struct *t, int tls, u32 addr)
-{
-	struct user_desc ud = {
-		.base_addr = addr,
-		.limit = 0xfffff,
-		.seg_32bit = 1,
-		.limit_in_pages = 1,
-		.useable = 1,
-	};
-	struct desc_struct *desc = t->thread.tls_array;
-	desc += tls;
-	fill_ldt(desc, &ud);
-}
-
-static inline u32 read_32bit_tls(struct task_struct *t, int tls)
-{
-	return get_desc_base(&t->thread.tls_array[tls]);
-}
-
 int copy_thread_tls(unsigned long clone_flags, unsigned long sp,
 		unsigned long arg, struct task_struct *p, unsigned long tls)
 {
@@ -554,25 +535,12 @@ long do_arch_prctl(struct task_struct *task, int code, unsigned long addr)
 		if (addr >= TASK_SIZE_OF(task))
 			return -EPERM;
 		cpu = get_cpu();
-		/* handle small bases via the GDT because that's faster to
-		   switch. */
-		if (addr <= 0xffffffff) {
-			set_32bit_tls(task, GS_TLS, addr);
-			if (doit) {
-				load_TLS(&task->thread, cpu);
-				load_gs_index(GS_TLS_SEL);
-			}
-			task->thread.gsindex = GS_TLS_SEL;
-			task->thread.gs = 0;
-		} else {
-			task->thread.gsindex = 0;
-			task->thread.gs = addr;
-			if (doit) {
-				load_gs_index(0);
-				ret = wrmsrl_safe(MSR_KERNEL_GS_BASE, addr);
-			}
+		task->thread.gsindex = 0;
+		task->thread.gs = addr;
+		if (doit) {
+			load_gs_index(0);
+			ret = wrmsrl_safe(MSR_KERNEL_GS_BASE, addr);
 		}
 		put_cpu();
 		break;
 	case ARCH_SET_FS:
 		/* Not strictly needed for fs, but do it for symmetry
@@ -580,25 +548,12 @@ long do_arch_prctl(struct task_struct *task, int code, unsigned long addr)
 		if (addr >= TASK_SIZE_OF(task))
 			return -EPERM;
 		cpu = get_cpu();
-		/* handle small bases via the GDT because that's faster to
-		   switch. */
-		if (addr <= 0xffffffff) {
-			set_32bit_tls(task, FS_TLS, addr);
-			if (doit) {
-				load_TLS(&task->thread, cpu);
-				loadsegment(fs, FS_TLS_SEL);
-			}
-			task->thread.fsindex = FS_TLS_SEL;
-			task->thread.fs = 0;
-		} else {
-			task->thread.fsindex = 0;
-			task->thread.fs = addr;
-			if (doit) {
-				/* set the selector to 0 to not confuse
-				   __switch_to */
-				loadsegment(fs, 0);
-				ret = wrmsrl_safe(MSR_FS_BASE, addr);
-			}
+		task->thread.fsindex = 0;
+		task->thread.fs = addr;
+		if (doit) {
+			/* set the selector to 0 to not confuse __switch_to */
+			loadsegment(fs, 0);
+			ret = wrmsrl_safe(MSR_FS_BASE, addr);
 		}
 		put_cpu();
 		break;
@@ -606,8 +561,6 @@ long do_arch_prctl(struct task_struct *task, int code, unsigned long addr)
 		unsigned long base;
 		if (doit)
 			rdmsrl(MSR_FS_BASE, base);
-		else if (task->thread.fsindex == FS_TLS_SEL)
-			base = read_32bit_tls(task, FS_TLS);
 		else
 			base = task->thread.fs;
 		ret = put_user(base, (unsigned long __user *)addr);
@@ -617,8 +570,6 @@ long do_arch_prctl(struct task_struct *task, int code, unsigned long addr)
 		unsigned long base;
 		if (doit)
 			rdmsrl(MSR_KERNEL_GS_BASE, base);
-		else if (task->thread.gsindex == GS_TLS_SEL)
-			base = read_32bit_tls(task, GS_TLS);
 		else
 			base = task->thread.gs;
 		ret = put_user(base, (unsigned long __user *)addr);
diff --git a/arch/x86/kernel/ptrace.c b/arch/x86/kernel/ptrace.c
index 32e9d9cbb884..8cbdd840a611 100644
--- a/arch/x86/kernel/ptrace.c
+++ b/arch/x86/kernel/ptrace.c
@@ -303,29 +303,11 @@ static int set_segment_reg(struct task_struct *task,
 
 	switch (offset) {
 	case offsetof(struct user_regs_struct,fs):
-		/*
-		 * If this is setting fs as for normal 64-bit use but
-		 * setting fs_base has implicitly changed it, leave it.
-		 */
-		if ((value == FS_TLS_SEL && task->thread.fsindex == 0 &&
-		     task->thread.fs != 0) ||
-		    (value == 0 && task->thread.fsindex == FS_TLS_SEL &&
-		     task->thread.fs == 0))
-			break;
 		task->thread.fsindex = value;
 		if (task == current)
 			loadsegment(fs, task->thread.fsindex);
 		break;
 	case offsetof(struct user_regs_struct,gs):
-		/*
-		 * If this is setting gs as for normal 64-bit use but
-		 * setting gs_base has implicitly changed it, leave it.
-		 */
-		if ((value == GS_TLS_SEL && task->thread.gsindex == 0 &&
-		     task->thread.gs != 0) ||
-		    (value == 0 && task->thread.gsindex == GS_TLS_SEL &&
-		     task->thread.gs == 0))
-			break;
 		task->thread.gsindex = value;
 		if (task == current)
 			load_gs_index(task->thread.gsindex);
@@ -453,31 +435,17 @@ static unsigned long getreg(struct task_struct *task, unsigned long offset)
 #ifdef CONFIG_X86_64
 	case offsetof(struct user_regs_struct, fs_base): {
 		/*
-		 * do_arch_prctl may have used a GDT slot instead of
-		 * the MSR.  To userland, it appears the same either
-		 * way, except the %fs segment selector might not be 0.
+		 * XXX: This will not behave as expected if called on
+		 * current or if fsindex != 0.
 		 */
-		unsigned int seg = task->thread.fsindex;
-		if (task->thread.fs != 0)
-			return task->thread.fs;
-		if (task == current)
-			asm("movl %%fs,%0" : "=r" (seg));
-		if (seg != FS_TLS_SEL)
-			return 0;
-		return get_desc_base(&task->thread.tls_array[FS_TLS]);
+		return task->thread.fs;
 	}
 	case offsetof(struct user_regs_struct, gs_base): {
 		/*
-		 * Exactly the same here as the %fs handling above.
+		 * XXX: This will not behave as expected if called on
+		 * current or if gsindex != 0.
 		 */
-		unsigned int seg = task->thread.gsindex;
-		if (task->thread.gs != 0)
-			return task->thread.gs;
-		if (task == current)
-			asm("movl %%gs,%0" : "=r" (seg));
-		if (seg != GS_TLS_SEL)
-			return 0;
-		return get_desc_base(&task->thread.tls_array[GS_TLS]);
+		return task->thread.gs;
 	}
 #endif
 	}
-- 
2.5.5


* [PATCH 6/8] x86/asm/64: Rename thread_struct's fs and gs to fsbase and gsbase
  2016-04-26 19:23 [PATCH 0/8] x86: A round of x86 segmentation improvements Andy Lutomirski
                   ` (4 preceding siblings ...)
  2016-04-26 19:23 ` [PATCH 5/8] x86/arch_prctl/64: Remove FSBASE/GSBASE < 4G optimization Andy Lutomirski
@ 2016-04-26 19:23 ` Andy Lutomirski
  2016-04-29 10:50   ` [tip:x86/asm] " tip-bot for Andy Lutomirski
  2016-04-26 19:23 ` [PATCH 7/8] x86/tls: Synchronize segment registers in set_thread_area Andy Lutomirski
  2016-04-26 19:23 ` [PATCH 8/8] selftests/x86/ldt_gdt: Test set_thread_area deletion of an active segment Andy Lutomirski
  7 siblings, 1 reply; 19+ messages in thread
From: Andy Lutomirski @ 2016-04-26 19:23 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Brian Gerst, Andi Kleen, Andy Lutomirski

Unlike ds and es, these are base addresses, not selectors.  Rename
them so their meaning is more obvious.

On x86_32, the fields are still called fs and gs.  Fixing that could
make sense as a future cleanup.
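
The rename also makes the invariant documented in __switch_to() easier
to read; spelled out as a hypothetical assertion (not part of this
patch, with 't' being the task in question):

	/* A nonzero saved base and a nonzero selector are mutually exclusive: */
	WARN_ON_ONCE((t->thread.fsbase && t->thread.fsindex) ||
		     (t->thread.gsbase && t->thread.gsindex));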

Signed-off-by: Andy Lutomirski <luto@kernel.org>
---
 arch/x86/include/asm/elf.h       |  6 +++---
 arch/x86/include/asm/processor.h | 11 +++++++++--
 arch/x86/kernel/process_64.c     | 30 +++++++++++++++---------------
 arch/x86/kernel/ptrace.c         |  8 ++++----
 arch/x86/kvm/svm.c               |  2 +-
 5 files changed, 32 insertions(+), 25 deletions(-)

diff --git a/arch/x86/include/asm/elf.h b/arch/x86/include/asm/elf.h
index 15340e36ddcb..fea7724141a0 100644
--- a/arch/x86/include/asm/elf.h
+++ b/arch/x86/include/asm/elf.h
@@ -176,7 +176,7 @@ static inline void elf_common_init(struct thread_struct *t,
 	regs->si = regs->di = regs->bp = 0;
 	regs->r8 = regs->r9 = regs->r10 = regs->r11 = 0;
 	regs->r12 = regs->r13 = regs->r14 = regs->r15 = 0;
-	t->fs = t->gs = 0;
+	t->fsbase = t->gsbase = 0;
 	t->fsindex = t->gsindex = 0;
 	t->ds = t->es = ds;
 }
@@ -226,8 +226,8 @@ do {								\
 	(pr_reg)[18] = (regs)->flags;				\
 	(pr_reg)[19] = (regs)->sp;				\
 	(pr_reg)[20] = (regs)->ss;				\
-	(pr_reg)[21] = current->thread.fs;			\
-	(pr_reg)[22] = current->thread.gs;			\
+	(pr_reg)[21] = current->thread.fsbase;			\
+	(pr_reg)[22] = current->thread.gsbase;			\
 	asm("movl %%ds,%0" : "=r" (v)); (pr_reg)[23] = v;	\
 	asm("movl %%es,%0" : "=r" (v)); (pr_reg)[24] = v;	\
 	asm("movl %%fs,%0" : "=r" (v)); (pr_reg)[25] = v;	\
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index 9264476f3d57..9251aa962721 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -388,9 +388,16 @@ struct thread_struct {
 	unsigned long		ip;
 #endif
 #ifdef CONFIG_X86_64
-	unsigned long		fs;
+	unsigned long		fsbase;
+	unsigned long		gsbase;
+#else
+	/*
+	 * XXX: this could presumably be unsigned short.  Alternatively,
+	 * 32-bit kernels could be taught to use fsindex instead.
+	 */
+	unsigned long fs;
+	unsigned long gs;
 #endif
-	unsigned long		gs;
 
 	/* Save middle states of ptrace breakpoints */
 	struct perf_event	*ptrace_bps[HBP_NUM];
diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
index 556a19672ea2..ac78ec2f2d77 100644
--- a/arch/x86/kernel/process_64.c
+++ b/arch/x86/kernel/process_64.c
@@ -150,9 +150,9 @@ int copy_thread_tls(unsigned long clone_flags, unsigned long sp,
 	p->thread.io_bitmap_ptr = NULL;
 
 	savesegment(gs, p->thread.gsindex);
-	p->thread.gs = p->thread.gsindex ? 0 : me->thread.gs;
+	p->thread.gsbase = p->thread.gsindex ? 0 : me->thread.gsbase;
 	savesegment(fs, p->thread.fsindex);
-	p->thread.fs = p->thread.fsindex ? 0 : me->thread.fs;
+	p->thread.fsbase = p->thread.fsindex ? 0 : me->thread.fsbase;
 	savesegment(es, p->thread.es);
 	savesegment(ds, p->thread.ds);
 	memset(p->thread.ptrace_bps, 0, sizeof(p->thread.ptrace_bps));
@@ -329,18 +329,18 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
 	 * stronger guarantees.)
 	 *
 	 * As an invariant,
-	 * (fs != 0 && fsindex != 0) || (gs != 0 && gsindex != 0) is
+	 * (fsbase != 0 && fsindex != 0) || (gsbase != 0 && gsindex != 0) is
 	 * impossible.
 	 */
 	if (next->fsindex) {
 		/* Loading a nonzero value into FS sets the index and base. */
 		loadsegment(fs, next->fsindex);
 	} else {
-		if (next->fs) {
+		if (next->fsbase) {
 			/* Next index is zero but next base is nonzero. */
 			if (prev_fsindex)
 				loadsegment(fs, 0);
-			wrmsrl(MSR_FS_BASE, next->fs);
+			wrmsrl(MSR_FS_BASE, next->fsbase);
 		} else {
 			/* Next base and index are both zero. */
 			if (static_cpu_has_bug(X86_BUG_NULL_SEG)) {
@@ -356,7 +356,7 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
 				 * didn't change the base, then the base is
 				 * also zero and we don't need to do anything.
 				 */
-				if (prev->fs || prev_fsindex)
+				if (prev->fsbase || prev_fsindex)
 					loadsegment(fs, 0);
 			}
 		}
@@ -369,18 +369,18 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
 	 * us.
 	 */
 	if (prev_fsindex)
-		prev->fs = 0;
+		prev->fsbase = 0;
 	prev->fsindex = prev_fsindex;
 
 	if (next->gsindex) {
 		/* Loading a nonzero value into GS sets the index and base. */
 		load_gs_index(next->gsindex);
 	} else {
-		if (next->gs) {
+		if (next->gsbase) {
 			/* Next index is zero but next base is nonzero. */
 			if (prev_gsindex)
 				load_gs_index(0);
-			wrmsrl(MSR_KERNEL_GS_BASE, next->gs);
+			wrmsrl(MSR_KERNEL_GS_BASE, next->gsbase);
 		} else {
 			/* Next base and index are both zero. */
 			if (static_cpu_has_bug(X86_BUG_NULL_SEG)) {
@@ -400,7 +400,7 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
 				 * didn't change the base, then the base is
 				 * also zero and we don't need to do anything.
 				 */
-				if (prev->gs || prev_gsindex)
+				if (prev->gsbase || prev_gsindex)
 					load_gs_index(0);
 			}
 		}
@@ -413,7 +413,7 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
 	 * us.
 	 */
 	if (prev_gsindex)
-		prev->gs = 0;
+		prev->gsbase = 0;
 	prev->gsindex = prev_gsindex;
 
 	switch_fpu_finish(next_fpu, fpu_switch);
@@ -536,7 +536,7 @@ long do_arch_prctl(struct task_struct *task, int code, unsigned long addr)
 			return -EPERM;
 		cpu = get_cpu();
 		task->thread.gsindex = 0;
-		task->thread.gs = addr;
+		task->thread.gsbase = addr;
 		if (doit) {
 			load_gs_index(0);
 			ret = wrmsrl_safe(MSR_KERNEL_GS_BASE, addr);
@@ -549,7 +549,7 @@ long do_arch_prctl(struct task_struct *task, int code, unsigned long addr)
 			return -EPERM;
 		cpu = get_cpu();
 		task->thread.fsindex = 0;
-		task->thread.fs = addr;
+		task->thread.fsbase = addr;
 		if (doit) {
 			/* set the selector to 0 to not confuse __switch_to */
 			loadsegment(fs, 0);
@@ -562,7 +562,7 @@ long do_arch_prctl(struct task_struct *task, int code, unsigned long addr)
 		if (doit)
 			rdmsrl(MSR_FS_BASE, base);
 		else
-			base = task->thread.fs;
+			base = task->thread.fsbase;
 		ret = put_user(base, (unsigned long __user *)addr);
 		break;
 	}
@@ -571,7 +571,7 @@ long do_arch_prctl(struct task_struct *task, int code, unsigned long addr)
 		if (doit)
 			rdmsrl(MSR_KERNEL_GS_BASE, base);
 		else
-			base = task->thread.gs;
+			base = task->thread.gsbase;
 		ret = put_user(base, (unsigned long __user *)addr);
 		break;
 	}
diff --git a/arch/x86/kernel/ptrace.c b/arch/x86/kernel/ptrace.c
index 8cbdd840a611..5d97d90aced3 100644
--- a/arch/x86/kernel/ptrace.c
+++ b/arch/x86/kernel/ptrace.c
@@ -399,7 +399,7 @@ static int putreg(struct task_struct *child,
 		 * to set either thread.fs or thread.fsindex and the
 		 * corresponding GDT slot.
 		 */
-		if (child->thread.fs != value)
+		if (child->thread.fsbase != value)
 			return do_arch_prctl(child, ARCH_SET_FS, value);
 		return 0;
 	case offsetof(struct user_regs_struct,gs_base):
@@ -408,7 +408,7 @@ static int putreg(struct task_struct *child,
 		 */
 		if (value >= TASK_SIZE_OF(child))
 			return -EIO;
-		if (child->thread.gs != value)
+		if (child->thread.gsbase != value)
 			return do_arch_prctl(child, ARCH_SET_GS, value);
 		return 0;
 #endif
@@ -438,14 +438,14 @@ static unsigned long getreg(struct task_struct *task, unsigned long offset)
 		 * XXX: This will not behave as expected if called on
 		 * current or if fsindex != 0.
 		 */
-		return task->thread.fs;
+		return task->thread.fsbase;
 	}
 	case offsetof(struct user_regs_struct, gs_base): {
 		/*
 		 * XXX: This will not behave as expected if called on
 		 * current or if gsindex != 0.
 		 */
-		return task->thread.gs;
+		return task->thread.gsbase;
 	}
 #endif
 	}
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 31346a3f20a5..fafd720ce10a 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1254,7 +1254,7 @@ static void svm_vcpu_put(struct kvm_vcpu *vcpu)
 	kvm_load_ldt(svm->host.ldt);
 #ifdef CONFIG_X86_64
 	loadsegment(fs, svm->host.fs);
-	wrmsrl(MSR_KERNEL_GS_BASE, current->thread.gs);
+	wrmsrl(MSR_KERNEL_GS_BASE, current->thread.gsbase);
 	load_gs_index(svm->host.gs);
 #else
 #ifdef CONFIG_X86_32_LAZY_GS
-- 
2.5.5


* [PATCH 7/8] x86/tls: Synchronize segment registers in set_thread_area
  2016-04-26 19:23 [PATCH 0/8] x86: A round of x86 segmentation improvements Andy Lutomirski
                   ` (5 preceding siblings ...)
  2016-04-26 19:23 ` [PATCH 6/8] x86/asm/64: Rename thread_struct's fs and gs to fsbase and gsbase Andy Lutomirski
@ 2016-04-26 19:23 ` Andy Lutomirski
  2016-04-29 10:51   ` [tip:x86/asm] x86/tls: Synchronize segment registers in set_thread_area() tip-bot for Andy Lutomirski
  2016-04-26 19:23 ` [PATCH 8/8] selftests/x86/ldt_gdt: Test set_thread_area deletion of an active segment Andy Lutomirski
  7 siblings, 1 reply; 19+ messages in thread
From: Andy Lutomirski @ 2016-04-26 19:23 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Brian Gerst, Andi Kleen, Andy Lutomirski

The current behavior of set_thread_area when it modifies a segment that is
currently loaded is a bit confused.

If CS [1] or SS is modified, the change will take effect on return
to userspace, because those two registers are fundamentally always
reloaded on that path.

Similarly, on 32-bit kernels, if DS, ES, FS, or (depending on
configuration) GS refers to a modified segment, the change will take
effect immediately on return to user mode because the entry code
reloads these registers.

If set_thread_area modifies DS, ES [2], FS, or GS on 64-bit kernels or
GS on 32-bit lazy-GS [3] kernels, however, the segment registers
will be left alone until something (most likely a context switch)
causes them to be reloaded.  This means that behavior visible to
user space is inconsistent.

If set_thread_area is implicitly called via CLONE_SETTLS, then all
segment registers will be reloaded before the thread starts because
CLONE_SETTLS happens before the initial context switch into the
newly created thread.

Empirically, glibc requires the immediate reload on CLONE_SETTLS --
32-bit glibc on my system does *not* manually reload GS when
creating a new thread.

Before enabling FSGSBASE, we need to figure out what the behavior
will be, as FSGSBASE requires that we reconsider our behavior when,
e.g., GS and GSBASE are out of sync in user mode.  Given that we
must preserve the existing behavior of CLONE_SETTLS, it makes sense
to me that we simply extend similar behavior to all invocations
of set_thread_area.

This patch explicitly updates any segment register referring to a
segment that is targeted by set_thread_area.  If set_thread_area
deletes the segment, then the segment register will be nulled out.

[1] This can't actually happen since 0e58af4e1d21 ("x86/tls:
    Disallow unusual TLS segments") but, if it did, this is how it
    would behave.

[2] I strongly doubt that any existing non-malicious program loads a
    TLS segment into DS or ES on a 64-bit kernel because the context
    switch code was badly broken until recently, but that's not an
    excuse to leave the current code alone.

[3] One way or another, that config option should go away.  Yuck!
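
For concreteness, a minimal user-space sketch of the behavior this
patch pins down (error handling omitted; the base addresses are
arbitrary; assumes a task that can use set_thread_area, i.e. 32-bit or
compat):

	#include <asm/ldt.h>
	#include <string.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	static void tls_refresh_demo(void)
	{
		struct user_desc d;
		unsigned short sel;

		memset(&d, 0, sizeof(d));
		d.entry_number	 = -1;	/* let the kernel pick a GDT slot */
		d.base_addr	 = 0x10000;
		d.limit		 = 0xfffff;
		d.seg_32bit	 = 1;
		d.limit_in_pages = 1;
		syscall(SYS_set_thread_area, &d);

		sel = (d.entry_number << 3) | 3;
		asm volatile("mov %0, %%gs" : : "r" (sel));

		/*
		 * Before this patch, the base cached in %gs stayed stale
		 * until the next reload; now the kernel refreshes %gs
		 * immediately.
		 */
		d.base_addr = 0x20000;
		syscall(SYS_set_thread_area, &d);
	}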

Signed-off-by: Andy Lutomirski <luto@kernel.org>
---
 arch/x86/kernel/tls.c | 42 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 42 insertions(+)

diff --git a/arch/x86/kernel/tls.c b/arch/x86/kernel/tls.c
index 7fc5e843f247..9692a5e9fdab 100644
--- a/arch/x86/kernel/tls.c
+++ b/arch/x86/kernel/tls.c
@@ -114,6 +114,7 @@ int do_set_thread_area(struct task_struct *p, int idx,
 		       int can_allocate)
 {
 	struct user_desc info;
+	unsigned short __maybe_unused sel, modified_sel;
 
 	if (copy_from_user(&info, u_info, sizeof(info)))
 		return -EFAULT;
@@ -141,6 +142,47 @@ int do_set_thread_area(struct task_struct *p, int idx,
 
 	set_tls_desc(p, idx, &info, 1);
 
+	/*
+	 * If DS, ES, FS, or GS points to the modified segment, forcibly
+	 * refresh it.  Only needed on x86_64 because x86_32 reloads them
+	 * on return to user mode.
+	 */
+	modified_sel = (idx << 3) | 3;
+
+	if (p == current) {
+#ifdef CONFIG_X86_64
+		savesegment(ds, sel);
+		if (sel == modified_sel)
+			loadsegment(ds, sel);
+
+		savesegment(es, sel);
+		if (sel == modified_sel)
+			loadsegment(es, sel);
+
+		savesegment(fs, sel);
+		if (sel == modified_sel)
+			loadsegment(fs, sel);
+
+		savesegment(gs, sel);
+		if (sel == modified_sel)
+			load_gs_index(sel);
+#endif
+
+#ifdef CONFIG_X86_32_LAZY_GS
+		savesegment(gs, sel);
+		if (sel == modified_sel)
+			loadsegment(gs, sel);
+#endif
+	} else {
+#ifdef CONFIG_X86_64
+		if (p->thread.fsindex == modified_sel)
+			p->thread.fsbase = info.base_addr;
+
+		if (p->thread.gsindex == modified_sel)
+			p->thread.gsbase = info.base_addr;
+#endif
+	}
+
 	return 0;
 }
 
-- 
2.5.5


* [PATCH 8/8] selftests/x86/ldt_gdt: Test set_thread_area deletion of an active segment
  2016-04-26 19:23 [PATCH 0/8] x86: A round of x86 segmentation improvements Andy Lutomirski
                   ` (6 preceding siblings ...)
  2016-04-26 19:23 ` [PATCH 7/8] x86/tls: Synchronize segment registers in set_thread_area Andy Lutomirski
@ 2016-04-26 19:23 ` Andy Lutomirski
  2016-04-29 10:51   ` [tip:x86/asm] selftests/x86/ldt_gdt: Test set_thread_area() " tip-bot for Andy Lutomirski
  7 siblings, 1 reply; 19+ messages in thread
From: Andy Lutomirski @ 2016-04-26 19:23 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Brian Gerst, Andi Kleen, Andy Lutomirski

Now that set_thread_area is supposed to give deterministic behavior
when it modifies in-use segments, test it.

Signed-off-by: Andy Lutomirski <luto@kernel.org>
---
 tools/testing/selftests/x86/ldt_gdt.c | 250 ++++++++++++++++++++++++++++++++++
 1 file changed, 250 insertions(+)

diff --git a/tools/testing/selftests/x86/ldt_gdt.c b/tools/testing/selftests/x86/ldt_gdt.c
index 31a3035cd4eb..4af47079cf04 100644
--- a/tools/testing/selftests/x86/ldt_gdt.c
+++ b/tools/testing/selftests/x86/ldt_gdt.c
@@ -21,6 +21,9 @@
 #include <pthread.h>
 #include <sched.h>
 #include <linux/futex.h>
+#include <sys/mman.h>
+#include <asm/prctl.h>
+#include <sys/prctl.h>
 
 #define AR_ACCESSED		(1<<8)
 
@@ -44,6 +47,12 @@
 
 static int nerrs;
 
+/* Points to an array of 1024 ints, each holding its own index. */
+static const unsigned int *counter_page;
+static struct user_desc *low_user_desc;
+static struct user_desc *low_user_desc_clear;  /* Use to delete GDT entry */
+static int gdt_entry_num;
+
 static void check_invalid_segment(uint16_t index, int ldt)
 {
 	uint32_t has_limit = 0, has_ar = 0, limit, ar;
@@ -561,16 +570,257 @@ static void do_exec_test(void)
 	}
 }
 
+static void setup_counter_page(void)
+{
+	unsigned int *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
+			 MAP_ANONYMOUS | MAP_PRIVATE | MAP_32BIT, -1, 0);
+	if (page == MAP_FAILED)
+		err(1, "mmap");
+
+	for (int i = 0; i < 1024; i++)
+		page[i] = i;
+	counter_page = page;
+}
+
+static int invoke_set_thread_area(void)
+{
+	int ret;
+	asm volatile ("int $0x80"
+		      : "=a" (ret), "+m" (low_user_desc) :
+			"a" (243), "b" (low_user_desc)
+		      : "flags");
+	return ret;
+}
+
+static void setup_low_user_desc(void)
+{
+	low_user_desc = mmap(NULL, 2 * sizeof(struct user_desc),
+			     PROT_READ | PROT_WRITE,
+			     MAP_ANONYMOUS | MAP_PRIVATE | MAP_32BIT, -1, 0);
+	if (low_user_desc == MAP_FAILED)
+		err(1, "mmap");
+
+	low_user_desc->entry_number	= -1;
+	low_user_desc->base_addr	= (unsigned long)&counter_page[1];
+	low_user_desc->limit		= 0xfffff;
+	low_user_desc->seg_32bit	= 1;
+	low_user_desc->contents		= 0; /* Data, grow-up*/
+	low_user_desc->read_exec_only	= 0;
+	low_user_desc->limit_in_pages	= 1;
+	low_user_desc->seg_not_present	= 0;
+	low_user_desc->useable		= 0;
+
+	if (invoke_set_thread_area() == 0) {
+		gdt_entry_num = low_user_desc->entry_number;
+		printf("[NOTE]\tset_thread_area is available; will use GDT index %d\n", gdt_entry_num);
+	} else {
+		printf("[NOTE]\tset_thread_area is unavailable\n");
+	}
+
+	low_user_desc_clear = low_user_desc + 1;
+	low_user_desc_clear->entry_number = gdt_entry_num;
+	low_user_desc_clear->read_exec_only = 1;
+	low_user_desc_clear->seg_not_present = 1;
+}
+
+static void test_gdt_invalidation(void)
+{
+	if (!gdt_entry_num)
+		return;	/* 64-bit only system -- we can't use set_thread_area */
+
+	unsigned short prev_sel;
+	unsigned short sel;
+	unsigned int eax;
+	const char *result;
+#ifdef __x86_64__
+	unsigned long saved_base;
+	unsigned long new_base;
+#endif
+
+	/* Test DS */
+	invoke_set_thread_area();
+	eax = 243;
+	sel = (gdt_entry_num << 3) | 3;
+	asm volatile ("movw %%ds, %[prev_sel]\n\t"
+		      "movw %[sel], %%ds\n\t"
+#ifdef __i386__
+		      "pushl %%ebx\n\t"
+#endif
+		      "movl %[arg1], %%ebx\n\t"
+		      "int $0x80\n\t"	/* Should invalidate ds */
+#ifdef __i386__
+		      "popl %%ebx\n\t"
+#endif
+		      "movw %%ds, %[sel]\n\t"
+		      "movw %[prev_sel], %%ds"
+		      : [prev_sel] "=&r" (prev_sel), [sel] "+r" (sel),
+			"+a" (eax)
+		      : "m" (low_user_desc_clear),
+			[arg1] "r" ((unsigned int)(unsigned long)low_user_desc_clear)
+		      : "flags");
+
+	if (sel != 0) {
+		result = "FAIL";
+		nerrs++;
+	} else {
+		result = "OK";
+	}
+	printf("[%s]\tInvalidate DS with set_thread_area: new DS = 0x%hx\n",
+	       result, sel);
+
+	/* Test ES */
+	invoke_set_thread_area();
+	eax = 243;
+	sel = (gdt_entry_num << 3) | 3;
+	asm volatile ("movw %%es, %[prev_sel]\n\t"
+		      "movw %[sel], %%es\n\t"
+#ifdef __i386__
+		      "pushl %%ebx\n\t"
+#endif
+		      "movl %[arg1], %%ebx\n\t"
+		      "int $0x80\n\t"	/* Should invalidate es */
+#ifdef __i386__
+		      "popl %%ebx\n\t"
+#endif
+		      "movw %%es, %[sel]\n\t"
+		      "movw %[prev_sel], %%es"
+		      : [prev_sel] "=&r" (prev_sel), [sel] "+r" (sel),
+			"+a" (eax)
+		      : "m" (low_user_desc_clear),
+			[arg1] "r" ((unsigned int)(unsigned long)low_user_desc_clear)
+		      : "flags");
+
+	if (sel != 0) {
+		result = "FAIL";
+		nerrs++;
+	} else {
+		result = "OK";
+	}
+	printf("[%s]\tInvalidate ES with set_thread_area: new ES = 0x%hx\n",
+	       result, sel);
+
+	/* Test FS */
+	invoke_set_thread_area();
+	eax = 243;
+	sel = (gdt_entry_num << 3) | 3;
+#ifdef __x86_64__
+	syscall(SYS_arch_prctl, ARCH_GET_FS, &saved_base);
+#endif
+	asm volatile ("movw %%fs, %[prev_sel]\n\t"
+		      "movw %[sel], %%fs\n\t"
+#ifdef __i386__
+		      "pushl %%ebx\n\t"
+#endif
+		      "movl %[arg1], %%ebx\n\t"
+		      "int $0x80\n\t"	/* Should invalidate fs */
+#ifdef __i386__
+		      "popl %%ebx\n\t"
+#endif
+		      "movw %%fs, %[sel]\n\t"
+		      : [prev_sel] "=&r" (prev_sel), [sel] "+r" (sel),
+			"+a" (eax)
+		      : "m" (low_user_desc_clear),
+			[arg1] "r" ((unsigned int)(unsigned long)low_user_desc_clear)
+		      : "flags");
+
+#ifdef __x86_64__
+	syscall(SYS_arch_prctl, ARCH_GET_FS, &new_base);
+#endif
+
+	/* Restore FS/BASE for glibc */
+	asm volatile ("movw %[prev_sel], %%fs" : : [prev_sel] "rm" (prev_sel));
+#ifdef __x86_64__
+	if (saved_base)
+		syscall(SYS_arch_prctl, ARCH_SET_FS, saved_base);
+#endif
+
+	if (sel != 0) {
+		result = "FAIL";
+		nerrs++;
+	} else {
+		result = "OK";
+	}
+	printf("[%s]\tInvalidate FS with set_thread_area: new FS = 0x%hx\n",
+	       result, sel);
+
+#ifdef __x86_64__
+	if (sel == 0 && new_base != 0) {
+		nerrs++;
+		printf("[FAIL]\tNew FSBASE was 0x%lx\n", new_base);
+	} else {
+		printf("[OK]\tNew FSBASE was zero\n");
+	}
+#endif
+
+	/* Test GS */
+	invoke_set_thread_area();
+	eax = 243;
+	sel = (gdt_entry_num << 3) | 3;
+#ifdef __x86_64__
+	syscall(SYS_arch_prctl, ARCH_GET_GS, &saved_base);
+#endif
+	asm volatile ("movw %%gs, %[prev_sel]\n\t"
+		      "movw %[sel], %%gs\n\t"
+#ifdef __i386__
+		      "pushl %%ebx\n\t"
+#endif
+		      "movl %[arg1], %%ebx\n\t"
+		      "int $0x80\n\t"	/* Should invalidate gs */
+#ifdef __i386__
+		      "popl %%ebx\n\t"
+#endif
+		      "movw %%gs, %[sel]\n\t"
+		      : [prev_sel] "=&r" (prev_sel), [sel] "+r" (sel),
+			"+a" (eax)
+		      : "m" (low_user_desc_clear),
+			[arg1] "r" ((unsigned int)(unsigned long)low_user_desc_clear)
+		      : "flags");
+
+#ifdef __x86_64__
+	syscall(SYS_arch_prctl, ARCH_GET_GS, &new_base);
+#endif
+
+	/* Restore GS/BASE for glibc */
+	asm volatile ("movw %[prev_sel], %%gs" : : [prev_sel] "rm" (prev_sel));
+#ifdef __x86_64__
+	if (saved_base)
+		syscall(SYS_arch_prctl, ARCH_SET_GS, saved_base);
+#endif
+
+	if (sel != 0) {
+		result = "FAIL";
+		nerrs++;
+	} else {
+		result = "OK";
+	}
+	printf("[%s]\tInvalidate GS with set_thread_area: new GS = 0x%hx\n",
+	       result, sel);
+
+#ifdef __x86_64__
+	if (sel == 0 && new_base != 0) {
+		nerrs++;
+		printf("[FAIL]\tNew GSBASE was 0x%lx\n", new_base);
+	} else {
+		printf("[OK]\tNew GSBASE was zero\n");
+	}
+#endif
+}
+
 int main(int argc, char **argv)
 {
 	if (argc == 1 && !strcmp(argv[0], "ldt_gdt_test_exec"))
 		return finish_exec_test();
 
+	setup_counter_page();
+	setup_low_user_desc();
+
 	do_simple_tests();
 
 	do_multicpu_tests();
 
 	do_exec_test();
 
+	test_gdt_invalidation();
+
 	return nerrs ? 1 : 0;
 }
-- 
2.5.5


* Re: [PATCH 5/8] x86/arch_prctl/64: Remove FSBASE/GSBASE < 4G optimization
  2016-04-26 19:23 ` [PATCH 5/8] x86/arch_prctl/64: Remove FSBASE/GSBASE < 4G optimization Andy Lutomirski
@ 2016-04-26 20:50   ` Andi Kleen
  2016-04-26 22:33     ` Andy Lutomirski
  2016-04-29 10:50   ` [tip:x86/asm] " tip-bot for Andy Lutomirski
  1 sibling, 1 reply; 19+ messages in thread
From: Andi Kleen @ 2016-04-26 20:50 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: x86, linux-kernel, Borislav Petkov, Brian Gerst, Andi Kleen

On Tue, Apr 26, 2016 at 12:23:28PM -0700, Andy Lutomirski wrote:
> As far as I know, the optimization doesn't work on any modern distro
> because modern distros use high addresses for ASLR.  Remove it.

I disagree with this patch. For example it will be a regression
for static executables.  And randomly making old systems slower is a bad
idea.

-Andi


* Re: [PATCH 5/8] x86/arch_prctl/64: Remove FSBASE/GSBASE < 4G optimization
  2016-04-26 20:50   ` Andi Kleen
@ 2016-04-26 22:33     ` Andy Lutomirski
  0 siblings, 0 replies; 19+ messages in thread
From: Andy Lutomirski @ 2016-04-26 22:33 UTC (permalink / raw)
  To: Andi Kleen
  Cc: Andy Lutomirski, X86 ML, linux-kernel, Borislav Petkov, Brian Gerst

On Tue, Apr 26, 2016 at 1:50 PM, Andi Kleen <andi@firstfloor.org> wrote:
> On Tue, Apr 26, 2016 at 12:23:28PM -0700, Andy Lutomirski wrote:
>> As far as I know, the optimization doesn't work on any modern distro
>> because modern distros use high addresses for ASLR.  Remove it.
>
> I disagree with this patch. For example it will be a regression
> for static executables.  And randomly making old systems slower is a bad
> idea.

That's odd.  statically linked glibc uses low addresses, even in PIE
mode.  I wonder why.

In any event, this isn't actually much of a performance regression: it
has no effect on normal dynamically linked programs, and it's a
considerable simplification, so I still think it's a good idea.  It
also removes some nasty special cases from code that is already way
too full of special cases for comfort.

--Andy


* [tip:x86/asm] x86/asm: Stop depending on ptrace.h in alternative.h
  2016-04-26 19:23 ` [PATCH 1/8] x86/asm: Stop depending on ptrace.h in alternative.h Andy Lutomirski
@ 2016-04-29 10:48   ` " tip-bot for Andy Lutomirski
  0 siblings, 0 replies; 19+ messages in thread
From: tip-bot for Andy Lutomirski @ 2016-04-29 10:48 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: mingo, peterz, hpa, torvalds, brgerst, dvlasenk, linux-kernel,
	tglx, luto, luto, bp

Commit-ID:  35de5b0692aaa1f99803044526f2cc00ff864426
Gitweb:     http://git.kernel.org/tip/35de5b0692aaa1f99803044526f2cc00ff864426
Author:     Andy Lutomirski <luto@kernel.org>
AuthorDate: Tue, 26 Apr 2016 12:23:24 -0700
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 29 Apr 2016 11:56:40 +0200

x86/asm: Stop depending on ptrace.h in alternative.h

alternative.h pulls in ptrace.h, which means that alternatives can't
be used in anything referenced from ptrace.h, which is a mess.

Break the dependency by pulling text patching helpers into their own
header.

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/99b93b13f2c9eb671f5c98bba4c2cbdc061293a2.1461698311.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/include/asm/alternative.h   | 33 -----------------------------
 arch/x86/include/asm/kgdb.h          |  2 ++
 arch/x86/include/asm/setup.h         |  1 +
 arch/x86/include/asm/text-patching.h | 40 ++++++++++++++++++++++++++++++++++++
 arch/x86/kernel/alternative.c        |  1 +
 arch/x86/kernel/jump_label.c         |  1 +
 arch/x86/kernel/kgdb.c               |  1 +
 arch/x86/kernel/kprobes/core.c       |  1 +
 arch/x86/kernel/kprobes/opt.c        |  1 +
 arch/x86/kernel/module.c             |  1 +
 arch/x86/kernel/traps.c              |  1 +
 11 files changed, 50 insertions(+), 33 deletions(-)

diff --git a/arch/x86/include/asm/alternative.h b/arch/x86/include/asm/alternative.h
index 99afb66..be4496c 100644
--- a/arch/x86/include/asm/alternative.h
+++ b/arch/x86/include/asm/alternative.h
@@ -5,7 +5,6 @@
 #include <linux/stddef.h>
 #include <linux/stringify.h>
 #include <asm/asm.h>
-#include <asm/ptrace.h>
 
 /*
  * Alternative inline assembly for SMP.
@@ -233,36 +232,4 @@ static inline int alternatives_text_reserved(void *start, void *end)
  */
 #define ASM_NO_INPUT_CLOBBER(clbr...) "i" (0) : clbr
 
-struct paravirt_patch_site;
-#ifdef CONFIG_PARAVIRT
-void apply_paravirt(struct paravirt_patch_site *start,
-		    struct paravirt_patch_site *end);
-#else
-static inline void apply_paravirt(struct paravirt_patch_site *start,
-				  struct paravirt_patch_site *end)
-{}
-#define __parainstructions	NULL
-#define __parainstructions_end	NULL
-#endif
-
-extern void *text_poke_early(void *addr, const void *opcode, size_t len);
-
-/*
- * Clear and restore the kernel write-protection flag on the local CPU.
- * Allows the kernel to edit read-only pages.
- * Side-effect: any interrupt handler running between save and restore will have
- * the ability to write to read-only pages.
- *
- * Warning:
- * Code patching in the UP case is safe if NMIs and MCE handlers are stopped and
- * no thread can be preempted in the instructions being modified (no iret to an
- * invalid instruction possible) or if the instructions are changed from a
- * consistent state to another consistent state atomically.
- * On the local CPU you need to be protected again NMI or MCE handlers seeing an
- * inconsistent instruction while you patch.
- */
-extern void *text_poke(void *addr, const void *opcode, size_t len);
-extern int poke_int3_handler(struct pt_regs *regs);
-extern void *text_poke_bp(void *addr, const void *opcode, size_t len, void *handler);
-
 #endif /* _ASM_X86_ALTERNATIVE_H */
diff --git a/arch/x86/include/asm/kgdb.h b/arch/x86/include/asm/kgdb.h
index 332f98c..22a8537 100644
--- a/arch/x86/include/asm/kgdb.h
+++ b/arch/x86/include/asm/kgdb.h
@@ -6,6 +6,8 @@
  * Copyright (C) 2008 Wind River Systems, Inc.
  */
 
+#include <asm/ptrace.h>
+
 /*
  * BUFMAX defines the maximum number of characters in inbound/outbound
  * buffers at least NUMREGBYTES*2 are needed for register packets
diff --git a/arch/x86/include/asm/setup.h b/arch/x86/include/asm/setup.h
index 11af24e..ac1d5da 100644
--- a/arch/x86/include/asm/setup.h
+++ b/arch/x86/include/asm/setup.h
@@ -6,6 +6,7 @@
 #define COMMAND_LINE_SIZE 2048
 
 #include <linux/linkage.h>
+#include <asm/page_types.h>
 
 #ifdef __i386__
 
diff --git a/arch/x86/include/asm/text-patching.h b/arch/x86/include/asm/text-patching.h
new file mode 100644
index 0000000..9039506
--- /dev/null
+++ b/arch/x86/include/asm/text-patching.h
@@ -0,0 +1,40 @@
+#ifndef _ASM_X86_TEXT_PATCHING_H
+#define _ASM_X86_TEXT_PATCHING_H
+
+#include <linux/types.h>
+#include <linux/stddef.h>
+#include <asm/ptrace.h>
+
+struct paravirt_patch_site;
+#ifdef CONFIG_PARAVIRT
+void apply_paravirt(struct paravirt_patch_site *start,
+		    struct paravirt_patch_site *end);
+#else
+static inline void apply_paravirt(struct paravirt_patch_site *start,
+				  struct paravirt_patch_site *end)
+{}
+#define __parainstructions	NULL
+#define __parainstructions_end	NULL
+#endif
+
+extern void *text_poke_early(void *addr, const void *opcode, size_t len);
+
+/*
+ * Clear and restore the kernel write-protection flag on the local CPU.
+ * Allows the kernel to edit read-only pages.
+ * Side-effect: any interrupt handler running between save and restore will have
+ * the ability to write to read-only pages.
+ *
+ * Warning:
+ * Code patching in the UP case is safe if NMIs and MCE handlers are stopped and
+ * no thread can be preempted in the instructions being modified (no iret to an
+ * invalid instruction possible) or if the instructions are changed from a
+ * consistent state to another consistent state atomically.
+ * On the local CPU you need to be protected against NMI or MCE handlers seeing an
+ * inconsistent instruction while you patch.
+ */
+extern void *text_poke(void *addr, const void *opcode, size_t len);
+extern int poke_int3_handler(struct pt_regs *regs);
+extern void *text_poke_bp(void *addr, const void *opcode, size_t len, void *handler);
+
+#endif /* _ASM_X86_TEXT_PATCHING_H */
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 25f9093..5cb272a 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -11,6 +11,7 @@
 #include <linux/stop_machine.h>
 #include <linux/slab.h>
 #include <linux/kdebug.h>
+#include <asm/text-patching.h>
 #include <asm/alternative.h>
 #include <asm/sections.h>
 #include <asm/pgtable.h>
diff --git a/arch/x86/kernel/jump_label.c b/arch/x86/kernel/jump_label.c
index e565e0e..fc25f69 100644
--- a/arch/x86/kernel/jump_label.c
+++ b/arch/x86/kernel/jump_label.c
@@ -13,6 +13,7 @@
 #include <linux/cpu.h>
 #include <asm/kprobes.h>
 #include <asm/alternative.h>
+#include <asm/text-patching.h>
 
 #ifdef HAVE_JUMP_LABEL
 
diff --git a/arch/x86/kernel/kgdb.c b/arch/x86/kernel/kgdb.c
index 2da6ee9..04cde52 100644
--- a/arch/x86/kernel/kgdb.c
+++ b/arch/x86/kernel/kgdb.c
@@ -45,6 +45,7 @@
 #include <linux/uaccess.h>
 #include <linux/memory.h>
 
+#include <asm/text-patching.h>
 #include <asm/debugreg.h>
 #include <asm/apicdef.h>
 #include <asm/apic.h>
diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index ae703ac..38cf7a7 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -51,6 +51,7 @@
 #include <linux/ftrace.h>
 #include <linux/frame.h>
 
+#include <asm/text-patching.h>
 #include <asm/cacheflush.h>
 #include <asm/desc.h>
 #include <asm/pgtable.h>
diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
index 7b3b9d1..4425f59 100644
--- a/arch/x86/kernel/kprobes/opt.c
+++ b/arch/x86/kernel/kprobes/opt.c
@@ -29,6 +29,7 @@
 #include <linux/kallsyms.h>
 #include <linux/ftrace.h>
 
+#include <asm/text-patching.h>
 #include <asm/cacheflush.h>
 #include <asm/desc.h>
 #include <asm/pgtable.h>
diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
index 005c03e..477ae80 100644
--- a/arch/x86/kernel/module.c
+++ b/arch/x86/kernel/module.c
@@ -31,6 +31,7 @@
 #include <linux/jump_label.h>
 #include <linux/random.h>
 
+#include <asm/text-patching.h>
 #include <asm/page.h>
 #include <asm/pgtable.h>
 #include <asm/setup.h>
diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
index 06cbe25..d159048 100644
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -51,6 +51,7 @@
 #include <asm/processor.h>
 #include <asm/debugreg.h>
 #include <linux/atomic.h>
+#include <asm/text-patching.h>
 #include <asm/ftrace.h>
 #include <asm/traps.h>
 #include <asm/desc.h>

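For reference, callers of the relocated API include the new header
directly; a minimal sketch of its use (kernel context assumed; names
other than text_poke and text_mutex are hypothetical):

    #include <linux/memory.h>       /* text_mutex */
    #include <linux/mutex.h>
    #include <asm/text-patching.h>

    /* Patch a one-byte NOP over kernel text.  Assumes addr points at
     * patchable text and that sleeping is allowed. */
    static void patch_in_nop(void *addr)
    {
            static const unsigned char nop = 0x90;  /* one-byte NOP */

            mutex_lock(&text_mutex);
            text_poke(addr, &nop, sizeof(nop));
            mutex_unlock(&text_mutex);
    }
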
^ permalink raw reply	[flat|nested] 19+ messages in thread

* [tip:x86/asm] x86/asm: Make asm/alternative.h safe from assembly
  2016-04-26 19:23 ` [PATCH 2/8] x86/asm: Make asm/alternative.h safe from assembly Andy Lutomirski
@ 2016-04-29 10:49   ` " tip-bot for Andy Lutomirski
  0 siblings, 0 replies; 19+ messages in thread
From: tip-bot for Andy Lutomirski @ 2016-04-29 10:49 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: peterz, brgerst, linux-kernel, bp, mingo, luto, torvalds,
	dvlasenk, tglx, hpa, luto

Commit-ID:  f005f5d860e0231fe212cfda8c1a3148b99609f4
Gitweb:     http://git.kernel.org/tip/f005f5d860e0231fe212cfda8c1a3148b99609f4
Author:     Andy Lutomirski <luto@kernel.org>
AuthorDate: Tue, 26 Apr 2016 12:23:25 -0700
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 29 Apr 2016 11:56:41 +0200

x86/asm: Make asm/alternative.h safe from assembly

asm/alternative.h isn't directly useful from assembly, but it
shouldn't break the build.
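
The usual idiom, sketched below with hypothetical names, is to hide the
C-only parts behind __ASSEMBLY__ so that .S files which pull the header
in transitively still assemble:

    #ifndef _ASM_X86_EXAMPLE_H
    #define _ASM_X86_EXAMPLE_H

    #define EXAMPLE_FLAG 0x1        /* usable from both C and .S files */

    #ifndef __ASSEMBLY__
    /* C-only: prototypes and types would choke the assembler. */
    extern void example_init(void);
    #endif /* __ASSEMBLY__ */

    #endif /* _ASM_X86_EXAMPLE_H */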

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/e5b693fcef99fe6e80341c9e97a002fb23871e91.1461698311.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/include/asm/alternative.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/x86/include/asm/alternative.h b/arch/x86/include/asm/alternative.h
index be4496c..e77a644 100644
--- a/arch/x86/include/asm/alternative.h
+++ b/arch/x86/include/asm/alternative.h
@@ -1,6 +1,8 @@
 #ifndef _ASM_X86_ALTERNATIVE_H
 #define _ASM_X86_ALTERNATIVE_H
 
+#ifndef __ASSEMBLY__
+
 #include <linux/types.h>
 #include <linux/stddef.h>
 #include <linux/stringify.h>
@@ -232,4 +234,6 @@ static inline int alternatives_text_reserved(void *start, void *end)
  */
 #define ASM_NO_INPUT_CLOBBER(clbr...) "i" (0) : clbr
 
+#endif /* __ASSEMBLY__ */
+
 #endif /* _ASM_X86_ALTERNATIVE_H */

^ permalink raw reply	[flat|nested] 19+ messages in thread

* [tip:x86/asm] x86/segments/64: When loadsegment(fs, ...) fails, clear the base
  2016-04-26 19:23 ` [PATCH 3/8] x86/segments/64: When loadsegment(fs, ...) fails, clear the base Andy Lutomirski
@ 2016-04-29 10:49   ` " tip-bot for Andy Lutomirski
  0 siblings, 0 replies; 19+ messages in thread
From: tip-bot for Andy Lutomirski @ 2016-04-29 10:49 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, mingo, peterz, brgerst, torvalds, dvlasenk, luto,
	tglx, hpa, bp, luto

Commit-ID:  45e876f794e8e566bf827c25ef0791875081724f
Gitweb:     http://git.kernel.org/tip/45e876f794e8e566bf827c25ef0791875081724f
Author:     Andy Lutomirski <luto@kernel.org>
AuthorDate: Tue, 26 Apr 2016 12:23:26 -0700
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 29 Apr 2016 11:56:41 +0200

x86/segments/64: When loadsegment(fs, ...) fails, clear the base

On AMD CPUs, a failed loadsegment currently may not clear the FS
base.  Fix it.

While we're at it, prevent loadsegment(gs, xyz) from even compiling
on 64-bit kernels.  It shouldn't be used.
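
The compile-time enforcement falls out of the token-pasting dispatch in
the new loadsegment(); a reduced sketch of the pattern, with
hypothetical names:

    /*
     * load_reg(reg, v) expands to __load_reg_<reg>(v), so a register
     * with no matching implementation is a build error rather than a
     * silent runtime bug.
     */
    #define load_reg(reg, v)        __load_reg_ ## reg(v)

    static inline void __load_reg_fs(unsigned short v) { /* ... */ }

    /* No __load_reg_gs() on 64-bit: load_reg(gs, v) fails to build. */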

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/a084c1b93b7b1408b58d3fd0b5d6e47da8e7d7cf.1461698311.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/include/asm/segment.h | 42 +++++++++++++++++++++++++++++++++++++++---
 arch/x86/kernel/cpu/common.c   |  2 +-
 arch/x86/mm/extable.c          | 10 ++++++++++
 3 files changed, 50 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/segment.h b/arch/x86/include/asm/segment.h
index 7d5a192..e1a4afd 100644
--- a/arch/x86/include/asm/segment.h
+++ b/arch/x86/include/asm/segment.h
@@ -2,6 +2,7 @@
 #define _ASM_X86_SEGMENT_H
 
 #include <linux/const.h>
+#include <asm/alternative.h>
 
 /*
  * Constructor for a conventional segment GDT (or LDT) entry.
@@ -249,10 +250,13 @@ extern const char early_idt_handler_array[NUM_EXCEPTION_VECTORS][EARLY_IDT_HANDL
 #endif
 
 /*
- * Load a segment. Fall back on loading the zero
- * segment if something goes wrong..
+ * Load a segment. Fall back on loading the zero segment if something goes
+ * wrong.  This variant assumes that loading zero fully clears the segment.
+ * This is always the case on Intel CPUs and, even on 64-bit AMD CPUs, any
+ * failure to fully clear the cached descriptor is only observable for
+ * FS and GS.
  */
-#define loadsegment(seg, value)						\
+#define __loadsegment_simple(seg, value)				\
 do {									\
 	unsigned short __val = (value);					\
 									\
@@ -269,6 +273,38 @@ do {									\
 		     : "+r" (__val) : : "memory");			\
 } while (0)
 
+#define __loadsegment_ss(value) __loadsegment_simple(ss, (value))
+#define __loadsegment_ds(value) __loadsegment_simple(ds, (value))
+#define __loadsegment_es(value) __loadsegment_simple(es, (value))
+
+#ifdef CONFIG_X86_32
+
+/*
+ * On 32-bit systems, the hidden parts of FS and GS are unobservable if
+ * the selector is NULL, so there's no funny business here.
+ */
+#define __loadsegment_fs(value) __loadsegment_simple(fs, (value))
+#define __loadsegment_gs(value) __loadsegment_simple(gs, (value))
+
+#else
+
+static inline void __loadsegment_fs(unsigned short value)
+{
+	asm volatile("						\n"
+		     "1:	movw %0, %%fs			\n"
+		     "2:					\n"
+
+		     _ASM_EXTABLE_HANDLE(1b, 2b, ex_handler_clear_fs)
+
+		     : : "rm" (value) : "memory");
+}
+
+/* __loadsegment_gs is intentionally undefined.  Use load_gs_index instead. */
+
+#endif
+
+#define loadsegment(seg, value) __loadsegment_ ## seg (value)
+
 /*
  * Save a segment register away:
  */
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 6bfa36d..0881061 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -430,7 +430,7 @@ void load_percpu_segment(int cpu)
 #ifdef CONFIG_X86_32
 	loadsegment(fs, __KERNEL_PERCPU);
 #else
-	loadsegment(gs, 0);
+	__loadsegment_simple(gs, 0);
 	wrmsrl(MSR_GS_BASE, (unsigned long)per_cpu(irq_stack_union.gs_base, cpu));
 #endif
 	load_stack_canary_segment();
diff --git a/arch/x86/mm/extable.c b/arch/x86/mm/extable.c
index aaeda3f..4bb53b8 100644
--- a/arch/x86/mm/extable.c
+++ b/arch/x86/mm/extable.c
@@ -70,6 +70,16 @@ bool ex_handler_wrmsr_unsafe(const struct exception_table_entry *fixup,
 }
 EXPORT_SYMBOL(ex_handler_wrmsr_unsafe);
 
+bool ex_handler_clear_fs(const struct exception_table_entry *fixup,
+			 struct pt_regs *regs, int trapnr)
+{
+	if (static_cpu_has(X86_BUG_NULL_SEG))
+		asm volatile ("mov %0, %%fs" : : "rm" (__USER_DS));
+	asm volatile ("mov %0, %%fs" : : "rm" (0));
+	return ex_handler_default(fixup, regs, trapnr);
+}
+EXPORT_SYMBOL(ex_handler_clear_fs);
+
 bool ex_has_fault_handler(unsigned long ip)
 {
 	const struct exception_table_entry *e;

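The fixup rides on the extable custom-handler mechanism; its general
shape is the following (a sketch; ex_handler_example is hypothetical):

    bool ex_handler_example(const struct exception_table_entry *fixup,
                            struct pt_regs *regs, int trapnr)
    {
            /*
             * Instruction-specific cleanup goes here; then defer to the
             * default fixup, which resumes at the recovery label.
             */
            return ex_handler_default(fixup, regs, trapnr);
    }
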
^ permalink raw reply	[flat|nested] 19+ messages in thread

* [tip:x86/asm] x86/segments/64: When load_gs_index fails, clear the base
  2016-04-26 19:23 ` [PATCH 4/8] x86/segments/64: When load_gs_index " Andy Lutomirski
@ 2016-04-29 10:49   ` " tip-bot for Andy Lutomirski
  0 siblings, 0 replies; 19+ messages in thread
From: tip-bot for Andy Lutomirski @ 2016-04-29 10:49 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: bp, tglx, dvlasenk, mingo, torvalds, linux-kernel, peterz, luto,
	luto, brgerst, hpa

Commit-ID:  b038c842b385f1470f991078e71b7c5b084a7341
Gitweb:     http://git.kernel.org/tip/b038c842b385f1470f991078e71b7c5b084a7341
Author:     Andy Lutomirski <luto@kernel.org>
AuthorDate: Tue, 26 Apr 2016 12:23:27 -0700
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 29 Apr 2016 11:56:41 +0200

x86/segments/64: When load_gs_index fails, clear the base

On AMD CPUs, a failed load_gs_index currently may not clear the GS
base.  Fix it.
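
In C terms, the fixup is roughly the following (a sketch only; zap_gs()
is hypothetical and the real code is the ALTERNATIVE-patched assembly
at bad_gs below):

    static inline void zap_gs(void)
    {
            /*
             * On CPUs with X86_BUG_NULL_SEG, loading a null selector
             * leaves the old cached base in place, so load a real
             * selector first and then the null selector.
             */
            if (static_cpu_has_bug(X86_BUG_NULL_SEG))
                    asm volatile ("mov %0, %%gs" : : "rm" (__USER_DS));
            asm volatile ("mov %0, %%gs" : : "rm" (0));
    }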

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1a6c4d3a8a4e7be79ba448b42685e0321d50c14c.1461698311.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/entry/entry_64.S | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 1693c17..6344629 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -794,6 +794,12 @@ END(native_load_gs_index)
 	/* running with kernelgs */
 bad_gs:
 	SWAPGS					/* switch back to user gs */
+.macro ZAP_GS
+	/* This can't be a string because the preprocessor needs to see it. */
+	movl $__USER_DS, %eax
+	movl %eax, %gs
+.endm
+	ALTERNATIVE "", "ZAP_GS", X86_BUG_NULL_SEG
 	xorl	%eax, %eax
 	movl	%eax, %gs
 	jmp	2b

^ permalink raw reply	[flat|nested] 19+ messages in thread

* [tip:x86/asm] x86/arch_prctl/64: Remove FSBASE/GSBASE < 4G optimization
  2016-04-26 19:23 ` [PATCH 5/8] x86/arch_prctl/64: Remove FSBASE/GSBASE < 4G optimization Andy Lutomirski
  2016-04-26 20:50   ` Andi Kleen
@ 2016-04-29 10:50   ` " tip-bot for Andy Lutomirski
  1 sibling, 0 replies; 19+ messages in thread
From: tip-bot for Andy Lutomirski @ 2016-04-29 10:50 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: peterz, torvalds, mingo, tglx, bp, hpa, luto, luto, dvlasenk,
	brgerst, linux-kernel

Commit-ID:  731e33e39a5b95ad77017811b3ced32ecf9dc666
Gitweb:     http://git.kernel.org/tip/731e33e39a5b95ad77017811b3ced32ecf9dc666
Author:     Andy Lutomirski <luto@kernel.org>
AuthorDate: Tue, 26 Apr 2016 12:23:28 -0700
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 29 Apr 2016 11:56:41 +0200

x86/arch_prctl/64: Remove FSBASE/GSBASE < 4G optimization

As far as I know, the optimization doesn't work on any modern distro
because modern distros use high addresses for ASLR.  Remove it.

The ptrace code was either wrong or very strange, but the behavior
with this patch should be essentially identical to the behavior
without this patch unless user code goes out of its way to mislead
ptrace.

On newer CPUs, once the FSGSBASE instructions are enabled, we won't
want to use the optimized variant anyway.

This isn't actually much of a performance regression: it has no effect
on normal dynamically linked programs, and it's a considerable
simplification.  It also removes some nasty special cases from code
that is already way too full of special cases for comfort.
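
For context, the user-space entry point to this path is arch_prctl();
a minimal sketch (error handling omitted):

    #include <asm/prctl.h>          /* ARCH_SET_GS */
    #include <sys/syscall.h>
    #include <unistd.h>

    /*
     * With this patch, every base -- below or above 4GB -- takes the
     * same path: gsindex is cleared and MSR_KERNEL_GS_BASE is written
     * directly, instead of routing small bases through a GDT slot.
     */
    static long set_gs_base(unsigned long base)
    {
            return syscall(SYS_arch_prctl, ARCH_SET_GS, base);
    }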

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/dd1599b08866961dba9d2458faa6bbd7fba471d7.1461698311.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/include/asm/segment.h |  7 -----
 arch/x86/kernel/process_64.c   | 71 +++++++-----------------------------------
 arch/x86/kernel/ptrace.c       | 44 ++++----------------------
 3 files changed, 17 insertions(+), 105 deletions(-)

diff --git a/arch/x86/include/asm/segment.h b/arch/x86/include/asm/segment.h
index e1a4afd..1549caa0 100644
--- a/arch/x86/include/asm/segment.h
+++ b/arch/x86/include/asm/segment.h
@@ -208,13 +208,6 @@
 #define __USER_CS			(GDT_ENTRY_DEFAULT_USER_CS*8 + 3)
 #define __PER_CPU_SEG			(GDT_ENTRY_PER_CPU*8 + 3)
 
-/* TLS indexes for 64-bit - hardcoded in arch_prctl(): */
-#define FS_TLS				0
-#define GS_TLS				1
-
-#define GS_TLS_SEL			((GDT_ENTRY_TLS_MIN+GS_TLS)*8 + 3)
-#define FS_TLS_SEL			((GDT_ENTRY_TLS_MIN+FS_TLS)*8 + 3)
-
 #endif
 
 #ifndef CONFIG_PARAVIRT
diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
index 24d1b7f..864fe2c 100644
--- a/arch/x86/kernel/process_64.c
+++ b/arch/x86/kernel/process_64.c
@@ -136,25 +136,6 @@ void release_thread(struct task_struct *dead_task)
 	}
 }
 
-static inline void set_32bit_tls(struct task_struct *t, int tls, u32 addr)
-{
-	struct user_desc ud = {
-		.base_addr = addr,
-		.limit = 0xfffff,
-		.seg_32bit = 1,
-		.limit_in_pages = 1,
-		.useable = 1,
-	};
-	struct desc_struct *desc = t->thread.tls_array;
-	desc += tls;
-	fill_ldt(desc, &ud);
-}
-
-static inline u32 read_32bit_tls(struct task_struct *t, int tls)
-{
-	return get_desc_base(&t->thread.tls_array[tls]);
-}
-
 int copy_thread_tls(unsigned long clone_flags, unsigned long sp,
 		unsigned long arg, struct task_struct *p, unsigned long tls)
 {
@@ -554,25 +535,12 @@ long do_arch_prctl(struct task_struct *task, int code, unsigned long addr)
 		if (addr >= TASK_SIZE_OF(task))
 			return -EPERM;
 		cpu = get_cpu();
-		/* handle small bases via the GDT because that's faster to
-		   switch. */
-		if (addr <= 0xffffffff) {
-			set_32bit_tls(task, GS_TLS, addr);
-			if (doit) {
-				load_TLS(&task->thread, cpu);
-				load_gs_index(GS_TLS_SEL);
-			}
-			task->thread.gsindex = GS_TLS_SEL;
-			task->thread.gs = 0;
-		} else {
-			task->thread.gsindex = 0;
-			task->thread.gs = addr;
-			if (doit) {
-				load_gs_index(0);
-				ret = wrmsrl_safe(MSR_KERNEL_GS_BASE, addr);
-			}
+		task->thread.gsindex = 0;
+		task->thread.gs = addr;
+		if (doit) {
+			load_gs_index(0);
+			ret = wrmsrl_safe(MSR_KERNEL_GS_BASE, addr);
 		}
 		put_cpu();
 		break;
 	case ARCH_SET_FS:
 		/* Not strictly needed for fs, but do it for symmetry
@@ -580,25 +548,12 @@ long do_arch_prctl(struct task_struct *task, int code, unsigned long addr)
 		if (addr >= TASK_SIZE_OF(task))
 			return -EPERM;
 		cpu = get_cpu();
-		/* handle small bases via the GDT because that's faster to
-		   switch. */
-		if (addr <= 0xffffffff) {
-			set_32bit_tls(task, FS_TLS, addr);
-			if (doit) {
-				load_TLS(&task->thread, cpu);
-				loadsegment(fs, FS_TLS_SEL);
-			}
-			task->thread.fsindex = FS_TLS_SEL;
-			task->thread.fs = 0;
-		} else {
-			task->thread.fsindex = 0;
-			task->thread.fs = addr;
-			if (doit) {
-				/* set the selector to 0 to not confuse
-				   __switch_to */
-				loadsegment(fs, 0);
-				ret = wrmsrl_safe(MSR_FS_BASE, addr);
-			}
+		task->thread.fsindex = 0;
+		task->thread.fs = addr;
+		if (doit) {
+			/* set the selector to 0 to not confuse __switch_to */
+			loadsegment(fs, 0);
+			ret = wrmsrl_safe(MSR_FS_BASE, addr);
 		}
 		put_cpu();
 		break;
@@ -606,8 +561,6 @@ long do_arch_prctl(struct task_struct *task, int code, unsigned long addr)
 		unsigned long base;
 		if (doit)
 			rdmsrl(MSR_FS_BASE, base);
-		else if (task->thread.fsindex == FS_TLS_SEL)
-			base = read_32bit_tls(task, FS_TLS);
 		else
 			base = task->thread.fs;
 		ret = put_user(base, (unsigned long __user *)addr);
@@ -617,8 +570,6 @@ long do_arch_prctl(struct task_struct *task, int code, unsigned long addr)
 		unsigned long base;
 		if (doit)
 			rdmsrl(MSR_KERNEL_GS_BASE, base);
-		else if (task->thread.gsindex == GS_TLS_SEL)
-			base = read_32bit_tls(task, GS_TLS);
 		else
 			base = task->thread.gs;
 		ret = put_user(base, (unsigned long __user *)addr);
diff --git a/arch/x86/kernel/ptrace.c b/arch/x86/kernel/ptrace.c
index 0f4d2a5..e72ab40 100644
--- a/arch/x86/kernel/ptrace.c
+++ b/arch/x86/kernel/ptrace.c
@@ -303,29 +303,11 @@ static int set_segment_reg(struct task_struct *task,
 
 	switch (offset) {
 	case offsetof(struct user_regs_struct,fs):
-		/*
-		 * If this is setting fs as for normal 64-bit use but
-		 * setting fs_base has implicitly changed it, leave it.
-		 */
-		if ((value == FS_TLS_SEL && task->thread.fsindex == 0 &&
-		     task->thread.fs != 0) ||
-		    (value == 0 && task->thread.fsindex == FS_TLS_SEL &&
-		     task->thread.fs == 0))
-			break;
 		task->thread.fsindex = value;
 		if (task == current)
 			loadsegment(fs, task->thread.fsindex);
 		break;
 	case offsetof(struct user_regs_struct,gs):
-		/*
-		 * If this is setting gs as for normal 64-bit use but
-		 * setting gs_base has implicitly changed it, leave it.
-		 */
-		if ((value == GS_TLS_SEL && task->thread.gsindex == 0 &&
-		     task->thread.gs != 0) ||
-		    (value == 0 && task->thread.gsindex == GS_TLS_SEL &&
-		     task->thread.gs == 0))
-			break;
 		task->thread.gsindex = value;
 		if (task == current)
 			load_gs_index(task->thread.gsindex);
@@ -453,31 +435,17 @@ static unsigned long getreg(struct task_struct *task, unsigned long offset)
 #ifdef CONFIG_X86_64
 	case offsetof(struct user_regs_struct, fs_base): {
 		/*
-		 * do_arch_prctl may have used a GDT slot instead of
-		 * the MSR.  To userland, it appears the same either
-		 * way, except the %fs segment selector might not be 0.
+		 * XXX: This will not behave as expected if called on
+		 * current or if fsindex != 0.
 		 */
-		unsigned int seg = task->thread.fsindex;
-		if (task->thread.fs != 0)
-			return task->thread.fs;
-		if (task == current)
-			asm("movl %%fs,%0" : "=r" (seg));
-		if (seg != FS_TLS_SEL)
-			return 0;
-		return get_desc_base(&task->thread.tls_array[FS_TLS]);
+		return task->thread.fs;
 	}
 	case offsetof(struct user_regs_struct, gs_base): {
 		/*
-		 * Exactly the same here as the %fs handling above.
+		 * XXX: This will not behave as expected if called on
+		 * current or if gsindex != 0.
 		 */
-		unsigned int seg = task->thread.gsindex;
-		if (task->thread.gs != 0)
-			return task->thread.gs;
-		if (task == current)
-			asm("movl %%gs,%0" : "=r" (seg));
-		if (seg != GS_TLS_SEL)
-			return 0;
-		return get_desc_base(&task->thread.tls_array[GS_TLS]);
+		return task->thread.gs;
 	}
 #endif
 	}

^ permalink raw reply	[flat|nested] 19+ messages in thread

* [tip:x86/asm] x86/asm/64: Rename thread_struct's fs and gs to fsbase and gsbase
  2016-04-26 19:23 ` [PATCH 6/8] x86/asm/64: Rename thread_struct's fs and gs to fsbase and gsbase Andy Lutomirski
@ 2016-04-29 10:50   ` " tip-bot for Andy Lutomirski
  0 siblings, 0 replies; 19+ messages in thread
From: tip-bot for Andy Lutomirski @ 2016-04-29 10:50 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, bp, hpa, dvlasenk, tglx, luto, brgerst, torvalds,
	mingo, peterz, luto

Commit-ID:  296f781a4b7801ad9c1c0219f9e87b6c25e196fe
Gitweb:     http://git.kernel.org/tip/296f781a4b7801ad9c1c0219f9e87b6c25e196fe
Author:     Andy Lutomirski <luto@kernel.org>
AuthorDate: Tue, 26 Apr 2016 12:23:29 -0700
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 29 Apr 2016 11:56:42 +0200

x86/asm/64: Rename thread_struct's fs and gs to fsbase and gsbase

Unlike ds and es, these are base addresses, not selectors.  Rename
them so their meaning is more obvious.

On x86_32, the fields are still called fs and gs.  Fixing that could
make sense as a future cleanup.

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/69a18a51c4cba0ce29a241e570fc618ad721d908.1461698311.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/include/asm/elf.h       |  6 +++---
 arch/x86/include/asm/processor.h | 11 +++++++++--
 arch/x86/kernel/process_64.c     | 30 +++++++++++++++---------------
 arch/x86/kernel/ptrace.c         |  8 ++++----
 arch/x86/kvm/svm.c               |  2 +-
 5 files changed, 32 insertions(+), 25 deletions(-)

diff --git a/arch/x86/include/asm/elf.h b/arch/x86/include/asm/elf.h
index 15340e3..fea7724 100644
--- a/arch/x86/include/asm/elf.h
+++ b/arch/x86/include/asm/elf.h
@@ -176,7 +176,7 @@ static inline void elf_common_init(struct thread_struct *t,
 	regs->si = regs->di = regs->bp = 0;
 	regs->r8 = regs->r9 = regs->r10 = regs->r11 = 0;
 	regs->r12 = regs->r13 = regs->r14 = regs->r15 = 0;
-	t->fs = t->gs = 0;
+	t->fsbase = t->gsbase = 0;
 	t->fsindex = t->gsindex = 0;
 	t->ds = t->es = ds;
 }
@@ -226,8 +226,8 @@ do {								\
 	(pr_reg)[18] = (regs)->flags;				\
 	(pr_reg)[19] = (regs)->sp;				\
 	(pr_reg)[20] = (regs)->ss;				\
-	(pr_reg)[21] = current->thread.fs;			\
-	(pr_reg)[22] = current->thread.gs;			\
+	(pr_reg)[21] = current->thread.fsbase;			\
+	(pr_reg)[22] = current->thread.gsbase;			\
 	asm("movl %%ds,%0" : "=r" (v)); (pr_reg)[23] = v;	\
 	asm("movl %%es,%0" : "=r" (v)); (pr_reg)[24] = v;	\
 	asm("movl %%fs,%0" : "=r" (v)); (pr_reg)[25] = v;	\
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index 9264476..9251aa9 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -388,9 +388,16 @@ struct thread_struct {
 	unsigned long		ip;
 #endif
 #ifdef CONFIG_X86_64
-	unsigned long		fs;
+	unsigned long		fsbase;
+	unsigned long		gsbase;
+#else
+	/*
+	 * XXX: this could presumably be unsigned short.  Alternatively,
+	 * 32-bit kernels could be taught to use fsindex instead.
+	 */
+	unsigned long fs;
+	unsigned long gs;
 #endif
-	unsigned long		gs;
 
 	/* Save middle states of ptrace breakpoints */
 	struct perf_event	*ptrace_bps[HBP_NUM];
diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
index 864fe2c..4285f6a 100644
--- a/arch/x86/kernel/process_64.c
+++ b/arch/x86/kernel/process_64.c
@@ -150,9 +150,9 @@ int copy_thread_tls(unsigned long clone_flags, unsigned long sp,
 	p->thread.io_bitmap_ptr = NULL;
 
 	savesegment(gs, p->thread.gsindex);
-	p->thread.gs = p->thread.gsindex ? 0 : me->thread.gs;
+	p->thread.gsbase = p->thread.gsindex ? 0 : me->thread.gsbase;
 	savesegment(fs, p->thread.fsindex);
-	p->thread.fs = p->thread.fsindex ? 0 : me->thread.fs;
+	p->thread.fsbase = p->thread.fsindex ? 0 : me->thread.fsbase;
 	savesegment(es, p->thread.es);
 	savesegment(ds, p->thread.ds);
 	memset(p->thread.ptrace_bps, 0, sizeof(p->thread.ptrace_bps));
@@ -329,18 +329,18 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
 	 * stronger guarantees.)
 	 *
 	 * As an invariant,
-	 * (fs != 0 && fsindex != 0) || (gs != 0 && gsindex != 0) is
+	 * (fsbase != 0 && fsindex != 0) || (gsbase != 0 && gsindex != 0) is
 	 * impossible.
 	 */
 	if (next->fsindex) {
 		/* Loading a nonzero value into FS sets the index and base. */
 		loadsegment(fs, next->fsindex);
 	} else {
-		if (next->fs) {
+		if (next->fsbase) {
 			/* Next index is zero but next base is nonzero. */
 			if (prev_fsindex)
 				loadsegment(fs, 0);
-			wrmsrl(MSR_FS_BASE, next->fs);
+			wrmsrl(MSR_FS_BASE, next->fsbase);
 		} else {
 			/* Next base and index are both zero. */
 			if (static_cpu_has_bug(X86_BUG_NULL_SEG)) {
@@ -356,7 +356,7 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
 				 * didn't change the base, then the base is
 				 * also zero and we don't need to do anything.
 				 */
-				if (prev->fs || prev_fsindex)
+				if (prev->fsbase || prev_fsindex)
 					loadsegment(fs, 0);
 			}
 		}
@@ -369,18 +369,18 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
 	 * us.
 	 */
 	if (prev_fsindex)
-		prev->fs = 0;
+		prev->fsbase = 0;
 	prev->fsindex = prev_fsindex;
 
 	if (next->gsindex) {
 		/* Loading a nonzero value into GS sets the index and base. */
 		load_gs_index(next->gsindex);
 	} else {
-		if (next->gs) {
+		if (next->gsbase) {
 			/* Next index is zero but next base is nonzero. */
 			if (prev_gsindex)
 				load_gs_index(0);
-			wrmsrl(MSR_KERNEL_GS_BASE, next->gs);
+			wrmsrl(MSR_KERNEL_GS_BASE, next->gsbase);
 		} else {
 			/* Next base and index are both zero. */
 			if (static_cpu_has_bug(X86_BUG_NULL_SEG)) {
@@ -400,7 +400,7 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
 				 * didn't change the base, then the base is
 				 * also zero and we don't need to do anything.
 				 */
-				if (prev->gs || prev_gsindex)
+				if (prev->gsbase || prev_gsindex)
 					load_gs_index(0);
 			}
 		}
@@ -413,7 +413,7 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
 	 * us.
 	 */
 	if (prev_gsindex)
-		prev->gs = 0;
+		prev->gsbase = 0;
 	prev->gsindex = prev_gsindex;
 
 	switch_fpu_finish(next_fpu, fpu_switch);
@@ -536,7 +536,7 @@ long do_arch_prctl(struct task_struct *task, int code, unsigned long addr)
 			return -EPERM;
 		cpu = get_cpu();
 		task->thread.gsindex = 0;
-		task->thread.gs = addr;
+		task->thread.gsbase = addr;
 		if (doit) {
 			load_gs_index(0);
 			ret = wrmsrl_safe(MSR_KERNEL_GS_BASE, addr);
@@ -549,7 +549,7 @@ long do_arch_prctl(struct task_struct *task, int code, unsigned long addr)
 			return -EPERM;
 		cpu = get_cpu();
 		task->thread.fsindex = 0;
-		task->thread.fs = addr;
+		task->thread.fsbase = addr;
 		if (doit) {
 			/* set the selector to 0 to not confuse __switch_to */
 			loadsegment(fs, 0);
@@ -562,7 +562,7 @@ long do_arch_prctl(struct task_struct *task, int code, unsigned long addr)
 		if (doit)
 			rdmsrl(MSR_FS_BASE, base);
 		else
-			base = task->thread.fs;
+			base = task->thread.fsbase;
 		ret = put_user(base, (unsigned long __user *)addr);
 		break;
 	}
@@ -571,7 +571,7 @@ long do_arch_prctl(struct task_struct *task, int code, unsigned long addr)
 		if (doit)
 			rdmsrl(MSR_KERNEL_GS_BASE, base);
 		else
-			base = task->thread.gs;
+			base = task->thread.gsbase;
 		ret = put_user(base, (unsigned long __user *)addr);
 		break;
 	}
diff --git a/arch/x86/kernel/ptrace.c b/arch/x86/kernel/ptrace.c
index e72ab40..e60ef91 100644
--- a/arch/x86/kernel/ptrace.c
+++ b/arch/x86/kernel/ptrace.c
@@ -399,7 +399,7 @@ static int putreg(struct task_struct *child,
 		 * to set either thread.fs or thread.fsindex and the
 		 * corresponding GDT slot.
 		 */
-		if (child->thread.fs != value)
+		if (child->thread.fsbase != value)
 			return do_arch_prctl(child, ARCH_SET_FS, value);
 		return 0;
 	case offsetof(struct user_regs_struct,gs_base):
@@ -408,7 +408,7 @@ static int putreg(struct task_struct *child,
 		 */
 		if (value >= TASK_SIZE_OF(child))
 			return -EIO;
-		if (child->thread.gs != value)
+		if (child->thread.gsbase != value)
 			return do_arch_prctl(child, ARCH_SET_GS, value);
 		return 0;
 #endif
@@ -438,14 +438,14 @@ static unsigned long getreg(struct task_struct *task, unsigned long offset)
 		 * XXX: This will not behave as expected if called on
 		 * current or if fsindex != 0.
 		 */
-		return task->thread.fs;
+		return task->thread.fsbase;
 	}
 	case offsetof(struct user_regs_struct, gs_base): {
 		/*
 		 * XXX: This will not behave as expected if called on
 		 * current or if gsindex != 0.
 		 */
-		return task->thread.gs;
+		return task->thread.gsbase;
 	}
 #endif
 	}
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 31346a3..fafd720 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1254,7 +1254,7 @@ static void svm_vcpu_put(struct kvm_vcpu *vcpu)
 	kvm_load_ldt(svm->host.ldt);
 #ifdef CONFIG_X86_64
 	loadsegment(fs, svm->host.fs);
-	wrmsrl(MSR_KERNEL_GS_BASE, current->thread.gs);
+	wrmsrl(MSR_KERNEL_GS_BASE, current->thread.gsbase);
 	load_gs_index(svm->host.gs);
 #else
 #ifdef CONFIG_X86_32_LAZY_GS

^ permalink raw reply	[flat|nested] 19+ messages in thread

* [tip:x86/asm] x86/tls: Synchronize segment registers in set_thread_area()
  2016-04-26 19:23 ` [PATCH 7/8] x86/tls: Synchronize segment registers in set_thread_area Andy Lutomirski
@ 2016-04-29 10:51   ` tip-bot for Andy Lutomirski
  0 siblings, 0 replies; 19+ messages in thread
From: tip-bot for Andy Lutomirski @ 2016-04-29 10:51 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: mingo, tglx, brgerst, torvalds, bp, dvlasenk, luto, peterz, luto,
	linux-kernel, hpa

Commit-ID:  c9867f863e19dd07c6085b457f6775047924c3b5
Gitweb:     http://git.kernel.org/tip/c9867f863e19dd07c6085b457f6775047924c3b5
Author:     Andy Lutomirski <luto@kernel.org>
AuthorDate: Tue, 26 Apr 2016 12:23:30 -0700
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 29 Apr 2016 11:56:42 +0200

x86/tls: Synchronize segment registers in set_thread_area()

The current behavior of set_thread_area() when it modifies a segment that is
currently loaded is a bit confusing.

If CS [1] or SS is modified, the change will take effect on return
to userspace because CS and SS are fundamentally always reloaded on
return to userspace.

Similarly, on 32-bit kernels, if DS, ES, FS, or (depending on
configuration) GS refers to a modified segment, the change will take
effect immediately on return to user mode because the entry code
reloads these registers.

If set_thread_area() modifies DS, ES [2], FS, or GS on 64-bit kernels or
GS on 32-bit lazy-GS [3] kernels, however, the segment registers
will be left alone until something (most likely a context switch)
causes them to be reloaded.  This means that behavior visible to
user space is inconsistent.

If set_thread_area() is implicitly called via CLONE_SETTLS, then all
segment registers will be reloaded before the thread starts because
CLONE_SETTLS happens before the initial context switch into the
newly created thread.

Empirically, glibc requires the immediate reload on CLONE_SETTLS --
32-bit glibc on my system does *not* manually reload GS when
creating a new thread.

Before enabling FSGSBASE, we need to figure out what the behavior
will be, as FSGSBASE requires that we reconsider our behavior when,
e.g., GS and GSBASE are out of sync in user mode.  Given that we
must preserve the existing behavior of CLONE_SETTLS, it makes sense
to me that we simply extend similar behavior to all invocations
of set_thread_area().

This patch explicitly updates any segment register referring to a
segment that is targeted by set_thread_area().  If set_thread_area()
deletes the segment, then the segment register will be nulled out.

[1] This can't actually happen since 0e58af4e1d21 ("x86/tls:
    Disallow unusual TLS segments") but, if it did, this is how it
    would behave.

[2] I strongly doubt that any existing non-malicious program loads a
    TLS segment into DS or ES on a 64-bit kernel because the context
    switch code was badly broken until recently, but that's not an
    excuse to leave the current code alone.

[3] One way or another, that config option should go away.  Yuck!
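
Concretely, the newly defined behavior can be exercised from user space
like this (a sketch; the entry number and base are placeholders):

    #include <asm/ldt.h>            /* struct user_desc */
    #include <string.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static void move_tls_base(int entry, unsigned long new_base)
    {
            struct user_desc d;

            memset(&d, 0, sizeof(d));
            d.entry_number = entry;         /* slot a live segment uses */
            d.base_addr = new_base;
            d.limit = 0xfffff;
            d.seg_32bit = 1;
            d.limit_in_pages = 1;

            syscall(SYS_set_thread_area, &d);
            /*
             * Any of DS/ES/FS/GS currently holding (entry << 3) | 3 is
             * refreshed here, so the new base is visible immediately,
             * with no context switch in between.
             */
    }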

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/27d119b0d396e9b82009e40dff8333a249038225.1461698311.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/kernel/tls.c | 42 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 42 insertions(+)

diff --git a/arch/x86/kernel/tls.c b/arch/x86/kernel/tls.c
index 7fc5e84..9692a5e 100644
--- a/arch/x86/kernel/tls.c
+++ b/arch/x86/kernel/tls.c
@@ -114,6 +114,7 @@ int do_set_thread_area(struct task_struct *p, int idx,
 		       int can_allocate)
 {
 	struct user_desc info;
+	unsigned short __maybe_unused sel, modified_sel;
 
 	if (copy_from_user(&info, u_info, sizeof(info)))
 		return -EFAULT;
@@ -141,6 +142,47 @@ int do_set_thread_area(struct task_struct *p, int idx,
 
 	set_tls_desc(p, idx, &info, 1);
 
+	/*
+	 * If DS, ES, FS, or GS points to the modified segment, forcibly
+	 * refresh it.  Only needed on x86_64 because x86_32 reloads them
+	 * on return to user mode.
+	 */
+	modified_sel = (idx << 3) | 3;
+
+	if (p == current) {
+#ifdef CONFIG_X86_64
+		savesegment(ds, sel);
+		if (sel == modified_sel)
+			loadsegment(ds, sel);
+
+		savesegment(es, sel);
+		if (sel == modified_sel)
+			loadsegment(es, sel);
+
+		savesegment(fs, sel);
+		if (sel == modified_sel)
+			loadsegment(fs, sel);
+
+		savesegment(gs, sel);
+		if (sel == modified_sel)
+			load_gs_index(sel);
+#endif
+
+#ifdef CONFIG_X86_32_LAZY_GS
+		savesegment(gs, sel);
+		if (sel == modified_sel)
+			loadsegment(gs, sel);
+#endif
+	} else {
+#ifdef CONFIG_X86_64
+		if (p->thread.fsindex == modified_sel)
+			p->thread.fsbase = info.base_addr;
+
+		if (p->thread.gsindex == modified_sel)
+			p->thread.gsbase = info.base_addr;
+#endif
+	}
+
 	return 0;
 }
 

^ permalink raw reply	[flat|nested] 19+ messages in thread

* [tip:x86/asm] selftests/x86/ldt_gdt: Test set_thread_area() deletion of an active segment
  2016-04-26 19:23 ` [PATCH 8/8] selftests/x86/ldt_gdt: Test set_thread_area deletion of an active segment Andy Lutomirski
@ 2016-04-29 10:51   ` " tip-bot for Andy Lutomirski
  0 siblings, 0 replies; 19+ messages in thread
From: tip-bot for Andy Lutomirski @ 2016-04-29 10:51 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: brgerst, tglx, linux-kernel, hpa, bp, mingo, peterz, luto, luto,
	torvalds, dvlasenk

Commit-ID:  d63f4b5269e93759d3036932890c96d7a8639e5b
Gitweb:     http://git.kernel.org/tip/d63f4b5269e93759d3036932890c96d7a8639e5b
Author:     Andy Lutomirski <luto@kernel.org>
AuthorDate: Tue, 26 Apr 2016 12:23:31 -0700
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 29 Apr 2016 11:56:42 +0200

selftests/x86/ldt_gdt: Test set_thread_area() deletion of an active segment

Now that set_thread_area() is supposed to give deterministic behavior
when it modifies in-use segments, test it.

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/f2bc11af1ee1a0f815ed910840cbdba06b640a20.1461698311.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 tools/testing/selftests/x86/ldt_gdt.c | 250 ++++++++++++++++++++++++++++++++++
 1 file changed, 250 insertions(+)

diff --git a/tools/testing/selftests/x86/ldt_gdt.c b/tools/testing/selftests/x86/ldt_gdt.c
index 31a3035..4af4707 100644
--- a/tools/testing/selftests/x86/ldt_gdt.c
+++ b/tools/testing/selftests/x86/ldt_gdt.c
@@ -21,6 +21,9 @@
 #include <pthread.h>
 #include <sched.h>
 #include <linux/futex.h>
+#include <sys/mman.h>
+#include <asm/prctl.h>
+#include <sys/prctl.h>
 
 #define AR_ACCESSED		(1<<8)
 
@@ -44,6 +47,12 @@
 
 static int nerrs;
 
+/* Points to an array of 1024 ints, each holding its own index. */
+static const unsigned int *counter_page;
+static struct user_desc *low_user_desc;
+static struct user_desc *low_user_desc_clear;  /* used to delete the GDT entry */
+static int gdt_entry_num;
+
 static void check_invalid_segment(uint16_t index, int ldt)
 {
 	uint32_t has_limit = 0, has_ar = 0, limit, ar;
@@ -561,16 +570,257 @@ static void do_exec_test(void)
 	}
 }
 
+static void setup_counter_page(void)
+{
+	unsigned int *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
+			 MAP_ANONYMOUS | MAP_PRIVATE | MAP_32BIT, -1, 0);
+	if (page == MAP_FAILED)
+		err(1, "mmap");
+
+	for (int i = 0; i < 1024; i++)
+		page[i] = i;
+	counter_page = page;
+}
+
+static int invoke_set_thread_area(void)
+{
+	int ret;
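+	/* 243 is __NR_set_thread_area in the 32-bit syscall ABI. */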
+	asm volatile ("int $0x80"
+		      : "=a" (ret), "+m" (low_user_desc) :
+			"a" (243), "b" (low_user_desc)
+		      : "flags");
+	return ret;
+}
+
+static void setup_low_user_desc(void)
+{
+	low_user_desc = mmap(NULL, 2 * sizeof(struct user_desc),
+			     PROT_READ | PROT_WRITE,
+			     MAP_ANONYMOUS | MAP_PRIVATE | MAP_32BIT, -1, 0);
+	if (low_user_desc == MAP_FAILED)
+		err(1, "mmap");
+
+	low_user_desc->entry_number	= -1;
+	low_user_desc->base_addr	= (unsigned long)&counter_page[1];
+	low_user_desc->limit		= 0xfffff;
+	low_user_desc->seg_32bit	= 1;
+	low_user_desc->contents		= 0; /* Data, grow-up */
+	low_user_desc->read_exec_only	= 0;
+	low_user_desc->limit_in_pages	= 1;
+	low_user_desc->seg_not_present	= 0;
+	low_user_desc->useable		= 0;
+
+	if (invoke_set_thread_area() == 0) {
+		gdt_entry_num = low_user_desc->entry_number;
+		printf("[NOTE]\tset_thread_area is available; will use GDT index %d\n", gdt_entry_num);
+	} else {
+		printf("[NOTE]\tset_thread_area is unavailable\n");
+	}
+
+	low_user_desc_clear = low_user_desc + 1;
+	low_user_desc_clear->entry_number = gdt_entry_num;
+	low_user_desc_clear->read_exec_only = 1;
+	low_user_desc_clear->seg_not_present = 1;
+}
+
+static void test_gdt_invalidation(void)
+{
+	if (!gdt_entry_num)
+		return;	/* 64-bit only system -- we can't use set_thread_area */
+
+	unsigned short prev_sel;
+	unsigned short sel;
+	unsigned int eax;
+	const char *result;
+#ifdef __x86_64__
+	unsigned long saved_base;
+	unsigned long new_base;
+#endif
+
+	/* Test DS */
+	invoke_set_thread_area();
+	eax = 243;
+	sel = (gdt_entry_num << 3) | 3;
+	asm volatile ("movw %%ds, %[prev_sel]\n\t"
+		      "movw %[sel], %%ds\n\t"
+#ifdef __i386__
+		      "pushl %%ebx\n\t"
+#endif
+		      "movl %[arg1], %%ebx\n\t"
+		      "int $0x80\n\t"	/* Should invalidate ds */
+#ifdef __i386__
+		      "popl %%ebx\n\t"
+#endif
+		      "movw %%ds, %[sel]\n\t"
+		      "movw %[prev_sel], %%ds"
+		      : [prev_sel] "=&r" (prev_sel), [sel] "+r" (sel),
+			"+a" (eax)
+		      : "m" (low_user_desc_clear),
+			[arg1] "r" ((unsigned int)(unsigned long)low_user_desc_clear)
+		      : "flags");
+
+	if (sel != 0) {
+		result = "FAIL";
+		nerrs++;
+	} else {
+		result = "OK";
+	}
+	printf("[%s]\tInvalidate DS with set_thread_area: new DS = 0x%hx\n",
+	       result, sel);
+
+	/* Test ES */
+	invoke_set_thread_area();
+	eax = 243;
+	sel = (gdt_entry_num << 3) | 3;
+	asm volatile ("movw %%es, %[prev_sel]\n\t"
+		      "movw %[sel], %%es\n\t"
+#ifdef __i386__
+		      "pushl %%ebx\n\t"
+#endif
+		      "movl %[arg1], %%ebx\n\t"
+		      "int $0x80\n\t"	/* Should invalidate es */
+#ifdef __i386__
+		      "popl %%ebx\n\t"
+#endif
+		      "movw %%es, %[sel]\n\t"
+		      "movw %[prev_sel], %%es"
+		      : [prev_sel] "=&r" (prev_sel), [sel] "+r" (sel),
+			"+a" (eax)
+		      : "m" (low_user_desc_clear),
+			[arg1] "r" ((unsigned int)(unsigned long)low_user_desc_clear)
+		      : "flags");
+
+	if (sel != 0) {
+		result = "FAIL";
+		nerrs++;
+	} else {
+		result = "OK";
+	}
+	printf("[%s]\tInvalidate ES with set_thread_area: new ES = 0x%hx\n",
+	       result, sel);
+
+	/* Test FS */
+	invoke_set_thread_area();
+	eax = 243;
+	sel = (gdt_entry_num << 3) | 3;
+#ifdef __x86_64__
+	syscall(SYS_arch_prctl, ARCH_GET_FS, &saved_base);
+#endif
+	asm volatile ("movw %%fs, %[prev_sel]\n\t"
+		      "movw %[sel], %%fs\n\t"
+#ifdef __i386__
+		      "pushl %%ebx\n\t"
+#endif
+		      "movl %[arg1], %%ebx\n\t"
+		      "int $0x80\n\t"	/* Should invalidate fs */
+#ifdef __i386__
+		      "popl %%ebx\n\t"
+#endif
+		      "movw %%fs, %[sel]\n\t"
+		      : [prev_sel] "=&r" (prev_sel), [sel] "+r" (sel),
+			"+a" (eax)
+		      : "m" (low_user_desc_clear),
+			[arg1] "r" ((unsigned int)(unsigned long)low_user_desc_clear)
+		      : "flags");
+
+#ifdef __x86_64__
+	syscall(SYS_arch_prctl, ARCH_GET_FS, &new_base);
+#endif
+
+	/* Restore FS/BASE for glibc */
+	asm volatile ("movw %[prev_sel], %%fs" : : [prev_sel] "rm" (prev_sel));
+#ifdef __x86_64__
+	if (saved_base)
+		syscall(SYS_arch_prctl, ARCH_SET_FS, saved_base);
+#endif
+
+	if (sel != 0) {
+		result = "FAIL";
+		nerrs++;
+	} else {
+		result = "OK";
+	}
+	printf("[%s]\tInvalidate FS with set_thread_area: new FS = 0x%hx\n",
+	       result, sel);
+
+#ifdef __x86_64__
+	if (sel == 0 && new_base != 0) {
+		nerrs++;
+		printf("[FAIL]\tNew FSBASE was 0x%lx\n", new_base);
+	} else {
+		printf("[OK]\tNew FSBASE was zero\n");
+	}
+#endif
+
+	/* Test GS */
+	invoke_set_thread_area();
+	eax = 243;
+	sel = (gdt_entry_num << 3) | 3;
+#ifdef __x86_64__
+	syscall(SYS_arch_prctl, ARCH_GET_GS, &saved_base);
+#endif
+	asm volatile ("movw %%gs, %[prev_sel]\n\t"
+		      "movw %[sel], %%gs\n\t"
+#ifdef __i386__
+		      "pushl %%ebx\n\t"
+#endif
+		      "movl %[arg1], %%ebx\n\t"
+		      "int $0x80\n\t"	/* Should invalidate gs */
+#ifdef __i386__
+		      "popl %%ebx\n\t"
+#endif
+		      "movw %%gs, %[sel]\n\t"
+		      : [prev_sel] "=&r" (prev_sel), [sel] "+r" (sel),
+			"+a" (eax)
+		      : "m" (low_user_desc_clear),
+			[arg1] "r" ((unsigned int)(unsigned long)low_user_desc_clear)
+		      : "flags");
+
+#ifdef __x86_64__
+	syscall(SYS_arch_prctl, ARCH_GET_GS, &new_base);
+#endif
+
+	/* Restore GS/BASE for glibc */
+	asm volatile ("movw %[prev_sel], %%gs" : : [prev_sel] "rm" (prev_sel));
+#ifdef __x86_64__
+	if (saved_base)
+		syscall(SYS_arch_prctl, ARCH_SET_GS, saved_base);
+#endif
+
+	if (sel != 0) {
+		result = "FAIL";
+		nerrs++;
+	} else {
+		result = "OK";
+	}
+	printf("[%s]\tInvalidate GS with set_thread_area: new GS = 0x%hx\n",
+	       result, sel);
+
+#ifdef __x86_64__
+	if (sel == 0 && new_base != 0) {
+		nerrs++;
+		printf("[FAIL]\tNew GSBASE was 0x%lx\n", new_base);
+	} else {
+		printf("[OK]\tNew GSBASE was zero\n");
+	}
+#endif
+}
+
 int main(int argc, char **argv)
 {
 	if (argc == 1 && !strcmp(argv[0], "ldt_gdt_test_exec"))
 		return finish_exec_test();
 
+	setup_counter_page();
+	setup_low_user_desc();
+
 	do_simple_tests();
 
 	do_multicpu_tests();
 
 	do_exec_test();
 
+	test_gdt_invalidation();
+
 	return nerrs ? 1 : 0;
 }

^ permalink raw reply	[flat|nested] 19+ messages in thread

Thread overview: 19+ messages
2016-04-26 19:23 [PATCH 0/8] x86: A round of x86 segmentation improvements Andy Lutomirski
2016-04-26 19:23 ` [PATCH 1/8] x86/asm: Stop depending on ptrace.h in alternative.h Andy Lutomirski
2016-04-29 10:48   ` [tip:x86/asm] " tip-bot for Andy Lutomirski
2016-04-26 19:23 ` [PATCH 2/8] x86/asm: Make asm/alternative.h safe from assembly Andy Lutomirski
2016-04-29 10:49   ` [tip:x86/asm] " tip-bot for Andy Lutomirski
2016-04-26 19:23 ` [PATCH 3/8] x86/segments/64: When loadsegment(fs, ...) fails, clear the base Andy Lutomirski
2016-04-29 10:49   ` [tip:x86/asm] " tip-bot for Andy Lutomirski
2016-04-26 19:23 ` [PATCH 4/8] x86/segments/64: When load_gs_index " Andy Lutomirski
2016-04-29 10:49   ` [tip:x86/asm] " tip-bot for Andy Lutomirski
2016-04-26 19:23 ` [PATCH 5/8] x86/arch_prctl/64: Remove FSBASE/GSBASE < 4G optimization Andy Lutomirski
2016-04-26 20:50   ` Andi Kleen
2016-04-26 22:33     ` Andy Lutomirski
2016-04-29 10:50   ` [tip:x86/asm] " tip-bot for Andy Lutomirski
2016-04-26 19:23 ` [PATCH 6/8] x86/asm/64: Rename thread_struct's fs and gs to fsbase and gsbase Andy Lutomirski
2016-04-29 10:50   ` [tip:x86/asm] " tip-bot for Andy Lutomirski
2016-04-26 19:23 ` [PATCH 7/8] x86/tls: Synchronize segment registers in set_thread_area Andy Lutomirski
2016-04-29 10:51   ` [tip:x86/asm] x86/tls: Synchronize segment registers in set_thread_area() tip-bot for Andy Lutomirski
2016-04-26 19:23 ` [PATCH 8/8] selftests/x86/ldt_gdt: Test set_thread_area deletion of an active segment Andy Lutomirski
2016-04-29 10:51   ` [tip:x86/asm] selftests/x86/ldt_gdt: Test set_thread_area() " tip-bot for Andy Lutomirski
