From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org, Linus Torvalds <torvalds@linux-foundation.org>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>,
	Dave Hansen <dave.hansen@intel.com>,
	Borislav Petkov <bpetkov@suse.de>,
	Greg KH <gregkh@linuxfoundation.org>,
	keescook@google.com, hughd@google.com,
	Brian Gerst <brgerst@gmail.com>,
	Josh Poimboeuf <jpoimboe@redhat.com>,
	Denys Vlasenko <dvlasenk@redhat.com>,
	Rik van Riel <riel@redhat.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	David Laight <David.Laight@aculab.com>,
	Eduardo Valentin <eduval@amazon.com>,
	aliguori@amazon.com, Will Deacon <will.deacon@arm.com>,
	daniel.gruss@iaik.tugraz.at, Kees Cook <keescook@chromium.org>,
	Borislav Petkov <bp@alien8.de>,
	"Kirill A. Shutemov" <kirill@shutemov.name>
Subject: [patch V149 38/50] x86/pti: Put the LDT in its own PGD if PTI is on
Date: Sat, 16 Dec 2017 22:24:32 +0100	[thread overview]
Message-ID: <20171216213139.549001406@linutronix.de> (raw)
In-Reply-To: <20171216212354.120930222@linutronix.de>

From: Andy Lutomirski <luto@kernel.org>

With PTI enabled, the LDT must be mapped in the usermode tables somewhere.
The LDT is per process, i.e. per mm.

An earlier approach mapped the LDT on context switch into a fixmap area,
but that had significant overhead and exhausted the fixmap space when
NR_CPUS got large.

Take advantage of the fact that there is an address space hole which
provides a completely unused pgd. Use this pgd to manage per-mm LDT
mappings.
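
The address arithmetic can be sanity-checked in isolation.  Here is a
standalone sketch (not part of the patch; the constants mirror the
pgtable_64_types.h and mm.txt hunks below):

#include <stdio.h>

#define LDT_ENTRIES	8192			/* max descriptors (asm/ldt.h) */
#define LDT_ENTRY_SIZE	8			/* bytes per descriptor */
#define LDT_SLOT_STRIDE	(LDT_ENTRIES * LDT_ENTRY_SIZE)	/* 64 KiB */

int main(void)
{
	/* 4-level paging: PGD entry -3,   PGDIR_SHIFT == 39 */
	unsigned long base4 = -3UL << 39;	/* 0xfffffe8000000000 */
	/* 5-level paging: PGD entry -112, PGDIR_SHIFT == 48 */
	unsigned long base5 = -112UL << 48;	/* 0xff90000000000000 */
	int slot;

	for (slot = 0; slot <= 1; slot++)
		printf("slot %d: 4-level %#lx, 5-level %#lx\n", slot,
		       base4 + slot * (unsigned long)LDT_SLOT_STRIDE,
		       base5 + slot * (unsigned long)LDT_SLOT_STRIDE);
	return 0;
}

Both bases land in otherwise unused PGD entries, and the two 64KB slots
per mm fit comfortably inside a single 512GB (or 256TB) PGD range.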

This has a downside: the LDT isn't (currently) randomized, and an attack
that can write the LDT is instant root due to call gates (thanks, AMD, for
leaving call gates in AMD64 but designing them wrong so they're only useful
for exploits).  This can be mitigated by making the LDT read-only or
randomizing the mapping, either of which is straightforward on top of this
patch.

This will significantly slow down LDT users, but that shouldn't matter for
important workloads -- the LDT is only used by DOSEMU(2), Wine, and very
old libc implementations.
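
For context, the only way userspace reaches this code is modify_ldt(2),
which lands in write_ldt() below.  A minimal, illustrative caller (not
part of the patch) that installs a single 32-bit entry:

#include <asm/ldt.h>		/* struct user_desc */
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

static int install_ldt_entry(void)
{
	struct user_desc d;

	memset(&d, 0, sizeof(d));
	d.entry_number   = 0;
	d.base_addr      = 0;
	d.limit          = 0xfffff;	/* with limit_in_pages: 4GB limit */
	d.seg_32bit      = 1;
	d.limit_in_pages = 1;

	/* func 0x11: write an LDT entry via the modern interface */
	return syscall(SYS_modify_ldt, 0x11, &d, sizeof(d));
}

Each such call allocates a fresh ldt_struct and, with PTI enabled, maps
it into one of the two per-mm alias slots before installing it.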

[ tglx: Decrapified it ]

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Kees Cook <keescook@chromium.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: David Laight <David.Laight@aculab.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>

---
 Documentation/x86/x86_64/mm.txt         |    3 
 arch/x86/include/asm/mmu_context.h      |   55 +++++++++++-
 arch/x86/include/asm/pgtable_64_types.h |    4 
 arch/x86/include/asm/processor.h        |   23 +++--
 arch/x86/kernel/ldt.c                   |  139 +++++++++++++++++++++++++++++++-
 arch/x86/mm/dump_pagetables.c           |   12 ++
 6 files changed, 219 insertions(+), 17 deletions(-)

--- a/Documentation/x86/x86_64/mm.txt
+++ b/Documentation/x86/x86_64/mm.txt
@@ -12,6 +12,7 @@ ffffea0000000000 - ffffeaffffffffff (=40
 ... unused hole ...
 ffffec0000000000 - fffffbffffffffff (=44 bits) kasan shadow memory (16TB)
 ... unused hole ...
+fffffe8000000000 - fffffeffffffffff (=39 bits) LDT remap for PTI
 ffffff0000000000 - ffffff7fffffffff (=39 bits) %esp fixup stacks
 ... unused hole ...
 ffffffef00000000 - fffffffeffffffff (=64 GB) EFI region mapping space
@@ -28,7 +29,7 @@ ffffffffffe00000 - ffffffffffffffff (=2
 hole caused by [56:63] sign extension
 ff00000000000000 - ff0fffffffffffff (=52 bits) guard hole, reserved for hypervisor
 ff10000000000000 - ff8fffffffffffff (=55 bits) direct mapping of all phys. memory
-ff90000000000000 - ff9fffffffffffff (=52 bits) hole
+ff90000000000000 - ff9fffffffffffff (=52 bits) LDT remap for PTI
 ffa0000000000000 - ffd1ffffffffffff (=54 bits) vmalloc/ioremap space
 ffd2000000000000 - ffd3ffffffffffff (=49 bits) hole
 ffd4000000000000 - ffd5ffffffffffff (=49 bits) virtual memory map (512TB)
--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -50,10 +50,29 @@ struct ldt_struct {
 	 * call gates.  On native, we could merge the ldt_struct and LDT
 	 * allocations, but it's not worth trying to optimize.
 	 */
-	struct desc_struct *entries;
-	unsigned int nr_entries;
+	struct desc_struct	*entries;
+	unsigned int		nr_entries;
+
+	/*
+	 * If PTI is in use, then the entries array is not mapped while we're
+	 * in user mode.  The whole array will be aliased at the address
+	 * given by ldt_slot_va(slot).  We use two slots so that we can allocate
+	 * and map, and enable a new LDT without invalidating the mapping
+	 * of an older, still-in-use LDT.
+	 *
+	 * slot will be -1 if this LDT doesn't have an alias mapping.
+	 */
+	int			slot;
 };
 
+/* This is a multiple of PAGE_SIZE. */
+#define LDT_SLOT_STRIDE (LDT_ENTRIES * LDT_ENTRY_SIZE)
+
+static inline void *ldt_slot_va(int slot)
+{
+	return (void *)(LDT_BASE_ADDR + LDT_SLOT_STRIDE * slot);
+}
+
 /*
  * Used for LDT copy/destruction.
  */
@@ -64,6 +83,7 @@ static inline void init_new_context_ldt(
 }
 int ldt_dup_context(struct mm_struct *oldmm, struct mm_struct *mm);
 void destroy_context_ldt(struct mm_struct *mm);
+void ldt_arch_exit_mmap(struct mm_struct *mm);
 #else	/* CONFIG_MODIFY_LDT_SYSCALL */
 static inline void init_new_context_ldt(struct mm_struct *mm) { }
 static inline int ldt_dup_context(struct mm_struct *oldmm,
@@ -71,7 +91,8 @@ static inline int ldt_dup_context(struct
 {
 	return 0;
 }
-static inline void destroy_context_ldt(struct mm_struct *mm) {}
+static inline void destroy_context_ldt(struct mm_struct *mm) { }
+static inline void ldt_arch_exit_mmap(struct mm_struct *mm) { }
 #endif
 
 static inline void load_mm_ldt(struct mm_struct *mm)
@@ -96,10 +117,31 @@ static inline void load_mm_ldt(struct mm
 	 * that we can see.
 	 */
 
-	if (unlikely(ldt))
-		set_ldt(ldt->entries, ldt->nr_entries);
-	else
+	if (unlikely(ldt)) {
+		if (static_cpu_has_bug(X86_BUG_CPU_SECURE_MODE_PTI)) {
+			if (WARN_ON_ONCE((unsigned long)ldt->slot > 1)) {
+				/*
+				 * Whoops -- either the new LDT isn't mapped
+				 * (if slot == -1) or is mapped into a bogus
+				 * slot (if slot > 1).
+				 */
+				clear_LDT();
+				return;
+			}
+
+			/*
+			 * If page table isolation is enabled, ldt->entries
+			 * will not be mapped in the userspace pagetables.
+			 * Tell the CPU to access the LDT through the alias
+			 * at ldt_slot_va(ldt->slot).
+			 */
+			set_ldt(ldt_slot_va(ldt->slot), ldt->nr_entries);
+		} else {
+			set_ldt(ldt->entries, ldt->nr_entries);
+		}
+	} else {
 		clear_LDT();
+	}
 #else
 	clear_LDT();
 #endif
@@ -194,6 +236,7 @@ static inline int arch_dup_mmap(struct m
 static inline void arch_exit_mmap(struct mm_struct *mm)
 {
 	paravirt_arch_exit_mmap(mm);
+	ldt_arch_exit_mmap(mm);
 }
 
 #ifdef CONFIG_X86_64
--- a/arch/x86/include/asm/pgtable_64_types.h
+++ b/arch/x86/include/asm/pgtable_64_types.h
@@ -81,10 +81,14 @@ typedef struct { pteval_t pte; } pte_t;
 #define VMALLOC_SIZE_TB _AC(12800, UL)
 #define __VMALLOC_BASE	_AC(0xffa0000000000000, UL)
 #define __VMEMMAP_BASE	_AC(0xffd4000000000000, UL)
+#define LDT_PGD_ENTRY		_AC(-112, UL)
+#define LDT_BASE_ADDR		(LDT_PGD_ENTRY << PGDIR_SHIFT)
 #else
 #define VMALLOC_SIZE_TB	_AC(32, UL)
 #define __VMALLOC_BASE	_AC(0xffffc90000000000, UL)
 #define __VMEMMAP_BASE	_AC(0xffffea0000000000, UL)
+#define LDT_PGD_ENTRY		_AC(-3, UL)
+#define LDT_BASE_ADDR		(LDT_PGD_ENTRY << PGDIR_SHIFT)
 #endif
 #ifdef CONFIG_RANDOMIZE_MEMORY
 #define VMALLOC_START	vmalloc_base
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -852,13 +852,22 @@ static inline void spin_lock_prefetch(co
 
 #else
 /*
- * User space process size. 47bits minus one guard page.  The guard
- * page is necessary on Intel CPUs: if a SYSCALL instruction is at
- * the highest possible canonical userspace address, then that
- * syscall will enter the kernel with a non-canonical return
- * address, and SYSRET will explode dangerously.  We avoid this
- * particular problem by preventing anything from being mapped
- * at the maximum canonical address.
+ * User space process size.  This is the first address outside the user range.
+ * There are a few constraints that determine this:
+ *
+ * On Intel CPUs, if a SYSCALL instruction is at the highest canonical
+ * address, then that syscall will enter the kernel with a
+ * non-canonical return address, and SYSRET will explode dangerously.
+ * We avoid this particular problem by preventing anything executable
+ * from being mapped at the maximum canonical address.
+ *
+ * On AMD CPUs in the Ryzen family, there's a nasty bug in which the
+ * CPUs malfunction if they execute code from the highest canonical page.
+ * They'll speculate right off the end of the canonical space, and
+ * bad things happen.  This is worked around in the same way as the
+ * Intel problem.
+ *
+ * With page table isolation enabled, we map the LDT in ... [stay tuned]
  */
 #define TASK_SIZE_MAX	((1UL << __VIRTUAL_MASK_SHIFT) - PAGE_SIZE)
 
--- a/arch/x86/kernel/ldt.c
+++ b/arch/x86/kernel/ldt.c
@@ -24,6 +24,7 @@
 #include <linux/uaccess.h>
 
 #include <asm/ldt.h>
+#include <asm/tlb.h>
 #include <asm/desc.h>
 #include <asm/mmu_context.h>
 #include <asm/syscalls.h>
@@ -51,13 +52,11 @@ static void refresh_ldt_segments(void)
 static void flush_ldt(void *__mm)
 {
 	struct mm_struct *mm = __mm;
-	mm_context_t *pc;
 
 	if (this_cpu_read(cpu_tlbstate.loaded_mm) != mm)
 		return;
 
-	pc = &mm->context;
-	set_ldt(pc->ldt->entries, pc->ldt->nr_entries);
+	load_mm_ldt(mm);
 
 	refresh_ldt_segments();
 }
@@ -94,10 +93,121 @@ static struct ldt_struct *alloc_ldt_stru
 		return NULL;
 	}
 
+	/* The new LDT isn't aliased for PTI yet. */
+	new_ldt->slot = -1;
+
 	new_ldt->nr_entries = num_entries;
 	return new_ldt;
 }
 
+/*
+ * If PTI is enabled, this maps the LDT into the kernelmode and
+ * usermode tables for the given mm.
+ *
+ * There is no corresponding unmap function.  Even if the LDT is freed, we
+ * leave the PTEs around until the slot is reused or the mm is destroyed.
+ * This is harmless: the LDT is always in ordinary memory, and no one will
+ * access the freed slot.
+ *
+ * If we wanted to unmap freed LDTs, we'd also need to do a flush to make
+ * it useful, and the flush would slow down modify_ldt().
+ */
+static int
+map_ldt_struct(struct mm_struct *mm, struct ldt_struct *ldt, int slot)
+{
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+	bool is_vmalloc, had_top_level_entry;
+	unsigned long va;
+	spinlock_t *ptl;
+	pgd_t *pgd;
+	int i;
+
+	if (!static_cpu_has_bug(X86_BUG_CPU_SECURE_MODE_PTI))
+		return 0;
+
+	/*
+	 * Any given ldt_struct should have map_ldt_struct() called at most
+	 * once.
+	 */
+	WARN_ON(ldt->slot != -1);
+
+	/*
+	 * Did we already have the top level entry allocated?  We can't
+	 * use pgd_none() for this because it doesn't do anything on
+	 * 4-level page table kernels.
+	 */
+	pgd = pgd_offset(mm, LDT_BASE_ADDR);
+	had_top_level_entry = (pgd->pgd != 0);
+
+	is_vmalloc = is_vmalloc_addr(ldt->entries);
+
+	for (i = 0; i * PAGE_SIZE < ldt->nr_entries * LDT_ENTRY_SIZE; i++) {
+		unsigned long offset = i << PAGE_SHIFT;
+		const void *src = (char *)ldt->entries + offset;
+		unsigned long pfn;
+		pte_t pte, *ptep;
+
+		va = (unsigned long)ldt_slot_va(slot) + offset;
+		pfn = is_vmalloc ? vmalloc_to_pfn(src) :
+			page_to_pfn(virt_to_page(src));
+		/*
+		 * Treat the PTI LDT range as a *userspace* range.
+		 * get_locked_pte() will allocate all needed pagetables
+		 * and account for them in this mm.
+		 */
+		ptep = get_locked_pte(mm, va, &ptl);
+		if (!ptep)
+			return -ENOMEM;
+		pte = pfn_pte(pfn, __pgprot(__PAGE_KERNEL & ~_PAGE_GLOBAL));
+		set_pte_at(mm, va, ptep, pte);
+		pte_unmap_unlock(ptep, ptl);
+	}
+
+	if (mm->context.ldt) {
+		/*
+		 * We already had an LDT.  The top-level entry should already
+		 * have been allocated and synchronized with the usermode
+		 * tables.
+		 */
+		WARN_ON(!had_top_level_entry);
+		if (static_cpu_has_bug(X86_BUG_CPU_SECURE_MODE_PTI))
+			WARN_ON(!kernel_to_user_pgdp(pgd)->pgd);
+	} else {
+		/*
+		 * This is the first time we're mapping an LDT for this process.
+		 * Sync the pgd to the usermode tables.
+		 */
+		WARN_ON(had_top_level_entry);
+		if (static_cpu_has_bug(X86_BUG_CPU_SECURE_MODE_PTI)) {
+			WARN_ON(kernel_to_user_pgdp(pgd)->pgd);
+			set_pgd(kernel_to_user_pgdp(pgd), *pgd);
+		}
+	}
+
+	va = (unsigned long)ldt_slot_va(slot);
+	flush_tlb_mm_range(mm, va, va + LDT_SLOT_STRIDE, 0);
+
+	ldt->slot = slot;
+#endif
+	return 0;
+}
+
+static void free_ldt_pgtables(struct mm_struct *mm)
+{
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+	struct mmu_gather tlb;
+	unsigned long start = LDT_BASE_ADDR;
+	unsigned long end = start + (1UL << PGDIR_SHIFT);
+
+	if (!static_cpu_has_bug(X86_BUG_CPU_SECURE_MODE_PTI))
+		return;
+
+	tlb_gather_mmu(&tlb, mm, start, end);
+	free_pgd_range(&tlb, start, end, start, end);
+	tlb_finish_mmu(&tlb, start, end);
+#endif
+}
+
 /* After calling this, the LDT is immutable. */
 static void finalize_ldt_struct(struct ldt_struct *ldt)
 {
@@ -156,6 +266,12 @@ int ldt_dup_context(struct mm_struct *ol
 	       new_ldt->nr_entries * LDT_ENTRY_SIZE);
 	finalize_ldt_struct(new_ldt);
 
+	retval = map_ldt_struct(mm, new_ldt, 0);
+	if (retval) {
+		free_ldt_pgtables(mm);
+		free_ldt_struct(new_ldt);
+		goto out_unlock;
+	}
 	mm->context.ldt = new_ldt;
 
 out_unlock:
@@ -174,6 +290,11 @@ void destroy_context_ldt(struct mm_struc
 	mm->context.ldt = NULL;
 }
 
+void ldt_arch_exit_mmap(struct mm_struct *mm)
+{
+	free_ldt_pgtables(mm);
+}
+
 static int read_ldt(void __user *ptr, unsigned long bytecount)
 {
 	struct mm_struct *mm = current->mm;
@@ -287,6 +408,18 @@ static int write_ldt(void __user *ptr, u
 	new_ldt->entries[ldt_info.entry_number] = ldt;
 	finalize_ldt_struct(new_ldt);
 
+	/*
+	 * If we are using PTI, map the new LDT into the userspace pagetables.
+	 * If there is already an LDT, use the other slot so that other CPUs
+	 * will continue to use the old LDT until install_ldt() switches
+	 * them over to the new LDT.
+	 */
+	error = map_ldt_struct(mm, new_ldt, old_ldt ? !old_ldt->slot : 0);
+	if (error) {
+		free_ldt_struct(new_ldt);
+		goto out_unlock;
+	}
+
 	install_ldt(mm, new_ldt);
 	free_ldt_struct(old_ldt);
 	error = 0;
--- a/arch/x86/mm/dump_pagetables.c
+++ b/arch/x86/mm/dump_pagetables.c
@@ -50,12 +50,18 @@ enum address_markers_idx {
 #ifdef CONFIG_X86_64
 	KERNEL_SPACE_NR,
 	LOW_KERNEL_NR,
+#if defined(CONFIG_MODIFY_LDT_SYSCALL) && defined(CONFIG_X86_5LEVEL)
+	LDT_NR,
+#endif
 	VMALLOC_START_NR,
 	VMEMMAP_START_NR,
 #ifdef CONFIG_KASAN
 	KASAN_SHADOW_START_NR,
 	KASAN_SHADOW_END_NR,
 #endif
+#if defined(CONFIG_MODIFY_LDT_SYSCALL) && !defined(CONFIG_X86_5LEVEL)
+	LDT_NR,
+#endif
 # ifdef CONFIG_X86_ESPFIX64
 	ESPFIX_START_NR,
 # endif
@@ -79,12 +85,18 @@ static struct addr_marker address_marker
 #ifdef CONFIG_X86_64
 	{ 0x8000000000000000UL, "Kernel Space" },
 	{ 0/* PAGE_OFFSET */,   "Low Kernel Mapping" },
+#if defined(CONFIG_MODIFY_LDT_SYSCALL) && defined(CONFIG_X86_5LEVEL)
+	{ LDT_BASE_ADDR,	"LDT remap" },
+#endif
 	{ 0/* VMALLOC_START */, "vmalloc() Area" },
 	{ 0/* VMEMMAP_START */, "Vmemmap" },
 #ifdef CONFIG_KASAN
 	{ KASAN_SHADOW_START,	"KASAN shadow" },
 	{ KASAN_SHADOW_END,	"KASAN shadow end" },
 #endif
+#if defined(CONFIG_MODIFY_LDT_SYSCALL) && !defined(CONFIG_X86_5LEVEL)
+	{ LDT_BASE_ADDR,	"LDT remap" },
+#endif
 # ifdef CONFIG_X86_ESPFIX64
 	{ ESPFIX_BASE_ADDR,	"ESPfix Area", 16 },
 # endif
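
A restatement, for clarity, of the slot-selection rule that write_ldt()
applies above (illustrative only, not part of the patch):

/*
 * A new LDT goes into whichever of the two slots the currently
 * installed LDT does not occupy, so remote CPUs keep a valid alias
 * until install_ldt() flips them over to the new one.
 */
static int pick_ldt_slot(const struct ldt_struct *old_ldt)
{
	/* First LDT for this mm: use slot 0.  Otherwise alternate. */
	return old_ldt ? !old_ldt->slot : 0;
}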
