* [PATCH 00/23] [v4] KAISER: unmap most of the kernel from userspace page tables
From: Dave Hansen @ 2017-11-23  0:34 UTC
  To: linux-kernel
  Cc: linux-mm, dave.hansen, moritz.lipp, daniel.gruss,
	michael.schwarz, richard.fellner, luto, torvalds, keescook,
	hughd, x86, jgross

Thanks, everyone, for all the reviews thus far.  I hope I have managed
to address all the feedback given so far, except for the TODOs, of
course.  This is a pretty minor update compared to v1->v2.

These patches are all on this tip branch:

	https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git/log/?h=WIP.x86/mm

Changes from v3:
 * Remove process stack mappings.  Andy's new entry trampoline
   removes the need for them.
 * A bunch of cleanups and a few minor fixes in response to
   a review from Thomas Gleixner, including a bunch of commit
   message text, comments, and documentation.
 * Removed debug IDT in shadow mapping (thanks to Andy L).

Changes from v2:
 * Reword documentation removing "we"
 * Fix some whitespace damage
 * Fix an off-by-one in the MAX ASID values, noted by Peter Z
 * CodingStyle fixes based on Borislav's comments
 * Always use _KERNPG_TABLE for pmd_populate_kernel().

Changes from v1:
 * Updated to be on top of Andy L's new entry code
 * Allow global pages again, and use them for pages mapped into
   userspace page tables.
 * Use trampoline stack instead of process stack at entry so no
   longer need to map process stack (big win in fork() speed)
 * Made the page table walking less generic by restricting it
   to kernel addresses and !_PAGE_USER pages.
 * Added a debugfs file to enable/disable CR3 switching at
   runtime.  This does not remove all the KAISER overhead, but
   it removes the largest source.
 * Use runtime disable with Xen to permit Xen-PV guests with
   KAISER=y.
 * Moved assembly code from "core" to "prepare assembly" patch
 * Pass full register name to asm macros
 * Remove double stack switch in entry_SYSENTER_compat
 * Disable vsyscall native case when KAISER=y
 * Separate PER_CPU_USER_MAPPED generic definitions from use
   by arch/x86/.

TODO:
 * Allow dumping the shadow page tables with the ptdump code
 * Put LDT at top of userspace
 * Create separate tlb flushing functions for user and kernel
 * Chase down the source of the new !CR4.PGE warning that 0day
   found with i386

---

tl;dr:

KAISER makes it harder to defeat KASLR, but makes syscalls and
interrupts slower.  These patches are based on work from a team at
Graz University of Technology posted here[1].  The major addition is
support for Intel PCIDs, which builds on top of Andy Lutomirski's PCID
work merged for 4.14.  PCIDs make KAISER's overhead very reasonable
for a wide variety of use cases.

Full Description:

KAISER is a countermeasure against attacks on kernel address
information.  There are at least three existing, published approaches
that use the shared user/kernel mapping and hardware features to
defeat KASLR.  One approach referenced in the paper[2] locates the
kernel by observing differences in page fault timing between
present-but-inaccessible kernel pages and non-present pages.

KAISER addresses this by unmapping (most of) the kernel when
userspace runs.  It leaves the existing page tables largely alone and
refers to them as "kernel page tables".  For running userspace, a new
"shadow" copy of the page tables is allocated for each process.  The
shadow page tables map all the same user memory as the "kernel" copy
but map only a minimal set of kernel memory.
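
As a rough sketch of the layout (illustrative only: the 8k PGDs and
the bit-12 switch are established by patch 02 of this series, and
kaiser_pgd_alloc() is a made-up name):

	/*
	 * Sketch: keep the kernel PGD in the low 4k page of an 8k
	 * allocation and the shadow (user) PGD in the high 4k page,
	 * so switching between the copies only flips bit 12 of CR3.
	 */
	static pgd_t *kaiser_pgd_alloc(void)
	{
		/* order-1 allocation: two consecutive 4k pages */
		struct page *pages = alloc_pages(GFP_KERNEL | __GFP_ZERO, 1);

		return pages ? page_address(pages) : NULL;
	}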

When we enter the kernel via syscalls, interrupts or exceptions,
page tables are switched to the full "kernel" copy.  When the system
switches back to user mode, the "shadow" copy is used.  Process
Context IDentifiers (PCIDs) are used to ensure that the TLB is not
flushed when switching between page tables, which makes syscalls
roughly 2x faster than without PCIDs.  PCIDs are usable on Haswell
(fourth-generation Core) and newer CPUs.
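
To make the TLB-preservation mechanics concrete, here is a hedged
sketch (the constant and function names are illustrative, not from
this series) of how a CR3 value is built so the write does not flush
the TLB:

	/*
	 * Assumes CR4.PCIDE=1.  Per the SDM, CR3 bits 11:0 select the
	 * PCID, and bit 63 of the written value asks the CPU not to
	 * flush the TLB entries tagged with that PCID.
	 */
	#define CR3_PCID_MASK	0xfffUL		/* bits 11:0: the PCID */
	#define CR3_NOFLUSH	(1UL << 63)	/* suppress the TLB flush */

	static inline unsigned long build_cr3(unsigned long pgd_pa, u16 pcid)
	{
		return (pgd_pa & ~CR3_PCID_MASK) | pcid | CR3_NOFLUSH;
	}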

The minimal kernel page tables try to map only what is needed to
enter/exit the kernel, such as the entry/exit functions, interrupt
descriptors (IDT) and the kernel trampoline stacks.  This minimal set
of data can still reveal the kernel's ASLR base address.  But, this
minimal kernel data is all trusted, which makes it harder to exploit
than data in the kernel direct map, which contains loads of
user-controlled data.

KAISER will affect performance for anything that does system calls or
interrupts: everything.  Just the new instructions (CR3 manipulation)
add a few hundred cycles to a syscall or interrupt.  Most workloads
that we have run show single-digit regressions.  5% is a good round
number for what is typical.  The worst we have seen is a roughly 30%
regression on a loopback networking test that did a ton of syscalls
and context switches.  More details about possible performance
impacts are in the new Documentation/ file.
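
As a back-of-the-envelope example (with made-up but plausible
numbers): at ~300 extra cycles per kernel entry/exit pair, a workload
doing 1,000,000 syscalls/second on a 3GHz CPU burns an extra 3*10^8
of its 3*10^9 cycles, roughly 10% of one CPU, while a workload doing
50,000 syscalls/second pays well under 1%.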

This code is based on a version I downloaded from
https://github.com/IAIK/KAISER.  It has been heavily modified.

The approach is described in detail in a paper[2].  However, there is
some incorrect information in the paper, both about how Linux works
and how the hardware works.  For instance, I do not share the opinion
that
KAISER has "runtime overhead of only 0.28%".  Please rely on this
patch series as the canonical source of information about this
submission.

Here is one example of how the kernel image grows with CONFIG_KAISER
on and off.  Most of the size increase is presumably from additional
alignment requirements for mapping entry/exit code and structures.

    text    data     bss      dec filename
11786064 7356724 2928640 22071428 vmlinux-nokaiser
11798203 7371704 2928640 22098547 vmlinux-kaiser
  +12139  +14980       0   +27119

To give folks an idea what the performance impact is like, I took
the following test and ran it single-threaded:

	https://github.com/antonblanchard/will-it-scale/blob/master/tests/lseek1.c

It's a pretty quick syscall, so this shows how much KAISER slows
down syscalls (and how much PCIDs help).  The units here are
lseeks/second:

        no kaiser: 5.2M
    kaiser+  pcid: 3.0M
    kaiser+nopcid: 2.2M

"nopcid" is literally with the "nopcid" command-line option which
turns PCIDs off entirely.
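
For reference, here is a minimal, self-contained loop in the spirit
of lseek1.c (a sketch, not the will-it-scale harness itself):

	#include <fcntl.h>
	#include <stdio.h>
	#include <time.h>
	#include <unistd.h>

	int main(void)
	{
		const long iters = 10 * 1000 * 1000;
		int fd = open("/dev/zero", O_RDONLY);
		struct timespec a, b;
		double secs;
		long i;

		if (fd < 0)
			return 1;

		clock_gettime(CLOCK_MONOTONIC, &a);
		for (i = 0; i < iters; i++)
			lseek(fd, 0, SEEK_SET);	/* one cheap syscall each */
		clock_gettime(CLOCK_MONOTONIC, &b);

		secs = (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
		printf("%.2fM lseeks/second\n", iters / secs / 1e6);
		return 0;
	}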

Thanks to:
The original KAISER team at Graz University of Technology.
Andy Lutomirski for all the help with the entry code.
Kirill Shutemov for a helpful review of the code.

1. https://github.com/IAIK/KAISER
2. https://gruss.cc/files/kaiser.pdf

--

The code is available here:

	https://git.kernel.org/pub/scm/linux/kernel/git/daveh/x86-kaiser.git/

 Documentation/x86/kaiser.txt                | 162 +++++
 arch/x86/Kconfig                            |   8 +
 arch/x86/entry/calling.h                    |  89 +++
 arch/x86/entry/entry_64.S                   |  48 +-
 arch/x86/entry/entry_64_compat.S            |  32 +-
 arch/x86/events/intel/ds.c                  |  49 +-
 arch/x86/include/asm/cpufeatures.h          |   1 +
 arch/x86/include/asm/desc.h                 |   2 +-
 arch/x86/include/asm/kaiser.h               |  68 +++
 arch/x86/include/asm/mmu_context.h          |  29 +-
 arch/x86/include/asm/pgtable.h              |  19 +-
 arch/x86/include/asm/pgtable_64.h           | 146 +++++
 arch/x86/include/asm/pgtable_types.h        |  25 +-
 arch/x86/include/asm/processor.h            |   2 +-
 arch/x86/include/asm/tlbflush.h             | 208 ++++++-
 arch/x86/include/uapi/asm/processor-flags.h |   3 +-
 arch/x86/kernel/cpu/common.c                |  10 +-
 arch/x86/kernel/espfix_64.c                 |  27 +-
 arch/x86/kernel/head_64.S                   |  30 +-
 arch/x86/kernel/ldt.c                       |  25 +-
 arch/x86/kernel/process.c                   |   2 +-
 arch/x86/kernel/process_64.c                |   2 +-
 arch/x86/kvm/x86.c                          |   3 +-
 arch/x86/mm/Makefile                        |   1 +
 arch/x86/mm/init.c                          |  75 ++-
 arch/x86/mm/kaiser.c                        | 620 ++++++++++++++++++++
 arch/x86/mm/pageattr.c                      |  18 +-
 arch/x86/mm/pgtable.c                       |  16 +-
 arch/x86/mm/tlb.c                           | 105 +++-
 include/asm-generic/vmlinux.lds.h           |   7 +
 include/linux/kaiser.h                      |  38 ++
 include/linux/percpu-defs.h                 |  30 +
 init/main.c                                 |   3 +
 kernel/fork.c                               |   1 +
 security/Kconfig                            |  10 +
 35 files changed, 1783 insertions(+), 131 deletions(-)

Cc: Moritz Lipp <moritz.lipp@iaik.tugraz.at>
Cc: Daniel Gruss <daniel.gruss@iaik.tugraz.at>
Cc: Michael Schwarz <michael.schwarz@iaik.tugraz.at>
Cc: Richard Fellner <richard.fellner@student.tugraz.at>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Kees Cook <keescook@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: x86@kernel.org
Cc: Juergen Gross <jgross@suse.com>

* [PATCH 01/23] x86, kaiser: disable global pages by default with KAISER
From: Dave Hansen @ 2017-11-23  0:34 UTC
  To: linux-kernel
  Cc: linux-mm, dave.hansen, bp, tglx, moritz.lipp, daniel.gruss,
	michael.schwarz, richard.fellner, luto, torvalds, keescook,
	hughd, x86


From: Dave Hansen <dave.hansen@linux.intel.com>

Global pages stay in the TLB across context switches.  Since all contexts
share the same kernel mapping, these mappings are marked as global pages
so kernel entries in the TLB are not flushed out on a context switch.

But, even having these entries in the TLB opens up a timing side
channel that an attacker can use [1].

That means that even when KAISER switches page tables on return to
user space, the global pages would stay in the TLB.

Disable global pages so that kernel TLB entries can be flushed before
returning to user space. This way, all accesses to kernel addresses from
userspace result in a TLB miss independent of the existence of a kernel
mapping.

Replace _PAGE_GLOBAL with __PAGE_KERNEL_GLOBAL and keep _PAGE_GLOBAL
available so that it can still be used for the few selected kernel
mappings which must be visible to userspace when KAISER is enabled,
such as the entry/exit code and data.

1. The double-page-fault attack:
   http://www.ieee-security.org/TC/SP2013/papers/4977a191.pdf

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Reviewed-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Moritz Lipp <moritz.lipp@iaik.tugraz.at>
Cc: Daniel Gruss <daniel.gruss@iaik.tugraz.at>
Cc: Michael Schwarz <michael.schwarz@iaik.tugraz.at>
Cc: Richard Fellner <richard.fellner@student.tugraz.at>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Kees Cook <keescook@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: x86@kernel.org
---

 b/arch/x86/include/asm/pgtable_types.h |   14 +++++++++++++-
 b/arch/x86/mm/pageattr.c               |   16 ++++++++--------
 2 files changed, 21 insertions(+), 9 deletions(-)

diff -puN arch/x86/include/asm/pgtable_types.h~kaiser-prep-disable-global-pages arch/x86/include/asm/pgtable_types.h
--- a/arch/x86/include/asm/pgtable_types.h~kaiser-prep-disable-global-pages	2017-11-22 15:45:44.182619751 -0800
+++ b/arch/x86/include/asm/pgtable_types.h	2017-11-22 15:45:44.188619751 -0800
@@ -180,8 +180,20 @@ enum page_cache_mode {
 #define PAGE_READONLY_EXEC	__pgprot(_PAGE_PRESENT | _PAGE_USER |	\
 					 _PAGE_ACCESSED)
 
+/*
+ * Disable global pages for anything using the default
+ * __PAGE_KERNEL* macros.  PGE will still be enabled
+ * and _PAGE_GLOBAL may still be used carefully.
+ */
+#ifdef CONFIG_KAISER
+#define __PAGE_KERNEL_GLOBAL	0
+#else
+#define __PAGE_KERNEL_GLOBAL	_PAGE_GLOBAL
+#endif
+
 #define __PAGE_KERNEL_EXEC						\
-	(_PAGE_PRESENT | _PAGE_RW | _PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_GLOBAL)
+	(_PAGE_PRESENT | _PAGE_RW | _PAGE_DIRTY | _PAGE_ACCESSED |	\
+	 __PAGE_KERNEL_GLOBAL)
 #define __PAGE_KERNEL		(__PAGE_KERNEL_EXEC | _PAGE_NX)
 
 #define __PAGE_KERNEL_RO		(__PAGE_KERNEL & ~_PAGE_RW)
diff -puN arch/x86/mm/pageattr.c~kaiser-prep-disable-global-pages arch/x86/mm/pageattr.c
--- a/arch/x86/mm/pageattr.c~kaiser-prep-disable-global-pages	2017-11-22 15:45:44.184619751 -0800
+++ b/arch/x86/mm/pageattr.c	2017-11-22 15:45:44.188619751 -0800
@@ -585,9 +585,9 @@ try_preserve_large_page(pte_t *kpte, uns
 	 * for the ancient hardware that doesn't support it.
 	 */
 	if (pgprot_val(req_prot) & _PAGE_PRESENT)
-		pgprot_val(req_prot) |= _PAGE_PSE | _PAGE_GLOBAL;
+		pgprot_val(req_prot) |= _PAGE_PSE | __PAGE_KERNEL_GLOBAL;
 	else
-		pgprot_val(req_prot) &= ~(_PAGE_PSE | _PAGE_GLOBAL);
+		pgprot_val(req_prot) &= ~(_PAGE_PSE | __PAGE_KERNEL_GLOBAL);
 
 	req_prot = canon_pgprot(req_prot);
 
@@ -705,9 +705,9 @@ __split_large_page(struct cpa_data *cpa,
 	 * for the ancient hardware that doesn't support it.
 	 */
 	if (pgprot_val(ref_prot) & _PAGE_PRESENT)
-		pgprot_val(ref_prot) |= _PAGE_GLOBAL;
+		pgprot_val(ref_prot) |= __PAGE_KERNEL_GLOBAL;
 	else
-		pgprot_val(ref_prot) &= ~_PAGE_GLOBAL;
+		pgprot_val(ref_prot) &= ~__PAGE_KERNEL_GLOBAL;
 
 	/*
 	 * Get the target pfn from the original entry:
@@ -938,9 +938,9 @@ static void populate_pte(struct cpa_data
 	 * support it.
 	 */
 	if (pgprot_val(pgprot) & _PAGE_PRESENT)
-		pgprot_val(pgprot) |= _PAGE_GLOBAL;
+		pgprot_val(pgprot) |= __PAGE_KERNEL_GLOBAL;
 	else
-		pgprot_val(pgprot) &= ~_PAGE_GLOBAL;
+		pgprot_val(pgprot) &= ~__PAGE_KERNEL_GLOBAL;
 
 	pgprot = canon_pgprot(pgprot);
 
@@ -1242,9 +1242,9 @@ repeat:
 		 * support it.
 		 */
 		if (pgprot_val(new_prot) & _PAGE_PRESENT)
-			pgprot_val(new_prot) |= _PAGE_GLOBAL;
+			pgprot_val(new_prot) |= __PAGE_KERNEL_GLOBAL;
 		else
-			pgprot_val(new_prot) &= ~_PAGE_GLOBAL;
+			pgprot_val(new_prot) &= ~__PAGE_KERNEL_GLOBAL;
 
 		/*
 		 * We need to keep the pfn from the existing PTE,
_
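
To illustrate the intended split (a hedged sketch; the actual
_PAGE_GLOBAL users for the entry/exit mappings arrive in later
patches of this series):

	/* Default kernel mappings lose _PAGE_GLOBAL under CONFIG_KAISER: */
	pgprot_t kernel_prot = __pgprot(__PAGE_KERNEL);

	/*
	 * A mapping that must live in both page-table copies, such as
	 * the entry text, can still opt in to _PAGE_GLOBAL explicitly:
	 */
	pgprot_t entry_prot = __pgprot(__PAGE_KERNEL_RO | _PAGE_GLOBAL);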

* [PATCH 02/23] x86, kaiser: prepare assembly for entry/exit CR3 switching
From: Dave Hansen @ 2017-11-23  0:34 UTC
  To: linux-kernel
  Cc: linux-mm, dave.hansen, moritz.lipp, daniel.gruss,
	michael.schwarz, richard.fellner, luto, torvalds, keescook,
	hughd, x86


From: Dave Hansen <dave.hansen@linux.intel.com>

This is largely code from Andy Lutomirski.  I fixed a few bugs
in it, and added a few SWITCH_TO_* spots.

KAISER needs to switch to a different CR3 value when it enters
the kernel and switch back when it exits.  This essentially
needs to be done before leaving assembly code.

This is extra challenging because the switching context is
tricky: the set of registers that can be clobbered varies.  It is
also hard to store things on the stack because either there is an
established ABI (ptregs) or the stack is entirely unsafe to use.

This patch establishes a set of macros that allow changing to
the user and kernel CR3 values.

Interactions with SWAPGS: previous versions of the KAISER code
relied on having per-cpu scratch space to save/restore a register
that can be used for the CR3 MOV.  The %GS register is used to
index into our per-cpu space, so SWAPGS *had* to be done before
the CR3 switch.  That scratch space is gone now, but the semantic
that SWAPGS must be done before the CR3 MOV is retained.  This is
good to keep because it is not that hard to do and it allows us
to do things like add per-cpu debugging information to help us
figure out what goes wrong sometimes.

What this does in the NMI code is worth pointing out.  NMIs
can interrupt *any* context and they can also be nested with
NMIs interrupting other NMIs.  The comments below
".Lnmi_from_kernel" explain the format of the stack during this
situation.  Changing the format of this stack is not a fun
exercise: I tried.  Instead of storing the old CR3 value on the
stack, this patch depends on the *regular* register save/restore
mechanism and then uses %r14 to keep CR3 during the NMI.  It is
callee-saved and will not be clobbered by the C NMI handlers that
get called.

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Moritz Lipp <moritz.lipp@iaik.tugraz.at>
Cc: Daniel Gruss <daniel.gruss@iaik.tugraz.at>
Cc: Michael Schwarz <michael.schwarz@iaik.tugraz.at>
Cc: Richard Fellner <richard.fellner@student.tugraz.at>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Kees Cook <keescook@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: x86@kernel.org
---

 b/arch/x86/entry/calling.h         |   65 +++++++++++++++++++++++++++++++++++++
 b/arch/x86/entry/entry_64.S        |   47 +++++++++++++++++++++++---
 b/arch/x86/entry/entry_64_compat.S |   32 +++++++++++++++++-
 3 files changed, 138 insertions(+), 6 deletions(-)

diff -puN arch/x86/entry/calling.h~kaiser-luto-base-cr3-work arch/x86/entry/calling.h
--- a/arch/x86/entry/calling.h~kaiser-luto-base-cr3-work	2017-11-22 15:45:44.745619750 -0800
+++ b/arch/x86/entry/calling.h	2017-11-22 15:45:44.753619750 -0800
@@ -1,6 +1,7 @@
 /* SPDX-License-Identifier: GPL-2.0 */
 #include <linux/jump_label.h>
 #include <asm/unwind_hints.h>
+#include <asm/cpufeatures.h>
 
 /*
 
@@ -187,6 +188,70 @@ For 32-bit we have the following convent
 #endif
 .endm
 
+#ifdef CONFIG_KAISER
+
+/* KAISER PGDs are 8k.  Flip bit 12 to switch between the two halves: */
+#define KAISER_SWITCH_MASK (1<<PAGE_SHIFT)
+
+.macro ADJUST_KERNEL_CR3 reg:req
+	/* Clear "KAISER bit", point CR3 at kernel pagetables: */
+	andq	$(~KAISER_SWITCH_MASK), \reg
+.endm
+
+.macro ADJUST_USER_CR3 reg:req
+	/* Move CR3 up a page to the user page tables: */
+	orq	$(KAISER_SWITCH_MASK), \reg
+.endm
+
+.macro SWITCH_TO_KERNEL_CR3 scratch_reg:req
+	mov	%cr3, \scratch_reg
+	ADJUST_KERNEL_CR3 \scratch_reg
+	mov	\scratch_reg, %cr3
+.endm
+
+.macro SWITCH_TO_USER_CR3 scratch_reg:req
+	mov	%cr3, \scratch_reg
+	ADJUST_USER_CR3 \scratch_reg
+	mov	\scratch_reg, %cr3
+.endm
+
+.macro SAVE_AND_SWITCH_TO_KERNEL_CR3 scratch_reg:req save_reg:req
+	movq	%cr3, %r\scratch_reg
+	movq	%r\scratch_reg, \save_reg
+	/*
+	 * Is the switch bit zero?  This means we already have the
+	 * kernel CR3 and no switch is needed.
+	 */
+	testq	$(KAISER_SWITCH_MASK), %r\scratch_reg
+	jz	.Ldone_\@
+
+	ADJUST_KERNEL_CR3 %r\scratch_reg
+	movq	%r\scratch_reg, %cr3
+
+.Ldone_\@:
+.endm
+
+.macro RESTORE_CR3 save_reg:req
+	/*
+	 * The CR3 write could be avoided when not changing its value,
+	 * but would require a CR3 read *and* a scratch register.
+	 */
+	movq	\save_reg, %cr3
+.endm
+
+#else /* CONFIG_KAISER=n: */
+
+.macro SWITCH_TO_KERNEL_CR3 scratch_reg:req
+.endm
+.macro SWITCH_TO_USER_CR3 scratch_reg:req
+.endm
+.macro SAVE_AND_SWITCH_TO_KERNEL_CR3 scratch_reg:req save_reg:req
+.endm
+.macro RESTORE_CR3 save_reg:req
+.endm
+
+#endif
+
 #endif /* CONFIG_X86_64 */
 
 /*
diff -puN arch/x86/entry/entry_64_compat.S~kaiser-luto-base-cr3-work arch/x86/entry/entry_64_compat.S
--- a/arch/x86/entry/entry_64_compat.S~kaiser-luto-base-cr3-work	2017-11-22 15:45:44.747619750 -0800
+++ b/arch/x86/entry/entry_64_compat.S	2017-11-22 15:45:44.753619750 -0800
@@ -49,6 +49,10 @@
 ENTRY(entry_SYSENTER_compat)
 	/* Interrupts are off on entry. */
 	SWAPGS
+
+	/* We are about to clobber %rsp anyway, clobbering here is OK */
+	SWITCH_TO_KERNEL_CR3 scratch_reg=%rsp
+
 	movq	PER_CPU_VAR(cpu_current_top_of_stack), %rsp
 
 	/*
@@ -216,6 +220,12 @@ GLOBAL(entry_SYSCALL_compat_after_hwfram
 	pushq   $0			/* pt_regs->r15 = 0 */
 
 	/*
+	 * We just saved %rdi so it is safe to clobber.  It is not
+	 * preserved during the C calls inside TRACE_IRQS_OFF anyway.
+	 */
+	SWITCH_TO_KERNEL_CR3 scratch_reg=%rdi
+
+	/*
 	 * User mode is traced as though IRQs are on, and SYSENTER
 	 * turned them off.
 	 */
@@ -256,10 +266,22 @@ sysret32_from_system_call:
 	 * when the system call started, which is already known to user
 	 * code.  We zero R8-R10 to avoid info leaks.
          */
+	movq	RSP-ORIG_RAX(%rsp), %rsp
+
+	/*
+	 * The original userspace %rsp (RSP-ORIG_RAX(%rsp)) is stored
+	 * on the process stack which is not mapped to userspace and
+	 * not readable after we SWITCH_TO_USER_CR3.  Delay the CR3
+	 * switch until after the last reference to the process
+	 * stack.
+	 *
+	 * %r8 is zeroed before the sysret, thus safe to clobber.
+	 */
+	SWITCH_TO_USER_CR3 scratch_reg=%r8
+
 	xorq	%r8, %r8
 	xorq	%r9, %r9
 	xorq	%r10, %r10
-	movq	RSP-ORIG_RAX(%rsp), %rsp
 	swapgs
 	sysretl
 END(entry_SYSCALL_compat)
@@ -297,6 +319,14 @@ ENTRY(entry_INT80_compat)
 	ASM_CLAC			/* Do this early to minimize exposure */
 	SWAPGS
 
+	/*
+	 * Must switch CR3 before thread stack is used.  %r8 itself
+	 * is not saved into pt_regs and is not preserved across
+	 * function calls (like TRACE_IRQS_OFF calls), thus should
+	 * be safe to use.
+	 */
+	SWITCH_TO_KERNEL_CR3 scratch_reg=%r8
+
 	subq	$16*8, %rsp
 	call	switch_to_thread_stack
 	addq	$16*8, %rsp
diff -puN arch/x86/entry/entry_64.S~kaiser-luto-base-cr3-work arch/x86/entry/entry_64.S
--- a/arch/x86/entry/entry_64.S~kaiser-luto-base-cr3-work	2017-11-22 15:45:44.749619750 -0800
+++ b/arch/x86/entry/entry_64.S	2017-11-22 15:45:44.754619750 -0800
@@ -164,6 +164,9 @@ ENTRY(entry_SYSCALL_64_trampoline)
 	/* Stash the user RSP. */
 	movq	%rsp, RSP_SCRATCH
 
+	/* Note: using %rsp as a scratch reg. */
+	SWITCH_TO_KERNEL_CR3 scratch_reg=%rsp
+
 	/* Load the top of the task stack into RSP */
 	movq	CPU_ENTRY_AREA_tss + TSS_sp1 + CPU_ENTRY_AREA, %rsp
 
@@ -207,9 +210,16 @@ ENTRY(entry_SYSCALL_64)
 
 	swapgs
 	movq	%rsp, PER_CPU_VAR(rsp_scratch)
-	movq	PER_CPU_VAR(cpu_current_top_of_stack), %rsp
 
-	TRACE_IRQS_OFF
+	/*
+	 * The kernel CR3 is needed to map the process stack, but we
+	 * need a scratch register to be able to load CR3.  %rsp is
+	 * clobberable right now, so use it as a scratch register.
+	 * %rsp will look crazy here for a couple of instructions.
+	 */
+	SWITCH_TO_KERNEL_CR3 scratch_reg=%rsp
+
+	movq	PER_CPU_VAR(cpu_current_top_of_stack), %rsp
 
 	/* Construct struct pt_regs on stack */
 	pushq	$__USER_DS			/* pt_regs->ss */
@@ -231,6 +241,9 @@ GLOBAL(entry_SYSCALL_64_after_hwframe)
 	sub	$(6*8), %rsp			/* pt_regs->bp, bx, r12-15 not saved */
 	UNWIND_HINT_REGS extra=0
 
+	/* Must wait until we have the kernel CR3 to call C functions: */
+	TRACE_IRQS_OFF
+
 	/*
 	 * If we need to do entry work or if we guess we'll need to do
 	 * exit work, go straight to the slow path.
@@ -402,6 +415,7 @@ syscall_return_via_sysret:
 	 * We are on the trampoline stack.  All regs except RDI are live.
 	 * We can do future final exit work right here.
 	 */
+	SWITCH_TO_USER_CR3 scratch_reg=%rdi
 
 	popq	%rdi
 	popq	%rsp
@@ -743,6 +757,8 @@ GLOBAL(swapgs_restore_regs_and_return_to
 	 * We can do future final exit work right here.
 	 */
 
+	SWITCH_TO_USER_CR3 scratch_reg=%rdi
+
 	/* Restore RDI. */
 	popq	%rdi
 	SWAPGS
@@ -952,6 +968,10 @@ ENTRY(switch_to_thread_stack)
 	UNWIND_HINT_IRET_REGS offset=17*8
 
 	movq	%rdi, RDI+8(%rsp)
+
+	/* Need to switch before accessing the thread stack. */
+	SWITCH_TO_KERNEL_CR3 scratch_reg=%rdi
+
 	movq	%rsp, %rdi
 	movq	PER_CPU_VAR(cpu_current_top_of_stack), %rsp
 	UNWIND_HINT_IRET_REGS offset=17*8 base=%rdi
@@ -1252,7 +1272,11 @@ ENTRY(paranoid_entry)
 	js	1f				/* negative -> in kernel */
 	SWAPGS
 	xorl	%ebx, %ebx
-1:	ret
+
+1:
+	SAVE_AND_SWITCH_TO_KERNEL_CR3 scratch_reg=ax save_reg=%r14
+
+	ret
 END(paranoid_entry)
 
 /*
@@ -1274,6 +1298,7 @@ ENTRY(paranoid_exit)
 	testl	%ebx, %ebx			/* swapgs needed? */
 	jnz	.Lparanoid_exit_no_swapgs
 	TRACE_IRQS_IRETQ
+	RESTORE_CR3	%r14
 	SWAPGS_UNSAFE_STACK
 	jmp	.Lparanoid_exit_restore
 .Lparanoid_exit_no_swapgs:
@@ -1302,6 +1327,9 @@ ENTRY(error_entry)
 	 */
 	SWAPGS
 
+	/* We have user CR3.  Change to kernel CR3. */
+	SWITCH_TO_KERNEL_CR3 scratch_reg=%rax
+
 .Lerror_entry_from_usermode_after_swapgs:
 	/* Put us onto the real thread stack. */
 	leaq	8(%rsp), %rdi			/* pt_regs pointer */
@@ -1346,6 +1374,7 @@ ENTRY(error_entry)
 	 * gsbase and proceed.  We'll fix up the exception and land in
 	 * .Lgs_change's error handler with kernel gsbase.
 	 */
+	SWITCH_TO_KERNEL_CR3 scratch_reg=%rax
 	SWAPGS
 	jmp .Lerror_entry_done
 
@@ -1356,9 +1385,10 @@ ENTRY(error_entry)
 
 .Lerror_bad_iret:
 	/*
-	 * We came from an IRET to user mode, so we have user gsbase.
-	 * Switch to kernel gsbase:
+	 * We came from an IRET to user mode, so we have user
+	 * gsbase and CR3.  Switch to kernel gsbase and CR3:
 	 */
+	SWITCH_TO_KERNEL_CR3 scratch_reg=%rax
 	SWAPGS
 
 	/*
@@ -1391,6 +1421,10 @@ END(error_exit)
 /*
  * Runs on exception stack.  Xen PV does not go through this path at all,
  * so we can use real assembly here.
+ *
+ * Registers:
+ *	%r14: Used to save/restore the CR3 of the interrupted context
+ *	      when KAISER is in use.  Do not clobber.
  */
 ENTRY(nmi)
 	UNWIND_HINT_IRET_REGS
@@ -1454,6 +1488,7 @@ ENTRY(nmi)
 
 	swapgs
 	cld
+	SWITCH_TO_KERNEL_CR3 scratch_reg=%rdx
 	movq	%rsp, %rdx
 	movq	PER_CPU_VAR(cpu_current_top_of_stack), %rsp
 	UNWIND_HINT_IRET_REGS base=%rdx offset=8
@@ -1706,6 +1741,8 @@ end_repeat_nmi:
 	movq	$-1, %rsi
 	call	do_nmi
 
+	RESTORE_CR3 save_reg=%r14
+
 	testl	%ebx, %ebx			/* swapgs needed? */
 	jnz	nmi_restore
 nmi_swapgs:
_
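
A worked example of the bit arithmetic the macros rely on (the
addresses are made up for illustration):

	/*
	 * The 8k PGD keeps the kernel half in the even 4k page and the
	 * user half in the odd one, so the two CR3 values differ only
	 * in bit 12 (KAISER_SWITCH_MASK == 1 << PAGE_SHIFT == 0x1000):
	 */
	unsigned long kernel_cr3 = 0x1d2a6000UL;           /* bit 12 clear */
	unsigned long user_cr3 = kernel_cr3 | (1UL << 12); /* 0x1d2a7000  */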

* [PATCH 03/23] x86, kaiser: introduce user-mapped per-cpu areas
From: Dave Hansen @ 2017-11-23  0:34 UTC
  To: linux-kernel
  Cc: linux-mm, dave.hansen, moritz.lipp, daniel.gruss,
	michael.schwarz, richard.fellner, luto, torvalds, keescook,
	hughd, x86


From: Dave Hansen <dave.hansen@linux.intel.com>

These patches are based on work from a team at Graz University of
Technology posted here: https://github.com/IAIK/KAISER

The KAISER approach keeps two copies of the page tables: one for running
in the kernel and one for running userspace.  But, there are a few
structures that are needed for switching in and out of the kernel and
a good subset of *those* are per-cpu data.

This patch creates a new kind of per-cpu data that is mapped into
both copies of the page tables and so can be used no matter which
copy is active.
Users of this new section will be forthcoming.

Thanks to Hugh Dickins for cleanups to this code.

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Moritz Lipp <moritz.lipp@iaik.tugraz.at>
Cc: Daniel Gruss <daniel.gruss@iaik.tugraz.at>
Cc: Michael Schwarz <michael.schwarz@iaik.tugraz.at>
Cc: Richard Fellner <richard.fellner@student.tugraz.at>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Kees Cook <keescook@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: x86@kernel.org
---

 b/include/asm-generic/vmlinux.lds.h |    7 +++++++
 b/include/linux/percpu-defs.h       |   30 ++++++++++++++++++++++++++++++
 2 files changed, 37 insertions(+)

diff -puN include/asm-generic/vmlinux.lds.h~kaiser-prep-user-mapped-percpu include/asm-generic/vmlinux.lds.h
--- a/include/asm-generic/vmlinux.lds.h~kaiser-prep-user-mapped-percpu	2017-11-22 15:45:45.347619748 -0800
+++ b/include/asm-generic/vmlinux.lds.h	2017-11-22 15:45:45.352619748 -0800
@@ -826,7 +826,14 @@
  */
 #define PERCPU_INPUT(cacheline)						\
 	VMLINUX_SYMBOL(__per_cpu_start) = .;				\
+	VMLINUX_SYMBOL(__per_cpu_user_mapped_start) = .;		\
 	*(.data..percpu..first)						\
+	. = ALIGN(cacheline);						\
+	*(.data..percpu..user_mapped)					\
+	*(.data..percpu..user_mapped..shared_aligned)			\
+	. = ALIGN(PAGE_SIZE);						\
+	*(.data..percpu..user_mapped..page_aligned)			\
+	VMLINUX_SYMBOL(__per_cpu_user_mapped_end) = .;			\
 	. = ALIGN(PAGE_SIZE);						\
 	*(.data..percpu..page_aligned)					\
 	. = ALIGN(cacheline);						\
diff -puN include/linux/percpu-defs.h~kaiser-prep-user-mapped-percpu include/linux/percpu-defs.h
--- a/include/linux/percpu-defs.h~kaiser-prep-user-mapped-percpu	2017-11-22 15:45:45.349619748 -0800
+++ b/include/linux/percpu-defs.h	2017-11-22 15:45:45.353619748 -0800
@@ -35,6 +35,12 @@
 
 #endif
 
+#ifdef CONFIG_KAISER
+#define USER_MAPPED_SECTION "..user_mapped"
+#else
+#define USER_MAPPED_SECTION ""
+#endif
+
 /*
  * Base implementations of per-CPU variable declarations and definitions, where
  * the section in which the variable is to be placed is provided by the
@@ -115,6 +121,12 @@
 #define DEFINE_PER_CPU(type, name)					\
 	DEFINE_PER_CPU_SECTION(type, name, "")
 
+#define DECLARE_PER_CPU_USER_MAPPED(type, name)				\
+	DECLARE_PER_CPU_SECTION(type, name, USER_MAPPED_SECTION)
+
+#define DEFINE_PER_CPU_USER_MAPPED(type, name)				\
+	DEFINE_PER_CPU_SECTION(type, name, USER_MAPPED_SECTION)
+
 /*
  * Declaration/definition used for per-CPU variables that must come first in
  * the set of variables.
@@ -144,6 +156,14 @@
 	DEFINE_PER_CPU_SECTION(type, name, PER_CPU_SHARED_ALIGNED_SECTION) \
 	____cacheline_aligned_in_smp
 
+#define DECLARE_PER_CPU_SHARED_ALIGNED_USER_MAPPED(type, name)		\
+	DECLARE_PER_CPU_SECTION(type, name, USER_MAPPED_SECTION PER_CPU_SHARED_ALIGNED_SECTION) \
+	____cacheline_aligned_in_smp
+
+#define DEFINE_PER_CPU_SHARED_ALIGNED_USER_MAPPED(type, name)		\
+	DEFINE_PER_CPU_SECTION(type, name, USER_MAPPED_SECTION PER_CPU_SHARED_ALIGNED_SECTION) \
+	____cacheline_aligned_in_smp
+
 #define DECLARE_PER_CPU_ALIGNED(type, name)				\
 	DECLARE_PER_CPU_SECTION(type, name, PER_CPU_ALIGNED_SECTION)	\
 	____cacheline_aligned
@@ -162,6 +182,16 @@
 #define DEFINE_PER_CPU_PAGE_ALIGNED(type, name)				\
 	DEFINE_PER_CPU_SECTION(type, name, "..page_aligned")		\
 	__aligned(PAGE_SIZE)
+/*
+ * Declaration/definition used for per-CPU variables that must be page aligned and need to be mapped in user mode.
+ */
+#define DECLARE_PER_CPU_PAGE_ALIGNED_USER_MAPPED(type, name)		\
+	DECLARE_PER_CPU_SECTION(type, name, USER_MAPPED_SECTION"..page_aligned") \
+	__aligned(PAGE_SIZE)
+
+#define DEFINE_PER_CPU_PAGE_ALIGNED_USER_MAPPED(type, name)		\
+	DEFINE_PER_CPU_SECTION(type, name, USER_MAPPED_SECTION"..page_aligned") \
+	__aligned(PAGE_SIZE)
 
 /*
  * Declaration/definition used for per-CPU variables that must be read mostly.
_
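
A hedged usage sketch of the new macros (the real users come in later
patches; the variable name here is made up):

	/* In a header: declare a per-cpu variable visible in both copies. */
	DECLARE_PER_CPU_USER_MAPPED(unsigned long, kaiser_scratch);

	/*
	 * In one .c file: the linker script places this variable in
	 * .data..percpu..user_mapped, between the new
	 * __per_cpu_user_mapped_start/_end markers.
	 */
	DEFINE_PER_CPU_USER_MAPPED(unsigned long, kaiser_scratch);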

^ permalink raw reply	[flat|nested] 131+ messages in thread

* [PATCH 04/23] x86, kaiser: mark per-cpu data structures required for entry/exit
  2017-11-23  0:34 ` Dave Hansen
@ 2017-11-23  0:34   ` Dave Hansen
  -1 siblings, 0 replies; 131+ messages in thread
From: Dave Hansen @ 2017-11-23  0:34 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, dave.hansen, moritz.lipp, daniel.gruss,
	michael.schwarz, richard.fellner, luto, torvalds, keescook,
	hughd, x86


From: Dave Hansen <dave.hansen@linux.intel.com>

These patches are based on work from a team at Graz University of
Technology posted here: https://github.com/IAIK/KAISER

The KAISER approach keeps two copies of the page tables: one for running
in the kernel and one for running userspace.  But, there are a few
structures that are needed for switching in and out of the kernel and
a good subset of *those* are per-cpu data.

Here's a short summary of the things mapped to userspace (the
pattern is sketched after the list):
 * The gdt_page's virtual address is pointed to by the LGDT instruction.
   It is needed to define the segments.  Deeply required by CPU to run.
 * cpu_tss tells the CPU, among other things, where the new stacks are
   after user<->kernel transitions.  Needed by the CPU to make ring
   transitions.
 * exception_stacks are needed at interrupt and exception entry
   so that there is storage for, among other things, some temporary
   space to permit clobbering a register to load the kernel CR3.
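
As an aside (a sketch only, not part of the diff): each change
below is mechanical, switching a structure to the _USER_MAPPED
variant of its per-cpu macro from the previous patch.  For
gdt_page:

	/* Before: normal per-cpu data, unmapped while userspace runs: */
	DECLARE_PER_CPU_PAGE_ALIGNED(struct gdt_page, gdt_page);

	/*
	 * After: placed in the .data..percpu..user_mapped sections
	 * (bounded by __per_cpu_user_mapped_start/end) so that it can
	 * also be mapped into the user copy of the page tables:
	 */
	DECLARE_PER_CPU_PAGE_ALIGNED_USER_MAPPED(struct gdt_page, gdt_page);

A later patch in the series maps that whole per-cpu region into
the user (shadow) page tables for each CPU from kaiser_init().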

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Moritz Lipp <moritz.lipp@iaik.tugraz.at>
Cc: Daniel Gruss <daniel.gruss@iaik.tugraz.at>
Cc: Michael Schwarz <michael.schwarz@iaik.tugraz.at>
Cc: Richard Fellner <richard.fellner@student.tugraz.at>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Kees Cook <keescook@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: x86@kernel.org
---

 b/arch/x86/include/asm/desc.h      |    2 +-
 b/arch/x86/include/asm/processor.h |    2 +-
 b/arch/x86/kernel/cpu/common.c     |    4 ++--
 b/arch/x86/kernel/process.c        |    2 +-
 4 files changed, 5 insertions(+), 5 deletions(-)

diff -puN arch/x86/include/asm/desc.h~kaiser-prep-x86-percpu-user-mapped arch/x86/include/asm/desc.h
--- a/arch/x86/include/asm/desc.h~kaiser-prep-x86-percpu-user-mapped	2017-11-22 15:45:45.913619747 -0800
+++ b/arch/x86/include/asm/desc.h	2017-11-22 15:45:45.923619747 -0800
@@ -46,7 +46,7 @@ struct gdt_page {
 	struct desc_struct gdt[GDT_ENTRIES];
 } __attribute__((aligned(PAGE_SIZE)));
 
-DECLARE_PER_CPU_PAGE_ALIGNED(struct gdt_page, gdt_page);
+DECLARE_PER_CPU_PAGE_ALIGNED_USER_MAPPED(struct gdt_page, gdt_page);
 
 /* Provide the original GDT */
 static inline struct desc_struct *get_cpu_gdt_rw(unsigned int cpu)
diff -puN arch/x86/include/asm/processor.h~kaiser-prep-x86-percpu-user-mapped arch/x86/include/asm/processor.h
--- a/arch/x86/include/asm/processor.h~kaiser-prep-x86-percpu-user-mapped	2017-11-22 15:45:45.915619747 -0800
+++ b/arch/x86/include/asm/processor.h	2017-11-22 15:45:45.923619747 -0800
@@ -356,7 +356,7 @@ struct tss_struct {
 	unsigned long		io_bitmap[IO_BITMAP_LONGS + 1];
 } __attribute__((__aligned__(PAGE_SIZE)));
 
-DECLARE_PER_CPU_PAGE_ALIGNED(struct tss_struct, cpu_tss);
+DECLARE_PER_CPU_PAGE_ALIGNED_USER_MAPPED(struct tss_struct, cpu_tss);
 
 /*
  * sizeof(unsigned long) coming from an extra "long" at the end
diff -puN arch/x86/kernel/cpu/common.c~kaiser-prep-x86-percpu-user-mapped arch/x86/kernel/cpu/common.c
--- a/arch/x86/kernel/cpu/common.c~kaiser-prep-x86-percpu-user-mapped	2017-11-22 15:45:45.917619747 -0800
+++ b/arch/x86/kernel/cpu/common.c	2017-11-22 15:45:45.924619747 -0800
@@ -98,7 +98,7 @@ static const struct cpu_dev default_cpu
 
 static const struct cpu_dev *this_cpu = &default_cpu;
 
-DEFINE_PER_CPU_PAGE_ALIGNED(struct gdt_page, gdt_page) = { .gdt = {
+DEFINE_PER_CPU_PAGE_ALIGNED_USER_MAPPED(struct gdt_page, gdt_page) = { .gdt = {
 #ifdef CONFIG_X86_64
 	/*
 	 * We need valid kernel segments for data and code in long mode too
@@ -517,7 +517,7 @@ static const unsigned int exception_stac
 	  [DEBUG_STACK - 1]			= DEBUG_STKSZ
 };
 
-static DEFINE_PER_CPU_PAGE_ALIGNED(char, exception_stacks
+DEFINE_PER_CPU_PAGE_ALIGNED_USER_MAPPED(char, exception_stacks
 	[(N_EXCEPTION_STACKS - 1) * EXCEPTION_STKSZ + DEBUG_STKSZ]);
 #endif
 
diff -puN arch/x86/kernel/process.c~kaiser-prep-x86-percpu-user-mapped arch/x86/kernel/process.c
--- a/arch/x86/kernel/process.c~kaiser-prep-x86-percpu-user-mapped	2017-11-22 15:45:45.919619747 -0800
+++ b/arch/x86/kernel/process.c	2017-11-22 15:45:45.924619747 -0800
@@ -47,7 +47,7 @@
  * section. Since TSS's are completely CPU-local, we want them
  * on exact cacheline boundaries, to eliminate cacheline ping-pong.
  */
-__visible DEFINE_PER_CPU_SHARED_ALIGNED(struct tss_struct, cpu_tss) = {
+__visible DEFINE_PER_CPU_SHARED_ALIGNED_USER_MAPPED(struct tss_struct, cpu_tss) = {
 	.x86_tss = {
 		/*
 		 * .sp0 is only used when entering ring 0 from a lower
_

^ permalink raw reply	[flat|nested] 131+ messages in thread

* [PATCH 05/23] x86, kaiser: unmap kernel from userspace page tables (core patch)
  2017-11-23  0:34 ` Dave Hansen
@ 2017-11-23  0:34   ` Dave Hansen
  -1 siblings, 0 replies; 131+ messages in thread
From: Dave Hansen @ 2017-11-23  0:34 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, dave.hansen, richard.fellner, moritz.lipp,
	daniel.gruss, michael.schwarz, luto, torvalds, keescook, hughd,
	x86


From: Dave Hansen <dave.hansen@linux.intel.com>

These patches are based on work from a team at Graz University of
Technology: https://github.com/IAIK/KAISER .  This work would not have
been possible without their work as a starting point.

KAISER is a countermeasure against side channel attacks against kernel
virtual memory.  It leaves the existing page tables largely alone and
refers to them as the "kernel page tables".  It adds a "shadow" pgd for
every process which is intended for use when running userspace.  The
shadow pgd maps all the same user memory as the "kernel" copy, but
only maps a minimal set of kernel memory.

Whenever entering the kernel (syscalls, interrupts, exceptions), the
pgd is switched to the "kernel" copy.  When switching back to user
mode, the shadow pgd is used.

The minimalistic kernel page tables try to map only what is needed to
enter/exit the kernel such as the entry/exit functions themselves and
the interrupt descriptors (IDT).
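
Aside: both pgds sit in one order-1 (8k) allocation, with the
kernel copy in the first 4k and the user/shadow copy in the second,
so the switch itself reduces to toggling one bit of the CR3 value.
A rough C sketch of the idea, not the actual entry assembly:

	/*
	 * Sketch only; the real switch is done in assembly on the
	 * entry/exit paths.  Bit 12 (PAGE_SHIFT) selects between the
	 * kernel pgd (low 4k) and the user/shadow pgd (high 4k).
	 */
	unsigned long kernel_cr3 = cr3 & ~(1UL << PAGE_SHIFT); /* kernel entry */
	unsigned long user_cr3   = cr3 |  (1UL << PAGE_SHIFT); /* exit to user */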

=== Page Table Poisoning ===

KAISER has two copies of the page tables: one for the kernel and
one for when running in userspace.  There is also a kernel
portion of each of the page tables: the part that *maps* the
kernel.

The kernel portion is relatively static and uses pre-populated
PGDs.  Nobody ever calls set_pgd() on the kernel portion during
normal operation.

The userspace portion of the page tables is updated frequently as
userspace pages are mapped and page table pages are allocated.
These updates of the userspace *portion* of the tables need to be
reflected into both the kernel and user/shadow copies.

The original KAISER patches did this by effectively looking at the
address that is being updated.  If it is <PAGE_OFFSET, it is
considered to be an update of the userspace portion of the page
tables, and an entry must also be made in the shadow.

However, this has a wrinkle: there are a few places where low
addresses are used in supervisor (kernel) mode.  When EFI calls
are made, they use what are traditionally user addresses in
supervisor mode and trip over these checks.  The trampoline code
that is used for booting secondary CPUs has a similar issue.

Remember, there are two things that KAISER needs performed on a
userspace PGD:

 1. Populate the shadow itself
 2. Poison the kernel PGD so it cannot be used by userspace.

Only perform these actions when dealing with a user address *and* the
PGD has _PAGE_USER set.  That way, in-kernel users of low addresses
typically used by userspace are not accidentally poisoned.
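
In condensed form, the rule that kaiser_set_shadow_pgd() below
implements is (a sketch of the set path only):

	/* Sketch: populate the shadow, then poison the kernel copy. */
	if (pgd_userspace_access(pgd) && pgdp_maps_userspace(pgdp)) {
		/* 1. The user/shadow tables get the real, usable entry: */
		kernel_to_shadow_pgdp(pgdp)->pgd = pgd.pgd;
		/* 2. The kernel copy is poisoned with NX for userspace: */
		pgd.pgd |= _PAGE_NX;
	}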

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Moritz Lipp <moritz.lipp@iaik.tugraz.at>
Cc: Daniel Gruss <daniel.gruss@iaik.tugraz.at>
Cc: Michael Schwarz <michael.schwarz@iaik.tugraz.at>
Cc: Richard Fellner <richard.fellner@student.tugraz.at>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Kees Cook <keescook@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: x86@kernel.org
Changes from original KAISER patch:
 * Gobs of coding style cleanups
 * The original patch tried to allocate an order-2 page, then
   8k-align the result.  That's silly since order-2 is already
   guaranteed to be 16k-aligned.  Removed that gunk and just
   allocate an order-1 page.
 * Handle (or at least detect and warn on) allocation failures
 * Use _KERNPG_TABLE, not _PAGE_TABLE when creating mappings for
   the kernel in the shadow (user) page tables.
 * BUG_ON() for !pte_none() case was totally insane: it checked
   the physical address of the 'struct page' against the physical
   address of the page being mapped.
 * Added 5-level page table support
 * Never free kaiser page tables.  We don't have the locking to
   keep them from getting referenced during the freeing process.
 * Use a totally different scheme in the entry code.  The
   original code just fell apart in horrific ways in debug faults,
   NMIs, or when iret faults.  Big thanks to Andy Lutomirski for
   reducing the number of places that needed to be patched.  He
   made the code a ton simpler.
 * Use new entry trampoline instead of mapping process stacks.

Note: The original KAISER authors signed-off on their patch.  Some of
their code has been broken out into other patches in this series, but
their SoB was only retained here.

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Richard Fellner <richard.fellner@student.tugraz.at>
Cc: Moritz Lipp <moritz.lipp@iaik.tugraz.at>
Cc: Daniel Gruss <daniel.gruss@iaik.tugraz.at>
Cc: Michael Schwarz <michael.schwarz@iaik.tugraz.at>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Kees Cook <keescook@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: x86@kernel.org

---

 b/Documentation/x86/kaiser.txt      |  162 +++++++++++++
 b/arch/x86/entry/calling.h          |    1 
 b/arch/x86/include/asm/kaiser.h     |   57 ++++
 b/arch/x86/include/asm/pgtable.h    |    5 
 b/arch/x86/include/asm/pgtable_64.h |  132 ++++++++++
 b/arch/x86/kernel/espfix_64.c       |   17 +
 b/arch/x86/kernel/head_64.S         |   14 -
 b/arch/x86/mm/Makefile              |    1 
 b/arch/x86/mm/kaiser.c              |  441 ++++++++++++++++++++++++++++++++++++
 b/arch/x86/mm/pageattr.c            |    2 
 b/arch/x86/mm/pgtable.c             |   16 +
 b/include/linux/kaiser.h            |   29 ++
 b/init/main.c                       |    3 
 b/kernel/fork.c                     |    1 
 14 files changed, 875 insertions(+), 6 deletions(-)

diff -puN arch/x86/entry/calling.h~kaiser-base arch/x86/entry/calling.h
--- a/arch/x86/entry/calling.h~kaiser-base	2017-11-22 15:45:46.527619745 -0800
+++ b/arch/x86/entry/calling.h	2017-11-22 15:45:46.547619745 -0800
@@ -2,6 +2,7 @@
 #include <linux/jump_label.h>
 #include <asm/unwind_hints.h>
 #include <asm/cpufeatures.h>
+#include <asm/page_types.h>
 
 /*
 
diff -puN /dev/null arch/x86/include/asm/kaiser.h
--- /dev/null	2017-11-06 07:51:38.702108459 -0800
+++ b/arch/x86/include/asm/kaiser.h	2017-11-22 15:45:46.548619745 -0800
@@ -0,0 +1,57 @@
+#ifndef _ASM_X86_KAISER_H
+#define _ASM_X86_KAISER_H
+/*
+ * Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * Based on work published here: https://github.com/IAIK/KAISER
+ * Modified by Dave Hansen <dave.hansen@intel.com> to actually work.
+ */
+#ifndef __ASSEMBLY__
+
+#ifdef CONFIG_KAISER
+/**
+ *  kaiser_add_mapping - map a kernel range into the user page tables
+ *  @addr: the start address of the range
+ *  @size: the size of the range
+ *  @flags: The mapping flags of the pages
+ *
+ *  Use this on all data and code that need to be mapped into both
+ *  copies of the page tables.  This includes the code that switches
+ *  to/from userspace and all of the hardware structures that are
+ *  virtually-addressed and needed in userspace like the interrupt
+ *  table.
+ */
+extern int kaiser_add_mapping(unsigned long addr, unsigned long size,
+			      unsigned long flags);
+
+/**
+ *  kaiser_remove_mapping - remove a kernel mapping from the user page tables
+ *  @start: the start address of the range
+ *  @size: the size of the range
+ */
+extern void kaiser_remove_mapping(unsigned long start, unsigned long size);
+
+/**
+ *  kaiser_init - Initialize the shadow mapping
+ *
+ *  Most parts of the shadow mapping can be mapped at boot
+ *  time.  Only per-process things like the thread stacks
+ *  or a new LDT have to be mapped at runtime.  These boot-
+ *  time mappings are permanent and never unmapped.
+ */
+extern void kaiser_init(void);
+
+#endif
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_X86_KAISER_H */
diff -puN arch/x86/include/asm/pgtable_64.h~kaiser-base arch/x86/include/asm/pgtable_64.h
--- a/arch/x86/include/asm/pgtable_64.h~kaiser-base	2017-11-22 15:45:46.529619745 -0800
+++ b/arch/x86/include/asm/pgtable_64.h	2017-11-22 15:45:46.548619745 -0800
@@ -131,9 +131,137 @@ static inline pud_t native_pudp_get_and_
 #endif
 }
 
+#ifdef CONFIG_KAISER
+/*
+ * All top-level KAISER page tables are order-1 pages (8k-aligned
+ * and 8k in size).  The kernel one is at the beginning 4k and
+ * the user (shadow) one is in the last 4k.  To switch between
+ * them, you just need to flip the 12th bit in their addresses.
+ */
+#define KAISER_PGTABLE_SWITCH_BIT	PAGE_SHIFT
+
+/*
+ * This generates better code than the inline assembly in
+ * __set_bit().
+ */
+static inline void *ptr_set_bit(void *ptr, int bit)
+{
+	unsigned long __ptr = (unsigned long)ptr;
+
+	__ptr |= (1<<bit);
+	return (void *)__ptr;
+}
+static inline void *ptr_clear_bit(void *ptr, int bit)
+{
+	unsigned long __ptr = (unsigned long)ptr;
+
+	__ptr &= ~(1<<bit);
+	return (void *)__ptr;
+}
+
+static inline pgd_t *kernel_to_shadow_pgdp(pgd_t *pgdp)
+{
+	return ptr_set_bit(pgdp, KAISER_PGTABLE_SWITCH_BIT);
+}
+static inline pgd_t *shadow_to_kernel_pgdp(pgd_t *pgdp)
+{
+	return ptr_clear_bit(pgdp, KAISER_PGTABLE_SWITCH_BIT);
+}
+static inline p4d_t *kernel_to_shadow_p4dp(p4d_t *p4dp)
+{
+	return ptr_set_bit(p4dp, KAISER_PGTABLE_SWITCH_BIT);
+}
+static inline p4d_t *shadow_to_kernel_p4dp(p4d_t *p4dp)
+{
+	return ptr_clear_bit(p4dp, KAISER_PGTABLE_SWITCH_BIT);
+}
+#endif /* CONFIG_KAISER */
+
+/*
+ * Page table pages are page-aligned.  The lower half of the top
+ * level is used for userspace and the top half for the kernel.
+ *
+ * Returns true for parts of the PGD that map userspace and
+ * false for the parts that map the kernel.
+ */
+static inline bool pgdp_maps_userspace(void *__ptr)
+{
+	unsigned long ptr = (unsigned long)__ptr;
+
+	return (ptr & ~PAGE_MASK) < (PAGE_SIZE / 2);
+}
+
+/*
+ * Does this PGD allow access from userspace?
+ */
+static inline bool pgd_userspace_access(pgd_t pgd)
+{
+	return pgd.pgd & _PAGE_USER;
+}
+
+/*
+ * Take a PGD location (pgdp) and a pgd value that needs
+ * to be set there.  Populates the shadow and returns
+ * the resulting PGD that must be set in the kernel copy
+ * of the page tables.
+ */
+static inline pgd_t kaiser_set_shadow_pgd(pgd_t *pgdp, pgd_t pgd)
+{
+#ifdef CONFIG_KAISER
+	if (pgd_userspace_access(pgd)) {
+		if (pgdp_maps_userspace(pgdp)) {
+			/*
+			 * The user/shadow page tables get the full
+			 * PGD, accessible from userspace:
+			 */
+			kernel_to_shadow_pgdp(pgdp)->pgd = pgd.pgd;
+			/*
+			 * For the copy of the pgd that the kernel
+			 * uses, make it unusable to userspace.  This
+			 * ensures if we get out to userspace with the
+			 * wrong CR3 value, userspace will crash
+			 * instead of running.
+			 */
+			pgd.pgd |= _PAGE_NX;
+		}
+	} else if (pgd_userspace_access(*pgdp)) {
+		/*
+		 * We are clearing a _PAGE_USER PGD for which we
+		 * presumably populated the shadow.  We must now
+		 * clear the shadow PGD entry.
+		 */
+		if (pgdp_maps_userspace(pgdp)) {
+			kernel_to_shadow_pgdp(pgdp)->pgd = pgd.pgd;
+		} else {
+			/*
+			 * Attempted to clear a _PAGE_USER PGD which
+			 * is in the kernel portion of the address
+			 * space.  PGDs are pre-populated and we
+			 * never clear them.
+			 */
+			WARN_ON_ONCE(1);
+		}
+	} else {
+		/*
+		 * _PAGE_USER was not set in either the PGD being set
+		 * or cleared.  All kernel PGDs should be
+		 * pre-populated so this should never happen after
+		 * boot.
+		 */
+	}
+#endif
+	/* return the copy of the PGD we want the kernel to use: */
+	return pgd;
+}
+
+
 static inline void native_set_p4d(p4d_t *p4dp, p4d_t p4d)
 {
+#if defined(CONFIG_KAISER) && !defined(CONFIG_X86_5LEVEL)
+	p4dp->pgd = kaiser_set_shadow_pgd(&p4dp->pgd, p4d.pgd);
+#else /* CONFIG_KAISER */
 	*p4dp = p4d;
+#endif
 }
 
 static inline void native_p4d_clear(p4d_t *p4d)
@@ -147,7 +275,11 @@ static inline void native_p4d_clear(p4d_
 
 static inline void native_set_pgd(pgd_t *pgdp, pgd_t pgd)
 {
+#ifdef CONFIG_KAISER
+	*pgdp = kaiser_set_shadow_pgd(pgdp, pgd);
+#else /* CONFIG_KAISER */
 	*pgdp = pgd;
+#endif
 }
 
 static inline void native_pgd_clear(pgd_t *pgd)
diff -puN arch/x86/include/asm/pgtable.h~kaiser-base arch/x86/include/asm/pgtable.h
--- a/arch/x86/include/asm/pgtable.h~kaiser-base	2017-11-22 15:45:46.531619745 -0800
+++ b/arch/x86/include/asm/pgtable.h	2017-11-22 15:45:46.549619745 -0800
@@ -1106,6 +1106,11 @@ static inline void pmdp_set_wrprotect(st
 static inline void clone_pgd_range(pgd_t *dst, pgd_t *src, int count)
 {
        memcpy(dst, src, count * sizeof(pgd_t));
+#ifdef CONFIG_KAISER
+	/* Clone the shadow pgd part as well */
+	memcpy(kernel_to_shadow_pgdp(dst), kernel_to_shadow_pgdp(src),
+	       count * sizeof(pgd_t));
+#endif
 }
 
 #define PTE_SHIFT ilog2(PTRS_PER_PTE)
diff -puN arch/x86/kernel/espfix_64.c~kaiser-base arch/x86/kernel/espfix_64.c
--- a/arch/x86/kernel/espfix_64.c~kaiser-base	2017-11-22 15:45:46.533619745 -0800
+++ b/arch/x86/kernel/espfix_64.c	2017-11-22 15:45:46.549619745 -0800
@@ -41,6 +41,7 @@
 #include <asm/pgalloc.h>
 #include <asm/setup.h>
 #include <asm/espfix.h>
+#include <asm/kaiser.h>
 
 /*
  * Note: we only need 6*8 = 48 bytes for the espfix stack, but round
@@ -128,6 +129,22 @@ void __init init_espfix_bsp(void)
 	pgd = &init_top_pgt[pgd_index(ESPFIX_BASE_ADDR)];
 	p4d = p4d_alloc(&init_mm, pgd, ESPFIX_BASE_ADDR);
 	p4d_populate(&init_mm, p4d, espfix_pud_page);
+	/*
+	 * Just copy the top-level PGD that is mapping the espfix
+	 * area to ensure it is mapped into the shadow user page
+	 * tables.
+	 *
+	 * For 5-level paging, the espfix pgd was populated when
+	 * kaiser_init() pre-populated all the pgd entries.  The above
+	 * p4d_alloc() would never do anything and the p4d_populate()
+	 * would be done to a p4d already mapped in the userspace pgd.
+	 */
+#ifdef CONFIG_KAISER
+	if (CONFIG_PGTABLE_LEVELS <= 4) {
+		set_pgd(kernel_to_shadow_pgdp(pgd),
+			__pgd(_KERNPG_TABLE | (p4d_pfn(*p4d) << PAGE_SHIFT)));
+	}
+#endif
 
 	/* Randomize the locations */
 	init_espfix_random();
diff -puN arch/x86/kernel/head_64.S~kaiser-base arch/x86/kernel/head_64.S
--- a/arch/x86/kernel/head_64.S~kaiser-base	2017-11-22 15:45:46.534619745 -0800
+++ b/arch/x86/kernel/head_64.S	2017-11-22 15:45:46.549619745 -0800
@@ -341,6 +341,14 @@ GLOBAL(early_recursion_flag)
 	.balign	PAGE_SIZE; \
 GLOBAL(name)
 
+#ifdef CONFIG_KAISER
+#define NEXT_PGD_PAGE(name) \
+	.balign 2 * PAGE_SIZE; \
+GLOBAL(name)
+#else
+#define NEXT_PGD_PAGE(name) NEXT_PAGE(name)
+#endif
+
 /* Automate the creation of 1 to 1 mapping pmd entries */
 #define PMDS(START, PERM, COUNT)			\
 	i = 0 ;						\
@@ -350,7 +358,7 @@ GLOBAL(name)
 	.endr
 
 	__INITDATA
-NEXT_PAGE(early_top_pgt)
+NEXT_PGD_PAGE(early_top_pgt)
 	.fill	511,8,0
 #ifdef CONFIG_X86_5LEVEL
 	.quad	level4_kernel_pgt - __START_KERNEL_map + _PAGE_TABLE_NOENC
@@ -364,7 +372,7 @@ NEXT_PAGE(early_dynamic_pgts)
 	.data
 
 #if defined(CONFIG_XEN_PV) || defined(CONFIG_XEN_PVH)
-NEXT_PAGE(init_top_pgt)
+NEXT_PGD_PAGE(init_top_pgt)
 	.quad   level3_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE_NOENC
 	.org    init_top_pgt + PGD_PAGE_OFFSET*8, 0
 	.quad   level3_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE_NOENC
@@ -381,7 +389,7 @@ NEXT_PAGE(level2_ident_pgt)
 	 */
 	PMDS(0, __PAGE_KERNEL_IDENT_LARGE_EXEC, PTRS_PER_PMD)
 #else
-NEXT_PAGE(init_top_pgt)
+NEXT_PGD_PAGE(init_top_pgt)
 	.fill	512,8,0
 #endif
 
diff -puN /dev/null arch/x86/mm/kaiser.c
--- /dev/null	2017-11-06 07:51:38.702108459 -0800
+++ b/arch/x86/mm/kaiser.c	2017-11-22 15:45:46.550619745 -0800
@@ -0,0 +1,441 @@
+/*
+ * Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * This code is based in part on work published here:
+ *
+ *	https://github.com/IAIK/KAISER
+ *
+ * The original work was written and signed off for the Linux
+ * kernel by:
+ *
+ *   Signed-off-by: Richard Fellner <richard.fellner@student.tugraz.at>
+ *   Signed-off-by: Moritz Lipp <moritz.lipp@iaik.tugraz.at>
+ *   Signed-off-by: Daniel Gruss <daniel.gruss@iaik.tugraz.at>
+ *   Signed-off-by: Michael Schwarz <michael.schwarz@iaik.tugraz.at>
+ *
+ * Major changes to the original code by: Dave Hansen <dave.hansen@intel.com>
+ */
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/bug.h>
+#include <linux/init.h>
+#include <linux/spinlock.h>
+#include <linux/mm.h>
+#include <linux/uaccess.h>
+
+#include <asm/kaiser.h>
+#include <asm/pgtable.h>
+#include <asm/pgalloc.h>
+#include <asm/tlbflush.h>
+#include <asm/desc.h>
+
+#define KAISER_WALK_ATOMIC  0x1
+
+/*
+ * At runtime, the only things we map are some things for CPU
+ * hotplug, and stacks for new processes.  No two CPUs will ever
+ * be populating the same addresses, so we only need to ensure
+ * that we protect between two CPUs trying to allocate and
+ * populate the same page table page.
+ *
+ * Only take this lock when doing a set_p[4um]d(), but it is not
+ * needed for doing a set_pte().  We assume that only the *owner*
+ * of a given allocation will be doing this for _their_
+ * allocation.
+ *
+ * This ensures that once a system has been running for a while
+ * and there have been stacks all over and these page tables
+ * are fully populated, there will be no further acquisitions of
+ * this lock.
+ */
+static DEFINE_SPINLOCK(shadow_table_allocation_lock);
+
+/*
+ * This is only for walking kernel addresses.  We use it to help
+ * recreate the "shadow" page tables which are used while we are in
+ * userspace.
+ *
+ * This can be called on any kernel memory addresses and will work
+ * with any page sizes and any types: normal linear map memory,
+ * vmalloc(), even kmap().
+ *
+ * Note: this is only used when mapping new *kernel* entries into
+ * the user/shadow page tables.  It is never used for userspace
+ * addresses.
+ *
+ * Returns -1 on error.
+ */
+static inline unsigned long get_pa_from_kernel_map(unsigned long vaddr)
+{
+	pgd_t *pgd;
+	p4d_t *p4d;
+	pud_t *pud;
+	pmd_t *pmd;
+	pte_t *pte;
+
+	/* We should only be asked to walk kernel addresses */
+	if (vaddr < PAGE_OFFSET) {
+		WARN_ON_ONCE(1);
+		return -1;
+	}
+
+	pgd = pgd_offset_k(vaddr);
+	/*
+	 * We made all the kernel PGDs present in kaiser_init().
+	 * We expect them to stay that way.
+	 */
+	if (pgd_none(*pgd)) {
+		WARN_ON_ONCE(1);
+		return -1;
+	}
+	/*
+	 * PGDs are either 512GB or 128TB on all x86_64
+	 * configurations.  We don't handle these.
+	 */
+	BUILD_BUG_ON(pgd_large(*pgd) != 0);
+
+	p4d = p4d_offset(pgd, vaddr);
+	if (p4d_none(*p4d)) {
+		WARN_ON_ONCE(1);
+		return -1;
+	}
+
+	pud = pud_offset(p4d, vaddr);
+	if (pud_none(*pud)) {
+		WARN_ON_ONCE(1);
+		return -1;
+	}
+
+	if (pud_large(*pud))
+		return (pud_pfn(*pud) << PAGE_SHIFT) | (vaddr & ~PUD_PAGE_MASK);
+
+	pmd = pmd_offset(pud, vaddr);
+	if (pmd_none(*pmd)) {
+		WARN_ON_ONCE(1);
+		return -1;
+	}
+
+	if (pmd_large(*pmd))
+		return (pmd_pfn(*pmd) << PAGE_SHIFT) | (vaddr & ~PMD_PAGE_MASK);
+
+	pte = pte_offset_kernel(pmd, vaddr);
+	if (pte_none(*pte)) {
+		WARN_ON_ONCE(1);
+		return -1;
+	}
+
+	return (pte_pfn(*pte) << PAGE_SHIFT) | (vaddr & ~PAGE_MASK);
+}
+
+/*
+ * Walk the shadow copy of the page tables (optionally) trying to
+ * allocate page table pages on the way down.  Does not support
+ * large pages since the data we are mapping is (generally) not
+ * large enough or aligned to 2MB.
+ *
+ * Note: this is only used when mapping *new* kernel data into the
+ * user/shadow page tables.  It is never used for userspace data.
+ *
+ * Returns a pointer to a PTE on success, or NULL on failure.
+ */
+static pte_t *kaiser_shadow_pagetable_walk(unsigned long address,
+					   unsigned long flags)
+{
+	pte_t *pte;
+	pmd_t *pmd;
+	pud_t *pud;
+	p4d_t *p4d;
+	pgd_t *pgd = kernel_to_shadow_pgdp(pgd_offset_k(address));
+	gfp_t gfp = (GFP_KERNEL | __GFP_NOTRACK | __GFP_ZERO);
+
+	if (flags & KAISER_WALK_ATOMIC) {
+		gfp &= ~GFP_KERNEL;
+		gfp |= __GFP_HIGH | __GFP_ATOMIC;
+	}
+
+	if (address < PAGE_OFFSET) {
+		WARN_ONCE(1, "attempt to walk user address\n");
+		return NULL;
+	}
+
+	if (pgd_none(*pgd)) {
+		WARN_ONCE(1, "All shadow pgds should have been populated\n");
+		return NULL;
+	}
+	BUILD_BUG_ON(pgd_large(*pgd) != 0);
+
+	p4d = p4d_offset(pgd, address);
+	BUILD_BUG_ON(p4d_large(*p4d) != 0);
+	if (p4d_none(*p4d)) {
+		unsigned long new_pud_page = __get_free_page(gfp);
+		if (!new_pud_page)
+			return NULL;
+
+		spin_lock(&shadow_table_allocation_lock);
+		if (p4d_none(*p4d))
+			set_p4d(p4d, __p4d(_KERNPG_TABLE | __pa(new_pud_page)));
+		else
+			free_page(new_pud_page);
+		spin_unlock(&shadow_table_allocation_lock);
+	}
+
+	pud = pud_offset(p4d, address);
+	/* The shadow page tables do not use large mappings: */
+	if (pud_large(*pud)) {
+		WARN_ON(1);
+		return NULL;
+	}
+	if (pud_none(*pud)) {
+		unsigned long new_pmd_page = __get_free_page(gfp);
+		if (!new_pmd_page)
+			return NULL;
+
+		spin_lock(&shadow_table_allocation_lock);
+		if (pud_none(*pud))
+			set_pud(pud, __pud(_KERNPG_TABLE | __pa(new_pmd_page)));
+		else
+			free_page(new_pmd_page);
+		spin_unlock(&shadow_table_allocation_lock);
+	}
+
+	pmd = pmd_offset(pud, address);
+	/* The shadow page tables do not use large mappings: */
+	if (pmd_large(*pmd)) {
+		WARN_ON(1);
+		return NULL;
+	}
+	if (pmd_none(*pmd)) {
+		unsigned long new_pte_page = __get_free_page(gfp);
+		if (!new_pte_page)
+			return NULL;
+
+		spin_lock(&shadow_table_allocation_lock);
+		if (pmd_none(*pmd))
+			set_pmd(pmd, __pmd(_KERNPG_TABLE  | __pa(new_pte_page)));
+		else
+			free_page(new_pte_page);
+		spin_unlock(&shadow_table_allocation_lock);
+	}
+
+	pte = pte_offset_kernel(pmd, address);
+	if (pte_flags(*pte) & _PAGE_USER) {
+		WARN_ONCE(1, "attempt to walk to user pte\n");
+		return NULL;
+	}
+	return pte;
+}
+
+/*
+ * Given a kernel address, @__start_addr, copy that mapping into
+ * the user (shadow) page tables.  This may need to allocate page
+ * table pages.
+ */
+int kaiser_add_user_map(const void *__start_addr, unsigned long size,
+			unsigned long flags)
+{
+	pte_t *pte;
+	unsigned long start_addr = (unsigned long)__start_addr;
+	unsigned long address = start_addr & PAGE_MASK;
+	unsigned long end_addr = PAGE_ALIGN(start_addr + size);
+	unsigned long target_address;
+
+	for (; address < end_addr; address += PAGE_SIZE) {
+		target_address = get_pa_from_kernel_map(address);
+		if (target_address == -1)
+			return -EIO;
+
+		pte = kaiser_shadow_pagetable_walk(address, false);
+		/*
+		 * Errors come from either -ENOMEM for a page
+		 * table page, or something screwy that did a
+		 * WARN_ON().  Just return -ENOMEM.
+		 */
+		if (!pte)
+			return -ENOMEM;
+		if (pte_none(*pte)) {
+			set_pte(pte, __pte(flags | target_address));
+		} else {
+			pte_t tmp;
+			/*
+			 * Make a fake, temporary PTE that mimics the
+			 * one we would have created.
+			 */
+			set_pte(&tmp, __pte(flags | target_address));
+			/*
+			 * Warn if the pte that would have been
+			 * created is different from the one that
+			 * was there previously.  In other words,
+			 * we allow the same PTE value to be set,
+			 * but not changed.
+			 */
+			WARN_ON_ONCE(!pte_same(*pte, tmp));
+		}
+	}
+	return 0;
+}
+
+int kaiser_add_user_map_ptrs(const void *__start_addr,
+			     const void *__end_addr,
+			     unsigned long flags)
+{
+	return kaiser_add_user_map(__start_addr,
+				   __end_addr - __start_addr,
+				   flags);
+}
+
+/*
+ * Ensure that the top level of the (shadow) page tables are
+ * entirely populated.  This ensures that all processes that get
+ * forked have the same entries.  This way, we do not have to
+ * ever go set up new entries in older processes.
+ *
+ * Note: we never free these, so there are no updates to them
+ * after this.
+ */
+static void __init kaiser_init_all_pgds(void)
+{
+	pgd_t *pgd;
+	int i;
+
+	pgd = kernel_to_shadow_pgdp(pgd_offset_k(0UL));
+	for (i = PTRS_PER_PGD / 2; i < PTRS_PER_PGD; i++) {
+		/*
+		 * Each PGD entry moves up PGDIR_SIZE bytes through
+		 * the address space, so get the first virtual
+		 * address mapped by PGD #i:
+		 */
+		unsigned long addr = i * PGDIR_SIZE;
+#if CONFIG_PGTABLE_LEVELS > 4
+		p4d_t *p4d = p4d_alloc_one(&init_mm, addr);
+		if (!p4d) {
+			WARN_ON(1);
+			break;
+		}
+		set_pgd(pgd + i, __pgd(_KERNPG_TABLE | __pa(p4d)));
+#else /* CONFIG_PGTABLE_LEVELS <= 4 */
+		pud_t *pud = pud_alloc_one(&init_mm, addr);
+		if (!pud) {
+			WARN_ON(1);
+			break;
+		}
+		set_pgd(pgd + i, __pgd(_KERNPG_TABLE | __pa(pud)));
+#endif /* CONFIG_PGTABLE_LEVELS */
+	}
+}
+
+/*
+ * Page table allocations called by kaiser_add_user_map() can
+ * theoretically fail, but are very unlikely to fail in early boot.
+ * This would at least output a warning before crashing.
+ *
+ * Do the checking and warning in a macro to make it more readable and
+ * preserve line numbers in the warning message that you would not get
+ * with an inline.
+ */
+#define kaiser_add_user_map_early(start, size, flags) do {	\
+	int __ret = kaiser_add_user_map(start, size, flags);	\
+	WARN_ON(__ret);						\
+} while (0)
+
+#define kaiser_add_user_map_ptrs_early(start, end, flags) do {		\
+	int __ret = kaiser_add_user_map_ptrs(start, end, flags);	\
+	WARN_ON(__ret);							\
+} while (0)
+
+extern char __per_cpu_user_mapped_start[], __per_cpu_user_mapped_end[];
+/*
+ * If anything in here fails, we will likely die on one of the
+ * first kernel->user transitions and init will die.  But, we
+ * will have most of the kernel up by then and should be able to
+ * get a clean warning out of it.  If we BUG_ON() here, we run
+ * the risk of crashing before we have good console output.
+ *
+ * When KAISER is enabled, we remove _PAGE_GLOBAL from all of the
+ * kernel PTE permissions.  This ensures that the TLB entries for
+ * the kernel are not available when in userspace.  However, for
+ * the pages that are available to userspace *anyway*, we might as
+ * well continue to map them _PAGE_GLOBAL and enjoy the potential
+ * performance advantages.
+ */
+void __init kaiser_init(void)
+{
+	int cpu;
+
+	kaiser_init_all_pgds();
+
+	for_each_possible_cpu(cpu) {
+		void *percpu_vaddr = __per_cpu_user_mapped_start +
+				     per_cpu_offset(cpu);
+		unsigned long percpu_sz = __per_cpu_user_mapped_end -
+					  __per_cpu_user_mapped_start;
+		kaiser_add_user_map_early(percpu_vaddr, percpu_sz,
+					  __PAGE_KERNEL | _PAGE_GLOBAL);
+	}
+
+	kaiser_add_user_map_ptrs_early(__entry_text_start, __entry_text_end,
+				       __PAGE_KERNEL_RX | _PAGE_GLOBAL);
+
+	/* the fixed map address of the idt_table */
+	kaiser_add_user_map_early((void *)idt_descr.address,
+				  sizeof(gate_desc) * NR_VECTORS,
+				  __PAGE_KERNEL_RO | _PAGE_GLOBAL);
+}
+
+int kaiser_add_mapping(unsigned long addr, unsigned long size,
+		       unsigned long flags)
+{
+	return kaiser_add_user_map((const void *)addr, size, flags);
+}
+
+void kaiser_remove_mapping(unsigned long start, unsigned long size)
+{
+	unsigned long addr;
+
+	/* The shadow page tables always use small pages: */
+	for (addr = start; addr < start + size; addr += PAGE_SIZE) {
+		/*
+		 * Do an "atomic" walk in case this got called from an atomic
+		 * context.  This should not do any allocations because we
+		 * should only be walking things that are known to be mapped.
+		 */
+		pte_t *pte = kaiser_shadow_pagetable_walk(addr, KAISER_WALK_ATOMIC);
+
+		/*
+		 * We are removing a mapping that should
+		 * exist.  WARN if it was not there:
+		 */
+		if (!pte) {
+			WARN_ON_ONCE(1);
+			continue;
+		}
+
+		pte_clear(&init_mm, addr, pte);
+	}
+	/*
+	 * This ensures that the TLB entries used to map this data are
+	 * no longer usable on *this* CPU.  We theoretically want to
+	 * flush the entries on all CPUs here, but that's too
+	 * expensive right now: this is called to unmap process
+	 * stacks in the exit() path.
+	 *
+	 * This can change if we get to the point where this is not
+	 * in a remotely hot path, like only called via write_ldt().
+	 *
+	 * Note: we could probably also just invalidate the individual
+	 * addresses to take care of *this* PCID and then do a
+	 * tlb_flush_shared_nonglobals() to ensure that all other
+	 * PCIDs get flushed before being used again.
+	 */
+	__native_flush_tlb_global();
+}
diff -puN arch/x86/mm/Makefile~kaiser-base arch/x86/mm/Makefile
--- a/arch/x86/mm/Makefile~kaiser-base	2017-11-22 15:45:46.536619745 -0800
+++ b/arch/x86/mm/Makefile	2017-11-22 15:45:46.550619745 -0800
@@ -46,6 +46,7 @@ obj-$(CONFIG_NUMA_EMU)		+= numa_emulatio
 obj-$(CONFIG_X86_INTEL_MPX)	+= mpx.o
 obj-$(CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS) += pkeys.o
 obj-$(CONFIG_RANDOMIZE_MEMORY) += kaslr.o
+obj-$(CONFIG_KAISER)		+= kaiser.o
 
 obj-$(CONFIG_AMD_MEM_ENCRYPT)	+= mem_encrypt.o
 obj-$(CONFIG_AMD_MEM_ENCRYPT)	+= mem_encrypt_boot.o
diff -puN arch/x86/mm/pageattr.c~kaiser-base arch/x86/mm/pageattr.c
--- a/arch/x86/mm/pageattr.c~kaiser-base	2017-11-22 15:45:46.538619745 -0800
+++ b/arch/x86/mm/pageattr.c	2017-11-22 15:45:46.551619745 -0800
@@ -859,7 +859,7 @@ static void unmap_pmd_range(pud_t *pud,
 			pud_clear(pud);
 }
 
-static void unmap_pud_range(p4d_t *p4d, unsigned long start, unsigned long end)
+void unmap_pud_range(p4d_t *p4d, unsigned long start, unsigned long end)
 {
 	pud_t *pud = pud_offset(p4d, start);
 
diff -puN arch/x86/mm/pgtable.c~kaiser-base arch/x86/mm/pgtable.c
--- a/arch/x86/mm/pgtable.c~kaiser-base	2017-11-22 15:45:46.540619745 -0800
+++ b/arch/x86/mm/pgtable.c	2017-11-22 15:45:46.551619745 -0800
@@ -355,14 +355,26 @@ static inline void _pgd_free(pgd_t *pgd)
 		kmem_cache_free(pgd_cache, pgd);
 }
 #else
+
+#ifdef CONFIG_KAISER
+/*
+ * Instead of one pgd, we acquire two pgds.  Being order-1, it is
+ * both 8k in size and 8k-aligned.  That lets us just flip bit 12
+ * in a pointer to swap between the two 4k halves.
+ */
+#define PGD_ALLOCATION_ORDER 1
+#else
+#define PGD_ALLOCATION_ORDER 0
+#endif
+
 static inline pgd_t *_pgd_alloc(void)
 {
-	return (pgd_t *)__get_free_page(PGALLOC_GFP);
+	return (pgd_t *)__get_free_pages(PGALLOC_GFP, PGD_ALLOCATION_ORDER);
 }
 
 static inline void _pgd_free(pgd_t *pgd)
 {
-	free_page((unsigned long)pgd);
+	free_pages((unsigned long)pgd, PGD_ALLOCATION_ORDER);
 }
 #endif /* CONFIG_X86_PAE */
 
diff -puN /dev/null Documentation/x86/kaiser.txt
--- /dev/null	2017-11-06 07:51:38.702108459 -0800
+++ b/Documentation/x86/kaiser.txt	2017-11-22 15:45:46.552619745 -0800
@@ -0,0 +1,162 @@
+Overview
+========
+
+KAISER is a countermeasure against attacks on kernel address
+information.  There are at least three existing, published,
+approaches using the shared user/kernel mapping and hardware features
+to defeat KASLR.  One approach referenced in the paper locates the
+kernel by observing differences in page fault timing between
+present-but-inaccessible kernel pages and non-present pages.
+
+When the kernel is entered via syscalls, interrupts or exceptions,
+page tables are switched to the full "kernel" copy.  When the
+system switches back to user mode, the user/shadow copy is used.
+
+The minimalistic kernel portion of the user page tables tries to
+map only what is needed to enter/exit the kernel such as the
+entry/exit functions themselves and the interrupt descriptor
+table (IDT).  There are a few unnecessary things that get mapped
+such as the first C function when entering an interrupt (see
+comments in kaiser.c).
+
+This helps to ensure that side-channel attacks that leverage the
+paging structures do not function when KAISER is enabled.  It can be
+enabled by setting CONFIG_KAISER=y.
+
+Page Table Management
+=====================
+
+When KAISER is enabled, the kernel manages two sets of page
+tables.  The first copy is very similar to what would be present
+for a kernel without KAISER.  This includes a complete mapping of
+userspace that the kernel can use for things like copy_to_user().
+
+The second (shadow) is used when running userspace and mirrors the
+mapping of userspace present in the kernel copy.  It maps only
+the kernel data needed to enter and exit the kernel.
+
+The shadow is populated by the kaiser_add_*() functions.  Only
+kernel data which has been explicitly mapped will appear in the
+shadow copy.  These calls are rare at runtime.
+
+For a new userspace mapping, the kernel makes the entries in its
+page tables like normal.  The only difference is when the kernel
+makes entries in the top (PGD) level.  In addition to setting the
+entry in the main kernel PGD, a copy of the entry is made in the
+shadow PGD.
+
+For user space mappings the kernel creates an entry in the kernel
+PGD and the same entry in the shadow PGD, so the underlying page
+table to which the PGD entry points is shared down to the PTE
+level.  This leaves a single, shared set of userspace page tables
+to manage.  One PTE to lock, one set of accessed bits, dirty
+bits, etc...
+
+Overhead
+========
+
+Protection against side-channel attacks is important.  But,
+this protection comes at a cost:
+
+1. Increased Memory Use
+  a. Each process now needs an order-1 PGD instead of order-0.
+     (Consumes 4k per process).
+  b. The pre-allocated second-level (p4d or pud) kernel page
+     table pages cost ~1MB of additional memory at boot.  This
+     is not totally wasted because some of these pages would
+     have been needed eventually for normal kernel page tables
+     and things in the vmalloc() area like vmemmap[].
+  c. Statically-allocated structures and entry/exit text must
+     be padded out to 4k (or 8k for PGDs) so they can be mapped
+     into the user page tables.  This bloats the kernel image
+     by ~20-30k.
+  d. The shadow page tables eventually grow to map all of used
+     vmalloc() space.  They can have roughly the same memory
+     consumption as the vmalloc() page tables.
+
+2. Runtime Cost
+  a. CR3 manipulation to switch between the page table copies
+     must be done at interrupt, syscall, and exception entry
+     and exit (it can be skipped when the kernel is interrupted,
+     though.)  Moves to CR3 are on the order of a hundred
+     cycles, and are required at every entry and every exit.
+  b. Task stacks must be mapped/unmapped.  We need to walk
+     and modify the shadow page tables at fork() and exit().
+  c. Global pages are disabled.  This feature of the MMU
+     allows different processes to share TLB entries mapping
+     the kernel.  Losing the feature means potentially more
+     TLB misses after a context switch.
+  d. Process Context IDentifiers (PCID) is a CPU feature that
+     allows us to skip flushing the entire TLB when switching
+     page tables.  This makes switching the page tables (at
+     context switch, or kernel entry/exit) cheaper.  But, on
+     systems with PCID support, the context switch code must flush
+     both the user and kernel entries out of the TLB, with an
+     INVPCID in addition to the CR3 write.  This INVPCID is
+     generally slower than a CR3 write, but still on the order of
+     a hundred cycles.
+  e. The shadow page tables must be populated for each new
+     process.  Even without KAISER, the shared kernel mappings
+     are created by copying top-level (PGD) entries into each
+     new process.  But, with KAISER, there are now *two* kernel
+     mappings: one in the kernel page tables that maps everything
+     and one in the user/shadow page tables mapping the "minimal"
+     kernel.  At fork(), a copy of the portion of the shadow PGD
+     that maps the minimal kernel structures is needed in
+     addition to the normal kernel PGD.
+  f. In addition to the fork()-time copying, there must also
+     be an update to the shadow PGD any time a set_pgd() is done
+     on a PGD used to map userspace.  This ensures that the kernel
+     and user/shadow copies always map the same userspace
+     memory.
+  g. On systems without PCID support, each CR3 write flushes
+     the entire TLB.  That means that each syscall, interrupt
+     or exception flushes the TLB.
+
+Possible Future Work:
+1. We can be more careful about not writing to CR3 unless its
+   value has actually changed.
+2. Compress the user/shadow-mapped data to be mapped together
+   underneath a single PGD entry.
+3. Re-enable global pages, but use them for mappings in the
+   user/shadow page tables.  This would allow the kernel to
+   take advantage of TLB entries that were established from
+   the user page tables.  This might speed up the entry/exit
+   code or userspace since it will not have to reload all of
+   its TLB entries.  However, the upside is limited when PCID
+   is in use.
+4. Allow KAISER to be enabled/disabled at runtime so folks can
+   run a single kernel image.
+
+Debugging:
+
+Bugs in KAISER cause a few different signatures of crashes
+that are worth noting here.
+
+ * Crashes in early boot, especially around CPU bringup.  Bugs
+   in the trampoline code or mappings cause these.
+ * Crashes at the first interrupt.  Caused by bugs in entry_64.S,
+   like screwing up a page table switch.  Also caused by
+   incorrectly mapping the IRQ handler entry code.
+ * Crashes at the first NMI.  The NMI code is separate from main
+   interrupt handlers and can have bugs that do not affect
+   normal interrupts.  Also caused by incorrectly mapping NMI
+   code.  NMIs that interrupt the entry code must be very
+   careful and can be the cause of crashes that show up when
+   running perf.
+ * Kernel crashes at the first exit to userspace.  entry_64.S
+   bugs, or failing to map some of the exit code.
+ * Crashes at first interrupt that interrupts userspace. The paths
+   in entry_64.S that return to userspace are sometimes separate
+   from the ones that return to the kernel.
+ * Double faults: overflowing the kernel stack because of page
+   faults upon page faults.  Caused by touching non-kaiser-mapped
+   data in the entry code, or forgetting to switch to kernel
+   CR3 before calling into C functions which are not kaiser-mapped.
+ * Failures of the selftests/x86 code.  Usually a bug in one of the
+   more obscure corners of entry_64.S
+ * Userspace segfaults early in boot, sometimes manifesting
+   as mount(8) failing to mount the rootfs.  These have
+   tended to be TLB invalidation issues.  Usually invalidating
+   the wrong PCID, or otherwise missing an invalidation.
+
diff -puN /dev/null include/linux/kaiser.h
--- /dev/null	2017-11-06 07:51:38.702108459 -0800
+++ b/include/linux/kaiser.h	2017-11-22 15:45:46.552619745 -0800
@@ -0,0 +1,29 @@
+#ifndef _INCLUDE_KAISER_H
+#define _INCLUDE_KAISER_H
+
+#ifdef CONFIG_KAISER
+#include <asm/kaiser.h>
+#else
+
+/*
+ * These stubs are used whenever CONFIG_KAISER is off, which
+ * includes architectures that support KAISER, but have it
+ * disabled.
+ */
+
+static inline void kaiser_init(void)
+{
+}
+
+static inline void kaiser_remove_mapping(unsigned long start, unsigned long size)
+{
+}
+
+static inline int kaiser_add_mapping(unsigned long addr, unsigned long size,
+				     unsigned long flags)
+{
+	return 0;
+}
+
+#endif /* !CONFIG_KAISER */
+#endif /* _INCLUDE_KAISER_H */
diff -puN init/main.c~kaiser-base init/main.c
--- a/init/main.c~kaiser-base	2017-11-22 15:45:46.542619745 -0800
+++ b/init/main.c	2017-11-22 15:45:46.552619745 -0800
@@ -76,6 +76,7 @@
 #include <linux/slab.h>
 #include <linux/perf_event.h>
 #include <linux/ptrace.h>
+#include <linux/kaiser.h>
 #include <linux/blkdev.h>
 #include <linux/elevator.h>
 #include <linux/sched_clock.h>
@@ -505,6 +506,8 @@ static void __init mm_init(void)
 	pgtable_init();
 	vmalloc_init();
 	ioremap_huge_init();
+	/* This just needs to be done before we first run userspace: */
+	kaiser_init();
 }
 
 asmlinkage __visible void __init start_kernel(void)
diff -puN kernel/fork.c~kaiser-base kernel/fork.c
--- a/kernel/fork.c~kaiser-base	2017-11-22 15:45:46.544619745 -0800
+++ b/kernel/fork.c	2017-11-22 15:45:46.553619745 -0800
@@ -70,6 +70,7 @@
 #include <linux/tsacct_kern.h>
 #include <linux/cn_proc.h>
 #include <linux/freezer.h>
+#include <linux/kaiser.h>
 #include <linux/delayacct.h>
 #include <linux/taskstats_kern.h>
 #include <linux/random.h>
_

^ permalink raw reply	[flat|nested] 131+ messages in thread

* [PATCH 05/23] x86, kaiser: unmap kernel from userspace page tables (core patch)
@ 2017-11-23  0:34   ` Dave Hansen
  0 siblings, 0 replies; 131+ messages in thread
From: Dave Hansen @ 2017-11-23  0:34 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, dave.hansen, richard.fellner, moritz.lipp,
	daniel.gruss, michael.schwarz, luto, torvalds, keescook, hughd,
	x86


From: Dave Hansen <dave.hansen@linux.intel.com>

These patches are based on work from a team at Graz University of
Technology: https://github.com/IAIK/KAISER .  This work would not have
been possible without their work as a starting point.

KAISER is a countermeasure against side channel attacks on kernel
virtual memory.  It leaves the existing page tables largely alone and
refers to them as the "kernel page tables".  It adds a "shadow" pgd for
every process which is intended for use when running userspace.  The
shadow pgd maps all the same user memory as the "kernel" copy, but
only maps a minimal set of kernel memory.

Whenever entering the kernel (syscalls, interrupts, exceptions), the
pgd is switched to the "kernel" copy.  When switching back to user
mode, the shadow pgd is used.
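
Since the two pgd copies live in a single 8k, 8k-aligned
allocation (see the pgtable_64.h changes below), moving between
them is a single bit flip.  A condensed sketch, equivalent to
the KAISER_PGTABLE_SWITCH_BIT helpers added by this patch (the
switch at entry/exit itself is a CR3 write in assembly):

	/* the user/shadow copy sits in the 4k above the kernel copy */
	static inline pgd_t *kernel_to_shadow_pgdp(pgd_t *pgdp)
	{
		return (pgd_t *)((unsigned long)pgdp | (1UL << PAGE_SHIFT));
	}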

The minimalistic kernel page tables try to map only what is needed to
enter/exit the kernel, such as the entry/exit functions themselves and
the interrupt descriptor table (IDT).

=== Page Table Poisoning ===

KAISER has two copies of the page tables: one for the kernel and
one for when running in userspace.  There is also a kernel
portion of each of the page tables: the part that *maps* the
kernel.

The kernel portion is relatively static and uses pre-populated
PGDs.  Nobody ever calls set_pgd() on the kernel portion during
normal operation.

The userspace portion of the page tables is updated frequently as
userspace pages are mapped and page table pages are allocated.
These updates of the userspace *portion* of the tables need to be
reflected into both the kernel and user/shadow copies.

The original KAISER patches did this by effectively looking at the
address that is being updated.  If it is <PAGE_OFFSET, the update
is considered to be for the userspace portion of the page tables,
and an entry must also be made in the shadow.

However, this has a wrinkle: there are a few places where low
addresses are used in supervisor (kernel) mode.  When EFI calls
are made, they use what are traditionally user addresses in
supervisor mode and trip over these checks.  The trampoline code
that is used for booting secondary CPUs has a similar issue.

Remember, there are two things that KAISER needs performed on a
userspace PGD:

 1. Populate the shadow itself.
 2. Poison the kernel PGD so it cannot be used by userspace.

Only perform these actions when dealing with a user address *and* the
PGD has _PAGE_USER set.  That way, in-kernel users of low addresses
typically used by userspace are not accidentally poisoned.
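
Condensed, the kaiser_set_shadow_pgd() helper added below
implements those two actions roughly as:

	if (pgd_userspace_access(pgd) && pgdp_maps_userspace(pgdp)) {
		/* 1. populate the user/shadow copy */
		kernel_to_shadow_pgdp(pgdp)->pgd = pgd.pgd;
		/* 2. poison the kernel copy with NX */
		pgd.pgd |= _PAGE_NX;
	}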

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Moritz Lipp <moritz.lipp@iaik.tugraz.at>
Cc: Daniel Gruss <daniel.gruss@iaik.tugraz.at>
Cc: Michael Schwarz <michael.schwarz@iaik.tugraz.at>
Cc: Richard Fellner <richard.fellner@student.tugraz.at>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Kees Cook <keescook@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: x86@kernel.org

Changes from original KAISER patch:
 * Gobs of coding style cleanups
 * The original patch tried to allocate an order-2 page, then
   8k-align the result.  That's silly since order-2 is already
   guaranteed to be 16k-aligned.  Removed that gunk and just
   allocate an order-1 page.
 * Handle (or at least detect and warn on) allocation failures
 * Use _KERNPG_TABLE, not _PAGE_TABLE when creating mappings for
   the kernel in the shadow (user) page tables.
 * BUG_ON() for !pte_none() case was totally insane: it checked
   the physical address of the 'struct page' against the physical
   address of the page being mapped.
 * Added 5-level page table support
 * Never free kaiser page tables.  We don't have the locking to
   keep them from getting referenced during the freeing process.
 * Use a totally different scheme in the entry code.  The
   original code just fell apart in horrific ways in debug faults,
   NMIs, or when iret faults.  Big thanks to Andy Lutomirski for
   reducing the number of places that needed to be patched.  He
   made the code a ton simpler.
 * Use new entry trampoline instead of mapping process stacks.

Note: The original KAISER authors signed-off on their patch.  Some of
their code has been broken out into other patches in this series, but
their SoB was only retained here.

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Richard Fellner <richard.fellner@student.tugraz.at>
Cc: Moritz Lipp <moritz.lipp@iaik.tugraz.at>
Cc: Daniel Gruss <daniel.gruss@iaik.tugraz.at>
Cc: Michael Schwarz <michael.schwarz@iaik.tugraz.at>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Kees Cook <keescook@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: x86@kernel.org

---

 b/Documentation/x86/kaiser.txt      |  162 +++++++++++++
 b/arch/x86/entry/calling.h          |    1 
 b/arch/x86/include/asm/kaiser.h     |   57 ++++
 b/arch/x86/include/asm/pgtable.h    |    5 
 b/arch/x86/include/asm/pgtable_64.h |  132 ++++++++++
 b/arch/x86/kernel/espfix_64.c       |   17 +
 b/arch/x86/kernel/head_64.S         |   14 -
 b/arch/x86/mm/Makefile              |    1 
 b/arch/x86/mm/kaiser.c              |  441 ++++++++++++++++++++++++++++++++++++
 b/arch/x86/mm/pageattr.c            |    2 
 b/arch/x86/mm/pgtable.c             |   16 +
 b/include/linux/kaiser.h            |   29 ++
 b/init/main.c                       |    3 
 b/kernel/fork.c                     |    1 
 14 files changed, 875 insertions(+), 6 deletions(-)

diff -puN arch/x86/entry/calling.h~kaiser-base arch/x86/entry/calling.h
--- a/arch/x86/entry/calling.h~kaiser-base	2017-11-22 15:45:46.527619745 -0800
+++ b/arch/x86/entry/calling.h	2017-11-22 15:45:46.547619745 -0800
@@ -2,6 +2,7 @@
 #include <linux/jump_label.h>
 #include <asm/unwind_hints.h>
 #include <asm/cpufeatures.h>
+#include <asm/page_types.h>
 
 /*
 
diff -puN /dev/null arch/x86/include/asm/kaiser.h
--- /dev/null	2017-11-06 07:51:38.702108459 -0800
+++ b/arch/x86/include/asm/kaiser.h	2017-11-22 15:45:46.548619745 -0800
@@ -0,0 +1,57 @@
+#ifndef _ASM_X86_KAISER_H
+#define _ASM_X86_KAISER_H
+/*
+ * Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * Based on work published here: https://github.com/IAIK/KAISER
+ * Modified by Dave Hansen <dave.hansen@intel.com> to actually work.
+ */
+#ifndef __ASSEMBLY__
+
+#ifdef CONFIG_KAISER
+/**
+ *  kaiser_add_mapping - map a kernel range into the user page tables
+ *  @addr: the start address of the range
+ *  @size: the size of the range
+ *  @flags: The mapping flags of the pages
+ *
+ *  Use this on all data and code that need to be mapped into both
+ *  copies of the page tables.  This includes the code that switches
+ *  to/from userspace and all of the hardware structures that are
+ *  virtually-addressed and needed in userspace like the interrupt
+ *  table.
+ */
+extern int kaiser_add_mapping(unsigned long addr, unsigned long size,
+			      unsigned long flags);
+
+/**
+ *  kaiser_remove_mapping - remove a kernel mapping from the user page tables
+ *  @start: the start address of the range
+ *  @size: the size of the range
+ */
+extern void kaiser_remove_mapping(unsigned long start, unsigned long size);
+
+/**
+ *  kaiser_init - Initialize the shadow mapping
+ *
+ *  Most parts of the shadow mapping can be mapped at boot
+ *  time.  Only per-process things like the thread stacks
+ *  or a new LDT have to be mapped at runtime.  These boot-
+ *  time mappings are permanent and never unmapped.
+ */
+extern void kaiser_init(void);
+
+#endif
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_X86_KAISER_H */
diff -puN arch/x86/include/asm/pgtable_64.h~kaiser-base arch/x86/include/asm/pgtable_64.h
--- a/arch/x86/include/asm/pgtable_64.h~kaiser-base	2017-11-22 15:45:46.529619745 -0800
+++ b/arch/x86/include/asm/pgtable_64.h	2017-11-22 15:45:46.548619745 -0800
@@ -131,9 +131,137 @@ static inline pud_t native_pudp_get_and_
 #endif
 }
 
+#ifdef CONFIG_KAISER
+/*
+ * All top-level KAISER page tables are order-1 pages (8k-aligned
+ * and 8k in size).  The kernel one is in the first 4k and the
+ * user (shadow) one is in the last 4k.  To switch between them,
+ * you just need to flip bit 12 in their addresses.
+ */
+#define KAISER_PGTABLE_SWITCH_BIT	PAGE_SHIFT
+
+/*
+ * This generates better code than the inline assembly in
+ * __set_bit().
+ */
+static inline void *ptr_set_bit(void *ptr, int bit)
+{
+	unsigned long __ptr = (unsigned long)ptr;
+
+	__ptr |= (1<<bit);
+	return (void *)__ptr;
+}
+static inline void *ptr_clear_bit(void *ptr, int bit)
+{
+	unsigned long __ptr = (unsigned long)ptr;
+
+	__ptr &= ~(1<<bit);
+	return (void *)__ptr;
+}
+
+static inline pgd_t *kernel_to_shadow_pgdp(pgd_t *pgdp)
+{
+	return ptr_set_bit(pgdp, KAISER_PGTABLE_SWITCH_BIT);
+}
+static inline pgd_t *shadow_to_kernel_pgdp(pgd_t *pgdp)
+{
+	return ptr_clear_bit(pgdp, KAISER_PGTABLE_SWITCH_BIT);
+}
+static inline p4d_t *kernel_to_shadow_p4dp(p4d_t *p4dp)
+{
+	return ptr_set_bit(p4dp, KAISER_PGTABLE_SWITCH_BIT);
+}
+static inline p4d_t *shadow_to_kernel_p4dp(p4d_t *p4dp)
+{
+	return ptr_clear_bit(p4dp, KAISER_PGTABLE_SWITCH_BIT);
+}
+#endif /* CONFIG_KAISER */
+
+/*
+ * Page table pages are page-aligned.  The lower half of the top
+ * level is used for userspace and the top half for the kernel.
+ *
+ * Returns true for parts of the PGD that map userspace and
+ * false for the parts that map the kernel.
+ */
+static inline bool pgdp_maps_userspace(void *__ptr)
+{
+	unsigned long ptr = (unsigned long)__ptr;
+
+	return (ptr & ~PAGE_MASK) < (PAGE_SIZE / 2);
+}
+
+/*
+ * Does this PGD allow access from userspace?
+ */
+static inline bool pgd_userspace_access(pgd_t pgd)
+{
+	return pgd.pgd & _PAGE_USER;
+}
+
+/*
+ * Take a PGD location (pgdp) and a pgd value that needs
+ * to be set there.  Populates the shadow and returns
+ * the resulting PGD that must be set in the kernel copy
+ * of the page tables.
+ */
+static inline pgd_t kaiser_set_shadow_pgd(pgd_t *pgdp, pgd_t pgd)
+{
+#ifdef CONFIG_KAISER
+	if (pgd_userspace_access(pgd)) {
+		if (pgdp_maps_userspace(pgdp)) {
+			/*
+			 * The user/shadow page tables get the full
+			 * PGD, accessible from userspace:
+			 */
+			kernel_to_shadow_pgdp(pgdp)->pgd = pgd.pgd;
+			/*
+			 * For the copy of the pgd that the kernel
+			 * uses, make it unusable to userspace.  This
+			 * ensures if we get out to userspace with the
+			 * wrong CR3 value, userspace will crash
+			 * instead of running.
+			 */
+			pgd.pgd |= _PAGE_NX;
+		}
+	} else if (pgd_userspace_access(*pgdp)) {
+		/*
+		 * We are clearing a _PAGE_USER PGD for which we
+		 * presumably populated the shadow.  We must now
+		 * clear the shadow PGD entry.
+		 */
+		if (pgdp_maps_userspace(pgdp)) {
+			kernel_to_shadow_pgdp(pgdp)->pgd = pgd.pgd;
+		} else {
+			/*
+			 * Attempted to clear a _PAGE_USER PGD which
+			 * is in the kernel porttion of the address
+			 * is in the kernel portion of the address
+			 * never clear them.
+			 */
+			WARN_ON_ONCE(1);
+		}
+	} else {
+		/*
+		 * _PAGE_USER was not set in either the PGD being set
+		 * or cleared.  All kernel PGDs should be
+		 * pre-populated so this should never happen after
+		 * boot.
+		 */
+	}
+#endif
+	/* return the copy of the PGD we want the kernel to use: */
+	return pgd;
+}
+
+
 static inline void native_set_p4d(p4d_t *p4dp, p4d_t p4d)
 {
+#if defined(CONFIG_KAISER) && !defined(CONFIG_X86_5LEVEL)
+	p4dp->pgd = kaiser_set_shadow_pgd(&p4dp->pgd, p4d.pgd);
+#else /* CONFIG_KAISER */
 	*p4dp = p4d;
+#endif
 }
 
 static inline void native_p4d_clear(p4d_t *p4d)
@@ -147,7 +275,11 @@ static inline void native_p4d_clear(p4d_
 
 static inline void native_set_pgd(pgd_t *pgdp, pgd_t pgd)
 {
+#ifdef CONFIG_KAISER
+	*pgdp = kaiser_set_shadow_pgd(pgdp, pgd);
+#else /* CONFIG_KAISER */
 	*pgdp = pgd;
+#endif
 }
 
 static inline void native_pgd_clear(pgd_t *pgd)
diff -puN arch/x86/include/asm/pgtable.h~kaiser-base arch/x86/include/asm/pgtable.h
--- a/arch/x86/include/asm/pgtable.h~kaiser-base	2017-11-22 15:45:46.531619745 -0800
+++ b/arch/x86/include/asm/pgtable.h	2017-11-22 15:45:46.549619745 -0800
@@ -1106,6 +1106,11 @@ static inline void pmdp_set_wrprotect(st
 static inline void clone_pgd_range(pgd_t *dst, pgd_t *src, int count)
 {
        memcpy(dst, src, count * sizeof(pgd_t));
+#ifdef CONFIG_KAISER
+	/* Clone the shadow pgd part as well */
+	memcpy(kernel_to_shadow_pgdp(dst), kernel_to_shadow_pgdp(src),
+	       count * sizeof(pgd_t));
+#endif
 }
 
 #define PTE_SHIFT ilog2(PTRS_PER_PTE)
diff -puN arch/x86/kernel/espfix_64.c~kaiser-base arch/x86/kernel/espfix_64.c
--- a/arch/x86/kernel/espfix_64.c~kaiser-base	2017-11-22 15:45:46.533619745 -0800
+++ b/arch/x86/kernel/espfix_64.c	2017-11-22 15:45:46.549619745 -0800
@@ -41,6 +41,7 @@
 #include <asm/pgalloc.h>
 #include <asm/setup.h>
 #include <asm/espfix.h>
+#include <asm/kaiser.h>
 
 /*
  * Note: we only need 6*8 = 48 bytes for the espfix stack, but round
@@ -128,6 +129,22 @@ void __init init_espfix_bsp(void)
 	pgd = &init_top_pgt[pgd_index(ESPFIX_BASE_ADDR)];
 	p4d = p4d_alloc(&init_mm, pgd, ESPFIX_BASE_ADDR);
 	p4d_populate(&init_mm, p4d, espfix_pud_page);
+	/*
+	 * Just copy the top-level PGD that is mapping the espfix
+	 * area to ensure it is mapped into the shadow user page
+	 * tables.
+	 *
+	 * For 5-level paging, the espfix pgd was populated when
+	 * kaiser_init() pre-populated all the pgd entries.  The above
+	 * p4d_alloc() would never do anything and the p4d_populate()
+	 * would be done to a p4d already mapped in the userspace pgd.
+	 */
+#ifdef CONFIG_KAISER
+	if (CONFIG_PGTABLE_LEVELS <= 4) {
+		set_pgd(kernel_to_shadow_pgdp(pgd),
+			__pgd(_KERNPG_TABLE | (p4d_pfn(*p4d) << PAGE_SHIFT)));
+	}
+#endif
 
 	/* Randomize the locations */
 	init_espfix_random();
diff -puN arch/x86/kernel/head_64.S~kaiser-base arch/x86/kernel/head_64.S
--- a/arch/x86/kernel/head_64.S~kaiser-base	2017-11-22 15:45:46.534619745 -0800
+++ b/arch/x86/kernel/head_64.S	2017-11-22 15:45:46.549619745 -0800
@@ -341,6 +341,14 @@ GLOBAL(early_recursion_flag)
 	.balign	PAGE_SIZE; \
 GLOBAL(name)
 
+#ifdef CONFIG_KAISER
+#define NEXT_PGD_PAGE(name) \
+	.balign 2 * PAGE_SIZE; \
+GLOBAL(name)
+#else
+#define NEXT_PGD_PAGE(name) NEXT_PAGE(name)
+#endif
+
 /* Automate the creation of 1 to 1 mapping pmd entries */
 #define PMDS(START, PERM, COUNT)			\
 	i = 0 ;						\
@@ -350,7 +358,7 @@ GLOBAL(name)
 	.endr
 
 	__INITDATA
-NEXT_PAGE(early_top_pgt)
+NEXT_PGD_PAGE(early_top_pgt)
 	.fill	511,8,0
 #ifdef CONFIG_X86_5LEVEL
 	.quad	level4_kernel_pgt - __START_KERNEL_map + _PAGE_TABLE_NOENC
@@ -364,7 +372,7 @@ NEXT_PAGE(early_dynamic_pgts)
 	.data
 
 #if defined(CONFIG_XEN_PV) || defined(CONFIG_XEN_PVH)
-NEXT_PAGE(init_top_pgt)
+NEXT_PGD_PAGE(init_top_pgt)
 	.quad   level3_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE_NOENC
 	.org    init_top_pgt + PGD_PAGE_OFFSET*8, 0
 	.quad   level3_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE_NOENC
@@ -381,7 +389,7 @@ NEXT_PAGE(level2_ident_pgt)
 	 */
 	PMDS(0, __PAGE_KERNEL_IDENT_LARGE_EXEC, PTRS_PER_PMD)
 #else
-NEXT_PAGE(init_top_pgt)
+NEXT_PGD_PAGE(init_top_pgt)
 	.fill	512,8,0
 #endif
 
diff -puN /dev/null arch/x86/mm/kaiser.c
--- /dev/null	2017-11-06 07:51:38.702108459 -0800
+++ b/arch/x86/mm/kaiser.c	2017-11-22 15:45:46.550619745 -0800
@@ -0,0 +1,441 @@
+/*
+ * Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * This code is based in part on work published here:
+ *
+ *	https://github.com/IAIK/KAISER
+ *
+ * The original work was written and signed off for the Linux
+ * kernel by:
+ *
+ *   Signed-off-by: Richard Fellner <richard.fellner@student.tugraz.at>
+ *   Signed-off-by: Moritz Lipp <moritz.lipp@iaik.tugraz.at>
+ *   Signed-off-by: Daniel Gruss <daniel.gruss@iaik.tugraz.at>
+ *   Signed-off-by: Michael Schwarz <michael.schwarz@iaik.tugraz.at>
+ *
+ * Major changes to the original code by: Dave Hansen <dave.hansen@intel.com>
+ */
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/bug.h>
+#include <linux/init.h>
+#include <linux/spinlock.h>
+#include <linux/mm.h>
+#include <linux/uaccess.h>
+
+#include <asm/kaiser.h>
+#include <asm/pgtable.h>
+#include <asm/pgalloc.h>
+#include <asm/tlbflush.h>
+#include <asm/desc.h>
+
+#define KAISER_WALK_ATOMIC  0x1
+
+/*
+ * At runtime, the only things we map are some things for CPU
+ * hotplug, and stacks for new processes.  No two CPUs will ever
+ * be populating the same addresses, so we only need to ensure
+ * that we protect between two CPUs trying to allocate and
+ * populate the same page table page.
+ *
+ * Only take this lock when doing a set_p[4um]d(), but it is not
+ * needed for doing a set_pte().  We assume that only the *owner*
+ * of a given allocation will be doing this for _their_
+ * allocation.
+ *
+ * This ensures that once a system has been running for a while
+ * and there have been stacks all over and these page tables
+ * are fully populated, there will be no further acquisitions of
+ * this lock.
+ */
+static DEFINE_SPINLOCK(shadow_table_allocation_lock);
+
+/*
+ * This is only for walking kernel addresses.  We use it to help
+ * recreate the "shadow" page tables which are used while we are in
+ * userspace.
+ *
+ * This can be called on any kernel memory addresses and will work
+ * with any page sizes and any types: normal linear map memory,
+ * vmalloc(), even kmap().
+ *
+ * Note: this is only used when mapping new *kernel* entries into
+ * the user/shadow page tables.  It is never used for userspace
+ * addresses.
+ *
+ * Returns -1 on error.
+ */
+static inline unsigned long get_pa_from_kernel_map(unsigned long vaddr)
+{
+	pgd_t *pgd;
+	p4d_t *p4d;
+	pud_t *pud;
+	pmd_t *pmd;
+	pte_t *pte;
+
+	/* We should only be asked to walk kernel addresses */
+	if (vaddr < PAGE_OFFSET) {
+		WARN_ON_ONCE(1);
+		return -1;
+	}
+
+	pgd = pgd_offset_k(vaddr);
+	/*
+	 * We made all the kernel PGDs present in kaiser_init().
+	 * We expect them to stay that way.
+	 */
+	if (pgd_none(*pgd)) {
+		WARN_ON_ONCE(1);
+		return -1;
+	}
+	/*
+	 * PGD entries map either 512GB or 128TB on all x86_64
+	 * configurations.  Huge pages at this level are not handled.
+	 */
+	BUILD_BUG_ON(pgd_large(*pgd) != 0);
+
+	p4d = p4d_offset(pgd, vaddr);
+	if (p4d_none(*p4d)) {
+		WARN_ON_ONCE(1);
+		return -1;
+	}
+
+	pud = pud_offset(p4d, vaddr);
+	if (pud_none(*pud)) {
+		WARN_ON_ONCE(1);
+		return -1;
+	}
+
+	if (pud_large(*pud))
+		return (pud_pfn(*pud) << PAGE_SHIFT) | (vaddr & ~PUD_PAGE_MASK);
+
+	pmd = pmd_offset(pud, vaddr);
+	if (pmd_none(*pmd)) {
+		WARN_ON_ONCE(1);
+		return -1;
+	}
+
+	if (pmd_large(*pmd))
+		return (pmd_pfn(*pmd) << PAGE_SHIFT) | (vaddr & ~PMD_PAGE_MASK);
+
+	pte = pte_offset_kernel(pmd, vaddr);
+	if (pte_none(*pte)) {
+		WARN_ON_ONCE(1);
+		return -1;
+	}
+
+	return (pte_pfn(*pte) << PAGE_SHIFT) | (vaddr & ~PAGE_MASK);
+}
+
+/*
+ * Walk the shadow copy of the page tables (optionally) trying to
+ * allocate page table pages on the way down.  Does not support
+ * large pages since the data we are mapping is (generally) not
+ * large enough or aligned to 2MB.
+ *
+ * Note: this is only used when mapping *new* kernel data into the
+ * user/shadow page tables.  It is never used for userspace data.
+ *
+ * Returns a pointer to a PTE on success, or NULL on failure.
+ */
+static pte_t *kaiser_shadow_pagetable_walk(unsigned long address,
+					   unsigned long flags)
+{
+	pte_t *pte;
+	pmd_t *pmd;
+	pud_t *pud;
+	p4d_t *p4d;
+	pgd_t *pgd = kernel_to_shadow_pgdp(pgd_offset_k(address));
+	gfp_t gfp = (GFP_KERNEL | __GFP_NOTRACK | __GFP_ZERO);
+
+	if (flags & KAISER_WALK_ATOMIC) {
+		gfp &= ~GFP_KERNEL;
+		gfp |= __GFP_HIGH | __GFP_ATOMIC;
+	}
+
+	if (address < PAGE_OFFSET) {
+		WARN_ONCE(1, "attempt to walk user address\n");
+		return NULL;
+	}
+
+	if (pgd_none(*pgd)) {
+		WARN_ONCE(1, "All shadow pgds should have been populated\n");
+		return NULL;
+	}
+	BUILD_BUG_ON(pgd_large(*pgd) != 0);
+
+	p4d = p4d_offset(pgd, address);
+	BUILD_BUG_ON(p4d_large(*p4d) != 0);
+	if (p4d_none(*p4d)) {
+		unsigned long new_pud_page = __get_free_page(gfp);
+		if (!new_pud_page)
+			return NULL;
+
+		spin_lock(&shadow_table_allocation_lock);
+		if (p4d_none(*p4d))
+			set_p4d(p4d, __p4d(_KERNPG_TABLE | __pa(new_pud_page)));
+		else
+			free_page(new_pud_page);
+		spin_unlock(&shadow_table_allocation_lock);
+	}
+
+	pud = pud_offset(p4d, address);
+	/* The shadow page tables do not use large mappings: */
+	if (pud_large(*pud)) {
+		WARN_ON(1);
+		return NULL;
+	}
+	if (pud_none(*pud)) {
+		unsigned long new_pmd_page = __get_free_page(gfp);
+		if (!new_pmd_page)
+			return NULL;
+
+		spin_lock(&shadow_table_allocation_lock);
+		if (pud_none(*pud))
+			set_pud(pud, __pud(_KERNPG_TABLE | __pa(new_pmd_page)));
+		else
+			free_page(new_pmd_page);
+		spin_unlock(&shadow_table_allocation_lock);
+	}
+
+	pmd = pmd_offset(pud, address);
+	/* The shadow page tables do not use large mappings: */
+	if (pmd_large(*pmd)) {
+		WARN_ON(1);
+		return NULL;
+	}
+	if (pmd_none(*pmd)) {
+		unsigned long new_pte_page = __get_free_page(gfp);
+		if (!new_pte_page)
+			return NULL;
+
+		spin_lock(&shadow_table_allocation_lock);
+		if (pmd_none(*pmd))
+			set_pmd(pmd, __pmd(_KERNPG_TABLE  | __pa(new_pte_page)));
+		else
+			free_page(new_pte_page);
+		spin_unlock(&shadow_table_allocation_lock);
+	}
+
+	pte = pte_offset_kernel(pmd, address);
+	if (pte_flags(*pte) & _PAGE_USER) {
+		WARN_ONCE(1, "attempt to walk to user pte\n");
+		return NULL;
+	}
+	return pte;
+}
+
+/*
+ * Given a kernel address, @__start_addr, copy that mapping into
+ * the user (shadow) page tables.  This may need to allocate page
+ * table pages.
+ */
+int kaiser_add_user_map(const void *__start_addr, unsigned long size,
+			unsigned long flags)
+{
+	pte_t *pte;
+	unsigned long start_addr = (unsigned long)__start_addr;
+	unsigned long address = start_addr & PAGE_MASK;
+	unsigned long end_addr = PAGE_ALIGN(start_addr + size);
+	unsigned long target_address;
+
+	for (; address < end_addr; address += PAGE_SIZE) {
+		target_address = get_pa_from_kernel_map(address);
+		if (target_address == -1)
+			return -EIO;
+
+		pte = kaiser_shadow_pagetable_walk(address, false);
+		/*
+		 * Errors come from either -ENOMEM for a page
+		 * table page, or something screwy that did a
+		 * WARN_ON().  Just return -ENOMEM.
+		 */
+		if (!pte)
+			return -ENOMEM;
+		if (pte_none(*pte)) {
+			set_pte(pte, __pte(flags | target_address));
+		} else {
+			pte_t tmp;
+			/*
+			 * Make a fake, temporary PTE that mimics the
+			 * one we would have created.
+			 */
+			set_pte(&tmp, __pte(flags | target_address));
+			/*
+			 * Warn if the pte that would have been
+			 * created is different from the one that
+			 * was there previously.  In other words,
+			 * we allow the same PTE value to be set,
+			 * but not changed.
+			 */
+			WARN_ON_ONCE(!pte_same(*pte, tmp));
+		}
+	}
+	return 0;
+}
+
+int kaiser_add_user_map_ptrs(const void *__start_addr,
+			     const void *__end_addr,
+			     unsigned long flags)
+{
+	return kaiser_add_user_map(__start_addr,
+				   __end_addr - __start_addr,
+				   flags);
+}
+
+/*
+ * Ensure that the top level of the (shadow) page tables are
+ * entirely populated.  This ensures that all processes that get
+ * forked have the same entries.  This way, we do not have to
+ * ever go set up new entries in older processes.
+ *
+ * Note: we never free these, so there are no updates to them
+ * after this.
+ */
+static void __init kaiser_init_all_pgds(void)
+{
+	pgd_t *pgd;
+	int i;
+
+	pgd = kernel_to_shadow_pgdp(pgd_offset_k(0UL));
+	for (i = PTRS_PER_PGD / 2; i < PTRS_PER_PGD; i++) {
+		/*
+		 * Each PGD entry moves up PGDIR_SIZE bytes through
+		 * the address space, so get the first virtual
+		 * address mapped by PGD #i:
+		 */
+		unsigned long addr = i * PGDIR_SIZE;
+#if CONFIG_PGTABLE_LEVELS > 4
+		p4d_t *p4d = p4d_alloc_one(&init_mm, addr);
+		if (!p4d) {
+			WARN_ON(1);
+			break;
+		}
+		set_pgd(pgd + i, __pgd(_KERNPG_TABLE | __pa(p4d)));
+#else /* CONFIG_PGTABLE_LEVELS <= 4 */
+		pud_t *pud = pud_alloc_one(&init_mm, addr);
+		if (!pud) {
+			WARN_ON(1);
+			break;
+		}
+		set_pgd(pgd + i, __pgd(_KERNPG_TABLE | __pa(pud)));
+#endif /* CONFIG_PGTABLE_LEVELS */
+	}
+}
+
+/*
+ * Page table allocations called by kaiser_add_user_map() can
+ * theoretically fail, but are very unlikely to fail in early boot.
+ * This would at least output a warning before crashing.
+ *
+ * Do the checking and warning in a macro to make it more readable and
+ * preserve the line numbers in the warning message, which you would
+ * not get with an inline function.
+ */
+#define kaiser_add_user_map_early(start, size, flags) do {	\
+	int __ret = kaiser_add_user_map(start, size, flags);	\
+	WARN_ON(__ret);						\
+} while (0)
+
+#define kaiser_add_user_map_ptrs_early(start, end, flags) do {		\
+	int __ret = kaiser_add_user_map_ptrs(start, end, flags);	\
+	WARN_ON(__ret);							\
+} while (0)
+
+extern char __per_cpu_user_mapped_start[], __per_cpu_user_mapped_end[];
+/*
+ * If anything in here fails, we will likely die on one of the
+ * first kernel->user transitions and init will never come up.
+ * But, we will have most of the kernel up by then and should be
+ * able to get a clean warning out of it.  If we BUG_ON() here,
+ * we run the risk of dying before we have good console output.
+ *
+ * When KAISER is enabled, we remove _PAGE_GLOBAL from all of the
+ * kernel PTE permissions.  This ensures that the TLB entries for
+ * the kernel are not available when in userspace.  However, for
+ * the pages that are available to userspace *anyway*, we might as
+ * well continue to map them _PAGE_GLOBAL and enjoy the potential
+ * performance advantages.
+ */
+void __init kaiser_init(void)
+{
+	int cpu;
+
+	kaiser_init_all_pgds();
+
+	for_each_possible_cpu(cpu) {
+		void *percpu_vaddr = __per_cpu_user_mapped_start +
+				     per_cpu_offset(cpu);
+		unsigned long percpu_sz = __per_cpu_user_mapped_end -
+					  __per_cpu_user_mapped_start;
+		kaiser_add_user_map_early(percpu_vaddr, percpu_sz,
+					  __PAGE_KERNEL | _PAGE_GLOBAL);
+	}
+
+	kaiser_add_user_map_ptrs_early(__entry_text_start, __entry_text_end,
+				       __PAGE_KERNEL_RX | _PAGE_GLOBAL);
+
+	/* the fixed map address of the idt_table */
+	kaiser_add_user_map_early((void *)idt_descr.address,
+				  sizeof(gate_desc) * NR_VECTORS,
+				  __PAGE_KERNEL_RO | _PAGE_GLOBAL);
+}
+
+int kaiser_add_mapping(unsigned long addr, unsigned long size,
+		       unsigned long flags)
+{
+	return kaiser_add_user_map((const void *)addr, size, flags);
+}
+
+void kaiser_remove_mapping(unsigned long start, unsigned long size)
+{
+	unsigned long addr;
+
+	/* The shadow page tables always use small pages: */
+	for (addr = start; addr < start + size; addr += PAGE_SIZE) {
+		/*
+		 * Do an "atomic" walk in case this got called from an atomic
+		 * context.  This should not do any allocations because we
+		 * should only be walking things that are known to be mapped.
+		 */
+		pte_t *pte = kaiser_shadow_pagetable_walk(addr, KAISER_WALK_ATOMIC);
+
+		/*
+		 * We are removing a mapping that should
+		 * exist.  WARN if it was not there:
+		 */
+		if (!pte) {
+			WARN_ON_ONCE(1);
+			continue;
+		}
+
+		pte_clear(&init_mm, addr, pte);
+	}
+	/*
+	 * This ensures that the TLB entries used to map this data are
+	 * no longer usable on *this* CPU.  We theoretically want to
+	 * flush the entries on all CPUs here, but that's too
+	 * expensive right now: this is called to unmap process
+	 * stacks in the exit() path.
+	 *
+	 * This can change if we get to the point where this is not
+	 * in a remotely hot path, like only called via write_ldt().
+	 *
+	 * Note: we could probably also just invalidate the individual
+	 * addresses to take care of *this* PCID and then do a
+	 * tlb_flush_shared_nonglobals() to ensure that all other
+	 * PCIDs get flushed before being used again.
+	 */
+	__native_flush_tlb_global();
+}
diff -puN arch/x86/mm/Makefile~kaiser-base arch/x86/mm/Makefile
--- a/arch/x86/mm/Makefile~kaiser-base	2017-11-22 15:45:46.536619745 -0800
+++ b/arch/x86/mm/Makefile	2017-11-22 15:45:46.550619745 -0800
@@ -46,6 +46,7 @@ obj-$(CONFIG_NUMA_EMU)		+= numa_emulatio
 obj-$(CONFIG_X86_INTEL_MPX)	+= mpx.o
 obj-$(CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS) += pkeys.o
 obj-$(CONFIG_RANDOMIZE_MEMORY) += kaslr.o
+obj-$(CONFIG_KAISER)		+= kaiser.o
 
 obj-$(CONFIG_AMD_MEM_ENCRYPT)	+= mem_encrypt.o
 obj-$(CONFIG_AMD_MEM_ENCRYPT)	+= mem_encrypt_boot.o
diff -puN arch/x86/mm/pageattr.c~kaiser-base arch/x86/mm/pageattr.c
--- a/arch/x86/mm/pageattr.c~kaiser-base	2017-11-22 15:45:46.538619745 -0800
+++ b/arch/x86/mm/pageattr.c	2017-11-22 15:45:46.551619745 -0800
@@ -859,7 +859,7 @@ static void unmap_pmd_range(pud_t *pud,
 			pud_clear(pud);
 }
 
-static void unmap_pud_range(p4d_t *p4d, unsigned long start, unsigned long end)
+void unmap_pud_range(p4d_t *p4d, unsigned long start, unsigned long end)
 {
 	pud_t *pud = pud_offset(p4d, start);
 
diff -puN arch/x86/mm/pgtable.c~kaiser-base arch/x86/mm/pgtable.c
--- a/arch/x86/mm/pgtable.c~kaiser-base	2017-11-22 15:45:46.540619745 -0800
+++ b/arch/x86/mm/pgtable.c	2017-11-22 15:45:46.551619745 -0800
@@ -355,14 +355,26 @@ static inline void _pgd_free(pgd_t *pgd)
 		kmem_cache_free(pgd_cache, pgd);
 }
 #else
+
+#ifdef CONFIG_KAISER
+/*
+ * Instead of one pgd, we acquire two pgds.  Being order-1, the
+ * allocation is both 8k in size and 8k-aligned.  That lets us just
+ * flip bit 12 in a pointer to swap between the two 4k halves.
+ */
+#define PGD_ALLOCATION_ORDER 1
+#else
+#define PGD_ALLOCATION_ORDER 0
+#endif
+
 static inline pgd_t *_pgd_alloc(void)
 {
-	return (pgd_t *)__get_free_page(PGALLOC_GFP);
+	return (pgd_t *)__get_free_pages(PGALLOC_GFP, PGD_ALLOCATION_ORDER);
 }
 
 static inline void _pgd_free(pgd_t *pgd)
 {
-	free_page((unsigned long)pgd);
+	free_pages((unsigned long)pgd, PGD_ALLOCATION_ORDER);
 }
 #endif /* CONFIG_X86_PAE */
 
diff -puN /dev/null Documentation/x86/kaiser.txt
--- /dev/null	2017-11-06 07:51:38.702108459 -0800
+++ b/Documentation/x86/kaiser.txt	2017-11-22 15:45:46.552619745 -0800
@@ -0,0 +1,162 @@
+Overview
+========
+
+KAISER is a countermeasure against attacks on kernel address
+information.  There are at least three existing, published,
+approaches using the shared user/kernel mapping and hardware features
+to defeat KASLR.  One approach referenced in the paper locates the
+kernel by observing differences in page fault timing between
+present-but-inaccessible kernel pages and non-present pages.
+
+When the kernel is entered via syscalls, interrupts or exceptions,
+page tables are switched to the full "kernel" copy.  When the
+system switches back to user mode, the user/shadow copy is used.
+
+The minimalistic kernel portion of the user page tables tries to
+map only what is needed to enter/exit the kernel, such as the
+entry/exit functions themselves and the interrupt descriptor
+table (IDT).  There are a few unnecessary things that get mapped,
+such as the first C function when entering an interrupt (see
+comments in kaiser.c).
+
+This helps to ensure that side-channel attacks that leverage the
+paging structures do not function when KAISER is enabled.  It can be
+enabled by setting CONFIG_KAISER=y.
+
+Page Table Management
+=====================
+
+When KAISER is enabled, the kernel manages two sets of page
+tables.  The first copy is very similar to what would be present
+for a kernel without KAISER.  This includes a complete mapping of
+userspace that the kernel can use for things like copy_to_user().
+
+The second (shadow) copy is used when running userspace and mirrors
+the mapping of userspace present in the kernel copy.  It maps only
+the kernel data needed to enter and exit the kernel.
+
+The shadow is populated by the kaiser_add_*() functions.  Only
+kernel data which has been explicitly mapped will appear in the
+shadow copy.  These calls are rare at runtime.
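+
+  For example, making a range of kernel data visible to the
+  entry/exit code looks roughly like this (a sketch; 'vaddr',
+  'size' and the flags depend on the caller):
+
+	kaiser_add_mapping((unsigned long)vaddr, size, __PAGE_KERNEL);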
+
+For a new userspace mapping, the kernel makes the entries in its
+page tables like normal.  The only difference is when the kernel
+makes entries in the top (PGD) level.  In addition to setting the
+entry in the main kernel PGD, a copy of the entry is made in the
+shadow PGD.
+
+For user space mappings the kernel creates an entry in the kernel
+PGD and the same entry in the shadow PGD, so the underlying page
+table to which the PGD entry points is shared down to the PTE
+level.  This leaves a single, shared set of userspace page tables
+to manage.  One PTE to lock, one set of accessed bits, dirty
+bits, etc...
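+
+  Because only the top (PGD) level is duplicated, a sketch of the
+  invariant for a user address 'addr' is:
+
+	pgd_page_vaddr(*pgd_offset(mm, addr)) ==
+		pgd_page_vaddr(*kernel_to_shadow_pgdp(pgd_offset(mm, addr)))
+
+  That is, both copies point at the same lower-level tables, so
+  the accessed bits, dirty bits and locking are shared below the
+  PGD.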
+
+Overhead
+========
+
+Protection against side-channel attacks is important.  But,
+this protection comes at a cost:
+
+1. Increased Memory Use
+  a. Each process now needs an order-1 PGD instead of order-0.
+     (Consumes an extra 4k per process).
+  b. The pre-allocated second-level (p4d or pud) kernel page
+     table pages cost ~1MB of additional memory at boot.  This
+     is not totally wasted because some of these pages would
+     have been needed eventually for normal kernel page tables
+     and things in the vmalloc() area like vmemmap[].
+  c. Statically-allocated structures and entry/exit text must
+     be padded out to 4k (or 8k for PGDs) so they can be mapped
+     into the user page tables.  This bloats the kernel image
+     by ~20-30k.
+  d. The shadow page tables eventually grow to map all of the used
+     vmalloc() space.  They can have roughly the same memory
+     consumption as the vmalloc() page tables.
+
+2. Runtime Cost
+  a. CR3 manipulation to switch between the page table copies
+     must be done at interrupt, syscall, and exception entry
+     and exit (it can be skipped when the kernel is interrupted,
+     though.)  Moves to CR3 are on the order of a hundred
+     cycles, and are required at every entry and every exit
+     (see the worked example after this list).
+  b. Task stacks must be mapped/unmapped.  We need to walk
+     and modify the shadow page tables at fork() and exit().
+  c. Global pages are disabled.  This feature of the MMU
+     allows different processes to share TLB entries mapping
+     the kernel.  Losing the feature means potentially more
+     TLB misses after a context switch.
+  d. Process Context IDentifiers (PCID) is a CPU feature that
+     allows us to skip flushing the entire TLB when switching
+     page tables.  This makes switching the page tables (at
+     context switch, or kernel entry/exit) cheaper.  But, on
+     systems with PCID support, the context switch code must flush
+     both the user and kernel entries out of the TLB, with an
+     INVPCID in addition to the CR3 write.  This INVPCID is
+     generally slower than a CR3 write, but still on the order of
+     a hundred cycles.
+  e. The shadow page tables must be populated for each new
+     process.  Even without KAISER, the shared kernel mappings
+     are created by copying top-level (PGD) entries into each
+     new process.  But, with KAISER, there are now *two* kernel
+     mappings: one in the kernel page tables that maps everything
+     and one in the user/shadow page tables mapping the "minimal"
+     kernel.  At fork(), a copy of the portion of the shadow PGD
+     that maps the minimal kernel structures is needed in
+     addition to the normal kernel PGD.
+  f. In addition to the fork()-time copying, there must also
+     be an update to the shadow PGD any time a set_pgd() is done
+     on a PGD used to map userspace.  This ensures that the kernel
+     and user/shadow copies always map the same userspace
+     memory.
+  g. On systems without PCID support, each CR3 write flushes
+     the entire TLB.  That means that each syscall, interrupt
+     or exception flushes the TLB.
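+
+As a rough worked example using the figures above: with one CR3
+write at kernel entry and one at exit, each around a hundred
+cycles, a syscall round trip picks up on the order of 200 extra
+cycles, before any TLB-refill costs from items (c) and (g).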
+
+Possible Future Work:
+1. We can be more careful about not actually writing to CR3
+   unless its value actually changes.
+2. Compress the user/shadow-mapped data to be mapped together
+   underneath a single PGD entry.
+3. Re-enable global pages, but use them for mappings in the
+   user/shadow page tables.  This would allow the kernel to
+   take advantage of TLB entries that were established from
+   the user page tables.  This might speed up the entry/exit
+   code or userspace since it will not have to reload all of
+   its TLB entries.  However, its upside is limited when PCID
+   is in use.
+4. Allow KAISER to be enabled/disabled at runtime so folks can
+   run a single kernel image.
+
+Debugging:
+
+Bugs in KAISER cause a few different signatures of crashes
+that are worth noting here.
+
+ * Crashes in early boot, especially around CPU bringup.  Bugs
+   in the trampoline code or mappings cause these.
+ * Crashes at the first interrupt.  Caused by bugs in entry_64.S,
+   like screwing up a page table switch.  Also caused by
+   incorrectly mapping the IRQ handler entry code.
+ * Crashes at the first NMI.  The NMI code is separate from main
+   interrupt handlers and can have bugs that do not affect
+   normal interrupts.  Also caused by incorrectly mapping NMI
+   code.  NMIs that interrupt the entry code must be very
+   careful and can be the cause of crashes that show up when
+   running perf.
+ * Kernel crashes at the first exit to userspace.  Caused by
+   entry_64.S bugs, or by failing to map some of the exit code.
+ * Crashes at the first interrupt that interrupts userspace.
+   The paths in entry_64.S that return to userspace are
+   sometimes separate from the ones that return to the kernel.
+ * Double faults: overflowing the kernel stack because of page
+   faults upon page faults.  Caused by touching non-kaiser-mapped
+   data in the entry code, or forgetting to switch to kernel
+   CR3 before calling into C functions which are not kaiser-mapped.
+ * Failures of the selftests/x86 code.  Usually a bug in one of
+   the more obscure corners of entry_64.S.
+ * Userspace segfaults early in boot, sometimes manifesting
+   as mount(8) failing to mount the rootfs.  These have
+   tended to be TLB invalidation issues.  Usually invalidating
+   the wrong PCID, or otherwise missing an invalidation.
+
diff -puN /dev/null include/linux/kaiser.h
--- /dev/null	2017-11-06 07:51:38.702108459 -0800
+++ b/include/linux/kaiser.h	2017-11-22 15:45:46.552619745 -0800
@@ -0,0 +1,29 @@
+#ifndef _INCLUDE_KAISER_H
+#define _INCLUDE_KAISER_H
+
+#ifdef CONFIG_KAISER
+#include <asm/kaiser.h>
+#else
+
+/*
+ * These stubs are used whenever CONFIG_KAISER is off, which
+ * includes architectures that support KAISER, but have it
+ * disabled.
+ */
+
+static inline void kaiser_init(void)
+{
+}
+
+static inline void kaiser_remove_mapping(unsigned long start, unsigned long size)
+{
+}
+
+static inline int kaiser_add_mapping(unsigned long addr, unsigned long size,
+				     unsigned long flags)
+{
+	return 0;
+}
+
+#endif /* !CONFIG_KAISER */
+#endif /* _INCLUDE_KAISER_H */
diff -puN init/main.c~kaiser-base init/main.c
--- a/init/main.c~kaiser-base	2017-11-22 15:45:46.542619745 -0800
+++ b/init/main.c	2017-11-22 15:45:46.552619745 -0800
@@ -76,6 +76,7 @@
 #include <linux/slab.h>
 #include <linux/perf_event.h>
 #include <linux/ptrace.h>
+#include <linux/kaiser.h>
 #include <linux/blkdev.h>
 #include <linux/elevator.h>
 #include <linux/sched_clock.h>
@@ -505,6 +506,8 @@ static void __init mm_init(void)
 	pgtable_init();
 	vmalloc_init();
 	ioremap_huge_init();
+	/* This just needs to be done before we first run userspace: */
+	kaiser_init();
 }
 
 asmlinkage __visible void __init start_kernel(void)
diff -puN kernel/fork.c~kaiser-base kernel/fork.c
--- a/kernel/fork.c~kaiser-base	2017-11-22 15:45:46.544619745 -0800
+++ b/kernel/fork.c	2017-11-22 15:45:46.553619745 -0800
@@ -70,6 +70,7 @@
 #include <linux/tsacct_kern.h>
 #include <linux/cn_proc.h>
 #include <linux/freezer.h>
+#include <linux/kaiser.h>
 #include <linux/delayacct.h>
 #include <linux/taskstats_kern.h>
 #include <linux/random.h>
_


^ permalink raw reply	[flat|nested] 131+ messages in thread

* [PATCH 06/23] x86, kaiser: allow NX poison to be set in p4d/pgd
  2017-11-23  0:34 ` Dave Hansen
@ 2017-11-23  0:34   ` Dave Hansen
  -1 siblings, 0 replies; 131+ messages in thread
From: Dave Hansen @ 2017-11-23  0:34 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, dave.hansen, moritz.lipp, daniel.gruss,
	michael.schwarz, richard.fellner, luto, torvalds, keescook,
	hughd, x86


From: Dave Hansen <dave.hansen@linux.intel.com>

The user portion of the kernel page tables uses the NX bit to
poison them for userspace.  But, that trips the p4d/pgd_bad()
checks.  Teach those checks to tolerate the NX poison.

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Moritz Lipp <moritz.lipp@iaik.tugraz.at>
Cc: Daniel Gruss <daniel.gruss@iaik.tugraz.at>
Cc: Michael Schwarz <michael.schwarz@iaik.tugraz.at>
Cc: Richard Fellner <richard.fellner@student.tugraz.at>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Kees Cook <keescook@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: x86@kernel.org
---

 b/arch/x86/include/asm/pgtable.h |   14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff -puN arch/x86/include/asm/pgtable.h~kaiser-p4d-allow-nx arch/x86/include/asm/pgtable.h
--- a/arch/x86/include/asm/pgtable.h~kaiser-p4d-allow-nx	2017-11-22 15:45:47.382619743 -0800
+++ b/arch/x86/include/asm/pgtable.h	2017-11-22 15:45:47.386619743 -0800
@@ -846,7 +846,12 @@ static inline pud_t *pud_offset(p4d_t *p
 
 static inline int p4d_bad(p4d_t p4d)
 {
-	return (p4d_flags(p4d) & ~(_KERNPG_TABLE | _PAGE_USER)) != 0;
+	unsigned long ignore_flags = _KERNPG_TABLE | _PAGE_USER;
+
+	if (IS_ENABLED(CONFIG_KAISER))
+		ignore_flags |= _PAGE_NX;
+
+	return (p4d_flags(p4d) & ~ignore_flags) != 0;
 }
 #endif  /* CONFIG_PGTABLE_LEVELS > 3 */
 
@@ -880,7 +885,12 @@ static inline p4d_t *p4d_offset(pgd_t *p
 
 static inline int pgd_bad(pgd_t pgd)
 {
-	return (pgd_flags(pgd) & ~_PAGE_USER) != _KERNPG_TABLE;
+	unsigned long ignore_flags = _PAGE_USER;
+
+	if (IS_ENABLED(CONFIG_KAISER))
+		ignore_flags |= _PAGE_NX;
+
+	return (pgd_flags(pgd) & ~ignore_flags) != _KERNPG_TABLE;
 }
 
 static inline int pgd_none(pgd_t pgd)
_

^ permalink raw reply	[flat|nested] 131+ messages in thread


* [PATCH 07/23] x86, kaiser: make sure static PGDs are 8k in size
  2017-11-23  0:34 ` Dave Hansen
@ 2017-11-23  0:34   ` Dave Hansen
  -1 siblings, 0 replies; 131+ messages in thread
From: Dave Hansen @ 2017-11-23  0:34 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, dave.hansen, moritz.lipp, daniel.gruss,
	michael.schwarz, richard.fellner, luto, torvalds, keescook,
	hughd, x86


From: Dave Hansen <dave.hansen@linux.intel.com>

A few PGDs come out of the kernel binary instead of being
allocated dynamically.  Before this patch, they are all
8k-aligned, but they must also be 8k in *size*.

The original KAISER patch did not do this.  It probably just
lucked out that it did not trample over data after the last PGD:
an 8k-aligned but 4k-sized PGD leaves the following 4k to
whatever the linker places next, and the user/shadow half would
silently overwrite it.

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Moritz Lipp <moritz.lipp@iaik.tugraz.at>
Cc: Daniel Gruss <daniel.gruss@iaik.tugraz.at>
Cc: Michael Schwarz <michael.schwarz@iaik.tugraz.at>
Cc: Richard Fellner <richard.fellner@student.tugraz.at>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Kees Cook <keescook@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: x86@kernel.org
---

 b/arch/x86/kernel/head_64.S |   16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff -puN arch/x86/kernel/head_64.S~kaiser-head_S-pgds-need-8k-too arch/x86/kernel/head_64.S
--- a/arch/x86/kernel/head_64.S~kaiser-head_S-pgds-need-8k-too	2017-11-22 15:45:47.913619742 -0800
+++ b/arch/x86/kernel/head_64.S	2017-11-22 15:45:47.916619742 -0800
@@ -342,11 +342,24 @@ GLOBAL(early_recursion_flag)
 GLOBAL(name)
 
 #ifdef CONFIG_KAISER
+/*
+ * Each PGD needs to be 8k long and 8k aligned.  We do not
+ * ever go out to userspace with these, so we do not
+ * strictly *need* the second page, but this allows us to
+ * have a single set_pgd() implementation that does not
+ * need to worry about whether it has 4k or 8k to work
+ * with.
+ *
+ * This ensures PGDs are 8k long:
+ */
+#define KAISER_USER_PGD_FILL	512
+/* This ensures they are 8k-aligned: */
 #define NEXT_PGD_PAGE(name) \
 	.balign 2 * PAGE_SIZE; \
 GLOBAL(name)
 #else
 #define NEXT_PGD_PAGE(name) NEXT_PAGE(name)
+#define KAISER_USER_PGD_FILL	0
 #endif
 
 /* Automate the creation of 1 to 1 mapping pmd entries */
@@ -365,6 +378,7 @@ NEXT_PGD_PAGE(early_top_pgt)
 #else
 	.quad	level3_kernel_pgt - __START_KERNEL_map + _PAGE_TABLE_NOENC
 #endif
+	.fill	KAISER_USER_PGD_FILL,8,0
 
 NEXT_PAGE(early_dynamic_pgts)
 	.fill	512*EARLY_DYNAMIC_PAGE_TABLES,8,0
@@ -379,6 +393,7 @@ NEXT_PGD_PAGE(init_top_pgt)
 	.org    init_top_pgt + PGD_START_KERNEL*8, 0
 	/* (2^48-(2*1024*1024*1024))/(2^39) = 511 */
 	.quad   level3_kernel_pgt - __START_KERNEL_map + _PAGE_TABLE_NOENC
+	.fill	KAISER_USER_PGD_FILL,8,0
 
 NEXT_PAGE(level3_ident_pgt)
 	.quad	level2_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE_NOENC
@@ -391,6 +406,7 @@ NEXT_PAGE(level2_ident_pgt)
 #else
 NEXT_PGD_PAGE(init_top_pgt)
 	.fill	512,8,0
+	.fill	KAISER_USER_PGD_FILL,8,0
 #endif
 
 #ifdef CONFIG_X86_5LEVEL
_

^ permalink raw reply	[flat|nested] 131+ messages in thread


* [PATCH 08/23] x86, kaiser: map cpu entry area
  2017-11-23  0:34 ` Dave Hansen
@ 2017-11-23  0:34   ` Dave Hansen
  -1 siblings, 0 replies; 131+ messages in thread
From: Dave Hansen @ 2017-11-23  0:34 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, dave.hansen, moritz.lipp, daniel.gruss,
	michael.schwarz, richard.fellner, luto, torvalds, keescook,
	hughd, x86


From: Dave Hansen <dave.hansen@linux.intel.com>

There is now a special 'struct cpu_entry' area that contains all
of the data needed to enter the kernel.  It's mapped in the fixmap
area and contains:

 * The GDT (hardware segment descriptor)
 * The TSS (thread information structure that points the hardware
   to the various stacks, and contains the entry stack).
 * The entry trampoline code itself
 * The exception stacks (aka IRQ stacks)

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Moritz Lipp <moritz.lipp@iaik.tugraz.at>
Cc: Daniel Gruss <daniel.gruss@iaik.tugraz.at>
Cc: Michael Schwarz <michael.schwarz@iaik.tugraz.at>
Cc: Richard Fellner <richard.fellner@student.tugraz.at>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Kees Cook <keescook@google.com> 
Cc: Hugh Dickins <hughd@google.com>
Cc: x86@kernel.org
---

 b/arch/x86/include/asm/kaiser.h |    6 ++++++
 b/arch/x86/kernel/cpu/common.c  |    4 ++++
 b/arch/x86/mm/kaiser.c          |   31 +++++++++++++++++++++++++++++++
 b/include/linux/kaiser.h        |    3 +++
 4 files changed, 44 insertions(+)

diff -puN arch/x86/include/asm/kaiser.h~kaiser-user-map-cpu-entry-structure arch/x86/include/asm/kaiser.h
--- a/arch/x86/include/asm/kaiser.h~kaiser-user-map-cpu-entry-structure	2017-11-22 15:45:48.447619740 -0800
+++ b/arch/x86/include/asm/kaiser.h	2017-11-22 15:45:48.456619740 -0800
@@ -34,6 +34,12 @@ extern int kaiser_add_mapping(unsigned l
 			      unsigned long flags);
 
 /**
+ *  kaiser_add_mapping_cpu_entry - map the cpu entry area
+ *  @cpu: the CPU for which the entry area is being mapped
+ */
+extern void kaiser_add_mapping_cpu_entry(int cpu);
+
+/**
  *  kaiser_remove_mapping - remove a kernel mapping from the userpage tables
  *  @addr: the start address of the range
  *  @size: the size of the range
diff -puN arch/x86/kernel/cpu/common.c~kaiser-user-map-cpu-entry-structure arch/x86/kernel/cpu/common.c
--- a/arch/x86/kernel/cpu/common.c~kaiser-user-map-cpu-entry-structure	2017-11-22 15:45:48.449619740 -0800
+++ b/arch/x86/kernel/cpu/common.c	2017-11-22 15:45:48.457619740 -0800
@@ -4,6 +4,7 @@
 #include <linux/kernel.h>
 #include <linux/export.h>
 #include <linux/percpu.h>
+#include <linux/kaiser.h>
 #include <linux/string.h>
 #include <linux/ctype.h>
 #include <linux/delay.h>
@@ -587,6 +588,9 @@ static inline void setup_cpu_entry_area(
 	__set_fixmap(get_cpu_entry_area_index(cpu, entry_trampoline),
 		     __pa_symbol(_entry_trampoline), PAGE_KERNEL_RX);
 #endif
+	/* CPU 0's mapping is done in kaiser_init() */
+	if (cpu)
+		kaiser_add_mapping_cpu_entry(cpu);
 }
 
 /* Load the original GDT from the per-cpu structure */
diff -puN arch/x86/mm/kaiser.c~kaiser-user-map-cpu-entry-structure arch/x86/mm/kaiser.c
--- a/arch/x86/mm/kaiser.c~kaiser-user-map-cpu-entry-structure	2017-11-22 15:45:48.451619740 -0800
+++ b/arch/x86/mm/kaiser.c	2017-11-22 15:45:48.457619740 -0800
@@ -353,6 +353,26 @@ static void __init kaiser_init_all_pgds(
 	WARN_ON(__ret);							\
 } while (0)
 
+void kaiser_add_mapping_cpu_entry(int cpu)
+{
+	kaiser_add_user_map_early(get_cpu_gdt_ro(cpu), PAGE_SIZE,
+				  __PAGE_KERNEL_RO);
+
+	/* includes the entry stack */
+	kaiser_add_user_map_early(&get_cpu_entry_area(cpu)->tss,
+				  sizeof(get_cpu_entry_area(cpu)->tss),
+				  __PAGE_KERNEL | _PAGE_GLOBAL);
+
+	/* Entry code, so needs to be EXEC */
+	kaiser_add_user_map_early(&get_cpu_entry_area(cpu)->entry_trampoline,
+				  sizeof(get_cpu_entry_area(cpu)->entry_trampoline),
+				  __PAGE_KERNEL_EXEC | _PAGE_GLOBAL);
+
+	kaiser_add_user_map_early(&get_cpu_entry_area(cpu)->exception_stacks,
+				 sizeof(get_cpu_entry_area(cpu)->exception_stacks),
+				 __PAGE_KERNEL | _PAGE_GLOBAL);
+}
+
 extern char __per_cpu_user_mapped_start[], __per_cpu_user_mapped_end[];
 /*
  * If anything in here fails, we will likely die on one of the
@@ -390,6 +410,17 @@ void __init kaiser_init(void)
 	kaiser_add_user_map_early((void *)idt_descr.address,
 				  sizeof(gate_desc) * NR_VECTORS,
 				  __PAGE_KERNEL_RO | _PAGE_GLOBAL);
+
+	/*
+	 * We delay CPU 0's mappings because these structures are
+	 * created before the page allocator is up.  Deferring it
+	 * until here lets us use the plain page allocator
+	 * unconditionally in the page table code above.
+	 *
+	 * This is OK because kaiser_init() is called long before
+	 * we ever run userspace and need the KAISER mappings.
+	 */
+	kaiser_add_mapping_cpu_entry(0);
 }
 
 int kaiser_add_mapping(unsigned long addr, unsigned long size,
diff -puN include/linux/kaiser.h~kaiser-user-map-cpu-entry-structure include/linux/kaiser.h
--- a/include/linux/kaiser.h~kaiser-user-map-cpu-entry-structure	2017-11-22 15:45:48.453619740 -0800
+++ b/include/linux/kaiser.h	2017-11-22 15:45:48.458619740 -0800
@@ -25,5 +25,8 @@ static inline int kaiser_add_mapping(uns
 	return 0;
 }
 
+static inline void kaiser_add_mapping_cpu_entry(int cpu)
+{
+}
 #endif /* !CONFIG_KAISER */
 #endif /* _INCLUDE_KAISER_H */
_

^ permalink raw reply	[flat|nested] 131+ messages in thread

* [PATCH 09/23] x86, kaiser: map dynamically-allocated LDTs
  2017-11-23  0:34 ` Dave Hansen
@ 2017-11-23  0:34   ` Dave Hansen
  0 siblings, 0 replies; 131+ messages in thread
From: Dave Hansen @ 2017-11-23  0:34 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, dave.hansen, moritz.lipp, daniel.gruss,
	michael.schwarz, richard.fellner, luto, torvalds, keescook,
	hughd, x86


From: Dave Hansen <dave.hansen@linux.intel.com>

Normally, a process has a NULL mm->context.ldt.  But, there is a
syscall for a process to set a new one.  If a process does that,
the LDT must be mapped into the user page tables, just like the
default copy.

The original KAISER patch missed this case.
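
For reference, the syscall in question is modify_ldt(2).  A minimal
userspace sketch (hypothetical helper, error handling omitted) that
installs one LDT entry and thereby sends the kernel down this
allocation path:

	#include <asm/ldt.h>		/* struct user_desc */
	#include <string.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	static int install_ldt_entry(void)
	{
		struct user_desc desc;

		memset(&desc, 0, sizeof(desc));
		desc.entry_number   = 0;
		desc.base_addr      = 0;
		desc.limit          = 0xfffff;
		desc.seg_32bit      = 1;
		desc.limit_in_pages = 1;

		/* func == 1 means "write an LDT entry" */
		return syscall(SYS_modify_ldt, 1, &desc, sizeof(desc));
	}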

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Moritz Lipp <moritz.lipp@iaik.tugraz.at>
Cc: Daniel Gruss <daniel.gruss@iaik.tugraz.at>
Cc: Michael Schwarz <michael.schwarz@iaik.tugraz.at>
Cc: Richard Fellner <richard.fellner@student.tugraz.at>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Kees Cook <keescook@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: x86@kernel.org
---

 b/arch/x86/kernel/ldt.c |   25 ++++++++++++++++++++-----
 1 file changed, 20 insertions(+), 5 deletions(-)

diff -puN arch/x86/kernel/ldt.c~kaiser-user-map-new-ldts arch/x86/kernel/ldt.c
--- a/arch/x86/kernel/ldt.c~kaiser-user-map-new-ldts	2017-11-22 15:45:49.059619739 -0800
+++ b/arch/x86/kernel/ldt.c	2017-11-22 15:45:49.062619739 -0800
@@ -11,6 +11,7 @@
 #include <linux/gfp.h>
 #include <linux/sched.h>
 #include <linux/string.h>
+#include <linux/kaiser.h>
 #include <linux/mm.h>
 #include <linux/smp.h>
 #include <linux/syscalls.h>
@@ -57,11 +58,21 @@ static void flush_ldt(void *__mm)
 	refresh_ldt_segments();
 }
 
+static void __free_ldt_struct(struct ldt_struct *ldt)
+{
+	if (ldt->nr_entries * LDT_ENTRY_SIZE > PAGE_SIZE)
+		vfree_atomic(ldt->entries);
+	else
+		free_page((unsigned long)ldt->entries);
+	kfree(ldt);
+}
+
 /* The caller must call finalize_ldt_struct on the result. LDT starts zeroed. */
 static struct ldt_struct *alloc_ldt_struct(unsigned int num_entries)
 {
 	struct ldt_struct *new_ldt;
 	unsigned int alloc_size;
+	int ret;
 
 	if (num_entries > LDT_ENTRIES)
 		return NULL;
@@ -89,6 +100,12 @@ static struct ldt_struct *alloc_ldt_stru
 		return NULL;
 	}
 
+	ret = kaiser_add_mapping((unsigned long)new_ldt->entries, alloc_size,
+				 __PAGE_KERNEL | _PAGE_GLOBAL);
+	if (ret) {
+		__free_ldt_struct(new_ldt);
+		return NULL;
+	}
 	new_ldt->nr_entries = num_entries;
 	return new_ldt;
 }
@@ -115,12 +132,10 @@ static void free_ldt_struct(struct ldt_s
 	if (likely(!ldt))
 		return;
 
+	kaiser_remove_mapping((unsigned long)ldt->entries,
+			      ldt->nr_entries * LDT_ENTRY_SIZE);
 	paravirt_free_ldt(ldt->entries, ldt->nr_entries);
-	if (ldt->nr_entries * LDT_ENTRY_SIZE > PAGE_SIZE)
-		vfree_atomic(ldt->entries);
-	else
-		free_page((unsigned long)ldt->entries);
-	kfree(ldt);
+	__free_ldt_struct(ldt);
 }
 
 /*
_

^ permalink raw reply	[flat|nested] 131+ messages in thread

* [PATCH 10/23] x86, kaiser: map espfix structures
  2017-11-23  0:34 ` Dave Hansen
@ 2017-11-23  0:34   ` Dave Hansen
  0 siblings, 0 replies; 131+ messages in thread
From: Dave Hansen @ 2017-11-23  0:34 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, dave.hansen, moritz.lipp, daniel.gruss,
	michael.schwarz, richard.fellner, luto, torvalds, keescook,
	hughd, x86


From: Dave Hansen <dave.hansen@linux.intel.com>

There is some rather arcane code to help when an IRET returns
to 16-bit segments.  It is referred to as the "espfix" code.
This consists of a few per-cpu variables:

	espfix_stack: tells us where the stack is allocated
	  	      (the bottom)
	espfix_waddr: tells us to where %rsp may be pointed
		      (the top)

These are in addition to the stack itself.  All three things must
be mapped for the espfix code to function.

Note: the espfix code runs with a kernel GSBASE, but user
(shadow) page tables.  A switch to the kernel page tables could
be performed instead of mapping these structures, but mapping
them is simpler and less likely to break the assembly.  To switch
over to the kernel copy, additional temporary storage would be
required which is in short supply in this context.

The original KAISER patch missed this case.
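
For concreteness, the pairing reads like this from C (an
illustrative sketch, not code from this patch; variable names are
hypothetical):

	/* where the stack is allocated (the bottom): */
	unsigned long bottom = this_cpu_read(espfix_stack);
	/* where %rsp may be pointed (the top): */
	unsigned long top    = this_cpu_read(espfix_waddr);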

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Moritz Lipp <moritz.lipp@iaik.tugraz.at>
Cc: Daniel Gruss <daniel.gruss@iaik.tugraz.at>
Cc: Michael Schwarz <michael.schwarz@iaik.tugraz.at>
Cc: Richard Fellner <richard.fellner@student.tugraz.at>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Kees Cook <keescook@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: x86@kernel.org
---

 b/arch/x86/kernel/espfix_64.c |   12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff -puN arch/x86/kernel/espfix_64.c~kaiser-user-map-espfix arch/x86/kernel/espfix_64.c
--- a/arch/x86/kernel/espfix_64.c~kaiser-user-map-espfix	2017-11-22 15:45:49.592619738 -0800
+++ b/arch/x86/kernel/espfix_64.c	2017-11-22 15:45:49.596619738 -0800
@@ -33,6 +33,7 @@
 
 #include <linux/init.h>
 #include <linux/init_task.h>
+#include <linux/kaiser.h>
 #include <linux/kernel.h>
 #include <linux/percpu.h>
 #include <linux/gfp.h>
@@ -41,7 +42,6 @@
 #include <asm/pgalloc.h>
 #include <asm/setup.h>
 #include <asm/espfix.h>
-#include <asm/kaiser.h>
 
 /*
  * Note: we only need 6*8 = 48 bytes for the espfix stack, but round
@@ -61,8 +61,8 @@
 #define PGALLOC_GFP (GFP_KERNEL | __GFP_NOTRACK | __GFP_ZERO)
 
 /* This contains the *bottom* address of the espfix stack */
-DEFINE_PER_CPU_READ_MOSTLY(unsigned long, espfix_stack);
-DEFINE_PER_CPU_READ_MOSTLY(unsigned long, espfix_waddr);
+DEFINE_PER_CPU_USER_MAPPED(unsigned long, espfix_stack);
+DEFINE_PER_CPU_USER_MAPPED(unsigned long, espfix_waddr);
 
 /* Initialization mutex - should this be a spinlock? */
 static DEFINE_MUTEX(espfix_init_mutex);
@@ -225,4 +225,10 @@ done:
 	per_cpu(espfix_stack, cpu) = addr;
 	per_cpu(espfix_waddr, cpu) = (unsigned long)stack_page
 				      + (addr & ~PAGE_MASK);
+	/*
+	 * _PAGE_GLOBAL is not really required.  This is not a hot
+	 * path, but we do it here for consistency.
+	 */
+	kaiser_add_mapping((unsigned long)stack_page, PAGE_SIZE,
+			__PAGE_KERNEL | _PAGE_GLOBAL);
 }
_

^ permalink raw reply	[flat|nested] 131+ messages in thread

* [PATCH 11/23] x86, kaiser: map entry stack variables
  2017-11-23  0:34 ` Dave Hansen
@ 2017-11-23  0:34   ` Dave Hansen
  0 siblings, 0 replies; 131+ messages in thread
From: Dave Hansen @ 2017-11-23  0:34 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, dave.hansen, moritz.lipp, daniel.gruss,
	michael.schwarz, richard.fellner, luto, torvalds, keescook,
	hughd, x86


From: Dave Hansen <dave.hansen@linux.intel.com>

There are times where the kernel is entered but there is not a
safe stack, like at SYSCALL entry.  To obtain a safe stack, the
per-cpu variables 'rsp_scratch' and 'cpu_current_top_of_stack'
are used to save the old %rsp value and to find where the kernel
stack should start.

You cannot directly manipulate the CR3 register.  You can only
'MOV' to it from another register, which means a register must be
clobbered in order to do any CR3 manipulation.  User-mapping
these variables allows us to obtain a safe stack and use it for
temporary storage *before* CR3 is switched.
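
To illustrate the MOV-only constraint, this is roughly what the
kernel's CR3 accessors boil down to (a simplified sketch with
hypothetical names, not code from this patch):

	static inline unsigned long demo_read_cr3(void)
	{
		unsigned long val;

		/* the only way in or out of CR3 is a MOV via a register */
		asm volatile("mov %%cr3, %0" : "=r" (val));
		return val;
	}

	static inline void demo_write_cr3(unsigned long val)
	{
		asm volatile("mov %0, %%cr3" : : "r" (val) : "memory");
	}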

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Moritz Lipp <moritz.lipp@iaik.tugraz.at>
Cc: Daniel Gruss <daniel.gruss@iaik.tugraz.at>
Cc: Michael Schwarz <michael.schwarz@iaik.tugraz.at>
Cc: Richard Fellner <richard.fellner@student.tugraz.at>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Kees Cook <keescook@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: x86@kernel.org
---

 b/arch/x86/kernel/cpu/common.c |    2 +-
 b/arch/x86/kernel/process_64.c |    2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff -puN arch/x86/kernel/cpu/common.c~kaiser-user-map-stack-helper-vars arch/x86/kernel/cpu/common.c
--- a/arch/x86/kernel/cpu/common.c~kaiser-user-map-stack-helper-vars	2017-11-22 15:45:50.128619736 -0800
+++ b/arch/x86/kernel/cpu/common.c	2017-11-22 15:45:50.134619736 -0800
@@ -1524,7 +1524,7 @@ EXPORT_PER_CPU_SYMBOL(__preempt_count);
  * the top of the kernel stack.  Use an extra percpu variable to track the
  * top of the kernel stack directly.
  */
-DEFINE_PER_CPU(unsigned long, cpu_current_top_of_stack) =
+DEFINE_PER_CPU_USER_MAPPED(unsigned long, cpu_current_top_of_stack) =
 	(unsigned long)&init_thread_union + THREAD_SIZE;
 EXPORT_PER_CPU_SYMBOL(cpu_current_top_of_stack);
 
diff -puN arch/x86/kernel/process_64.c~kaiser-user-map-stack-helper-vars arch/x86/kernel/process_64.c
--- a/arch/x86/kernel/process_64.c~kaiser-user-map-stack-helper-vars	2017-11-22 15:45:50.130619736 -0800
+++ b/arch/x86/kernel/process_64.c	2017-11-22 15:45:50.134619736 -0800
@@ -59,7 +59,7 @@
 #include <asm/unistd_32_ia32.h>
 #endif
 
-__visible DEFINE_PER_CPU(unsigned long, rsp_scratch);
+__visible DEFINE_PER_CPU_USER_MAPPED(unsigned long, rsp_scratch);
 
 /* Prints also some state that isn't saved in the pt_regs */
 void __show_regs(struct pt_regs *regs, int all)
_

^ permalink raw reply	[flat|nested] 131+ messages in thread

* [PATCH 12/23] x86, kaiser: map virtually-addressed performance monitoring buffers
  2017-11-23  0:34 ` Dave Hansen
@ 2017-11-23  0:35   ` Dave Hansen
  0 siblings, 0 replies; 131+ messages in thread
From: Dave Hansen @ 2017-11-23  0:35 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, dave.hansen, hughd, moritz.lipp, daniel.gruss,
	michael.schwarz, richard.fellner, luto, torvalds, keescook, x86


From: Hugh Dickins <hughd@google.com>
[Dave] Add explicit _PAGE_GLOBAL
[Dave] remove KAISER #ifdefs by moving kmalloc() to plain page allocator
[Dave] reword the commit message a bit to be consistent with other patches

The BTS and PEBS buffers both have their virtual addresses
programmed into the hardware.  This means that any access to them
is performed via the page tables.  The times that the hardware
accesses these are entirely dependent on how the performance
monitoring hardware events are set up.  In other words, there is
no way for the kernel to tell when the hardware might access
these buffers.

To avoid perf crashes, place 'debug_store' in the user-mapped
per-cpu area instead of dynamically allocating.  Also use the
page allocator plus kaiser_add_mapping() to keep the BTS and PEBS
buffers user-mapped (that is, present in the user mapping, though
visible only to kernel and hardware).  The PEBS fixup buffer does
not need this treatment.

The need for a user-mapped struct debug_store showed up even before
any deliberate perf testing: a couple of kernel paging oopses on
Westmere implicated the debug_store offset of the per-cpu area.
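
The resulting allocate/free pairing, in isolation (a sketch of the
call pattern using the dsalloc()/dsfree() helpers added below):

	void *buffer = dsalloc(x86_pmu.pebs_buffer_size, GFP_KERNEL, node);
	if (unlikely(!buffer))
		return -ENOMEM;
	/* ... hardware may read or write the buffer at any time ... */
	dsfree(buffer, x86_pmu.pebs_buffer_size);	/* same size: unmap, then free */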

Signed-off-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Moritz Lipp <moritz.lipp@iaik.tugraz.at>
Cc: Daniel Gruss <daniel.gruss@iaik.tugraz.at>
Cc: Michael Schwarz <michael.schwarz@iaik.tugraz.at>
Cc: Richard Fellner <richard.fellner@student.tugraz.at>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Kees Cook <keescook@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: x86@kernel.org
---

 b/arch/x86/events/intel/ds.c |   49 ++++++++++++++++++++++++++++++++-----------
 1 file changed, 37 insertions(+), 12 deletions(-)

diff -puN arch/x86/events/intel/ds.c~kaiser-user-map-virtually-addressed-performance-monitoring-buffers arch/x86/events/intel/ds.c
--- a/arch/x86/events/intel/ds.c~kaiser-user-map-virtually-addressed-performance-monitoring-buffers	2017-11-22 15:45:50.691619735 -0800
+++ b/arch/x86/events/intel/ds.c	2017-11-22 15:45:50.695619735 -0800
@@ -3,11 +3,15 @@
 #include <linux/types.h>
 #include <linux/slab.h>
 
+#include <asm/kaiser.h>
 #include <asm/perf_event.h>
 #include <asm/insn.h>
 
 #include "../perf_event.h"
 
+static
+DEFINE_PER_CPU_SHARED_ALIGNED_USER_MAPPED(struct debug_store, cpu_debug_store);
+
 /* The size of a BTS record in bytes: */
 #define BTS_RECORD_SIZE		24
 
@@ -279,6 +283,31 @@ void fini_debug_store_on_cpu(int cpu)
 
 static DEFINE_PER_CPU(void *, insn_buffer);
 
+static void *dsalloc(size_t size, gfp_t flags, int node)
+{
+	unsigned int order = get_order(size);
+	struct page *page;
+	unsigned long addr;
+
+	page = __alloc_pages_node(node, flags | __GFP_ZERO, order);
+	if (!page)
+		return NULL;
+	addr = (unsigned long)page_address(page);
+	if (kaiser_add_mapping(addr, size, __PAGE_KERNEL | _PAGE_GLOBAL) < 0) {
+		__free_pages(page, order);
+		addr = 0;
+	}
+	return (void *)addr;
+}
+
+static void dsfree(const void *buffer, size_t size)
+{
+	if (!buffer)
+		return;
+	kaiser_remove_mapping((unsigned long)buffer, size);
+	free_pages((unsigned long)buffer, get_order(size));
+}
+
 static int alloc_pebs_buffer(int cpu)
 {
 	struct debug_store *ds = per_cpu(cpu_hw_events, cpu).ds;
@@ -289,7 +318,7 @@ static int alloc_pebs_buffer(int cpu)
 	if (!x86_pmu.pebs)
 		return 0;
 
-	buffer = kzalloc_node(x86_pmu.pebs_buffer_size, GFP_KERNEL, node);
+	buffer = dsalloc(x86_pmu.pebs_buffer_size, GFP_KERNEL, node);
 	if (unlikely(!buffer))
 		return -ENOMEM;
 
@@ -300,7 +329,7 @@ static int alloc_pebs_buffer(int cpu)
 	if (x86_pmu.intel_cap.pebs_format < 2) {
 		ibuffer = kzalloc_node(PEBS_FIXUP_SIZE, GFP_KERNEL, node);
 		if (!ibuffer) {
-			kfree(buffer);
+			dsfree(buffer, x86_pmu.pebs_buffer_size);
 			return -ENOMEM;
 		}
 		per_cpu(insn_buffer, cpu) = ibuffer;
@@ -326,7 +355,8 @@ static void release_pebs_buffer(int cpu)
 	kfree(per_cpu(insn_buffer, cpu));
 	per_cpu(insn_buffer, cpu) = NULL;
 
-	kfree((void *)(unsigned long)ds->pebs_buffer_base);
+	dsfree((void *)(unsigned long)ds->pebs_buffer_base,
+			x86_pmu.pebs_buffer_size);
 	ds->pebs_buffer_base = 0;
 }
 
@@ -340,7 +370,7 @@ static int alloc_bts_buffer(int cpu)
 	if (!x86_pmu.bts)
 		return 0;
 
-	buffer = kzalloc_node(BTS_BUFFER_SIZE, GFP_KERNEL | __GFP_NOWARN, node);
+	buffer = dsalloc(BTS_BUFFER_SIZE, GFP_KERNEL | __GFP_NOWARN, node);
 	if (unlikely(!buffer)) {
 		WARN_ONCE(1, "%s: BTS buffer allocation failure\n", __func__);
 		return -ENOMEM;
@@ -366,19 +396,15 @@ static void release_bts_buffer(int cpu)
 	if (!ds || !x86_pmu.bts)
 		return;
 
-	kfree((void *)(unsigned long)ds->bts_buffer_base);
+	dsfree((void *)(unsigned long)ds->bts_buffer_base, BTS_BUFFER_SIZE);
 	ds->bts_buffer_base = 0;
 }
 
 static int alloc_ds_buffer(int cpu)
 {
-	int node = cpu_to_node(cpu);
-	struct debug_store *ds;
-
-	ds = kzalloc_node(sizeof(*ds), GFP_KERNEL, node);
-	if (unlikely(!ds))
-		return -ENOMEM;
+	struct debug_store *ds = per_cpu_ptr(&cpu_debug_store, cpu);
 
+	memset(ds, 0, sizeof(*ds));
 	per_cpu(cpu_hw_events, cpu).ds = ds;
 
 	return 0;
@@ -392,7 +418,6 @@ static void release_ds_buffer(int cpu)
 		return;
 
 	per_cpu(cpu_hw_events, cpu).ds = NULL;
-	kfree(ds);
 }
 
 void release_ds_buffers(void)
_

^ permalink raw reply	[flat|nested] 131+ messages in thread

* [PATCH 13/23] x86, mm: Move CR3 construction functions
  2017-11-23  0:34 ` Dave Hansen
@ 2017-11-23  0:35   ` Dave Hansen
  0 siblings, 0 replies; 131+ messages in thread
From: Dave Hansen @ 2017-11-23  0:35 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, dave.hansen, moritz.lipp, daniel.gruss,
	michael.schwarz, richard.fellner, luto, torvalds, keescook,
	hughd, x86


From: Dave Hansen <dave.hansen@linux.intel.com>

For flushing the TLB, the ASID which has been programmed into the
hardware must be known.  That differs from what is in 'cpu_tlbstate'.

Add functions to transform the 'cpu_tlbstate' values into the one
programmed into the hardware (CR3).

It's not easy to include mmu_context.h into tlbflush.h, so just move
the CR3 building over to tlbflush.h.
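
For reference, the CR3 value these helpers build when PCID is
enabled looks roughly like this (an illustrative sketch; the exact
physical-address width depends on the CPU):

	/*
	 *   bit  63     : CR3_NOFLUSH (do not flush this PCID's TLB entries)
	 *   bits 51..12 : physical address of the pgd
	 *   bits 11..0  : PCID, programmed as ASID + 1
	 *
	 * e.g. a pgd at physical 0x10000000 with ASID 2:
	 *   build_cr3()         -> 0x0000000010000003
	 *   build_cr3_noflush() -> 0x8000000010000003
	 */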

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Moritz Lipp <moritz.lipp@iaik.tugraz.at>
Cc: Daniel Gruss <daniel.gruss@iaik.tugraz.at>
Cc: Michael Schwarz <michael.schwarz@iaik.tugraz.at>
Cc: Richard Fellner <richard.fellner@student.tugraz.at>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Kees Cook <keescook@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: x86@kernel.org
---

 b/arch/x86/include/asm/mmu_context.h |   29 +----------------------------
 b/arch/x86/include/asm/tlbflush.h    |   27 +++++++++++++++++++++++++++
 b/arch/x86/mm/tlb.c                  |    8 ++++----
 3 files changed, 32 insertions(+), 32 deletions(-)

diff -puN arch/x86/include/asm/mmu_context.h~kaiser-pcid-pre-build-func-move arch/x86/include/asm/mmu_context.h
--- a/arch/x86/include/asm/mmu_context.h~kaiser-pcid-pre-build-func-move	2017-11-22 15:45:51.231619733 -0800
+++ b/arch/x86/include/asm/mmu_context.h	2017-11-22 15:45:51.238619733 -0800
@@ -282,33 +282,6 @@ static inline bool arch_vma_access_permi
 }
 
 /*
- * If PCID is on, ASID-aware code paths put the ASID+1 into the PCID
- * bits.  This serves two purposes.  It prevents a nasty situation in
- * which PCID-unaware code saves CR3, loads some other value (with PCID
- * == 0), and then restores CR3, thus corrupting the TLB for ASID 0 if
- * the saved ASID was nonzero.  It also means that any bugs involving
- * loading a PCID-enabled CR3 with CR4.PCIDE off will trigger
- * deterministically.
- */
-
-static inline unsigned long build_cr3(struct mm_struct *mm, u16 asid)
-{
-	if (static_cpu_has(X86_FEATURE_PCID)) {
-		VM_WARN_ON_ONCE(asid > 4094);
-		return __sme_pa(mm->pgd) | (asid + 1);
-	} else {
-		VM_WARN_ON_ONCE(asid != 0);
-		return __sme_pa(mm->pgd);
-	}
-}
-
-static inline unsigned long build_cr3_noflush(struct mm_struct *mm, u16 asid)
-{
-	VM_WARN_ON_ONCE(asid > 4094);
-	return __sme_pa(mm->pgd) | (asid + 1) | CR3_NOFLUSH;
-}
-
-/*
  * This can be used from process context to figure out what the value of
  * CR3 is without needing to do a (slow) __read_cr3().
  *
@@ -317,7 +290,7 @@ static inline unsigned long build_cr3_no
  */
 static inline unsigned long __get_current_cr3_fast(void)
 {
-	unsigned long cr3 = build_cr3(this_cpu_read(cpu_tlbstate.loaded_mm),
+	unsigned long cr3 = build_cr3(this_cpu_read(cpu_tlbstate.loaded_mm)->pgd,
 		this_cpu_read(cpu_tlbstate.loaded_mm_asid));
 
 	/* For now, be very restrictive about when this can be called. */
diff -puN arch/x86/include/asm/tlbflush.h~kaiser-pcid-pre-build-func-move arch/x86/include/asm/tlbflush.h
--- a/arch/x86/include/asm/tlbflush.h~kaiser-pcid-pre-build-func-move	2017-11-22 15:45:51.233619733 -0800
+++ b/arch/x86/include/asm/tlbflush.h	2017-11-22 15:45:51.238619733 -0800
@@ -75,6 +75,33 @@ static inline u64 inc_mm_tlb_gen(struct
 	return new_tlb_gen;
 }
 
+/*
+ * If PCID is on, ASID-aware code paths put the ASID+1 into the PCID
+ * bits.  This serves two purposes.  It prevents a nasty situation in
+ * which PCID-unaware code saves CR3, loads some other value (with PCID
+ * == 0), and then restores CR3, thus corrupting the TLB for ASID 0 if
+ * the saved ASID was nonzero.  It also means that any bugs involving
+ * loading a PCID-enabled CR3 with CR4.PCIDE off will trigger
+ * deterministically.
+ */
+struct pgd_t;
+static inline unsigned long build_cr3(pgd_t *pgd, u16 asid)
+{
+	if (static_cpu_has(X86_FEATURE_PCID)) {
+		VM_WARN_ON_ONCE(asid > 4094);
+		return __sme_pa(pgd) | (asid + 1);
+	} else {
+		VM_WARN_ON_ONCE(asid != 0);
+		return __sme_pa(pgd);
+	}
+}
+
+static inline unsigned long build_cr3_noflush(pgd_t *pgd, u16 asid)
+{
+	VM_WARN_ON_ONCE(asid > 4094);
+	return __sme_pa(pgd) | (asid + 1) | CR3_NOFLUSH;
+}
+
 #ifdef CONFIG_PARAVIRT
 #include <asm/paravirt.h>
 #else
diff -puN arch/x86/mm/tlb.c~kaiser-pcid-pre-build-func-move arch/x86/mm/tlb.c
--- a/arch/x86/mm/tlb.c~kaiser-pcid-pre-build-func-move	2017-11-22 15:45:51.235619733 -0800
+++ b/arch/x86/mm/tlb.c	2017-11-22 15:45:51.239619733 -0800
@@ -128,7 +128,7 @@ void switch_mm_irqs_off(struct mm_struct
 	 * isn't free.
 	 */
 #ifdef CONFIG_DEBUG_VM
-	if (WARN_ON_ONCE(__read_cr3() != build_cr3(real_prev, prev_asid))) {
+	if (WARN_ON_ONCE(__read_cr3() != build_cr3(real_prev->pgd, prev_asid))) {
 		/*
 		 * If we were to BUG here, we'd be very likely to kill
 		 * the system so hard that we don't see the call trace.
@@ -195,7 +195,7 @@ void switch_mm_irqs_off(struct mm_struct
 		if (need_flush) {
 			this_cpu_write(cpu_tlbstate.ctxs[new_asid].ctx_id, next->context.ctx_id);
 			this_cpu_write(cpu_tlbstate.ctxs[new_asid].tlb_gen, next_tlb_gen);
-			write_cr3(build_cr3(next, new_asid));
+			write_cr3(build_cr3(next->pgd, new_asid));
 
 			/*
 			 * NB: This gets called via leave_mm() in the idle path
@@ -208,7 +208,7 @@ void switch_mm_irqs_off(struct mm_struct
 			trace_tlb_flush_rcuidle(TLB_FLUSH_ON_TASK_SWITCH, TLB_FLUSH_ALL);
 		} else {
 			/* The new ASID is already up to date. */
-			write_cr3(build_cr3_noflush(next, new_asid));
+			write_cr3(build_cr3_noflush(next->pgd, new_asid));
 
 			/* See above wrt _rcuidle. */
 			trace_tlb_flush_rcuidle(TLB_FLUSH_ON_TASK_SWITCH, 0);
@@ -288,7 +288,7 @@ void initialize_tlbstate_and_flush(void)
 		!(cr4_read_shadow() & X86_CR4_PCIDE));
 
 	/* Force ASID 0 and force a TLB flush. */
-	write_cr3(build_cr3(mm, 0));
+	write_cr3(build_cr3(mm->pgd, 0));
 
 	/* Reinitialize tlbstate. */
 	this_cpu_write(cpu_tlbstate.loaded_mm_asid, 0);
_

^ permalink raw reply	[flat|nested] 131+ messages in thread

* [PATCH 14/23] x86, mm: remove hard-coded ASID limit checks
  2017-11-23  0:34 ` Dave Hansen
@ 2017-11-23  0:35   ` Dave Hansen
  -1 siblings, 0 replies; 131+ messages in thread
From: Dave Hansen @ 2017-11-23  0:35 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, dave.hansen, moritz.lipp, daniel.gruss,
	michael.schwarz, richard.fellner, luto, torvalds, keescook,
	hughd, x86


From: Dave Hansen <dave.hansen@linux.intel.com>

First, it's nice to remove the magic numbers.

Second, KAISER is going to consume half of the available ASID
space.  The space is currently unused, but add a comment to spell
out this new restriction.
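
Worked through with the constants in this patch (illustrative
arithmetic, not taken from the diff):

	/*
	 * CR3_AVAIL_ASID_BITS = 12 - 0        = 12
	 * MAX_ASID_AVAILABLE  = (1 << 12) - 2 = 4094  (the old magic number)
	 *
	 * Once KAISER consumes its bit (KAISER_CONSUMED_ASID_BITS == 1):
	 * MAX_ASID_AVAILABLE  = (1 << 11) - 2 = 2046
	 */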

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Moritz Lipp <moritz.lipp@iaik.tugraz.at>
Cc: Daniel Gruss <daniel.gruss@iaik.tugraz.at>
Cc: Michael Schwarz <michael.schwarz@iaik.tugraz.at>
Cc: Richard Fellner <richard.fellner@student.tugraz.at>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Kees Cook <keescook@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: x86@kernel.org
---

 b/arch/x86/include/asm/tlbflush.h |   17 +++++++++++++++--
 1 file changed, 15 insertions(+), 2 deletions(-)

diff -puN arch/x86/include/asm/tlbflush.h~kaiser-pcid-pre-build-asids-macros arch/x86/include/asm/tlbflush.h
--- a/arch/x86/include/asm/tlbflush.h~kaiser-pcid-pre-build-asids-macros	2017-11-22 15:45:51.814619732 -0800
+++ b/arch/x86/include/asm/tlbflush.h	2017-11-22 15:45:51.818619732 -0800
@@ -75,6 +75,19 @@ static inline u64 inc_mm_tlb_gen(struct
 	return new_tlb_gen;
 }
 
+/* There are 12 bits of space for ASIDS in CR3 */
+#define CR3_HW_ASID_BITS 12
+/* When enabled, KAISER consumes a single bit for user/kernel switches */
+#define KAISER_CONSUMED_ASID_BITS 0
+
+#define CR3_AVAIL_ASID_BITS (CR3_HW_ASID_BITS - KAISER_CONSUMED_ASID_BITS)
+/*
+ * ASIDs are zero-based: 0->MAX_AVAIL_ASID are valid.  -1 below
+ * to account for them being zero-based.  Another -1 is because ASID 0
+ * is reserved for use by non-PCID-aware users.
+ */
+#define MAX_ASID_AVAILABLE ((1<<CR3_AVAIL_ASID_BITS) - 2)
+
 /*
  * If PCID is on, ASID-aware code paths put the ASID+1 into the PCID
  * bits.  This serves two purposes.  It prevents a nasty situation in
@@ -88,7 +101,7 @@ struct pgd_t;
 static inline unsigned long build_cr3(pgd_t *pgd, u16 asid)
 {
 	if (static_cpu_has(X86_FEATURE_PCID)) {
-		VM_WARN_ON_ONCE(asid > 4094);
+		VM_WARN_ON_ONCE(asid > MAX_ASID_AVAILABLE);
 		return __sme_pa(pgd) | (asid + 1);
 	} else {
 		VM_WARN_ON_ONCE(asid != 0);
@@ -98,7 +111,7 @@ static inline unsigned long build_cr3(pg
 
 static inline unsigned long build_cr3_noflush(pgd_t *pgd, u16 asid)
 {
-	VM_WARN_ON_ONCE(asid > 4094);
+	VM_WARN_ON_ONCE(asid > MAX_ASID_AVAILABLE);
 	return __sme_pa(pgd) | (asid + 1) | CR3_NOFLUSH;
 }
 
_

^ permalink raw reply	[flat|nested] 131+ messages in thread

* [PATCH 15/23] x86, mm: put mmu-to-h/w ASID translation in one place
  2017-11-23  0:34 ` Dave Hansen
@ 2017-11-23  0:35   ` Dave Hansen
  -1 siblings, 0 replies; 131+ messages in thread
From: Dave Hansen @ 2017-11-23  0:35 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, dave.hansen, moritz.lipp, daniel.gruss,
	michael.schwarz, richard.fellner, luto, torvalds, keescook,
	hughd, x86


From: Dave Hansen <dave.hansen@linux.intel.com>

There are effectively two ASID types:
1. The one stored in the mmu_context that goes from 0->5
2. The one programmed into the hardware that goes from 1->6

This consolidates the places that convert between the two
(by doing +1) into a single function, which gives us a nice
place to comment.  KAISER will also need to, given an ASID, know
which hardware ASID to flush for the userspace mapping.
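
Spelled out, the translation that kern_asid() centralizes
(illustrative):

	/*
	 *   mmu_context ASID:  0  1  2  3  4  5
	 *   hardware PCID:     1  2  3  4  5  6
	 */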

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Moritz Lipp <moritz.lipp@iaik.tugraz.at>
Cc: Daniel Gruss <daniel.gruss@iaik.tugraz.at>
Cc: Michael Schwarz <michael.schwarz@iaik.tugraz.at>
Cc: Richard Fellner <richard.fellner@student.tugraz.at>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Kees Cook <keescook@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: x86@kernel.org
---

 b/arch/x86/include/asm/tlbflush.h |   30 ++++++++++++++++++------------
 1 file changed, 18 insertions(+), 12 deletions(-)

diff -puN arch/x86/include/asm/tlbflush.h~kaiser-pcid-pre-build-kern arch/x86/include/asm/tlbflush.h
--- a/arch/x86/include/asm/tlbflush.h~kaiser-pcid-pre-build-kern	2017-11-22 15:45:52.346619731 -0800
+++ b/arch/x86/include/asm/tlbflush.h	2017-11-22 15:45:52.350619731 -0800
@@ -88,21 +88,26 @@ static inline u64 inc_mm_tlb_gen(struct
  */
 #define MAX_ASID_AVAILABLE ((1<<CR3_AVAIL_ASID_BITS) - 2)
 
-/*
- * If PCID is on, ASID-aware code paths put the ASID+1 into the PCID
- * bits.  This serves two purposes.  It prevents a nasty situation in
- * which PCID-unaware code saves CR3, loads some other value (with PCID
- * == 0), and then restores CR3, thus corrupting the TLB for ASID 0 if
- * the saved ASID was nonzero.  It also means that any bugs involving
- * loading a PCID-enabled CR3 with CR4.PCIDE off will trigger
- * deterministically.
- */
+static inline u16 kern_asid(u16 asid)
+{
+	VM_WARN_ON_ONCE(asid > MAX_ASID_AVAILABLE);
+	/*
+	 * If PCID is on, ASID-aware code paths put the ASID+1 into the PCID
+	 * bits.  This serves two purposes.  It prevents a nasty situation in
+	 * which PCID-unaware code saves CR3, loads some other value (with PCID
+	 * == 0), and then restores CR3, thus corrupting the TLB for ASID 0 if
+	 * the saved ASID was nonzero.  It also means that any bugs involving
+	 * loading a PCID-enabled CR3 with CR4.PCIDE off will trigger
+	 * deterministically.
+	 */
+	return asid + 1;
+}
+
 struct pgd_t;
 static inline unsigned long build_cr3(pgd_t *pgd, u16 asid)
 {
 	if (static_cpu_has(X86_FEATURE_PCID)) {
-		VM_WARN_ON_ONCE(asid > MAX_ASID_AVAILABLE);
-		return __sme_pa(pgd) | (asid + 1);
+		return __sme_pa(pgd) | kern_asid(asid);
 	} else {
 		VM_WARN_ON_ONCE(asid != 0);
 		return __sme_pa(pgd);
@@ -112,7 +117,8 @@ static inline unsigned long build_cr3(pg
 static inline unsigned long build_cr3_noflush(pgd_t *pgd, u16 asid)
 {
 	VM_WARN_ON_ONCE(asid > MAX_ASID_AVAILABLE);
-	return __sme_pa(pgd) | (asid + 1) | CR3_NOFLUSH;
+	VM_WARN_ON_ONCE(!this_cpu_has(X86_FEATURE_PCID));
+	return __sme_pa(pgd) | kern_asid(asid) | CR3_NOFLUSH;
 }
 
 #ifdef CONFIG_PARAVIRT
_

^ permalink raw reply	[flat|nested] 131+ messages in thread

* [PATCH 16/23] x86, pcid, kaiser: allow flushing for future ASID switches
  2017-11-23  0:34 ` Dave Hansen
@ 2017-11-23  0:35   ` Dave Hansen
  -1 siblings, 0 replies; 131+ messages in thread
From: Dave Hansen @ 2017-11-23  0:35 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, dave.hansen, moritz.lipp, daniel.gruss,
	michael.schwarz, richard.fellner, luto, torvalds, keescook,
	hughd, x86


From: Dave Hansen <dave.hansen@linux.intel.com>

If the page tables are changed in such a way that an invalidation
of all contexts (aka. PCIDs / ASIDs) is required, they can be
actively invalidated by one of:

 1. INVPCID for each PCID (works for single pages too).
 2. Load CR3 with each PCID without the NOFLUSH bit set
 3. Load CR3 with the NOFLUSH bit set for each and do
    INVLPG for each address.

But, none of these are really feasible since there are ~6 ASIDs (12 with
KAISER) in use at the time that invalidation is required.  Instead of
actively invalidating them, invalidate the *current* context and
also mark the cpu_tlbstate _quickly_ to indicate that a future
invalidation is required.

At the next context-switch, look for this indicator
('all_other_ctxs_invalid' being set) and invalidate all of the
cpu_tlbstate.ctxs[] entries.

This ensures that any future context switches will do a full flush
of the TLB, picking up the previous changes.
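
A rough model of the scheme (illustrative only; the per-cpu plumbing
is elided and the names mirror the patch):

	#define TLB_NR_DYN_ASIDS 6
	struct ctx { unsigned long long ctx_id; };
	static struct ctx ctxs[TLB_NR_DYN_ASIDS];
	static int loaded_mm_asid;
	static int all_other_ctxs_invalid;

	/* The flush path: one cheap write instead of ~6-12 flushes. */
	static void tlb_flush_shared_nonglobals(void)
	{
		all_other_ctxs_invalid = 1;
	}

	/* The next context switch pays the cost: */
	static void clear_non_loaded_ctxs(void)
	{
		int asid;

		for (asid = 0; asid < TLB_NR_DYN_ASIDS; asid++)
			if (asid != loaded_mm_asid)
				ctxs[asid].ctx_id = 0;	/* force flush on reuse */
		all_other_ctxs_invalid = 0;
	}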

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Moritz Lipp <moritz.lipp@iaik.tugraz.at>
Cc: Daniel Gruss <daniel.gruss@iaik.tugraz.at>
Cc: Michael Schwarz <michael.schwarz@iaik.tugraz.at>
Cc: Richard Fellner <richard.fellner@student.tugraz.at>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Kees Cook <keescook@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: x86@kernel.org
---

 b/arch/x86/include/asm/tlbflush.h |   47 +++++++++++++++++++++++++++++---------
 b/arch/x86/mm/tlb.c               |   35 ++++++++++++++++++++++++++++
 2 files changed, 72 insertions(+), 10 deletions(-)

diff -puN arch/x86/include/asm/tlbflush.h~kaiser-pcid-pre-clear-pcid-cache arch/x86/include/asm/tlbflush.h
--- a/arch/x86/include/asm/tlbflush.h~kaiser-pcid-pre-clear-pcid-cache	2017-11-22 15:45:52.879619729 -0800
+++ b/arch/x86/include/asm/tlbflush.h	2017-11-22 15:45:52.884619729 -0800
@@ -185,6 +185,17 @@ struct tlb_state {
 	bool is_lazy;
 
 	/*
+	 * If set, we changed the page tables in such a way that we
+	 * needed an invalidation of all contexts (aka. PCIDs / ASIDs).
+	 * This tells us to go invalidate all the non-loaded ctxs[]
+	 * on the next context switch.
+	 *
+	 * The current ctx was kept up-to-date as it ran and does not
+	 * need to be invalidated.
+	 */
+	bool all_other_ctxs_invalid;
+
+	/*
 	 * Access to this CR4 shadow and to H/W CR4 is protected by
 	 * disabling interrupts when modifying either one.
 	 */
@@ -261,6 +272,19 @@ static inline unsigned long cr4_read_sha
 	return this_cpu_read(cpu_tlbstate.cr4);
 }
 
+static inline void tlb_flush_shared_nonglobals(void)
+{
+	/*
+	 * With global pages, all of the shared kernel page tables
+	 * are set as _PAGE_GLOBAL.  We have no shared nonglobals
+	 * and nothing to do here.
+	 */
+	if (IS_ENABLED(CONFIG_X86_GLOBAL_PAGES))
+		return;
+
+	this_cpu_write(cpu_tlbstate.all_other_ctxs_invalid, true);
+}
+
 /*
  * Save some of cr4 feature set we're using (e.g.  Pentium 4MB
  * enable and PPro Global page enable), so that any CPU's that boot
@@ -290,6 +314,10 @@ static inline void __native_flush_tlb(vo
 	preempt_disable();
 	native_write_cr3(__native_read_cr3());
 	preempt_enable();
+	/*
+	 * Does not need tlb_flush_shared_nonglobals() since the CR3 write
+	 * without PCIDs flushes all non-globals.
+	 */
 }
 
 static inline void __native_flush_tlb_global_irq_disabled(void)
@@ -349,24 +377,23 @@ static inline void __native_flush_tlb_si
 
 static inline void __flush_tlb_all(void)
 {
-	if (boot_cpu_has(X86_FEATURE_PGE))
+	if (boot_cpu_has(X86_FEATURE_PGE)) {
 		__flush_tlb_global();
-	else
+	} else {
 		__flush_tlb();
-
-	/*
-	 * Note: if we somehow had PCID but not PGE, then this wouldn't work --
-	 * we'd end up flushing kernel translations for the current ASID but
-	 * we might fail to flush kernel translations for other cached ASIDs.
-	 *
-	 * To avoid this issue, we force PCID off if PGE is off.
-	 */
+		tlb_flush_shared_nonglobals();
+	}
 }
 
 static inline void __flush_tlb_one(unsigned long addr)
 {
 	count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ONE);
 	__flush_tlb_single(addr);
+	/*
+	 * Invalidate other address spaces inaccessible to single-page
+	 * invalidation:
+	 */
+	tlb_flush_shared_nonglobals();
 }
 
 #define TLB_FLUSH_ALL	-1UL
diff -puN arch/x86/mm/tlb.c~kaiser-pcid-pre-clear-pcid-cache arch/x86/mm/tlb.c
--- a/arch/x86/mm/tlb.c~kaiser-pcid-pre-clear-pcid-cache	2017-11-22 15:45:52.881619729 -0800
+++ b/arch/x86/mm/tlb.c	2017-11-22 15:45:52.885619729 -0800
@@ -28,6 +28,38 @@
  *	Implement flush IPI by CALL_FUNCTION_VECTOR, Alex Shi
  */
 
+/*
+ * We get here when we do something requiring a TLB invalidation
+ * but could not go invalidate all of the contexts.  We do the
+ * necessary invalidation by clearing out the 'ctx_id' which
+ * forces a TLB flush when the context is loaded.
+ */
+void clear_non_loaded_ctxs(void)
+{
+	u16 asid;
+
+	/*
+	 * This is only expected to be set if we have disabled
+	 * kernel _PAGE_GLOBAL pages.
+	 */
+	if (IS_ENABLED(CONFIG_X86_GLOBAL_PAGES)) {
+		WARN_ON_ONCE(1);
+		return;
+	}
+
+	for (asid = 0; asid < TLB_NR_DYN_ASIDS; asid++) {
+		/* Do not need to flush the current asid */
+		if (asid == this_cpu_read(cpu_tlbstate.loaded_mm_asid))
+			continue;
+		/*
+		 * Make sure the next time we go to switch to
+		 * this asid, we do a flush:
+		 */
+		this_cpu_write(cpu_tlbstate.ctxs[asid].ctx_id, 0);
+	}
+	this_cpu_write(cpu_tlbstate.all_other_ctxs_invalid, false);
+}
+
 atomic64_t last_mm_ctx_id = ATOMIC64_INIT(1);
 
 
@@ -42,6 +74,9 @@ static void choose_new_asid(struct mm_st
 		return;
 	}
 
+	if (this_cpu_read(cpu_tlbstate.all_other_ctxs_invalid))
+		clear_non_loaded_ctxs();
+
 	for (asid = 0; asid < TLB_NR_DYN_ASIDS; asid++) {
 		if (this_cpu_read(cpu_tlbstate.ctxs[asid].ctx_id) !=
 		    next->context.ctx_id)
_

^ permalink raw reply	[flat|nested] 131+ messages in thread

* [PATCH 17/23] x86, kaiser: use PCID feature to make user and kernel switches faster
  2017-11-23  0:34 ` Dave Hansen
@ 2017-11-23  0:35   ` Dave Hansen
  -1 siblings, 0 replies; 131+ messages in thread
From: Dave Hansen @ 2017-11-23  0:35 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, dave.hansen, moritz.lipp, daniel.gruss,
	michael.schwarz, richard.fellner, luto, torvalds, keescook,
	hughd, x86


From: Dave Hansen <dave.hansen@linux.intel.com>

Short summary: Use x86 PCID feature to avoid flushing the TLB at all
interrupts and syscalls.  Speed them up.  Makes context switches
and TLB flushing slower.

Background:

KAISER keeps two copies of the page tables.  Switches between the
copies are performed by writing to the CR3 register.  But, CR3
was really designed for context switches and writes to it also
flush the entire TLB (modulo global pages).  This TLB flush
increases the cost of interrupts and context switches.  For
syscall-heavy microbenchmarks it can cut the rate of syscalls by
2/3.

The kernel recently gained support for an Intel CPU feature
called Process Context IDentifiers (PCID) thanks to Andy
Lutomirski.  This feature is intended to allow you to switch
between contexts without flushing the TLB.

Implementation:

PCIDs can be used to avoid flushing the TLB at kernel entry/exit.
This speeds up both interrupts and syscalls.

First, the kernel and userspace must be assigned different ASIDs.
On entry from userspace, move over to the kernel page tables
*and* ASID.  On exit, restore the user page tables and ASID.
Fortunately, the ASID is programmed via CR3, which is already
being used to switch between the user and kernel page tables.
This gives us convenient, one-stop shopping.
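
In C, the switch amounts to roughly the following (a sketch of what
the ADJUST_KERNEL_CR3/ADJUST_USER_CR3 asm macros in this patch do,
not the literal entry code; the bit numbers are the ones the series
uses):

	#define PGTABLE_BIT	12	/* selects one half of the 8k PGD */
	#define ASID_BIT	11	/* X86_CR3_KAISER_SWITCH_BIT */
	#define SWITCH_MASK	((1ULL << PGTABLE_BIT) | (1ULL << ASID_BIT))
	#define NOFLUSH		(1ULL << 63)	/* honored only when CR4.PCIDE=1 */

	static unsigned long long adjust_kernel_cr3(unsigned long long cr3,
						    int pcid_on)
	{
		if (pcid_on)
			cr3 |= NOFLUSH;		/* do not flush on the write */
		return cr3 & ~SWITCH_MASK;	/* kernel PGD half, kernel ASID */
	}

	static unsigned long long adjust_user_cr3(unsigned long long cr3,
						  int pcid_on)
	{
		if (pcid_on)
			cr3 |= NOFLUSH;
		return cr3 | SWITCH_MASK;	/* user PGD half, user ASID */
	}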

The CR3 write which is used to switch between processes provides
all the TLB flushing normally required at context switch time.
But, with KAISER, that CR3 write only flushes the current
(kernel) ASID.  An extra TLB flush operation is now required in
order to flush the user ASID.  This new instruction (INVPCID) is
probably ~100 cycles, but this is done with the assumption that
the time lost in context switches is more than made up for by
lower cost of interrupts and syscalls.

Support:

PCIDs are generally available on Sandybridge and newer CPUs.  However,
the accompanying INVPCID instruction did not become available until
Haswell (the "v4" parts, also known as fourth-generation Core).  This
instruction allows non-current-PCID TLB entries to be flushed without
switching CR3 and global pages to be flushed without a double
MOV-to-CR4.

Without INVPCID, PCIDs are much harder to use.  TLB invalidation gets
much more onerous:

1. Every kernel TLB flush (even for a single page) requires an
   interrupts-off MOV-to-CR4 which is very expensive.  This is because
   there is no way to flush a kernel address that might be loaded
   in *EVERY* PCID.  Right now, there are "only" ~12 of these per-cpu,
   but it is too painful to flush each of them with a MOV-to-CR3.
   That leaves only the MOV-to-CR4.
2. Every userspace flush (even for a single page) requires one of the
   following:
   a. A pair of flushing (bit 63 clear) CR3 writes: one for
      the kernel ASID and another for userspace.
   b. A pair of non-flushing CR3 writes (bit 63 set) with the
      flush done for each.  For instance, what is currently a
      single instruction without KAISER:

		invpcid_flush_one(current_pcid, addr);

      becomes this with KAISER:

      		invpcid_flush_one(current_kern_pcid, addr);
		invpcid_flush_one(current_user_pcid, addr);

      and this without INVPCID:

      		__native_flush_tlb_single(addr);
		write_cr3(mm->pgd | current_user_pcid | NOFLUSH);
      		__native_flush_tlb_single(addr);
		write_cr3(mm->pgd | current_kern_pcid | NOFLUSH);

So, for now, fully disable PCIDs with KAISER when INVPCID is not
available.  This is fixable, but it's an optimization that can be
performed later.

Hugh Dickins also points out that PCIDs really have two distinct
use-cases in the context of KAISER.  The first way they can be used
is as "TLB preservation across context-switch", which is what
Andy Lutomirski's 4.14 PCID code does.  They can also be used as
a "KAISER syscall/interrupt accelerator".  If we just use them to
speed up syscall/interrupts (and ignore the context-switch TLB
preservation), then the deficiency of not having INVPCID
becomes much less onerous.

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Moritz Lipp <moritz.lipp@iaik.tugraz.at>
Cc: Daniel Gruss <daniel.gruss@iaik.tugraz.at>
Cc: Michael Schwarz <michael.schwarz@iaik.tugraz.at>
Cc: Richard Fellner <richard.fellner@student.tugraz.at>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Kees Cook <keescook@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: x86@kernel.org
---

 b/arch/x86/entry/calling.h                    |   25 +++-
 b/arch/x86/entry/entry_64.S                   |    1 
 b/arch/x86/include/asm/cpufeatures.h          |    1 
 b/arch/x86/include/asm/pgtable_types.h        |   11 ++
 b/arch/x86/include/asm/tlbflush.h             |  137 +++++++++++++++++++++-----
 b/arch/x86/include/uapi/asm/processor-flags.h |    3 
 b/arch/x86/kvm/x86.c                          |    3 
 b/arch/x86/mm/init.c                          |   75 +++++++++-----
 b/arch/x86/mm/tlb.c                           |   66 ++++++++++++
 9 files changed, 262 insertions(+), 60 deletions(-)

diff -puN arch/x86/entry/calling.h~kaiser-pcid arch/x86/entry/calling.h
--- a/arch/x86/entry/calling.h~kaiser-pcid	2017-11-22 15:45:53.443619728 -0800
+++ b/arch/x86/entry/calling.h	2017-11-22 15:45:53.461619728 -0800
@@ -3,6 +3,7 @@
 #include <asm/unwind_hints.h>
 #include <asm/cpufeatures.h>
 #include <asm/page_types.h>
+#include <asm/pgtable_types.h>
 
 /*
 
@@ -192,16 +193,20 @@ For 32-bit we have the following convent
 #ifdef CONFIG_KAISER
 
 /* KAISER PGDs are 8k.  Flip bit 12 to switch between the two halves: */
-#define KAISER_SWITCH_MASK (1<<PAGE_SHIFT)
+#define KAISER_SWITCH_PGTABLES_MASK (1<<PAGE_SHIFT)
+#define KAISER_SWITCH_MASK     (KAISER_SWITCH_PGTABLES_MASK|\
+				(1<<X86_CR3_KAISER_SWITCH_BIT))
 
 .macro ADJUST_KERNEL_CR3 reg:req
-	/* Clear "KAISER bit", point CR3 at kernel pagetables: */
-	andq	$(~KAISER_SWITCH_MASK), \reg
+	ALTERNATIVE "", "bts $63, \reg", X86_FEATURE_PCID
+	/* Clear PCID and "KAISER bit", point CR3 at kernel pagetables: */
+	andq    $(~KAISER_SWITCH_MASK), \reg
 .endm
 
 .macro ADJUST_USER_CR3 reg:req
-	/* Move CR3 up a page to the user page tables: */
-	orq	$(KAISER_SWITCH_MASK), \reg
+	ALTERNATIVE "", "bts $63, \reg", X86_FEATURE_PCID
+	/* Set user PCID bit, and move CR3 up a page to the user page tables: */
+	orq     $(KAISER_SWITCH_MASK), \reg
 .endm
 
 .macro SWITCH_TO_KERNEL_CR3 scratch_reg:req
@@ -220,8 +225,14 @@ For 32-bit we have the following convent
 	movq	%cr3, %r\scratch_reg
 	movq	%r\scratch_reg, \save_reg
 	/*
-	 * Is the switch bit zero?  This means the address is
-	 * up in real KAISER patches in a moment.
+	 * Is the "switch mask" all zero?  That means that both of
+	 * these are zero:
+	 *
+	 *	1. The user/kernel PCID bit, and
+	 *	2. The user/kernel "bit" that points CR3 to the
+	 *	   bottom half of the 8k PGD
+	 *
+	 * That indicates a kernel CR3 value, not user/shadow.
 	 */
 	testq	$(KAISER_SWITCH_MASK), %r\scratch_reg
 	jz	.Ldone_\@
diff -puN arch/x86/entry/entry_64.S~kaiser-pcid arch/x86/entry/entry_64.S
--- a/arch/x86/entry/entry_64.S~kaiser-pcid	2017-11-22 15:45:53.445619728 -0800
+++ b/arch/x86/entry/entry_64.S	2017-11-22 15:45:53.464619728 -0800
@@ -671,6 +671,7 @@ END(irq_entries_start)
 	 * tracking that we're in kernel mode.
 	 */
 	SWAPGS
+	SWITCH_TO_KERNEL_CR3 scratch_reg=%rax
 
 	/*
 	 * We need to tell lockdep that IRQs are off.  We can't do this until
diff -puN arch/x86/include/asm/cpufeatures.h~kaiser-pcid arch/x86/include/asm/cpufeatures.h
--- a/arch/x86/include/asm/cpufeatures.h~kaiser-pcid	2017-11-22 15:45:53.447619728 -0800
+++ b/arch/x86/include/asm/cpufeatures.h	2017-11-22 15:45:53.464619728 -0800
@@ -197,6 +197,7 @@
 #define X86_FEATURE_CAT_L3		( 7*32+ 4) /* Cache Allocation Technology L3 */
 #define X86_FEATURE_CAT_L2		( 7*32+ 5) /* Cache Allocation Technology L2 */
 #define X86_FEATURE_CDP_L3		( 7*32+ 6) /* Code and Data Prioritization L3 */
+#define X86_FEATURE_INVPCID_SINGLE ( 7*32+ 7) /* Effectively INVPCID && CR4.PCIDE=1 */
 
 #define X86_FEATURE_HW_PSTATE		( 7*32+ 8) /* AMD HW-PState */
 #define X86_FEATURE_PROC_FEEDBACK	( 7*32+ 9) /* AMD ProcFeedbackInterface */
diff -puN arch/x86/include/asm/pgtable_types.h~kaiser-pcid arch/x86/include/asm/pgtable_types.h
--- a/arch/x86/include/asm/pgtable_types.h~kaiser-pcid	2017-11-22 15:45:53.448619728 -0800
+++ b/arch/x86/include/asm/pgtable_types.h	2017-11-22 15:45:53.464619728 -0800
@@ -140,6 +140,17 @@
 			 _PAGE_SOFT_DIRTY)
 #define _HPAGE_CHG_MASK (_PAGE_CHG_MASK | _PAGE_PSE)
 
+/* The ASID is the lower 12 bits of CR3 */
+#define X86_CR3_PCID_ASID_MASK  (_AC((1<<12)-1, UL))
+
+/* Mask for all the PCID-related bits in CR3: */
+#define X86_CR3_PCID_MASK       (X86_CR3_PCID_NOFLUSH | X86_CR3_PCID_ASID_MASK)
+
+/* Make sure this is only usable in KAISER #ifdef'd code: */
+#ifdef CONFIG_KAISER
+#define X86_CR3_KAISER_SWITCH_BIT 11
+#endif
+
 /*
  * The cache modes defined here are used to translate between pure SW usage
  * and the HW defined cache mode bits and/or PAT entries.
diff -puN arch/x86/include/asm/tlbflush.h~kaiser-pcid arch/x86/include/asm/tlbflush.h
--- a/arch/x86/include/asm/tlbflush.h~kaiser-pcid	2017-11-22 15:45:53.450619728 -0800
+++ b/arch/x86/include/asm/tlbflush.h	2017-11-22 15:45:53.465619728 -0800
@@ -78,7 +78,12 @@ static inline u64 inc_mm_tlb_gen(struct
 /* There are 12 bits of space for ASIDS in CR3 */
 #define CR3_HW_ASID_BITS 12
 /* When enabled, KAISER consumes a single bit for user/kernel switches */
+#ifdef CONFIG_KAISER
+#define X86_CR3_KAISER_SWITCH_BIT 11
+#define KAISER_CONSUMED_ASID_BITS 1
+#else
 #define KAISER_CONSUMED_ASID_BITS 0
+#endif
 
 #define CR3_AVAIL_ASID_BITS (CR3_HW_ASID_BITS - KAISER_CONSUMED_ASID_BITS)
 /*
@@ -88,21 +93,62 @@ static inline u64 inc_mm_tlb_gen(struct
  */
 #define MAX_ASID_AVAILABLE ((1<<CR3_AVAIL_ASID_BITS) - 2)
 
+/*
+ * 6 because 6 should be plenty and struct tlb_state will fit in
+ * two cache lines.
+ */
+#define TLB_NR_DYN_ASIDS 6
+
 static inline u16 kern_asid(u16 asid)
 {
 	VM_WARN_ON_ONCE(asid > MAX_ASID_AVAILABLE);
+
+#ifdef CONFIG_KAISER
+	/*
+	 * Make sure that the dynamic ASID space does not conflict
+	 * with the bit we are using to switch between user and
+	 * kernel ASIDs.
+	 */
+	BUILD_BUG_ON(TLB_NR_DYN_ASIDS >= (1<<X86_CR3_KAISER_SWITCH_BIT));
+
 	/*
-	 * If PCID is on, ASID-aware code paths put the ASID+1 into the PCID
-	 * bits.  This serves two purposes.  It prevents a nasty situation in
-	 * which PCID-unaware code saves CR3, loads some other value (with PCID
-	 * == 0), and then restores CR3, thus corrupting the TLB for ASID 0 if
-	 * the saved ASID was nonzero.  It also means that any bugs involving
-	 * loading a PCID-enabled CR3 with CR4.PCIDE off will trigger
-	 * deterministically.
+	 * The ASID being passed in here should have respected
+	 * the MAX_ASID_AVAILABLE and thus never have the switch
+	 * bit set.
+	 */
+	VM_WARN_ON_ONCE(asid & (1<<X86_CR3_KAISER_SWITCH_BIT));
+#endif
+	/*
+	 * The dynamically-assigned ASIDs that get passed in are
+	 * small (<TLB_NR_DYN_ASIDS).  They never have the high
+	 * switch bit set, so do not bother to clear it.
+	 */
+
+	/*
+	 * If PCID is on, ASID-aware code paths put the ASID+1
+	 * into the PCID bits.  This serves two purposes.  It
+	 * prevents a nasty situation in which PCID-unaware code
+	 * saves CR3, loads some other value (with PCID == 0),
+	 * and then restores CR3, thus corrupting the TLB for
+	 * ASID 0 if the saved ASID was nonzero.  It also means
+	 * that any bugs involving loading a PCID-enabled CR3
+	 * with CR4.PCIDE off will trigger deterministically.
 	 */
 	return asid + 1;
 }
 
+/*
+ * The user ASID is just the kernel one, plus the "switch bit".
+ */
+static inline u16 user_asid(u16 asid)
+{
+	u16 ret = kern_asid(asid);
+#ifdef CONFIG_KAISER
+	ret |= 1<<X86_CR3_KAISER_SWITCH_BIT;
+#endif
+	return ret;
+}
+
 struct pgd_t;
 static inline unsigned long build_cr3(pgd_t *pgd, u16 asid)
 {
@@ -145,12 +191,6 @@ static inline bool tlb_defer_switch_to_i
 	return !static_cpu_has(X86_FEATURE_PCID);
 }
 
-/*
- * 6 because 6 should be plenty and struct tlb_state will fit in
- * two cache lines.
- */
-#define TLB_NR_DYN_ASIDS 6
-
 struct tlb_context {
 	u64 ctx_id;
 	u64 tlb_gen;
@@ -306,18 +346,42 @@ extern void initialize_tlbstate_and_flus
 
 static inline void __native_flush_tlb(void)
 {
+	if (!cpu_feature_enabled(X86_FEATURE_INVPCID)) {
+		/*
+		 * native_write_cr3() only clears the current PCID if
+		 * CR4 has X86_CR4_PCIDE set.  In other words, this does
+		 * not fully flush the TLB if PCIDs are in use.
+		 *
+		 * With KAISER and PCIDs, this means that we did not
+		 * flush the user PCID.  Warn if that happens.
+		 */
+		if (IS_ENABLED(CONFIG_KAISER))
+			WARN_ON_ONCE(this_cpu_read(cpu_tlbstate.cr4) &
+				     X86_CR4_PCIDE);
+		/*
+		 * If current->mm == NULL then we borrow a mm
+		 * which may change during a task switch and
+		 * therefore we must not be preempted while we
+		 * write CR3 back:
+		 */
+		preempt_disable();
+		native_write_cr3(__native_read_cr3());
+		preempt_enable();
+		/*
+		 * Does not need tlb_flush_shared_nonglobals()
+		 * since the CR3 write without PCIDs flushes all
+		 * non-globals.
+		 */
+		return;
+	}
 	/*
-	 * If current->mm == NULL then we borrow a mm which may change during a
-	 * task switch and therefore we must not be preempted while we write CR3
-	 * back:
-	 */
-	preempt_disable();
-	native_write_cr3(__native_read_cr3());
-	preempt_enable();
-	/*
-	 * Does not need tlb_flush_shared_nonglobals() since the CR3 write
-	 * without PCIDs flushes all non-globals.
+	 * We are no longer using globals with KAISER, so a
+	 * "nonglobals" flush would work too. But, this is more
+	 * conservative.
+	 *
+	 * Note, this works with CR4.PCIDE=0 or 1.
 	 */
+	invpcid_flush_all();
 }
 
 static inline void __native_flush_tlb_global_irq_disabled(void)
@@ -353,6 +417,8 @@ static inline void __native_flush_tlb_gl
 		/*
 		 * Using INVPCID is considerably faster than a pair of writes
 		 * to CR4 sandwiched inside an IRQ flag save/restore.
+		 *
+		 * Note, this works with CR4.PCIDE=0 or 1.
 		 */
 		invpcid_flush_all();
 		return;
@@ -372,7 +438,30 @@ static inline void __native_flush_tlb_gl
 
 static inline void __native_flush_tlb_single(unsigned long addr)
 {
-	asm volatile("invlpg (%0)" ::"r" (addr) : "memory");
+	u32 loaded_mm_asid = this_cpu_read(cpu_tlbstate.loaded_mm_asid);
+
+	/*
+	 * Some platforms #GP if we call invpcid(type=1/2) before
+	 * CR4.PCIDE=1.  Just fall back to invlpg in the case we are
+	 * called early.
+	 */
+	if (!this_cpu_has(X86_FEATURE_INVPCID_SINGLE)) {
+		asm volatile("invlpg (%0)" ::"r" (addr) : "memory");
+		return;
+	}
+	/* Flush the address out of both PCIDs. */
+	/*
+	 * An optimization here might be to determine addresses
+	 * that are only kernel-mapped and only flush the kernel
+	 * ASID.  But, userspace flushes are probably much more
+	 * important performance-wise.
+	 *
+	 * Make sure to do only a single invpcid when KAISER is
+	 * disabled and we have only a single ASID.
+	 */
+	if (kern_asid(loaded_mm_asid) != user_asid(loaded_mm_asid))
+		invpcid_flush_one(user_asid(loaded_mm_asid), addr);
+	invpcid_flush_one(kern_asid(loaded_mm_asid), addr);
 }
 
 static inline void __flush_tlb_all(void)
diff -puN arch/x86/include/uapi/asm/processor-flags.h~kaiser-pcid arch/x86/include/uapi/asm/processor-flags.h
--- a/arch/x86/include/uapi/asm/processor-flags.h~kaiser-pcid	2017-11-22 15:45:53.452619728 -0800
+++ b/arch/x86/include/uapi/asm/processor-flags.h	2017-11-22 15:45:53.466619728 -0800
@@ -78,7 +78,8 @@
 #define X86_CR3_PWT		_BITUL(X86_CR3_PWT_BIT)
 #define X86_CR3_PCD_BIT		4 /* Page Cache Disable */
 #define X86_CR3_PCD		_BITUL(X86_CR3_PCD_BIT)
-#define X86_CR3_PCID_MASK	_AC(0x00000fff,UL) /* PCID Mask */
+#define X86_CR3_PCID_NOFLUSH_BIT 63 /* Preserve old PCID */
+#define X86_CR3_PCID_NOFLUSH    _BITULL(X86_CR3_PCID_NOFLUSH_BIT)
 
 /*
  * Intel CPU features in CR4
diff -puN arch/x86/kvm/x86.c~kaiser-pcid arch/x86/kvm/x86.c
--- a/arch/x86/kvm/x86.c~kaiser-pcid	2017-11-22 15:45:53.454619728 -0800
+++ b/arch/x86/kvm/x86.c	2017-11-22 15:45:53.468619728 -0800
@@ -805,7 +805,8 @@ int kvm_set_cr4(struct kvm_vcpu *vcpu, u
 			return 1;
 
 		/* PCID can not be enabled when cr3[11:0]!=000H or EFER.LMA=0 */
-		if ((kvm_read_cr3(vcpu) & X86_CR3_PCID_MASK) || !is_long_mode(vcpu))
+		if ((kvm_read_cr3(vcpu) & X86_CR3_PCID_ASID_MASK) ||
+		    !is_long_mode(vcpu))
 			return 1;
 	}
 
diff -puN arch/x86/mm/init.c~kaiser-pcid arch/x86/mm/init.c
--- a/arch/x86/mm/init.c~kaiser-pcid	2017-11-22 15:45:53.456619728 -0800
+++ b/arch/x86/mm/init.c	2017-11-22 15:45:53.468619728 -0800
@@ -196,34 +196,59 @@ static void __init probe_page_size_mask(
 
 static void setup_pcid(void)
 {
-#ifdef CONFIG_X86_64
-	if (boot_cpu_has(X86_FEATURE_PCID)) {
-		if (boot_cpu_has(X86_FEATURE_PGE)) {
-			/*
-			 * This can't be cr4_set_bits_and_update_boot() --
-			 * the trampoline code can't handle CR4.PCIDE and
-			 * it wouldn't do any good anyway.  Despite the name,
-			 * cr4_set_bits_and_update_boot() doesn't actually
-			 * cause the bits in question to remain set all the
-			 * way through the secondary boot asm.
-			 *
-			 * Instead, we brute-force it and set CR4.PCIDE
-			 * manually in start_secondary().
-			 */
-			cr4_set_bits(X86_CR4_PCIDE);
-		} else {
-			/*
-			 * flush_tlb_all(), as currently implemented, won't
-			 * work if PCID is on but PGE is not.  Since that
-			 * combination doesn't exist on real hardware, there's
-			 * no reason to try to fully support it, but it's
-			 * polite to avoid corrupting data if we're on
-			 * an improperly configured VM.
-			 */
+	if (!IS_ENABLED(CONFIG_X86_64))
+		return;
+
+	if (!boot_cpu_has(X86_FEATURE_PCID))
+		return;
+
+	if (boot_cpu_has(X86_FEATURE_PGE)) {
+		/*
+		 * KAISER uses a PCID for the kernel and another
+		 * for userspace.  Both PCIDs need to be flushed
+		 * when the TLB flush functions are called.  But,
+		 * flushing *another* PCID is insane without
+		 * INVPCID.  Just avoid using PCIDs at all if we
+		 * have KAISER and do not have INVPCID.
+		 */
+		if (!IS_ENABLED(CONFIG_X86_GLOBAL_PAGES) &&
+		    !boot_cpu_has(X86_FEATURE_INVPCID)) {
 			setup_clear_cpu_cap(X86_FEATURE_PCID);
+			return;
 		}
+		/*
+		 * This can't be cr4_set_bits_and_update_boot() --
+		 * the trampoline code can't handle CR4.PCIDE and
+		 * it wouldn't do any good anyway.  Despite the name,
+		 * cr4_set_bits_and_update_boot() doesn't actually
+		 * cause the bits in question to remain set all the
+		 * way through the secondary boot asm.
+		 *
+		 * Instead, we brute-force it and set CR4.PCIDE
+		 * manually in start_secondary().
+		 */
+		cr4_set_bits(X86_CR4_PCIDE);
+
+		/*
+		 * INVPCID's single-context modes (2/3) only work
+		 * if we set X86_CR4_PCIDE, *and* we have INVPCID
+		 * support.  It's unusable on systems that have
+		 * X86_CR4_PCIDE clear, or that have no INVPCID
+		 * support at all.
+		 */
+		if (boot_cpu_has(X86_FEATURE_INVPCID))
+			setup_force_cpu_cap(X86_FEATURE_INVPCID_SINGLE);
+	} else {
+		/*
+		 * flush_tlb_all(), as currently implemented, won't
+		 * work if PCID is on but PGE is not.  Since that
+		 * combination doesn't exist on real hardware, there's
+		 * no reason to try to fully support it, but it's
+		 * polite to avoid corrupting data if we're on
+		 * an improperly configured VM.
+		 */
+		setup_clear_cpu_cap(X86_FEATURE_PCID);
 	}
-#endif
 }
 
 #ifdef CONFIG_X86_32
diff -puN arch/x86/mm/tlb.c~kaiser-pcid arch/x86/mm/tlb.c
--- a/arch/x86/mm/tlb.c~kaiser-pcid	2017-11-22 15:45:53.458619728 -0800
+++ b/arch/x86/mm/tlb.c	2017-11-22 15:45:53.469619728 -0800
@@ -100,6 +100,68 @@ static void choose_new_asid(struct mm_st
 	*need_flush = true;
 }
 
+/*
+ * Given a kernel asid, flush the corresponding KAISER
+ * user ASID.
+ */
+static void flush_user_asid(pgd_t *pgd, u16 kern_asid)
+{
+	/* There is no user ASID if KAISER is off */
+	if (!IS_ENABLED(CONFIG_KAISER))
+		return;
+	/*
+	 * We only have a single ASID if PCID is off and the CR3
+	 * write will have flushed it.
+	 */
+	if (!cpu_feature_enabled(X86_FEATURE_PCID))
+		return;
+	/*
+	 * With PCIDs enabled, write_cr3() only flushes TLB
+	 * entries for the current (kernel) ASID.  This leaves
+	 * old TLB entries for the user ASID in place and we must
+	 * flush that context separately.  We can theoretically
+	 * delay doing this until we actually load up the
+	 * userspace CR3, but do it here for simplicity.
+	 */
+	if (cpu_feature_enabled(X86_FEATURE_INVPCID)) {
+		invpcid_flush_single_context(user_asid(kern_asid));
+	} else {
+		/*
+		 * On systems with PCIDs, but no INVPCID, the only
+		 * way to flush a PCID is a CR3 write.  Note that
+		 * we use the kernel page tables with the *user*
+		 * ASID here.
+		 */
+		unsigned long user_asid_flush_cr3;
+		user_asid_flush_cr3 = build_cr3(pgd, user_asid(kern_asid));
+		write_cr3(user_asid_flush_cr3);
+		/*
+		 * We do not use PCIDs with KAISER unless we also
+		 * have INVPCID.  Getting here is unexpected.
+		 */
+		WARN_ON_ONCE(1);
+	}
+}
+
+static void load_new_mm_cr3(pgd_t *pgdir, u16 new_asid, bool need_flush)
+{
+	unsigned long new_mm_cr3;
+
+	if (need_flush) {
+		flush_user_asid(pgdir, new_asid);
+		new_mm_cr3 = build_cr3(pgdir, new_asid);
+	} else {
+		new_mm_cr3 = build_cr3_noflush(pgdir, new_asid);
+	}
+
+	/*
+	 * Caution: many callers of this function expect
+	 * that load_cr3() is serializing and orders TLB
+	 * fills with respect to the mm_cpumask writes.
+	 */
+	write_cr3(new_mm_cr3);
+}
+
 void leave_mm(int cpu)
 {
 	struct mm_struct *loaded_mm = this_cpu_read(cpu_tlbstate.loaded_mm);
@@ -230,7 +292,7 @@ void switch_mm_irqs_off(struct mm_struct
 		if (need_flush) {
 			this_cpu_write(cpu_tlbstate.ctxs[new_asid].ctx_id, next->context.ctx_id);
 			this_cpu_write(cpu_tlbstate.ctxs[new_asid].tlb_gen, next_tlb_gen);
-			write_cr3(build_cr3(next->pgd, new_asid));
+			load_new_mm_cr3(next->pgd, new_asid, true);
 
 			/*
 			 * NB: This gets called via leave_mm() in the idle path
@@ -243,7 +305,7 @@ void switch_mm_irqs_off(struct mm_struct
 			trace_tlb_flush_rcuidle(TLB_FLUSH_ON_TASK_SWITCH, TLB_FLUSH_ALL);
 		} else {
 			/* The new ASID is already up to date. */
-			write_cr3(build_cr3_noflush(next->pgd, new_asid));
+			load_new_mm_cr3(next->pgd, new_asid, false);
 
 			/* See above wrt _rcuidle. */
 			trace_tlb_flush_rcuidle(TLB_FLUSH_ON_TASK_SWITCH, 0);
_

^ permalink raw reply	[flat|nested] 131+ messages in thread

+		/*
+		 * INVPCID's single-context modes (2/3) only work
+		 * if we set X86_CR4_PCIDE, *and* we have INVPCID
+		 * support.  It's unusable on systems that have
+		 * X86_CR4_PCIDE clear, or that have no INVPCID
+		 * support at all.
+		 */
+		if (boot_cpu_has(X86_FEATURE_INVPCID))
+			setup_force_cpu_cap(X86_FEATURE_INVPCID_SINGLE);
+	} else {
+		/*
+		 * flush_tlb_all(), as currently implemented, won't
+		 * work if PCID is on but PGE is not.  Since that
+		 * combination doesn't exist on real hardware, there's
+		 * no reason to try to fully support it, but it's
+		 * polite to avoid corrupting data if we're on
+		 * an improperly configured VM.
+		 */
+		setup_clear_cpu_cap(X86_FEATURE_PCID);
 	}
-#endif
 }
 
 #ifdef CONFIG_X86_32
diff -puN arch/x86/mm/tlb.c~kaiser-pcid arch/x86/mm/tlb.c
--- a/arch/x86/mm/tlb.c~kaiser-pcid	2017-11-22 15:45:53.458619728 -0800
+++ b/arch/x86/mm/tlb.c	2017-11-22 15:45:53.469619728 -0800
@@ -100,6 +100,68 @@ static void choose_new_asid(struct mm_st
 	*need_flush = true;
 }
 
+/*
+ * Given a kernel asid, flush the corresponding KAISER
+ * user ASID.
+ */
+static void flush_user_asid(pgd_t *pgd, u16 kern_asid)
+{
+	/* There is no user ASID if KAISER is off */
+	if (!IS_ENABLED(CONFIG_KAISER))
+		return;
+	/*
+	 * We only have a single ASID if PCID is off and the CR3
+	 * write will have flushed it.
+	 */
+	if (!cpu_feature_enabled(X86_FEATURE_PCID))
+		return;
+	/*
+	 * With PCIDs enabled, write_cr3() only flushes TLB
+	 * entries for the current (kernel) ASID.  This leaves
+	 * old TLB entries for the user ASID in place and we must
+	 * flush that context separately.  We can theoretically
+	 * delay doing this until we actually load up the
+	 * userspace CR3, but do it here for simplicity.
+	 */
+	if (cpu_feature_enabled(X86_FEATURE_INVPCID)) {
+		invpcid_flush_single_context(user_asid(kern_asid));
+	} else {
+		/*
+		 * On systems with PCIDs, but no INVPCID, the only
+		 * way to flush a PCID is a CR3 write.  Note that
+		 * we use the kernel page tables with the *user*
+		 * ASID here.
+		 */
+		unsigned long user_asid_flush_cr3;
+		user_asid_flush_cr3 = build_cr3(pgd, user_asid(kern_asid));
+		write_cr3(user_asid_flush_cr3);
+		/*
+		 * We do not use PCIDs with KAISER unless we also
+		 * have INVPCID.  Getting here is unexpected.
+		 */
+		WARN_ON_ONCE(1);
+	}
+}
+
+static void load_new_mm_cr3(pgd_t *pgdir, u16 new_asid, bool need_flush)
+{
+	unsigned long new_mm_cr3;
+
+	if (need_flush) {
+		flush_user_asid(pgdir, new_asid);
+		new_mm_cr3 = build_cr3(pgdir, new_asid);
+	} else {
+		new_mm_cr3 = build_cr3_noflush(pgdir, new_asid);
+	}
+
+	/*
+	 * Caution: many callers of this function expect
+	 * that load_cr3() is serializing and orders TLB
+	 * fills with respect to the mm_cpumask writes.
+	 */
+	write_cr3(new_mm_cr3);
+}
+
 void leave_mm(int cpu)
 {
 	struct mm_struct *loaded_mm = this_cpu_read(cpu_tlbstate.loaded_mm);
@@ -230,7 +292,7 @@ void switch_mm_irqs_off(struct mm_struct
 		if (need_flush) {
 			this_cpu_write(cpu_tlbstate.ctxs[new_asid].ctx_id, next->context.ctx_id);
 			this_cpu_write(cpu_tlbstate.ctxs[new_asid].tlb_gen, next_tlb_gen);
-			write_cr3(build_cr3(next->pgd, new_asid));
+			load_new_mm_cr3(next->pgd, new_asid, true);
 
 			/*
 			 * NB: This gets called via leave_mm() in the idle path
@@ -243,7 +305,7 @@ void switch_mm_irqs_off(struct mm_struct
 			trace_tlb_flush_rcuidle(TLB_FLUSH_ON_TASK_SWITCH, TLB_FLUSH_ALL);
 		} else {
 			/* The new ASID is already up to date. */
-			write_cr3(build_cr3_noflush(next->pgd, new_asid));
+			load_new_mm_cr3(next->pgd, new_asid, false);
 
 			/* See above wrt _rcuidle. */
 			trace_tlb_flush_rcuidle(TLB_FLUSH_ON_TASK_SWITCH, 0);
_

^ permalink raw reply	[flat|nested] 131+ messages in thread
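
For readers mapping the pieces above together, here is a rough,
self-contained model of how the ASID bits compose into CR3 values.
It assumes the KAISER switch bit is the top bit of the 12-bit PCID
field (bit 11) and that build_cr3() simply ORs the hardware ASID
into the page-table base address; both are defined elsewhere in the
series, so treat this as a sketch, not the series' code:

	#include <stdint.h>
	#include <stdio.h>

	/* Assumed position; the real definition lives elsewhere in
	 * the series.  NOFLUSH (bit 63) is from the patch above. */
	#define X86_CR3_KAISER_SWITCH_BIT	11
	#define X86_CR3_PCID_NOFLUSH		(1ull << 63)

	/* kern_asid() biases the ASID by one, presumably keeping
	 * hardware PCID 0 out of the dynamic pool. */
	static uint16_t kern_asid(uint16_t asid) { return asid + 1; }

	static uint16_t user_asid(uint16_t asid)
	{
		return kern_asid(asid) | (1u << X86_CR3_KAISER_SWITCH_BIT);
	}

	/* Toy build_cr3(): a raw physical address stands in for the
	 * pgd_t * that the real helper takes. */
	static uint64_t build_cr3(uint64_t pgd_pa, uint16_t asid)
	{
		return pgd_pa | kern_asid(asid);
	}

	int main(void)
	{
		uint64_t pgd_pa = 0x100000;	/* made-up PGD address */

		printf("kernel ASID %#x, user ASID %#x\n",
		       kern_asid(0), user_asid(0));
		printf("CR3 (flushing): %#llx\n",
		       (unsigned long long)build_cr3(pgd_pa, 0));
		printf("CR3 (no flush): %#llx\n",
		       (unsigned long long)(build_cr3(pgd_pa, 0) |
					    X86_CR3_PCID_NOFLUSH));
		return 0;
	}

These are the values __native_flush_tlb_single() hands to
invpcid_flush_one(): the kernel and user hardware ASIDs for the same
dynamic ASID, differing only in the switch bit.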

* [PATCH 18/23] x86, kaiser: disable native VSYSCALL
  2017-11-23  0:34 ` Dave Hansen
@ 2017-11-23  0:35   ` Dave Hansen
  -1 siblings, 0 replies; 131+ messages in thread
From: Dave Hansen @ 2017-11-23  0:35 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, dave.hansen, moritz.lipp, daniel.gruss,
	michael.schwarz, richard.fellner, luto, torvalds, keescook,
	hughd, x86


From: Dave Hansen <dave.hansen@linux.intel.com>

The KAISER code attempts to "poison" the user portion of the kernel page
tables.  It detects entries that it wants to poison in two ways:
 * Looking for addresses >= PAGE_OFFSET
 * Looking for entries without _PAGE_USER set

But, to allow the _PAGE_USER check to work, it must never be set on
init_mm entries, and an earlier patch in this series ensured that it
will never be set.

The VDSO is at an address >= PAGE_OFFSET and it is also mapped by init_mm.
Because of the earlier, KAISER-enforced restriction, _PAGE_USER is never
set which makes the VDSO unreadable to userspace.

This makes the "NATIVE" case totally unusable since userspace can not
even see the memory any more.  Disable it whenever KAISER is enabled.

Also add some help text about how KAISER might affect the emulation
case as well.

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Moritz Lipp <moritz.lipp@iaik.tugraz.at>
Cc: Daniel Gruss <daniel.gruss@iaik.tugraz.at>
Cc: Michael Schwarz <michael.schwarz@iaik.tugraz.at>
Cc: Richard Fellner <richard.fellner@student.tugraz.at>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Kees Cook <keescook@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: x86@kernel.org

---

 b/arch/x86/Kconfig |    8 ++++++++
 1 file changed, 8 insertions(+)

diff -puN arch/x86/Kconfig~kaiser-no-vsyscall arch/x86/Kconfig
--- a/arch/x86/Kconfig~kaiser-no-vsyscall	2017-11-22 15:45:54.196619726 -0800
+++ b/arch/x86/Kconfig	2017-11-22 15:45:54.200619726 -0800
@@ -2249,6 +2249,9 @@ choice
 
 	config LEGACY_VSYSCALL_NATIVE
 		bool "Native"
+		# The VSYSCALL page comes from the kernel page tables
+		# and is not available when KAISER is enabled.
+		depends on ! KAISER
 		help
 		  Actual executable code is located in the fixed vsyscall
 		  address mapping, implementing time() efficiently. Since
@@ -2266,6 +2269,11 @@ choice
 		  exploits. This configuration is recommended when userspace
 		  still uses the vsyscall area.
 
+		  When KAISER is enabled, the vsyscall area will become
+		  unreadable.  This emulation option still works, but KAISER
+		  will make it harder to do things like trace code using the
+		  emulation.
+
 	config LEGACY_VSYSCALL_NONE
 		bool "None"
 		help
_

^ permalink raw reply	[flat|nested] 131+ messages in thread

* [PATCH 19/23] x86, kaiser: add debugfs file to turn KAISER on/off at runtime
  2017-11-23  0:34 ` Dave Hansen
@ 2017-11-23  0:35   ` Dave Hansen
  -1 siblings, 0 replies; 131+ messages in thread
From: Dave Hansen @ 2017-11-23  0:35 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, dave.hansen, moritz.lipp, daniel.gruss,
	michael.schwarz, richard.fellner, luto, torvalds, keescook,
	hughd, x86


From: Dave Hansen <dave.hansen@linux.intel.com>

This will be used in a few patches.  Right now, it's not wired up
to do anything useful.

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Moritz Lipp <moritz.lipp@iaik.tugraz.at>
Cc: Daniel Gruss <daniel.gruss@iaik.tugraz.at>
Cc: Michael Schwarz <michael.schwarz@iaik.tugraz.at>
Cc: Richard Fellner <richard.fellner@student.tugraz.at>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Kees Cook <keescook@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: x86@kernel.org
---

 b/arch/x86/mm/kaiser.c |   48 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 48 insertions(+)

diff -puN arch/x86/mm/kaiser.c~kaiser-dynamic-debugfs arch/x86/mm/kaiser.c
--- a/arch/x86/mm/kaiser.c~kaiser-dynamic-debugfs	2017-11-22 15:45:54.726619725 -0800
+++ b/arch/x86/mm/kaiser.c	2017-11-22 15:45:54.730619725 -0800
@@ -29,6 +29,7 @@
 #include <linux/string.h>
 #include <linux/types.h>
 #include <linux/bug.h>
+#include <linux/debugfs.h>
 #include <linux/init.h>
 #include <linux/spinlock.h>
 #include <linux/mm.h>
@@ -470,3 +471,50 @@ void kaiser_remove_mapping(unsigned long
 	 */
 	__native_flush_tlb_global();
 }
+
+int kaiser_enabled = 1;
+static ssize_t kaiser_enabled_read_file(struct file *file, char __user *user_buf,
+			     size_t count, loff_t *ppos)
+{
+	char buf[32];
+	unsigned int len;
+
+	len = sprintf(buf, "%d\n", kaiser_enabled);
+	return simple_read_from_buffer(user_buf, count, ppos, buf, len);
+}
+
+static ssize_t kaiser_enabled_write_file(struct file *file,
+		 const char __user *user_buf, size_t count, loff_t *ppos)
+{
+	char buf[32];
+	ssize_t len;
+	unsigned int enable;
+
+	len = min(count, sizeof(buf) - 1);
+	if (copy_from_user(buf, user_buf, len))
+		return -EFAULT;
+
+	buf[len] = '\0';
+	if (kstrtouint(buf, 0, &enable))
+		return -EINVAL;
+
+	if (enable > 1)
+		return -EINVAL;
+
+	WRITE_ONCE(kaiser_enabled, enable);
+	return count;
+}
+
+static const struct file_operations fops_kaiser_enabled = {
+	.read = kaiser_enabled_read_file,
+	.write = kaiser_enabled_write_file,
+	.llseek = default_llseek,
+};
+
+static int __init create_kaiser_enabled(void)
+{
+	debugfs_create_file("kaiser-enabled", S_IRUSR | S_IWUSR,
+			    arch_debugfs_dir, NULL, &fops_kaiser_enabled);
+	return 0;
+}
+late_initcall(create_kaiser_enabled);
_

^ permalink raw reply	[flat|nested] 131+ messages in thread
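
The write handler's parsing is a standard bounded-copy pattern.  A
stand-alone rendering for poking at the edge cases, with
copy_from_user() replaced by memcpy() and the kstrto*() parse by
strtol() (strtol is laxer about trailing junk, so this models the
logic rather than reproducing the kernel code):

	#include <errno.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	/* Bound the copy, NUL-terminate, parse, accept only 0 or 1. */
	static int parse_enable(const char *user_buf, size_t count)
	{
		char buf[32];
		size_t len = count < sizeof(buf) - 1 ? count
						     : sizeof(buf) - 1;
		long enable;
		char *end;

		memcpy(buf, user_buf, len);	/* copy_from_user() stand-in */
		buf[len] = '\0';

		errno = 0;
		enable = strtol(buf, &end, 0);
		if (errno || end == buf || enable < 0 || enable > 1)
			return -EINVAL;
		return (int)enable;
	}

	int main(void)
	{
		/* "1\n" and "0" are accepted; "2" is rejected: */
		printf("%d %d %d\n",
		       parse_enable("1\n", 2),
		       parse_enable("0", 1),
		       parse_enable("2", 1));
		return 0;
	}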

* [PATCH 20/23] x86, kaiser: add a function to check for KAISER being enabled
  2017-11-23  0:34 ` Dave Hansen
@ 2017-11-23  0:35   ` Dave Hansen
  -1 siblings, 0 replies; 131+ messages in thread
From: Dave Hansen @ 2017-11-23  0:35 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, dave.hansen, moritz.lipp, daniel.gruss,
	michael.schwarz, richard.fellner, luto, torvalds, keescook,
	hughd, x86


From: Dave Hansen <dave.hansen@linux.intel.com>

Currently, all of the checks for KAISER are compile-time checks.

A runtime check is needed to turn it on/off while the system is running.

Add a function to do that.

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Moritz Lipp <moritz.lipp@iaik.tugraz.at>
Cc: Daniel Gruss <daniel.gruss@iaik.tugraz.at>
Cc: Michael Schwarz <michael.schwarz@iaik.tugraz.at>
Cc: Richard Fellner <richard.fellner@student.tugraz.at>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Kees Cook <keescook@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: x86@kernel.org
---

 b/arch/x86/include/asm/kaiser.h |    5 +++++
 b/include/linux/kaiser.h        |    5 +++++
 2 files changed, 10 insertions(+)

diff -puN arch/x86/include/asm/kaiser.h~kaiser-dynamic-check-func arch/x86/include/asm/kaiser.h
--- a/arch/x86/include/asm/kaiser.h~kaiser-dynamic-check-func	2017-11-22 15:45:55.262619723 -0800
+++ b/arch/x86/include/asm/kaiser.h	2017-11-22 15:45:55.267619723 -0800
@@ -56,6 +56,11 @@ extern void kaiser_remove_mapping(unsign
  */
 extern void kaiser_init(void);
 
+static inline bool kaiser_active(void)
+{
+	extern int kaiser_enabled;
+	return kaiser_enabled;
+}
 #endif
 
 #endif /* __ASSEMBLY__ */
diff -puN include/linux/kaiser.h~kaiser-dynamic-check-func include/linux/kaiser.h
--- a/include/linux/kaiser.h~kaiser-dynamic-check-func	2017-11-22 15:45:55.264619723 -0800
+++ b/include/linux/kaiser.h	2017-11-22 15:45:55.268619723 -0800
@@ -28,5 +28,10 @@ static inline int kaiser_add_mapping(uns
 static inline void kaiser_add_mapping_cpu_entry(int cpu)
 {
 }
+
+static inline bool kaiser_active(void)
+{
+	return 0;
+}
 #endif /* !CONFIG_KAISER */
 #endif /* _INCLUDE_KAISER_H */
_

^ permalink raw reply	[flat|nested] 131+ messages in thread
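
The interesting idiom in kaiser_active() is the function-local
extern paired with a compile-time stub.  A minimal stand-alone
rendering of the same pattern, with made-up names and CONFIG_KAISER
reduced to a plain #define:

	#include <stdbool.h>
	#include <stdio.h>

	#define CONFIG_FEATURE 1	/* comment out to get the stub */

	#ifdef CONFIG_FEATURE
	/* Normally defined in exactly one .c file: */
	int feature_enabled = 1;

	static inline bool feature_active(void)
	{
		/* Declaring the extern inside the function keeps the
		 * symbol out of the namespace of everyone who merely
		 * includes the header. */
		extern int feature_enabled;
		return feature_enabled;
	}
	#else
	static inline bool feature_active(void)
	{
		return false;	/* constant-folds away at call sites */
	}
	#endif

	int main(void)
	{
		printf("active: %d\n", feature_active());
		return 0;
	}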

* [PATCH 21/23] x86, kaiser: un-poison PGDs at runtime
  2017-11-23  0:34 ` Dave Hansen
@ 2017-11-23  0:35   ` Dave Hansen
  -1 siblings, 0 replies; 131+ messages in thread
From: Dave Hansen @ 2017-11-23  0:35 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, dave.hansen, moritz.lipp, daniel.gruss,
	michael.schwarz, richard.fellner, luto, torvalds, keescook,
	hughd, x86


From: Dave Hansen <dave.hansen@linux.intel.com>

With KAISER, kernel PGDs that map userspace are "poisoned" with
the NX bit.  This ensures that if a kernel->user CR3 switch is
missed, userspace crashes instead of running in an unhardened
state.

This code will be needed in a moment when KAISER is turned
on and off at runtime.

Note that an __ASSEMBLY__ #ifdef is now required since kaiser.h
is indirectly included into assembly.

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Moritz Lipp <moritz.lipp@iaik.tugraz.at>
Cc: Daniel Gruss <daniel.gruss@iaik.tugraz.at>
Cc: Michael Schwarz <michael.schwarz@iaik.tugraz.at>
Cc: Richard Fellner <richard.fellner@student.tugraz.at>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Kees Cook <keescook@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: x86@kernel.org
---

 b/arch/x86/include/asm/pgtable_64.h |   16 ++++++++++++++-
 b/arch/x86/mm/kaiser.c              |   38 ++++++++++++++++++++++++++++++++++++
 b/include/linux/kaiser.h            |    3 +-
 3 files changed, 55 insertions(+), 2 deletions(-)

diff -puN arch/x86/include/asm/pgtable_64.h~kaiser-dynamic-unpoison-pgd arch/x86/include/asm/pgtable_64.h
--- a/arch/x86/include/asm/pgtable_64.h~kaiser-dynamic-unpoison-pgd	2017-11-22 15:45:55.818619722 -0800
+++ b/arch/x86/include/asm/pgtable_64.h	2017-11-22 15:45:55.824619722 -0800
@@ -3,6 +3,7 @@
 #define _ASM_X86_PGTABLE_64_H
 
 #include <linux/const.h>
+#include <linux/kaiser.h>
 #include <asm/pgtable_64_types.h>
 
 #ifndef __ASSEMBLY__
@@ -199,6 +200,18 @@ static inline bool pgd_userspace_access(
 	return pgd.pgd & _PAGE_USER;
 }
 
+static inline void kaiser_poison_pgd(pgd_t *pgd)
+{
+	if (pgd->pgd & _PAGE_PRESENT)
+		pgd->pgd |= _PAGE_NX;
+}
+
+static inline void kaiser_unpoison_pgd(pgd_t *pgd)
+{
+	if (pgd->pgd & _PAGE_PRESENT)
+		pgd->pgd &= ~_PAGE_NX;
+}
+
 /*
  * Take a PGD location (pgdp) and a pgd value that needs
  * to be set there.  Populates the shadow and returns
@@ -222,7 +235,8 @@ static inline pgd_t kaiser_set_shadow_pg
 			 * wrong CR3 value, userspace will crash
 			 * instead of running.
 			 */
-			pgd.pgd |= _PAGE_NX;
+			if (kaiser_active())
+				kaiser_poison_pgd(&pgd);
 		}
 	} else if (pgd_userspace_access(*pgdp)) {
 		/*
diff -puN arch/x86/mm/kaiser.c~kaiser-dynamic-unpoison-pgd arch/x86/mm/kaiser.c
--- a/arch/x86/mm/kaiser.c~kaiser-dynamic-unpoison-pgd	2017-11-22 15:45:55.819619722 -0800
+++ b/arch/x86/mm/kaiser.c	2017-11-22 15:45:55.825619722 -0800
@@ -501,6 +501,9 @@ static ssize_t kaiser_enabled_write_file
 	if (enable > 1)
 		return -EINVAL;
 
+	if (kaiser_enabled == enable)
+		return count;
+
 	WRITE_ONCE(kaiser_enabled, enable);
 	return count;
 }
@@ -518,3 +521,38 @@ static int __init create_kaiser_enabled(
 	return 0;
 }
 late_initcall(create_kaiser_enabled);
+
+enum poison {
+	KAISER_POISON,
+	KAISER_UNPOISON
+};
+void kaiser_poison_pgd_page(pgd_t *pgd_page, enum poison do_poison)
+{
+	int i = 0;
+
+	for (i = 0; i < PTRS_PER_PGD; i++) {
+		pgd_t *pgd = &pgd_page[i];
+
+		/* Stop once we hit kernel addresses: */
+		if (!pgdp_maps_userspace(pgd))
+			break;
+
+		if (do_poison == KAISER_POISON)
+			kaiser_poison_pgd(pgd);
+		else
+			kaiser_unpoison_pgd(pgd);
+	}
+
+}
+
+void kaiser_poison_pgds(enum poison do_poison)
+{
+	struct page *page;
+
+	spin_lock(&pgd_lock);
+	list_for_each_entry(page, &pgd_list, lru) {
+		pgd_t *pgd = (pgd_t *)page_address(page);
+		kaiser_poison_pgd_page(pgd, do_poison);
+	}
+	spin_unlock(&pgd_lock);
+}
diff -puN include/linux/kaiser.h~kaiser-dynamic-unpoison-pgd include/linux/kaiser.h
--- a/include/linux/kaiser.h~kaiser-dynamic-unpoison-pgd	2017-11-22 15:45:55.821619722 -0800
+++ b/include/linux/kaiser.h	2017-11-22 15:45:55.826619722 -0800
@@ -4,7 +4,7 @@
 #ifdef CONFIG_KAISER
 #include <asm/kaiser.h>
 #else
-
+#ifndef __ASSEMBLY__
 /*
  * These stubs are used whenever CONFIG_KAISER is off, which
  * includes architectures that support KAISER, but have it
@@ -33,5 +33,6 @@ static inline bool kaiser_active(void)
 {
 	return 0;
 }
+#endif /* __ASSEMBLY__ */
 #endif /* !CONFIG_KAISER */
 #endif /* _INCLUDE_KAISER_H */
_

^ permalink raw reply	[flat|nested] 131+ messages in thread
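
The poison/unpoison walk is easy to model outside the kernel.  A toy
version using the x86-64 bit positions (_PAGE_PRESENT is bit 0,
_PAGE_NX is bit 63), with pgdp_maps_userspace() approximated by an
index check on the lower half of the 512-entry PGD:

	#include <stdint.h>
	#include <stdio.h>

	#define PTRS_PER_PGD	512
	#define _PAGE_PRESENT	(1ull << 0)
	#define _PAGE_NX	(1ull << 63)

	/* The lower half of an x86-64 PGD covers userspace. */
	static int index_maps_userspace(int i)
	{
		return i < PTRS_PER_PGD / 2;
	}

	static void poison_pgd(uint64_t *pgd)
	{
		if (*pgd & _PAGE_PRESENT)
			*pgd |= _PAGE_NX;
	}

	static void unpoison_pgd(uint64_t *pgd)
	{
		if (*pgd & _PAGE_PRESENT)
			*pgd &= ~_PAGE_NX;
	}

	int main(void)
	{
		uint64_t pgd[PTRS_PER_PGD] = { 0 };
		int i;

		pgd[0] = 0x1000 | _PAGE_PRESENT; /* one userspace entry */

		/* Same shape as kaiser_poison_pgd_page(): stop at the
		 * first entry that no longer maps userspace. */
		for (i = 0; i < PTRS_PER_PGD; i++) {
			if (!index_maps_userspace(i))
				break;
			poison_pgd(&pgd[i]);
		}
		printf("poisoned:   %#llx\n", (unsigned long long)pgd[0]);

		for (i = 0; i < PTRS_PER_PGD && index_maps_userspace(i); i++)
			unpoison_pgd(&pgd[i]);
		printf("unpoisoned: %#llx\n", (unsigned long long)pgd[0]);
		return 0;
	}

The NX poison leaves kernel accesses alone; it makes userspace
instruction fetches under the wrong CR3 fault, which is exactly the
crash-instead-of-running behavior the changelog describes.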

* [PATCH 22/23] x86, kaiser: allow KAISER to be enabled/disabled at runtime
  2017-11-23  0:34 ` Dave Hansen
@ 2017-11-23  0:35   ` Dave Hansen
  -1 siblings, 0 replies; 131+ messages in thread
From: Dave Hansen @ 2017-11-23  0:35 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, dave.hansen, moritz.lipp, daniel.gruss,
	michael.schwarz, richard.fellner, luto, torvalds, keescook,
	hughd, x86


From: Dave Hansen <dave.hansen@linux.intel.com>

The KAISER CR3 switches are expensive for many reasons.  Not all systems
benefit from the protection provided by KAISER.  Some of them can not
pay the high performance cost.

This patch adds a debugfs file.  To disable KAISER, you do:

	echo 0 > /sys/kernel/debug/x86/kaiser-enabled

and to re-enable it, you do:

	echo 1 > /sys/kernel/debug/x86/kaiser-enabled

This is a *minimal* implementation.  There are certainly plenty of
optimizations that can be done on top of this by using ALTERNATIVES
among other things.

This does, however, completely remove all the KAISER-based CR3 writes.
This permits a paravirtualized system that can not tolerate CR3
writes to theoretically survive with CONFIG_KAISER=y, albeit with
/sys/kernel/debug/x86/kaiser-enabled=0.

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Moritz Lipp <moritz.lipp@iaik.tugraz.at>
Cc: Daniel Gruss <daniel.gruss@iaik.tugraz.at>
Cc: Michael Schwarz <michael.schwarz@iaik.tugraz.at>
Cc: Richard Fellner <richard.fellner@student.tugraz.at>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Kees Cook <keescook@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: x86@kernel.org
---

 b/arch/x86/entry/calling.h |   12 +++++++
 b/arch/x86/mm/kaiser.c     |   70 ++++++++++++++++++++++++++++++++++++++++++---
 2 files changed, 78 insertions(+), 4 deletions(-)

diff -puN arch/x86/entry/calling.h~kaiser-dynamic-asm arch/x86/entry/calling.h
--- a/arch/x86/entry/calling.h~kaiser-dynamic-asm	2017-11-22 15:45:56.402619721 -0800
+++ b/arch/x86/entry/calling.h	2017-11-22 15:45:56.407619721 -0800
@@ -209,19 +209,29 @@ For 32-bit we have the following convent
 	orq     $(KAISER_SWITCH_MASK), \reg
 .endm
 
+.macro JUMP_IF_KAISER_OFF	label
+	testq   $1, kaiser_asm_do_switch
+	jz      \label
+.endm
+
 .macro SWITCH_TO_KERNEL_CR3 scratch_reg:req
+	JUMP_IF_KAISER_OFF	.Lswitch_done_\@
 	mov	%cr3, \scratch_reg
 	ADJUST_KERNEL_CR3 \scratch_reg
 	mov	\scratch_reg, %cr3
+.Lswitch_done_\@:
 .endm
 
 .macro SWITCH_TO_USER_CR3 scratch_reg:req
+	JUMP_IF_KAISER_OFF	.Lswitch_done_\@
 	mov	%cr3, \scratch_reg
 	ADJUST_USER_CR3 \scratch_reg
 	mov	\scratch_reg, %cr3
+.Lswitch_done_\@:
 .endm
 
 .macro SAVE_AND_SWITCH_TO_KERNEL_CR3 scratch_reg:req save_reg:req
+	JUMP_IF_KAISER_OFF	.Ldone_\@
 	movq	%cr3, %r\scratch_reg
 	movq	%r\scratch_reg, \save_reg
 	/*
@@ -244,11 +254,13 @@ For 32-bit we have the following convent
 .endm
 
 .macro RESTORE_CR3 save_reg:req
+	JUMP_IF_KAISER_OFF	.Ldone_\@
 	/*
 	 * The CR3 write could be avoided when not changing its value,
 	 * but would require a CR3 read *and* a scratch register.
 	 */
 	movq	\save_reg, %cr3
+.Ldone_\@:
 .endm
 
 #else /* CONFIG_KAISER=n: */
diff -puN arch/x86/mm/kaiser.c~kaiser-dynamic-asm arch/x86/mm/kaiser.c
--- a/arch/x86/mm/kaiser.c~kaiser-dynamic-asm	2017-11-22 15:45:56.404619721 -0800
+++ b/arch/x86/mm/kaiser.c	2017-11-22 15:45:56.408619721 -0800
@@ -43,6 +43,9 @@
 
 #define KAISER_WALK_ATOMIC  0x1
 
+__aligned(PAGE_SIZE)
+unsigned long kaiser_asm_do_switch[PAGE_SIZE/sizeof(unsigned long)] = { 1 };
+
 /*
  * At runtime, the only things we map are some things for CPU
  * hotplug, and stacks for new processes.  No two CPUs will ever
@@ -395,6 +398,9 @@ void __init kaiser_init(void)
 
 	kaiser_init_all_pgds();
 
+	kaiser_add_user_map_early(&kaiser_asm_do_switch, PAGE_SIZE,
+				  __PAGE_KERNEL | _PAGE_GLOBAL);
+
 	for_each_possible_cpu(cpu) {
 		void *percpu_vaddr = __per_cpu_user_mapped_start +
 				     per_cpu_offset(cpu);
@@ -483,6 +489,56 @@ static ssize_t kaiser_enabled_read_file(
 	return simple_read_from_buffer(user_buf, count, ppos, buf, len);
 }
 
+enum poison {
+	KAISER_POISON,
+	KAISER_UNPOISON
+};
+void kaiser_poison_pgds(enum poison do_poison);
+
+void kaiser_do_disable(void)
+{
+	/* Make sure the kernel PGDs are usable by userspace: */
+	kaiser_poison_pgds(KAISER_UNPOISON);
+
+	/*
+	 * Make sure all the CPUs have the poison clear in their TLBs.
+	 * This also functions as a barrier to ensure that everyone
+	 * sees the unpoisoned PGDs.
+	 */
+	flush_tlb_all();
+
+	/* Tell the assembly code to stop switching CR3. */
+	kaiser_asm_do_switch[0] = 0;
+
+	/*
+	 * Make sure everybody does an interrupt.  This means that
+	 * they have gone through a SWITCH_TO_KERNEL_CR3 and are no
+	 * longer running on the userspace CR3.  If we did not do
+	 * this, we might have CPUs running on the shadow page tables
+	 * that then enter the kernel and think they do *not* need to
+	 * switch.
+	 */
+	flush_tlb_all();
+}
+
+void kaiser_do_enable(void)
+{
+	/* Tell the assembly code to start switching CR3: */
+	kaiser_asm_do_switch[0] = 1;
+
+	/* Make sure everyone can see the kaiser_asm_do_switch update: */
+	synchronize_rcu();
+
+	/*
+	 * Now that userspace is no longer using the kernel copy of
+	 * the page tables, we can poison it:
+	 */
+	kaiser_poison_pgds(KAISER_POISON);
+
+	/* Make sure all the CPUs see the poison: */
+	flush_tlb_all();
+}
+
 static ssize_t kaiser_enabled_write_file(struct file *file,
 		 const char __user *user_buf, size_t count, loff_t *ppos)
 {
@@ -504,7 +560,17 @@ static ssize_t kaiser_enabled_write_file
 	if (kaiser_enabled == enable)
 		return count;
 
+	/*
+	 * This tells the page table code whether to poison PGDs.
+	 */
 	WRITE_ONCE(kaiser_enabled, enable);
+	synchronize_rcu();
+
+	if (enable)
+		kaiser_do_enable();
+	else
+		kaiser_do_disable();
+
 	return count;
 }
 
@@ -522,10 +588,6 @@ static int __init create_kaiser_enabled(
 }
 late_initcall(create_kaiser_enabled);
 
-enum poison {
-	KAISER_POISON,
-	KAISER_UNPOISON
-};
 void kaiser_poison_pgd_page(pgd_t *pgd_page, enum poison do_poison)
 {
 	int i = 0;
_

^ permalink raw reply	[flat|nested] 131+ messages in thread
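
The echo commands in the changelog are the entire interface.  For
scripting the toggle, a small helper; nothing here is
KAISER-specific beyond the debugfs path quoted above (needs root and
a mounted debugfs):

	#include <fcntl.h>
	#include <stdio.h>
	#include <string.h>
	#include <unistd.h>

	static const char *kaiser_file =
		"/sys/kernel/debug/x86/kaiser-enabled";

	static int kaiser_set(int enable)
	{
		char c = enable ? '1' : '0';
		int fd = open(kaiser_file, O_WRONLY);

		if (fd < 0) {
			perror(kaiser_file);
			return 1;
		}
		if (write(fd, &c, 1) != 1) {
			perror("write");
			close(fd);
			return 1;
		}
		close(fd);
		return 0;
	}

	int main(int argc, char **argv)
	{
		/* "prog 1" enables, anything else disables. */
		return kaiser_set(argc > 1 && !strcmp(argv[1], "1"));
	}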

* [PATCH 23/23] x86, kaiser: add Kconfig
  2017-11-23  0:34 ` Dave Hansen
@ 2017-11-23  0:35   ` Dave Hansen
  -1 siblings, 0 replies; 131+ messages in thread
From: Dave Hansen @ 2017-11-23  0:35 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, dave.hansen, moritz.lipp, daniel.gruss,
	michael.schwarz, richard.fellner, luto, torvalds, keescook,
	hughd, x86


From: Dave Hansen <dave.hansen@linux.intel.com>

PARAVIRT generally requires that the kernel not manage its own page
tables.  It also means that the hypervisor and kernel must agree
wholeheartedly about what format the page tables are in and what
they contain.  KAISER, unfortunately, changes the rules and they
can not be used together.

I've seen conflicting feedback from maintainers lately about whether
they want the Kconfig magic to go first or last in a patch series.
It's going last here because the partially-applied series leads to
kernels that can not boot in a bunch of cases.  I did a run through
the entire series with CONFIG_KAISER=y to look for build errors,
though.

Note from Hugh Dickins on why it depends on SMP:

	It is absurd that KAISER should depend on SMP, but
	apparently nobody has tried a UP build before: which
	breaks on implicit declaration of function
	'per_cpu_offset' in arch/x86/mm/kaiser.c.

	Now, you would expect that to be trivially fixed up; but
	looking at the System.map when that block is #ifdef'ed
	out of kaiser_init(), I see that in a UP build
	__per_cpu_user_mapped_end is precisely at
	__per_cpu_user_mapped_start, and the items carefully
	gathered into that section for user-mapping on SMP,
	dispersed elsewhere on UP.

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Moritz Lipp <moritz.lipp@iaik.tugraz.at>
Cc: Daniel Gruss <daniel.gruss@iaik.tugraz.at>
Cc: Michael Schwarz <michael.schwarz@iaik.tugraz.at>
Cc: Richard Fellner <richard.fellner@student.tugraz.at>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Kees Cook <keescook@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: x86@kernel.org
---

 b/security/Kconfig |   10 ++++++++++
 1 file changed, 10 insertions(+)

diff -puN security/Kconfig~kaiser-kconfig security/Kconfig
--- a/security/Kconfig~kaiser-kconfig	2017-11-22 15:46:24.395619651 -0800
+++ b/security/Kconfig	2017-11-22 15:46:24.398619651 -0800
@@ -54,6 +54,16 @@ config SECURITY_NETWORK
 	  implement socket and networking access controls.
 	  If you are unsure how to answer this question, answer N.
 
+config KAISER
+	bool "Remove the kernel mapping in user mode"
+	depends on X86_64 && SMP && !PARAVIRT
+	help
+	  This feature reduces the number of hardware side channels by
+	  ensuring that the majority of kernel addresses are not mapped
+	  into userspace.
+
+	  See Documentation/x86/kaiser.txt for more details.
+
 config SECURITY_INFINIBAND
 	bool "Infiniband Security Hooks"
 	depends on SECURITY && INFINIBAND
_

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 11/23] x86, kaiser: map entry stack variables
  2017-11-23  0:34   ` Dave Hansen
@ 2017-11-23  3:31     ` Andy Lutomirski
  -1 siblings, 0 replies; 131+ messages in thread
From: Andy Lutomirski @ 2017-11-23  3:31 UTC (permalink / raw)
  To: Dave Hansen
  Cc: linux-kernel, linux-mm, moritz.lipp, Daniel Gruss,
	michael.schwarz, richard.fellner, Andrew Lutomirski,
	Linus Torvalds, Kees Cook, Hugh Dickins, X86 ML

On Wed, Nov 22, 2017 at 4:34 PM, Dave Hansen
<dave.hansen@linux.intel.com> wrote:
>
> From: Dave Hansen <dave.hansen@linux.intel.com>
>
> There are times where the kernel is entered but there is not a
> safe stack, like at SYSCALL entry.  To obtain a safe stack, the
> per-cpu variables 'rsp_scratch' and 'cpu_current_top_of_stack'
> are used to save the old %rsp value and to find where the kernel
> stack should start.
>
> You can not directly manipulate the CR3 register.  You can only
> 'MOV' to it from another register, which means a register must be
> clobbered in order to do any CR3 manipulation.  User-mapping
> these variables allows us to obtain a safe stack and use it for
> temporary storage *before* CR3 is switched.
>
> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
> Cc: Moritz Lipp <moritz.lipp@iaik.tugraz.at>
> Cc: Daniel Gruss <daniel.gruss@iaik.tugraz.at>
> Cc: Michael Schwarz <michael.schwarz@iaik.tugraz.at>
> Cc: Richard Fellner <richard.fellner@student.tugraz.at>
> Cc: Andy Lutomirski <luto@kernel.org>
> Cc: Linus Torvalds <torvalds@linux-foundation.org>
> Cc: Kees Cook <keescook@google.com>
> Cc: Hugh Dickins <hughd@google.com>
> Cc: x86@kernel.org
> ---
>
>  b/arch/x86/kernel/cpu/common.c |    2 +-
>  b/arch/x86/kernel/process_64.c |    2 +-
>  2 files changed, 2 insertions(+), 2 deletions(-)
>
> diff -puN arch/x86/kernel/cpu/common.c~kaiser-user-map-stack-helper-vars arch/x86/kernel/cpu/common.c
> --- a/arch/x86/kernel/cpu/common.c~kaiser-user-map-stack-helper-vars    2017-11-22 15:45:50.128619736 -0800
> +++ b/arch/x86/kernel/cpu/common.c      2017-11-22 15:45:50.134619736 -0800
> @@ -1524,7 +1524,7 @@ EXPORT_PER_CPU_SYMBOL(__preempt_count);
>   * the top of the kernel stack.  Use an extra percpu variable to track the
>   * top of the kernel stack directly.
>   */
> -DEFINE_PER_CPU(unsigned long, cpu_current_top_of_stack) =
> +DEFINE_PER_CPU_USER_MAPPED(unsigned long, cpu_current_top_of_stack) =
>         (unsigned long)&init_thread_union + THREAD_SIZE;

This is in an x86_32-only section and should be dropped, I think.

> diff -puN arch/x86/kernel/process_64.c~kaiser-user-map-stack-helper-vars arch/x86/kernel/process_64.c
> --- a/arch/x86/kernel/process_64.c~kaiser-user-map-stack-helper-vars    2017-11-22 15:45:50.130619736 -0800
> +++ b/arch/x86/kernel/process_64.c      2017-11-22 15:45:50.134619736 -0800
> @@ -59,7 +59,7 @@
>  #include <asm/unistd_32_ia32.h>
>  #endif
>
> -__visible DEFINE_PER_CPU(unsigned long, rsp_scratch);
> +__visible DEFINE_PER_CPU_USER_MAPPED(unsigned long, rsp_scratch);
>

This shouldn't be needed any more either.

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 05/23] x86, kaiser: unmap kernel from userspace page tables (core patch)
  2017-11-23  0:34   ` Dave Hansen
@ 2017-11-23  4:07     ` Andy Lutomirski
  -1 siblings, 0 replies; 131+ messages in thread
From: Andy Lutomirski @ 2017-11-23  4:07 UTC (permalink / raw)
  To: Dave Hansen
  Cc: linux-kernel, linux-mm, richard.fellner, moritz.lipp,
	Daniel Gruss, michael.schwarz, Andrew Lutomirski, Linus Torvalds,
	Kees Cook, Hugh Dickins, X86 ML

On Wed, Nov 22, 2017 at 4:34 PM, Dave Hansen
<dave.hansen@linux.intel.com> wrote:
>
> These actions occur when dealing with a user address *and* the
> PGD has _PAGE_USER set.  That way, in-kernel users of low addresses
> typically used by userspace are not accidentally poisoned.

This seems sane.

> +/*
> + * Take a PGD location (pgdp) and a pgd value that needs
> + * to be set there.  Populates the shadow and returns
> + * the resulting PGD that must be set in the kernel copy
> + * of the page tables.
> + */
> +static inline pgd_t kaiser_set_shadow_pgd(pgd_t *pgdp, pgd_t pgd)
> +{
> +#ifdef CONFIG_KAISER
> +       if (pgd_userspace_access(pgd)) {
> +               if (pgdp_maps_userspace(pgdp)) {
> +                       /*
> +                        * The user/shadow page tables get the full
> +                        * PGD, accessible from userspace:
> +                        */
> +                       kernel_to_shadow_pgdp(pgdp)->pgd = pgd.pgd;
> +                       /*
> +                        * For the copy of the pgd that the kernel
> +                        * uses, make it unusable to userspace.  This
> +                        * ensures if we get out to userspace with the
> +                        * wrong CR3 value, userspace will crash
> +                        * instead of running.
> +                        */
> +                       pgd.pgd |= _PAGE_NX;
> +               }
> +       } else if (pgd_userspace_access(*pgdp)) {
> +               /*
> +                * We are clearing a _PAGE_USER PGD for which we
> +                * presumably populated the shadow.  We must now
> +                * clear the shadow PGD entry.
> +                */
> +               if (pgdp_maps_userspace(pgdp)) {
> +                       kernel_to_shadow_pgdp(pgdp)->pgd = pgd.pgd;
> +               } else {
> +                       /*
> +                        * Attempted to clear a _PAGE_USER PGD which
> +                        * is in the kernel portion of the address
> +                        * space.  PGDs are pre-populated and we
> +                        * never clear them.
> +                        */
> +                       WARN_ON_ONCE(1);
> +               }
> +       } else {
> +               /*
> +                * _PAGE_USER was not set in either the PGD being set
> +                * or cleared.  All kernel PGDs should be
> +                * pre-populated so this should never happen after
> +                * boot.
> +                */
> +       }
> +#endif
> +       /* return the copy of the PGD we want the kernel to use: */
> +       return pgd;
> +}
> +

The more I read this code, the more I dislike "shadow".  Shadow
pagetables mean something specific in the virtualization world and,
more importantly, the word "shadow" fails to convey *which* table it
is.  Unless I'm extra confused, mm->pgd points to the kernelmode
tables.  So can we replace the word "shadow" with "usermode"?  That
will also make the entry stuff way clearer.  (Or I have it backwards,
in which case "kernelmode" would be the right choice.)  And rename the
argument.

That confusion aside, I'm trying to wrap my head around this.  I think
the description above makes sense, but I'm struggling to grok the code
and how it matches the description.  May I suggest an alternative
implementation?  (Apologies for epic whitespace damage.)

/*
 * Install an entry into the usermode pgd.  pgdp points to the kernelmode
 * entry whose usermode counterpart we're supposed to set.  pgd is the
 * desired entry.  Returns pgd, possibly modified if the actual entry installed
 * into the kernelmode needs different mode bits.
 */
static inline pgd_t kaiser_set_usermode_pgd(pgd_t *pgdp, pgd_t pgd) {
  VM_BUG_ON(pgdp points to a usermode table);

  if (pgdp_maps_userspace(pgdp)) {
    /* Install the pgd as requested into the usermode tables. */
    kernelmode_to_usermode_pgdp(pgdp)->pgd = pgd.pgd;

    if (pgd_val(pgd) & _PAGE_USER) {
      /*
       * This is a normal user pgd -- the kernelmode mapping should have NX
       * set to prevent erroneous usermode execution with the kernel tables.
       */
      return __pgd(pgd_val(pgd) | _PAGE_NX);
    } else {
      /* This is a weird mapping, e.g. EFI.  Map it straight through. */
      return pgd;
    }
  } else {
    /*
     * We can get here due to vmalloc, a vmalloc fault, memory hot-add,
     * or initial setup of kernelmode page tables.  Regardless of which
     * particular code path we're in, these mappings should not be
     * automatically propagated to the usermode tables.
     */
    return pgd;
  }
}

As a side benefit, this shouldn't have magical interactions with the
vsyscall page any more.

Are there cases that this would get wrong?

--Andy

^ permalink raw reply	[flat|nested] 131+ messages in thread
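
Both the quoted kaiser_set_shadow_pgd() and the rewrite above lean on
pgdp_maps_userspace().  A minimal sketch of that helper, assuming the
KAISER layout in which each 4k PGD page has its low half covering user
addresses (illustrative, not quoted from this series):

/*
 * True if this PGD slot covers userspace addresses: the user half of
 * the x86_64 address space occupies the low half of the PGD page.
 * Illustrative sketch only.
 */
static inline bool pgdp_maps_userspace(void *__ptr)
{
	unsigned long ptr = (unsigned long)__ptr;

	return (ptr & ~PAGE_MASK) < (PAGE_SIZE / 2);
}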

* Re: [PATCH 00/23] [v4] KAISER: unmap most of the kernel from userspace page tables
  2017-11-23  0:34 ` Dave Hansen
@ 2017-11-23  7:23   ` Ingo Molnar
  -1 siblings, 0 replies; 131+ messages in thread
From: Ingo Molnar @ 2017-11-23  7:23 UTC (permalink / raw)
  To: Dave Hansen
  Cc: linux-kernel, linux-mm, moritz.lipp, daniel.gruss,
	michael.schwarz, richard.fellner, luto, torvalds, keescook,
	hughd, x86, jgross


* Dave Hansen <dave.hansen@linux.intel.com> wrote:

> Thanks, everyone for all the reviews thus far.  I hope I managed to
> address all the feedback given so far, except for the TODOs of
> course.  This is a pretty minor update compared to v1->v2.
> 
> These patches are all on this tip branch:
> 
> 	https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git/log/?h=WIP.x86/mm

Note that on top of latest -tip the bzImage build fails with:

 arch/x86/boot/compressed/pagetable.o: In function `kernel_ident_mapping_init':
 pagetable.c:(.text+0x31b): undefined reference to `kaiser_enabled'
 arch/x86/boot/compressed/Makefile:109: recipe for target 'arch/x86/boot/compressed/vmlinux' failed

that's I think because the early boot code shares some code via 
kernel_ident_mapping_init() et al, and that code grew a new KAISER runtime 
variable which isn't present in the special early-boot environment.

I.e. something like the (totally untested) patch below should do the trick.

Thanks,

	Ingo

---
 arch/x86/boot/compressed/pagetable.c |    6 ++++++
 1 file changed, 6 insertions(+)

Index: tip/arch/x86/boot/compressed/pagetable.c
===================================================================
--- tip.orig/arch/x86/boot/compressed/pagetable.c
+++ tip/arch/x86/boot/compressed/pagetable.c
@@ -36,6 +36,12 @@
 /* Used by pgtable.h asm code to force instruction serialization. */
 unsigned long __force_order;
 
+/*
+ * We share kernel_ident_mapping_init(), but the early boot version
+ * does not need the KAISER logic:
+ */
+int kaiser_enabled = 0;
+
 /* Used to track our page table allocation area. */
 struct alloc_pgt_data {
 	unsigned char *pgt_buf;

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 00/23] [v4] KAISER: unmap most of the kernel from userspace page tables
  2017-11-23  0:34 ` Dave Hansen
@ 2017-11-23  7:27   ` Ingo Molnar
  -1 siblings, 0 replies; 131+ messages in thread
From: Ingo Molnar @ 2017-11-23  7:27 UTC (permalink / raw)
  To: Dave Hansen
  Cc: linux-kernel, linux-mm, moritz.lipp, daniel.gruss,
	michael.schwarz, richard.fellner, luto, torvalds, keescook,
	hughd, x86, jgross


32-bit x86 defconfig still doesn't build:

 arch/x86/events/intel/ds.c: In function ‘dsalloc’:
 arch/x86/events/intel/ds.c:296:6: error: implicit declaration of function ‘kaiser_add_mapping’; did you mean ‘kgid_has_mapping’? [-Werror=implicit-function-declaration]

Also, could you please use proper subsystem tags, instead of:

  Subject: x86, kaiser: Disable global pages by default with KAISER

Please do something like:

  Subject: x86/mm/kaiser: Disable global pages by default with KAISER

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 00/23] [v4] KAISER: unmap most of the kernel from userspace page tables
  2017-11-23  7:27   ` Ingo Molnar
@ 2017-11-23  7:32     ` Ingo Molnar
  -1 siblings, 0 replies; 131+ messages in thread
From: Ingo Molnar @ 2017-11-23  7:32 UTC (permalink / raw)
  To: Dave Hansen
  Cc: linux-kernel, linux-mm, moritz.lipp, daniel.gruss,
	michael.schwarz, richard.fellner, luto, torvalds, keescook,
	hughd, x86, jgross


* Ingo Molnar <mingo@kernel.org> wrote:

> 
> 32-bit x86 defconfig still doesn't build:
> 
>  arch/x86/events/intel/ds.c: In function ‘dsalloc’:
>  arch/x86/events/intel/ds.c:296:6: error: implicit declaration of function ‘kaiser_add_mapping’; did you mean ‘kgid_has_mapping’? [-Werror=implicit-function-declaration]

The patch below should cure this one - only build tested.

Thanks,

	Ingo

 arch/x86/events/intel/ds.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index c9f44d7ce838..61388b01962d 100644
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -3,7 +3,7 @@
 #include <linux/types.h>
 #include <linux/slab.h>
 
-#include <asm/kaiser.h>
+#include <linux/kaiser.h>
 #include <asm/perf_event.h>
 #include <asm/insn.h>
 

^ permalink raw reply related	[flat|nested] 131+ messages in thread

* Re: [PATCH 00/23] [v4] KAISER: unmap most of the kernel from userspace page tables
  2017-11-23  7:32     ` Ingo Molnar
@ 2017-11-23 15:02       ` Dave Hansen
  -1 siblings, 0 replies; 131+ messages in thread
From: Dave Hansen @ 2017-11-23 15:02 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, linux-mm, moritz.lipp, daniel.gruss,
	michael.schwarz, richard.fellner, luto, torvalds, keescook,
	hughd, x86, jgross

On 11/22/2017 11:32 PM, Ingo Molnar wrote:
> diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
> index c9f44d7ce838..61388b01962d 100644
> --- a/arch/x86/events/intel/ds.c
> +++ b/arch/x86/events/intel/ds.c
> @@ -3,7 +3,7 @@
>  #include <linux/types.h>
>  #include <linux/slab.h>
>  
> -#include <asm/kaiser.h>
> +#include <linux/kaiser.h>
>  #include <asm/perf_event.h>
>  #include <asm/insn.h>

Yes, that looks like the correct fix on both counts.

Please let me know if you would like an updated series to fix these,
either in email or a git tree.

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 11/23] x86, kaiser: map entry stack variables
  2017-11-23  3:31     ` Andy Lutomirski
@ 2017-11-23 15:37       ` Dave Hansen
  -1 siblings, 0 replies; 131+ messages in thread
From: Dave Hansen @ 2017-11-23 15:37 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: linux-kernel, linux-mm, moritz.lipp, Daniel Gruss,
	michael.schwarz, richard.fellner, Linus Torvalds, Kees Cook,
	Hugh Dickins, X86 ML

On 11/22/2017 07:31 PM, Andy Lutomirski wrote:
> On Wed, Nov 22, 2017 at 4:34 PM, Dave Hansen
> <dave.hansen@linux.intel.com> wrote:
>>
>> From: Dave Hansen <dave.hansen@linux.intel.com>
>>
>> There are times where the kernel is entered but there is not a
>> safe stack, like at SYSCALL entry.  To obtain a safe stack, the
>> per-cpu variables 'rsp_scratch' and 'cpu_current_top_of_stack'
>> are used to save the old %rsp value and to find where the kernel
>> stack should start.
>>
>> You can not directly manipulate the CR3 register.  You can only
>> 'MOV' to it from another register, which means a register must be
>> clobbered in order to do any CR3 manipulation.  User-mapping
>> these variables allows us to obtain a safe stack and use it for
>> temporary storage *before* CR3 is switched.
>>
>> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
>> Cc: Moritz Lipp <moritz.lipp@iaik.tugraz.at>
>> Cc: Daniel Gruss <daniel.gruss@iaik.tugraz.at>
>> Cc: Michael Schwarz <michael.schwarz@iaik.tugraz.at>
>> Cc: Richard Fellner <richard.fellner@student.tugraz.at>
>> Cc: Andy Lutomirski <luto@kernel.org>
>> Cc: Linus Torvalds <torvalds@linux-foundation.org>
>> Cc: Kees Cook <keescook@google.com>
>> Cc: Hugh Dickins <hughd@google.com>
>> Cc: x86@kernel.org
>> ---
>>
>>  b/arch/x86/kernel/cpu/common.c |    2 +-
>>  b/arch/x86/kernel/process_64.c |    2 +-
>>  2 files changed, 2 insertions(+), 2 deletions(-)
>>
>> diff -puN arch/x86/kernel/cpu/common.c~kaiser-user-map-stack-helper-vars arch/x86/kernel/cpu/common.c
>> --- a/arch/x86/kernel/cpu/common.c~kaiser-user-map-stack-helper-vars    2017-11-22 15:45:50.128619736 -0800
>> +++ b/arch/x86/kernel/cpu/common.c      2017-11-22 15:45:50.134619736 -0800
>> @@ -1524,7 +1524,7 @@ EXPORT_PER_CPU_SYMBOL(__preempt_count);
>>   * the top of the kernel stack.  Use an extra percpu variable to track the
>>   * top of the kernel stack directly.
>>   */
>> -DEFINE_PER_CPU(unsigned long, cpu_current_top_of_stack) =
>> +DEFINE_PER_CPU_USER_MAPPED(unsigned long, cpu_current_top_of_stack) =
>>         (unsigned long)&init_thread_union + THREAD_SIZE;
> 
> This is in an x86_32-only section and should be dropped, I think.

It's used in entry_SYSCALL_64 (see below).  But I do think it's safe to
drop now.  We switch before we use it.

>> diff -puN arch/x86/kernel/process_64.c~kaiser-user-map-stack-helper-vars arch/x86/kernel/process_64.c
>> --- a/arch/x86/kernel/process_64.c~kaiser-user-map-stack-helper-vars    2017-11-22 15:45:50.130619736 -0800
>> +++ b/arch/x86/kernel/process_64.c      2017-11-22 15:45:50.134619736 -0800
>> @@ -59,7 +59,7 @@
>>  #include <asm/unistd_32_ia32.h>
>>  #endif
>>
>> -__visible DEFINE_PER_CPU(unsigned long, rsp_scratch);
>> +__visible DEFINE_PER_CPU_USER_MAPPED(unsigned long, rsp_scratch);
>>
> This shouldn't be needed any more either.

What about this hunk?  It touches rsp_scratch before switching:

@@ -207,9 +210,16 @@ ENTRY(entry_SYSCALL_64)

        swapgs
        movq    %rsp, PER_CPU_VAR(rsp_scratch)
-       movq    PER_CPU_VAR(cpu_current_top_of_stack), %rsp

-       TRACE_IRQS_OFF
+       /*
+        * The kernel CR3 is needed to map the process stack, but we
+        * need a scratch register to be able to load CR3.  %rsp is
+        * clobberable right now, so use it as a scratch register.
+        * %rsp will look crazy here for a couple of instructions.
+        */
+       SWITCH_TO_KERNEL_CR3 scratch_reg=%rsp
+
+       movq    PER_CPU_VAR(cpu_current_top_of_stack), %rsp

^ permalink raw reply	[flat|nested] 131+ messages in thread
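
For context, SWITCH_TO_KERNEL_CR3 is an assembly macro; here is a
C-level sketch of the operation it performs, assuming the user and
kernel page tables sit a fixed distance apart so a single address bit
in CR3 selects between them (KAISER_SWITCH_MASK is an illustrative
name, not quoted from this series):

/*
 * C rendering of the macro's effect: read CR3, clear the bit that
 * selects the user/shadow copy of the page tables, write it back.
 */
static inline void switch_to_kernel_cr3(void)
{
	unsigned long cr3 = __read_cr3();

	cr3 &= ~KAISER_SWITCH_MASK;	/* point at the kernel page tables */
	native_write_cr3(cr3);		/* also flushes non-global TLB entries */
}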

* Re: [PATCH 11/23] x86, kaiser: map entry stack variables
  2017-11-23 15:37       ` Dave Hansen
@ 2017-11-23 15:55         ` Andy Lutomirski
  -1 siblings, 0 replies; 131+ messages in thread
From: Andy Lutomirski @ 2017-11-23 15:55 UTC (permalink / raw)
  To: Dave Hansen
  Cc: Andy Lutomirski, linux-kernel, linux-mm, moritz.lipp,
	Daniel Gruss, michael.schwarz, richard.fellner, Linus Torvalds,
	Kees Cook, Hugh Dickins, X86 ML

On Thu, Nov 23, 2017 at 7:37 AM, Dave Hansen
<dave.hansen@linux.intel.com> wrote:
> On 11/22/2017 07:31 PM, Andy Lutomirski wrote:
>> On Wed, Nov 22, 2017 at 4:34 PM, Dave Hansen
>> <dave.hansen@linux.intel.com> wrote:
>>>
>>> From: Dave Hansen <dave.hansen@linux.intel.com>
>>>
>>> There are times where the kernel is entered but there is not a
>>> safe stack, like at SYSCALL entry.  To obtain a safe stack, the
>>> per-cpu variables 'rsp_scratch' and 'cpu_current_top_of_stack'
>>> are used to save the old %rsp value and to find where the kernel
>>> stack should start.
>>>
>>> You can not directly manipulate the CR3 register.  You can only
>>> 'MOV' to it from another register, which means a register must be
>>> clobbered in order to do any CR3 manipulation.  User-mapping
>>> these variables allows us to obtain a safe stack and use it for
>>> temporary storage *before* CR3 is switched.
>>>
>>> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
>>> Cc: Moritz Lipp <moritz.lipp@iaik.tugraz.at>
>>> Cc: Daniel Gruss <daniel.gruss@iaik.tugraz.at>
>>> Cc: Michael Schwarz <michael.schwarz@iaik.tugraz.at>
>>> Cc: Richard Fellner <richard.fellner@student.tugraz.at>
>>> Cc: Andy Lutomirski <luto@kernel.org>
>>> Cc: Linus Torvalds <torvalds@linux-foundation.org>
>>> Cc: Kees Cook <keescook@google.com>
>>> Cc: Hugh Dickins <hughd@google.com>
>>> Cc: x86@kernel.org
>>> ---
>>>
>>>  b/arch/x86/kernel/cpu/common.c |    2 +-
>>>  b/arch/x86/kernel/process_64.c |    2 +-
>>>  2 files changed, 2 insertions(+), 2 deletions(-)
>>>
>>> diff -puN arch/x86/kernel/cpu/common.c~kaiser-user-map-stack-helper-vars arch/x86/kernel/cpu/common.c
>>> --- a/arch/x86/kernel/cpu/common.c~kaiser-user-map-stack-helper-vars    2017-11-22 15:45:50.128619736 -0800
>>> +++ b/arch/x86/kernel/cpu/common.c      2017-11-22 15:45:50.134619736 -0800
>>> @@ -1524,7 +1524,7 @@ EXPORT_PER_CPU_SYMBOL(__preempt_count);
>>>   * the top of the kernel stack.  Use an extra percpu variable to track the
>>>   * top of the kernel stack directly.
>>>   */
>>> -DEFINE_PER_CPU(unsigned long, cpu_current_top_of_stack) =
>>> +DEFINE_PER_CPU_USER_MAPPED(unsigned long, cpu_current_top_of_stack) =
>>>         (unsigned long)&init_thread_union + THREAD_SIZE;
>>
>> This is in an x86_32-only section and should be dropped, I think.
>
> It's used in entry_SYSCALL_64 (see below).  But I do think it's safe to
> drop now.  We switch before we use it.
>
>>> diff -puN arch/x86/kernel/process_64.c~kaiser-user-map-stack-helper-vars arch/x86/kernel/process_64.c
>>> --- a/arch/x86/kernel/process_64.c~kaiser-user-map-stack-helper-vars    2017-11-22 15:45:50.130619736 -0800
>>> +++ b/arch/x86/kernel/process_64.c      2017-11-22 15:45:50.134619736 -0800
>>> @@ -59,7 +59,7 @@
>>>  #include <asm/unistd_32_ia32.h>
>>>  #endif
>>>
>>> -__visible DEFINE_PER_CPU(unsigned long, rsp_scratch);
>>> +__visible DEFINE_PER_CPU_USER_MAPPED(unsigned long, rsp_scratch);
>>>
>> This shouldn't be needed any more either.
>
> What about this hunk?  It touches rsp_scratch before switching:
>
> @@ -207,9 +210,16 @@ ENTRY(entry_SYSCALL_64)
>
>         swapgs
>         movq    %rsp, PER_CPU_VAR(rsp_scratch)
> -       movq    PER_CPU_VAR(cpu_current_top_of_stack), %rsp
>
> -       TRACE_IRQS_OFF
> +       /*
> +        * The kernel CR3 is needed to map the process stack, but we
> +        * need a scratch register to be able to load CR3.  %rsp is
> +        * clobberable right now, so use it as a scratch register.
> +        * %rsp will look crazy here for a couple of instructions.
> +        */
> +       SWITCH_TO_KERNEL_CR3 scratch_reg=%rsp
> +
> +       movq    PER_CPU_VAR(cpu_current_top_of_stack), %rsp
>
>

I'm surprised that boots, since that hunk won't execute at all.  I
think you should move that code into the trampoline.  (Check my latest
tree -- I think it's a bit off in Ingo's tree.)  I've effectively
split SYSCALL64 into two separate paths: entry_SYSCALL_64 (with stack
switching off) and entry_SYSCALL_64_trampoline (with stack switching
on).  The entire point of the trampoline was to get a way to access
some data that varies per cpu without needing access to traditional
%gs-based percpu data.

--Andy

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 00/23] [v4] KAISER: unmap most of the kernel from userspace page tables
  2017-11-23  0:34 ` Dave Hansen
@ 2017-11-23 16:20   ` Dave Hansen
  -1 siblings, 0 replies; 131+ messages in thread
From: Dave Hansen @ 2017-11-23 16:20 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, moritz.lipp, daniel.gruss, michael.schwarz,
	richard.fellner, luto, torvalds, keescook, hughd, x86, jgross

I've updated these a bit since yesterday with some minor fixes:
 * Fixed KASLR compile bug
 * Fixed ds.c compile problem
 * Changed ulong to pteval_t to fix 32-bit compile problem
 * Stop mapping cpu_current_top_of_stack (never used until after CR3 switch)

Rather than re-spamming everyone, the resulting branch is here:

https://git.kernel.org/pub/scm/linux/kernel/git/daveh/x86-kaiser.git/log/?h=kaiser-414-tipwip-20171123

If anyone wants to be re-spammed, just say the word.

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 09/23] x86, kaiser: map dynamically-allocated LDTs
  2017-11-23  0:34   ` Dave Hansen
@ 2017-11-23 19:42     ` Eric Biggers
  -1 siblings, 0 replies; 131+ messages in thread
From: Eric Biggers @ 2017-11-23 19:42 UTC (permalink / raw)
  To: Dave Hansen
  Cc: linux-kernel, linux-mm, moritz.lipp, daniel.gruss,
	michael.schwarz, richard.fellner, luto, torvalds, keescook,
	hughd, x86

> diff -puN arch/x86/kernel/ldt.c~kaiser-user-map-new-ldts arch/x86/kernel/ldt.c
> --- a/arch/x86/kernel/ldt.c~kaiser-user-map-new-ldts	2017-11-22 15:45:49.059619739 -0800
> +++ b/arch/x86/kernel/ldt.c	2017-11-22 15:45:49.062619739 -0800
> @@ -11,6 +11,7 @@
[...]
> +	ret = kaiser_add_mapping((unsigned long)new_ldt->entries, alloc_size,
> +				 __PAGE_KERNEL | _PAGE_GLOBAL);
> +	if (ret) {
> +		__free_ldt_struct(new_ldt);
> +		return NULL;
> +	}
>  	new_ldt->nr_entries = num_entries;
>  	return new_ldt;

__free_ldt_struct() uses new_ldt->nr_entries, so new_ldt->nr_entries needs to be
set earlier.

Eric

^ permalink raw reply	[flat|nested] 131+ messages in thread
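
A minimal sketch of the reordering Eric is pointing at, reusing the
identifiers from the quoted patch (the surrounding allocation code is
elided):

	/* Set the size first so the error path frees the right amount: */
	new_ldt->nr_entries = num_entries;

	ret = kaiser_add_mapping((unsigned long)new_ldt->entries, alloc_size,
				 __PAGE_KERNEL | _PAGE_GLOBAL);
	if (ret) {
		__free_ldt_struct(new_ldt);	/* now sees a valid nr_entries */
		return NULL;
	}
	return new_ldt;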

* Re: [PATCH 09/23] x86, kaiser: map dynamically-allocated LDTs
  2017-11-23 19:42     ` Eric Biggers
@ 2017-11-23 20:12       ` Andy Lutomirski
  -1 siblings, 0 replies; 131+ messages in thread
From: Andy Lutomirski @ 2017-11-23 20:12 UTC (permalink / raw)
  To: Eric Biggers
  Cc: Dave Hansen, linux-kernel, linux-mm, moritz.lipp, Daniel Gruss,
	michael.schwarz, richard.fellner, Andrew Lutomirski,
	Linus Torvalds, Kees Cook, Hugh Dickins, X86 ML

On Thu, Nov 23, 2017 at 11:42 AM, Eric Biggers <ebiggers3@gmail.com> wrote:
>> diff -puN arch/x86/kernel/ldt.c~kaiser-user-map-new-ldts arch/x86/kernel/ldt.c
>> --- a/arch/x86/kernel/ldt.c~kaiser-user-map-new-ldts  2017-11-22 15:45:49.059619739 -0800
>> +++ b/arch/x86/kernel/ldt.c   2017-11-22 15:45:49.062619739 -0800
>> @@ -11,6 +11,7 @@
> [...]
>> +     ret = kaiser_add_mapping((unsigned long)new_ldt->entries, alloc_size,
>> +                              __PAGE_KERNEL | _PAGE_GLOBAL);
>> +     if (ret) {
>> +             __free_ldt_struct(new_ldt);
>> +             return NULL;
>> +     }
>>       new_ldt->nr_entries = num_entries;
>>       return new_ldt;
>
> __free_ldt_struct() uses new_ldt->nr_entries, so new_ldt->nr_entries needs to be
> set earlier.
>

I would suggest just dropping this patch and forcing MODIFY_LDT off
when kaiser is on.  I'll fix it later.

^ permalink raw reply	[flat|nested] 131+ messages in thread
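
A hypothetical sketch of what "forcing MODIFY_LDT off" could look like
as a runtime guard at the top of sys_modify_ldt() ('kaiser_enabled' is
the runtime knob mentioned elsewhere in this thread; a Kconfig
dependency would be the compile-time alternative):

	/* Illustrative: refuse to build an LDT while KAISER is active. */
	if (IS_ENABLED(CONFIG_KAISER) && kaiser_enabled)
		return -ENOSYS;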

* Re: [PATCH 00/23] [v4] KAISER: unmap most of the kernel from userspace page tables
  2017-11-23 16:20   ` Dave Hansen
@ 2017-11-24  6:35     ` Ingo Molnar
  -1 siblings, 0 replies; 131+ messages in thread
From: Ingo Molnar @ 2017-11-24  6:35 UTC (permalink / raw)
  To: Dave Hansen
  Cc: linux-kernel, linux-mm, moritz.lipp, daniel.gruss,
	michael.schwarz, richard.fellner, luto, torvalds, keescook,
	hughd, x86, jgross


* Dave Hansen <dave.hansen@linux.intel.com> wrote:

> I've updated these a bit since yesterday with some minor fixes:
>  * Fixed KASLR compile bug
>  * Fixed ds.c compile problem
>  * Changed ulong to pteval_t to fix 32-bit compile problem
>  * Stop mapping cpu_current_top_of_stack (never used until after CR3 switch)
> 
> Rather than re-spamming everyone, the resulting branch is here:
> 
> https://git.kernel.org/pub/scm/linux/kernel/git/daveh/x86-kaiser.git/log/?h=kaiser-414-tipwip-20171123
> 
> If anyone wants to be re-spammed, just say the word.

So the pteval_t changes break the build on most non-x86 architectures (alpha, arm, 
arm64, etc.), because most of them don't have an asm/pgtable_types.h file.

pteval_t is an x86-ism.

So I left out the changes below.

Thanks,

	Ingo

diff --git a/arch/x86/include/asm/kaiser.h b/arch/x86/include/asm/kaiser.h
index 35f12a8a7071..2198855f7de9 100644
--- a/arch/x86/include/asm/kaiser.h
+++ b/arch/x86/include/asm/kaiser.h
@@ -18,6 +18,8 @@
 #ifndef __ASSEMBLY__
 
 #ifdef CONFIG_KAISER
+#include <asm/pgtable_types.h>
+
 /**
  *  kaiser_add_mapping - map a kernel range into the user page tables
  *  @addr: the start address of the range
@@ -31,7 +33,7 @@
  *  table.
  */
 extern int kaiser_add_mapping(unsigned long addr, unsigned long size,
-			      unsigned long flags);
+			      pteval_t flags);
 
 /**
  *  kaiser_add_mapping_cpu_entry - map the cpu entry area
diff --git a/arch/x86/mm/kaiser.c b/arch/x86/mm/kaiser.c
index 1eb27b410556..58cae2924724 100644
--- a/arch/x86/mm/kaiser.c
+++ b/arch/x86/mm/kaiser.c
@@ -431,7 +431,7 @@ void __init kaiser_init(void)
 }
 
 int kaiser_add_mapping(unsigned long addr, unsigned long size,
-		       unsigned long flags)
+		       pteval_t flags)
 {
 	return kaiser_add_user_map((const void *)addr, size, flags);
 }
diff --git a/include/linux/kaiser.h b/include/linux/kaiser.h
index 83d465599646..f662013515a1 100644
--- a/include/linux/kaiser.h
+++ b/include/linux/kaiser.h
@@ -4,7 +4,11 @@
 #ifdef CONFIG_KAISER
 #include <asm/kaiser.h>
 #else
+
 #ifndef __ASSEMBLY__
+
+#include <asm/pgtable_types.h>
+
 /*
  * These stubs are used whenever CONFIG_KAISER is off, which
  * includes architectures that support KAISER, but have it
@@ -20,7 +24,7 @@ static inline void kaiser_remove_mapping(unsigned long start, unsigned long size
 }
 
 static inline int kaiser_add_mapping(unsigned long addr, unsigned long size,
-				     unsigned long flags)
+				     pteval_t flags)
 {
 	return 0;
 }

^ permalink raw reply related	[flat|nested] 131+ messages in thread

* Re: [PATCH 00/23] [v4] KAISER: unmap most of the kernel from userspace page tables
  2017-11-24  6:35     ` Ingo Molnar
@ 2017-11-24  6:41       ` Dave Hansen
  -1 siblings, 0 replies; 131+ messages in thread
From: Dave Hansen @ 2017-11-24  6:41 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, linux-mm, moritz.lipp, daniel.gruss,
	michael.schwarz, richard.fellner, luto, torvalds, keescook,
	hughd, x86, jgross

On 11/23/2017 10:35 PM, Ingo Molnar wrote:
> So the pteval_t changes break the build on most non-x86 architectures (alpha, arm, 
> arm64, etc.), because most of them don't have an asm/pgtable_types.h file.
> 
> pteval_t is an x86-ism.
> 
> So I left out the changes below.

There was a warning on the non-PAE 32-bit builds saying that there was a
shift larger than the type.  I assumed this was because of a reference
to _PAGE_NX, and thus we needed a change to pteval_t.

But, now that I think about it more, that doesn't make sense since
_PAGE_NX should be #defined down to a 0 on those configs unless
something is wrong.

^ permalink raw reply	[flat|nested] 131+ messages in thread
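
The relevant definitions, roughly as they appear in
arch/x86/include/asm/pgtable_types.h, show why _PAGE_NX folds to 0 on
non-PAE 32-bit and why a stray 64-bit-only flag would trip the
shift-larger-than-type warning:

#if defined(CONFIG_X86_64) || defined(CONFIG_X86_PAE)
/* Bit 63: only representable when pteval_t is 64 bits wide. */
#define _PAGE_NX	(_AT(pteval_t, 1) << _PAGE_BIT_NX)
#else
/* Non-PAE 32-bit: pteval_t is 32 bits, so there is no NX bit at all. */
#define _PAGE_NX	(_AT(pteval_t, 0))
#endif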

* Re: [PATCH 00/23] [v4] KAISER: unmap most of the kernel from userspace page tables
  2017-11-24  6:41       ` Dave Hansen
@ 2017-11-24  7:33         ` Ingo Molnar
  -1 siblings, 0 replies; 131+ messages in thread
From: Ingo Molnar @ 2017-11-24  7:33 UTC (permalink / raw)
  To: Dave Hansen
  Cc: linux-kernel, linux-mm, moritz.lipp, daniel.gruss,
	michael.schwarz, richard.fellner, luto, torvalds, keescook,
	hughd, x86, jgross


* Dave Hansen <dave.hansen@linux.intel.com> wrote:

> On 11/23/2017 10:35 PM, Ingo Molnar wrote:
> > So the pteval_t changes break the build on most non-x86 architectures (alpha, arm, 
> > arm64, etc.), because most of them don't have an asm/pgtable_types.h file.
> > 
> > pteval_t is an x86-ism.
> > 
> > So I left out the changes below.
> 
> There was a warning on the non-PAE 32-bit builds saying that there was a
> shift larger than the type.  I assumed this was because of a reference
> to _PAGE_NX, and thus we needed a change to pteval_t.
> 
> But, now that I think about it more, that doesn't make sense since
> _PAGE_NX should be #defined down to a 0 on those configs unless
> something is wrong.

If pte flags need to be passed around then the canonical way to do it is to pass 
around a pte_t, and use pte_val() on it and such.
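
As a quick sketch of that canonical style (hypothetical signature, purely
illustrative, assuming a kaiser_add_user_map() that still takes raw flags):

	/* pass protections as a pte_t; unwrap them only where consumed: */
	static inline int kaiser_add_mapping(unsigned long addr,
					     unsigned long size, pte_t prot)
	{
		return kaiser_add_user_map((const void *)addr, size,
					   pte_val(prot));
	}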

But please investigate the warning.

One other detail: I see you fixed some of the commit titles to use standard x86 
tags - could you please also capitalize sentences? I.e.:

  - x86/mm/kaiser: allow flushing for future ASID switches
  + x86/mm/kaiser: Allow flushing for future ASID switches

Could you please also double-check whether the merges I did in the latest 
WIP.x86/mm branch are OK? Andy changed the entry stack code a bit under Kaiser, 
which created about 3 new conflicts.

The key resolutions that I did were:

        .macro interrupt func
        cld

        testb   $3, CS-ORIG_RAX(%rsp)
        jz      1f
        SWAPGS
        SWITCH_TO_KERNEL_CR3 scratch_reg=%rax
        call    switch_to_thread_stack
1:

Plus I also dropped the extra switch_to_thread_stack call done in:

  x86/mm/kaiser: Prepare assembly for entry/exit CR3 switching

Because Andy's latest preparatory patch does it now:

  x86/entry/64: Use a percpu trampoline stack for IDT entries

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 21/23] x86, kaiser: un-poison PGDs at runtime
  2017-11-23  0:35   ` Dave Hansen
@ 2017-11-25  1:17     ` Eduardo Valentin
  -1 siblings, 0 replies; 131+ messages in thread
From: Eduardo Valentin @ 2017-11-25  1:17 UTC (permalink / raw)
  To: Dave Hansen
  Cc: linux-kernel, linux-mm, moritz.lipp, daniel.gruss,
	michael.schwarz, richard.fellner, luto, torvalds, keescook,
	hughd, x86, aliguori

On Wed, Nov 22, 2017 at 04:35:21PM -0800, Dave Hansen wrote:
> 
> From: Dave Hansen <dave.hansen@linux.intel.com>
> 
> With KAISER, kernel PGDs that map userspace are "poisoned" with
> the NX bit.  This ensures that if a kernel->user CR3 switch is
> missed, userspace crashes instead of running in an unhardened
> state.
> 
> This code will be needed in a moment when KAISER is turned
> on and off at runtime.
> 
> Note that an __ASSEMBLY__ #ifdef is now required since kaiser.h
> is indirectly included into assembly.
> 
> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
> Cc: Moritz Lipp <moritz.lipp@iaik.tugraz.at>
> Cc: Daniel Gruss <daniel.gruss@iaik.tugraz.at>
> Cc: Michael Schwarz <michael.schwarz@iaik.tugraz.at>
> Cc: Richard Fellner <richard.fellner@student.tugraz.at>
> Cc: Andy Lutomirski <luto@kernel.org>
> Cc: Linus Torvalds <torvalds@linux-foundation.org>
> Cc: Kees Cook <keescook@google.com>
> Cc: Hugh Dickins <hughd@google.com>
> Cc: x86@kernel.org
> ---
> 
>  b/arch/x86/include/asm/pgtable_64.h |   16 ++++++++++++++-
>  b/arch/x86/mm/kaiser.c              |   38 ++++++++++++++++++++++++++++++++++++
>  b/include/linux/kaiser.h            |    3 +-
>  3 files changed, 55 insertions(+), 2 deletions(-)
> 
> diff -puN arch/x86/include/asm/pgtable_64.h~kaiser-dynamic-unpoison-pgd arch/x86/include/asm/pgtable_64.h
> --- a/arch/x86/include/asm/pgtable_64.h~kaiser-dynamic-unpoison-pgd	2017-11-22 15:45:55.818619722 -0800
> +++ b/arch/x86/include/asm/pgtable_64.h	2017-11-22 15:45:55.824619722 -0800
> @@ -3,6 +3,7 @@
>  #define _ASM_X86_PGTABLE_64_H
>  
>  #include <linux/const.h>
> +#include <linux/kaiser.h>
>  #include <asm/pgtable_64_types.h>
>  
>  #ifndef __ASSEMBLY__
> @@ -199,6 +200,18 @@ static inline bool pgd_userspace_access(
>  	return pgd.pgd & _PAGE_USER;
>  }
>  
> +static inline void kaiser_poison_pgd(pgd_t *pgd)
> +{
> +	if (pgd->pgd & _PAGE_PRESENT)
> +		pgd->pgd |= _PAGE_NX;
> +}
> +
> +static inline void kaiser_unpoison_pgd(pgd_t *pgd)
> +{
> +	if (pgd->pgd & _PAGE_PRESENT)
> +		pgd->pgd &= ~_PAGE_NX;
> +}
> +
>  /*
>   * Take a PGD location (pgdp) and a pgd value that needs
>   * to be set there.  Populates the shadow and returns
> @@ -222,7 +235,8 @@ static inline pgd_t kaiser_set_shadow_pg
>  			 * wrong CR3 value, userspace will crash
>  			 * instead of running.
>  			 */
> -			pgd.pgd |= _PAGE_NX;
> +			if (kaiser_active())
> +				kaiser_poison_pgd(&pgd);
>  		}
>  	} else if (pgd_userspace_access(*pgdp)) {
>  		/*
> diff -puN arch/x86/mm/kaiser.c~kaiser-dynamic-unpoison-pgd arch/x86/mm/kaiser.c
> --- a/arch/x86/mm/kaiser.c~kaiser-dynamic-unpoison-pgd	2017-11-22 15:45:55.819619722 -0800
> +++ b/arch/x86/mm/kaiser.c	2017-11-22 15:45:55.825619722 -0800
> @@ -501,6 +501,9 @@ static ssize_t kaiser_enabled_write_file
>  	if (enable > 1)
>  		return -EINVAL;
>  
> +	if (kaiser_enabled == enable)
> +		return count;
> +
>  	WRITE_ONCE(kaiser_enabled, enable);
>  	return count;
>  }

Shouldn't the above hunk be part of the patch that adds the debugfs entry?

> @@ -518,3 +521,38 @@ static int __init create_kaiser_enabled(
>  	return 0;
>  }
>  late_initcall(create_kaiser_enabled);
> +
> +enum poison {
> +	KAISER_POISON,
> +	KAISER_UNPOISON
> +};
> +void kaiser_poison_pgd_page(pgd_t *pgd_page, enum poison do_poison)
> +{
> +	int i = 0;
> +
> +	for (i = 0; i < PTRS_PER_PGD; i++) {
> +		pgd_t *pgd = &pgd_page[i];
> +
> +		/* Stop once we hit kernel addresses: */
> +		if (!pgdp_maps_userspace(pgd))
> +			break;
> +
> +		if (do_poison == KAISER_POISON)
> +			kaiser_poison_pgd(pgd);
> +		else
> +			kaiser_unpoison_pgd(pgd);
> +	}
> +
> +}
> +
> +void kaiser_poison_pgds(enum poison do_poison)
> +{
> +	struct page *page;
> +
> +	spin_lock(&pgd_lock);
> +	list_for_each_entry(page, &pgd_list, lru) {
> +		pgd_t *pgd = (pgd_t *)page_address(page);
> +		kaiser_poison_pgd_page(pgd, do_poison);
> +	}
> +	spin_unlock(&pgd_lock);
> +}
> diff -puN include/linux/kaiser.h~kaiser-dynamic-unpoison-pgd include/linux/kaiser.h
> --- a/include/linux/kaiser.h~kaiser-dynamic-unpoison-pgd	2017-11-22 15:45:55.821619722 -0800
> +++ b/include/linux/kaiser.h	2017-11-22 15:45:55.826619722 -0800
> @@ -4,7 +4,7 @@
>  #ifdef CONFIG_KAISER
>  #include <asm/kaiser.h>
>  #else
> -
> +#ifndef __ASSEMBLY__
>  /*
>   * These stubs are used whenever CONFIG_KAISER is off, which
>   * includes architectures that support KAISER, but have it
> @@ -33,5 +33,6 @@ static inline bool kaiser_active(void)
>  {
>  	return 0;
>  }
> +#endif /* __ASSEMBLY__ */
>  #endif /* !CONFIG_KAISER */
>  #endif /* _INCLUDE_KAISER_H */
> _
> 

-- 
All the best,
Eduardo Valentin

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 20/23] x86, kaiser: add a function to check for KAISER being enabled
  2017-11-23  0:35   ` Dave Hansen
@ 2017-11-25  1:23     ` Eduardo Valentin
  -1 siblings, 0 replies; 131+ messages in thread
From: Eduardo Valentin @ 2017-11-25  1:23 UTC (permalink / raw)
  To: Dave Hansen
  Cc: linux-kernel, linux-mm, moritz.lipp, daniel.gruss,
	michael.schwarz, richard.fellner, luto, torvalds, keescook,
	hughd, x86, aliguori

On Wed, Nov 22, 2017 at 04:35:18PM -0800, Dave Hansen wrote:
> 
> From: Dave Hansen <dave.hansen@linux.intel.com>
> 
> Currently, all of the checks for KAISER are compile-time checks.
> 
> Runtime checks are needed for turning it on/off at runtime.
> 
> Add a function to do that.
> 
> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
> Cc: Moritz Lipp <moritz.lipp@iaik.tugraz.at>
> Cc: Daniel Gruss <daniel.gruss@iaik.tugraz.at>
> Cc: Michael Schwarz <michael.schwarz@iaik.tugraz.at>
> Cc: Richard Fellner <richard.fellner@student.tugraz.at>
> Cc: Andy Lutomirski <luto@kernel.org>
> Cc: Linus Torvalds <torvalds@linux-foundation.org>
> Cc: Kees Cook <keescook@google.com>
> Cc: Hugh Dickins <hughd@google.com>
> Cc: x86@kernel.org
> ---
> 
>  b/arch/x86/include/asm/kaiser.h |    5 +++++
>  b/include/linux/kaiser.h        |    5 +++++
>  2 files changed, 10 insertions(+)
> 
> diff -puN arch/x86/include/asm/kaiser.h~kaiser-dynamic-check-func arch/x86/include/asm/kaiser.h
> --- a/arch/x86/include/asm/kaiser.h~kaiser-dynamic-check-func	2017-11-22 15:45:55.262619723 -0800
> +++ b/arch/x86/include/asm/kaiser.h	2017-11-22 15:45:55.267619723 -0800
> @@ -56,6 +56,11 @@ extern void kaiser_remove_mapping(unsign
>   */
>  extern void kaiser_init(void);
>  
> +static inline bool kaiser_active(void)
> +{
> +	extern int kaiser_enabled;

Should this really be extern?

I am getting a link error when building the bzImage with this series:
arch/x86/boot/compressed/pagetable.o: In function `kernel_ident_mapping_init':
pagetable.c:(.text+0x336): undefined reference to `kaiser_enabled'
arch/x86/boot/compressed/Makefile:109: recipe for target 'arch/x86/boot/compressed/vmlinux' failed
make[2]: *** [arch/x86/boot/compressed/vmlinux] Error 1
arch/x86/boot/Makefile:112: recipe for target 'arch/x86/boot/compressed/vmlinux' failed
make[1]: *** [arch/x86/boot/compressed/vmlinux] Error 2
arch/x86/Makefile:296: recipe for target 'bzImage' failed
make: *** [bzImage] Error 2

What I did was remove the extern and the EXPORT_SYMBOL(kaiser_enabled), and initialize kaiser_enabled to 0; after that I got a proper bzImage.

> +	return kaiser_enabled;
> +}
>  #endif
>  
>  #endif /* __ASSEMBLY__ */
> diff -puN include/linux/kaiser.h~kaiser-dynamic-check-func include/linux/kaiser.h
> --- a/include/linux/kaiser.h~kaiser-dynamic-check-func	2017-11-22 15:45:55.264619723 -0800
> +++ b/include/linux/kaiser.h	2017-11-22 15:45:55.268619723 -0800
> @@ -28,5 +28,10 @@ static inline int kaiser_add_mapping(uns
>  static inline void kaiser_add_mapping_cpu_entry(int cpu)
>  {
>  }
> +
> +static inline bool kaiser_active(void)
> +{
> +	return 0;
> +}
>  #endif /* !CONFIG_KAISER */
>  #endif /* _INCLUDE_KAISER_H */
> _
> 

-- 
All the best,
Eduardo Valentin

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 05/23] x86, kaiser: unmap kernel from userspace page tables (core patch)
  2017-11-23  4:07     ` Andy Lutomirski
@ 2017-11-26 16:10       ` Andy Lutomirski
  -1 siblings, 0 replies; 131+ messages in thread
From: Andy Lutomirski @ 2017-11-26 16:10 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: Dave Hansen, linux-kernel, linux-mm, richard.fellner,
	moritz.lipp, Daniel Gruss, michael.schwarz, Linus Torvalds,
	Kees Cook, Hugh Dickins, X86 ML

On Wed, Nov 22, 2017 at 8:07 PM, Andy Lutomirski <luto@kernel.org> wrote:
> On Wed, Nov 22, 2017 at 4:34 PM, Dave Hansen
> <dave.hansen@linux.intel.com> wrote:
>>
>> These actions when dealing with a user address *and* the
>> PGD has _PAGE_USER set.  That way, in-kernel users of low addresses
>> typically used by userspace are not accidentally poisoned.
>
> This seems sane.
>
>> +/*
>> + * Take a PGD location (pgdp) and a pgd value that needs
>> + * to be set there.  Populates the shadow and returns
>> + * the resulting PGD that must be set in the kernel copy
>> + * of the page tables.
>> + */
>> +static inline pgd_t kaiser_set_shadow_pgd(pgd_t *pgdp, pgd_t pgd)
>> +{
>> +#ifdef CONFIG_KAISER
>> +       if (pgd_userspace_access(pgd)) {
>> +               if (pgdp_maps_userspace(pgdp)) {
>> +                       /*
>> +                        * The user/shadow page tables get the full
>> +                        * PGD, accessible from userspace:
>> +                        */
>> +                       kernel_to_shadow_pgdp(pgdp)->pgd = pgd.pgd;
>> +                       /*
>> +                        * For the copy of the pgd that the kernel
>> +                        * uses, make it unusable to userspace.  This
>> +                        * ensures if we get out to userspace with the
>> +                        * wrong CR3 value, userspace will crash
>> +                        * instead of running.
>> +                        */
>> +                       pgd.pgd |= _PAGE_NX;
>> +               }
>> +       } else if (pgd_userspace_access(*pgdp)) {
>> +               /*
>> +                * We are clearing a _PAGE_USER PGD for which we
>> +                * presumably populated the shadow.  We must now
>> +                * clear the shadow PGD entry.
>> +                */
>> +               if (pgdp_maps_userspace(pgdp)) {
>> +                       kernel_to_shadow_pgdp(pgdp)->pgd = pgd.pgd;
>> +               } else {
>> +                       /*
>> +                        * Attempted to clear a _PAGE_USER PGD which
>> +                        * is in the kernel portion of the address
>> +                        * space.  PGDs are pre-populated and we
>> +                        * never clear them.
>> +                        */
>> +                       WARN_ON_ONCE(1);
>> +               }
>> +       } else {
>> +               /*
>> +                * _PAGE_USER was not set in either the PGD being set
>> +                * or cleared.  All kernel PGDs should be
>> +                * pre-populated so this should never happen after
>> +                * boot.
>> +                */
>> +       }
>> +#endif
>> +       /* return the copy of the PGD we want the kernel to use: */
>> +       return pgd;
>> +}
>> +
>
> The more I read this code, the more I dislike "shadow".  Shadow
> pagetables mean something specific in the virtualization world and,
> more importantly, the word "shadow" fails to convey *which* table it
> is.  Unless I'm extra confused, mm->pgd points to the kernelmode
> tables.  So can we replace the word "shadow" with "usermode"?  That
> will also make the entry stuff way clearer.  (Or I have it backwards,
> in which case "kernelmode" would be the right choice.)  And rename the
> argument.
>
> That confusion aside, I'm trying to wrap my head around this.  I think
> the description above makes sense, but I'm struggling to grok the code
> and how it matches the description.  May I suggest an alternative
> implementation?  (Apologies for epic whitespace damage.)
>
> /*
>  * Install an entry into the usermode pgd.  pgdp points to the kernelmode
>  * entry whose usermode counterpart we're supposed to set.  pgd is the
>  * desired entry.  Returns pgd, possibly modified if the actual entry installed
>  * into the kernelmode needs different mode bits.
>  */
> static inline pgd_t kaiser_set_usermode_pgd(pgd_t *pgdp, pgd_t pgd) {
>   VM_BUG_ON(pgdp points to a usermode table);
>
>   if (pgdp_maps_userspace(pgdp)) {
>     /* Install the pgd as requested into the usermode tables. */
>     kernelmode_to_usermode_pgdp(pgdp)->pgd = pgd.pgd;
>
>     if (pgd_val(pgd) & _PAGE_USER) {
>       /*
>        * This is a normal user pgd -- the kernelmode mapping should have NX
>        * set to prevent erroneous usermode execution with the kernel tables.
>        */
>       return __pgd(pgd_val(pgd) | _PAGE_NX);
>     } else {
>       /* This is a weird mapping, e.g. EFI.  Map it straight through. */
>       return pgd;
>     }
>   } else {
>     /*
>      * We can get here due to vmalloc, a vmalloc fault, memory hot-add,
>      * or initial setup of kernelmode page tables.  Regardless of which
>      * particular code path we're in, these mappings should not be
>      * automatically propagated to the usermode tables.
>      */
>     return pgd;
>   }
> }
>
> As a side benefit, this shouldn't have magical interactions with the
> vsyscall page any more.
>
> Are there cases that this would get wrong?
>

Quick ping: did this get lost?

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 05/23] x86, kaiser: unmap kernel from userspace page tables (core patch)
  2017-11-26 16:10       ` Andy Lutomirski
@ 2017-11-26 16:24         ` Dave Hansen
  -1 siblings, 0 replies; 131+ messages in thread
From: Dave Hansen @ 2017-11-26 16:24 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: linux-kernel, linux-mm, richard.fellner, moritz.lipp,
	Daniel Gruss, michael.schwarz, Linus Torvalds, Kees Cook,
	Hugh Dickins, X86 ML

On 11/26/2017 08:10 AM, Andy Lutomirski wrote:
>> As a side benefit, this shouldn't have magical interactions with the
>> vsyscall page any more.
>>
>> Are there cases that this would get wrong?
>>
> Quick ping: did this get lost?

It does drop a warning that the other version of the code has, but
that's pretty minor.

Basically, we need two checks:

	pgd_userspace_access() (aka _PAGE_USER) and
	pgdp_maps_userspace()
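
Roughly, as they appear earlier in this series (quoted from memory, so
double-check against the actual patches), the first keys on the entry's
_PAGE_USER bit and the second on where the entry sits within the PGD page:

	static inline bool pgd_userspace_access(pgd_t pgd)
	{
		return pgd.pgd & _PAGE_USER;
	}

	/* entries in the lower half of the PGD page map user addresses: */
	static inline bool pgdp_maps_userspace(void *__ptr)
	{
		unsigned long ptr = (unsigned long)__ptr;

		return (ptr & ~PAGE_MASK) < (PAGE_SIZE / 2);
	}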

The original code does pgd_userspace_access() in a top-level if and then
the pgdp_maps_userspace() checks at the second level.  I think you are
basically suggesting that we flip that.

Logically, I'm sure we can make it work.  It's just a matter of needing
to look at other things first.

BTW, this comment is, I think, incorrect:

>   if (pgdp_maps_userspace(pgdp)) {
...
>   } else {
>     /*
>      * We can get here due to vmalloc, a vmalloc fault, memory hot-add,
>      * or initial setup of kernelmode page tables.  Regardless of which
>      * particular code path we're in, these mappings should not be
>      * automatically propagated to the usermode tables.
>      */

Since we pre-populated the entire kernel area's PGDs, I don't think
we'll ever have a valid reason to be doing a set_pgd() again on the
kernel area.

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 05/23] x86, kaiser: unmap kernel from userspace page tables (core patch)
  2017-11-26 16:24         ` Dave Hansen
@ 2017-11-26 16:29           ` Andy Lutomirski
  -1 siblings, 0 replies; 131+ messages in thread
From: Andy Lutomirski @ 2017-11-26 16:29 UTC (permalink / raw)
  To: Dave Hansen
  Cc: Andy Lutomirski, linux-kernel, linux-mm, richard.fellner,
	moritz.lipp, Daniel Gruss, michael.schwarz, Linus Torvalds,
	Kees Cook, Hugh Dickins, X86 ML

On Sun, Nov 26, 2017 at 8:24 AM, Dave Hansen
<dave.hansen@linux.intel.com> wrote:
> On 11/26/2017 08:10 AM, Andy Lutomirski wrote:
>>> As a side benefit, this shouldn't have magical interactions with the
>>> vsyscall page any more.
>>>
>>> Are there cases that this would get wrong?
>>>
>> Quick ping: did this get lost?
>
> It does drop a warning that the other version of the code has, but
> that's pretty minor.
>
> Basically, we need two checks:
>
>         pgd_userspace_access() (aka _PAGE_USER) and
>         pgdp_maps_userspace()
>
> The original code does pgd_userspace_access() in a top-level if and then
> the pgdp_maps_userspace() checks at the second level.  I think you are
> basically suggesting that we flip that.
>
> Logically, I'm sure we can make it work.  It's just a matter of needing
> to look at other things first.
>
> BTW, this comment is, I think, incorrect:
>
>>   if (pgdp_maps_userspace(pgdp)) {
> ...
>>   } else {
>>     /*
>>      * We can get here due to vmalloc, a vmalloc fault, memory hot-add,
>>      * or initial setup of kernelmode page tables.  Regardless of which
>>      * particular code path we're in, these mappings should not be
>>      * automatically propagated to the usermode tables.
>>      */
>
> Since we pre-populated the entire kernel area's PGDs, I don't think
> we'll ever have a valid reason to be doing a set_pgd() again on the
> kernel area.

Right, forgot about that.  So it's just initial setup, then.

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 05/23] x86, kaiser: unmap kernel from userspace page tables (core patch)
  2017-11-23  0:34   ` Dave Hansen
@ 2018-01-05  4:16     ` Yisheng Xie
  -1 siblings, 0 replies; 131+ messages in thread
From: Yisheng Xie @ 2018-01-05  4:16 UTC (permalink / raw)
  To: Dave Hansen, linux-kernel
  Cc: linux-mm, richard.fellner, moritz.lipp, daniel.gruss,
	michael.schwarz, luto, torvalds, keescook, hughd, x86

Hi Dave,

On 2017/11/23 8:34, Dave Hansen wrote:
> 
> From: Dave Hansen <dave.hansen@linux.intel.com>
> 
> These patches are based on work from a team at Graz University of
> Technology: https://github.com/IAIK/KAISER .  This work would not have
> been possible without their work as a starting point.
> 
> KAISER is a countermeasure against side channel attacks against kernel
> virtual memory.  It leaves the existing page tables largely alone and
> refers to them as the "kernel page tables".  It adds a "shadow" pgd for
> every process which is intended for use when running userspace.  The
> shadow pgd maps all the same user memory as the "kernel" copy, but
> only maps a minimal set of kernel memory.
> 
> Whenever entering the kernel (syscalls, interrupts, exceptions), the
> pgd is switched to the "kernel" copy.  When switching back to user
> mode, the shadow pgd is used.
> 
> The minimalistic kernel page tables try to map only what is needed to
> enter/exit the kernel such as the entry/exit functions themselves and
> the interrupt descriptors (IDT).
> 
> === Page Table Poisoning ===
> 
> KAISER has two copies of the page tables: one for the kernel and
> one for when running in userspace.  

So we have two page tables; now consider this case:
if _ONE_ process includes _TWO_ threads, one running in user space and the
other running in the kernel, they can run on one core with Hyper-Threading,
right? So both the userspace and the kernel mappings are valid at the same
time, right? And since the two hyperthreads of one core may share the TLB,
the timing problem described in the paper may still exist?

Can this case still be protected by KAISER?

Thanks
Yisheng

> There is also a kernel
> portion of each of the page tables: the part that *maps* the
> kernel.
> 
> The kernel portion is relatively static and uses pre-populated
> PGDs.  Nobody ever calls set_pgd() on the kernel portion during
> normal operation.
> 
> The userspace portion of the page tables is updated frequently as
> userspace pages are mapped and page table pages are allocated.
> These updates of the userspace *portion* of the tables need to be
> reflected into both the kernel and user/shadow copies.
> 
> The original KAISER patches did this by effectively looking at the
> address that is being updated.  If it is <PAGE_OFFSET, it is
> considered to be doing an update for the userspace portion of the page
> tables and must make an entry in the shadow.
> 
> However, this has a wrinkle: there are a few places where low
> addresses are used in supervisor (kernel) mode.  When EFI calls
> are made, they use what are traditionally user addresses in
> supervisor mode and trip over these checks.  The trampoline code
> that is used for booting secondary CPUs has a similar issue.
> 
> Remember, there are two things that KAISER needs performed on a
> userspace PGD:
> 
>  1. Populate the shadow itself
>  2. Poison the kernel PGD so it can not be used by userspace.
> 
> Only perform these actions when dealing with a user address *and* the
> PGD has _PAGE_USER set.  That way, in-kernel users of low addresses
> typically used by userspace are not accidentally poisoned.
> 

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 05/23] x86, kaiser: unmap kernel from userspace page tables (core patch)
  2018-01-05  4:16     ` Yisheng Xie
@ 2018-01-05  5:18       ` Dave Hansen
  -1 siblings, 0 replies; 131+ messages in thread
From: Dave Hansen @ 2018-01-05  5:18 UTC (permalink / raw)
  To: Yisheng Xie, linux-kernel
  Cc: linux-mm, richard.fellner, moritz.lipp, daniel.gruss,
	michael.schwarz, luto, torvalds, keescook, hughd, x86

On 01/04/2018 08:16 PM, Yisheng Xie wrote:
>> === Page Table Poisoning ===
>>
>> KAISER has two copies of the page tables: one for the kernel and
>> one for when running in userspace.  
> 
> So we have two page tables; now consider this case:
> if _ONE_ process includes _TWO_ threads, one running in user space and the
> other running in the kernel, they can run on one core with Hyper-Threading,
> right?

Yes.

> So both the userspace and the kernel mappings are valid at the same
> time, right? And since the two hyperthreads of one core may share the TLB,
> the timing problem described in the paper may still exist?

No.  The TLB is managed per logical CPU (hyperthread), as is the CR3
register that points to the page tables.  Two threads running the same
process might use the same CR3 _value_, but that does not mean they
share TLB entries.

One thread *can* be in the kernel with the kernel page tables while the
other is in userspace with the user page tables active.  They will even
use a different PCID/ASID for the same page tables normally.
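
Roughly, the CR3 value each logical CPU loads is the pgd's physical address
OR'ed with a PCID in the low bits; an illustrative sketch (not verbatim
mainline code, and the helper name is made up):

	/* CR3 layout with CR4.PCIDE=1: page-table base | 12-bit PCID */
	static inline unsigned long example_build_cr3(pgd_t *pgd, u16 asid)
	{
		/* asid + 1: keep PCID 0 out of the dynamic range */
		return __pa(pgd) | (unsigned long)(asid + 1);
	}

Two hyperthreads can therefore share a pgd while their TLB entries remain
tagged, and separated, by PCID.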

> Can this case still be protected by KAISER?

Yes.

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 05/23] x86, kaiser: unmap kernel from userspace page tables (core patch)
  2018-01-05  5:18       ` Dave Hansen
@ 2018-01-05  6:16         ` Yisheng Xie
  -1 siblings, 0 replies; 131+ messages in thread
From: Yisheng Xie @ 2018-01-05  6:16 UTC (permalink / raw)
  To: Dave Hansen, linux-kernel
  Cc: linux-mm, richard.fellner, moritz.lipp, daniel.gruss,
	michael.schwarz, luto, torvalds, keescook, hughd, x86

Hi Dave,

On 2018/1/5 13:18, Dave Hansen wrote:
> On 01/04/2018 08:16 PM, Yisheng Xie wrote:
>>> === Page Table Poisoning ===
>>>
>>> KAISER has two copies of the page tables: one for the kernel and
>>> one for when running in userspace.  
>>
>> So we have two page tables; now consider this case:
>> if _ONE_ process includes _TWO_ threads, one running in user space and the
>> other running in the kernel, they can run on one core with Hyper-Threading,
>> right?
> 
> Yes.
> 
>> So both the userspace and the kernel mappings are valid at the same
>> time, right? And since the two hyperthreads of one core may share the TLB,
>> the timing problem described in the paper may still exist?
> 
> No.  The TLB is managed per logical CPU (hyperthread), as is the CR3
> register that points to the page tables.  Two threads running the same
> process might use the same CR3 _value_, but that does not mean they
> share TLB entries.

Got it, and thanks for your explanation.

BTW, we have just reported a bug caused by kaiser[1], which looks like
it is caused by SMEP. Could you please help take a look?

[1] https://lkml.org/lkml/2018/1/5/3

Thanks
Yisheng

> 
> One thread *can* be in the kernel with the kernel page tables while the
> other is in userspace with the user page tables active.  They will even
> use a different PCID/ASID for the same page tables normally.
> 
>> Can this case still be protected by KAISER?
> 
> Yes.
> 
> .
> 

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 05/23] x86, kaiser: unmap kernel from userspace page tables (core patch)
  2018-01-05  6:16         ` Yisheng Xie
@ 2018-01-05  6:29           ` Dave Hansen
  -1 siblings, 0 replies; 131+ messages in thread
From: Dave Hansen @ 2018-01-05  6:29 UTC (permalink / raw)
  To: Yisheng Xie, linux-kernel
  Cc: linux-mm, richard.fellner, moritz.lipp, daniel.gruss,
	michael.schwarz, luto, torvalds, keescook, hughd, x86,
	Andrea Arcangeli

On 01/04/2018 10:16 PM, Yisheng Xie wrote:
> BTW, we have just reported a bug caused by kaiser[1], which looks like
> it is caused by SMEP. Could you please help take a look?
> 
> [1] https://lkml.org/lkml/2018/1/5/3

Please report that to your kernel vendor.  Your EFI page tables have the
NX bit set on the low addresses.  There have been a bunch of iterations
of this, but you need to make sure that the EFI kernel mappings don't
get _PAGE_NX set on them.  Look at what __pti_set_user_pgd() does in
mainline.
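
For reference, the mainline logic pointed at above works roughly like
the sketch below -- a paraphrase of the 4.15-era __pti_set_user_pgd(),
not a verbatim copy, so details may differ between versions:

pgd_t __pti_set_user_pgd(pgd_t *pgdp, pgd_t pgd)
{
	/* Only PGD entries covering the userspace half are mirrored. */
	if (!pgdp_maps_userspace(pgdp))
		return pgd;

	/* Propagate the entry into the user copy of the page tables. */
	kernel_to_user_pgdp(pgdp)->pgd = pgd.pgd;

	/*
	 * Poison the kernel copy with NX, but only for entries that are
	 * both present and _PAGE_USER; EFI kernel mappings without
	 * _PAGE_USER are left executable.
	 */
	if ((pgd.pgd & (_PAGE_USER | _PAGE_PRESENT)) ==
	    (_PAGE_USER | _PAGE_PRESENT) &&
	    (__supported_pte_mask & _PAGE_NX))
		pgd.pgd |= _PAGE_NX;

	return pgd;
}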

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 05/23] x86, kaiser: unmap kernel from userspace page tables (core patch)
  2018-01-05  6:29           ` Dave Hansen
@ 2018-01-05 11:49             ` Andrea Arcangeli
  0 siblings, 0 replies; 131+ messages in thread
From: Andrea Arcangeli @ 2018-01-05 11:49 UTC (permalink / raw)
  To: Dave Hansen
  Cc: Yisheng Xie, linux-kernel, linux-mm, richard.fellner,
	moritz.lipp, daniel.gruss, michael.schwarz, luto, torvalds,
	keescook, hughd, x86

Hi Yisheng and Dave,

On Thu, Jan 04, 2018 at 10:29:53PM -0800, Dave Hansen wrote:
> On 01/04/2018 10:16 PM, Yisheng Xie wrote:
> > BTW, we have just reported a bug caused by kaiser[1], which looks like
> > it is caused by SMEP. Could you please take a look?
> > 
> > [1] https://lkml.org/lkml/2018/1/5/3
> 
> Please report that to your kernel vendor.  Your EFI page tables have the
> NX bit set on the low addresses.  There have been a bunch of iterations
> of this, but you need to make sure that the EFI kernel mappings don't
> get _PAGE_NX set on them.  Look at what __pti_set_user_pgd() does in
> mainline.

Yisheng could you file a report on the vendor bz?

From my part, of course I'm fine with discussing it here, but it's not fair
to use lkml bandwidth for this; sorry for the noise.

The vast majority of hardware boots fine and isn't running into
this. This is the first time I have heard about it; sorry about that.

I fixed it with the upstream solution; the pointer is greatly appreciated,
Dave. I don't have hardware to verify it, though, so we'll have to follow
up on the bz.

Thanks,
Andrea

From 74e2d799b7c22f00a8d3158958e3d6d9fa45c1d2 Mon Sep 17 00:00:00 2001
From: Andrea Arcangeli <aarcange@redhat.com>
Date: Fri, 5 Jan 2018 11:39:40 +0100
Subject: [RHEL7.5 PATCH 1/1] x86/pti/mm: don't set NX on EFI mapping without
 _PAGE_USER

The kernel must be able to execute EFI code in userland (the positive
virtual address space) without _PAGE_USER set, so don't set NX on
it. This only selectively disables the NX poisoning in the kernel pgd,
so there's no effect whatsoever on the page table isolation from the
userland point of view.

Solves this crash at boot:

[    0.039130] BUG: unable to handle kernel paging request at 000000005b835f90
[    0.046101] IP: [<000000005b835f90>] 0x5b835f8f
[    0.050637] PGD 8000000001f61067 PUD 190ffefff067 PMD 190ffeffd067 PTE 5b835063
[    0.057989] Oops: 0011 [#1] SMP
[    0.061241] Modules linked in:
[    0.064304] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 3.10.0-327.59.59.46.h42.x86_64 #1
[    0.072280] Hardware name: Huawei FusionServer9032/IT91SMUB, BIOS BLXSV316 11/14/2017
[    0.080082] task: ffffffff8196e440 ti: ffffffff81958000 task.ti: ffffffff81958000
[    0.087539] RIP: 0010:[<000000005b835f90>]  [<000000005b835f90>] 0x5b835f8f
[    0.094494] RSP: 0000:ffffffff8195be28  EFLAGS: 00010046
[    0.099788] RAX: 0000000080050033 RBX: ffff910fbc802000 RCX: 00000000000002d0
[    0.106897] RDX: 0000000000000030 RSI: 00000000000002d0 RDI: 000000005b835f90
[    0.114006] RBP: ffffffff8195bf38 R08: 0000000000000001 R09: 0000090fbc802000
[    0.121116] R10: ffff88ffbcc07340 R11: 0000000000000001 R12: 0000000000000001
[    0.128225] R13: 0000090fbc802000 R14: 00000000000002d0 R15: 0000000000000001
[    0.135336] FS:  0000000000000000(0000) GS:ffffc90000000000(0000) knlGS:0000000000000000
[    0.143398] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[    0.149124] CR2: 000000005b835f90 CR3: 0000000001966000 CR4: 00000000000606b0
[    0.156234] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[    0.163344] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[    0.170454] Call Trace:
[    0.172899]  [<ffffffff8107512c>] ? efi_call4+0x6c/0xf0
[    0.178108]  [<ffffffff8105b3fe>] ? native_flush_tlb_global+0x8e/0xc0
[    0.184527]  [<ffffffff810652b3>] ? set_memory_x+0x43/0x50
[    0.189997]  [<ffffffff81acf91f>] ? efi_enter_virtual_mode+0x3bc/0x538
[    0.196505]  [<ffffffff81ab104b>] start_kernel+0x39f/0x44f
[    0.201972]  [<ffffffff81ab0ab5>] ? repair_env_string+0x5c/0x5c
[    0.207872]  [<ffffffff81ab0120>] ? early_idt_handlers+0x120/0x120
[    0.214030]  [<ffffffff81ab066c>] x86_64_start_reservations+0x2a/0x2c
[    0.220449]  [<ffffffff81ab07c0>] x86_64_start_kernel+0x152/0x175
[    0.226521] Code:  Bad RIP value.
[    0.229860] RIP  [<000000005b835f90>] 0x5b835f8f
[    0.234478]  RSP <ffffffff8195be28>
[    0.237955] CR2: 000000005b835f90
[    0.241266] ---[ end trace 8178226af3e802ca ]---
[    0.245869] Kernel panic - not syncing: Fatal exception

Reported-by: Yisheng Xie <xieyisheng1@huawei.com>
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
---
 arch/x86/include/asm/pgtable_64.h | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/pgtable_64.h b/arch/x86/include/asm/pgtable_64.h
index 7c8bc5c23664..132176fe45e2 100644
--- a/arch/x86/include/asm/pgtable_64.h
+++ b/arch/x86/include/asm/pgtable_64.h
@@ -189,28 +189,34 @@ static inline bool pgd_userspace_access(pgd_t pgd)
 	return pgd.pgd & _PAGE_USER;
 }
 
+#define _PAGE_PTI_CAN_NX (_PAGE_PRESENT|_PAGE_USER)
+
 static inline void kaiser_poison_pgd(pgd_t *pgd)
 {
-	if (pgd->pgd & _PAGE_PRESENT && __supported_pte_mask & _PAGE_NX)
+	if ((pgd->pgd & _PAGE_PTI_CAN_NX) == _PAGE_PTI_CAN_NX &&
+	    __supported_pte_mask & _PAGE_NX)
 		pgd->pgd |= _PAGE_NX;
 }
 
 static inline void kaiser_unpoison_pgd(pgd_t *pgd)
 {
-	if (pgd->pgd & _PAGE_PRESENT && __supported_pte_mask & _PAGE_NX)
+	if ((pgd->pgd & _PAGE_PTI_CAN_NX) == _PAGE_PTI_CAN_NX &&
+	    __supported_pte_mask & _PAGE_NX)
 		pgd->pgd &= ~_PAGE_NX;
 }
 
 static inline void kaiser_poison_pgd_atomic(pgd_t *pgd)
 {
 	BUILD_BUG_ON(_PAGE_NX == 0);
-	if (pgd->pgd & _PAGE_PRESENT && __supported_pte_mask & _PAGE_NX)
+	if ((pgd->pgd & _PAGE_PTI_CAN_NX) == _PAGE_PTI_CAN_NX &&
+	    __supported_pte_mask & _PAGE_NX)
 		set_bit(_PAGE_BIT_NX, &pgd->pgd);
 }
 
 static inline void kaiser_unpoison_pgd_atomic(pgd_t *pgd)
 {
-	if (pgd->pgd & _PAGE_PRESENT && __supported_pte_mask & _PAGE_NX)
+	if ((pgd->pgd & _PAGE_PTI_CAN_NX) == _PAGE_PTI_CAN_NX &&
+	    __supported_pte_mask & _PAGE_NX)
 		clear_bit(_PAGE_BIT_NX, &pgd->pgd);
 }
 

^ permalink raw reply related	[flat|nested] 131+ messages in thread

* Re: [PATCH 05/23] x86, kaiser: unmap kernel from userspace page tables (core patch)
  2018-01-05  6:29           ` Dave Hansen
@ 2018-01-05 18:19             ` Jiri Kosina
  0 siblings, 0 replies; 131+ messages in thread
From: Jiri Kosina @ 2018-01-05 18:19 UTC (permalink / raw)
  To: Dave Hansen
  Cc: Yisheng Xie, linux-kernel, linux-mm, richard.fellner,
	moritz.lipp, daniel.gruss, michael.schwarz, luto, Linus Torvalds,
	keescook, hughd, x86, Andrea Arcangeli, Hugh Dickins


[ adding Hugh ]

On Thu, 4 Jan 2018, Dave Hansen wrote:

> > BTW, we have just reported a bug caused by kaiser[1], which looks like
> > it is caused by SMEP. Could you please take a look?
> > 
> > [1] https://lkml.org/lkml/2018/1/5/3
> 
> Please report that to your kernel vendor.  Your EFI page tables have the
> NX bit set on the low addresses.  There have been a bunch of iterations
> of this, but you need to make sure that the EFI kernel mappings don't
> get _PAGE_NX set on them.  Look at what __pti_set_user_pgd() does in
> mainline.

Unfortunately this is more complicated.

The thing is -- efi=old_memmap is broken even upstream. We will probably 
not receive too many reports about this against upstream PTI, as most of 
the machines are using the classic high mapping of EFI regions; but older 
kernels still force old_memmap on certain machines (or it can be specified 
manually on the kernel cmdline), and there EFI has all its mappings in the 
userspace range.

And that explodes, as those get marked NX in the kernel pagetables.

I've spent most of today tracking this down (the legacy EFI mmap is 
horrid); the patch below is confirmed to fix it both on the current 
upstream kernel and on original-KAISER based kernels (Hugh's backport) 
in cases where old_memmap is used by EFI.

I am not super happy about this, but I didn't really want to extend the 
_set_pgd() code to always figure out whether it's dealing with the low EFI 
mapping or not, as that would be way too much overhead just for this 
one-off call during boot.



From: Jiri Kosina <jkosina@suse.cz>
Subject: [PATCH] PTI: unbreak EFI old_memmap

old_memmap's efi_call_phys_prolog() calls set_pgd() with a swapper PGD that 
has _PAGE_USER set, which makes PTI set NX on it, and therefore EFI can't 
execute its code.

Fix that by forcefully clearing _PAGE_NX from the PGD (this can't be done
by the pgprot API).

_PAGE_NX will be automatically reintroduced in efi_call_phys_epilog(), as 
_set_pgd() will again notice that this is _PAGE_USER, and set _PAGE_NX on 
it.

Signed-off-by: Jiri Kosina <jkosina@suse.cz>
---
 arch/x86/platform/efi/efi_64.c |    6 ++++++
 1 file changed, 6 insertions(+)

--- a/arch/x86/platform/efi/efi_64.c
+++ b/arch/x86/platform/efi/efi_64.c
@@ -95,6 +95,12 @@ pgd_t * __init efi_call_phys_prolog(void
 		save_pgd[pgd] = *pgd_offset_k(pgd * PGDIR_SIZE);
 		vaddress = (unsigned long)__va(pgd * PGDIR_SIZE);
 		set_pgd(pgd_offset_k(pgd * PGDIR_SIZE), *pgd_offset_k(vaddress));
+		/*
+		 * pgprot API doesn't clear it for PGD
+		 *
+		 * Will be brought back automatically in _epilog()
+		 */
+		pgd_offset_k(pgd * PGDIR_SIZE)->pgd &= ~_PAGE_NX;
 	}
 	__flush_tlb_all();
 

-- 
Jiri Kosina
SUSE Labs

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 05/23] x86, kaiser: unmap kernel from userspace page tables (core patch)
  2018-01-05 18:19             ` Jiri Kosina
@ 2018-01-05 19:00               ` Jiri Kosina
  0 siblings, 0 replies; 131+ messages in thread
From: Jiri Kosina @ 2018-01-05 19:00 UTC (permalink / raw)
  To: Dave Hansen
  Cc: Yisheng Xie, linux-kernel, linux-mm, richard.fellner,
	moritz.lipp, daniel.gruss, michael.schwarz, luto, Linus Torvalds,
	keescook, Hugh Dickins, x86, Andrea Arcangeli, Hugh Dickins


The previous patch was for a slightly older kernel, and the logic in 
_prolog() is a bit different in 4.15, but the (confirmed) fix for 
mainline is basically the same:


From: Jiri Kosina <jkosina@suse.cz>
Subject: [PATCH] PTI: unbreak EFI old_memmap

old_memmap's efi_call_phys_prolog() calls set_pgd() with a swapper PGD that 
has _PAGE_USER set, which makes PTI set NX on it, and therefore EFI can't 
execute its code.

Fix that by forcefully clearing _PAGE_NX from the PGD (this can't be done
by the pgprot API).

_PAGE_NX will be automatically reintroduced in efi_call_phys_epilog(), as 
_set_pgd() will again notice that this is _PAGE_USER, and set _PAGE_NX on 
it.

Signed-off-by: Jiri Kosina <jkosina@suse.cz>

diff --git a/arch/x86/platform/efi/efi_64.c b/arch/x86/platform/efi/efi_64.c
index d87ac96e37ed..2dd15e967c3f 100644
--- a/arch/x86/platform/efi/efi_64.c
+++ b/arch/x86/platform/efi/efi_64.c
@@ -135,7 +135,9 @@ pgd_t * __init efi_call_phys_prolog(void)
 				pud[j] = *pud_offset(p4d_k, vaddr);
 			}
 		}
+		pgd_offset_k(pgd * PGDIR_SIZE)->pgd &= ~_PAGE_NX;
 	}
+
 out:
 	__flush_tlb_all();
 

-- 
Jiri Kosina
SUSE Labs

^ permalink raw reply related	[flat|nested] 131+ messages in thread

* Re: [PATCH 05/23] x86, kaiser: unmap kernel from userspace page tables (core patch)
  2018-01-05 18:19             ` Jiri Kosina
@ 2018-01-05 19:03               ` Dave Hansen
  0 siblings, 0 replies; 131+ messages in thread
From: Dave Hansen @ 2018-01-05 19:03 UTC (permalink / raw)
  To: Jiri Kosina
  Cc: Yisheng Xie, linux-kernel, linux-mm, richard.fellner,
	moritz.lipp, daniel.gruss, michael.schwarz, luto, Linus Torvalds,
	keescook, hughd, x86, Andrea Arcangeli

On 01/05/2018 10:19 AM, Jiri Kosina wrote:
> --- a/arch/x86/platform/efi/efi_64.c
> +++ b/arch/x86/platform/efi/efi_64.c
> @@ -95,6 +95,12 @@ pgd_t * __init efi_call_phys_prolog(void
>  		save_pgd[pgd] = *pgd_offset_k(pgd * PGDIR_SIZE);
>  		vaddress = (unsigned long)__va(pgd * PGDIR_SIZE);
>  		set_pgd(pgd_offset_k(pgd * PGDIR_SIZE), *pgd_offset_k(vaddress));
> +		/*
> +		 * pgprot API doesn't clear it for PGD
> +		 *
> +		 * Will be brought back automatically in _epilog()
> +		 */
> +		pgd_offset_k(pgd * PGDIR_SIZE)->pgd &= ~_PAGE_NX;
>  	}
>  	__flush_tlb_all();

Wait a sec...  Where does the _PAGE_USER come from?  Shouldn't we see
the &init_mm in there and *not* set _PAGE_USER?

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 05/23] x86, kaiser: unmap kernel from userspace page tables (core patch)
  2018-01-05 19:03               ` Dave Hansen
@ 2018-01-05 19:17                 ` Jiri Kosina
  0 siblings, 0 replies; 131+ messages in thread
From: Jiri Kosina @ 2018-01-05 19:17 UTC (permalink / raw)
  To: Dave Hansen
  Cc: Yisheng Xie, linux-kernel, linux-mm, richard.fellner,
	moritz.lipp, daniel.gruss, michael.schwarz, luto, Linus Torvalds,
	keescook, hughd, x86, Andrea Arcangeli

On Fri, 5 Jan 2018, Dave Hansen wrote:

> > --- a/arch/x86/platform/efi/efi_64.c
> > +++ b/arch/x86/platform/efi/efi_64.c
> > @@ -95,6 +95,12 @@ pgd_t * __init efi_call_phys_prolog(void
> >  		save_pgd[pgd] = *pgd_offset_k(pgd * PGDIR_SIZE);
> >  		vaddress = (unsigned long)__va(pgd * PGDIR_SIZE);
> >  		set_pgd(pgd_offset_k(pgd * PGDIR_SIZE), *pgd_offset_k(vaddress));
> > +		/*
> > +		 * pgprot API doesn't clear it for PGD
> > +		 *
> > +		 * Will be brought back automatically in _epilog()
> > +		 */
> > +		pgd_offset_k(pgd * PGDIR_SIZE)->pgd &= ~_PAGE_NX;
> >  	}
> >  	__flush_tlb_all();
> 
> Wait a sec...  Where does the _PAGE_USER come from?  Shouldn't we see
> the &init_mm in there and *not* set _PAGE_USER?

That's because pgd_populate() uses _PAGE_TABLE and not _KERNPG_TABLE for 
reasons that are beyond me.

I did put this on my TODO list, but for later.

(and yes, I tried clearing _PAGE_USER from init_mm's PGD, and no obvious 
breakages appeared, but I wanted to give it more thought later).
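
(For reference, the two masks differ in exactly _PAGE_USER; abbreviated 
from arch/x86/include/asm/pgtable_types.h, modulo version differences:

#define _KERNPG_TABLE	(_PAGE_PRESENT | _PAGE_RW | _PAGE_ACCESSED |	\
			 _PAGE_DIRTY)
#define _PAGE_TABLE	(_PAGE_PRESENT | _PAGE_RW | _PAGE_USER |	\
			 _PAGE_ACCESSED | _PAGE_DIRTY)

so a pgd_populate() that uses _PAGE_TABLE hands init_mm a user-accessible 
PGD entry, which is exactly what the NX poisoning keys off.)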

-- 
Jiri Kosina
SUSE Labs

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 05/23] x86, kaiser: unmap kernel from userspace page tables (core patch)
  2018-01-05 19:17                 ` Jiri Kosina
@ 2018-01-05 19:18                   ` Jiri Kosina
  0 siblings, 0 replies; 131+ messages in thread
From: Jiri Kosina @ 2018-01-05 19:18 UTC (permalink / raw)
  To: Dave Hansen
  Cc: Yisheng Xie, linux-kernel, linux-mm, richard.fellner,
	moritz.lipp, daniel.gruss, michael.schwarz, luto, Linus Torvalds,
	keescook, hughd, x86, Andrea Arcangeli

On Fri, 5 Jan 2018, Jiri Kosina wrote:

> That's because pgd_populate() uses _PAGE_TABLE and not _KERNPG_TABLE for 
> reasons that are beyond me.

[ oh and BTW I find the fact that we have populate_pgd() and 
pgd_populate(), which do something *completely* different, quite 
entertaining ]

-- 
Jiri Kosina
SUSE Labs

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 05/23] x86, kaiser: unmap kernel from userspace page tables (core patch)
  2018-01-05 19:17                 ` Jiri Kosina
@ 2018-01-05 19:55                   ` Andrea Arcangeli
  0 siblings, 0 replies; 131+ messages in thread
From: Andrea Arcangeli @ 2018-01-05 19:55 UTC (permalink / raw)
  To: Jiri Kosina
  Cc: Dave Hansen, Yisheng Xie, linux-kernel, linux-mm,
	richard.fellner, moritz.lipp, daniel.gruss, michael.schwarz,
	luto, Linus Torvalds, keescook, hughd, x86

On Fri, Jan 05, 2018 at 08:17:17PM +0100, Jiri Kosina wrote:
> On Fri, 5 Jan 2018, Dave Hansen wrote:
> 
> > > --- a/arch/x86/platform/efi/efi_64.c
> > > +++ b/arch/x86/platform/efi/efi_64.c
> > > @@ -95,6 +95,12 @@ pgd_t * __init efi_call_phys_prolog(void
> > >  		save_pgd[pgd] = *pgd_offset_k(pgd * PGDIR_SIZE);
> > >  		vaddress = (unsigned long)__va(pgd * PGDIR_SIZE);
> > >  		set_pgd(pgd_offset_k(pgd * PGDIR_SIZE), *pgd_offset_k(vaddress));
> > > +		/*
> > > +		 * pgprot API doesn't clear it for PGD
> > > +		 *
> > > +		 * Will be brought back automatically in _epilog()
> > > +		 */
> > > +		pgd_offset_k(pgd * PGDIR_SIZE)->pgd &= ~_PAGE_NX;
> > >  	}
> > >  	__flush_tlb_all();

Upstream and downstream look different; how the above ended up looking
completely different I don't know, but I got it and updating is easy.
Great catch.

> > 
> > Wait a sec...  Where does the _PAGE_USER come from?  Shouldn't we see
> > the &init_mm in there and *not* set _PAGE_USER?
> 
> That's because pgd_populate() uses _PAGE_TABLE and not _KERNPG_TABLE for 
> reasons that are beyond me.

For vsyscalls? I also had to single out warnings from init_mm.pgd
for the same reasons.

How does the below (untested) look?

From ab949b80124588c4791568429cf8a234dda16340 Mon Sep 17 00:00:00 2001
From: Jiri Kosina <jikos@kernel.org>
Date: Fri, 5 Jan 2018 20:00:25 +0100
Subject: [RHEL7.5 PATCH 1/1] x86/kaiser/efi: unbreak EFI old_memmap

old_memmap's efi_call_phys_prolog() calls set_pgd() with a swapper PGD that
has _PAGE_USER set, which makes PTI set NX on it, and therefore EFI can't
execute its code.

Fix that by forcefully clearing _PAGE_NX from the PGD (this can't be done
by the pgprot API).

_PAGE_NX will be automatically reintroduced in efi_call_phys_epilog(), as
_set_pgd() will again notice that this is _PAGE_USER, and set _PAGE_NX on
it.

Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
---
 arch/x86/platform/efi/efi_64.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/platform/efi/efi_64.c b/arch/x86/platform/efi/efi_64.c
index f951026ea2d2..395079128d98 100644
--- a/arch/x86/platform/efi/efi_64.c
+++ b/arch/x86/platform/efi/efi_64.c
@@ -110,6 +110,7 @@ void __init efi_call_phys_prelog(void)
 		vaddr = (unsigned long)__va(pgd * PGDIR_SIZE);
 		pgd_efi = pgd_offset_k(addr_pgd);
 		save_pgd[pgd] = *pgd_efi;
+		pgd_efi->pgd &= ~_PAGE_NX;
 
 		pud = pud_alloc(&init_mm, pgd_efi, addr_pgd);
 		if (!pud) {

^ permalink raw reply related	[flat|nested] 131+ messages in thread

* Re: [PATCH 05/23] x86, kaiser: unmap kernel from userspace page tables (core patch)
  2018-01-05 19:17                 ` Jiri Kosina
@ 2018-01-05 21:07                   ` Dave Hansen
  0 siblings, 0 replies; 131+ messages in thread
From: Dave Hansen @ 2018-01-05 21:07 UTC (permalink / raw)
  To: Jiri Kosina
  Cc: Yisheng Xie, linux-kernel, linux-mm, richard.fellner,
	moritz.lipp, daniel.gruss, michael.schwarz, luto, Linus Torvalds,
	keescook, hughd, x86, Andrea Arcangeli

On 01/05/2018 11:17 AM, Jiri Kosina wrote:
> On Fri, 5 Jan 2018, Dave Hansen wrote:
> 
>>> --- a/arch/x86/platform/efi/efi_64.c
>>> +++ b/arch/x86/platform/efi/efi_64.c
>>> @@ -95,6 +95,12 @@ pgd_t * __init efi_call_phys_prolog(void
>>>  		save_pgd[pgd] = *pgd_offset_k(pgd * PGDIR_SIZE);
>>>  		vaddress = (unsigned long)__va(pgd * PGDIR_SIZE);
>>>  		set_pgd(pgd_offset_k(pgd * PGDIR_SIZE), *pgd_offset_k(vaddress));
>>> +		/*
>>> +		 * pgprot API doesn't clear it for PGD
>>> +		 *
>>> +		 * Will be brought back automatically in _epilog()
>>> +		 */
>>> +		pgd_offset_k(pgd * PGDIR_SIZE)->pgd &= ~_PAGE_NX;
>>>  	}
>>>  	__flush_tlb_all();
>>
>> Wait a sec...  Where does the _PAGE_USER come from?  Shouldn't we see
>> the &init_mm in there and *not* set _PAGE_USER?
> 
> That's because pgd_populate() uses _PAGE_TABLE and not _KERNPG_TABLE for 
> reasons that are beyond me.
> 
> I did put this on my TODO list, but for later.
> 
> (and yes, I tried clearing _PAGE_USER from init_mm's PGD, and no obvious 
> breakages appeared, but I wanted to give it more thought later).

Feel free to add my Ack on this.  I'd personally much rather muck with
random relatively unused bits of the efi code than touch the core PGD code.

We need to go look at it again in the 4.16 timeframe, probably.

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 05/23] x86, kaiser: unmap kernel from userspace page tables (core patch)
  2018-01-05 21:07                   ` Dave Hansen
@ 2018-01-05 21:14                     ` Jiri Kosina
  0 siblings, 0 replies; 131+ messages in thread
From: Jiri Kosina @ 2018-01-05 21:14 UTC (permalink / raw)
  To: Dave Hansen
  Cc: Yisheng Xie, linux-kernel, linux-mm, richard.fellner,
	moritz.lipp, daniel.gruss, michael.schwarz, luto, Linus Torvalds,
	keescook, hughd, x86, Andrea Arcangeli

On Fri, 5 Jan 2018, Dave Hansen wrote:

> >>> --- a/arch/x86/platform/efi/efi_64.c
> >>> +++ b/arch/x86/platform/efi/efi_64.c
> >>> @@ -95,6 +95,12 @@ pgd_t * __init efi_call_phys_prolog(void
> >>>  		save_pgd[pgd] = *pgd_offset_k(pgd * PGDIR_SIZE);
> >>>  		vaddress = (unsigned long)__va(pgd * PGDIR_SIZE);
> >>>  		set_pgd(pgd_offset_k(pgd * PGDIR_SIZE), *pgd_offset_k(vaddress));
> >>> +		/*
> >>> +		 * pgprot API doesn't clear it for PGD
> >>> +		 *
> >>> +		 * Will be brought back automatically in _epilog()
> >>> +		 */
> >>> +		pgd_offset_k(pgd * PGDIR_SIZE)->pgd &= ~_PAGE_NX;
> >>>  	}
> >>>  	__flush_tlb_all();
> >>
> >> Wait a sec...  Where does the _PAGE_USER come from?  Shouldn't we see
> >> the &init_mm in there and *not* set _PAGE_USER?
> > 
> > That's because pgd_populate() uses _PAGE_TABLE and not _KERNPG_TABLE for 
> > reasons that are beyond me.
> > 
> > I did put this on my TODO list, but for later.
> > 
> > (and yes, I tried clearing _PAGE_USER from init_mm's PGD, and no obvious 
> > breakages appeared, but I wanted to give it more thought later).
> 
> Feel free to add my Ack on this.  

Thanks. I'll extract the patch out of this thread and submit it 
separately, so that it doesn't get lost buried here.

> I'd personally much rather muck with random relatively unused bits of 
> the efi code than touch the core PGD code.

Exactly. Especially at this point.

> We need to go look at it again in the 4.16 timeframe, probably.

Agreed. On my TODO list already.

Thanks,

-- 
Jiri Kosina
SUSE Labs

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 05/23] x86, kaiser: unmap kernel from userspace page tables (core patch)
  2018-01-05 21:14                     ` Jiri Kosina
@ 2018-01-05 21:29                       ` Andy Lutomirski
  0 siblings, 0 replies; 131+ messages in thread
From: Andy Lutomirski @ 2018-01-05 21:29 UTC (permalink / raw)
  To: Jiri Kosina
  Cc: Dave Hansen, Yisheng Xie, linux-kernel, linux-mm,
	richard.fellner, moritz.lipp, daniel.gruss, michael.schwarz,
	luto, Linus Torvalds, keescook, hughd, x86, Andrea Arcangeli



> On Jan 5, 2018, at 1:14 PM, Jiri Kosina <jikos@kernel.org> wrote:
> 
> On Fri, 5 Jan 2018, Dave Hansen wrote:
> 
>>>>> --- a/arch/x86/platform/efi/efi_64.c
>>>>> +++ b/arch/x86/platform/efi/efi_64.c
>>>>> @@ -95,6 +95,12 @@ pgd_t * __init efi_call_phys_prolog(void
>>>>>        save_pgd[pgd] = *pgd_offset_k(pgd * PGDIR_SIZE);
>>>>>        vaddress = (unsigned long)__va(pgd * PGDIR_SIZE);
>>>>>        set_pgd(pgd_offset_k(pgd * PGDIR_SIZE), *pgd_offset_k(vaddress));
>>>>> +        /*
>>>>> +         * pgprot API doesn't clear it for PGD
>>>>> +         *
>>>>> +         * Will be brought back automatically in _epilog()
>>>>> +         */
>>>>> +        pgd_offset_k(pgd * PGDIR_SIZE)->pgd &= ~_PAGE_NX;
>>>>>    }
>>>>>    __flush_tlb_all();
>>>> 
>>>> Wait a sec...  Where does the _PAGE_USER come from?  Shouldn't we see
>>>> the &init_mm in there and *not* set _PAGE_USER?
>>> 
>>> That's because pgd_populate() uses _PAGE_TABLE and not _KERNPG_TABLE for 
>>> reasons that are beyond me.
>>> 
>>> I did put this on my TODO list, but for later.
>>> 
>>> (and yes, I tried clearing _PAGE_USER from init_mm's PGD, and no obvious 
>>> breakages appeared, but I wanted to give it more thought later).
>> 
>> Feel free to add my Ack on this.  
> 
> Thanks. I'll extract the patch out of this thread and submit it 
> separately, so that it doesn't get lost buried here.
> 
>> I'd personally much rather muck with random relatively unused bits of 
>> the efi code than touch the core PGD code.
> 
> Exactly. Especially at this point.
> 
>> We need to go look at it again in the 4.16 timeframe, probably.
> 
> Agreed. On my TODO list already.

Can we just delete the old memmap code instead?

--Andy

> 

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 05/23] x86, kaiser: unmap kernel from userspace page tables (core patch)
  2018-01-05 21:14                     ` Jiri Kosina
@ 2018-01-05 22:48                       ` Hugh Dickins
  0 siblings, 0 replies; 131+ messages in thread
From: Hugh Dickins @ 2018-01-05 22:48 UTC (permalink / raw)
  To: Jiri Kosina
  Cc: Dave Hansen, Yisheng Xie, linux-kernel, linux-mm,
	richard.fellner, moritz.lipp, Daniel Gruss, michael.schwarz,
	Andrew Lutomirski, Linus Torvalds, Kees Cook, x86,
	Andrea Arcangeli

On Fri, Jan 5, 2018 at 1:14 PM, Jiri Kosina <jikos@kernel.org> wrote:
> On Fri, 5 Jan 2018, Dave Hansen wrote:
>
>> >>> --- a/arch/x86/platform/efi/efi_64.c
>> >>> +++ b/arch/x86/platform/efi/efi_64.c
>> >>> @@ -95,6 +95,12 @@ pgd_t * __init efi_call_phys_prolog(void
>> >>>           save_pgd[pgd] = *pgd_offset_k(pgd * PGDIR_SIZE);
>> >>>           vaddress = (unsigned long)__va(pgd * PGDIR_SIZE);
>> >>>           set_pgd(pgd_offset_k(pgd * PGDIR_SIZE), *pgd_offset_k(vaddress));
>> >>> +         /*
>> >>> +          * pgprot API doesn't clear it for PGD
>> >>> +          *
>> >>> +          * Will be brought back automatically in _epilog()
>> >>> +          */
>> >>> +         pgd_offset_k(pgd * PGDIR_SIZE)->pgd &= ~_PAGE_NX;
>> >>>   }
>> >>>   __flush_tlb_all();
>> >>
>> >> Wait a sec...  Where does the _PAGE_USER come from?  Shouldn't we see
>> >> the &init_mm in there and *not* set _PAGE_USER?
>> >
>> > That's because pgd_populate() uses _PAGE_TABLE and not _KERNPG_TABLE for
>> > reasons that are beyond me.

Oh, I completely missed that; and then the issue would have got hidden
by one of my later per-process-kaiser patches.

>> >
>> > I did put this on my TODO list, but for later.
>> >
>> > (and yes, I tried clearing _PAGE_USER from init_mm's PGD, and no obvious
>> > breakages appeared, but I wanted to give it more thought later).
>>
>> Feel free to add my Ack on this.

And mine - thanks a lot for dealing with this Jiri.

>
> Thanks. I'll extract the patch out of this thread and submit it
> separately, so that it doesn't get lost buried here.
>
>> I'd personally much rather muck with random relatively unused bits of
>> the efi code than touch the core PGD code.
>
> Exactly. Especially at this point.

Indeed.

>
>> We need to go look at it again in the 4.16 timeframe, probably.
>
> Agreed. On my TODO list already.
>
> Thanks,
>
> --
> Jiri Kosina
> SUSE Labs
>

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 05/23] x86, kaiser: unmap kernel from userspace page tables (core patch)
  2018-01-05 18:19             ` Jiri Kosina
@ 2018-01-06  4:54               ` Hanjun Guo
  0 siblings, 0 replies; 131+ messages in thread
From: Hanjun Guo @ 2018-01-06  4:54 UTC (permalink / raw)
  To: Jiri Kosina, Dave Hansen
  Cc: Yisheng Xie, linux-kernel, linux-mm, richard.fellner,
	moritz.lipp, daniel.gruss, michael.schwarz, luto, Linus Torvalds,
	keescook, hughd, x86, Andrea Arcangeli

Hi Jiri,

Thanks for the fix, comments inline.

On 2018/1/6 2:19, Jiri Kosina wrote:
> 
> [ adding Hugh ]
> 
> On Thu, 4 Jan 2018, Dave Hansen wrote:
> 
>>> BTW, we have just reported a bug caused by kaiser[1], which looks like
>>> it is caused by SMEP. Could you please take a look?
>>>
>>> [1] https://lkml.org/lkml/2018/1/5/3
>>
>> Please report that to your kernel vendor.  Your EFI page tables have the
>> NX bit set on the low addresses.  There have been a bunch of iterations
>> of this, but you need to make sure that the EFI kernel mappings don't
>> get _PAGE_NX set on them.  Look at what __pti_set_user_pgd() does in
>> mainline.
> 
> Unfortunately this is more complicated.
> 
> The thing is -- efi=old_memmap is broken even upstream. We will probably 
> not receive too many reports about this against upstream PTI, as most of 
> the machines are using the classic high mapping of EFI regions; but older 
> kernels still force old_memmap on certain machines (or it can be specified 
> manually on the kernel cmdline), and there EFI has all its mappings in the 
> userspace range.
> 
> And that explodes, as those get marked NX in the kernel pagetables.
> 
> I've spent most of today tracking this down (the legacy EFI mmap is 
> horrid); the patch below is confirmed to fix it both on the current 
> upstream kernel and on original-KAISER based kernels (Hugh's backport) 
> in cases where old_memmap is used by EFI.
> 
> I am not super happy about this, but I didn't really want to extend the 
> _set_pgd() code to always figure out whether it's dealing with the low EFI 
> mapping or not, as that would be way too much overhead just for this 
> one-off call during boot.
> 
> 
> 
> From: Jiri Kosina <jkosina@suse.cz>
> Subject: [PATCH] PTI: unbreak EFI old_memmap
> 
> old_memmap's efi_call_phys_prolog() calls set_pgd() with a swapper PGD that 
> has _PAGE_USER set, which makes PTI set NX on it, and therefore EFI can't 
> execute its code.
> 
> Fix that by forcefully clearing _PAGE_NX from the PGD (this can't be done
> by the pgprot API).
> 
> _PAGE_NX will be automatically reintroduced in efi_call_phys_epilog(), as 
> _set_pgd() will again notice that this is _PAGE_USER, and set _PAGE_NX on 
> it.
> 
> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
> ---
>  arch/x86/platform/efi/efi_64.c |    6 ++++++
>  1 file changed, 6 insertions(+)
> 
> --- a/arch/x86/platform/efi/efi_64.c
> +++ b/arch/x86/platform/efi/efi_64.c
> @@ -95,6 +95,12 @@ pgd_t * __init efi_call_phys_prolog(void
>  		save_pgd[pgd] = *pgd_offset_k(pgd * PGDIR_SIZE);
>  		vaddress = (unsigned long)__va(pgd * PGDIR_SIZE);
>  		set_pgd(pgd_offset_k(pgd * PGDIR_SIZE), *pgd_offset_k(vaddress));
> +		/*
> +		 * pgprot API doesn't clear it for PGD
> +		 *
> +		 * Will be brought back automatically in _epilog()
> +		 */
> +		pgd_offset_k(pgd * PGDIR_SIZE)->pgd &= ~_PAGE_NX;

Do you mean the NX bit will be brought back later? I'm asking because
I tested this patch and it fixed the boot panic issue, but the system
hangs when rebooting, because rebooting will also call EFI and then
panic as the NX bit is set.

[ 1911.622675] BUG: unable to handle kernel paging request at 00000000008041c0
[ 1911.629880] IP: [<00000000008041c0>] 0x8041bf
[ 1911.634389] PGD 80000010272cb067 PUD 2025178067 PMD 10272d8067 PTE 804063
[ 1911.641472] Oops: 0011 [#1] SMP
[ 1911.711748] Modules linked in: bum(O) ip_set nfnetlink prio(O) nat(O) vport_vxlan(O) openvswitch(O) nf_defrag_ipv6 gre kboxdriver(O) kbox(O) signo_catch(O) vfat fat tg3 intel_powerclamp coretemp intel_rapl crc32_pclmul crc32c_intel ghash_clmulni_intel aesni_intel i2c_i801 kvm_intel(O) ptp lrw gf128mul i2c_core glue_helper ablk_helper pps_core kvm(O) cryptd iTCO_wdt iTCO_vendor_support sg pcspkr lpc_ich mfd_core sb_edac mei_me edac_core mei shpchp acpi_power_meter acpi_pad remote_trigger(O) nf_conntrack_ipv4 nf_defrag_ipv4 vhost_net(O) tun(O) vhost(O) macvtap macvlan vfio_pci irqbypass vfio_iommu_type1 vfio xt_sctp nf_conntrack_proto_sctp nf_nat_proto_sctp nf_nat nf_conntrack sctp libcrc32c ip_tables ext3 mbcache jbd sr_mod sd_mod cdrom lpfc crc_t10dif ahci crct10dif_generic crct10dif_pclmul libahci scsi_transport_fc scsi_tgt crct10dif_common libata usb_storage megaraid_sas dm_mod [last unloaded: dev_connlimit]
[ 1911.796711] CPU: 0 PID: 12033 Comm: reboot Tainted: G           OE  ---- -------   3.10.0-327.61.59.66_22.x86_64 #1
[ 1911.807449] Hardware name: Huawei RH2288H V3/BC11HGSA0, BIOS 3.79 11/07/2017
[ 1911.814702] task: ffff881025a91700 ti: ffff8810267fc000 task.ti: ffff8810267fc000
[ 1911.822401] RIP: 0010:[<00000000008041c0>]  [<00000000008041c0>] 0x8041bf
[ 1911.829407] RSP: 0018:ffff8810267ffd50  EFLAGS: 00010086
[ 1911.834877] RAX: 00000000008041c0 RBX: 0000000000000000 RCX: ffffffffff425000
[ 1911.842220] RDX: ffff8820a4e40000 RSI: 000000000000c000 RDI: 0000002024e40000
[ 1911.849563] RBP: ffff8810267ffd60 R08: ffff882024e40000 R09: 0000000000000000
[ 1911.856908] R10: ffffffff81a8f300 R11: ffff8810267ffaae R12: 0000000028121969
[ 1911.864250] R13: ffffffff819aa8a0 R14: 0000000000000cf9 R15: 0000000000000000
[ 1911.871596] FS:  00007f89d6143880(0000) GS:ffff881040400000(0000) knlGS:0000000000000000
[ 1911.879921] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 1911.885836] CR2: 00000000008041c0 CR3: 0000002024e40000 CR4: 00000000001607f0
[ 1911.893180] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 1911.900522] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[ 1911.907863] Call Trace:
[ 1911.910384]  [<ffffffff810241ab>] ? tboot_shutdown+0x5b/0x140
[ 1911.916298]  [<ffffffff8104723c>] native_machine_emergency_restart+0x4c/0x250
[ 1911.923641]  [<ffffffff8104c102>] ? disconnect_bsp_APIC+0x82/0xc0
[ 1911.929913]  [<ffffffff81046e17>] native_machine_restart+0x37/0x40
[ 1911.936273]  [<ffffffff810470ef>] machine_restart+0xf/0x20
[ 1911.941923]  [<ffffffff8109af95>] kernel_restart+0x45/0x60
[ 1911.947570]  [<ffffffff8109b1d9>] SYSC_reboot+0x229/0x260
[ 1911.953132]  [<ffffffff811ef665>] ? vfs_writev+0x35/0x60
[ 1911.958603]  [<ffffffff8109b27e>] SyS_reboot+0xe/0x10
[ 1911.963806]  [<ffffffff8165e43d>] system_call_fastpath+0x16/0x1b
[ 1911.969987] Code:  Bad RIP value.
[ 1911.973448] RIP  [<00000000008041c0>] 0x8041bf
[ 1911.978044]  RSP <ffff8810267ffd50>
[ 1911.990106] CR2: 00000000008041c0
[ 1912.001889] ---[ end trace e8475aee26ff7d9f ]---
[ 1912.408111] Kernel panic - not syncing: Fatal exception

Thanks
Hanjun

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 05/23] x86, kaiser: unmap kernel from userspace page tables (core patch)
  2018-01-06  4:54               ` Hanjun Guo
  (?)
@ 2018-01-06  6:06               ` Dave Hansen
  2018-01-06  6:28                   ` Hanjun Guo
  -1 siblings, 1 reply; 131+ messages in thread
From: Dave Hansen @ 2018-01-06  6:06 UTC (permalink / raw)
  To: Hanjun Guo, Jiri Kosina
  Cc: Yisheng Xie, linux-kernel, linux-mm, richard.fellner,
	moritz.lipp, daniel.gruss, michael.schwarz, luto, Linus Torvalds,
	keescook, hughd, x86, Andrea Arcangeli

[-- Attachment #1: Type: text/plain, Size: 702 bytes --]

On 01/05/2018 08:54 PM, Hanjun Guo wrote:
> Do you mean the NX bit will be brought back later? I'm asking because
> I tested this patch and it fixed the boot panic issue, but the system
> will hang when rebooting, because rebooting will also call EFI, which
> then panics as the NX bit is set.

Wow, you're running a lot of very lightly-used code paths!  You actually
found a similar but totally separate issue, from what I gather.  Thank
you immensely for the quick testing and bug reports!

Could you test the attached fix?

For those playing along at home, I think this will end up being needed
for 4.15 and probably all the backports.  I want to see if it works
before I submit it for real, though.

[-- Attachment #2: pti-tboot-fix.patch --]
[-- Type: text/x-patch, Size: 1350 bytes --]


From: Dave Hansen <dave.hansen@linux.intel.com>

This is another case similar to what EFI does: create a new set of
page tables, map some code at a low address, and jump to it.  PTI
mistakes this low address for userspace and mistakenly marks it
non-executable in an effort to make it unusable for userspace.  Undo
the poison to allow execution.

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Ning Sun <ning.sun@intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: x86@kernel.org
Cc: tboot-devel@lists.sourceforge.net
Cc: linux-kernel@vger.kernel.org
---

 b/arch/x86/kernel/tboot.c |    7 +++++++
 1 file changed, 7 insertions(+)

diff -puN arch/x86/kernel/tboot.c~pti-tboot-fix arch/x86/kernel/tboot.c
--- a/arch/x86/kernel/tboot.c~pti-tboot-fix	2018-01-05 21:50:55.755554960 -0800
+++ b/arch/x86/kernel/tboot.c	2018-01-05 22:01:51.393553325 -0800
@@ -124,6 +124,13 @@ static int map_tboot_page(unsigned long
 	pte_t *pte;
 
 	pgd = pgd_offset(&tboot_mm, vaddr);
+	/*
+	 * PTI poisons low addresses in the kernel page tables in the
+	 * name of making them unusable for userspace.  To execute
+	 * code at such a low address, the poison must be cleared.
+	 */
+	pgd->pgd &= ~_PAGE_NX;
+
 	p4d = p4d_alloc(&tboot_mm, pgd, vaddr);
 	if (!p4d)
 		return -1;
_
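
For context, here is a minimal sketch of the poisoning side that this
patch undoes.  It is an illustration only, loosely modeled on mainline's
__pti_set_user_pgd(); the helper name and the simplified check are
assumptions, not the actual implementation:

	/*
	 * Hypothetical sketch: when a PGD entry that maps a userspace
	 * address is written with _PAGE_USER set, PTI ORs in _PAGE_NX
	 * so the kernel copy of the page tables can never be used to
	 * execute user-supplied code (assumes the CPU supports NX).
	 */
	static pgd_t sketch_poison_user_pgd(pgd_t pgd)
	{
		if (pgd.pgd & _PAGE_USER)
			pgd.pgd |= _PAGE_NX;
		return pgd;
	}

As the follow-ups below show, the clear therefore has to happen after
the entry has actually been written, or a later set_pgd() will simply
re-apply the poison.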

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 05/23] x86, kaiser: unmap kernel from userspace page tables (core patch)
  2018-01-06  6:06               ` Dave Hansen
@ 2018-01-06  6:28                   ` Hanjun Guo
  0 siblings, 0 replies; 131+ messages in thread
From: Hanjun Guo @ 2018-01-06  6:28 UTC (permalink / raw)
  To: Dave Hansen, Jiri Kosina
  Cc: Yisheng Xie, linux-kernel, linux-mm, richard.fellner,
	moritz.lipp, daniel.gruss, michael.schwarz, luto, Linus Torvalds,
	keescook, hughd, x86, Andrea Arcangeli

Hi Dave,

Thank you very much for the quick response! Minor comments inline.

On 2018/1/6 14:06, Dave Hansen wrote:
> On 01/05/2018 08:54 PM, Hanjun Guo wrote:
>> Do you mean the NX bit will be brought back later? I'm asking because
>> I tested this patch and it fixed the boot panic issue, but the system
>> will hang when rebooting, because rebooting will also call EFI, which
>> then panics as the NX bit is set.
> Wow, you're running a lot of very lightly-used code paths!  You actually
> found a similar but totally separate issue, from what I gather.  Thank
> you immensely for the quick testing and bug reports!
> 
> Could you test the attached fix?
> 
> For those playing along at home, I think this will end up being needed
> for 4.15 and probably all the backports.  I want to see if it works
> before I submit it for real, though.
> 
> 
> pti-tboot-fix.patch
> 
> 
> From: Dave Hansen <dave.hansen@linux.intel.com>
> 
> This is another case similar to what EFI does: create a new set of
> page tables, map some code at a low address, and jump to it.  PTI
> mistakes this low address for userspace and mistakenly marks it
> non-executable in an effort to make it unusable for userspace.  Undo
> the poison to allow execution.
> 
> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
> Cc: Ning Sun <ning.sun@intel.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: x86@kernel.org
> Cc: tboot-devel@lists.sourceforge.net
> Cc: linux-kernel@vger.kernel.org
> ---
> 
>  b/arch/x86/kernel/tboot.c |    7 +++++++
>  1 file changed, 7 insertions(+)
> 
> diff -puN arch/x86/kernel/tboot.c~pti-tboot-fix arch/x86/kernel/tboot.c
> --- a/arch/x86/kernel/tboot.c~pti-tboot-fix	2018-01-05 21:50:55.755554960 -0800
> +++ b/arch/x86/kernel/tboot.c	2018-01-05 22:01:51.393553325 -0800
> @@ -124,6 +124,13 @@ static int map_tboot_page(unsigned long
>  	pte_t *pte;
>  
>  	pgd = pgd_offset(&tboot_mm, vaddr);
> +	/*
> +	 * PTI poisons low addresses in the kernel page tables in the
> +	 * name of making them unusable for userspace.  To execute
> +	 * code at such a low address, the poison must be cleared.
> +	 */
> +	pgd->pgd &= ~_PAGE_NX;

...

> +
>  	p4d = p4d_alloc(&tboot_mm, pgd, vaddr);

Seems the pgd will be re-set after p4d_alloc(), so should
we put the code after it (or after pud_alloc())?

>  	if (!p4d)
>  		return -1;

 +	/*
 +	 * PTI poisons low addresses in the kernel page tables in the
 +	 * name of making them unusable for userspace.  To execute
 +	 * code at such a low address, the poison must be cleared.
 +	 */
 +	pgd->pgd &= ~_PAGE_NX;

We will have a try in a minute, and report back later.
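
(To spell out the concern, a hypothetical sketch, assuming a PTI/KAISER
style set_pgd() hook that re-applies the poison on every write of a PGD
entry covering a userspace address:

	pgd->pgd &= ~_PAGE_NX;			/* cleared... */
	p4d = p4d_alloc(&tboot_mm, pgd, vaddr);	/* ...but the first
						 * allocation, p4d_alloc()
						 * or pud_alloc() depending
						 * on paging levels, rewrites
						 * the entry via set_pgd(),
						 * which sets _PAGE_NX again */

A clear placed before the allocations is simply overwritten.)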

Thanks
Hanjun

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 05/23] x86, kaiser: unmap kernel from userspace page tables (core patch)
  2018-01-06  6:28                   ` Hanjun Guo
@ 2018-01-06  6:53                     ` Hanjun Guo
  -1 siblings, 0 replies; 131+ messages in thread
From: Hanjun Guo @ 2018-01-06  6:53 UTC (permalink / raw)
  To: Dave Hansen, Jiri Kosina
  Cc: Yisheng Xie, linux-kernel, linux-mm, richard.fellner,
	moritz.lipp, daniel.gruss, michael.schwarz, luto, Linus Torvalds,
	keescook, hughd, x86, Andrea Arcangeli

On 2018/1/6 14:28, Hanjun Guo wrote:
> Hi Dave,
> 
> Thank you very much for the quick response! Minor comments inline.
> 
> On 2018/1/6 14:06, Dave Hansen wrote:
>> On 01/05/2018 08:54 PM, Hanjun Guo wrote:
>>> Do you mean the NX bit will be brought back later? I'm asking because
>>> I tested this patch and it fixed the boot panic issue, but the system
>>> will hang when rebooting, because rebooting will also call EFI, which
>>> then panics as the NX bit is set.
>> Wow, you're running a lot of very lightly-used code paths!  You actually
>> found a similar but totally separate issue, from what I gather.  Thank
>> you immensely for the quick testing and bug reports!
>>
>> Could you test the attached fix?
>>
>> For those playing along at home, I think this will end up being needed
>> for 4.15 and probably all the backports.  I want to see if it works
>> before I submit it for real, though.
>>
>>
>> pti-tboot-fix.patch
>>
>>
>> From: Dave Hansen <dave.hansen@linux.intel.com>
>>
>> This is another case similar to what EFI does: create a new set of
>> page tables, map some code at a low address, and jump to it.  PTI
>> mistakes this low address for userspace and mistakenly marks it
>> non-executable in an effort to make it unusable for userspace.  Undo
>> the poison to allow execution.
>>
>> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
>> Cc: Ning Sun <ning.sun@intel.com>
>> Cc: Thomas Gleixner <tglx@linutronix.de>
>> Cc: Ingo Molnar <mingo@redhat.com>
>> Cc: "H. Peter Anvin" <hpa@zytor.com>
>> Cc: x86@kernel.org
>> Cc: tboot-devel@lists.sourceforge.net
>> Cc: linux-kernel@vger.kernel.org
>> ---
>>
>>  b/arch/x86/kernel/tboot.c |    7 +++++++
>>  1 file changed, 7 insertions(+)
>>
>> diff -puN arch/x86/kernel/tboot.c~pti-tboot-fix arch/x86/kernel/tboot.c
>> --- a/arch/x86/kernel/tboot.c~pti-tboot-fix	2018-01-05 21:50:55.755554960 -0800
>> +++ b/arch/x86/kernel/tboot.c	2018-01-05 22:01:51.393553325 -0800
>> @@ -124,6 +124,13 @@ static int map_tboot_page(unsigned long
>>  	pte_t *pte;
>>  
>>  	pgd = pgd_offset(&tboot_mm, vaddr);
>> +	/*
>> +	 * PTI poisons low addresses in the kernel page tables in the
>> +	 * name of making them unusable for userspace.  To execute
>> +	 * code at such a low address, the poison must be cleared.
>> +	 */
>> +	pgd->pgd &= ~_PAGE_NX;
> 
> ...
> 
>> +
>>  	p4d = p4d_alloc(&tboot_mm, pgd, vaddr);
> 
> Seems the pgd will be re-set after p4d_alloc(), so should
> we put the code after it (or after pud_alloc())?
> 
>>  	if (!p4d)
>>  		return -1;
> 
>  +	/*
>  +	 * PTI poisons low addresses in the kernel page tables in the
>  +	 * name of making them unusable for userspace.  To execute
>  +	 * code at such a low address, the poison must be cleared.
>  +	 */
>  +	pgd->pgd &= ~_PAGE_NX;
> 
> We will have a try in a minute, and report back later.

And it works, we can boot/reboot the system successfully. Thank
you all for the quick response and debugging!

Thanks
Hanjun

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 05/23] x86, kaiser: unmap kernel from userspace page tables (core patch)
  2018-01-06  6:28                   ` Hanjun Guo
@ 2018-01-06  7:51                     ` Dave Hansen
  -1 siblings, 0 replies; 131+ messages in thread
From: Dave Hansen @ 2018-01-06  7:51 UTC (permalink / raw)
  To: Hanjun Guo, Jiri Kosina
  Cc: Yisheng Xie, linux-kernel, linux-mm, richard.fellner,
	moritz.lipp, daniel.gruss, michael.schwarz, luto, Linus Torvalds,
	keescook, hughd, x86, Andrea Arcangeli

On 01/05/2018 10:28 PM, Hanjun Guo wrote:
>> +
>>  	p4d = p4d_alloc(&tboot_mm, pgd, vaddr);
> Seems the pgd will be re-set after p4d_alloc(), so should
> we put the code after it (or after pud_alloc())?

<sigh> Yes, it has to go below where the PGD actually gets set, which is
after pud_alloc().  You can put it anywhere later in the function.
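
In other words, a minimal sketch of the corrected ordering, assuming
4-level paging where p4d_alloc() is a folded no-op and the first
pud_alloc() is what actually writes the (NX-poisoned) PGD entry:

	pgd = pgd_offset(&tboot_mm, vaddr);	/* entry may still be empty */
	p4d = p4d_alloc(&tboot_mm, pgd, vaddr);	/* no-op with a folded p4d */
	if (!p4d)
		return -1;
	pud = pud_alloc(&tboot_mm, p4d, vaddr);	/* populates the pgd entry;
						 * the NX poison is applied
						 * by the set_pgd() hook */
	if (!pud)
		return -1;
	pgd->pgd &= ~_PAGE_NX;			/* safe: the entry is set and
						 * is not rewritten by the
						 * later pmd/pte allocations */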

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 05/23] x86, kaiser: unmap kernel from userspace page tables (core patch)
  2018-01-06  6:53                     ` Hanjun Guo
@ 2018-01-06  7:55                       ` Dave Hansen
  -1 siblings, 0 replies; 131+ messages in thread
From: Dave Hansen @ 2018-01-06  7:55 UTC (permalink / raw)
  To: Hanjun Guo, Jiri Kosina
  Cc: Yisheng Xie, linux-kernel, linux-mm, richard.fellner,
	moritz.lipp, daniel.gruss, michael.schwarz, luto, Linus Torvalds,
	keescook, hughd, x86, Andrea Arcangeli

[-- Attachment #1: Type: text/plain, Size: 604 bytes --]

On 01/05/2018 10:53 PM, Hanjun Guo wrote:
>>  +	/*
>>  +	 * PTI poisons low addresses in the kernel page tables in the
>>  +	 * name of making them unusable for userspace.  To execute
>>  +	 * code at such a low address, the poison must be cleared.
>>  +	 */
>>  +	pgd->pgd &= ~_PAGE_NX;
>>
>> We will have a try in a minute, and report back later.
> And it works, we can boot/reboot the system successfully. Thank
> you all for the quick response and debugging!

I think I'll just submit the attached patch if there are no objections
(and if it works, of course!).

I just stuck the NX clearing at the bottom.

[-- Attachment #2: pti-tboot-fix.patch --]
[-- Type: text/x-patch, Size: 1449 bytes --]


From: Dave Hansen <dave.hansen@linux.intel.com>

This is another case similar to what EFI does: create a new set of
page tables, map some code at a low address, and jump to it.  PTI
mistakes this low address for userspace and mistakenly marks it
non-executable in an effort to make it unusable for userspace.  Undo
the poison to allow execution.

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Ning Sun <ning.sun@intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: x86@kernel.org
Cc: tboot-devel@lists.sourceforge.net
Cc: linux-kernel@vger.kernel.org
---

 b/arch/x86/kernel/tboot.c |   11 +++++++++++
 1 file changed, 11 insertions(+)

diff -puN arch/x86/kernel/tboot.c~pti-tboot-fix arch/x86/kernel/tboot.c
--- a/arch/x86/kernel/tboot.c~pti-tboot-fix	2018-01-05 21:50:55.755554960 -0800
+++ b/arch/x86/kernel/tboot.c	2018-01-05 23:51:41.368536890 -0800
@@ -138,6 +138,17 @@ static int map_tboot_page(unsigned long
 		return -1;
 	set_pte_at(&tboot_mm, vaddr, pte, pfn_pte(pfn, prot));
 	pte_unmap(pte);
+
+	/*
+	 * PTI poisons low addresses in the kernel page tables in the
+	 * name of making them unusable for userspace.  To execute
+	 * code at such a low address, the poison must be cleared.
+	 *
+	 * Note: 'pgd' actually gets set in p4d_alloc() _or_
+	 * pud_alloc() depending on 4/5-level paging.
+	 */
+	pgd->pgd &= ~_PAGE_NX;
+
 	return 0;
 }
 
_

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 05/23] x86, kaiser: unmap kernel from userspace page tables (core patch)
  2018-01-06  7:55                       ` Dave Hansen
@ 2018-01-06  8:42                         ` Hanjun Guo
  -1 siblings, 0 replies; 131+ messages in thread
From: Hanjun Guo @ 2018-01-06  8:42 UTC (permalink / raw)
  To: Dave Hansen, Jiri Kosina
  Cc: Yisheng Xie, linux-kernel, linux-mm, richard.fellner,
	moritz.lipp, daniel.gruss, michael.schwarz, luto, Linus Torvalds,
	keescook, hughd, x86, Andrea Arcangeli

On 2018/1/6 15:55, Dave Hansen wrote:
> On 01/05/2018 10:53 PM, Hanjun Guo wrote:
>>>  +	/*
>>>  +	 * PTI poisons low addresses in the kernel page tables in the
>>>  +	 * name of making them unusable for userspace.  To execute
>>>  +	 * code at such a low address, the poison must be cleared.
>>>  +	 */
>>>  +	pgd->pgd &= ~_PAGE_NX;
>>>
>>> We will have a try in a minute, and report back later.
>> And it works, we can boot/reboot the system successfully. Thank
>> you all for the quick response and debugging!
> I think I'll just submit the attached patch if there are no objections
> (and if it works, of course!).

We tested placing the NX clearing after pud_alloc() and it works, so the
patch below should work as well.

> 
> I just stuck the NX clearing at the bottom.
> 
> 
> pti-tboot-fix.patch
> 
> 
> From: Dave Hansen <dave.hansen@linux.intel.com>
> 
> This is another case similar to what EFI does: create a new set of
> page tables, map some code at a low address, and jump to it.  PTI
> mistakes this low address for userspace and mistakenly marks it
> non-executable in an effort to make it unusable for userspace.  Undo
> the poison to allow execution.
> 
> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
> Cc: Ning Sun <ning.sun@intel.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: x86@kernel.org
> Cc: tboot-devel@lists.sourceforge.net
> Cc: linux-kernel@vger.kernel.org
> ---
> 
>  b/arch/x86/kernel/tboot.c |   11 +++++++++++
>  1 file changed, 11 insertions(+)
> 
> diff -puN arch/x86/kernel/tboot.c~pti-tboot-fix arch/x86/kernel/tboot.c
> --- a/arch/x86/kernel/tboot.c~pti-tboot-fix	2018-01-05 21:50:55.755554960 -0800
> +++ b/arch/x86/kernel/tboot.c	2018-01-05 23:51:41.368536890 -0800
> @@ -138,6 +138,17 @@ static int map_tboot_page(unsigned long
>  		return -1;
>  	set_pte_at(&tboot_mm, vaddr, pte, pfn_pte(pfn, prot));
>  	pte_unmap(pte);
> +
> +	/*
> +	 * PTI poisons low addresses in the kernel page tables in the
> +	 * name of making them unusable for userspace.  To execute
> +	 * code at such a low address, the poison must be cleared.
> +	 *
> +	 * Note: 'pgd' actually gets set in p4d_alloc() _or_
> +	 * pud_alloc() depending on 4/5-level paging.
> +	 */
> +	pgd->pgd &= ~_PAGE_NX;
> +
>  	return 0;
>  }

Thanks
Hanjun

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 05/23] x86, kaiser: unmap kernel from userspace page tables (core patch)
  2018-01-06  7:51                     ` Dave Hansen
@ 2018-01-06 17:22                       ` Andrea Arcangeli
  -1 siblings, 0 replies; 131+ messages in thread
From: Andrea Arcangeli @ 2018-01-06 17:22 UTC (permalink / raw)
  To: Dave Hansen
  Cc: Hanjun Guo, Jiri Kosina, Yisheng Xie, linux-kernel, linux-mm,
	richard.fellner, moritz.lipp, daniel.gruss, michael.schwarz,
	luto, Linus Torvalds, keescook, hughd, x86

On Fri, Jan 05, 2018 at 11:51:38PM -0800, Dave Hansen wrote:
> On 01/05/2018 10:28 PM, Hanjun Guo wrote:
> >> +
> >>  	p4d = p4d_alloc(&tboot_mm, pgd, vaddr);
> > Seems the pgd will be re-set after p4d_alloc(), so should
> > we put the code after it (or after pud_alloc())?

Thanks Dave and Jiri for these two tboot and efi_64 fixes.

> 
> <sigh> Yes, it has to go below where the PGD actually gets set, which is
> after pud_alloc().  You can put it anywhere later in the function.

I made the exact same oversight yesterday when porting Jiri's fix.

efi_64 booted fine, verified yesterday, in a respin of what I sent here
with the NX clearing likewise moved after pud_alloc():

		pud = pud_alloc(&init_mm, pgd_efi, addr_pgd);
		if (!pud) {
			pr_err("Failed to allocate pud table!\n");
			break;
		}
+		pgd_efi->pgd &= ~_PAGE_NX;

Now I'm having this tested for tboot too (still untested). With tboot
I expect the first build to pass the test. All follow-ups on bz.

diff --git a/arch/x86/kernel/tboot.c b/arch/x86/kernel/tboot.c
index 088681d4fc45..09cff5f4f9a4 100644
--- a/arch/x86/kernel/tboot.c
+++ b/arch/x86/kernel/tboot.c
@@ -131,6 +131,7 @@ static int map_tboot_page(unsigned long vaddr, unsigned long pfn,
 	pud = pud_alloc(&tboot_mm, pgd, vaddr);
 	if (!pud)
 		return -1;
+	pgd->pgd &= ~_PAGE_NX;
 	pmd = pmd_alloc(&tboot_mm, pud, vaddr);
 	if (!pmd)
 		return -1;

Note that your upstream-submitted version is theoretically less correct
than the above.  It won't make a difference in practice, but it is
theoretically wrong to clear _PAGE_NX only if pte_alloc_map() succeeds,
as your patch does.

If in the future pte_alloc_map() fails and for whatever reason the pgd
is still used and the whole thing does not abort, your fix will still
end up with NX set in the pgd.

Only the first pud allocation establishes itself in the pgd; follow-ups
don't, because pgd_present() in __pud_alloc() returns true.

This is why I did the stricter backport of Jiri's fix yesterday, but I
was a little too strict in putting it just before pud_alloc(); it had to
go just after it.
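
To make the window concrete, a hypothetical sketch of the control flow
being criticized (the failure path shown is an assumption for
illustration, not the actual submitted patch):

	pud = pud_alloc(&tboot_mm, pgd, vaddr);	/* writes the pgd entry,
						 * NX poison applied */
	if (!pud)
		return -1;
	pmd = pmd_alloc(&tboot_mm, pud, vaddr);
	if (!pmd)
		return -1;	/* early exit: the pgd is populated but
				 * still NX-poisoned, because the clear
				 * at the end is never reached */
	/* ... pte_alloc_map(), set_pte_at(), then the NX clear ... */

Clearing immediately after pud_alloc(), as in the hunk above, closes
that window.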

Thanks,
Andrea

^ permalink raw reply related	[flat|nested] 131+ messages in thread

end of thread, other threads:[~2018-01-06 17:22 UTC | newest]

Thread overview: 131+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-11-23  0:34 [PATCH 00/23] [v4] KAISER: unmap most of the kernel from userspace page tables Dave Hansen
2017-11-23  0:34 ` Dave Hansen
2017-11-23  0:34 ` [PATCH 01/23] x86, kaiser: disable global pages by default with KAISER Dave Hansen
2017-11-23  0:34   ` Dave Hansen
2017-11-23  0:34 ` [PATCH 02/23] x86, kaiser: prepare assembly for entry/exit CR3 switching Dave Hansen
2017-11-23  0:34   ` Dave Hansen
2017-11-23  0:34 ` [PATCH 03/23] x86, kaiser: introduce user-mapped per-cpu areas Dave Hansen
2017-11-23  0:34   ` Dave Hansen
2017-11-23  0:34 ` [PATCH 04/23] x86, kaiser: mark per-cpu data structures required for entry/exit Dave Hansen
2017-11-23  0:34   ` Dave Hansen
2017-11-23  0:34 ` [PATCH 05/23] x86, kaiser: unmap kernel from userspace page tables (core patch) Dave Hansen
2017-11-23  0:34   ` Dave Hansen
2017-11-23  4:07   ` Andy Lutomirski
2017-11-23  4:07     ` Andy Lutomirski
2017-11-26 16:10     ` Andy Lutomirski
2017-11-26 16:10       ` Andy Lutomirski
2017-11-26 16:24       ` Dave Hansen
2017-11-26 16:24         ` Dave Hansen
2017-11-26 16:29         ` Andy Lutomirski
2017-11-26 16:29           ` Andy Lutomirski
2018-01-05  4:16   ` Yisheng Xie
2018-01-05  4:16     ` Yisheng Xie
2018-01-05  5:18     ` Dave Hansen
2018-01-05  5:18       ` Dave Hansen
2018-01-05  6:16       ` Yisheng Xie
2018-01-05  6:16         ` Yisheng Xie
2018-01-05  6:29         ` Dave Hansen
2018-01-05  6:29           ` Dave Hansen
2018-01-05 11:49           ` Andrea Arcangeli
2018-01-05 11:49             ` Andrea Arcangeli
2018-01-05 18:19           ` Jiri Kosina
2018-01-05 18:19             ` Jiri Kosina
2018-01-05 19:00             ` Jiri Kosina
2018-01-05 19:00               ` Jiri Kosina
2018-01-05 19:03             ` Dave Hansen
2018-01-05 19:03               ` Dave Hansen
2018-01-05 19:17               ` Jiri Kosina
2018-01-05 19:17                 ` Jiri Kosina
2018-01-05 19:18                 ` Jiri Kosina
2018-01-05 19:18                   ` Jiri Kosina
2018-01-05 19:55                 ` Andrea Arcangeli
2018-01-05 19:55                   ` Andrea Arcangeli
2018-01-05 21:07                 ` Dave Hansen
2018-01-05 21:07                   ` Dave Hansen
2018-01-05 21:14                   ` Jiri Kosina
2018-01-05 21:14                     ` Jiri Kosina
2018-01-05 21:29                     ` Andy Lutomirski
2018-01-05 21:29                       ` Andy Lutomirski
2018-01-05 22:48                     ` Hugh Dickins
2018-01-05 22:48                       ` Hugh Dickins
2018-01-06  4:54             ` Hanjun Guo
2018-01-06  4:54               ` Hanjun Guo
2018-01-06  6:06               ` Dave Hansen
2018-01-06  6:28                 ` Hanjun Guo
2018-01-06  6:28                   ` Hanjun Guo
2018-01-06  6:53                   ` Hanjun Guo
2018-01-06  6:53                     ` Hanjun Guo
2018-01-06  7:55                     ` Dave Hansen
2018-01-06  7:55                       ` Dave Hansen
2018-01-06  8:42                       ` Hanjun Guo
2018-01-06  8:42                         ` Hanjun Guo
2018-01-06  7:51                   ` Dave Hansen
2018-01-06  7:51                     ` Dave Hansen
2018-01-06 17:22                     ` Andrea Arcangeli
2018-01-06 17:22                       ` Andrea Arcangeli
2017-11-23  0:34 ` [PATCH 06/23] x86, kaiser: allow NX poison to be set in p4d/pgd Dave Hansen
2017-11-23  0:34   ` Dave Hansen
2017-11-23  0:34 ` [PATCH 07/23] x86, kaiser: make sure static PGDs are 8k in size Dave Hansen
2017-11-23  0:34   ` Dave Hansen
2017-11-23  0:34 ` [PATCH 08/23] x86, kaiser: map cpu entry area Dave Hansen
2017-11-23  0:34   ` Dave Hansen
2017-11-23  0:34 ` [PATCH 09/23] x86, kaiser: map dynamically-allocated LDTs Dave Hansen
2017-11-23  0:34   ` Dave Hansen
2017-11-23 19:42   ` Eric Biggers
2017-11-23 19:42     ` Eric Biggers
2017-11-23 20:12     ` Andy Lutomirski
2017-11-23 20:12       ` Andy Lutomirski
2017-11-23  0:34 ` [PATCH 10/23] x86, kaiser: map espfix structures Dave Hansen
2017-11-23  0:34   ` Dave Hansen
2017-11-23  0:34 ` [PATCH 11/23] x86, kaiser: map entry stack variables Dave Hansen
2017-11-23  0:34   ` Dave Hansen
2017-11-23  3:31   ` Andy Lutomirski
2017-11-23  3:31     ` Andy Lutomirski
2017-11-23 15:37     ` Dave Hansen
2017-11-23 15:37       ` Dave Hansen
2017-11-23 15:55       ` Andy Lutomirski
2017-11-23 15:55         ` Andy Lutomirski
2017-11-23  0:35 ` [PATCH 12/23] x86, kaiser: map virtually-addressed performance monitoring buffers Dave Hansen
2017-11-23  0:35   ` Dave Hansen
2017-11-23  0:35 ` [PATCH 13/23] x86, mm: Move CR3 construction functions Dave Hansen
2017-11-23  0:35   ` Dave Hansen
2017-11-23  0:35 ` [PATCH 14/23] x86, mm: remove hard-coded ASID limit checks Dave Hansen
2017-11-23  0:35   ` Dave Hansen
2017-11-23  0:35 ` [PATCH 15/23] x86, mm: put mmu-to-h/w ASID translation in one place Dave Hansen
2017-11-23  0:35   ` Dave Hansen
2017-11-23  0:35 ` [PATCH 16/23] x86, pcid, kaiser: allow flushing for future ASID switches Dave Hansen
2017-11-23  0:35   ` Dave Hansen
2017-11-23  0:35 ` [PATCH 17/23] x86, kaiser: use PCID feature to make user and kernel switches faster Dave Hansen
2017-11-23  0:35   ` Dave Hansen
2017-11-23  0:35 ` [PATCH 18/23] x86, kaiser: disable native VSYSCALL Dave Hansen
2017-11-23  0:35   ` Dave Hansen
2017-11-23  0:35 ` [PATCH 19/23] x86, kaiser: add debugfs file to turn KAISER on/off at runtime Dave Hansen
2017-11-23  0:35   ` Dave Hansen
2017-11-23  0:35 ` [PATCH 20/23] x86, kaiser: add a function to check for KAISER being enabled Dave Hansen
2017-11-23  0:35   ` Dave Hansen
2017-11-25  1:23   ` Eduardo Valentin
2017-11-25  1:23     ` Eduardo Valentin
2017-11-23  0:35 ` [PATCH 21/23] x86, kaiser: un-poison PGDs at runtime Dave Hansen
2017-11-23  0:35   ` Dave Hansen
2017-11-25  1:17   ` Eduardo Valentin
2017-11-25  1:17     ` Eduardo Valentin
2017-11-23  0:35 ` [PATCH 22/23] x86, kaiser: allow KAISER to be enabled/disabled " Dave Hansen
2017-11-23  0:35   ` Dave Hansen
2017-11-23  0:35 ` [PATCH 23/23] x86, kaiser: add Kconfig Dave Hansen
2017-11-23  0:35   ` Dave Hansen
2017-11-23  7:23 ` [PATCH 00/23] [v4] KAISER: unmap most of the kernel from userspace page tables Ingo Molnar
2017-11-23  7:23   ` Ingo Molnar
2017-11-23  7:27 ` Ingo Molnar
2017-11-23  7:27   ` Ingo Molnar
2017-11-23  7:32   ` Ingo Molnar
2017-11-23  7:32     ` Ingo Molnar
2017-11-23 15:02     ` Dave Hansen
2017-11-23 15:02       ` Dave Hansen
2017-11-23 16:20 ` Dave Hansen
2017-11-23 16:20   ` Dave Hansen
2017-11-24  6:35   ` Ingo Molnar
2017-11-24  6:35     ` Ingo Molnar
2017-11-24  6:41     ` Dave Hansen
2017-11-24  6:41       ` Dave Hansen
2017-11-24  7:33       ` Ingo Molnar
2017-11-24  7:33         ` Ingo Molnar
