* [PATCH v2 0/7] x86: BUG() on #GP / kernel #PF in uaccess
@ 2018-08-27 18:56 Jann Horn
  2018-08-27 18:56 ` [PATCH v2 1/7] x86: refactor kprobes_fault() like kprobe_exceptions_notify() Jann Horn
                   ` (6 more replies)
  0 siblings, 7 replies; 17+ messages in thread
From: Jann Horn @ 2018-08-27 18:56 UTC (permalink / raw)
  To: Kees Cook, Thomas Gleixner, Ingo Molnar, x86, Andy Lutomirski,
	kernel-hardening, Tony Luck, Borislav Petkov, jannh
  Cc: linux-kernel, dvyukov, linux-edac

This is the second version of "[RFC PATCH 1/2] x86: WARN() when uaccess
helpers fault on kernel addresses".

See patch 6/7 ("x86: BUG() when uaccess helpers fault on kernel
addresses") for a description of the motivation for this series.

Patches 1 and 2 are cleanups that I did while working on this
series, but the series doesn't depend on them. (I first thought these
cleanups were necessary for the rest of the series, then noticed that
they actually aren't, but decided to keep them since cleanups are good
anyway.)

Patches 3, 4 and 5 are prep work; 4 and 5 are loosely based on code
from the v1 patch. They've changed quite a bit though.

Patch 6 is the main semantic change.

Patch 7 is a small testcase for verifying that patch 6 works.

Jann Horn (7):
  x86: refactor kprobes_fault() like kprobe_exceptions_notify()
  x86: inline kprobe_exceptions_notify() into do_general_protection()
  x86: stop calling fixup_exception() from kprobe_fault_handler()
  x86: introduce _ASM_EXTABLE_UA for uaccess fixups
  x86: plumb error code and fault address through to fault handlers
  x86: BUG() when uaccess helpers fault on kernel addresses
  lkdtm: test copy_to_user() on bad kernel pointer under KERNEL_DS

 arch/x86/include/asm/asm.h          |  10 ++-
 arch/x86/include/asm/extable.h      |   3 +-
 arch/x86/include/asm/fpu/internal.h |   2 +-
 arch/x86/include/asm/futex.h        |   6 +-
 arch/x86/include/asm/ptrace.h       |   2 +
 arch/x86/include/asm/uaccess.h      |  22 ++---
 arch/x86/kernel/cpu/mcheck/mce.c    |   2 +-
 arch/x86/kernel/kprobes/core.c      |  38 +--------
 arch/x86/kernel/traps.c             |  16 +++-
 arch/x86/lib/checksum_32.S          |   4 +-
 arch/x86/lib/copy_user_64.S         |  90 ++++++++++----------
 arch/x86/lib/csum-copy_64.S         |   8 +-
 arch/x86/lib/getuser.S              |  12 +--
 arch/x86/lib/putuser.S              |  10 +--
 arch/x86/lib/usercopy_32.c          | 126 ++++++++++++++--------------
 arch/x86/lib/usercopy_64.c          |   4 +-
 arch/x86/mm/extable.c               | 104 +++++++++++++++++++----
 arch/x86/mm/fault.c                 |  21 ++---
 drivers/misc/lkdtm/core.c           |   1 +
 drivers/misc/lkdtm/lkdtm.h          |   1 +
 drivers/misc/lkdtm/usercopy.c       |  13 +++
 include/linux/sched.h               |   6 ++
 mm/maccess.c                        |   6 ++
 23 files changed, 297 insertions(+), 210 deletions(-)

-- 
2.19.0.rc0.228.g281dcd1b4d0-goog



* [PATCH v2 1/7] x86: refactor kprobes_fault() like kprobe_exceptions_notify()
  2018-08-27 18:56 [PATCH v2 0/7] x86: BUG() on #GP / kernel #PF in uaccess Jann Horn
@ 2018-08-27 18:56 ` Jann Horn
  2018-08-28 11:36   ` Borislav Petkov
  2018-08-27 18:56 ` [PATCH v2 2/7] x86: inline kprobe_exceptions_notify() into do_general_protection() Jann Horn
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 17+ messages in thread
From: Jann Horn @ 2018-08-27 18:56 UTC (permalink / raw)
  To: Kees Cook, Thomas Gleixner, Ingo Molnar, x86, Andy Lutomirski,
	kernel-hardening, Tony Luck, Borislav Petkov, jannh
  Cc: linux-kernel, dvyukov, linux-edac

This is an extension of commit b506a9d08bae ("x86: code clarification patch
to Kprobes arch code"). As that commit explains, even though
kprobe_running() can't be called with preemption enabled, there is no need
to disable preemption here: if preemption is enabled, the fault can't have
come from a kprobe we are currently processing, so there is nothing for the
kprobe code to handle anyway.

Also, use X86_TRAP_PF instead of the magic number 14.

Signed-off-by: Jann Horn <jannh@google.com>
---
 arch/x86/mm/fault.c | 19 ++++++++-----------
 1 file changed, 8 insertions(+), 11 deletions(-)

diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index b9123c497e0a..2254a30533b9 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -44,17 +44,14 @@ kmmio_fault(struct pt_regs *regs, unsigned long addr)
 
 static nokprobe_inline int kprobes_fault(struct pt_regs *regs)
 {
-	int ret = 0;
-
-	/* kprobe_running() needs smp_processor_id() */
-	if (kprobes_built_in() && !user_mode(regs)) {
-		preempt_disable();
-		if (kprobe_running() && kprobe_fault_handler(regs, 14))
-			ret = 1;
-		preempt_enable();
-	}
-
-	return ret;
+	/*
+	 * To be potentially processing a kprobe fault and to be allowed to call
+	 * kprobe_running(), we have to be non-preemptible.
+	 */
+	if (kprobes_built_in() && !user_mode(regs) && !preemptible() &&
+	    kprobe_running() && kprobe_fault_handler(regs, X86_TRAP_PF))
+		return 1;
+	return 0;
 }
 
 /*
-- 
2.19.0.rc0.228.g281dcd1b4d0-goog



* [PATCH v2 2/7] x86: inline kprobe_exceptions_notify() into do_general_protection()
  2018-08-27 18:56 [PATCH v2 0/7] x86: BUG() on #GP / kernel #PF in uaccess Jann Horn
  2018-08-27 18:56 ` [PATCH v2 1/7] x86: refactor kprobes_fault() like kprobe_exceptions_notify() Jann Horn
@ 2018-08-27 18:56 ` Jann Horn
  2018-08-27 18:56 ` [PATCH v2 3/7] x86: stop calling fixup_exception() from kprobe_fault_handler() Jann Horn
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 17+ messages in thread
From: Jann Horn @ 2018-08-27 18:56 UTC (permalink / raw)
  To: Kees Cook, Thomas Gleixner, Ingo Molnar, x86, Andy Lutomirski,
	kernel-hardening, Tony Luck, Borislav Petkov, jannh
  Cc: linux-kernel, dvyukov, linux-edac

The opaque plumbing of #GP from do_general_protection() through
notify_die() into kprobe_exceptions_notify() makes it hard to understand
what's going on. Instead, inline the kprobe check directly into
do_general_protection() so that the control flow is visible at the call
site.

Suggested-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Jann Horn <jannh@google.com>
---
 arch/x86/kernel/kprobes/core.c | 31 +------------------------------
 arch/x86/kernel/traps.c        | 10 ++++++++++
 2 files changed, 11 insertions(+), 30 deletions(-)

diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index b0d1e81c96bb..467ac22691b0 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -1028,42 +1028,13 @@ int kprobe_fault_handler(struct pt_regs *regs, int trapnr)
 		if (fixup_exception(regs, trapnr))
 			return 1;
 
-		/*
-		 * fixup routine could not handle it,
-		 * Let do_page_fault() fix it.
-		 */
+		/* fixup routine could not handle it. */
 	}
 
 	return 0;
 }
 NOKPROBE_SYMBOL(kprobe_fault_handler);
 
-/*
- * Wrapper routine for handling exceptions.
- */
-int kprobe_exceptions_notify(struct notifier_block *self, unsigned long val,
-			     void *data)
-{
-	struct die_args *args = data;
-	int ret = NOTIFY_DONE;
-
-	if (args->regs && user_mode(args->regs))
-		return ret;
-
-	if (val == DIE_GPF) {
-		/*
-		 * To be potentially processing a kprobe fault and to
-		 * trust the result from kprobe_running(), we have
-		 * be non-preemptible.
-		 */
-		if (!preemptible() && kprobe_running() &&
-		    kprobe_fault_handler(args->regs, args->trapnr))
-			ret = NOTIFY_STOP;
-	}
-	return ret;
-}
-NOKPROBE_SYMBOL(kprobe_exceptions_notify);
-
 bool arch_within_kprobe_blacklist(unsigned long addr)
 {
 	bool is_in_entry_trampoline_section = false;
diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
index e6db475164ed..bf9ab1aaa175 100644
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -556,6 +556,16 @@ do_general_protection(struct pt_regs *regs, long error_code)
 
 		tsk->thread.error_code = error_code;
 		tsk->thread.trap_nr = X86_TRAP_GP;
+
+		/*
+		 * To be potentially processing a kprobe fault and to
+		 * trust the result from kprobe_running(), we have to
+		 * be non-preemptible.
+		 */
+		if (!preemptible() && kprobe_running() &&
+		    kprobe_fault_handler(regs, X86_TRAP_GP))
+			return;
+
 		if (notify_die(DIE_GPF, "general protection fault", regs, error_code,
 			       X86_TRAP_GP, SIGSEGV) != NOTIFY_STOP)
 			die("general protection fault", regs, error_code);
-- 
2.19.0.rc0.228.g281dcd1b4d0-goog



* [PATCH v2 3/7] x86: stop calling fixup_exception() from kprobe_fault_handler()
  2018-08-27 18:56 [PATCH v2 0/7] x86: BUG() on #GP / kernel #PF in uaccess Jann Horn
  2018-08-27 18:56 ` [PATCH v2 1/7] x86: refactor kprobes_fault() like kprobe_exceptions_notify() Jann Horn
  2018-08-27 18:56 ` [PATCH v2 2/7] x86: inline kprobe_exceptions_notify() into do_general_protection() Jann Horn
@ 2018-08-27 18:56 ` Jann Horn
  2018-08-27 19:02   ` Andy Lutomirski
  2018-08-27 18:56 ` [PATCH v2 4/7] x86: introduce _ASM_EXTABLE_UA for uaccess fixups Jann Horn
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 17+ messages in thread
From: Jann Horn @ 2018-08-27 18:56 UTC (permalink / raw)
  To: Kees Cook, Thomas Gleixner, Ingo Molnar, x86, Andy Lutomirski,
	kernel-hardening, Tony Luck, Borislav Petkov, jannh
  Cc: linux-kernel, dvyukov, linux-edac

This removes the call into exception fixup that was added in
commit c28f896634f2 ("[PATCH] kprobes: fix broken fault handling for
x86_64").

On X86, kprobe_fault_handler() is called from two places:
do_general_protection() (for #GP) and kprobes_fault() (for #PF).
In both paths, the fixup_exception() call in the kprobe fault handler is
redundant.

For #GP, fixup_exception() is called immediately before
kprobe_fault_handler() is invoked - if someone wanted to fix up our #GP,
they've already done so, no need to try again. (This assumes that the
kprobe's fault handler isn't going to do something crazy like changing RIP
so that it suddenly points to an instruction that does userspace access.)

For #PF on a kernel address from kernel space, after the kprobe fault
handler has run, we'll go into no_context(), which calls fixup_exception().
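
For reference, the resulting call chain for a kernel-mode #PF on a kernel
address is then roughly the following (a sketch from reading the code, not
verbatim):

	do_page_fault()
	  -> kprobes_fault()           /* kprobe handler; no fixup here */
	  ...
	  -> bad_area_nosemaphore()
	       -> no_context()
	            -> fixup_exception()  /* the one remaining fixup call */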

Signed-off-by: Jann Horn <jannh@google.com>
---
 arch/x86/kernel/kprobes/core.c | 7 -------
 1 file changed, 7 deletions(-)

diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index 467ac22691b0..7315ac202aad 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -1021,13 +1021,6 @@ int kprobe_fault_handler(struct pt_regs *regs, int trapnr)
 		if (cur->fault_handler && cur->fault_handler(cur, regs, trapnr))
 			return 1;
 
-		/*
-		 * In case the user-specified fault handler returned
-		 * zero, try to fix up.
-		 */
-		if (fixup_exception(regs, trapnr))
-			return 1;
-
 		/* fixup routine could not handle it. */
 	}
 
-- 
2.19.0.rc0.228.g281dcd1b4d0-goog



* [PATCH v2 4/7] x86: introduce _ASM_EXTABLE_UA for uaccess fixups
  2018-08-27 18:56 [PATCH v2 0/7] x86: BUG() on #GP / kernel #PF in uaccess Jann Horn
                   ` (2 preceding siblings ...)
  2018-08-27 18:56 ` [PATCH v2 3/7] x86: stop calling fixup_exception() from kprobe_fault_handler() Jann Horn
@ 2018-08-27 18:56 ` Jann Horn
  2018-08-27 18:56 ` [PATCH v2 5/7] x86: plumb error code and fault address through to fault handlers Jann Horn
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 17+ messages in thread
From: Jann Horn @ 2018-08-27 18:56 UTC (permalink / raw)
  To: Kees Cook, Thomas Gleixner, Ingo Molnar, x86, Andy Lutomirski,
	kernel-hardening, Tony Luck, Borislav Petkov, jannh
  Cc: linux-kernel, dvyukov, linux-edac

Currently, most fixups for attempts to access userspace memory are handled
using _ASM_EXTABLE, which is also used for various other types of fixups
(e.g. safe MSR access, IRET failures, and a bunch of other things).
In order to make it possible to add special safety checks to uaccess fixups
(in particular, checking whether the fault address is actually in
userspace), introduce a new exception table handler, ex_handler_uaccess(),
and wire it up to all the user access fixups (excluding the ones that
already use _ASM_EXTABLE_EX).
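
For illustration, a minimal sketch of how a fixup gets wired up with the
new macro -- a hypothetical helper modeled on the __get_user_asm() users
converted below, not something this patch adds:

	/*
	 * A fault on the access at label 1 jumps to the fixup at label 3;
	 * the exception table entry emitted by _ASM_EXTABLE_UA makes that
	 * fixup go through ex_handler_uaccess() instead of
	 * ex_handler_default().
	 */
	#define example_get_user_u8(x, ptr, err)			\
		asm volatile("1:	movb %2,%b1\n"			\
			     "2:\n"					\
			     ".section .fixup,\"ax\"\n"			\
			     "3:	mov %3,%0\n"			\
			     "	jmp 2b\n"				\
			     ".previous\n"				\
			     _ASM_EXTABLE_UA(1b, 3b)			\
			     : "=r" (err), "=q" (x)			\
			     : "m" (*(ptr)), "i" (-EFAULT), "0" (err))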

Signed-off-by: Jann Horn <jannh@google.com>
---
NOTE: I haven't touched the two uaccess fixups in asm/fpu/internal.h
because I'm not sure about what other kinds of exceptions those might
have to handle.

 arch/x86/include/asm/asm.h          |  10 ++-
 arch/x86/include/asm/fpu/internal.h |   2 +-
 arch/x86/include/asm/futex.h        |   6 +-
 arch/x86/include/asm/uaccess.h      |  22 ++---
 arch/x86/lib/checksum_32.S          |   4 +-
 arch/x86/lib/copy_user_64.S         |  90 ++++++++++----------
 arch/x86/lib/csum-copy_64.S         |   8 +-
 arch/x86/lib/getuser.S              |  12 +--
 arch/x86/lib/putuser.S              |  10 +--
 arch/x86/lib/usercopy_32.c          | 126 ++++++++++++++--------------
 arch/x86/lib/usercopy_64.c          |   4 +-
 arch/x86/mm/extable.c               |   8 ++
 12 files changed, 160 insertions(+), 142 deletions(-)

diff --git a/arch/x86/include/asm/asm.h b/arch/x86/include/asm/asm.h
index 990770f9e76b..6467757bb39f 100644
--- a/arch/x86/include/asm/asm.h
+++ b/arch/x86/include/asm/asm.h
@@ -130,6 +130,9 @@
 # define _ASM_EXTABLE(from, to)					\
 	_ASM_EXTABLE_HANDLE(from, to, ex_handler_default)
 
+# define _ASM_EXTABLE_UA(from, to)				\
+	_ASM_EXTABLE_HANDLE(from, to, ex_handler_uaccess)
+
 # define _ASM_EXTABLE_FAULT(from, to)				\
 	_ASM_EXTABLE_HANDLE(from, to, ex_handler_fault)
 
@@ -165,8 +168,8 @@
 	jmp copy_user_handle_tail
 	.previous
 
-	_ASM_EXTABLE(100b,103b)
-	_ASM_EXTABLE(101b,103b)
+	_ASM_EXTABLE_UA(100b, 103b)
+	_ASM_EXTABLE_UA(101b, 103b)
 	.endm
 
 #else
@@ -182,6 +185,9 @@
 # define _ASM_EXTABLE(from, to)					\
 	_ASM_EXTABLE_HANDLE(from, to, ex_handler_default)
 
+# define _ASM_EXTABLE_UA(from, to)				\
+	_ASM_EXTABLE_HANDLE(from, to, ex_handler_uaccess)
+
 # define _ASM_EXTABLE_FAULT(from, to)				\
 	_ASM_EXTABLE_HANDLE(from, to, ex_handler_fault)
 
diff --git a/arch/x86/include/asm/fpu/internal.h b/arch/x86/include/asm/fpu/internal.h
index a38bf5a1e37a..068ff7689268 100644
--- a/arch/x86/include/asm/fpu/internal.h
+++ b/arch/x86/include/asm/fpu/internal.h
@@ -226,7 +226,7 @@ static inline void copy_fxregs_to_kernel(struct fpu *fpu)
 		     "3: movl $-2,%[err]\n\t"				\
 		     "jmp 2b\n\t"					\
 		     ".popsection\n\t"					\
-		     _ASM_EXTABLE(1b, 3b)				\
+		     _ASM_EXTABLE_UA(1b, 3b)				\
 		     : [err] "=r" (err)					\
 		     : "D" (st), "m" (*st), "a" (lmask), "d" (hmask)	\
 		     : "memory")
diff --git a/arch/x86/include/asm/futex.h b/arch/x86/include/asm/futex.h
index de4d68852d3a..13c83fe97988 100644
--- a/arch/x86/include/asm/futex.h
+++ b/arch/x86/include/asm/futex.h
@@ -20,7 +20,7 @@
 		     "3:\tmov\t%3, %1\n"			\
 		     "\tjmp\t2b\n"				\
 		     "\t.previous\n"				\
-		     _ASM_EXTABLE(1b, 3b)			\
+		     _ASM_EXTABLE_UA(1b, 3b)			\
 		     : "=r" (oldval), "=r" (ret), "+m" (*uaddr)	\
 		     : "i" (-EFAULT), "0" (oparg), "1" (0))
 
@@ -36,8 +36,8 @@
 		     "4:\tmov\t%5, %1\n"			\
 		     "\tjmp\t3b\n"				\
 		     "\t.previous\n"				\
-		     _ASM_EXTABLE(1b, 4b)			\
-		     _ASM_EXTABLE(2b, 4b)			\
+		     _ASM_EXTABLE_UA(1b, 4b)			\
+		     _ASM_EXTABLE_UA(2b, 4b)			\
 		     : "=&a" (oldval), "=&r" (ret),		\
 		       "+m" (*uaddr), "=&r" (tem)		\
 		     : "r" (oparg), "i" (-EFAULT), "1" (0))
diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index aae77eb8491c..b5e58cc0c5e7 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -198,8 +198,8 @@ __typeof__(__builtin_choose_expr(sizeof(x) > sizeof(0UL), 0ULL, 0UL))
 		     "4:	movl %3,%0\n"				\
 		     "	jmp 3b\n"					\
 		     ".previous\n"					\
-		     _ASM_EXTABLE(1b, 4b)				\
-		     _ASM_EXTABLE(2b, 4b)				\
+		     _ASM_EXTABLE_UA(1b, 4b)				\
+		     _ASM_EXTABLE_UA(2b, 4b)				\
 		     : "=r" (err)					\
 		     : "A" (x), "r" (addr), "i" (errret), "0" (err))
 
@@ -340,8 +340,8 @@ do {									\
 		     "	xorl %%edx,%%edx\n"				\
 		     "	jmp 3b\n"					\
 		     ".previous\n"					\
-		     _ASM_EXTABLE(1b, 4b)				\
-		     _ASM_EXTABLE(2b, 4b)				\
+		     _ASM_EXTABLE_UA(1b, 4b)				\
+		     _ASM_EXTABLE_UA(2b, 4b)				\
 		     : "=r" (retval), "=&A"(x)				\
 		     : "m" (__m(__ptr)), "m" __m(((u32 __user *)(__ptr)) + 1),	\
 		       "i" (errret), "0" (retval));			\
@@ -386,7 +386,7 @@ do {									\
 		     "	xor"itype" %"rtype"1,%"rtype"1\n"		\
 		     "	jmp 2b\n"					\
 		     ".previous\n"					\
-		     _ASM_EXTABLE(1b, 3b)				\
+		     _ASM_EXTABLE_UA(1b, 3b)				\
 		     : "=r" (err), ltype(x)				\
 		     : "m" (__m(addr)), "i" (errret), "0" (err))
 
@@ -398,7 +398,7 @@ do {									\
 		     "3:	mov %3,%0\n"				\
 		     "	jmp 2b\n"					\
 		     ".previous\n"					\
-		     _ASM_EXTABLE(1b, 3b)				\
+		     _ASM_EXTABLE_UA(1b, 3b)				\
 		     : "=r" (err), ltype(x)				\
 		     : "m" (__m(addr)), "i" (errret), "0" (err))
 
@@ -474,7 +474,7 @@ struct __large_struct { unsigned long buf[100]; };
 		     "3:	mov %3,%0\n"				\
 		     "	jmp 2b\n"					\
 		     ".previous\n"					\
-		     _ASM_EXTABLE(1b, 3b)				\
+		     _ASM_EXTABLE_UA(1b, 3b)				\
 		     : "=r"(err)					\
 		     : ltype(x), "m" (__m(addr)), "i" (errret), "0" (err))
 
@@ -602,7 +602,7 @@ extern void __cmpxchg_wrong_size(void)
 			"3:\tmov     %3, %0\n"				\
 			"\tjmp     2b\n"				\
 			"\t.previous\n"					\
-			_ASM_EXTABLE(1b, 3b)				\
+			_ASM_EXTABLE_UA(1b, 3b)				\
 			: "+r" (__ret), "=a" (__old), "+m" (*(ptr))	\
 			: "i" (-EFAULT), "q" (__new), "1" (__old)	\
 			: "memory"					\
@@ -618,7 +618,7 @@ extern void __cmpxchg_wrong_size(void)
 			"3:\tmov     %3, %0\n"				\
 			"\tjmp     2b\n"				\
 			"\t.previous\n"					\
-			_ASM_EXTABLE(1b, 3b)				\
+			_ASM_EXTABLE_UA(1b, 3b)				\
 			: "+r" (__ret), "=a" (__old), "+m" (*(ptr))	\
 			: "i" (-EFAULT), "r" (__new), "1" (__old)	\
 			: "memory"					\
@@ -634,7 +634,7 @@ extern void __cmpxchg_wrong_size(void)
 			"3:\tmov     %3, %0\n"				\
 			"\tjmp     2b\n"				\
 			"\t.previous\n"					\
-			_ASM_EXTABLE(1b, 3b)				\
+			_ASM_EXTABLE_UA(1b, 3b)				\
 			: "+r" (__ret), "=a" (__old), "+m" (*(ptr))	\
 			: "i" (-EFAULT), "r" (__new), "1" (__old)	\
 			: "memory"					\
@@ -653,7 +653,7 @@ extern void __cmpxchg_wrong_size(void)
 			"3:\tmov     %3, %0\n"				\
 			"\tjmp     2b\n"				\
 			"\t.previous\n"					\
-			_ASM_EXTABLE(1b, 3b)				\
+			_ASM_EXTABLE_UA(1b, 3b)				\
 			: "+r" (__ret), "=a" (__old), "+m" (*(ptr))	\
 			: "i" (-EFAULT), "r" (__new), "1" (__old)	\
 			: "memory"					\
diff --git a/arch/x86/lib/checksum_32.S b/arch/x86/lib/checksum_32.S
index 46e71a74e612..ad8e0906d1ea 100644
--- a/arch/x86/lib/checksum_32.S
+++ b/arch/x86/lib/checksum_32.S
@@ -273,11 +273,11 @@ unsigned int csum_partial_copy_generic (const char *src, char *dst,
 
 #define SRC(y...)			\
 	9999: y;			\
-	_ASM_EXTABLE(9999b, 6001f)
+	_ASM_EXTABLE_UA(9999b, 6001f)
 
 #define DST(y...)			\
 	9999: y;			\
-	_ASM_EXTABLE(9999b, 6002f)
+	_ASM_EXTABLE_UA(9999b, 6002f)
 
 #ifndef CONFIG_X86_USE_PPRO_CHECKSUM
 
diff --git a/arch/x86/lib/copy_user_64.S b/arch/x86/lib/copy_user_64.S
index 020f75cc8cf6..db4e5aa0858b 100644
--- a/arch/x86/lib/copy_user_64.S
+++ b/arch/x86/lib/copy_user_64.S
@@ -92,26 +92,26 @@ ENTRY(copy_user_generic_unrolled)
 60:	jmp copy_user_handle_tail /* ecx is zerorest also */
 	.previous
 
-	_ASM_EXTABLE(1b,30b)
-	_ASM_EXTABLE(2b,30b)
-	_ASM_EXTABLE(3b,30b)
-	_ASM_EXTABLE(4b,30b)
-	_ASM_EXTABLE(5b,30b)
-	_ASM_EXTABLE(6b,30b)
-	_ASM_EXTABLE(7b,30b)
-	_ASM_EXTABLE(8b,30b)
-	_ASM_EXTABLE(9b,30b)
-	_ASM_EXTABLE(10b,30b)
-	_ASM_EXTABLE(11b,30b)
-	_ASM_EXTABLE(12b,30b)
-	_ASM_EXTABLE(13b,30b)
-	_ASM_EXTABLE(14b,30b)
-	_ASM_EXTABLE(15b,30b)
-	_ASM_EXTABLE(16b,30b)
-	_ASM_EXTABLE(18b,40b)
-	_ASM_EXTABLE(19b,40b)
-	_ASM_EXTABLE(21b,50b)
-	_ASM_EXTABLE(22b,50b)
+	_ASM_EXTABLE_UA(1b, 30b)
+	_ASM_EXTABLE_UA(2b, 30b)
+	_ASM_EXTABLE_UA(3b, 30b)
+	_ASM_EXTABLE_UA(4b, 30b)
+	_ASM_EXTABLE_UA(5b, 30b)
+	_ASM_EXTABLE_UA(6b, 30b)
+	_ASM_EXTABLE_UA(7b, 30b)
+	_ASM_EXTABLE_UA(8b, 30b)
+	_ASM_EXTABLE_UA(9b, 30b)
+	_ASM_EXTABLE_UA(10b, 30b)
+	_ASM_EXTABLE_UA(11b, 30b)
+	_ASM_EXTABLE_UA(12b, 30b)
+	_ASM_EXTABLE_UA(13b, 30b)
+	_ASM_EXTABLE_UA(14b, 30b)
+	_ASM_EXTABLE_UA(15b, 30b)
+	_ASM_EXTABLE_UA(16b, 30b)
+	_ASM_EXTABLE_UA(18b, 40b)
+	_ASM_EXTABLE_UA(19b, 40b)
+	_ASM_EXTABLE_UA(21b, 50b)
+	_ASM_EXTABLE_UA(22b, 50b)
 ENDPROC(copy_user_generic_unrolled)
 EXPORT_SYMBOL(copy_user_generic_unrolled)
 
@@ -156,8 +156,8 @@ ENTRY(copy_user_generic_string)
 	jmp copy_user_handle_tail
 	.previous
 
-	_ASM_EXTABLE(1b,11b)
-	_ASM_EXTABLE(3b,12b)
+	_ASM_EXTABLE_UA(1b, 11b)
+	_ASM_EXTABLE_UA(3b, 12b)
 ENDPROC(copy_user_generic_string)
 EXPORT_SYMBOL(copy_user_generic_string)
 
@@ -189,7 +189,7 @@ ENTRY(copy_user_enhanced_fast_string)
 	jmp copy_user_handle_tail
 	.previous
 
-	_ASM_EXTABLE(1b,12b)
+	_ASM_EXTABLE_UA(1b, 12b)
 ENDPROC(copy_user_enhanced_fast_string)
 EXPORT_SYMBOL(copy_user_enhanced_fast_string)
 
@@ -319,27 +319,27 @@ ENTRY(__copy_user_nocache)
 	jmp copy_user_handle_tail
 	.previous
 
-	_ASM_EXTABLE(1b,.L_fixup_4x8b_copy)
-	_ASM_EXTABLE(2b,.L_fixup_4x8b_copy)
-	_ASM_EXTABLE(3b,.L_fixup_4x8b_copy)
-	_ASM_EXTABLE(4b,.L_fixup_4x8b_copy)
-	_ASM_EXTABLE(5b,.L_fixup_4x8b_copy)
-	_ASM_EXTABLE(6b,.L_fixup_4x8b_copy)
-	_ASM_EXTABLE(7b,.L_fixup_4x8b_copy)
-	_ASM_EXTABLE(8b,.L_fixup_4x8b_copy)
-	_ASM_EXTABLE(9b,.L_fixup_4x8b_copy)
-	_ASM_EXTABLE(10b,.L_fixup_4x8b_copy)
-	_ASM_EXTABLE(11b,.L_fixup_4x8b_copy)
-	_ASM_EXTABLE(12b,.L_fixup_4x8b_copy)
-	_ASM_EXTABLE(13b,.L_fixup_4x8b_copy)
-	_ASM_EXTABLE(14b,.L_fixup_4x8b_copy)
-	_ASM_EXTABLE(15b,.L_fixup_4x8b_copy)
-	_ASM_EXTABLE(16b,.L_fixup_4x8b_copy)
-	_ASM_EXTABLE(20b,.L_fixup_8b_copy)
-	_ASM_EXTABLE(21b,.L_fixup_8b_copy)
-	_ASM_EXTABLE(30b,.L_fixup_4b_copy)
-	_ASM_EXTABLE(31b,.L_fixup_4b_copy)
-	_ASM_EXTABLE(40b,.L_fixup_1b_copy)
-	_ASM_EXTABLE(41b,.L_fixup_1b_copy)
+	_ASM_EXTABLE_UA(1b, .L_fixup_4x8b_copy)
+	_ASM_EXTABLE_UA(2b, .L_fixup_4x8b_copy)
+	_ASM_EXTABLE_UA(3b, .L_fixup_4x8b_copy)
+	_ASM_EXTABLE_UA(4b, .L_fixup_4x8b_copy)
+	_ASM_EXTABLE_UA(5b, .L_fixup_4x8b_copy)
+	_ASM_EXTABLE_UA(6b, .L_fixup_4x8b_copy)
+	_ASM_EXTABLE_UA(7b, .L_fixup_4x8b_copy)
+	_ASM_EXTABLE_UA(8b, .L_fixup_4x8b_copy)
+	_ASM_EXTABLE_UA(9b, .L_fixup_4x8b_copy)
+	_ASM_EXTABLE_UA(10b, .L_fixup_4x8b_copy)
+	_ASM_EXTABLE_UA(11b, .L_fixup_4x8b_copy)
+	_ASM_EXTABLE_UA(12b, .L_fixup_4x8b_copy)
+	_ASM_EXTABLE_UA(13b, .L_fixup_4x8b_copy)
+	_ASM_EXTABLE_UA(14b, .L_fixup_4x8b_copy)
+	_ASM_EXTABLE_UA(15b, .L_fixup_4x8b_copy)
+	_ASM_EXTABLE_UA(16b, .L_fixup_4x8b_copy)
+	_ASM_EXTABLE_UA(20b, .L_fixup_8b_copy)
+	_ASM_EXTABLE_UA(21b, .L_fixup_8b_copy)
+	_ASM_EXTABLE_UA(30b, .L_fixup_4b_copy)
+	_ASM_EXTABLE_UA(31b, .L_fixup_4b_copy)
+	_ASM_EXTABLE_UA(40b, .L_fixup_1b_copy)
+	_ASM_EXTABLE_UA(41b, .L_fixup_1b_copy)
 ENDPROC(__copy_user_nocache)
 EXPORT_SYMBOL(__copy_user_nocache)
diff --git a/arch/x86/lib/csum-copy_64.S b/arch/x86/lib/csum-copy_64.S
index 45a53dfe1859..a4a379e79259 100644
--- a/arch/x86/lib/csum-copy_64.S
+++ b/arch/x86/lib/csum-copy_64.S
@@ -31,14 +31,18 @@
 
 	.macro source
 10:
-	_ASM_EXTABLE(10b, .Lbad_source)
+	_ASM_EXTABLE_UA(10b, .Lbad_source)
 	.endm
 
 	.macro dest
 20:
-	_ASM_EXTABLE(20b, .Lbad_dest)
+	_ASM_EXTABLE_UA(20b, .Lbad_dest)
 	.endm
 
+	/*
+	 * No _ASM_EXTABLE_UA; this is used for intentional prefetch on a
+	 * potentially unmapped kernel address.
+	 */
 	.macro ignore L=.Lignore
 30:
 	_ASM_EXTABLE(30b, \L)
diff --git a/arch/x86/lib/getuser.S b/arch/x86/lib/getuser.S
index 49b167f73215..74fdff968ea3 100644
--- a/arch/x86/lib/getuser.S
+++ b/arch/x86/lib/getuser.S
@@ -132,12 +132,12 @@ bad_get_user_8:
 END(bad_get_user_8)
 #endif
 
-	_ASM_EXTABLE(1b,bad_get_user)
-	_ASM_EXTABLE(2b,bad_get_user)
-	_ASM_EXTABLE(3b,bad_get_user)
+	_ASM_EXTABLE_UA(1b, bad_get_user)
+	_ASM_EXTABLE_UA(2b, bad_get_user)
+	_ASM_EXTABLE_UA(3b, bad_get_user)
 #ifdef CONFIG_X86_64
-	_ASM_EXTABLE(4b,bad_get_user)
+	_ASM_EXTABLE_UA(4b, bad_get_user)
 #else
-	_ASM_EXTABLE(4b,bad_get_user_8)
-	_ASM_EXTABLE(5b,bad_get_user_8)
+	_ASM_EXTABLE_UA(4b, bad_get_user_8)
+	_ASM_EXTABLE_UA(5b, bad_get_user_8)
 #endif
diff --git a/arch/x86/lib/putuser.S b/arch/x86/lib/putuser.S
index 96dce5fe2a35..d2e5c9c39601 100644
--- a/arch/x86/lib/putuser.S
+++ b/arch/x86/lib/putuser.S
@@ -94,10 +94,10 @@ bad_put_user:
 	EXIT
 END(bad_put_user)
 
-	_ASM_EXTABLE(1b,bad_put_user)
-	_ASM_EXTABLE(2b,bad_put_user)
-	_ASM_EXTABLE(3b,bad_put_user)
-	_ASM_EXTABLE(4b,bad_put_user)
+	_ASM_EXTABLE_UA(1b, bad_put_user)
+	_ASM_EXTABLE_UA(2b, bad_put_user)
+	_ASM_EXTABLE_UA(3b, bad_put_user)
+	_ASM_EXTABLE_UA(4b, bad_put_user)
 #ifdef CONFIG_X86_32
-	_ASM_EXTABLE(5b,bad_put_user)
+	_ASM_EXTABLE_UA(5b, bad_put_user)
 #endif
diff --git a/arch/x86/lib/usercopy_32.c b/arch/x86/lib/usercopy_32.c
index 7add8ba06887..71fb58d44d58 100644
--- a/arch/x86/lib/usercopy_32.c
+++ b/arch/x86/lib/usercopy_32.c
@@ -47,8 +47,8 @@ do {									\
 		"3:	lea 0(%2,%0,4),%0\n"				\
 		"	jmp 2b\n"					\
 		".previous\n"						\
-		_ASM_EXTABLE(0b,3b)					\
-		_ASM_EXTABLE(1b,2b)					\
+		_ASM_EXTABLE_UA(0b, 3b)					\
+		_ASM_EXTABLE_UA(1b, 2b)					\
 		: "=&c"(size), "=&D" (__d0)				\
 		: "r"(size & 3), "0"(size / 4), "1"(addr), "a"(0));	\
 } while (0)
@@ -153,44 +153,44 @@ __copy_user_intel(void __user *to, const void *from, unsigned long size)
 		       "101:   lea 0(%%eax,%0,4),%0\n"
 		       "       jmp 100b\n"
 		       ".previous\n"
-		       _ASM_EXTABLE(1b,100b)
-		       _ASM_EXTABLE(2b,100b)
-		       _ASM_EXTABLE(3b,100b)
-		       _ASM_EXTABLE(4b,100b)
-		       _ASM_EXTABLE(5b,100b)
-		       _ASM_EXTABLE(6b,100b)
-		       _ASM_EXTABLE(7b,100b)
-		       _ASM_EXTABLE(8b,100b)
-		       _ASM_EXTABLE(9b,100b)
-		       _ASM_EXTABLE(10b,100b)
-		       _ASM_EXTABLE(11b,100b)
-		       _ASM_EXTABLE(12b,100b)
-		       _ASM_EXTABLE(13b,100b)
-		       _ASM_EXTABLE(14b,100b)
-		       _ASM_EXTABLE(15b,100b)
-		       _ASM_EXTABLE(16b,100b)
-		       _ASM_EXTABLE(17b,100b)
-		       _ASM_EXTABLE(18b,100b)
-		       _ASM_EXTABLE(19b,100b)
-		       _ASM_EXTABLE(20b,100b)
-		       _ASM_EXTABLE(21b,100b)
-		       _ASM_EXTABLE(22b,100b)
-		       _ASM_EXTABLE(23b,100b)
-		       _ASM_EXTABLE(24b,100b)
-		       _ASM_EXTABLE(25b,100b)
-		       _ASM_EXTABLE(26b,100b)
-		       _ASM_EXTABLE(27b,100b)
-		       _ASM_EXTABLE(28b,100b)
-		       _ASM_EXTABLE(29b,100b)
-		       _ASM_EXTABLE(30b,100b)
-		       _ASM_EXTABLE(31b,100b)
-		       _ASM_EXTABLE(32b,100b)
-		       _ASM_EXTABLE(33b,100b)
-		       _ASM_EXTABLE(34b,100b)
-		       _ASM_EXTABLE(35b,100b)
-		       _ASM_EXTABLE(36b,100b)
-		       _ASM_EXTABLE(37b,100b)
-		       _ASM_EXTABLE(99b,101b)
+		       _ASM_EXTABLE_UA(1b, 100b)
+		       _ASM_EXTABLE_UA(2b, 100b)
+		       _ASM_EXTABLE_UA(3b, 100b)
+		       _ASM_EXTABLE_UA(4b, 100b)
+		       _ASM_EXTABLE_UA(5b, 100b)
+		       _ASM_EXTABLE_UA(6b, 100b)
+		       _ASM_EXTABLE_UA(7b, 100b)
+		       _ASM_EXTABLE_UA(8b, 100b)
+		       _ASM_EXTABLE_UA(9b, 100b)
+		       _ASM_EXTABLE_UA(10b, 100b)
+		       _ASM_EXTABLE_UA(11b, 100b)
+		       _ASM_EXTABLE_UA(12b, 100b)
+		       _ASM_EXTABLE_UA(13b, 100b)
+		       _ASM_EXTABLE_UA(14b, 100b)
+		       _ASM_EXTABLE_UA(15b, 100b)
+		       _ASM_EXTABLE_UA(16b, 100b)
+		       _ASM_EXTABLE_UA(17b, 100b)
+		       _ASM_EXTABLE_UA(18b, 100b)
+		       _ASM_EXTABLE_UA(19b, 100b)
+		       _ASM_EXTABLE_UA(20b, 100b)
+		       _ASM_EXTABLE_UA(21b, 100b)
+		       _ASM_EXTABLE_UA(22b, 100b)
+		       _ASM_EXTABLE_UA(23b, 100b)
+		       _ASM_EXTABLE_UA(24b, 100b)
+		       _ASM_EXTABLE_UA(25b, 100b)
+		       _ASM_EXTABLE_UA(26b, 100b)
+		       _ASM_EXTABLE_UA(27b, 100b)
+		       _ASM_EXTABLE_UA(28b, 100b)
+		       _ASM_EXTABLE_UA(29b, 100b)
+		       _ASM_EXTABLE_UA(30b, 100b)
+		       _ASM_EXTABLE_UA(31b, 100b)
+		       _ASM_EXTABLE_UA(32b, 100b)
+		       _ASM_EXTABLE_UA(33b, 100b)
+		       _ASM_EXTABLE_UA(34b, 100b)
+		       _ASM_EXTABLE_UA(35b, 100b)
+		       _ASM_EXTABLE_UA(36b, 100b)
+		       _ASM_EXTABLE_UA(37b, 100b)
+		       _ASM_EXTABLE_UA(99b, 101b)
 		       : "=&c"(size), "=&D" (d0), "=&S" (d1)
 		       :  "1"(to), "2"(from), "0"(size)
 		       : "eax", "edx", "memory");
@@ -259,26 +259,26 @@ static unsigned long __copy_user_intel_nocache(void *to,
 	       "9:      lea 0(%%eax,%0,4),%0\n"
 	       "16:     jmp 8b\n"
 	       ".previous\n"
-	       _ASM_EXTABLE(0b,16b)
-	       _ASM_EXTABLE(1b,16b)
-	       _ASM_EXTABLE(2b,16b)
-	       _ASM_EXTABLE(21b,16b)
-	       _ASM_EXTABLE(3b,16b)
-	       _ASM_EXTABLE(31b,16b)
-	       _ASM_EXTABLE(4b,16b)
-	       _ASM_EXTABLE(41b,16b)
-	       _ASM_EXTABLE(10b,16b)
-	       _ASM_EXTABLE(51b,16b)
-	       _ASM_EXTABLE(11b,16b)
-	       _ASM_EXTABLE(61b,16b)
-	       _ASM_EXTABLE(12b,16b)
-	       _ASM_EXTABLE(71b,16b)
-	       _ASM_EXTABLE(13b,16b)
-	       _ASM_EXTABLE(81b,16b)
-	       _ASM_EXTABLE(14b,16b)
-	       _ASM_EXTABLE(91b,16b)
-	       _ASM_EXTABLE(6b,9b)
-	       _ASM_EXTABLE(7b,16b)
+	       _ASM_EXTABLE_UA(0b, 16b)
+	       _ASM_EXTABLE_UA(1b, 16b)
+	       _ASM_EXTABLE_UA(2b, 16b)
+	       _ASM_EXTABLE_UA(21b, 16b)
+	       _ASM_EXTABLE_UA(3b, 16b)
+	       _ASM_EXTABLE_UA(31b, 16b)
+	       _ASM_EXTABLE_UA(4b, 16b)
+	       _ASM_EXTABLE_UA(41b, 16b)
+	       _ASM_EXTABLE_UA(10b, 16b)
+	       _ASM_EXTABLE_UA(51b, 16b)
+	       _ASM_EXTABLE_UA(11b, 16b)
+	       _ASM_EXTABLE_UA(61b, 16b)
+	       _ASM_EXTABLE_UA(12b, 16b)
+	       _ASM_EXTABLE_UA(71b, 16b)
+	       _ASM_EXTABLE_UA(13b, 16b)
+	       _ASM_EXTABLE_UA(81b, 16b)
+	       _ASM_EXTABLE_UA(14b, 16b)
+	       _ASM_EXTABLE_UA(91b, 16b)
+	       _ASM_EXTABLE_UA(6b, 9b)
+	       _ASM_EXTABLE_UA(7b, 16b)
 	       : "=&c"(size), "=&D" (d0), "=&S" (d1)
 	       :  "1"(to), "2"(from), "0"(size)
 	       : "eax", "edx", "memory");
@@ -321,9 +321,9 @@ do {									\
 		"3:	lea 0(%3,%0,4),%0\n"				\
 		"	jmp 2b\n"					\
 		".previous\n"						\
-		_ASM_EXTABLE(4b,5b)					\
-		_ASM_EXTABLE(0b,3b)					\
-		_ASM_EXTABLE(1b,2b)					\
+		_ASM_EXTABLE_UA(4b, 5b)					\
+		_ASM_EXTABLE_UA(0b, 3b)					\
+		_ASM_EXTABLE_UA(1b, 2b)					\
 		: "=&c"(size), "=&D" (__d0), "=&S" (__d1), "=r"(__d2)	\
 		: "3"(size), "0"(size), "1"(to), "2"(from)		\
 		: "memory");						\
diff --git a/arch/x86/lib/usercopy_64.c b/arch/x86/lib/usercopy_64.c
index 9c5606d88f61..fefe64436398 100644
--- a/arch/x86/lib/usercopy_64.c
+++ b/arch/x86/lib/usercopy_64.c
@@ -37,8 +37,8 @@ unsigned long __clear_user(void __user *addr, unsigned long size)
 		"3:	lea 0(%[size1],%[size8],8),%[size8]\n"
 		"	jmp 2b\n"
 		".previous\n"
-		_ASM_EXTABLE(0b,3b)
-		_ASM_EXTABLE(1b,2b)
+		_ASM_EXTABLE_UA(0b, 3b)
+		_ASM_EXTABLE_UA(1b, 2b)
 		: [size8] "=&c"(size), [dst] "=&D" (__d0)
 		: [size1] "r"(size & 7), "[size8]" (size / 8), "[dst]"(addr));
 	clac();
diff --git a/arch/x86/mm/extable.c b/arch/x86/mm/extable.c
index 45f5d6cf65ae..0b8b5d889eec 100644
--- a/arch/x86/mm/extable.c
+++ b/arch/x86/mm/extable.c
@@ -108,6 +108,14 @@ __visible bool ex_handler_fprestore(const struct exception_table_entry *fixup,
 }
 EXPORT_SYMBOL_GPL(ex_handler_fprestore);
 
+__visible bool ex_handler_uaccess(const struct exception_table_entry *fixup,
+				  struct pt_regs *regs, int trapnr)
+{
+	regs->ip = ex_fixup_addr(fixup);
+	return true;
+}
+EXPORT_SYMBOL(ex_handler_uaccess);
+
 __visible bool ex_handler_ext(const struct exception_table_entry *fixup,
 			      struct pt_regs *regs, int trapnr)
 {
-- 
2.19.0.rc0.228.g281dcd1b4d0-goog



* [PATCH v2 5/7] x86: plumb error code and fault address through to fault handlers
  2018-08-27 18:56 [PATCH v2 0/7] x86: BUG() on #GP / kernel #PF in uaccess Jann Horn
                   ` (3 preceding siblings ...)
  2018-08-27 18:56 ` [PATCH v2 4/7] x86: introduce _ASM_EXTABLE_UA for uaccess fixups Jann Horn
@ 2018-08-27 18:56 ` Jann Horn
  2018-08-27 19:08   ` Andy Lutomirski
  2018-08-27 18:56 ` [PATCH v2 6/7] x86: BUG() when uaccess helpers fault on kernel addresses Jann Horn
  2018-08-27 18:56 ` [PATCH v2 7/7] lkdtm: test copy_to_user() on bad kernel pointer under KERNEL_DS Jann Horn
  6 siblings, 1 reply; 17+ messages in thread
From: Jann Horn @ 2018-08-27 18:56 UTC (permalink / raw)
  To: Kees Cook, Thomas Gleixner, Ingo Molnar, x86, Andy Lutomirski,
	kernel-hardening, Tony Luck, Borislav Petkov, jannh
  Cc: linux-kernel, dvyukov, linux-edac

This is preparation for looking at trap number and fault address in the
handlers for uaccess errors.
This patch should not change any behavior.

Signed-off-by: Jann Horn <jannh@google.com>
---
 arch/x86/include/asm/extable.h   |  3 ++-
 arch/x86/include/asm/ptrace.h    |  2 ++
 arch/x86/kernel/cpu/mcheck/mce.c |  2 +-
 arch/x86/kernel/traps.c          |  6 ++---
 arch/x86/mm/extable.c            | 40 ++++++++++++++++++++------------
 arch/x86/mm/fault.c              |  2 +-
 6 files changed, 34 insertions(+), 21 deletions(-)

diff --git a/arch/x86/include/asm/extable.h b/arch/x86/include/asm/extable.h
index f9c3a5d502f4..d8c2198d543b 100644
--- a/arch/x86/include/asm/extable.h
+++ b/arch/x86/include/asm/extable.h
@@ -29,7 +29,8 @@ struct pt_regs;
 		(b)->handler = (tmp).handler - (delta);		\
 	} while (0)
 
-extern int fixup_exception(struct pt_regs *regs, int trapnr);
+extern int fixup_exception(struct pt_regs *regs, int trapnr,
+			   unsigned long error_code, unsigned long fault_addr);
 extern int fixup_bug(struct pt_regs *regs, int trapnr);
 extern bool ex_has_fault_handler(unsigned long ip);
 extern void early_fixup_exception(struct pt_regs *regs, int trapnr);
diff --git a/arch/x86/include/asm/ptrace.h b/arch/x86/include/asm/ptrace.h
index 6de1fd3d0097..6c68d4947a8f 100644
--- a/arch/x86/include/asm/ptrace.h
+++ b/arch/x86/include/asm/ptrace.h
@@ -37,8 +37,10 @@ struct pt_regs {
 	unsigned short __esh;
 	unsigned short fs;
 	unsigned short __fsh;
+/* On interrupt, gs and __gsh store the vector number. */
 	unsigned short gs;
 	unsigned short __gsh;
+/* On interrupt, this is the error code. */
 	unsigned long orig_ax;
 	unsigned long ip;
 	unsigned short cs;
diff --git a/arch/x86/kernel/cpu/mcheck/mce.c b/arch/x86/kernel/cpu/mcheck/mce.c
index 4b767284b7f5..e822d706d8c8 100644
--- a/arch/x86/kernel/cpu/mcheck/mce.c
+++ b/arch/x86/kernel/cpu/mcheck/mce.c
@@ -1347,7 +1347,7 @@ void do_machine_check(struct pt_regs *regs, long error_code)
 		local_irq_disable();
 		ist_end_non_atomic();
 	} else {
-		if (!fixup_exception(regs, X86_TRAP_MC))
+		if (!fixup_exception(regs, X86_TRAP_MC, error_code, 0))
 			mce_panic("Failed kernel mode recovery", &m, NULL);
 	}
 
diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
index bf9ab1aaa175..16c95cb90496 100644
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -206,7 +206,7 @@ do_trap_no_signal(struct task_struct *tsk, int trapnr, char *str,
 	}
 
 	if (!user_mode(regs)) {
-		if (fixup_exception(regs, trapnr))
+		if (fixup_exception(regs, trapnr, error_code, 0))
 			return 0;
 
 		tsk->thread.error_code = error_code;
@@ -551,7 +551,7 @@ do_general_protection(struct pt_regs *regs, long error_code)
 
 	tsk = current;
 	if (!user_mode(regs)) {
-		if (fixup_exception(regs, X86_TRAP_GP))
+		if (fixup_exception(regs, X86_TRAP_GP, error_code, 0))
 			return;
 
 		tsk->thread.error_code = error_code;
@@ -848,7 +848,7 @@ static void math_error(struct pt_regs *regs, int error_code, int trapnr)
 	cond_local_irq_enable(regs);
 
 	if (!user_mode(regs)) {
-		if (fixup_exception(regs, trapnr))
+		if (fixup_exception(regs, trapnr, error_code, 0))
 			return;
 
 		task->thread.error_code = error_code;
diff --git a/arch/x86/mm/extable.c b/arch/x86/mm/extable.c
index 0b8b5d889eec..4110cca93a08 100644
--- a/arch/x86/mm/extable.c
+++ b/arch/x86/mm/extable.c
@@ -8,7 +8,7 @@
 #include <asm/kdebug.h>
 
 typedef bool (*ex_handler_t)(const struct exception_table_entry *,
-			    struct pt_regs *, int);
+			    struct pt_regs *, int, unsigned long);
 
 static inline unsigned long
 ex_fixup_addr(const struct exception_table_entry *x)
@@ -22,7 +22,8 @@ ex_fixup_handler(const struct exception_table_entry *x)
 }
 
 __visible bool ex_handler_default(const struct exception_table_entry *fixup,
-				  struct pt_regs *regs, int trapnr)
+				  struct pt_regs *regs, int trapnr,
+				  unsigned long fault_addr)
 {
 	regs->ip = ex_fixup_addr(fixup);
 	return true;
@@ -30,7 +31,8 @@ __visible bool ex_handler_default(const struct exception_table_entry *fixup,
 EXPORT_SYMBOL(ex_handler_default);
 
 __visible bool ex_handler_fault(const struct exception_table_entry *fixup,
-				struct pt_regs *regs, int trapnr)
+				struct pt_regs *regs, int trapnr,
+				unsigned long fault_addr)
 {
 	regs->ip = ex_fixup_addr(fixup);
 	regs->ax = trapnr;
@@ -43,7 +45,8 @@ EXPORT_SYMBOL_GPL(ex_handler_fault);
  * result of a refcount inc/dec/add/sub.
  */
 __visible bool ex_handler_refcount(const struct exception_table_entry *fixup,
-				   struct pt_regs *regs, int trapnr)
+				   struct pt_regs *regs, int trapnr,
+				   unsigned long fault_addr)
 {
 	/* First unconditionally saturate the refcount. */
 	*(int *)regs->cx = INT_MIN / 2;
@@ -96,7 +99,8 @@ EXPORT_SYMBOL(ex_handler_refcount);
  * out all the FPU registers) if we can't restore from the task's FPU state.
  */
 __visible bool ex_handler_fprestore(const struct exception_table_entry *fixup,
-				    struct pt_regs *regs, int trapnr)
+				    struct pt_regs *regs, int trapnr,
+				    unsigned long fault_addr)
 {
 	regs->ip = ex_fixup_addr(fixup);
 
@@ -109,7 +113,8 @@ __visible bool ex_handler_fprestore(const struct exception_table_entry *fixup,
 EXPORT_SYMBOL_GPL(ex_handler_fprestore);
 
 __visible bool ex_handler_uaccess(const struct exception_table_entry *fixup,
-				  struct pt_regs *regs, int trapnr)
+				  struct pt_regs *regs, int trapnr,
+				  unsigned long fault_addr)
 {
 	regs->ip = ex_fixup_addr(fixup);
 	return true;
@@ -117,7 +122,8 @@ __visible bool ex_handler_uaccess(const struct exception_table_entry *fixup,
 EXPORT_SYMBOL(ex_handler_uaccess);
 
 __visible bool ex_handler_ext(const struct exception_table_entry *fixup,
-			      struct pt_regs *regs, int trapnr)
+			      struct pt_regs *regs, int trapnr,
+			      unsigned long fault_addr)
 {
 	/* Special hack for uaccess_err */
 	current->thread.uaccess_err = 1;
@@ -127,7 +133,8 @@ __visible bool ex_handler_ext(const struct exception_table_entry *fixup,
 EXPORT_SYMBOL(ex_handler_ext);
 
 __visible bool ex_handler_rdmsr_unsafe(const struct exception_table_entry *fixup,
-				       struct pt_regs *regs, int trapnr)
+				       struct pt_regs *regs, int trapnr,
+				       unsigned long fault_addr)
 {
 	if (pr_warn_once("unchecked MSR access error: RDMSR from 0x%x at rIP: 0x%lx (%pF)\n",
 			 (unsigned int)regs->cx, regs->ip, (void *)regs->ip))
@@ -142,7 +149,8 @@ __visible bool ex_handler_rdmsr_unsafe(const struct exception_table_entry *fixup
 EXPORT_SYMBOL(ex_handler_rdmsr_unsafe);
 
 __visible bool ex_handler_wrmsr_unsafe(const struct exception_table_entry *fixup,
-				       struct pt_regs *regs, int trapnr)
+				       struct pt_regs *regs, int trapnr,
+				       unsigned long fault_addr)
 {
 	if (pr_warn_once("unchecked MSR access error: WRMSR to 0x%x (tried to write 0x%08x%08x) at rIP: 0x%lx (%pF)\n",
 			 (unsigned int)regs->cx, (unsigned int)regs->dx,
@@ -156,12 +164,13 @@ __visible bool ex_handler_wrmsr_unsafe(const struct exception_table_entry *fixup
 EXPORT_SYMBOL(ex_handler_wrmsr_unsafe);
 
 __visible bool ex_handler_clear_fs(const struct exception_table_entry *fixup,
-				   struct pt_regs *regs, int trapnr)
+				   struct pt_regs *regs, int trapnr,
+				   unsigned long fault_addr)
 {
 	if (static_cpu_has(X86_BUG_NULL_SEG))
 		asm volatile ("mov %0, %%fs" : : "rm" (__USER_DS));
 	asm volatile ("mov %0, %%fs" : : "rm" (0));
-	return ex_handler_default(fixup, regs, trapnr);
+	return ex_handler_default(fixup, regs, trapnr, fault_addr);
 }
 EXPORT_SYMBOL(ex_handler_clear_fs);
 
@@ -178,7 +187,8 @@ __visible bool ex_has_fault_handler(unsigned long ip)
 	return handler == ex_handler_fault;
 }
 
-int fixup_exception(struct pt_regs *regs, int trapnr)
+int fixup_exception(struct pt_regs *regs, int trapnr, unsigned long error_code,
+		    unsigned long fault_addr)
 {
 	const struct exception_table_entry *e;
 	ex_handler_t handler;
@@ -202,7 +212,7 @@ int fixup_exception(struct pt_regs *regs, int trapnr)
 		return 0;
 
 	handler = ex_fixup_handler(e);
-	return handler(e, regs, trapnr);
+	return handler(e, regs, trapnr, fault_addr);
 }
 
 extern unsigned int early_recursion_flag;
@@ -238,9 +248,9 @@ void __init early_fixup_exception(struct pt_regs *regs, int trapnr)
 	 * result in a hard-to-debug panic.
 	 *
 	 * Keep in mind that not all vectors actually get here.  Early
-	 * fage faults, for example, are special.
+	 * page faults, for example, are special.
 	 */
-	if (fixup_exception(regs, trapnr))
+	if (fixup_exception(regs, trapnr, regs->orig_ax, 0))
 		return;
 
 	if (fixup_bug(regs, trapnr))
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 2254a30533b9..60f1678dc3d7 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -706,7 +706,7 @@ no_context(struct pt_regs *regs, unsigned long error_code,
 	int sig;
 
 	/* Are we prepared to handle this kernel fault? */
-	if (fixup_exception(regs, X86_TRAP_PF)) {
+	if (fixup_exception(regs, X86_TRAP_PF, error_code, address)) {
 		/*
 		 * Any interrupt that takes a fault gets the fixup. This makes
 		 * the below recursive fault logic only apply to a faults from
-- 
2.19.0.rc0.228.g281dcd1b4d0-goog



* [PATCH v2 6/7] x86: BUG() when uaccess helpers fault on kernel addresses
  2018-08-27 18:56 [PATCH v2 0/7] x86: BUG() on #GP / kernel #PF in uaccess Jann Horn
                   ` (4 preceding siblings ...)
  2018-08-27 18:56 ` [PATCH v2 5/7] x86: plumb error code and fault address through to fault handlers Jann Horn
@ 2018-08-27 18:56 ` Jann Horn
  2018-08-28  0:51   ` Jann Horn
  2018-08-27 18:56 ` [PATCH v2 7/7] lkdtm: test copy_to_user() on bad kernel pointer under KERNEL_DS Jann Horn
  6 siblings, 1 reply; 17+ messages in thread
From: Jann Horn @ 2018-08-27 18:56 UTC (permalink / raw)
  To: Kees Cook, Thomas Gleixner, Ingo Molnar, x86, Andy Lutomirski,
	kernel-hardening, Tony Luck, Borislav Petkov, jannh
  Cc: linux-kernel, dvyukov, linux-edac

There have been multiple kernel vulnerabilities that permitted userspace to
pass completely unchecked pointers through to userspace accessors:

 - the waitid() bug - commit 96ca579a1ecc ("waitid(): Add missing
   access_ok() checks")
 - the sg/bsg read/write APIs
 - the infiniband read/write APIs

These don't happen all that often, but when they do, they are hard to test
for properly, and they are probably also hard to discover with fuzzing.
Even when an unmapped kernel address is supplied to such buggy code, it
just returns -EFAULT instead of doing a proper BUG() or at least a WARN().

This patch attempts to make such misbehaving code a bit more visible by
refusing to do a fixup in the pagefault handler code when a userspace
accessor causes #PF on a kernel address and the current context isn't
whitelisted.
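
As an illustration, the bug pattern this is meant to catch looks roughly
like the following hypothetical driver code (not taken from any of the
bugs listed above):

	/* Hypothetical buggy ioctl handler, for illustration only. */
	static long bad_ioctl(struct file *f, unsigned int cmd,
			      unsigned long arg)
	{
		char buf[16];

		/*
		 * BUG: the user-supplied pointer is used without an
		 * access_ok() check. Today, passing an unmapped kernel
		 * address here just yields -EFAULT; with this patch, the
		 * fault in the usercopy helper refuses the fixup and the
		 * kernel dies loudly instead.
		 */
		if (__copy_from_user(buf, (void __user *)arg, sizeof(buf)))
			return -EFAULT;
		return 0;
	}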

Signed-off-by: Jann Horn <jannh@google.com>
---
 arch/x86/mm/extable.c | 58 +++++++++++++++++++++++++++++++++++++++++++
 include/linux/sched.h |  6 +++++
 mm/maccess.c          |  6 +++++
 3 files changed, 70 insertions(+)

diff --git a/arch/x86/mm/extable.c b/arch/x86/mm/extable.c
index 4110cca93a08..28298b4e5080 100644
--- a/arch/x86/mm/extable.c
+++ b/arch/x86/mm/extable.c
@@ -112,10 +112,66 @@ __visible bool ex_handler_fprestore(const struct exception_table_entry *fixup,
 }
 EXPORT_SYMBOL_GPL(ex_handler_fprestore);
 
+/* Helper to check whether a uaccess fault indicates a kernel bug. */
+static bool bogus_uaccess(struct pt_regs *regs, int trapnr,
+			  unsigned long fault_addr)
+{
+	/* This is the normal case: #PF with a fault address in userspace. */
+	if (trapnr == X86_TRAP_PF && fault_addr < TASK_SIZE_MAX)
+		return false;
+
+	/*
+	 * This code can be reached for machine checks, but only if the #MC
+	 * handler has already decided that it looks like a candidate for fixup.
+	 * This e.g. happens when attempting to access userspace memory which
+	 * the CPU can't access because of uncorrectable bad memory.
+	 */
+	if (trapnr == X86_TRAP_MC)
+		return false;
+
+	/*
+	 * There are two remaining exception types we might encounter here:
+	 *  - #PF for faulting accesses to kernel addresses
+	 *  - #GP for faulting accesses to noncanonical addresses
+	 * Complain about anything else.
+	 */
+	if (trapnr != X86_TRAP_PF && trapnr != X86_TRAP_GP) {
+		WARN(1, "unexpected trap %d in uaccess\n", trapnr);
+		return false;
+	}
+
+	/*
+	 * This is a faulting memory access in kernel space, on a kernel
+	 * address, in a usercopy function. This can e.g. be caused by improper
+	 * use of helpers like __put_user and by improper attempts to access
+	 * userspace addresses in KERNEL_DS regions.
+	 * The one (semi-)legitimate exception is probe_kernel_{read,write}(),
+	 * which can be invoked from places like kgdb, /dev/mem (for reading)
+	 * and privileged BPF code (for reading).
+	 * The probe_kernel_*() functions set the kernel_uaccess_faults_ok flag
+	 * to tell us that faulting on kernel addresses, and even noncanonical
+	 * addresses, in a userspace accessor does not necessarily imply a
+	 * kernel bug; root might just be doing weird stuff.
+	 */
+	if (current->kernel_uaccess_faults_ok)
+		return false;
+
+	/* This is bad. Refuse the fixup so that we go into die(). */
+	if (trapnr == X86_TRAP_PF) {
+		pr_emerg("BUG: pagefault on kernel address 0x%lx in non-whitelisted uaccess\n",
+			 fault_addr);
+	} else {
+		pr_emerg("BUG: GPF in non-whitelisted uaccess (non-canonical address?)\n");
+	}
+	return true;
+}
+
 __visible bool ex_handler_uaccess(const struct exception_table_entry *fixup,
 				  struct pt_regs *regs, int trapnr,
 				  unsigned long fault_addr)
 {
+	if (bogus_uaccess(regs, trapnr, fault_addr))
+		return false;
 	regs->ip = ex_fixup_addr(fixup);
 	return true;
 }
@@ -125,6 +181,8 @@ __visible bool ex_handler_ext(const struct exception_table_entry *fixup,
 			      struct pt_regs *regs, int trapnr,
 			      unsigned long fault_addr)
 {
+	if (bogus_uaccess(regs, trapnr, fault_addr))
+		return false;
 	/* Special hack for uaccess_err */
 	current->thread.uaccess_err = 1;
 	regs->ip = ex_fixup_addr(fixup);
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 00de3e950dd4..7ea3f4afc0ee 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -739,6 +739,12 @@ struct task_struct {
 	unsigned			use_memdelay:1;
 #endif
 
+	/*
+	 * May usercopy functions fault on kernel addresses?
+	 * This is not just a single bit because this can potentially nest.
+	 */
+	unsigned int			kernel_uaccess_faults_ok;
+
 	unsigned long			atomic_flags; /* Flags requiring atomic access. */
 
 	struct restart_block		restart_block;
diff --git a/mm/maccess.c b/mm/maccess.c
index ec00be51a24f..f3416632e5a4 100644
--- a/mm/maccess.c
+++ b/mm/maccess.c
@@ -30,8 +30,10 @@ long __probe_kernel_read(void *dst, const void *src, size_t size)
 
 	set_fs(KERNEL_DS);
 	pagefault_disable();
+	current->kernel_uaccess_faults_ok++;
 	ret = __copy_from_user_inatomic(dst,
 			(__force const void __user *)src, size);
+	current->kernel_uaccess_faults_ok--;
 	pagefault_enable();
 	set_fs(old_fs);
 
@@ -58,7 +60,9 @@ long __probe_kernel_write(void *dst, const void *src, size_t size)
 
 	set_fs(KERNEL_DS);
 	pagefault_disable();
+	current->kernel_uaccess_faults_ok++;
 	ret = __copy_to_user_inatomic((__force void __user *)dst, src, size);
+	current->kernel_uaccess_faults_ok--;
 	pagefault_enable();
 	set_fs(old_fs);
 
@@ -94,11 +98,13 @@ long strncpy_from_unsafe(char *dst, const void *unsafe_addr, long count)
 
 	set_fs(KERNEL_DS);
 	pagefault_disable();
+	current->kernel_uaccess_faults_ok++;
 
 	do {
 		ret = __get_user(*dst++, (const char __user __force *)src++);
 	} while (dst[-1] && ret == 0 && src - unsafe_addr < count);
 
+	current->kernel_uaccess_faults_ok--;
 	dst[-1] = '\0';
 	pagefault_enable();
 	set_fs(old_fs);
-- 
2.19.0.rc0.228.g281dcd1b4d0-goog



* [PATCH v2 7/7] lkdtm: test copy_to_user() on bad kernel pointer under KERNEL_DS
  2018-08-27 18:56 [PATCH v2 0/7] x86: BUG() on #GP / kernel #PF in uaccess Jann Horn
                   ` (5 preceding siblings ...)
  2018-08-27 18:56 ` [PATCH v2 6/7] x86: BUG() when uaccess helpers fault on kernel addresses Jann Horn
@ 2018-08-27 18:56 ` Jann Horn
  2018-08-27 22:08   ` Kees Cook
  6 siblings, 1 reply; 17+ messages in thread
From: Jann Horn @ 2018-08-27 18:56 UTC (permalink / raw)
  To: Kees Cook, Thomas Gleixner, Ingo Molnar, x86, Andy Lutomirski,
	kernel-hardening, Tony Luck, Borislav Petkov, jannh
  Cc: linux-kernel, dvyukov, linux-edac

Test whether the kernel WARN()s when, under KERNEL_DS, a bad kernel pointer
is used as a "userspace" pointer. Should normally be used in "DIRECT" mode.
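
Assuming the usual lkdtm debugfs interface (path may vary with your
debugfs mount point), the test can be triggered with:

	echo USERCOPY_KERNEL_DS > /sys/kernel/debug/provoke-crash/DIRECT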

Signed-off-by: Jann Horn <jannh@google.com>
---
 drivers/misc/lkdtm/core.c     |  1 +
 drivers/misc/lkdtm/lkdtm.h    |  1 +
 drivers/misc/lkdtm/usercopy.c | 13 +++++++++++++
 3 files changed, 15 insertions(+)

diff --git a/drivers/misc/lkdtm/core.c b/drivers/misc/lkdtm/core.c
index 2154d1bfd18b..5a755590d3dc 100644
--- a/drivers/misc/lkdtm/core.c
+++ b/drivers/misc/lkdtm/core.c
@@ -183,6 +183,7 @@ static const struct crashtype crashtypes[] = {
 	CRASHTYPE(USERCOPY_STACK_FRAME_FROM),
 	CRASHTYPE(USERCOPY_STACK_BEYOND),
 	CRASHTYPE(USERCOPY_KERNEL),
+	CRASHTYPE(USERCOPY_KERNEL_DS),
 };
 
 
diff --git a/drivers/misc/lkdtm/lkdtm.h b/drivers/misc/lkdtm/lkdtm.h
index 9e513dcfd809..07db641d71d0 100644
--- a/drivers/misc/lkdtm/lkdtm.h
+++ b/drivers/misc/lkdtm/lkdtm.h
@@ -82,5 +82,6 @@ void lkdtm_USERCOPY_STACK_FRAME_TO(void);
 void lkdtm_USERCOPY_STACK_FRAME_FROM(void);
 void lkdtm_USERCOPY_STACK_BEYOND(void);
 void lkdtm_USERCOPY_KERNEL(void);
+void lkdtm_USERCOPY_KERNEL_DS(void);
 
 #endif
diff --git a/drivers/misc/lkdtm/usercopy.c b/drivers/misc/lkdtm/usercopy.c
index 9725aed305bb..389475b25bb7 100644
--- a/drivers/misc/lkdtm/usercopy.c
+++ b/drivers/misc/lkdtm/usercopy.c
@@ -322,6 +322,19 @@ void lkdtm_USERCOPY_KERNEL(void)
 	vm_munmap(user_addr, PAGE_SIZE);
 }
 
+void lkdtm_USERCOPY_KERNEL_DS(void)
+{
+	char __user *user_ptr = (char __user *)ERR_PTR(-EINVAL);
+	mm_segment_t old_fs = get_fs();
+	char buf[10] = {0};
+
+	pr_info("attempting copy_to_user on unmapped kernel address\n");
+	set_fs(KERNEL_DS);
+	if (copy_to_user(user_ptr, buf, sizeof(buf)))
+		pr_info("copy_to_user un unmapped kernel address failed\n");
+	set_fs(old_fs);
+}
+
 void __init lkdtm_usercopy_init(void)
 {
 	/* Prepare cache that lacks SLAB_USERCOPY flag. */
-- 
2.19.0.rc0.228.g281dcd1b4d0-goog



* Re: [PATCH v2 3/7] x86: stop calling fixup_exception() from kprobe_fault_handler()
  2018-08-27 18:56 ` [PATCH v2 3/7] x86: stop calling fixup_exception() from kprobe_fault_handler() Jann Horn
@ 2018-08-27 19:02   ` Andy Lutomirski
  2018-08-27 19:22     ` Jann Horn
  0 siblings, 1 reply; 17+ messages in thread
From: Andy Lutomirski @ 2018-08-27 19:02 UTC (permalink / raw)
  To: Jann Horn, Masami Hiramatsu, Naveen N. Rao, Anil S Keshavamurthy,
	David S. Miller
  Cc: Kees Cook, Thomas Gleixner, Ingo Molnar, X86 ML, Andy Lutomirski,
	Kernel Hardening, Tony Luck, Borislav Petkov, LKML,
	Dmitry Vyukov, linux-edac

On Mon, Aug 27, 2018 at 11:56 AM, Jann Horn <jannh@google.com> wrote:
> This removes the call into exception fixup that was added in
> commit c28f896634f2 ("[PATCH] kprobes: fix broken fault handling for
> x86_64").
>
> On X86, kprobe_fault_handler() is called from two places:
> do_general_protection() (for #GP) and kprobes_fault() (for #PF).
> In both paths, the fixup_exception() call in the kprobe fault handler is
> redundant.
>
> For #GP, fixup_exception() is called immediately before
> kprobe_fault_handler() is invoked - if someone wanted to fix up our #GP,
> they've already done so, no need to try again. (This assumes that the
> kprobe's fault handler isn't going to do something crazy like changing RIP
> so that it suddenly points to an instruction that does userspace access.)

This needs review by someone who understands kprobes better than I do.
What happens if someone puts a kprobe on a uaccess instruction and the
uaccess subsequently faults?

>
> For #PF on a kernel address from kernel space, after the kprobe fault
> handler has run, we'll go into no_context(), which calls fixup_exception().
>
> Signed-off-by: Jann Horn <jannh@google.com>
> ---
>  arch/x86/kernel/kprobes/core.c | 7 -------
>  1 file changed, 7 deletions(-)
>
> diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
> index 467ac22691b0..7315ac202aad 100644
> --- a/arch/x86/kernel/kprobes/core.c
> +++ b/arch/x86/kernel/kprobes/core.c
> @@ -1021,13 +1021,6 @@ int kprobe_fault_handler(struct pt_regs *regs, int trapnr)
>                 if (cur->fault_handler && cur->fault_handler(cur, regs, trapnr))
>                         return 1;
>
> -               /*
> -                * In case the user-specified fault handler returned
> -                * zero, try to fix up.
> -                */
> -               if (fixup_exception(regs, trapnr))
> -                       return 1;
> -
>                 /* fixup routine could not handle it. */
>         }
>
> --
> 2.19.0.rc0.228.g281dcd1b4d0-goog
>


* Re: [PATCH v2 5/7] x86: plumb error code and fault address through to fault handlers
  2018-08-27 18:56 ` [PATCH v2 5/7] x86: plumb error code and fault address through to fault handlers Jann Horn
@ 2018-08-27 19:08   ` Andy Lutomirski
  2018-08-27 19:25     ` Jann Horn
  0 siblings, 1 reply; 17+ messages in thread
From: Andy Lutomirski @ 2018-08-27 19:08 UTC (permalink / raw)
  To: Jann Horn
  Cc: Kees Cook, Thomas Gleixner, Ingo Molnar, X86 ML, Andy Lutomirski,
	Kernel Hardening, Tony Luck, Borislav Petkov, LKML,
	Dmitry Vyukov, linux-edac

On Mon, Aug 27, 2018 at 11:56 AM, Jann Horn <jannh@google.com> wrote:
> This is preparation for looking at trap number and fault address in the
> handlers for uaccess errors.
> This patch should not change any behavior.

Confused.  $SUBJECT says you're plumbing the error code through, and
you are sort of doing it:

> -int fixup_exception(struct pt_regs *regs, int trapnr)
> +int fixup_exception(struct pt_regs *regs, int trapnr, unsigned long error_code,
> +                   unsigned long fault_addr)
>  {

It's available here...

>         const struct exception_table_entry *e;
>         ex_handler_t handler;
> @@ -202,7 +212,7 @@ int fixup_exception(struct pt_regs *regs, int trapnr)
>                 return 0;
>
>         handler = ex_fixup_handler(e);
> -       return handler(e, regs, trapnr);
> +       return handler(e, regs, trapnr, fault_addr);

... but you don't pass it into the handlers.  Is this intentional?


* Re: [PATCH v2 3/7] x86: stop calling fixup_exception() from kprobe_fault_handler()
  2018-08-27 19:02   ` Andy Lutomirski
@ 2018-08-27 19:22     ` Jann Horn
  2018-08-27 19:25       ` Andy Lutomirski
  2018-08-28  1:21       ` Masami Hiramatsu
  0 siblings, 2 replies; 17+ messages in thread
From: Jann Horn @ 2018-08-27 19:22 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: mhiramat, naveen.n.rao, anil.s.keshavamurthy, David S. Miller,
	Kees Cook, Thomas Gleixner, Ingo Molnar,
	the arch/x86 maintainers, Kernel Hardening, tony.luck,
	Borislav Petkov, kernel list, Dmitry Vyukov, linux-edac

On Mon, Aug 27, 2018 at 9:02 PM Andy Lutomirski <luto@kernel.org> wrote:
>
> On Mon, Aug 27, 2018 at 11:56 AM, Jann Horn <jannh@google.com> wrote:
> > This removes the call into exception fixup that was added in
> > commit c28f896634f2 ("[PATCH] kprobes: fix broken fault handling for
> > x86_64").
> >
> > On X86, kprobe_fault_handler() is called from two places:
> > do_general_protection() (for #GP) and kprobes_fault() (for #PF).
> > In both paths, the fixup_exception() call in the kprobe fault handler is
> > redundant.
> >
> > For #GP, fixup_exception() is called immediately before
> > kprobe_fault_handler() is invoked - if someone wanted to fix up our #GP,
> > they've already done so, no need to try again. (This assumes that the
> > kprobe's fault handler isn't going to do something crazy like changing RIP
> > so that it suddenly points to an instruction that does userspace access.)
>
> This needs review by someone who understands kprobes better than I do.
> What happens if someone puts a kprobe on a uaccess instruction and the
> uaccess subsequently faults?

Ugh, good point. I'll admit to not having thought about that properly.

I think that's the "if (unlikely(regs->ip == (unsigned
long)cur->ainsn.insn))" branch in kprobe_fault_handler(), which I'm
not touching.
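
That branch reads roughly like this (an abridged sketch, not the
verbatim source; cur and kcb come from kprobe_running() and
get_kprobe_ctlblk()):

	if (unlikely(regs->ip == (unsigned long)cur->ainsn.insn)) {
		/*
		 * The copied instruction being single-stepped faulted:
		 * point ip back at the original probe address, restore
		 * flags and let the normal fault path handle it as if
		 * the probe had never hit.
		 */
		regs->ip = (unsigned long)cur->addr;
		regs->flags |= kcb->kprobe_old_flags;
		if (kcb->kprobe_status == KPROBE_REENTER)
			restore_previous_kprobe(kcb);
		else
			reset_current_kprobe();
		preempt_enable_no_resched();
	}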

For #PF, both without and with my patch, stuff should get fixed up by
the normal pagefault handler, since the fixup happens after the kprobe
handler has fiddled with the exception state.

For #GP, we're already past the fixup call, and I think both without
and with my patch, nothing will catch it - so I think that's a bug,
but I don't think it's one I'm introducing.
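
To make the ordering concrete, the #GP path looks roughly like this
(an abridged sketch of do_general_protection(), not the verbatim
source):

	dotraplinkage void
	do_general_protection(struct pt_regs *regs, long error_code)
	{
		if (!user_mode(regs)) {
			/* the exception table fixup runs first ... */
			if (fixup_exception(regs, X86_TRAP_GP))
				return;

			/* ... kprobes only get notified afterwards */
			if (notify_die(DIE_GPF, "general protection fault",
				       regs, error_code, X86_TRAP_GP,
				       SIGSEGV) != NOTIFY_STOP)
				die("general protection fault",
				    regs, error_code);
			return;
		}
		/* user mode: deliver SIGSEGV */
	}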

> > For #PF on a kernel address from kernel space, after the kprobe fault
> > handler has run, we'll go into no_context(), which calls fixup_exception().
> >
> > Signed-off-by: Jann Horn <jannh@google.com>
> > ---
> >  arch/x86/kernel/kprobes/core.c | 7 -------
> >  1 file changed, 7 deletions(-)
> >
> > diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
> > index 467ac22691b0..7315ac202aad 100644
> > --- a/arch/x86/kernel/kprobes/core.c
> > +++ b/arch/x86/kernel/kprobes/core.c
> > @@ -1021,13 +1021,6 @@ int kprobe_fault_handler(struct pt_regs *regs, int trapnr)
> >                 if (cur->fault_handler && cur->fault_handler(cur, regs, trapnr))
> >                         return 1;
> >
> > -               /*
> > -                * In case the user-specified fault handler returned
> > -                * zero, try to fix up.
> > -                */
> > -               if (fixup_exception(regs, trapnr))
> > -                       return 1;
> > -
> >                 /* fixup routine could not handle it. */
> >         }
> >
> > --
> > 2.19.0.rc0.228.g281dcd1b4d0-goog
> >

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v2 3/7] x86: stop calling fixup_exception() from kprobe_fault_handler()
  2018-08-27 19:22     ` Jann Horn
@ 2018-08-27 19:25       ` Andy Lutomirski
  2018-08-28  1:21       ` Masami Hiramatsu
  1 sibling, 0 replies; 17+ messages in thread
From: Andy Lutomirski @ 2018-08-27 19:25 UTC (permalink / raw)
  To: Jann Horn
  Cc: Andy Lutomirski, Masami Hiramatsu, Naveen N . Rao,
	Anil S Keshavamurthy, David S. Miller, Kees Cook,
	Thomas Gleixner, Ingo Molnar, the arch/x86 maintainers,
	Kernel Hardening, Tony Luck, Borislav Petkov, kernel list,
	Dmitry Vyukov, linux-edac

On Mon, Aug 27, 2018 at 12:22 PM, Jann Horn <jannh@google.com> wrote:
> On Mon, Aug 27, 2018 at 9:02 PM Andy Lutomirski <luto@kernel.org> wrote:
>>
>> On Mon, Aug 27, 2018 at 11:56 AM, Jann Horn <jannh@google.com> wrote:
>> > This removes the call into exception fixup that was added in
>> > commit c28f896634f2 ("[PATCH] kprobes: fix broken fault handling for
>> > x86_64").
>> >
>> > On X86, kprobe_fault_handler() is called from two places:
>> > do_general_protection() (for #GP) and kprobes_fault() (for #PF).
>> > In both paths, the fixup_exception() call in the kprobe fault handler is
>> > redundant.
>> >
>> > For #GP, fixup_exception() is called immediately before
>> > kprobe_fault_handler() is invoked - if someone wanted to fix up our #GP,
>> > they've already done so, no need to try again. (This assumes that the
>> > kprobe's fault handler isn't going to do something crazy like changing RIP
>> > so that it suddenly points to an instruction that does userspace access.)
>>
>> This needs review by someone who understands kprobes better than I do.
>> What happens if someone puts a kprobe on a uaccess instruction and the
>> uaccess subsequently faults?
>
> Ugh, good point. I'll admit to not having thought about that properly.
>
> I think that's the "if (unlikely(regs->ip == (unsigned
> long)cur->ainsn.insn))" branch in kprobe_fault_handler(), which I'm
> not touching.
>
> For #PF, both without and with my patch, stuff should get fixed up by
> the normal pagefault handler, since the fixup happens after the kprobe
> handler has fiddled with the exception state.
>
> For #GP, we're already past the fixup call, and I think both without
> and with my patch, nothing will catch it - so I think that's a bug,
> but I don't think it's one I'm introducing.

Fair enough.  If there is indeed a problem, it'll be easier to fix
sanely with your patch applied :)

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v2 5/7] x86: plumb error code and fault address through to fault handlers
  2018-08-27 19:08   ` Andy Lutomirski
@ 2018-08-27 19:25     ` Jann Horn
  0 siblings, 0 replies; 17+ messages in thread
From: Jann Horn @ 2018-08-27 19:25 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: Kees Cook, Thomas Gleixner, Ingo Molnar,
	the arch/x86 maintainers, Kernel Hardening, tony.luck,
	Borislav Petkov, kernel list, Dmitry Vyukov, linux-edac

On Mon, Aug 27, 2018 at 9:09 PM Andy Lutomirski <luto@kernel.org> wrote:
>
> On Mon, Aug 27, 2018 at 11:56 AM, Jann Horn <jannh@google.com> wrote:
> > This is preparation for looking at trap number and fault address in the
> > handlers for uaccess errors.
> > This patch should not change any behavior.
>
> Confused.  $SUBJECT says you're plumbing the error code through, and
> you are sort of doing it:
>
> > -int fixup_exception(struct pt_regs *regs, int trapnr)
> > +int fixup_exception(struct pt_regs *regs, int trapnr, unsigned long error_code,
> > +                   unsigned long fault_addr)
> >  {
>
> It's available here...
>
> >         const struct exception_table_entry *e;
> >         ex_handler_t handler;
> > @@ -202,7 +212,7 @@ int fixup_exception(struct pt_regs *regs, int trapnr)
> >                 return 0;
> >
> >         handler = ex_fixup_handler(e);
> > -       return handler(e, regs, trapnr);
> > +       return handler(e, regs, trapnr, fault_addr);
>
> ... but you don't pass it into the handlers.  Is this intentional?

Whoops. No, that's not intentional. I'll fix it up for the next version.
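
Presumably something along these lines (an illustrative sketch of the
intended plumbing, not the actual v3 diff):

	int fixup_exception(struct pt_regs *regs, int trapnr,
			    unsigned long error_code,
			    unsigned long fault_addr)
	{
		const struct exception_table_entry *e;
		ex_handler_t handler;

		e = search_exception_tables(regs->ip);
		if (!e)
			return 0;

		handler = ex_fixup_handler(e);
		/* hand the handlers all three values this time */
		return handler(e, regs, trapnr, error_code, fault_addr);
	}

with ex_handler_t grown to take error_code and fault_addr as well.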

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v2 7/7] lkdtm: test copy_to_user() on bad kernel pointer under KERNEL_DS
  2018-08-27 18:56 ` [PATCH v2 7/7] lkdtm: test copy_to_user() on bad kernel pointer under KERNEL_DS Jann Horn
@ 2018-08-27 22:08   ` Kees Cook
  0 siblings, 0 replies; 17+ messages in thread
From: Kees Cook @ 2018-08-27 22:08 UTC (permalink / raw)
  To: Jann Horn
  Cc: Thomas Gleixner, Ingo Molnar, X86 ML, Andy Lutomirski,
	Kernel Hardening, Tony Luck, Borislav Petkov, LKML,
	Dmitry Vyukov, linux-edac

On Mon, Aug 27, 2018 at 11:56 AM, Jann Horn <jannh@google.com> wrote:
> Test whether the kernel WARN()s when, under KERNEL_DS, a bad kernel pointer
> is used as a "userspace" pointer. Should normally be used in "DIRECT" mode.
>
> Signed-off-by: Jann Horn <jannh@google.com>

Acked-by: Kees Cook <keescook@chromium.org>

-Kees
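
(Reproduction note: with CONFIG_LKDTM enabled, DIRECT-mode crashtypes
like this one are triggered by writing the crashtype name, here
USERCOPY_KERNEL_DS, to /sys/kernel/debug/provoke-crash/DIRECT.)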

> ---
>  drivers/misc/lkdtm/core.c     |  1 +
>  drivers/misc/lkdtm/lkdtm.h    |  1 +
>  drivers/misc/lkdtm/usercopy.c | 13 +++++++++++++
>  3 files changed, 15 insertions(+)
>
> diff --git a/drivers/misc/lkdtm/core.c b/drivers/misc/lkdtm/core.c
> index 2154d1bfd18b..5a755590d3dc 100644
> --- a/drivers/misc/lkdtm/core.c
> +++ b/drivers/misc/lkdtm/core.c
> @@ -183,6 +183,7 @@ static const struct crashtype crashtypes[] = {
>         CRASHTYPE(USERCOPY_STACK_FRAME_FROM),
>         CRASHTYPE(USERCOPY_STACK_BEYOND),
>         CRASHTYPE(USERCOPY_KERNEL),
> +       CRASHTYPE(USERCOPY_KERNEL_DS),
>  };
>
>
> diff --git a/drivers/misc/lkdtm/lkdtm.h b/drivers/misc/lkdtm/lkdtm.h
> index 9e513dcfd809..07db641d71d0 100644
> --- a/drivers/misc/lkdtm/lkdtm.h
> +++ b/drivers/misc/lkdtm/lkdtm.h
> @@ -82,5 +82,6 @@ void lkdtm_USERCOPY_STACK_FRAME_TO(void);
>  void lkdtm_USERCOPY_STACK_FRAME_FROM(void);
>  void lkdtm_USERCOPY_STACK_BEYOND(void);
>  void lkdtm_USERCOPY_KERNEL(void);
> +void lkdtm_USERCOPY_KERNEL_DS(void);
>
>  #endif
> diff --git a/drivers/misc/lkdtm/usercopy.c b/drivers/misc/lkdtm/usercopy.c
> index 9725aed305bb..389475b25bb7 100644
> --- a/drivers/misc/lkdtm/usercopy.c
> +++ b/drivers/misc/lkdtm/usercopy.c
> @@ -322,6 +322,19 @@ void lkdtm_USERCOPY_KERNEL(void)
>         vm_munmap(user_addr, PAGE_SIZE);
>  }
>
> +void lkdtm_USERCOPY_KERNEL_DS(void)
> +{
> +       char __user *user_ptr = (char __user *)ERR_PTR(-EINVAL);
> +       mm_segment_t old_fs = get_fs();
> +       char buf[10] = {0};
> +
> +       pr_info("attempting copy_to_user on unmapped kernel address\n");
> +       set_fs(KERNEL_DS);
> +       if (copy_to_user(user_ptr, buf, sizeof(buf)))
> +               pr_info("copy_to_user on unmapped kernel address failed\n");
> +       set_fs(old_fs);
> +}
> +
>  void __init lkdtm_usercopy_init(void)
>  {
>         /* Prepare cache that lacks SLAB_USERCOPY flag. */
> --
> 2.19.0.rc0.228.g281dcd1b4d0-goog
>



-- 
Kees Cook
Pixel Security

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v2 6/7] x86: BUG() when uaccess helpers fault on kernel addresses
  2018-08-27 18:56 ` [PATCH v2 6/7] x86: BUG() when uaccess helpers fault on kernel addresses Jann Horn
@ 2018-08-28  0:51   ` Jann Horn
  0 siblings, 0 replies; 17+ messages in thread
From: Jann Horn @ 2018-08-28  0:51 UTC (permalink / raw)
  To: Kees Cook, Thomas Gleixner, Ingo Molnar,
	the arch/x86 maintainers, Andy Lutomirski, Kernel Hardening,
	tony.luck, Borislav Petkov
  Cc: kernel list, Dmitry Vyukov, linux-edac

On Mon, Aug 27, 2018 at 8:58 PM Jann Horn <jannh@google.com> wrote:
>
> There have been multiple kernel vulnerabilities that permitted userspace to
> pass completely unchecked pointers through to userspace accessors:
>
>  - the waitid() bug - commit 96ca579a1ecc ("waitid(): Add missing
>    access_ok() checks")
>  - the sg/bsg read/write APIs
>  - the infiniband read/write APIs
>
> These don't happen all that often, but when they do happen, it is hard to
> test for them properly; and it is probably also hard to discover them with
> fuzzing. Even when an unmapped kernel address is supplied to such buggy
> code, it just returns -EFAULT instead of doing a proper BUG() or at least
> WARN().
>
> This patch attempts to make such misbehaving code a bit more visible by
> refusing to do a fixup in the pagefault handler code when a userspace
> accessor causes #PF on a kernel address and the current context isn't
> whitelisted.

Kees noticed that with vmapped stacks enabled (I was testing with
KASAN, so I couldn't enable vmapped stacks), exact_copy_from_user() in
copy_mount_options() is incompatible with this patch when used on
kernel pointers. do_mount_root() and devtmpfsd() call ksys_mount() with
kernel pointers. So I'll have to add that to the whitelist and/or
refactor the mount options copy logic first (so that it uses
strncpy_from_user() as long as the data argument isn't binary data).

I've wanted to fiddle with that code anyway - silent truncation at
0xfff bytes isn't very robust, especially when suid binaries call
sys_mount() with a partly user-controlled options string...
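
For context, the copy in question tolerates a fault partway through
its page-sized copy, which is exactly the pattern the new BUG() logic
rejects when the source is an unmapped kernel pointer. Roughly (an
abridged sketch of exact_copy_from_user() from fs/namespace.c of this
era, not the verbatim source):

	static long exact_copy_from_user(void *to, const void __user *from,
					 unsigned long n)
	{
		char *t = to;
		const char __user *f = from;
		char c;

		if (!access_ok(VERIFY_READ, from, n))
			return n;

		while (n) {
			if (__get_user(c, f)) {
				/* fault mid-copy: zero the rest, stop */
				memset(t, 0, n);
				break;
			}
			*t++ = c;
			f++;
			n--;
		}
		return n;	/* bytes not copied */
	}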

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v2 3/7] x86: stop calling fixup_exception() from kprobe_fault_handler()
  2018-08-27 19:22     ` Jann Horn
  2018-08-27 19:25       ` Andy Lutomirski
@ 2018-08-28  1:21       ` Masami Hiramatsu
  1 sibling, 0 replies; 17+ messages in thread
From: Masami Hiramatsu @ 2018-08-28  1:21 UTC (permalink / raw)
  To: Jann Horn
  Cc: Andy Lutomirski, mhiramat, naveen.n.rao, anil.s.keshavamurthy,
	David S. Miller, Kees Cook, Thomas Gleixner, Ingo Molnar,
	the arch/x86 maintainers, Kernel Hardening, tony.luck,
	Borislav Petkov, kernel list, Dmitry Vyukov, linux-edac

On Mon, 27 Aug 2018 21:22:19 +0200
Jann Horn <jannh@google.com> wrote:

> On Mon, Aug 27, 2018 at 9:02 PM Andy Lutomirski <luto@kernel.org> wrote:
> >
> > On Mon, Aug 27, 2018 at 11:56 AM, Jann Horn <jannh@google.com> wrote:
> > > This removes the call into exception fixup that was added in
> > > commit c28f896634f2 ("[PATCH] kprobes: fix broken fault handling for
> > > x86_64").
> > >
> > > On X86, kprobe_fault_handler() is called from two places:
> > > do_general_protection() (for #GP) and kprobes_fault() (for #PF).
> > > In both paths, the fixup_exception() call in the kprobe fault handler is
> > > redundant.
> > >
> > > For #GP, fixup_exception() is called immediately before
> > > kprobe_fault_handler() is invoked - if someone wanted to fix up our #GP,
> > > they've already done so, no need to try again. (This assumes that the
> > > kprobe's fault handler isn't going to do something crazy like changing RIP
> > > so that it suddenly points to an instruction that does userspace access.)
> >
> > This needs review by someone who understands kprobes better than I do.
> > What happens if someone puts a kprobe on a uaccess instruction and the
> > uaccess subsequently faults?
> 
> Ugh, good point. I'll admit to not having thought about that properly.
> 
> I think that's the "if (unlikely(regs->ip == (unsigned
> long)cur->ainsn.insn))" branch in kprobe_fault_handler(), which I'm
> not touching.

Correct, probing on a uaccess instruction is handled by that branch.
So this fixup_exception() call is just there for safety.
As Jann said, no_context() handles it correctly, so we don't need it.
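
(For reference, that path reads roughly as follows; an abridged sketch
of no_context() in arch/x86/mm/fault.c, not the verbatim source:)

	static noinline void
	no_context(struct pt_regs *regs, unsigned long error_code,
		   unsigned long address, int signal, int si_code)
	{
		/* Are we prepared to handle this kernel fault? */
		if (fixup_exception(regs, X86_TRAP_PF)) {
			/* an extable entry handled the uaccess fault */
			return;
		}

		/* otherwise fall through to the oops/die handling */
	}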

Acked-by: Masami Hiramatsu <mhiramat@kernel.org>

Thank you!

> 
> For #PF, both without and with my patch, stuff should get fixed up by
> the normal pagefault handler, since the fixup happens after the kprobe
> handler has fiddled with the exception state.
> 
> For #GP, we're already past the fixup call, and I think both without
> and with my patch, nothing will catch it - so I think that's a bug,
> but I don't think it's one I'm introducing.
> 
> > > For #PF on a kernel address from kernel space, after the kprobe fault
> > > handler has run, we'll go into no_context(), which calls fixup_exception().
> > >
> > > Signed-off-by: Jann Horn <jannh@google.com>
> > > ---
> > >  arch/x86/kernel/kprobes/core.c | 7 -------
> > >  1 file changed, 7 deletions(-)
> > >
> > > diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
> > > index 467ac22691b0..7315ac202aad 100644
> > > --- a/arch/x86/kernel/kprobes/core.c
> > > +++ b/arch/x86/kernel/kprobes/core.c
> > > @@ -1021,13 +1021,6 @@ int kprobe_fault_handler(struct pt_regs *regs, int trapnr)
> > >                 if (cur->fault_handler && cur->fault_handler(cur, regs, trapnr))
> > >                         return 1;
> > >
> > > -               /*
> > > -                * In case the user-specified fault handler returned
> > > -                * zero, try to fix up.
> > > -                */
> > > -               if (fixup_exception(regs, trapnr))
> > > -                       return 1;
> > > -
> > >                 /* fixup routine could not handle it. */
> > >         }
> > >
> > > --
> > > 2.19.0.rc0.228.g281dcd1b4d0-goog
> > >


-- 
Masami Hiramatsu <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v2 1/7] x86: refactor kprobes_fault() like kprobe_exceptions_notify()
  2018-08-27 18:56 ` [PATCH v2 1/7] x86: refactor kprobes_fault() like kprobe_exceptions_notify() Jann Horn
@ 2018-08-28 11:36   ` Borislav Petkov
  0 siblings, 0 replies; 17+ messages in thread
From: Borislav Petkov @ 2018-08-28 11:36 UTC (permalink / raw)
  To: Jann Horn
  Cc: Kees Cook, Thomas Gleixner, Ingo Molnar, x86, Andy Lutomirski,
	kernel-hardening, Tony Luck, linux-kernel, dvyukov, linux-edac

On Mon, Aug 27, 2018 at 08:56:25PM +0200, Jann Horn wrote:
> This is an extension of commit b506a9d08bae ("x86: code clarification patch
> to Kprobes arch code"). As that commit explains, even though
> kprobe_running() can't be called with preemption enabled, you don't have to
> disable preemption - if preemption is on, you can't be in a kprobe.
> 
> Also, use X86_TRAP_PF instead of 14.
> 
> Signed-off-by: Jann Horn <jannh@google.com>
> ---
>  arch/x86/mm/fault.c | 19 ++++++++-----------
>  1 file changed, 8 insertions(+), 11 deletions(-)
> 
> diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
> index b9123c497e0a..2254a30533b9 100644
> --- a/arch/x86/mm/fault.c
> +++ b/arch/x86/mm/fault.c
> @@ -44,17 +44,14 @@ kmmio_fault(struct pt_regs *regs, unsigned long addr)
>  
>  static nokprobe_inline int kprobes_fault(struct pt_regs *regs)
>  {
> -	int ret = 0;
> -
> -	/* kprobe_running() needs smp_processor_id() */
> -	if (kprobes_built_in() && !user_mode(regs)) {
> -		preempt_disable();
> -		if (kprobe_running() && kprobe_fault_handler(regs, 14))
> -			ret = 1;
> -		preempt_enable();
> -	}
> -
> -	return ret;
> +	/*
> +	 * To be potentially processing a kprobe fault and to be allowed to call
> +	 * kprobe_running(), we have to be non-preemptible.
> +	 */
> +	if (kprobes_built_in() && !user_mode(regs) && !preemptible() &&
> +	    kprobe_running() && kprobe_fault_handler(regs, X86_TRAP_PF))
> +		return 1;
> +	return 0;

Maybe even:

	return (kprobes_built_in() &&
		!user_mode(regs) &&
		!preemptible() &&
		kprobe_running() &&
		kprobe_fault_handler(regs, X86_TRAP_PF));

Although I'd make it a bit more readable by flipping the checks and
splitting them up:

	if (!kprobes_built_in())
		return 0;

	if (user_mode(regs))
		return 0;

	...

	return kprobe_fault_handler(regs, X86_TRAP_PF);
}
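
Spelled out, with the elided checks filled in from the original
condition, that would presumably look like:

	static nokprobe_inline int kprobes_fault(struct pt_regs *regs)
	{
		if (!kprobes_built_in())
			return 0;

		if (user_mode(regs))
			return 0;

		/*
		 * To be potentially processing a kprobe fault and to be
		 * allowed to call kprobe_running(), we have to be
		 * non-preemptible.
		 */
		if (preemptible())
			return 0;

		if (!kprobe_running())
			return 0;

		return kprobe_fault_handler(regs, X86_TRAP_PF);
	}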

-- 
Regards/Gruss,
    Boris.

ECO tip #101: Trim your mails when you reply.
--

^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2018-08-28 11:38 UTC | newest]

Thread overview: 17+ messages
2018-08-27 18:56 [PATCH v2 0/7] x86: BUG() on #GP / kernel #PF in uaccess Jann Horn
2018-08-27 18:56 ` [PATCH v2 1/7] x86: refactor kprobes_fault() like kprobe_exceptions_notify() Jann Horn
2018-08-28 11:36   ` Borislav Petkov
2018-08-27 18:56 ` [PATCH v2 2/7] x86: inline kprobe_exceptions_notify() into do_general_protection() Jann Horn
2018-08-27 18:56 ` [PATCH v2 3/7] x86: stop calling fixup_exception() from kprobe_fault_handler() Jann Horn
2018-08-27 19:02   ` Andy Lutomirski
2018-08-27 19:22     ` Jann Horn
2018-08-27 19:25       ` Andy Lutomirski
2018-08-28  1:21       ` Masami Hiramatsu
2018-08-27 18:56 ` [PATCH v2 4/7] x86: introduce _ASM_EXTABLE_UA for uaccess fixups Jann Horn
2018-08-27 18:56 ` [PATCH v2 5/7] x86: plumb error code and fault address through to fault handlers Jann Horn
2018-08-27 19:08   ` Andy Lutomirski
2018-08-27 19:25     ` Jann Horn
2018-08-27 18:56 ` [PATCH v2 6/7] x86: BUG() when uaccess helpers fault on kernel addresses Jann Horn
2018-08-28  0:51   ` Jann Horn
2018-08-27 18:56 ` [PATCH v2 7/7] lkdtm: test copy_to_user() on bad kernel pointer under KERNEL_DS Jann Horn
2018-08-27 22:08   ` Kees Cook
