linux-kernel.vger.kernel.org archive mirror
* [PATCH v4 00/10] x86/alternative: text_poke() fixes
@ 2018-11-10 23:17 Nadav Amit
  2018-11-10 23:17 ` [PATCH v4 01/10] Fix "x86/alternatives: Lockdep-enforce text_mutex in text_poke*()" Nadav Amit
                   ` (9 more replies)
  0 siblings, 10 replies; 29+ messages in thread
From: Nadav Amit @ 2018-11-10 23:17 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, x86, H. Peter Anvin, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Nadav Amit

This patch-set addresses some issues that might affect the security and
the correctness of code patching.

The main issue that the patches deal with is the fact that the fixmap
PTEs that are used for patching are accessible from other cores and
might be exploited. They are not even flushed from the TLB on remote
cores, so the risk is even higher. This set addresses the issue by
introducing a temporary mm that is only used during patching.
Unfortunately, due to init ordering, the fixmap is still used during
boot-time patching. Future patches can eliminate the need for it.

To do so, we need to avoid using text_poke() before the poking-mm is
initialized and instead use text_poke_early().
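
For reference, a rough sketch of the per-poke flow that the set ends up
with (simplified; the real code is in patches 3-6, and variable
declarations are omitted - names such as "page" are placeholders):

        local_irq_save(flags);
        /* Map the target page into the dedicated patching address space */
        ptep = get_locked_pte(poking_mm, poking_addr, &ptl);
        set_pte_at(poking_mm, poking_addr, ptep, mk_pte(page, PAGE_KERNEL));
        /* Switch to the temporary mm on this core only */
        prev = use_temporary_mm(poking_mm);
        memcpy((u8 *)poking_addr + offset_in_page(addr), opcode, len);
        /* Tear the mapping down and flush only the local TLB */
        pte_clear(poking_mm, poking_addr, ptep);
        __flush_tlb_one_user(poking_addr);
        unuse_temporary_mm(prev);
        pte_unmap_unlock(ptep, ptl);
        local_irq_restore(flags);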

During the review of v3, Andy & Thomas suggested that early patching of
modules can be improved by simply writing to the memory. This actually
raises a security concern: there should not be any W+X mappings at any
given moment, and module loading breaks this protection for no good
reason. So this patch-set also addresses that issue, while (presumably)
improving patching speed, by making module memory initially RW(+NX) and
changing it to RO(+X) only before it is executed.

In addition, the patch-set addresses various issues that are related to
code patching and does some cleanup. In this version I removed some
Tested-by and Reviewed-by tags due to extensive changes in some of the
patches.

v3->v4:
- Setting modules as RO when loading [Andy, tglx]
- Adding text_poke_kgdb() to keep the text_mutex assertion [tglx]
- Simpler logic to decide when to use early-poking [peterZ]
- More cleanup

v2->v3:
- Remove the fallback path in text_poke() [peterZ]
- poking_init() was broken due to the local variable poking_addr
- Preallocate tables for the temporary-mm to avoid sleep-in-atomic
- Prevent KASAN from yelling at text_poke()

v1->v2:
- Partial revert of 9222f606506c added to 1/6 [masami]
- Added Masami's reviewed-by tag

RFC->v1:
- Added handling of error in get_locked_pte()
- Remove lockdep assertion, clarify text_mutex use instead [masami]
- Comment fix [peterz]
- Removed remainders of text_poke return value [masami]
- Use __weak for poking_init instead of macros [masami]
- Simplify error handling in poking_init [masami]

Andy Lutomirski (1):
  x86/mm: temporary mm struct

Nadav Amit (9):
  Fix "x86/alternatives: Lockdep-enforce text_mutex in text_poke*()"
  x86/jump_label: Use text_poke_early() during early init
  fork: provide a function for copying init_mm
  x86/alternative: initializing temporary mm for patching
  x86/alternative: use temporary mm for text poking
  x86/kgdb: avoid redundant comparison of code
  x86: avoid W^X being broken during modules loading
  x86/jump-label: remove support for custom poker
  x86/alternative: remove the return value of text_poke_*()

 arch/x86/include/asm/fixmap.h        |   2 -
 arch/x86/include/asm/mmu_context.h   |  20 +++
 arch/x86/include/asm/pgtable.h       |   3 +
 arch/x86/include/asm/text-patching.h |   9 +-
 arch/x86/kernel/alternative.c        | 208 +++++++++++++++++++++------
 arch/x86/kernel/jump_label.c         |  24 ++--
 arch/x86/kernel/kgdb.c               |  19 +--
 arch/x86/kernel/module.c             |   2 +-
 arch/x86/mm/init_64.c                |  39 +++++
 include/linux/filter.h               |   6 +
 include/linux/sched/task.h           |   1 +
 init/main.c                          |   3 +
 kernel/fork.c                        |  24 +++-
 kernel/module.c                      |  10 ++
 14 files changed, 289 insertions(+), 81 deletions(-)

-- 
2.17.1


^ permalink raw reply	[flat|nested] 29+ messages in thread

* [PATCH v4 01/10] Fix "x86/alternatives: Lockdep-enforce text_mutex in text_poke*()"
  2018-11-10 23:17 [PATCH v4 00/10] x86/alternative: text_poke() fixes Nadav Amit
@ 2018-11-10 23:17 ` Nadav Amit
  2018-11-12  2:54   ` Masami Hiramatsu
  2018-11-10 23:17 ` [PATCH v4 02/10] x86/jump_label: Use text_poke_early() during early init Nadav Amit
                   ` (8 subsequent siblings)
  9 siblings, 1 reply; 29+ messages in thread
From: Nadav Amit @ 2018-11-10 23:17 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, x86, H. Peter Anvin, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Nadav Amit, Jiri Kosina,
	Andy Lutomirski, Kees Cook, Dave Hansen, Masami Hiramatsu

text_mutex is currently expected to be held before text_poke() is
called, but kgdb does not take the mutex, and instead *supposedly*
ensures the lock is not taken and will not be acquired by any other core
while text_poke() is running.

The reason for the "supposedly" comment is that it is not entirely clear
that this would be the case if kgdb_do_roundup is zero.

This patch creates two wrapper functions, text_poke() and
text_poke_kgdb(), which run and do not run the lockdep assertion,
respectively.

While we are at it, change the return code of text_poke() to something
meaningful. One day, callers might actually respect it, and the existing
BUG_ON() that fires when patching fails could be removed. For kgdb, the
return value can actually be used.
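
For illustration, the intended calling conventions after this change, as
a sketch (the real callers are in the hunks below):

        /* Regular kernel code: text_mutex must be held around the poke */
        mutex_lock(&text_mutex);
        text_poke(addr, opcode, len);
        mutex_unlock(&text_mutex);

        /* kgdb: all other CPUs are stopped, so no mutex and no assertion */
        if (!mutex_is_locked(&text_mutex))
                text_poke_kgdb(addr, opcode, len);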

Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Fixes: 9222f606506c ("x86/alternatives: Lockdep-enforce text_mutex in text_poke*()")
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
---
 arch/x86/include/asm/text-patching.h |  3 +-
 arch/x86/kernel/alternative.c        | 72 +++++++++++++++++++++-------
 arch/x86/kernel/kgdb.c               | 15 ++++--
 3 files changed, 66 insertions(+), 24 deletions(-)

diff --git a/arch/x86/include/asm/text-patching.h b/arch/x86/include/asm/text-patching.h
index e85ff65c43c3..5a2600370763 100644
--- a/arch/x86/include/asm/text-patching.h
+++ b/arch/x86/include/asm/text-patching.h
@@ -34,7 +34,8 @@ extern void *text_poke_early(void *addr, const void *opcode, size_t len);
  * On the local CPU you need to be protected again NMI or MCE handlers seeing an
  * inconsistent instruction while you patch.
  */
-extern void *text_poke(void *addr, const void *opcode, size_t len);
+extern int text_poke(void *addr, const void *opcode, size_t len);
+extern int text_poke_kgdb(void *addr, const void *opcode, size_t len);
 extern int poke_int3_handler(struct pt_regs *regs);
 extern void *text_poke_bp(void *addr, const void *opcode, size_t len, void *handler);
 extern int after_bootmem;
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index ebeac487a20c..ebe9210dc92e 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -678,23 +678,12 @@ void *__init_or_module text_poke_early(void *addr, const void *opcode,
 	return addr;
 }
 
-/**
- * text_poke - Update instructions on a live kernel
- * @addr: address to modify
- * @opcode: source of the copy
- * @len: length to copy
- *
- * Only atomic text poke/set should be allowed when not doing early patching.
- * It means the size must be writable atomically and the address must be aligned
- * in a way that permits an atomic write. It also makes sure we fit on a single
- * page.
- */
-void *text_poke(void *addr, const void *opcode, size_t len)
+static int __text_poke(void *addr, const void *opcode, size_t len)
 {
 	unsigned long flags;
 	char *vaddr;
 	struct page *pages[2];
-	int i;
+	int i, r = 0;
 
 	/*
 	 * While boot memory allocator is runnig we cannot use struct
@@ -702,8 +691,6 @@ void *text_poke(void *addr, const void *opcode, size_t len)
 	 */
 	BUG_ON(!after_bootmem);
 
-	lockdep_assert_held(&text_mutex);
-
 	if (!core_kernel_text((unsigned long)addr)) {
 		pages[0] = vmalloc_to_page(addr);
 		pages[1] = vmalloc_to_page(addr + PAGE_SIZE);
@@ -712,7 +699,8 @@ void *text_poke(void *addr, const void *opcode, size_t len)
 		WARN_ON(!PageReserved(pages[0]));
 		pages[1] = virt_to_page(addr + PAGE_SIZE);
 	}
-	BUG_ON(!pages[0]);
+	if (!pages[0])
+		return -EFAULT;
 	local_irq_save(flags);
 	set_fixmap(FIX_TEXT_POKE0, page_to_phys(pages[0]));
 	if (pages[1])
@@ -727,9 +715,57 @@ void *text_poke(void *addr, const void *opcode, size_t len)
 	/* Could also do a CLFLUSH here to speed up CPU recovery; but
 	   that causes hangs on some VIA CPUs. */
 	for (i = 0; i < len; i++)
-		BUG_ON(((char *)addr)[i] != ((char *)opcode)[i]);
+		if (((char *)addr)[i] != ((char *)opcode)[i])
+			r = -EFAULT;
 	local_irq_restore(flags);
-	return addr;
+	return r;
+}
+
+/**
+ * text_poke - Update instructions on a live kernel
+ * @addr: address to modify
+ * @opcode: source of the copy
+ * @len: length to copy
+ *
+ * Only atomic text poke/set should be allowed when not doing early patching.
+ * It means the size must be writable atomically and the address must be aligned
+ * in a way that permits an atomic write. It also makes sure we fit on a single
+ * page.
+ */
+int text_poke(void *addr, const void *opcode, size_t len)
+{
+	int r;
+
+	lockdep_assert_held(&text_mutex);
+
+	r = __text_poke(addr, opcode, len);
+
+	/*
+	 * TODO: change the callers to consider the return value and remove this
+	 *       historical assertion.
+	 */
+	BUG_ON(r);
+
+	return r;
+}
+
+/**
+ * text_poke_kgdb - Update instructions on a live kernel by kgdb
+ * @addr: address to modify
+ * @opcode: source of the copy
+ * @len: length to copy
+ *
+ * Only atomic text poke/set should be allowed when not doing early patching.
+ * It means the size must be writable atomically and the address must be aligned
+ * in a way that permits an atomic write. It also makes sure we fit on a single
+ * page.
+ *
+ * Context: should only be used by kgdb, which ensures no other core is running,
+ *	    despite the fact it does not hold the text_mutex.
+ */
+int text_poke_kgdb(void *addr, const void *opcode, size_t len)
+{
+	return __text_poke(addr, opcode, len);
 }
 
 static void do_sync_core(void *info)
diff --git a/arch/x86/kernel/kgdb.c b/arch/x86/kernel/kgdb.c
index 8e36f249646e..8091b2e381d4 100644
--- a/arch/x86/kernel/kgdb.c
+++ b/arch/x86/kernel/kgdb.c
@@ -763,13 +763,15 @@ int kgdb_arch_set_breakpoint(struct kgdb_bkpt *bpt)
 	if (!err)
 		return err;
 	/*
-	 * It is safe to call text_poke() because normal kernel execution
+	 * It is safe to call text_poke_kgdb() because normal kernel execution
 	 * is stopped on all cores, so long as the text_mutex is not locked.
 	 */
 	if (mutex_is_locked(&text_mutex))
 		return -EBUSY;
-	text_poke((void *)bpt->bpt_addr, arch_kgdb_ops.gdb_bpt_instr,
-		  BREAK_INSTR_SIZE);
+	err = text_poke_kgdb((void *)bpt->bpt_addr, arch_kgdb_ops.gdb_bpt_instr,
+			     BREAK_INSTR_SIZE);
+	if (err)
+		return err;
 	err = probe_kernel_read(opc, (char *)bpt->bpt_addr, BREAK_INSTR_SIZE);
 	if (err)
 		return err;
@@ -788,12 +790,15 @@ int kgdb_arch_remove_breakpoint(struct kgdb_bkpt *bpt)
 	if (bpt->type != BP_POKE_BREAKPOINT)
 		goto knl_write;
 	/*
-	 * It is safe to call text_poke() because normal kernel execution
+	 * It is safe to call text_poke_kgdb() because normal kernel execution
 	 * is stopped on all cores, so long as the text_mutex is not locked.
 	 */
 	if (mutex_is_locked(&text_mutex))
 		goto knl_write;
-	text_poke((void *)bpt->bpt_addr, bpt->saved_instr, BREAK_INSTR_SIZE);
+	err = text_poke_kgdb((void *)bpt->bpt_addr, bpt->saved_instr,
+			     BREAK_INSTR_SIZE);
+	if (err)
+		return err;
 	err = probe_kernel_read(opc, (char *)bpt->bpt_addr, BREAK_INSTR_SIZE);
 	if (err || memcmp(opc, bpt->saved_instr, BREAK_INSTR_SIZE))
 		goto knl_write;
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH v4 02/10] x86/jump_label: Use text_poke_early() during early init
  2018-11-10 23:17 [PATCH v4 00/10] x86/alternative: text_poke() fixes Nadav Amit
  2018-11-10 23:17 ` [PATCH v4 01/10] Fix "x86/alternatives: Lockdep-enforce text_mutex in text_poke*()" Nadav Amit
@ 2018-11-10 23:17 ` Nadav Amit
  2018-11-12 20:12   ` Nadav Amit
  2018-11-10 23:17 ` [PATCH v4 03/10] x86/mm: temporary mm struct Nadav Amit
                   ` (7 subsequent siblings)
  9 siblings, 1 reply; 29+ messages in thread
From: Nadav Amit @ 2018-11-10 23:17 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, x86, H. Peter Anvin, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Nadav Amit, Andy Lutomirski,
	Kees Cook, Dave Hansen, Masami Hiramatsu

There is no apparent reason not to use text_poke_early() during early
init, as long as we do not patch code that might be on the stack (i.e.,
code to which execution will return in the middle of the patched
sequence). This appears to be the case for jump-labels, so do so.

This is required for the following patches, which set a temporary mm
for patching; that mm is initialized only after some static-keys are
enabled/disabled.

Cc: Andy Lutomirski <luto@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Co-Developed-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
---
 arch/x86/kernel/jump_label.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/jump_label.c b/arch/x86/kernel/jump_label.c
index aac0c1f7e354..ed5fe274a7d8 100644
--- a/arch/x86/kernel/jump_label.c
+++ b/arch/x86/kernel/jump_label.c
@@ -52,7 +52,12 @@ static void __ref __jump_label_transform(struct jump_entry *entry,
 	jmp.offset = jump_entry_target(entry) -
 		     (jump_entry_code(entry) + JUMP_LABEL_NOP_SIZE);
 
-	if (early_boot_irqs_disabled)
+	/*
+	 * As long as we're UP and not yet marked RO, we can use
+	 * text_poke_early; SYSTEM_BOOTING guarantees both, as we switch to
+	 * SYSTEM_SCHEDULING before going either.
+	 */
+	if (system_state == SYSTEM_BOOTING)
 		poker = text_poke_early;
 
 	if (type == JUMP_LABEL_JMP) {
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH v4 03/10] x86/mm: temporary mm struct
  2018-11-10 23:17 [PATCH v4 00/10] x86/alternative: text_poke() fixes Nadav Amit
  2018-11-10 23:17 ` [PATCH v4 01/10] Fix "x86/alternatives: Lockdep-enforce text_mutex in text_poke*()" Nadav Amit
  2018-11-10 23:17 ` [PATCH v4 02/10] x86/jump_label: Use text_poke_early() during early init Nadav Amit
@ 2018-11-10 23:17 ` Nadav Amit
  2018-11-10 23:17 ` [PATCH v4 04/10] fork: provide a function for copying init_mm Nadav Amit
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 29+ messages in thread
From: Nadav Amit @ 2018-11-10 23:17 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, x86, H. Peter Anvin, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Andy Lutomirski, Kees Cook,
	Peter Zijlstra, Dave Hansen, Nadav Amit

From: Andy Lutomirski <luto@kernel.org>

Sometimes we want to set temporary page-table entries (PTEs) on one of
the cores, without allowing other cores to use - even speculatively -
these mappings. There are two benefits to doing so:

(1) Security: if sensitive PTEs are set, a temporary mm prevents their
use by other cores. This hardens security, as it prevents exploiting a
dangling pointer to overwrite sensitive data using the sensitive PTE.

(2) Avoiding TLB shootdowns: the PTEs do not need to be flushed in
remote page-tables.

To do so, a temporary mm_struct can be used. Mappings which are private
to this mm can be set in the userspace part of the address-space.
During the whole time in which the temporary mm is loaded, interrupts
must be disabled.

The first use-case for temporary PTEs, which will follow, is for poking
the kernel text.
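
A minimal usage sketch, assuming interrupts are already disabled and
"patching_mm" is just an illustrative mm pointer (the actual user is
introduced later in this series):

        temporary_mm_state_t prev;

        /* Both helpers assert that interrupts are disabled */
        prev = use_temporary_mm(patching_mm);
        /* ... access mappings that exist only in patching_mm ... */
        unuse_temporary_mm(prev);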

[ Commit message was written by Nadav ]

Cc: Kees Cook <keescook@chromium.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
Tested-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
---
 arch/x86/include/asm/mmu_context.h | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
index 0ca50611e8ce..7cc8e5c50bf6 100644
--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -338,4 +338,24 @@ static inline unsigned long __get_current_cr3_fast(void)
 	return cr3;
 }
 
+typedef struct {
+	struct mm_struct *prev;
+} temporary_mm_state_t;
+
+static inline temporary_mm_state_t use_temporary_mm(struct mm_struct *mm)
+{
+	temporary_mm_state_t state;
+
+	lockdep_assert_irqs_disabled();
+	state.prev = this_cpu_read(cpu_tlbstate.loaded_mm);
+	switch_mm_irqs_off(NULL, mm, current);
+	return state;
+}
+
+static inline void unuse_temporary_mm(temporary_mm_state_t prev)
+{
+	lockdep_assert_irqs_disabled();
+	switch_mm_irqs_off(NULL, prev.prev, current);
+}
+
 #endif /* _ASM_X86_MMU_CONTEXT_H */
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH v4 04/10] fork: provide a function for copying init_mm
  2018-11-10 23:17 [PATCH v4 00/10] x86/alternative: text_poke() fixes Nadav Amit
                   ` (2 preceding siblings ...)
  2018-11-10 23:17 ` [PATCH v4 03/10] x86/mm: temporary mm struct Nadav Amit
@ 2018-11-10 23:17 ` Nadav Amit
  2018-11-10 23:17 ` [PATCH v4 05/10] x86/alternative: initializing temporary mm for patching Nadav Amit
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 29+ messages in thread
From: Nadav Amit @ 2018-11-10 23:17 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, x86, H. Peter Anvin, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Nadav Amit, Andy Lutomirski,
	Kees Cook, Peter Zijlstra, Dave Hansen

Provide a function for copying init_mm. This function will be later used
for setting a temporary mm.
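
Expected use, as a sketch (error handling is up to the caller; -ENOMEM
here is only illustrative):

        struct mm_struct *mm = copy_init_mm();

        if (!mm)
                return -ENOMEM;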

Cc: Andy Lutomirski <luto@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
Tested-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
---
 include/linux/sched/task.h |  1 +
 kernel/fork.c              | 24 ++++++++++++++++++------
 2 files changed, 19 insertions(+), 6 deletions(-)

diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h
index 108ede99e533..ac0a675678f5 100644
--- a/include/linux/sched/task.h
+++ b/include/linux/sched/task.h
@@ -74,6 +74,7 @@ extern void exit_itimers(struct signal_struct *);
 extern long _do_fork(unsigned long, unsigned long, unsigned long, int __user *, int __user *, unsigned long);
 extern long do_fork(unsigned long, unsigned long, unsigned long, int __user *, int __user *);
 struct task_struct *fork_idle(int);
+struct mm_struct *copy_init_mm(void);
 extern pid_t kernel_thread(int (*fn)(void *), void *arg, unsigned long flags);
 extern long kernel_wait4(pid_t, int __user *, int, struct rusage *);
 
diff --git a/kernel/fork.c b/kernel/fork.c
index 07cddff89c7b..01d3f5b39363 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1297,13 +1297,20 @@ void mm_release(struct task_struct *tsk, struct mm_struct *mm)
 		complete_vfork_done(tsk);
 }
 
-/*
- * Allocate a new mm structure and copy contents from the
- * mm structure of the passed in task structure.
+/**
+ * dup_mm() - duplicates an existing mm structure
+ * @tsk: the task_struct with which the new mm will be associated.
+ * @oldmm: the mm to duplicate.
+ *
+ * Allocates a new mm structure and copy contents from the provided
+ * @oldmm structure.
+ *
+ * Return: the duplicated mm or NULL on failure.
  */
-static struct mm_struct *dup_mm(struct task_struct *tsk)
+static struct mm_struct *dup_mm(struct task_struct *tsk,
+				struct mm_struct *oldmm)
 {
-	struct mm_struct *mm, *oldmm = current->mm;
+	struct mm_struct *mm;
 	int err;
 
 	mm = allocate_mm();
@@ -1370,7 +1377,7 @@ static int copy_mm(unsigned long clone_flags, struct task_struct *tsk)
 	}
 
 	retval = -ENOMEM;
-	mm = dup_mm(tsk);
+	mm = dup_mm(tsk, current->mm);
 	if (!mm)
 		goto fail_nomem;
 
@@ -2176,6 +2183,11 @@ struct task_struct *fork_idle(int cpu)
 	return task;
 }
 
+struct mm_struct *copy_init_mm(void)
+{
+	return dup_mm(NULL, &init_mm);
+}
+
 /*
  *  Ok, this is the main fork-routine.
  *
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH v4 05/10] x86/alternative: initializing temporary mm for patching
  2018-11-10 23:17 [PATCH v4 00/10] x86/alternative: text_poke() fixes Nadav Amit
                   ` (3 preceding siblings ...)
  2018-11-10 23:17 ` [PATCH v4 04/10] fork: provide a function for copying init_mm Nadav Amit
@ 2018-11-10 23:17 ` Nadav Amit
  2018-11-11 14:43   ` Peter Zijlstra
  2018-11-10 23:17 ` [PATCH v4 06/10] x86/alternative: use temporary mm for text poking Nadav Amit
                   ` (4 subsequent siblings)
  9 siblings, 1 reply; 29+ messages in thread
From: Nadav Amit @ 2018-11-10 23:17 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, x86, H. Peter Anvin, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Nadav Amit, Kees Cook,
	Peter Zijlstra, Dave Hansen

To prevent improper use of the PTEs that are used for text patching, we
want to use a temporary mm struct. We initialize it by copying the init
mm.

The address that will be used for patching is taken from the lower area
that is usually used for task (user) memory. Doing so prevents the need
to frequently synchronize the temporary mm (e.g., when BPF programs are
installed), since different PGDs are used for task memory.

Finally, we randomize the address of the PTEs to harden against exploits
that use these PTEs.

Cc: Kees Cook <keescook@chromium.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
Tested-by: Masami Hiramatsu <mhiramat@kernel.org>
Suggested-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
---
 arch/x86/include/asm/pgtable.h       |  3 +++
 arch/x86/include/asm/text-patching.h |  2 ++
 arch/x86/kernel/alternative.c        |  3 +++
 arch/x86/mm/init_64.c                | 39 ++++++++++++++++++++++++++++
 init/main.c                          |  3 +++
 5 files changed, 50 insertions(+)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 40616e805292..e8f630d9a2ed 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -1021,6 +1021,9 @@ static inline void __meminit init_trampoline_default(void)
 	/* Default trampoline pgd value */
 	trampoline_pgd_entry = init_top_pgt[pgd_index(__PAGE_OFFSET)];
 }
+
+void __init poking_init(void);
+
 # ifdef CONFIG_RANDOMIZE_MEMORY
 void __meminit init_trampoline(void);
 # else
diff --git a/arch/x86/include/asm/text-patching.h b/arch/x86/include/asm/text-patching.h
index 5a2600370763..e5716ef9a721 100644
--- a/arch/x86/include/asm/text-patching.h
+++ b/arch/x86/include/asm/text-patching.h
@@ -39,5 +39,7 @@ extern int text_poke_kgdb(void *addr, const void *opcode, size_t len);
 extern int poke_int3_handler(struct pt_regs *regs);
 extern void *text_poke_bp(void *addr, const void *opcode, size_t len, void *handler);
 extern int after_bootmem;
+extern __ro_after_init struct mm_struct *poking_mm;
+extern __ro_after_init unsigned long poking_addr;
 
 #endif /* _ASM_X86_TEXT_PATCHING_H */
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index ebe9210dc92e..d3ae5c26e5a0 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -678,6 +678,9 @@ void *__init_or_module text_poke_early(void *addr, const void *opcode,
 	return addr;
 }
 
+__ro_after_init struct mm_struct *poking_mm;
+__ro_after_init unsigned long poking_addr;
+
 static int __text_poke(void *addr, const void *opcode, size_t len)
 {
 	unsigned long flags;
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 5fab264948c2..56d56d77aa66 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -53,6 +53,7 @@
 #include <asm/init.h>
 #include <asm/uv/uv.h>
 #include <asm/setup.h>
+#include <asm/text-patching.h>
 
 #include "mm_internal.h"
 
@@ -1388,6 +1389,44 @@ unsigned long memory_block_size_bytes(void)
 	return memory_block_size_probed;
 }
 
+/*
+ * Initialize an mm_struct to be used during poking and a pointer to be used
+ * during patching. If anything fails during initialization, poking will be done
+ * using the fixmap, which is unsafe, so warn the user about it.
+ */
+void __init poking_init(void)
+{
+	spinlock_t *ptl;
+	pte_t *ptep;
+
+	poking_mm = copy_init_mm();
+	if (!poking_mm) {
+		pr_err("x86/mm: error setting a separate poking address space");
+		return;
+	}
+
+	/*
+	 * Randomize the poking address, but make sure that the following page
+	 * will be mapped at the same PMD. We need 2 pages, so find space for 3,
+	 * and adjust the address if the PMD ends after the first one.
+	 */
+	poking_addr = TASK_UNMAPPED_BASE +
+		(kaslr_get_random_long("Poking") & PAGE_MASK) %
+		(TASK_SIZE - TASK_UNMAPPED_BASE - 3 * PAGE_SIZE);
+
+	if (((poking_addr + PAGE_SIZE) & ~PMD_MASK) == 0)
+		poking_addr += PAGE_SIZE;
+
+	/*
+	 * We need to trigger the allocation of the page-tables that will be
+	 * needed for poking now. Later, poking may be performed in an atomic
+	 * section, which might cause allocation to fail.
+	 */
+	ptep = get_locked_pte(poking_mm, poking_addr, &ptl);
+	if (!WARN_ON(!ptep))
+		pte_unmap_unlock(ptep, ptl);
+}
+
 #ifdef CONFIG_SPARSEMEM_VMEMMAP
 /*
  * Initialise the sparsemem vmemmap using huge-pages at the PMD level.
diff --git a/init/main.c b/init/main.c
index ee147103ba1b..a461150adfb1 100644
--- a/init/main.c
+++ b/init/main.c
@@ -497,6 +497,8 @@ void __init __weak thread_stack_cache_init(void)
 
 void __init __weak mem_encrypt_init(void) { }
 
+void __init __weak poking_init(void) { }
+
 bool initcall_debug;
 core_param(initcall_debug, initcall_debug, bool, 0644);
 
@@ -731,6 +733,7 @@ asmlinkage __visible void __init start_kernel(void)
 	taskstats_init_early();
 	delayacct_init();
 
+	poking_init();
 	check_bugs();
 
 	acpi_subsystem_init();
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH v4 06/10] x86/alternative: use temporary mm for text poking
  2018-11-10 23:17 [PATCH v4 00/10] x86/alternative: text_poke() fixes Nadav Amit
                   ` (4 preceding siblings ...)
  2018-11-10 23:17 ` [PATCH v4 05/10] x86/alternative: initializing temporary mm for patching Nadav Amit
@ 2018-11-10 23:17 ` Nadav Amit
  2018-11-11 14:59   ` Peter Zijlstra
  2018-11-11 19:11   ` Damian Tometzki
  2018-11-10 23:17 ` [PATCH v4 07/10] x86/kgdb: avoid redundant comparison of code Nadav Amit
                   ` (3 subsequent siblings)
  9 siblings, 2 replies; 29+ messages in thread
From: Nadav Amit @ 2018-11-10 23:17 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, x86, H. Peter Anvin, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Nadav Amit, Andy Lutomirski,
	Kees Cook, Peter Zijlstra, Dave Hansen, Masami Hiramatsu

text_poke() can potentially compromise security, as it sets temporary
PTEs in the fixmap. These PTEs might be used to rewrite the kernel code
from other cores, accidentally or maliciously, if an attacker gains the
ability to write into kernel memory.

Moreover, since remote TLBs are not flushed after the temporary PTEs are
removed, the time-window in which the code is writable is not limited if
the fixmap PTEs - maliciously or accidentally - are cached in the TLB.
To address these potential security hazards, we use a temporary mm for
patching the code.

Finally, text_poke() is also not conservative enough when mapping pages,
as it always tries to map 2 pages, even when a single one is sufficient.
So try to be more conservative, and do not map more than needed.

Cc: Andy Lutomirski <luto@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
---
 arch/x86/include/asm/fixmap.h |   2 -
 arch/x86/kernel/alternative.c | 112 +++++++++++++++++++++++++++-------
 2 files changed, 89 insertions(+), 25 deletions(-)

diff --git a/arch/x86/include/asm/fixmap.h b/arch/x86/include/asm/fixmap.h
index 50ba74a34a37..9da8cccdf3fb 100644
--- a/arch/x86/include/asm/fixmap.h
+++ b/arch/x86/include/asm/fixmap.h
@@ -103,8 +103,6 @@ enum fixed_addresses {
 #ifdef CONFIG_PARAVIRT
 	FIX_PARAVIRT_BOOTMAP,
 #endif
-	FIX_TEXT_POKE1,	/* reserve 2 pages for text_poke() */
-	FIX_TEXT_POKE0, /* first page is last, because allocation is backward */
 #ifdef	CONFIG_X86_INTEL_MID
 	FIX_LNW_VRTC,
 #endif
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index d3ae5c26e5a0..96607ef285c3 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -11,6 +11,7 @@
 #include <linux/stop_machine.h>
 #include <linux/slab.h>
 #include <linux/kdebug.h>
+#include <linux/mmu_context.h>
 #include <asm/text-patching.h>
 #include <asm/alternative.h>
 #include <asm/sections.h>
@@ -683,43 +684,108 @@ __ro_after_init unsigned long poking_addr;
 
 static int __text_poke(void *addr, const void *opcode, size_t len)
 {
+	bool cross_page_boundary = offset_in_page(addr) + len > PAGE_SIZE;
+	temporary_mm_state_t prev;
+	struct page *pages[2] = {NULL};
 	unsigned long flags;
-	char *vaddr;
-	struct page *pages[2];
-	int i, r = 0;
+	pte_t pte, *ptep;
+	spinlock_t *ptl;
+	int r = 0;
 
 	/*
-	 * While boot memory allocator is runnig we cannot use struct
-	 * pages as they are not yet initialized.
+	 * While boot memory allocator is running we cannot use struct pages as
+	 * they are not yet initialized.
 	 */
 	BUG_ON(!after_bootmem);
 
 	if (!core_kernel_text((unsigned long)addr)) {
 		pages[0] = vmalloc_to_page(addr);
-		pages[1] = vmalloc_to_page(addr + PAGE_SIZE);
+		if (cross_page_boundary)
+			pages[1] = vmalloc_to_page(addr + PAGE_SIZE);
 	} else {
 		pages[0] = virt_to_page(addr);
 		WARN_ON(!PageReserved(pages[0]));
-		pages[1] = virt_to_page(addr + PAGE_SIZE);
+		if (cross_page_boundary)
+			pages[1] = virt_to_page(addr + PAGE_SIZE);
 	}
-	if (!pages[0])
+
+	if (!pages[0] || (cross_page_boundary && !pages[1]))
 		return -EFAULT;
+
 	local_irq_save(flags);
-	set_fixmap(FIX_TEXT_POKE0, page_to_phys(pages[0]));
-	if (pages[1])
-		set_fixmap(FIX_TEXT_POKE1, page_to_phys(pages[1]));
-	vaddr = (char *)fix_to_virt(FIX_TEXT_POKE0);
-	memcpy(&vaddr[(unsigned long)addr & ~PAGE_MASK], opcode, len);
-	clear_fixmap(FIX_TEXT_POKE0);
-	if (pages[1])
-		clear_fixmap(FIX_TEXT_POKE1);
-	local_flush_tlb();
-	sync_core();
-	/* Could also do a CLFLUSH here to speed up CPU recovery; but
-	   that causes hangs on some VIA CPUs. */
-	for (i = 0; i < len; i++)
-		if (((char *)addr)[i] != ((char *)opcode)[i])
-			r = -EFAULT;
+
+	/*
+	 * The lock is not really needed, but this allows to avoid open-coding.
+	 */
+	ptep = get_locked_pte(poking_mm, poking_addr, &ptl);
+
+	/*
+	 * If we failed to allocate a PTE, fail. This should *never* happen,
+	 * since we preallocate the PTE.
+	 */
+	if (WARN_ON_ONCE(!ptep))
+		goto out;
+
+	pte = mk_pte(pages[0], PAGE_KERNEL);
+	set_pte_at(poking_mm, poking_addr, ptep, pte);
+
+	if (cross_page_boundary) {
+		pte = mk_pte(pages[1], PAGE_KERNEL);
+		set_pte_at(poking_mm, poking_addr + PAGE_SIZE, ptep + 1, pte);
+	}
+
+	/*
+	 * Loading the temporary mm behaves as a compiler barrier, which
+	 * guarantees that the PTE will be set at the time memcpy() is done.
+	 */
+	prev = use_temporary_mm(poking_mm);
+
+	kasan_disable_current();
+	memcpy((u8 *)poking_addr + offset_in_page(addr), opcode, len);
+	kasan_enable_current();
+
+	/*
+	 * Ensure that the PTE is only cleared after the instructions of memcpy
+	 * were issued by using a compiler barrier.
+	 */
+	barrier();
+
+	pte_clear(poking_mm, poking_addr, ptep);
+
+	/*
+	 * __flush_tlb_one_user() performs a redundant TLB flush when PTI is on,
+	 * as it also flushes the corresponding "user" address spaces, which
+	 * does not exist.
+	 *
+	 * Poking, however, is already very inefficient since it does not try to
+	 * batch updates, so we ignore this problem for the time being.
+	 *
+	 * Since the PTEs do not exist in other kernel address-spaces, we do
+	 * not use __flush_tlb_one_kernel(), which when PTI is on would cause
+	 * more unwarranted TLB flushes.
+	 *
+	 * There is a slight anomaly here: the PTE is a supervisor-only and
+	 * (potentially) global and we use __flush_tlb_one_user() but this
+	 * should be fine.
+	 */
+	__flush_tlb_one_user(poking_addr);
+	if (cross_page_boundary) {
+		pte_clear(poking_mm, poking_addr + PAGE_SIZE, ptep + 1);
+		__flush_tlb_one_user(poking_addr + PAGE_SIZE);
+	}
+
+	/*
+	 * Loading the previous page-table hierarchy requires a serializing
+	 * instruction that already allows the core to see the updated version.
+	 * Xen-PV is assumed to serialize execution in a similar manner.
+	 */
+	unuse_temporary_mm(prev);
+
+	pte_unmap_unlock(ptep, ptl);
+out:
+	if (memcmp(addr, opcode, len))
+		r = -EFAULT;
+
 	local_irq_restore(flags);
 	return r;
 }
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH v4 07/10] x86/kgdb: avoid redundant comparison of code
  2018-11-10 23:17 [PATCH v4 00/10] x86/alternative: text_poke() fixes Nadav Amit
                   ` (5 preceding siblings ...)
  2018-11-10 23:17 ` [PATCH v4 06/10] x86/alternative: use temporary mm for text poking Nadav Amit
@ 2018-11-10 23:17 ` Nadav Amit
  2018-11-10 23:17 ` [PATCH v4 08/10] x86: avoid W^X being broken during modules loading Nadav Amit
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 29+ messages in thread
From: Nadav Amit @ 2018-11-10 23:17 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, x86, H. Peter Anvin, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Nadav Amit

text_poke() already ensures that the written value is the correct one
and fails if that is not the case. There is no need for an additional
comparison. Remove it.

Signed-off-by: Nadav Amit <namit@vmware.com>
---
 arch/x86/kernel/kgdb.c | 10 ----------
 1 file changed, 10 deletions(-)

diff --git a/arch/x86/kernel/kgdb.c b/arch/x86/kernel/kgdb.c
index 8091b2e381d4..d14e1be576fd 100644
--- a/arch/x86/kernel/kgdb.c
+++ b/arch/x86/kernel/kgdb.c
@@ -751,7 +751,6 @@ void kgdb_arch_set_pc(struct pt_regs *regs, unsigned long ip)
 int kgdb_arch_set_breakpoint(struct kgdb_bkpt *bpt)
 {
 	int err;
-	char opc[BREAK_INSTR_SIZE];
 
 	bpt->type = BP_BREAKPOINT;
 	err = probe_kernel_read(bpt->saved_instr, (char *)bpt->bpt_addr,
@@ -772,11 +771,6 @@ int kgdb_arch_set_breakpoint(struct kgdb_bkpt *bpt)
 			     BREAK_INSTR_SIZE);
 	if (err)
 		return err;
-	err = probe_kernel_read(opc, (char *)bpt->bpt_addr, BREAK_INSTR_SIZE);
-	if (err)
-		return err;
-	if (memcmp(opc, arch_kgdb_ops.gdb_bpt_instr, BREAK_INSTR_SIZE))
-		return -EINVAL;
 	bpt->type = BP_POKE_BREAKPOINT;
 
 	return err;
@@ -785,7 +779,6 @@ int kgdb_arch_set_breakpoint(struct kgdb_bkpt *bpt)
 int kgdb_arch_remove_breakpoint(struct kgdb_bkpt *bpt)
 {
 	int err;
-	char opc[BREAK_INSTR_SIZE];
 
 	if (bpt->type != BP_POKE_BREAKPOINT)
 		goto knl_write;
@@ -798,9 +791,6 @@ int kgdb_arch_remove_breakpoint(struct kgdb_bkpt *bpt)
 	err = text_poke_kgdb((void *)bpt->bpt_addr, bpt->saved_instr,
 			     BREAK_INSTR_SIZE);
 	if (err)
-		return err;
-	err = probe_kernel_read(opc, (char *)bpt->bpt_addr, BREAK_INSTR_SIZE);
-	if (err || memcmp(opc, bpt->saved_instr, BREAK_INSTR_SIZE))
 		goto knl_write;
 	return err;
 
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH v4 08/10] x86: avoid W^X being broken during modules loading
  2018-11-10 23:17 [PATCH v4 00/10] x86/alternative: text_poke() fixes Nadav Amit
                   ` (6 preceding siblings ...)
  2018-11-10 23:17 ` [PATCH v4 07/10] x86/kgdb: avoid redundant comparison of code Nadav Amit
@ 2018-11-10 23:17 ` Nadav Amit
  2018-11-10 23:17 ` [PATCH v4 09/10] x86/jump-label: remove support for custom poker Nadav Amit
  2018-11-10 23:17 ` [PATCH v4 10/10] x86/alternative: remove the return value of text_poke_*() Nadav Amit
  9 siblings, 0 replies; 29+ messages in thread
From: Nadav Amit @ 2018-11-10 23:17 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, x86, H. Peter Anvin, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Nadav Amit, Andy Lutomirski,
	Kees Cook, Peter Zijlstra, Dave Hansen, Masami Hiramatsu

When modules and BPF filters are loaded, there is a time window in
which some memory is both writable and executable. An attacker that has
already found another vulnerability (e.g., a dangling pointer) might be
able to exploit this behavior to overwrite kernel code. This patch
prevents having writable executable PTEs in this stage.

In addition, avoiding having R+X mappings can also slightly simplify
the patching of module code on initialization (e.g., by alternatives and
static-keys), as would be done in the next patch.

To avoid having W+X mappings, set them initially as RW(+NX), and only
after they are set as RO, set them as X as well. Setting them as
executable is done as a separate step to avoid a situation in which one
core still has the old PTE cached (and hence writable) while another
core already sees the updated PTE (executable), which would break the
W^X protection.
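
The resulting lifecycle for module (and BPF) text, as a sketch (function
names are the ones this patch touches; "npages" is a placeholder):

        /* Allocation: writable but not executable (PAGE_KERNEL, RW+NX) */
        p = module_alloc(size);
        /* ... relocations, alternatives and jump-labels patch the code ... */

        /* Make it read-only first, and only then executable */
        set_memory_ro((unsigned long)p, npages);
        set_memory_x((unsigned long)p, npages);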

Cc: Andy Lutomirski <luto@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Suggested-by: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Nadav Amit <namit@vmware.com>
---
 arch/x86/kernel/alternative.c | 28 +++++++++++++++++++++-------
 arch/x86/kernel/module.c      |  2 +-
 include/linux/filter.h        |  6 ++++++
 kernel/module.c               | 10 ++++++++++
 4 files changed, 38 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 96607ef285c3..70827332da0f 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -667,15 +667,29 @@ void __init alternative_instructions(void)
  * handlers seeing an inconsistent instruction while you patch.
  */
 void *__init_or_module text_poke_early(void *addr, const void *opcode,
-					      size_t len)
+				       size_t len)
 {
 	unsigned long flags;
-	local_irq_save(flags);
-	memcpy(addr, opcode, len);
-	local_irq_restore(flags);
-	sync_core();
-	/* Could also do a CLFLUSH here to speed up CPU recovery; but
-	   that causes hangs on some VIA CPUs. */
+
+	if (static_cpu_has(X86_FEATURE_NX) &&
+	    is_module_text_address((unsigned long)addr)) {
+		/*
+		 * Modules text is marked initially as non-executable, so the
+		 * code cannot be running and speculative code-fetches are
+		 * prevented. We can just change the code.
+		 */
+		memcpy(addr, opcode, len);
+	} else {
+		local_irq_save(flags);
+		memcpy(addr, opcode, len);
+		local_irq_restore(flags);
+		sync_core();
+
+		/*
+		 * Could also do a CLFLUSH here to speed up CPU recovery; but
+		 * that causes hangs on some VIA CPUs.
+		 */
+	}
 	return addr;
 }
 
diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
index b052e883dd8c..cfa3106faee4 100644
--- a/arch/x86/kernel/module.c
+++ b/arch/x86/kernel/module.c
@@ -87,7 +87,7 @@ void *module_alloc(unsigned long size)
 	p = __vmalloc_node_range(size, MODULE_ALIGN,
 				    MODULES_VADDR + get_module_load_offset(),
 				    MODULES_END, GFP_KERNEL,
-				    PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
+				    PAGE_KERNEL, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 	if (p && (kasan_module_alloc(p, size) < 0)) {
 		vfree(p);
diff --git a/include/linux/filter.h b/include/linux/filter.h
index de629b706d1d..ee9ae03c5f56 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -704,7 +704,13 @@ static inline void bpf_prog_unlock_ro(struct bpf_prog *fp)
 
 static inline void bpf_jit_binary_lock_ro(struct bpf_binary_header *hdr)
 {
+	/*
+	 * Perform mapping changes in two stages to avoid opening a time-window
+	 * in which a PTE is cached in any TLB as writable, but marked as
+	 * executable in the memory-resident mappings (e.g., page-tables).
+	 */
 	set_memory_ro((unsigned long)hdr, hdr->pages);
+	set_memory_x((unsigned long)hdr, hdr->pages);
 }
 
 static inline void bpf_jit_binary_unlock_ro(struct bpf_binary_header *hdr)
diff --git a/kernel/module.c b/kernel/module.c
index 49a405891587..7cb207249437 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -1946,9 +1946,19 @@ void module_enable_ro(const struct module *mod, bool after_init)
 	if (!rodata_enabled)
 		return;
 
+	/*
+	 * Perform mapping changes in two stages to avoid opening a time-window
+	 * in which a PTE is cached in any TLB as writable, but marked as
+	 * executable in the memory-resident mappings (e.g., page-tables).
+	 */
 	frob_text(&mod->core_layout, set_memory_ro);
+	frob_text(&mod->core_layout, set_memory_x);
+
 	frob_rodata(&mod->core_layout, set_memory_ro);
+
 	frob_text(&mod->init_layout, set_memory_ro);
+	frob_text(&mod->init_layout, set_memory_x);
+
 	frob_rodata(&mod->init_layout, set_memory_ro);
 
 	if (after_init)
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH v4 09/10] x86/jump-label: remove support for custom poker
  2018-11-10 23:17 [PATCH v4 00/10] x86/alternative: text_poke() fixes Nadav Amit
                   ` (7 preceding siblings ...)
  2018-11-10 23:17 ` [PATCH v4 08/10] x86: avoid W^X being broken during modules loading Nadav Amit
@ 2018-11-10 23:17 ` Nadav Amit
  2018-11-11 15:05   ` Peter Zijlstra
  2018-11-10 23:17 ` [PATCH v4 10/10] x86/alternative: remove the return value of text_poke_*() Nadav Amit
  9 siblings, 1 reply; 29+ messages in thread
From: Nadav Amit @ 2018-11-10 23:17 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, x86, H. Peter Anvin, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Nadav Amit, Andy Lutomirski,
	Kees Cook, Peter Zijlstra, Dave Hansen, Masami Hiramatsu

There are only two types of poking: early and breakpoint-based. The use
of a function pointer to perform poking complicates the code and is
probably inefficient due to the use of indirect branches.

Cc: Andy Lutomirski <luto@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
---
 arch/x86/kernel/jump_label.c | 17 +++++++----------
 1 file changed, 7 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kernel/jump_label.c b/arch/x86/kernel/jump_label.c
index ed5fe274a7d8..7947df599e58 100644
--- a/arch/x86/kernel/jump_label.c
+++ b/arch/x86/kernel/jump_label.c
@@ -39,13 +39,13 @@ static void bug_at(unsigned char *ip, int line)
 
 static void __ref __jump_label_transform(struct jump_entry *entry,
 					 enum jump_label_type type,
-					 void *(*poker)(void *, const void *, size_t),
 					 int init)
 {
 	union jump_code_union jmp;
 	const unsigned char default_nop[] = { STATIC_KEY_INIT_NOP };
 	const unsigned char *ideal_nop = ideal_nops[NOP_ATOMIC5];
 	const void *expect, *code;
+	bool early_poking = init;
 	int line;
 
 	jmp.jump = 0xe9;
@@ -58,7 +58,7 @@ static void __ref __jump_label_transform(struct jump_entry *entry,
 	 * SYSTEM_SCHEDULING before going either.
 	 */
 	if (system_state == SYSTEM_BOOTING)
-		poker = text_poke_early;
+		early_poking = true;
 
 	if (type == JUMP_LABEL_JMP) {
 		if (init) {
@@ -82,16 +82,13 @@ static void __ref __jump_label_transform(struct jump_entry *entry,
 		bug_at((void *)jump_entry_code(entry), line);
 
 	/*
-	 * Make text_poke_bp() a default fallback poker.
-	 *
 	 * At the time the change is being done, just ignore whether we
 	 * are doing nop -> jump or jump -> nop transition, and assume
 	 * always nop being the 'currently valid' instruction
-	 *
 	 */
-	if (poker) {
-		(*poker)((void *)jump_entry_code(entry), code,
-			 JUMP_LABEL_NOP_SIZE);
+	if (early_poking) {
+		text_poke_early((void *)jump_entry_code(entry), code,
+				JUMP_LABEL_NOP_SIZE);
 		return;
 	}
 
@@ -103,7 +100,7 @@ void arch_jump_label_transform(struct jump_entry *entry,
 			       enum jump_label_type type)
 {
 	mutex_lock(&text_mutex);
-	__jump_label_transform(entry, type, NULL, 0);
+	__jump_label_transform(entry, type, 0);
 	mutex_unlock(&text_mutex);
 }
 
@@ -133,7 +130,7 @@ __init_or_module void arch_jump_label_transform_static(struct jump_entry *entry,
 			jlstate = JL_STATE_NO_UPDATE;
 	}
 	if (jlstate == JL_STATE_UPDATE)
-		__jump_label_transform(entry, type, text_poke_early, 1);
+		__jump_label_transform(entry, type, 1);
 }
 
 #endif
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH v4 10/10] x86/alternative: remove the return value of text_poke_*()
  2018-11-10 23:17 [PATCH v4 00/10] x86/alternative: text_poke() fixes Nadav Amit
                   ` (8 preceding siblings ...)
  2018-11-10 23:17 ` [PATCH v4 09/10] x86/jump-label: remove support for custom poker Nadav Amit
@ 2018-11-10 23:17 ` Nadav Amit
  9 siblings, 0 replies; 29+ messages in thread
From: Nadav Amit @ 2018-11-10 23:17 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, x86, H. Peter Anvin, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Nadav Amit, Andy Lutomirski,
	Kees Cook, Peter Zijlstra, Dave Hansen, Masami Hiramatsu

The return value of text_poke_early() and text_poke_bp() is useless.
Remove it.

Cc: Andy Lutomirski <luto@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
---
 arch/x86/include/asm/text-patching.h |  4 ++--
 arch/x86/kernel/alternative.c        | 11 ++++-------
 2 files changed, 6 insertions(+), 9 deletions(-)

diff --git a/arch/x86/include/asm/text-patching.h b/arch/x86/include/asm/text-patching.h
index e5716ef9a721..a7234cd435d2 100644
--- a/arch/x86/include/asm/text-patching.h
+++ b/arch/x86/include/asm/text-patching.h
@@ -18,7 +18,7 @@ static inline void apply_paravirt(struct paravirt_patch_site *start,
 #define __parainstructions_end	NULL
 #endif
 
-extern void *text_poke_early(void *addr, const void *opcode, size_t len);
+extern void text_poke_early(void *addr, const void *opcode, size_t len);
 
 /*
  * Clear and restore the kernel write-protection flag on the local CPU.
@@ -37,7 +37,7 @@ extern void *text_poke_early(void *addr, const void *opcode, size_t len);
 extern int text_poke(void *addr, const void *opcode, size_t len);
 extern int text_poke_kgdb(void *addr, const void *opcode, size_t len);
 extern int poke_int3_handler(struct pt_regs *regs);
-extern void *text_poke_bp(void *addr, const void *opcode, size_t len, void *handler);
+extern void text_poke_bp(void *addr, const void *opcode, size_t len, void *handler);
 extern int after_bootmem;
 extern __ro_after_init struct mm_struct *poking_mm;
 extern __ro_after_init unsigned long poking_addr;
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 70827332da0f..ab0278c7ecfa 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -264,7 +264,7 @@ static void __init_or_module add_nops(void *insns, unsigned int len)
 
 extern struct alt_instr __alt_instructions[], __alt_instructions_end[];
 extern s32 __smp_locks[], __smp_locks_end[];
-void *text_poke_early(void *addr, const void *opcode, size_t len);
+void text_poke_early(void *addr, const void *opcode, size_t len);
 
 /*
  * Are we looking at a near JMP with a 1 or 4-byte displacement.
@@ -666,8 +666,8 @@ void __init alternative_instructions(void)
  * instructions. And on the local CPU you need to be protected again NMI or MCE
  * handlers seeing an inconsistent instruction while you patch.
  */
-void *__init_or_module text_poke_early(void *addr, const void *opcode,
-				       size_t len)
+void __init_or_module text_poke_early(void *addr, const void *opcode,
+				      size_t len)
 {
 	unsigned long flags;
 
@@ -690,7 +690,6 @@ void *__init_or_module text_poke_early(void *addr, const void *opcode,
 		 * that causes hangs on some VIA CPUs.
 		 */
 	}
-	return addr;
 }
 
 __ro_after_init struct mm_struct *poking_mm;
@@ -906,7 +905,7 @@ int poke_int3_handler(struct pt_regs *regs)
  *	  replacing opcode
  *	- sync cores
  */
-void *text_poke_bp(void *addr, const void *opcode, size_t len, void *handler)
+void text_poke_bp(void *addr, const void *opcode, size_t len, void *handler)
 {
 	unsigned char int3 = 0xcc;
 
@@ -948,7 +947,5 @@ void *text_poke_bp(void *addr, const void *opcode, size_t len, void *handler)
 	 * the writing of the new instruction.
 	 */
 	bp_patching_in_progress = false;
-
-	return addr;
 }
 
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* Re: [PATCH v4 05/10] x86/alternative: initializing temporary mm for patching
  2018-11-10 23:17 ` [PATCH v4 05/10] x86/alternative: initializing temporary mm for patching Nadav Amit
@ 2018-11-11 14:43   ` Peter Zijlstra
  2018-11-11 20:38     ` Nadav Amit
  0 siblings, 1 reply; 29+ messages in thread
From: Peter Zijlstra @ 2018-11-11 14:43 UTC (permalink / raw)
  To: Nadav Amit
  Cc: Ingo Molnar, linux-kernel, x86, H. Peter Anvin, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Kees Cook, Dave Hansen


I don't seem to have gotten patches 0-2,7 for some reason; I'll try and
dig them out of the LKML folder.

On Sat, Nov 10, 2018 at 03:17:27PM -0800, Nadav Amit wrote:
> +void __init poking_init(void)
> +{
> +	spinlock_t *ptl;
> +	pte_t *ptep;
> +
> +	poking_mm = copy_init_mm();
> +	if (!poking_mm) {
> +		pr_err("x86/mm: error setting a separate poking address space");
> +		return;
> +	}
> +
> +	/*
> +	 * Randomize the poking address, but make sure that the following page
> +	 * will be mapped at the same PMD. We need 2 pages, so find space for 3,
> +	 * and adjust the address if the PMD ends after the first one.
> +	 */
> +	poking_addr = TASK_UNMAPPED_BASE +
> +		(kaslr_get_random_long("Poking") & PAGE_MASK) %
> +		(TASK_SIZE - TASK_UNMAPPED_BASE - 3 * PAGE_SIZE);
> +
> +	if (((poking_addr + PAGE_SIZE) & ~PMD_MASK) == 0)
> +		poking_addr += PAGE_SIZE;
> +
> +	/*
> +	 * We need to trigger the allocation of the page-tables that will be
> +	 * needed for poking now. Later, poking may be performed in an atomic
> +	 * section, which might cause allocation to fail.
> +	 */
> +	ptep = get_locked_pte(poking_mm, poking_addr, &ptl);
> +	if (!WARN_ON(!ptep))
> +		pte_unmap_unlock(ptep, ptl);
> +}

The difference in how we deal with -ENOMEM here is weird. I think we
have a _lot_ of code that simply hard assumes we don't fail memory alloc
on init.

I for instance would not mind to simply remove both branches and let the
kernel crash and burn if we ever fail here.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v4 06/10] x86/alternative: use temporary mm for text poking
  2018-11-10 23:17 ` [PATCH v4 06/10] x86/alternative: use temporary mm for text poking Nadav Amit
@ 2018-11-11 14:59   ` Peter Zijlstra
  2018-11-11 20:53     ` Nadav Amit
  2018-11-11 19:11   ` Damian Tometzki
  1 sibling, 1 reply; 29+ messages in thread
From: Peter Zijlstra @ 2018-11-11 14:59 UTC (permalink / raw)
  To: Nadav Amit
  Cc: Ingo Molnar, linux-kernel, x86, H. Peter Anvin, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Andy Lutomirski, Kees Cook,
	Dave Hansen, Masami Hiramatsu

On Sat, Nov 10, 2018 at 03:17:28PM -0800, Nadav Amit wrote:
> @@ -683,43 +684,108 @@ __ro_after_init unsigned long poking_addr;
>  
>  static int __text_poke(void *addr, const void *opcode, size_t len)
>  {
> +	bool cross_page_boundary = offset_in_page(addr) + len > PAGE_SIZE;
> +	temporary_mm_state_t prev;
> +	struct page *pages[2] = {NULL};
>  	unsigned long flags;
> +	pte_t pte, *ptep;
> +	spinlock_t *ptl;
> +	int r = 0;
>  
>  	/*
> +	 * While boot memory allocator is running we cannot use struct pages as
> +	 * they are not yet initialized.
>  	 */
>  	BUG_ON(!after_bootmem);
>  
>  	if (!core_kernel_text((unsigned long)addr)) {
>  		pages[0] = vmalloc_to_page(addr);
> +		if (cross_page_boundary)
> +			pages[1] = vmalloc_to_page(addr + PAGE_SIZE);
>  	} else {
>  		pages[0] = virt_to_page(addr);
>  		WARN_ON(!PageReserved(pages[0]));
> +		if (cross_page_boundary)
> +			pages[1] = virt_to_page(addr + PAGE_SIZE);
>  	}
> +
> +	if (!pages[0] || (cross_page_boundary && !pages[1]))
>  		return -EFAULT;
> +
>  	local_irq_save(flags);
> +
> +	/*
> +	 * The lock is not really needed, but this allows to avoid open-coding.
> +	 */
> +	ptep = get_locked_pte(poking_mm, poking_addr, &ptl);
> +
> +	/*
> +	 * If we failed to allocate a PTE, fail. This should *never* happen,
> +	 * since we preallocate the PTE.
> +	 */
> +	if (WARN_ON_ONCE(!ptep))
> +		goto out;

Since we hard rely on init getting that right, can't we simply get rid
of this?

> +
> +	pte = mk_pte(pages[0], PAGE_KERNEL);
> +	set_pte_at(poking_mm, poking_addr, ptep, pte);
> +
> +	if (cross_page_boundary) {
> +		pte = mk_pte(pages[1], PAGE_KERNEL);
> +		set_pte_at(poking_mm, poking_addr + PAGE_SIZE, ptep + 1, pte);
> +	}
> +
> +	/*
> +	 * Loading the temporary mm behaves as a compiler barrier, which
> +	 * guarantees that the PTE will be set at the time memcpy() is done.
> +	 */
> +	prev = use_temporary_mm(poking_mm);
> +
> +	kasan_disable_current();
> +	memcpy((u8 *)poking_addr + offset_in_page(addr), opcode, len);
> +	kasan_enable_current();
> +
> +	/*
> +	 * Ensure that the PTE is only cleared after the instructions of memcpy
> +	 * were issued by using a compiler barrier.
> +	 */
> +	barrier();
> +
> +	pte_clear(poking_mm, poking_addr, ptep);
> +
> +	/*
> +	 * __flush_tlb_one_user() performs a redundant TLB flush when PTI is on,
> +	 * as it also flushes the corresponding "user" address spaces, which
> +	 * does not exist.
> +	 *
> +	 * Poking, however, is already very inefficient since it does not try to
> +	 * batch updates, so we ignore this problem for the time being.
> +	 *
> +	 * Since the PTEs do not exist in other kernel address-spaces, we do
> +	 * not use __flush_tlb_one_kernel(), which when PTI is on would cause
> +	 * more unwarranted TLB flushes.
> +	 *
> +	 * There is a slight anomaly here: the PTE is a supervisor-only and
> +	 * (potentially) global and we use __flush_tlb_one_user() but this
> +	 * should be fine.
> +	 */
> +	__flush_tlb_one_user(poking_addr);
> +	if (cross_page_boundary) {
> +		pte_clear(poking_mm, poking_addr + PAGE_SIZE, ptep + 1);
> +		__flush_tlb_one_user(poking_addr + PAGE_SIZE);
> +	}
> +
> +	/*
> +	 * Loading the previous page-table hierarchy requires a serializing
> +	 * instruction that already allows the core to see the updated version.
> +	 * Xen-PV is assumed to serialize execution in a similar manner.
> +	 */
> +	unuse_temporary_mm(prev);
> +
> +	pte_unmap_unlock(ptep, ptl);
> +out:
> +	if (memcmp(addr, opcode, len))
> +		r = -EFAULT;

How could this ever fail? And how can we reliably recover from that?

I mean, we can move that BUG_ON() we have in text_poke() down a level,
but for example the static_key/jump_label code has no real option on
failing this.

> +
>  	local_irq_restore(flags);
>  	return r;
>  }

Other than that, this looks really good!

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v4 09/10] x86/jump-label: remove support for custom poker
  2018-11-10 23:17 ` [PATCH v4 09/10] x86/jump-label: remove support for custom poker Nadav Amit
@ 2018-11-11 15:05   ` Peter Zijlstra
  2018-11-11 20:31     ` Nadav Amit
  0 siblings, 1 reply; 29+ messages in thread
From: Peter Zijlstra @ 2018-11-11 15:05 UTC (permalink / raw)
  To: Nadav Amit
  Cc: Ingo Molnar, linux-kernel, x86, H. Peter Anvin, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Andy Lutomirski, Kees Cook,
	Dave Hansen, Masami Hiramatsu

On Sat, Nov 10, 2018 at 03:17:31PM -0800, Nadav Amit wrote:
> There are only two types of poking: early and breakpoint based. The use
> of a function pointer to perform poking complicates the code and is
> probably inefficient due to the use of indirect branches.

Right; we used to have a 3rd way, but that is long gone.

Nice cleanup!

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v4 06/10] x86/alternative: use temporary mm for text poking
  2018-11-10 23:17 ` [PATCH v4 06/10] x86/alternative: use temporary mm for text poking Nadav Amit
  2018-11-11 14:59   ` Peter Zijlstra
@ 2018-11-11 19:11   ` Damian Tometzki
  2018-11-11 20:41     ` Nadav Amit
  1 sibling, 1 reply; 29+ messages in thread
From: Damian Tometzki @ 2018-11-11 19:11 UTC (permalink / raw)
  To: Nadav Amit
  Cc: Ingo Molnar, linux-kernel, x86, H. Peter Anvin, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Andy Lutomirski, Kees Cook,
	Peter Zijlstra, Dave Hansen, Masami Hiramatsu

On Sa, 10. Nov 15:17, Nadav Amit wrote:
> text_poke() can potentially compromise the security as it sets temporary
> PTEs in the fixmap. These PTEs might be used to rewrite the kernel code
> from other cores accidentally or maliciously, if an attacker gains the
> ability to write onto kernel memory.
> 
> Moreover, since remote TLBs are not flushed after the temporary PTEs are
> removed, the time-window in which the code is writable is not limited if
> the fixmap PTEs - maliciously or accidentally - are cached in the TLB.
> To address these potential security hazards, we use a temporary mm for
> patching the code.
> 
> Finally, text_poke() is also not conservative enough when mapping pages,
> as it always tries to map 2 pages, even when a single one is sufficient.
> So try to be more conservative, and do not map more than needed.
> 
> Cc: Andy Lutomirski <luto@kernel.org>
> Cc: Kees Cook <keescook@chromium.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Dave Hansen <dave.hansen@intel.com>
> Cc: Masami Hiramatsu <mhiramat@kernel.org>
> Signed-off-by: Nadav Amit <namit@vmware.com>
> ---
>  arch/x86/include/asm/fixmap.h |   2 -
>  arch/x86/kernel/alternative.c | 112 +++++++++++++++++++++++++++-------
>  2 files changed, 89 insertions(+), 25 deletions(-)
> 
> diff --git a/arch/x86/include/asm/fixmap.h b/arch/x86/include/asm/fixmap.h
> index 50ba74a34a37..9da8cccdf3fb 100644
> --- a/arch/x86/include/asm/fixmap.h
> +++ b/arch/x86/include/asm/fixmap.h
> @@ -103,8 +103,6 @@ enum fixed_addresses {
>  #ifdef CONFIG_PARAVIRT
>  	FIX_PARAVIRT_BOOTMAP,
>  #endif

Hello Nadav,

with the removal of FIX_TEXT_POKE1 and FIX_TEXT_POKE0 I get the following
build error:

/home/damian/kernel/linux/arch/x86/xen/mmu_pv.c:2321:7: error: 'FIX_TEXT_POKE0' undeclared (first use in this function); did you mean 'FIX_TBOOT_BASE'?
  case FIX_TEXT_POKE0:
       ^~~~~~~~~~~~~~
       FIX_TBOOT_BASE
/home/damian/kernel/linux/arch/x86/xen/mmu_pv.c:2321:7: note: each undeclared identifier is reported only once for each function it appears in
/home/damian/kernel/linux/arch/x86/xen/mmu_pv.c:2322:7: error: 'FIX_TEXT_POKE1' undeclared (first use in this function); did you mean 'FIX_TBOOT_BASE'?
  case FIX_TEXT_POKE1:
       ^~~~~~~~~~~~~~
       FIX_TBOOT_BASE

Best regards
Damian

> -	FIX_TEXT_POKE1,	/* reserve 2 pages for text_poke() */
> -	FIX_TEXT_POKE0, /* first page is last, because allocation is backward */
>  #ifdef	CONFIG_X86_INTEL_MID
>  	FIX_LNW_VRTC,
>  #endif
> diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
> index d3ae5c26e5a0..96607ef285c3 100644
> --- a/arch/x86/kernel/alternative.c
> +++ b/arch/x86/kernel/alternative.c
> @@ -11,6 +11,7 @@
>  #include <linux/stop_machine.h>
>  #include <linux/slab.h>
>  #include <linux/kdebug.h>
> +#include <linux/mmu_context.h>
>  #include <asm/text-patching.h>
>  #include <asm/alternative.h>
>  #include <asm/sections.h>
> @@ -683,43 +684,108 @@ __ro_after_init unsigned long poking_addr;
>  
>  static int __text_poke(void *addr, const void *opcode, size_t len)
>  {
> +	bool cross_page_boundary = offset_in_page(addr) + len > PAGE_SIZE;
> +	temporary_mm_state_t prev;
> +	struct page *pages[2] = {NULL};
>  	unsigned long flags;
> -	char *vaddr;
> -	struct page *pages[2];
> -	int i, r = 0;
> +	pte_t pte, *ptep;
> +	spinlock_t *ptl;
> +	int r = 0;
>  
>  	/*
> -	 * While boot memory allocator is runnig we cannot use struct
> -	 * pages as they are not yet initialized.
> +	 * While boot memory allocator is running we cannot use struct pages as
> +	 * they are not yet initialized.
>  	 */
>  	BUG_ON(!after_bootmem);
>  
>  	if (!core_kernel_text((unsigned long)addr)) {
>  		pages[0] = vmalloc_to_page(addr);
> -		pages[1] = vmalloc_to_page(addr + PAGE_SIZE);
> +		if (cross_page_boundary)
> +			pages[1] = vmalloc_to_page(addr + PAGE_SIZE);
>  	} else {
>  		pages[0] = virt_to_page(addr);
>  		WARN_ON(!PageReserved(pages[0]));
> -		pages[1] = virt_to_page(addr + PAGE_SIZE);
> +		if (cross_page_boundary)
> +			pages[1] = virt_to_page(addr + PAGE_SIZE);
>  	}
> -	if (!pages[0])
> +
> +	if (!pages[0] || (cross_page_boundary && !pages[1]))
>  		return -EFAULT;
> +
>  	local_irq_save(flags);
> -	set_fixmap(FIX_TEXT_POKE0, page_to_phys(pages[0]));
> -	if (pages[1])
> -		set_fixmap(FIX_TEXT_POKE1, page_to_phys(pages[1]));
> -	vaddr = (char *)fix_to_virt(FIX_TEXT_POKE0);
> -	memcpy(&vaddr[(unsigned long)addr & ~PAGE_MASK], opcode, len);
> -	clear_fixmap(FIX_TEXT_POKE0);
> -	if (pages[1])
> -		clear_fixmap(FIX_TEXT_POKE1);
> -	local_flush_tlb();
> -	sync_core();
> -	/* Could also do a CLFLUSH here to speed up CPU recovery; but
> -	   that causes hangs on some VIA CPUs. */
> -	for (i = 0; i < len; i++)
> -		if (((char *)addr)[i] != ((char *)opcode)[i])
> -			r = -EFAULT;
> +
> +	/*
> +	 * The lock is not really needed, but this allows to avoid open-coding.
> +	 */
> +	ptep = get_locked_pte(poking_mm, poking_addr, &ptl);
> +
> +	/*
> +	 * If we failed to allocate a PTE, fail. This should *never* happen,
> +	 * since we preallocate the PTE.
> +	 */
> +	if (WARN_ON_ONCE(!ptep))
> +		goto out;
> +
> +	pte = mk_pte(pages[0], PAGE_KERNEL);
> +	set_pte_at(poking_mm, poking_addr, ptep, pte);
> +
> +	if (cross_page_boundary) {
> +		pte = mk_pte(pages[1], PAGE_KERNEL);
> +		set_pte_at(poking_mm, poking_addr + PAGE_SIZE, ptep + 1, pte);
> +	}
> +
> +	/*
> +	 * Loading the temporary mm behaves as a compiler barrier, which
> +	 * guarantees that the PTE will be set at the time memcpy() is done.
> +	 */
> +	prev = use_temporary_mm(poking_mm);
> +
> +	kasan_disable_current();
> +	memcpy((u8 *)poking_addr + offset_in_page(addr), opcode, len);
> +	kasan_enable_current();
> +
> +	/*
> +	 * Ensure that the PTE is only cleared after the instructions of memcpy
> +	 * were issued by using a compiler barrier.
> +	 */
> +	barrier();
> +
> +	pte_clear(poking_mm, poking_addr, ptep);
> +
> +	/*
> +	 * __flush_tlb_one_user() performs a redundant TLB flush when PTI is on,
> +	 * as it also flushes the corresponding "user" address spaces, which
> +	 * does not exist.
> +	 *
> +	 * Poking, however, is already very inefficient since it does not try to
> +	 * batch updates, so we ignore this problem for the time being.
> +	 *
> +	 * Since the PTEs do not exist in other kernel address-spaces, we do
> +	 * not use __flush_tlb_one_kernel(), which when PTI is on would cause
> +	 * more unwarranted TLB flushes.
> +	 *
> +	 * There is a slight anomaly here: the PTE is a supervisor-only and
> +	 * (potentially) global and we use __flush_tlb_one_user() but this
> +	 * should be fine.
> +	 */
> +	__flush_tlb_one_user(poking_addr);
> +	if (cross_page_boundary) {
> +		pte_clear(poking_mm, poking_addr + PAGE_SIZE, ptep + 1);
> +		__flush_tlb_one_user(poking_addr + PAGE_SIZE);
> +	}
> +
> +	/*
> +	 * Loading the previous page-table hierarchy requires a serializing
> +	 * instruction that already allows the core to see the updated version.
> +	 * Xen-PV is assumed to serialize execution in a similar manner.
> +	 */
> +	unuse_temporary_mm(prev);
> +
> +	pte_unmap_unlock(ptep, ptl);
> +out:
> +	if (memcmp(addr, opcode, len))
> +		r = -EFAULT;
> +
>  	local_irq_restore(flags);
>  	return r;
>  }
> -- 
> 2.17.1
> 

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v4 09/10] x86/jump-label: remove support for custom poker
  2018-11-11 15:05   ` Peter Zijlstra
@ 2018-11-11 20:31     ` Nadav Amit
  0 siblings, 0 replies; 29+ messages in thread
From: Nadav Amit @ 2018-11-11 20:31 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Ingo Molnar, LKML, X86 ML, H. Peter Anvin, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Andy Lutomirski, Kees Cook,
	Dave Hansen, Masami Hiramatsu

From: Peter Zijlstra
Sent: November 11, 2018 at 3:05:53 PM GMT
> To: Nadav Amit <namit@vmware.com>
> Cc: Ingo Molnar <mingo@redhat.com>, linux-kernel@vger.kernel.org, x86@kernel.org, H. Peter Anvin <hpa@zytor.com>, Thomas Gleixner <tglx@linutronix.de>, Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>, Andy Lutomirski <luto@kernel.org>, Kees Cook <keescook@chromium.org>, Dave Hansen <dave.hansen@intel.com>, Masami Hiramatsu <mhiramat@kernel.org>
> Subject: Re: [PATCH v4 09/10] x86/jump-label: remove support for custom poker
> 
> 
> On Sat, Nov 10, 2018 at 03:17:31PM -0800, Nadav Amit wrote:
>> There are only two types of poking: early and breakpoint based. The use
>> of a function pointer to perform poking complicates the code and is
>> probably inefficient due to the use of indirect branches.
> 
> Right; we used to have a 3rd way, but that is long gone.
> 
> Nice cleanup!

Thanks, but I actually should have got rid of the early_poking variable.
I will do it for v5.
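
For illustration, dropping the variable would let the call site collapse to
something like this (sketch only; "init" and "code" are the names used in the
existing jump_label code and are assumptions here, not the actual v5 patch):

	if (init || system_state == SYSTEM_BOOTING)
		text_poke_early((void *)jump_entry_code(entry), code,
				JUMP_LABEL_NOP_SIZE);
	else
		text_poke_bp((void *)jump_entry_code(entry), code,
			     JUMP_LABEL_NOP_SIZE,
			     (void *)jump_entry_code(entry) + JUMP_LABEL_NOP_SIZE);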


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v4 05/10] x86/alternative: initializing temporary mm for patching
  2018-11-11 14:43   ` Peter Zijlstra
@ 2018-11-11 20:38     ` Nadav Amit
  2018-11-12  0:34       ` Peter Zijlstra
  0 siblings, 1 reply; 29+ messages in thread
From: Nadav Amit @ 2018-11-11 20:38 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Ingo Molnar, linux-kernel, x86, H. Peter Anvin, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Kees Cook, Dave Hansen

From: Peter Zijlstra
Sent: November 11, 2018 at 2:43:27 PM GMT
> To: Nadav Amit <namit@vmware.com>
> Cc: Ingo Molnar <mingo@redhat.com>, linux-kernel@vger.kernel.org, x86@kernel.org, H. Peter Anvin <hpa@zytor.com>, Thomas Gleixner <tglx@linutronix.de>, Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>, Kees Cook <keescook@chromium.org>, Dave Hansen <dave.hansen@intel.com>
> Subject: Re: [PATCH v4 05/10] x86/alternative: initializing temporary mm for patching
> 
> 
> 
> I don't seem to have gotten patches 0-2,7 for some reason; I'll try and
> dig them out of the LKML folder.
> 
> On Sat, Nov 10, 2018 at 03:17:27PM -0800, Nadav Amit wrote:
>> +void __init poking_init(void)
>> +{
>> +	spinlock_t *ptl;
>> +	pte_t *ptep;
>> +
>> +	poking_mm = copy_init_mm();
>> +	if (!poking_mm) {
>> +		pr_err("x86/mm: error setting a separate poking address space");
>> +		return;
>> +	}
>> +
>> +	/*
>> +	 * Randomize the poking address, but make sure that the following page
>> +	 * will be mapped at the same PMD. We need 2 pages, so find space for 3,
>> +	 * and adjust the address if the PMD ends after the first one.
>> +	 */
>> +	poking_addr = TASK_UNMAPPED_BASE +
>> +		(kaslr_get_random_long("Poking") & PAGE_MASK) %
>> +		(TASK_SIZE - TASK_UNMAPPED_BASE - 3 * PAGE_SIZE);
>> +
>> +	if (((poking_addr + PAGE_SIZE) & ~PMD_MASK) == 0)
>> +		poking_addr += PAGE_SIZE;
>> +
>> +	/*
>> +	 * We need to trigger the allocation of the page-tables that will be
>> +	 * needed for poking now. Later, poking may be performed in an atomic
>> +	 * section, which might cause allocation to fail.
>> +	 */
>> +	ptep = get_locked_pte(poking_mm, poking_addr, &ptl);
>> +	if (!WARN_ON(!ptep))
>> +		pte_unmap_unlock(ptep, ptl);
>> +}
> 
> The difference in how we deal with -ENOMEM here is weird. I think we
> have a _lot_ of code that simply hard assumes we don't fail memory alloc
> on init.
> 
> I for instance would not mind to simply remove both branches and let the
> kernel crash and burn if we ever fail here.

Actually, now that we removed the fallback of patching without poking_mm, a
failure to allocate poking_mm should have had a BUG_ON().

For the second case, I think we still need either WARN_ON() or BUG_ON(), at
least as some sort of an in-code comment. I’ll change it to BUG_ON() if you
prefer.


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v4 06/10] x86/alternative: use temporary mm for text poking
  2018-11-11 19:11   ` Damian Tometzki
@ 2018-11-11 20:41     ` Nadav Amit
  0 siblings, 0 replies; 29+ messages in thread
From: Nadav Amit @ 2018-11-11 20:41 UTC (permalink / raw)
  To: Damian Tometzki
  Cc: Ingo Molnar, LKML, X86 ML, H. Peter Anvin, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Andy Lutomirski, Kees Cook,
	Peter Zijlstra, Dave Hansen, Masami Hiramatsu

From: Damian Tometzki
Sent: November 11, 2018 at 7:11:42 PM GMT
> To: Nadav Amit <namit@vmware.com>
> Cc: Ingo Molnar <mingo@redhat.com>, linux-kernel@vger.kernel.org>, x86@kernel.org>, H. Peter Anvin <hpa@zytor.com>, Thomas Gleixner <tglx@linutronix.de>, Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>, Andy Lutomirski <luto@kernel.org>, Kees Cook <keescook@chromium.org>, Peter Zijlstra <peterz@infradead.org>, Dave Hansen <dave.hansen@intel.com>, Masami Hiramatsu <mhiramat@kernel.org>
> Subject: Re: [PATCH v4 06/10] x86/alternative: use temporary mm for text poking
> 
> 
> On Sa, 10. Nov 15:17, Nadav Amit wrote:
>> text_poke() can potentially compromise the security as it sets temporary
>> PTEs in the fixmap. These PTEs might be used to rewrite the kernel code
>> from other cores accidentally or maliciously, if an attacker gains the
>> ability to write onto kernel memory.
>> 
>> Moreover, since remote TLBs are not flushed after the temporary PTEs are
>> removed, the time-window in which the code is writable is not limited if
>> the fixmap PTEs - maliciously or accidentally - are cached in the TLB.
>> To address these potential security hazards, we use a temporary mm for
>> patching the code.
>> 
>> Finally, text_poke() is also not conservative enough when mapping pages,
>> as it always tries to map 2 pages, even when a single one is sufficient.
>> So try to be more conservative, and do not map more than needed.
>> 
>> Cc: Andy Lutomirski <luto@kernel.org>
>> Cc: Kees Cook <keescook@chromium.org>
>> Cc: Peter Zijlstra <peterz@infradead.org>
>> Cc: Dave Hansen <dave.hansen@intel.com>
>> Cc: Masami Hiramatsu <mhiramat@kernel.org>
>> Signed-off-by: Nadav Amit <namit@vmware.com>
>> ---
>> arch/x86/include/asm/fixmap.h |   2 -
>> arch/x86/kernel/alternative.c | 112 +++++++++++++++++++++++++++-------
>> 2 files changed, 89 insertions(+), 25 deletions(-)
>> 
>> diff --git a/arch/x86/include/asm/fixmap.h b/arch/x86/include/asm/fixmap.h
>> index 50ba74a34a37..9da8cccdf3fb 100644
>> --- a/arch/x86/include/asm/fixmap.h
>> +++ b/arch/x86/include/asm/fixmap.h
>> @@ -103,8 +103,6 @@ enum fixed_addresses {
>> #ifdef CONFIG_PARAVIRT
>> 	FIX_PARAVIRT_BOOTMAP,
>> #endif
> 
> Hello Nadav,
> 
> with the removal of FIX_TEXT_POKE1 and FIX_TEXT_POKE0 I get the following
> build error:
> 
> /home/damian/kernel/linux/arch/x86/xen/mmu_pv.c:2321:7: error: 'FIX_TEXT_POKE0' undeclared (first use in this function); did you mean 'FIX_TBOOT_BASE'?
>  case FIX_TEXT_POKE0:
>       ^~~~~~~~~~~~~~
>       FIX_TBOOT_BASE
> /home/damian/kernel/linux/arch/x86/xen/mmu_pv.c:2321:7: note: each undeclared identifier is reported only once for each function it appears in
> /home/damian/kernel/linux/arch/x86/xen/mmu_pv.c:2322:7: error: 'FIX_TEXT_POKE1' undeclared (first use in this function); did you mean 'FIX_TBOOT_BASE'?
>  case FIX_TEXT_POKE1:
>       ^~~~~~~~~~~~~~
>       FIX_TBOOT_BASE

Thanks for letting me know. I’ll simply remove them in v5.
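
Presumably the v5 change is just to drop the two labels from the switch in
xen_set_fixmap() in arch/x86/xen/mmu_pv.c, roughly like this (sketch only,
the surrounding context lines are approximate):

 	case FIX_BTMAP_END ... FIX_BTMAP_BEGIN:
-	case FIX_TEXT_POKE0:
-	case FIX_TEXT_POKE1:
 		/* All local page mappings */
 		pte = pfn_pte(phys, prot);
 		break;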

Regards,
Nadav

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v4 06/10] x86/alternative: use temporary mm for text poking
  2018-11-11 14:59   ` Peter Zijlstra
@ 2018-11-11 20:53     ` Nadav Amit
  2018-11-11 23:52       ` Peter Zijlstra
  0 siblings, 1 reply; 29+ messages in thread
From: Nadav Amit @ 2018-11-11 20:53 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Ingo Molnar, LKML, X86 ML, H. Peter Anvin, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Andy Lutomirski, Kees Cook,
	Dave Hansen, Masami Hiramatsu

From: Peter Zijlstra
Sent: November 11, 2018 at 2:59:36 PM GMT
> To: Nadav Amit <namit@vmware.com>
> Cc: Ingo Molnar <mingo@redhat.com>, linux-kernel@vger.kernel.org, x86@kernel.org, H. Peter Anvin <hpa@zytor.com>, Thomas Gleixner <tglx@linutronix.de>, Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>, Andy Lutomirski <luto@kernel.org>, Kees Cook <keescook@chromium.org>, Dave Hansen <dave.hansen@intel.com>, Masami Hiramatsu <mhiramat@kernel.org>
> Subject: Re: [PATCH v4 06/10] x86/alternative: use temporary mm for text poking
> 
> 
> On Sat, Nov 10, 2018 at 03:17:28PM -0800, Nadav Amit wrote:
>> @@ -683,43 +684,108 @@ __ro_after_init unsigned long poking_addr;
>> 
>> static int __text_poke(void *addr, const void *opcode, size_t len)
>> {
>> +	bool cross_page_boundary = offset_in_page(addr) + len > PAGE_SIZE;
>> +	temporary_mm_state_t prev;
>> +	struct page *pages[2] = {NULL};
>> 	unsigned long flags;
>> +	pte_t pte, *ptep;
>> +	spinlock_t *ptl;
>> +	int r = 0;
>> 
>> 	/*
>> +	 * While boot memory allocator is running we cannot use struct pages as
>> +	 * they are not yet initialized.
>> 	 */
>> 	BUG_ON(!after_bootmem);
>> 
>> 	if (!core_kernel_text((unsigned long)addr)) {
>> 		pages[0] = vmalloc_to_page(addr);
>> +		if (cross_page_boundary)
>> +			pages[1] = vmalloc_to_page(addr + PAGE_SIZE);
>> 	} else {
>> 		pages[0] = virt_to_page(addr);
>> 		WARN_ON(!PageReserved(pages[0]));
>> +		if (cross_page_boundary)
>> +			pages[1] = virt_to_page(addr + PAGE_SIZE);
>> 	}
>> +
>> +	if (!pages[0] || (cross_page_boundary && !pages[1]))
>> 		return -EFAULT;
>> +
>> 	local_irq_save(flags);
>> +
>> +	/*
>> +	 * The lock is not really needed, but this allows to avoid open-coding.
>> +	 */
>> +	ptep = get_locked_pte(poking_mm, poking_addr, &ptl);
>> +
>> +	/*
>> +	 * If we failed to allocate a PTE, fail. This should *never* happen,
>> +	 * since we preallocate the PTE.
>> +	 */
>> +	if (WARN_ON_ONCE(!ptep))
>> +		goto out;
> 
> Since we hard rely on init getting that right; can't we simply get rid
> of this?

This is a repeated complaint of yours, which I do not feel comfortable with.
One day someone will run some static analysis tool and start finding that
all these checks are missing.

The question is why do you care about them. If it is because they affect the
generated code and make it less efficient, I can fully understand and perhaps
we should have something like PARANOID_WARN_ON_ONCE() which compiles into nothing
unless a certain debug option is set.
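
Something along these lines, where the config symbol is made up just for the
example:

#ifdef CONFIG_PARANOID_CHECKS			/* hypothetical option */
# define PARANOID_WARN_ON_ONCE(cond)	WARN_ON_ONCE(cond)
#else
# define PARANOID_WARN_ON_ONCE(cond)	false
#endif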

If it is about the way the source code looks - I guess it doesn’t hurt my
eyes as much as some other stuff, and I cannot do much about it (other than
removing it as you asked).

>> +
>> +	pte = mk_pte(pages[0], PAGE_KERNEL);
>> +	set_pte_at(poking_mm, poking_addr, ptep, pte);
>> +
>> +	if (cross_page_boundary) {
>> +		pte = mk_pte(pages[1], PAGE_KERNEL);
>> +		set_pte_at(poking_mm, poking_addr + PAGE_SIZE, ptep + 1, pte);
>> +	}
>> +
>> +	/*
>> +	 * Loading the temporary mm behaves as a compiler barrier, which
>> +	 * guarantees that the PTE will be set at the time memcpy() is done.
>> +	 */
>> +	prev = use_temporary_mm(poking_mm);
>> +
>> +	kasan_disable_current();
>> +	memcpy((u8 *)poking_addr + offset_in_page(addr), opcode, len);
>> +	kasan_enable_current();
>> +
>> +	/*
>> +	 * Ensure that the PTE is only cleared after the instructions of memcpy
>> +	 * were issued by using a compiler barrier.
>> +	 */
>> +	barrier();
>> +
>> +	pte_clear(poking_mm, poking_addr, ptep);
>> +
>> +	/*
>> +	 * __flush_tlb_one_user() performs a redundant TLB flush when PTI is on,
>> +	 * as it also flushes the corresponding "user" address spaces, which
>> +	 * does not exist.
>> +	 *
>> +	 * Poking, however, is already very inefficient since it does not try to
>> +	 * batch updates, so we ignore this problem for the time being.
>> +	 *
>> +	 * Since the PTEs do not exist in other kernel address-spaces, we do
>> +	 * not use __flush_tlb_one_kernel(), which when PTI is on would cause
>> +	 * more unwarranted TLB flushes.
>> +	 *
>> +	 * There is a slight anomaly here: the PTE is a supervisor-only and
>> +	 * (potentially) global and we use __flush_tlb_one_user() but this
>> +	 * should be fine.
>> +	 */
>> +	__flush_tlb_one_user(poking_addr);
>> +	if (cross_page_boundary) {
>> +		pte_clear(poking_mm, poking_addr + PAGE_SIZE, ptep + 1);
>> +		__flush_tlb_one_user(poking_addr + PAGE_SIZE);
>> +	}
>> +
>> +	/*
>> +	 * Loading the previous page-table hierarchy requires a serializing
>> +	 * instruction that already allows the core to see the updated version.
>> +	 * Xen-PV is assumed to serialize execution in a similar manner.
>> +	 */
>> +	unuse_temporary_mm(prev);
>> +
>> +	pte_unmap_unlock(ptep, ptl);
>> +out:
>> +	if (memcmp(addr, opcode, len))
>> +		r = -EFAULT;
> 
> How could this ever fail? And how can we reliably recover from that?

This code has been there before (with slightly uglier code). Before this
patch, a BUG_ON() was used here. However, I noticed that kgdb actually
checks that text_poke() succeeded after calling it and gracefully fails.
However, this was useless, since text_poke() would panic before kgdb gets
the chance to do anything (see patch 7).

> I mean, we can move that BUG_ON() we have in text_poke() down a level,
> but for example the static_key/jump_label code has no real option on
> failing this.
> 
>> +
>> 	local_irq_restore(flags);
>> 	return r;
>> }
> 
> Other than that, this looks really good!



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v4 06/10] x86/alternative: use temporary mm for text poking
  2018-11-11 20:53     ` Nadav Amit
@ 2018-11-11 23:52       ` Peter Zijlstra
  2018-11-12  0:09         ` Nadav Amit
                           ` (2 more replies)
  0 siblings, 3 replies; 29+ messages in thread
From: Peter Zijlstra @ 2018-11-11 23:52 UTC (permalink / raw)
  To: Nadav Amit
  Cc: Ingo Molnar, LKML, X86 ML, H. Peter Anvin, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Andy Lutomirski, Kees Cook,
	Dave Hansen, Masami Hiramatsu

On Sun, Nov 11, 2018 at 08:53:07PM +0000, Nadav Amit wrote:

> >> +	/*
> >> +	 * The lock is not really needed, but this allows to avoid open-coding.
> >> +	 */
> >> +	ptep = get_locked_pte(poking_mm, poking_addr, &ptl);
> >> +
> >> +	/*
> >> +	 * If we failed to allocate a PTE, fail. This should *never* happen,
> >> +	 * since we preallocate the PTE.
> >> +	 */
> >> +	if (WARN_ON_ONCE(!ptep))
> >> +		goto out;
> > 
> > Since we hard rely on init getting that right; can't we simply get rid
> > of this?
> 
> This is a repeated complaint of yours, which I do not feel comfortable with.
> One day someone will run some static analysis tool and start finding that
> all these checks are missing.
> 
> The question is why do you care about them.

Mostly because they should not be happening, ever. And if they happen,
there really isn't anything sensible we can do about it.

> If it is because they affect the
> generated code and make it less efficient, I can fully understand and perhaps
> we should have something like PARANOID_WARN_ON_ONCE() which compiles into nothing
> unless a certain debug option is set.
> 
> If it is about the way the source code looks - I guess it doesn’t hurt my
> eyes as much as some other stuff, and I cannot do much about it (other than
> removing it as you asked).

And yes on the above two points. It adds both runtime overhead (albeit
trivially small) and code complexity.

> >> +out:
> >> +	if (memcmp(addr, opcode, len))
> >> +		r = -EFAULT;
> > 
> > How could this ever fail? And how can we reliably recover from that?
> 
> This code has been there before (with slightly uglier code). Before this
> patch, a BUG_ON() was used here. However, I noticed that kgdb actually
> checks that text_poke() succeeded after calling it and gracefully fails.
> However, this was useless, since text_poke() would panic before kgdb gets
> the chance to do anything (see patch 7).

Yes, I know it was there before, and I did see kgdb do it too. But aside
from that out-label case, which we also should never hit, how can we
realistically ever fail that memcmp()?

If we fail here, something is _seriously_ buggered.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v4 06/10] x86/alternative: use temporary mm for text poking
  2018-11-11 23:52       ` Peter Zijlstra
@ 2018-11-12  0:09         ` Nadav Amit
  2018-11-12  0:41           ` Peter Zijlstra
  2018-11-12  0:36         ` Peter Zijlstra
  2018-11-12  3:46         ` Ingo Molnar
  2 siblings, 1 reply; 29+ messages in thread
From: Nadav Amit @ 2018-11-12  0:09 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Ingo Molnar, LKML, X86 ML, H. Peter Anvin, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Andy Lutomirski, Kees Cook,
	Dave Hansen, Masami Hiramatsu

From: Peter Zijlstra
Sent: November 11, 2018 at 11:52:20 PM GMT
> To: Nadav Amit <namit@vmware.com>
> Cc: Ingo Molnar <mingo@redhat.com>, LKML <linux-kernel@vger.kernel.org>, X86 ML <x86@kernel.org>, H. Peter Anvin <hpa@zytor.com>, Thomas Gleixner <tglx@linutronix.de>, Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>, Andy Lutomirski <luto@kernel.org>, Kees Cook <keescook@chromium.org>, Dave Hansen <dave.hansen@intel.com>, Masami Hiramatsu <mhiramat@kernel.org>
> Subject: Re: [PATCH v4 06/10] x86/alternative: use temporary mm for text poking
> 
> 
> On Sun, Nov 11, 2018 at 08:53:07PM +0000, Nadav Amit wrote:
> 
>>>> +	/*
>>>> +	 * The lock is not really needed, but this allows to avoid open-coding.
>>>> +	 */
>>>> +	ptep = get_locked_pte(poking_mm, poking_addr, &ptl);
>>>> +
>>>> +	/*
>>>> +	 * If we failed to allocate a PTE, fail. This should *never* happen,
>>>> +	 * since we preallocate the PTE.
>>>> +	 */
>>>> +	if (WARN_ON_ONCE(!ptep))
>>>> +		goto out;
>>> 
>>> Since we hard rely on init getting that right; can't we simply get rid
>>> of this?
>> 
>> This is a repeated complaint of yours, which I do not feel comfortable with.
>> One day someone will run some static analysis tool and start finding that
>> all these checks are missing.
>> 
>> The question is why do you care about them.
> 
> Mostly because they should not be happening, ever. And if they happen,
> there really isn't anything sensible we can do about it.
> 
>> If it is because they affect the
>> generated code and make it less efficient, I can fully understand and perhaps
>> we should have something like PARANOID_WARN_ON_ONCE() which compiles into nothing
>> unless a certain debug option is set.
>> 
>> If it is about the way the source code looks - I guess it doesn’t hurt my
>> eyes as much as some other stuff, and I cannot do much about it (other than
>> removing it as you asked).
> 
> And yes on the above two points. It adds both runtime overhead (albeit
> trivially small) and code complexity.

I understand. So the question is - what would you prefer: something like
PARANOID_WARN_ON_ONCE() or should I just remove the assertion?

>>>> +out:
>>>> +	if (memcmp(addr, opcode, len))
>>>> +		r = -EFAULT;
>>> 
>>> How could this ever fail? And how can we reliably recover from that?
>> 
>> This code has been there before (with slightly uglier code). Before this
>> patch, a BUG_ON() was used here. However, I noticed that kgdb actually
>> checks that text_poke() succeeded after calling it and gracefully fails.
>> However, this was useless, since text_poke() would panic before kgdb gets
>> the chance to do anything (see patch 7).
> 
> Yes, I know it was there before, and I did see kgdb do it too. But aside
> from that out-label case, which we also should never hit, how can we
> realistically ever fail that memcmp()?
> 
> If we fail here, something is _seriously_ buggered.

I agree. But it may be useful at least to warn in such a case, since
debugging of SMC/CMC is hard. For example, if there is some sort of race
between module (un)loading and static-keys, such a check might help to
indicate it. Having said that, changing it into VM_BUG_ON() or
something similar may make more sense.

Personally, I don’t care much - I’m just worried that I made some intrusive
changes *and* you want me to remove the assertion that checks that I didn’t
screw up.


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v4 05/10] x86/alternative: initializing temporary mm for patching
  2018-11-11 20:38     ` Nadav Amit
@ 2018-11-12  0:34       ` Peter Zijlstra
  0 siblings, 0 replies; 29+ messages in thread
From: Peter Zijlstra @ 2018-11-12  0:34 UTC (permalink / raw)
  To: Nadav Amit
  Cc: Ingo Molnar, linux-kernel, x86, H. Peter Anvin, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Kees Cook, Dave Hansen

On Sun, Nov 11, 2018 at 08:38:53PM +0000, Nadav Amit wrote:
> From: Peter Zijlstra
> > On Sat, Nov 10, 2018 at 03:17:27PM -0800, Nadav Amit wrote:
> >> +void __init poking_init(void)
> >> +{
> >> +	spinlock_t *ptl;
> >> +	pte_t *ptep;
> >> +
> >> +	poking_mm = copy_init_mm();
> >> +	if (!poking_mm) {
> >> +		pr_err("x86/mm: error setting a separate poking address space");
> >> +		return;
> >> +	}
> >> +
> >> +	/*
> >> +	 * Randomize the poking address, but make sure that the following page
> >> +	 * will be mapped at the same PMD. We need 2 pages, so find space for 3,
> >> +	 * and adjust the address if the PMD ends after the first one.
> >> +	 */
> >> +	poking_addr = TASK_UNMAPPED_BASE +
> >> +		(kaslr_get_random_long("Poking") & PAGE_MASK) %
> >> +		(TASK_SIZE - TASK_UNMAPPED_BASE - 3 * PAGE_SIZE);
> >> +
> >> +	if (((poking_addr + PAGE_SIZE) & ~PMD_MASK) == 0)
> >> +		poking_addr += PAGE_SIZE;
> >> +
> >> +	/*
> >> +	 * We need to trigger the allocation of the page-tables that will be
> >> +	 * needed for poking now. Later, poking may be performed in an atomic
> >> +	 * section, which might cause allocation to fail.
> >> +	 */
> >> +	ptep = get_locked_pte(poking_mm, poking_addr, &ptl);
> >> +	if (!WARN_ON(!ptep))
> >> +		pte_unmap_unlock(ptep, ptl);
> >> +}
> > 
> > The difference in how we deal with -ENOMEM here is weird. I think we
> > have a _lot_ of code that simply hard assumes we don't fail memory alloc
> > on init.
> > 
> > I for instance would not mind to simply remove both branches and let the
> > kernel crash and burn if we ever fail here.
> 
> Actually, now that we removed the fallback of patching without poking_mm, a
> failure to allocate poking_mm should have had a BUG_ON().
> 
> For the second case, I think we still need either WARN_ON() or BUG_ON(), at
> least as some sort of an in-code comment. I’ll change it to BUG_ON() if you
> prefer.

Sure, two BUG_ON()s works for me.
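
For the archive, a rough sketch of what the agreed-upon poking_init() might
look like (not the actual v5 patch):

void __init poking_init(void)
{
	spinlock_t *ptl;
	pte_t *ptep;

	poking_mm = copy_init_mm();
	BUG_ON(!poking_mm);

	/* ... randomize poking_addr as in the hunk above ... */

	/*
	 * We need to trigger the allocation of the page-tables that will be
	 * used for poking now. Later, poking may be performed in an atomic
	 * section, which might cause allocation to fail.
	 */
	ptep = get_locked_pte(poking_mm, poking_addr, &ptl);
	BUG_ON(!ptep);
	pte_unmap_unlock(ptep, ptl);
}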

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v4 06/10] x86/alternative: use temporary mm for text poking
  2018-11-11 23:52       ` Peter Zijlstra
  2018-11-12  0:09         ` Nadav Amit
@ 2018-11-12  0:36         ` Peter Zijlstra
  2018-11-12  3:46         ` Ingo Molnar
  2 siblings, 0 replies; 29+ messages in thread
From: Peter Zijlstra @ 2018-11-12  0:36 UTC (permalink / raw)
  To: Nadav Amit
  Cc: Ingo Molnar, LKML, X86 ML, H. Peter Anvin, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Andy Lutomirski, Kees Cook,
	Dave Hansen, Masami Hiramatsu

On Mon, Nov 12, 2018 at 12:52:20AM +0100, Peter Zijlstra wrote:
> On Sun, Nov 11, 2018 at 08:53:07PM +0000, Nadav Amit wrote:
> 
> > >> +	/*
> > >> +	 * The lock is not really needed, but this allows to avoid open-coding.
> > >> +	 */
> > >> +	ptep = get_locked_pte(poking_mm, poking_addr, &ptl);
> > >> +
> > >> +	/*
> > >> +	 * If we failed to allocate a PTE, fail. This should *never* happen,
> > >> +	 * since we preallocate the PTE.
> > >> +	 */
> > >> +	if (WARN_ON_ONCE(!ptep))
> > >> +		goto out;
> > > 
> > > Since we hard rely on init getting that right; can't we simply get rid
> > > of this?

> > If it is about the way the source code looks - I guess it doesn’t hurt my
> > eyes as much as some other stuff, and I cannot do much about it (other than
> > removing it as you asked).

FWIW per the same argument we should be checking if poking_mm is !NULL.
We also don't do that.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v4 06/10] x86/alternative: use temporary mm for text poking
  2018-11-12  0:09         ` Nadav Amit
@ 2018-11-12  0:41           ` Peter Zijlstra
  0 siblings, 0 replies; 29+ messages in thread
From: Peter Zijlstra @ 2018-11-12  0:41 UTC (permalink / raw)
  To: Nadav Amit
  Cc: Ingo Molnar, LKML, X86 ML, H. Peter Anvin, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Andy Lutomirski, Kees Cook,
	Dave Hansen, Masami Hiramatsu

On Mon, Nov 12, 2018 at 12:09:32AM +0000, Nadav Amit wrote:
> > On Sun, Nov 11, 2018 at 08:53:07PM +0000, Nadav Amit wrote:
> > 
> >>>> +	/*
> >>>> +	 * The lock is not really needed, but this allows to avoid open-coding.
> >>>> +	 */
> >>>> +	ptep = get_locked_pte(poking_mm, poking_addr, &ptl);
> >>>> +
> >>>> +	/*
> >>>> +	 * If we failed to allocate a PTE, fail. This should *never* happen,
> >>>> +	 * since we preallocate the PTE.
> >>>> +	 */
> >>>> +	if (WARN_ON_ONCE(!ptep))
> >>>> +		goto out;
> >>> 
> >>> Since we hard rely on init getting that right; can't we simply get rid
> >>> of this?

> I understand. So the question is - what would you prefer: something like
> PARANOID_WARN_ON_ONCE() or should I just remove the assertion?

Something like:

	/*
	 * @ptep cannot be NULL per construction in poking_init().
	 */

And then leave it at that. If it ever comes unstuck we'll get the NULL
deref, which is just as good as a BUG_ON().
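
Spelled out against the hunk above, that would look roughly like this (sketch
only, not a tested patch):

	ptep = get_locked_pte(poking_mm, poking_addr, &ptl);

	/*
	 * @ptep cannot be NULL per construction: poking_init() preallocates
	 * the page-table entries for poking_addr. If that ever breaks, the
	 * dereference below oopses, which is just as good as a BUG_ON().
	 */
	pte = mk_pte(pages[0], PAGE_KERNEL);
	set_pte_at(poking_mm, poking_addr, ptep, pte);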

> >>>> +out:
> >>>> +	if (memcmp(addr, opcode, len))
> >>>> +		r = -EFAULT;
> >>> 
> >>> How could this ever fail? And how can we reliably recover from that?
> >> 
> >> This code has been there before (with slightly uglier code). Before this
> >> patch, a BUG_ON() was used here. However, I noticed that kgdb actually
> >> checks that text_poke() succeeded after calling it and gracefully fails.
> >> However, this was useless, since text_poke() would panic before kgdb gets
> >> the chance to do anything (see patch 7).
> > 
> > Yes, I know it was there before, and I did see kgdb do it too. But aside
> > from that out-label case, which we also should never hit, how can we
> > realistically ever fail that memcmp()?
> > 
> > If we fail here, something is _seriously_ buggered.
> 
> I agree. But it may be useful at least to warn in such a case, since
> debugging of SMC/CMC is hard. For example, if there is some sort of race
> between module (un)loading and static-keys, such a check might help to
> indicate it. Having said that, changing it into VM_BUG_ON() or
> something similar may make more sense.
> 
> Personally, I don’t care much - I’m just worried that I made some intrusive
> changes *and* you want me to remove the assertion that checks that I didn’t
> screw up.

Ah, so I'm perfectly fine with something like:

	VM_BUG_ON(memcmp());

I just don't see value in the whole return code here. If this comes
unstuck, we're buggered beyond repair.
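
Concretely, the tail of __text_poke() could then shrink to something like this
(sketch, assuming the error return is dropped and the function made void):

	pte_unmap_unlock(ptep, ptl);

	/*
	 * If the poked text does not match what we wrote, something is
	 * buggered beyond repair; there is no sensible recovery.
	 */
	VM_BUG_ON(memcmp(addr, opcode, len));

	local_irq_restore(flags);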

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v4 01/10] Fix "x86/alternatives: Lockdep-enforce text_mutex in text_poke*()"
  2018-11-10 23:17 ` [PATCH v4 01/10] Fix "x86/alternatives: Lockdep-enforce text_mutex in text_poke*()" Nadav Amit
@ 2018-11-12  2:54   ` Masami Hiramatsu
  2018-11-12 10:59     ` Jiri Kosina
  0 siblings, 1 reply; 29+ messages in thread
From: Masami Hiramatsu @ 2018-11-12  2:54 UTC (permalink / raw)
  To: Nadav Amit
  Cc: Ingo Molnar, linux-kernel, x86, H. Peter Anvin, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Jiri Kosina, Andy Lutomirski,
	Kees Cook, Dave Hansen, Masami Hiramatsu

On Sat, 10 Nov 2018 15:17:23 -0800
Nadav Amit <namit@vmware.com> wrote:

> text_mutex is currently expected to be held before text_poke() is
> called, but kgdb does not take the mutex, and instead *supposedly*
> ensures the lock is not taken and will not be acquired by any other core
> while text_poke() is running.
> 
> The reason for the "supposedly" comment is that it is not entirely clear
> that this would be the case if gdb_do_roundup is zero.
> 
> This patch creates two wrapper functions, text_poke() and
> text_poke_kgdb() which do or do not run the lockdep assertion
> respectively.
> 
> While we are at it, change the return code of text_poke() to something
> meaningful. One day, callers might actually respect it and the existing
> BUG_ON() when patching fails could be removed. For kgdb, the return
> value can actually be used.

Hm, this looks reasonable and good to me.

Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>

Thank you!

> 
> Cc: Jiri Kosina <jkosina@suse.cz>
> Cc: Andy Lutomirski <luto@kernel.org>
> Cc: Kees Cook <keescook@chromium.org>
> Cc: Dave Hansen <dave.hansen@intel.com>
> Cc: Masami Hiramatsu <mhiramat@kernel.org>
> Fixes: 9222f606506c ("x86/alternatives: Lockdep-enforce text_mutex in text_poke*()")
> Suggested-by: Peter Zijlstra <peterz@infradead.org>
> Signed-off-by: Nadav Amit <namit@vmware.com>
> ---
>  arch/x86/include/asm/text-patching.h |  3 +-
>  arch/x86/kernel/alternative.c        | 72 +++++++++++++++++++++-------
>  arch/x86/kernel/kgdb.c               | 15 ++++--
>  3 files changed, 66 insertions(+), 24 deletions(-)
> 
> diff --git a/arch/x86/include/asm/text-patching.h b/arch/x86/include/asm/text-patching.h
> index e85ff65c43c3..5a2600370763 100644
> --- a/arch/x86/include/asm/text-patching.h
> +++ b/arch/x86/include/asm/text-patching.h
> @@ -34,7 +34,8 @@ extern void *text_poke_early(void *addr, const void *opcode, size_t len);
>   * On the local CPU you need to be protected again NMI or MCE handlers seeing an
>   * inconsistent instruction while you patch.
>   */
> -extern void *text_poke(void *addr, const void *opcode, size_t len);
> +extern int text_poke(void *addr, const void *opcode, size_t len);
> +extern int text_poke_kgdb(void *addr, const void *opcode, size_t len);
>  extern int poke_int3_handler(struct pt_regs *regs);
>  extern void *text_poke_bp(void *addr, const void *opcode, size_t len, void *handler);
>  extern int after_bootmem;
> diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
> index ebeac487a20c..ebe9210dc92e 100644
> --- a/arch/x86/kernel/alternative.c
> +++ b/arch/x86/kernel/alternative.c
> @@ -678,23 +678,12 @@ void *__init_or_module text_poke_early(void *addr, const void *opcode,
>  	return addr;
>  }
>  
> -/**
> - * text_poke - Update instructions on a live kernel
> - * @addr: address to modify
> - * @opcode: source of the copy
> - * @len: length to copy
> - *
> - * Only atomic text poke/set should be allowed when not doing early patching.
> - * It means the size must be writable atomically and the address must be aligned
> - * in a way that permits an atomic write. It also makes sure we fit on a single
> - * page.
> - */
> -void *text_poke(void *addr, const void *opcode, size_t len)
> +static int __text_poke(void *addr, const void *opcode, size_t len)
>  {
>  	unsigned long flags;
>  	char *vaddr;
>  	struct page *pages[2];
> -	int i;
> +	int i, r = 0;
>  
>  	/*
>  	 * While boot memory allocator is runnig we cannot use struct
> @@ -702,8 +691,6 @@ void *text_poke(void *addr, const void *opcode, size_t len)
>  	 */
>  	BUG_ON(!after_bootmem);
>  
> -	lockdep_assert_held(&text_mutex);
> -
>  	if (!core_kernel_text((unsigned long)addr)) {
>  		pages[0] = vmalloc_to_page(addr);
>  		pages[1] = vmalloc_to_page(addr + PAGE_SIZE);
> @@ -712,7 +699,8 @@ void *text_poke(void *addr, const void *opcode, size_t len)
>  		WARN_ON(!PageReserved(pages[0]));
>  		pages[1] = virt_to_page(addr + PAGE_SIZE);
>  	}
> -	BUG_ON(!pages[0]);
> +	if (!pages[0])
> +		return -EFAULT;
>  	local_irq_save(flags);
>  	set_fixmap(FIX_TEXT_POKE0, page_to_phys(pages[0]));
>  	if (pages[1])
> @@ -727,9 +715,57 @@ void *text_poke(void *addr, const void *opcode, size_t len)
>  	/* Could also do a CLFLUSH here to speed up CPU recovery; but
>  	   that causes hangs on some VIA CPUs. */
>  	for (i = 0; i < len; i++)
> -		BUG_ON(((char *)addr)[i] != ((char *)opcode)[i]);
> +		if (((char *)addr)[i] != ((char *)opcode)[i])
> +			r = -EFAULT;
>  	local_irq_restore(flags);
> -	return addr;
> +	return r;
> +}
> +
> +/**
> + * text_poke - Update instructions on a live kernel
> + * @addr: address to modify
> + * @opcode: source of the copy
> + * @len: length to copy
> + *
> + * Only atomic text poke/set should be allowed when not doing early patching.
> + * It means the size must be writable atomically and the address must be aligned
> + * in a way that permits an atomic write. It also makes sure we fit on a single
> + * page.
> + */
> +int text_poke(void *addr, const void *opcode, size_t len)
> +{
> +	int r;
> +
> +	lockdep_assert_held(&text_mutex);
> +
> +	r = __text_poke(addr, opcode, len);
> +
> +	/*
> +	 * TODO: change the callers to consider the return value and remove this
> +	 *       historical assertion.
> +	 */
> +	BUG_ON(r);
> +
> +	return r;
> +}
> +
> +/**
> + * text_poke_kgdb - Update instructions on a live kernel by kgdb
> + * @addr: address to modify
> + * @opcode: source of the copy
> + * @len: length to copy
> + *
> + * Only atomic text poke/set should be allowed when not doing early patching.
> + * It means the size must be writable atomically and the address must be aligned
> + * in a way that permits an atomic write. It also makes sure we fit on a single
> + * page.
> + *
> + * Context: should only be used by kgdb, which ensures no other core is running,
> + *	    despite the fact it does not hold the text_mutex.
> + */
> +int text_poke_kgdb(void *addr, const void *opcode, size_t len)
> +{
> +	return __text_poke(addr, opcode, len);
>  }
>  
>  static void do_sync_core(void *info)
> diff --git a/arch/x86/kernel/kgdb.c b/arch/x86/kernel/kgdb.c
> index 8e36f249646e..8091b2e381d4 100644
> --- a/arch/x86/kernel/kgdb.c
> +++ b/arch/x86/kernel/kgdb.c
> @@ -763,13 +763,15 @@ int kgdb_arch_set_breakpoint(struct kgdb_bkpt *bpt)
>  	if (!err)
>  		return err;
>  	/*
> -	 * It is safe to call text_poke() because normal kernel execution
> +	 * It is safe to call text_poke_kgdb() because normal kernel execution
>  	 * is stopped on all cores, so long as the text_mutex is not locked.
>  	 */
>  	if (mutex_is_locked(&text_mutex))
>  		return -EBUSY;
> -	text_poke((void *)bpt->bpt_addr, arch_kgdb_ops.gdb_bpt_instr,
> -		  BREAK_INSTR_SIZE);
> +	err = text_poke_kgdb((void *)bpt->bpt_addr, arch_kgdb_ops.gdb_bpt_instr,
> +			     BREAK_INSTR_SIZE);
> +	if (err)
> +		return err;
>  	err = probe_kernel_read(opc, (char *)bpt->bpt_addr, BREAK_INSTR_SIZE);
>  	if (err)
>  		return err;
> @@ -788,12 +790,15 @@ int kgdb_arch_remove_breakpoint(struct kgdb_bkpt *bpt)
>  	if (bpt->type != BP_POKE_BREAKPOINT)
>  		goto knl_write;
>  	/*
> -	 * It is safe to call text_poke() because normal kernel execution
> +	 * It is safe to call text_poke_kgdb() because normal kernel execution
>  	 * is stopped on all cores, so long as the text_mutex is not locked.
>  	 */
>  	if (mutex_is_locked(&text_mutex))
>  		goto knl_write;
> -	text_poke((void *)bpt->bpt_addr, bpt->saved_instr, BREAK_INSTR_SIZE);
> +	err = text_poke_kgdb((void *)bpt->bpt_addr, bpt->saved_instr,
> +			     BREAK_INSTR_SIZE);
> +	if (err)
> +		return err;
>  	err = probe_kernel_read(opc, (char *)bpt->bpt_addr, BREAK_INSTR_SIZE);
>  	if (err || memcmp(opc, bpt->saved_instr, BREAK_INSTR_SIZE))
>  		goto knl_write;
> -- 
> 2.17.1
> 


-- 
Masami Hiramatsu <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v4 06/10] x86/alternative: use temporary mm for text poking
  2018-11-11 23:52       ` Peter Zijlstra
  2018-11-12  0:09         ` Nadav Amit
  2018-11-12  0:36         ` Peter Zijlstra
@ 2018-11-12  3:46         ` Ingo Molnar
  2018-11-12  8:50           ` Peter Zijlstra
  2 siblings, 1 reply; 29+ messages in thread
From: Ingo Molnar @ 2018-11-12  3:46 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Nadav Amit, Ingo Molnar, LKML, X86 ML, H. Peter Anvin,
	Thomas Gleixner, Borislav Petkov, Dave Hansen, Andy Lutomirski,
	Kees Cook, Dave Hansen, Masami Hiramatsu


* Peter Zijlstra <peterz@infradead.org> wrote:

> On Sun, Nov 11, 2018 at 08:53:07PM +0000, Nadav Amit wrote:
> 
> > >> +	/*
> > >> +	 * The lock is not really needed, but this allows to avoid open-coding.
> > >> +	 */
> > >> +	ptep = get_locked_pte(poking_mm, poking_addr, &ptl);
> > >> +
> > >> +	/*
> > >> +	 * If we failed to allocate a PTE, fail. This should *never* happen,
> > >> +	 * since we preallocate the PTE.
> > >> +	 */
> > >> +	if (WARN_ON_ONCE(!ptep))
> > >> +		goto out;
> > > 
> > > Since we hard rely on init getting that right; can't we simply get rid
> > > of this?
> > 
> > This is a repeated complaint of yours, which I do not feel comfortable with.
> > One day someone will run some static analysis tool and start finding that
> > all these checks are missing.
> > 
> > The question is why do you care about them.
> 
> Mostly because they should not be happening, ever.

Since get_locked_pte() might in principle return NULL, it's an entirely 
routine pattern to check the return for NULL. This will save reviewer 
time in the future.

> [...] And if they happen, there really isn't anything sensible we can 
> do about it.

Warning about it is 'something', even if we crash afterwards, isn't it?

> > If it is because they affect the
> > generated code and make it less efficient, I can fully understand and perhaps
> > we should have something like PARANOID_WARN_ON_ONCE() which compiles into nothing
> > unless a certain debug option is set.
> > 
> > If it is about the way the source code looks - I guess it doesn’t sore my
> > eyes as hard as some other stuff, and I cannot do much about it (other than
> > removing it as you asked).
> 
> And yes on the above two points. It adds both runtime overhead (albeit
> trivially small) and code complexity.

It's trivially small cycle-level overhead in something that will be
burdened by two TLB flushes anyway and is utterly slow.

> > >> +out:
> > >> +	if (memcmp(addr, opcode, len))
> > >> +		r = -EFAULT;
> > > 
> > > How could this ever fail? And how can we reliably recover from that?
> > 
> > This code has been there before (with slightly uglier code). Before this
> > patch, a BUG_ON() was used here. However, I noticed that kgdb actually
> > checks that text_poke() succeeded after calling it and gracefully fail.
> > However, this was useless, since text_poke() would panic before kgdb gets
> > the chance to do anything (see patch 7).
> 
> Yes, I know it was there before, and I did see kgdb do it too. But aside
> from that out-label case, which we also should never hit, how can we
> realistically ever fail that memcmp()?
> 
> If we fail here, something is _seriously_ buggered.

So wouldn't it be better to just document and verify our assumptions of 
this non-trivial code by using return values intelligently?

I mean, being worried about overhead would be legitimate in the syscall 
entry code. In code patching code, which is essentially a slow path, we 
should be much more worried about *robustness*.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v4 06/10] x86/alternative: use temporary mm for text poking
  2018-11-12  3:46         ` Ingo Molnar
@ 2018-11-12  8:50           ` Peter Zijlstra
  0 siblings, 0 replies; 29+ messages in thread
From: Peter Zijlstra @ 2018-11-12  8:50 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Nadav Amit, Ingo Molnar, LKML, X86 ML, H. Peter Anvin,
	Thomas Gleixner, Borislav Petkov, Dave Hansen, Andy Lutomirski,
	Kees Cook, Dave Hansen, Masami Hiramatsu

On Mon, Nov 12, 2018 at 04:46:46AM +0100, Ingo Molnar wrote:
> 
> * Peter Zijlstra <peterz@infradead.org> wrote:
> 
> > On Sun, Nov 11, 2018 at 08:53:07PM +0000, Nadav Amit wrote:
> > 
> > > >> +	/*
> > > >> +	 * The lock is not really needed, but this allows to avoid open-coding.
> > > >> +	 */
> > > >> +	ptep = get_locked_pte(poking_mm, poking_addr, &ptl);
> > > >> +
> > > >> +	/*
> > > >> +	 * If we failed to allocate a PTE, fail. This should *never* happen,
> > > >> +	 * since we preallocate the PTE.
> > > >> +	 */
> > > >> +	if (WARN_ON_ONCE(!ptep))
> > > >> +		goto out;
> > > > 
> > > > Since we hard rely on init getting that right; can't we simply get rid
> > > > of this?
> > > 
> > > This is a repeated complaint of yours, which I do not feel comfortable with.
> > > One day someone will run some static analysis tool and start finding that
> > > all these checks are missing.
> > > 
> > > The question is why do you care about them.
> > 
> > Mostly because they should not be happening, ever.
> 
> Since get_locked_pte() might in principle return NULL, it's an entirely 
> routine pattern to check the return for NULL. This will save reviewer 
> time in the future.

The reviewer can read a comment.

> > > If it is because they affect the
> > > generated code and make it less efficient, I can fully understand and perhaps
> > > we should have something like PARANOID_WARN_ON_ONCE() which compiles into nothing
> > > unless a certain debug option is set.
> > > 
> > > If it is about the way the source code looks - I guess it doesn’t hurt my
> > > eyes as much as some other stuff, and I cannot do much about it (other than
> > > removing it as you asked).
> > 
> > And yes on the above two points. It adds both runtime overhead (albeit
> > trivially small) and code complexity.
> 
> It's trivially small cycle-level overhead in something that will be
> burdened by two TLB flushes anyway and is utterly slow.

The code complexity not so much.

> > > >> +out:
> > > >> +	if (memcmp(addr, opcode, len))
> > > >> +		r = -EFAULT;
> > > > 
> > > > How could this ever fail? And how can we reliably recover from that?
> > > 
> > > This code has been there before (with slightly uglier code). Before this
> > > patch, a BUG_ON() was used here. However, I noticed that kgdb actually
> > > checks that text_poke() succeeded after calling it and gracefully fails.
> > > However, this was useless, since text_poke() would panic before kgdb gets
> > > the chance to do anything (see patch 7).
> > 
> > Yes, I know it was there before, and I did see kgdb do it too. But aside
> > from that out-label case, which we also should never hit, how can we
> > realistically ever fail that memcmp()?
> > 
> > If we fail here, something is _seriously_ buggered.
> 
> So wouldn't it be better to just document and verify our assumptions of 
> this non-trivial code by using return values intelligently?

The thing is, I don't think there is realistically anything the caller
can do; our text is not what we expect it to be, that is a fairly
fundamentally buggered situation to be in.

I'm fine with validating it; I'm as paranoid as the next guy; but
passing along that information seems pointless. At best we can try
poking again, but that's not going to help much if it failed the first
time around.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v4 01/10] Fix "x86/alternatives: Lockdep-enforce text_mutex in text_poke*()"
  2018-11-12  2:54   ` Masami Hiramatsu
@ 2018-11-12 10:59     ` Jiri Kosina
  0 siblings, 0 replies; 29+ messages in thread
From: Jiri Kosina @ 2018-11-12 10:59 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Nadav Amit, Ingo Molnar, linux-kernel, x86, H. Peter Anvin,
	Thomas Gleixner, Borislav Petkov, Dave Hansen, Andy Lutomirski,
	Kees Cook, Dave Hansen

On Mon, 12 Nov 2018, Masami Hiramatsu wrote:

> > text_mutex is currently expected to be held before text_poke() is 
> > called, but kgdb does not take the mutex, and instead *supposedly* 
> > ensures the lock is not taken and will not be acquired by any other 
> > core while text_poke() is running.
> > 
> > The reason for the "supposedly" comment is that it is not entirely clear
> > that this would be the case if gdb_do_roundup is zero.
> > 
> > This patch creates two wrapper functions, text_poke() and
> > text_poke_kgdb() which do or do not run the lockdep assertion
> > respectively.
> > 
> > While we are at it, change the return code of text_poke() to something
> > meaningful. One day, callers might actually respect it and the existing
> > BUG_ON() when patching fails could be removed. For kgdb, the return
> > value can actually be used.
> 
> Hm, this looks reasonable and good to me.
> 
> Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>

Yes, I guess this is much better than putting the 'enforcement by comment' 
back in place :)

	Acked-by: Jiri Kosina <jkosina@suse.cz>

Thanks.

-- 
Jiri Kosina
SUSE Labs


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v4 02/10] x86/jump_label: Use text_poke_early() during early init
  2018-11-10 23:17 ` [PATCH v4 02/10] x86/jump_label: Use text_poke_early() during early init Nadav Amit
@ 2018-11-12 20:12   ` Nadav Amit
  0 siblings, 0 replies; 29+ messages in thread
From: Nadav Amit @ 2018-11-12 20:12 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: LKML, X86 ML, H. Peter Anvin, Thomas Gleixner, Borislav Petkov,
	Dave Hansen, Andy Lutomirski, Kees Cook, Dave Hansen,
	Masami Hiramatsu, Ingo Molnar

Peter,

I have put you as a “Co-Developed-by”, since the patch ended up being the
single line that you wrote in the correspondence about the previous version.

I would therefore need to ask for your signed-off-by.

Regards,
Nadav

From: Nadav Amit
Sent: November 10, 2018 at 11:17:24 PM GMT
> To: Ingo Molnar <mingo@redhat.com>
> Cc: linux-kernel@vger.kernel.org>, x86@kernel.org>, H. Peter Anvin <hpa@zytor.com>, Thomas Gleixner <tglx@linutronix.de>, Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>, Nadav Amit <namit@vmware.com>, Andy Lutomirski <luto@kernel.org>, Kees Cook <keescook@chromium.org>, Dave Hansen <dave.hansen@intel.com>, Masami Hiramatsu <mhiramat@kernel.org>
> Subject: [PATCH v4 02/10] x86/jump_label: Use text_poke_early() during early init
> 
> 
> There is no apparent reason not to use text_poke_early() while we are
> during early-init and we do not patch code that might be on the stack
> (i.e., we'll return to the middle of the patched code). This appears to
> be the case of jump-labels, so do so.
> 
> This is required for the next patches that would set a temporary mm for
> patching, which is initialized after some static-keys are
> enabled/disabled.
> 
> Cc: Andy Lutomirski <luto@kernel.org>
> Cc: Kees Cook <keescook@chromium.org>
> Cc: Dave Hansen <dave.hansen@intel.com>
> Cc: Masami Hiramatsu <mhiramat@kernel.org>
> Co-Developed-by: Peter Zijlstra <peterz@infradead.org>
> Signed-off-by: Nadav Amit <namit@vmware.com>
> ---
> arch/x86/kernel/jump_label.c | 7 ++++++-
> 1 file changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/kernel/jump_label.c b/arch/x86/kernel/jump_label.c
> index aac0c1f7e354..ed5fe274a7d8 100644
> --- a/arch/x86/kernel/jump_label.c
> +++ b/arch/x86/kernel/jump_label.c
> @@ -52,7 +52,12 @@ static void __ref __jump_label_transform(struct jump_entry *entry,
> 	jmp.offset = jump_entry_target(entry) -
> 		     (jump_entry_code(entry) + JUMP_LABEL_NOP_SIZE);
> 
> -	if (early_boot_irqs_disabled)
> +	/*
> +	 * As long as we're UP and not yet marked RO, we can use
> +	 * text_poke_early; SYSTEM_BOOTING guarantees both, as we switch to
> +	 * SYSTEM_SCHEDULING before going either.
> +	 */
> +	if (system_state == SYSTEM_BOOTING)
> 		poker = text_poke_early;
> 
> 	if (type == JUMP_LABEL_JMP) {
> -- 
> 2.17.1



^ permalink raw reply	[flat|nested] 29+ messages in thread

end of thread, other threads:[~2018-11-12 20:13 UTC | newest]

Thread overview: 29+ messages
2018-11-10 23:17 [PATCH v4 00/10] x86/alternative: text_poke() fixes Nadav Amit
2018-11-10 23:17 ` [PATCH v4 01/10] Fix "x86/alternatives: Lockdep-enforce text_mutex in text_poke*()" Nadav Amit
2018-11-12  2:54   ` Masami Hiramatsu
2018-11-12 10:59     ` Jiri Kosina
2018-11-10 23:17 ` [PATCH v4 02/10] x86/jump_label: Use text_poke_early() during early init Nadav Amit
2018-11-12 20:12   ` Nadav Amit
2018-11-10 23:17 ` [PATCH v4 03/10] x86/mm: temporary mm struct Nadav Amit
2018-11-10 23:17 ` [PATCH v4 04/10] fork: provide a function for copying init_mm Nadav Amit
2018-11-10 23:17 ` [PATCH v4 05/10] x86/alternative: initializing temporary mm for patching Nadav Amit
2018-11-11 14:43   ` Peter Zijlstra
2018-11-11 20:38     ` Nadav Amit
2018-11-12  0:34       ` Peter Zijlstra
2018-11-10 23:17 ` [PATCH v4 06/10] x86/alternative: use temporary mm for text poking Nadav Amit
2018-11-11 14:59   ` Peter Zijlstra
2018-11-11 20:53     ` Nadav Amit
2018-11-11 23:52       ` Peter Zijlstra
2018-11-12  0:09         ` Nadav Amit
2018-11-12  0:41           ` Peter Zijlstra
2018-11-12  0:36         ` Peter Zijlstra
2018-11-12  3:46         ` Ingo Molnar
2018-11-12  8:50           ` Peter Zijlstra
2018-11-11 19:11   ` Damian Tometzki
2018-11-11 20:41     ` Nadav Amit
2018-11-10 23:17 ` [PATCH v4 07/10] x86/kgdb: avoid redundant comparison of code Nadav Amit
2018-11-10 23:17 ` [PATCH v4 08/10] x86: avoid W^X being broken during modules loading Nadav Amit
2018-11-10 23:17 ` [PATCH v4 09/10] x86/jump-label: remove support for custom poker Nadav Amit
2018-11-11 15:05   ` Peter Zijlstra
2018-11-11 20:31     ` Nadav Amit
2018-11-10 23:17 ` [PATCH v4 10/10] x86/alternative: remove the return value of text_poke_*() Nadav Amit
