* [PATCH v3 0/7] x86/alternatives: text_poke() fixes
@ 2018-11-02 23:29 Nadav Amit
  2018-11-02 23:29 ` [PATCH v3 1/7] Fix "x86/alternatives: Lockdep-enforce text_mutex in text_poke*()" Nadav Amit
                   ` (6 more replies)
  0 siblings, 7 replies; 30+ messages in thread
From: Nadav Amit @ 2018-11-02 23:29 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, x86, H. Peter Anvin, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Nadav Amit, Jiri Kosina,
	Andy Lutomirski, Masami Hiramatsu, Kees Cook, Peter Zijlstra

This patch-set addresses some issues that were raised in recent
correspondence and might affect the security and correctness of code
patching. (Note that patching performance is not addressed by this
patch-set.)

The main issue that the patches deal with is the fact that the fixmap
PTEs that are used for patching are accessible from other cores and
might be exploited. They are not even flushed from the TLB on remote
cores, so the risk is even higher. Address this issue by introducing a
temporary mm that is only used during patching. Unfortunately, due to
init ordering, fixmap is still used during boot-time patching. Future
patches can eliminate the need for it.

To do so, we need to avoid using text_poke() before the poking-mm is
initialized and instead use text_poke_early().

The second issue is the lockdep assertion that ensures the text_mutex is
taken. It is actually not taken by kgdb. I did not find an easy
solution, as mutex_trylock() should not be called from an IRQ context.
Instead, remove the assertion.

Finally, try to be more conservative and map a single page, instead of
two, when possible. This helps both security and performance.

In addition, there is some cleanup of the patching code to make it more
readable.
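
For reference, the resulting patching flow is roughly the following (a
simplified sketch of patch 6/7; the cross-page case, KASAN handling and
error paths are omitted):

        local_irq_save(flags);

        /* Map the target page at the prereserved poking_addr in poking_mm */
        ptep = get_locked_pte(poking_mm, poking_addr, &ptl);
        set_pte_at(poking_mm, poking_addr, ptep, mk_pte(page, PAGE_KERNEL));

        /* Switch to the temporary mm, write, and tear the mapping down */
        prev = use_temporary_mm(poking_mm);
        memcpy((u8 *)poking_addr + offset_in_page(addr), opcode, len);
        pte_clear(poking_mm, poking_addr, ptep);
        __flush_tlb_one_user(poking_addr);
        unuse_temporary_mm(prev);

        pte_unmap_unlock(ptep, ptl);
        local_irq_restore(flags);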

v2->v3:
- Remove the fallback path in text_poke() [peterZ]
- poking_init() was broken due to the local variable poking_addr
- Preallocate tables for the temporary-mm to avoid sleep-in-atomic
- Prevent KASAN from yelling at text_poke()

v1->v2:
- Partial revert of 9222f606506c added to 1/6 [masami]
- Added Masami's reviewed-by tag

RFC->v1:
- Added handling of error in get_locked_pte()
- Remove lockdep assertion, clarify text_mutex use instead [masami]
- Comment fix [peterz]
- Removed remainders of text_poke return value [masami]
- Use __weak for poking_init instead of macros [masami]
- Simplify error handling in poking_init [masami]

Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Peter Zijlstra <peterz@infradead.org>

Andy Lutomirski (1):
  x86/mm: temporary mm struct

Nadav Amit (6):
  Fix "x86/alternatives: Lockdep-enforce text_mutex in text_poke*()"
  x86/jump_label: Use text_poke_early() during early_init
  fork: provide a function for copying init_mm
  x86/alternatives: initializing temporary mm for patching
  x86/alternatives: use temporary mm for text poking
  x86/alternatives: remove text_poke() return value

 arch/x86/include/asm/fixmap.h        |   2 -
 arch/x86/include/asm/mmu_context.h   |  20 +++++
 arch/x86/include/asm/pgtable.h       |   3 +
 arch/x86/include/asm/text-patching.h |   4 +-
 arch/x86/kernel/alternative.c        | 125 +++++++++++++++++++++------
 arch/x86/kernel/jump_label.c         |   8 +-
 arch/x86/mm/init_64.c                |  39 +++++++++
 include/linux/kernel.h               |   1 +
 include/linux/sched/task.h           |   1 +
 init/main.c                          |   7 ++
 kernel/fork.c                        |  24 +++--
 11 files changed, 199 insertions(+), 35 deletions(-)

-- 
2.17.1


^ permalink raw reply	[flat|nested] 30+ messages in thread

* [PATCH v3 1/7] Fix "x86/alternatives: Lockdep-enforce text_mutex in text_poke*()"
  2018-11-02 23:29 [PATCH v3 0/7] x86/alternatives: text_poke() fixes Nadav Amit
@ 2018-11-02 23:29 ` Nadav Amit
  2018-11-03 10:11   ` Jiri Kosina
  2018-11-04 20:58   ` Thomas Gleixner
  2018-11-02 23:29 ` [PATCH v3 2/7] x86/jump_label: Use text_poke_early() during early_init Nadav Amit
                   ` (5 subsequent siblings)
  6 siblings, 2 replies; 30+ messages in thread
From: Nadav Amit @ 2018-11-02 23:29 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, x86, H. Peter Anvin, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Nadav Amit, Jiri Kosina,
	Andy Lutomirski, Kees Cook, Dave Hansen

text_mutex is expected to be held before text_poke() is called, but we
cannot add a lockdep assertion since kgdb does not take it, and instead
*supposedly* ensures the lock is not taken and will not be acquired by
any other core while text_poke() is running.

The reason for the "supposedly" comment is that it is not entirely clear
that this would be the case if kgdb_do_roundup is zero.

Add a comment to clarify this behavior, and restore the assertions as
they were before the recent commit.

This partially reverts commit 9222f606506c ("x86/alternatives:
Lockdep-enforce text_mutex in text_poke*()")

Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Fixes: 9222f606506c ("x86/alternatives: Lockdep-enforce text_mutex in text_poke*()")
Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
Tested-by: Masami Hiramatsu <mhiramat@kernel.org>
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
---
 arch/x86/kernel/alternative.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index ebeac487a20c..1511d96d2e69 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -688,6 +688,11 @@ void *__init_or_module text_poke_early(void *addr, const void *opcode,
  * It means the size must be writable atomically and the address must be aligned
  * in a way that permits an atomic write. It also makes sure we fit on a single
  * page.
+ *
+ * Context: Must be called under text_mutex. kgdb is an exception: it does not
+ *	    hold the mutex, as it *supposedly* ensures that no other core is
+ *	    holding the mutex and ensures that none of them will acquire the
+ *	    mutex while the code runs.
  */
 void *text_poke(void *addr, const void *opcode, size_t len)
 {
@@ -702,8 +707,6 @@ void *text_poke(void *addr, const void *opcode, size_t len)
 	 */
 	BUG_ON(!after_bootmem);
 
-	lockdep_assert_held(&text_mutex);
-
 	if (!core_kernel_text((unsigned long)addr)) {
 		pages[0] = vmalloc_to_page(addr);
 		pages[1] = vmalloc_to_page(addr + PAGE_SIZE);
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v3 2/7] x86/jump_label: Use text_poke_early() during early_init
  2018-11-02 23:29 [PATCH v3 0/7] x86/alternatives: text_poke() fixes Nadav Amit
  2018-11-02 23:29 ` [PATCH v3 1/7] Fix "x86/alternatives: Lockdep-enforce text_mutex in text_poke*()" Nadav Amit
@ 2018-11-02 23:29 ` Nadav Amit
  2018-11-05 12:39   ` Peter Zijlstra
  2018-11-05 14:09   ` Peter Zijlstra
  2018-11-02 23:29 ` [PATCH v3 3/7] x86/mm: temporary mm struct Nadav Amit
                   ` (4 subsequent siblings)
  6 siblings, 2 replies; 30+ messages in thread
From: Nadav Amit @ 2018-11-02 23:29 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, x86, H. Peter Anvin, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Nadav Amit, Andy Lutomirski,
	Kees Cook, Peter Zijlstra, Dave Hansen, Masami Hiramatsu

There is no apparent reason not to use text_poke_early() while we are
still in early-init, as long as we do not patch code that might be on
the stack (i.e., code to which we will return in the middle of the
patched sequence). This appears to be the case for jump-labels, so do
so.

This is required for the next patches, which set a temporary mm for
patching; that mm is initialized only after some static keys are
enabled/disabled.

Cc: Andy Lutomirski <luto@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
---
 arch/x86/kernel/jump_label.c | 8 +++++++-
 include/linux/kernel.h       | 1 +
 init/main.c                  | 4 ++++
 3 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/jump_label.c b/arch/x86/kernel/jump_label.c
index aac0c1f7e354..367c1d0c20a3 100644
--- a/arch/x86/kernel/jump_label.c
+++ b/arch/x86/kernel/jump_label.c
@@ -52,7 +52,13 @@ static void __ref __jump_label_transform(struct jump_entry *entry,
 	jmp.offset = jump_entry_target(entry) -
 		     (jump_entry_code(entry) + JUMP_LABEL_NOP_SIZE);
 
-	if (early_boot_irqs_disabled)
+	/*
+	 * As long as we are in early boot, we can use text_poke_early(), which
+	 * is more efficient: the memory was still not marked as read-only (it
+	 * is only marked after poking_init()). This also prevents us from using
+	 * text_poke() before poking_init() is called.
+	 */
+	if (!early_boot_done)
 		poker = text_poke_early;
 
 	if (type == JUMP_LABEL_JMP) {
diff --git a/include/linux/kernel.h b/include/linux/kernel.h
index d6aac75b51ba..3e86ff3c64c4 100644
--- a/include/linux/kernel.h
+++ b/include/linux/kernel.h
@@ -564,6 +564,7 @@ extern unsigned long get_taint(void);
 extern int root_mountflags;
 
 extern bool early_boot_irqs_disabled;
+extern u8 early_boot_done;
 
 /*
  * Values used for system_state. Ordering of the states must not be changed
diff --git a/init/main.c b/init/main.c
index a664246450d1..b0fa26637496 100644
--- a/init/main.c
+++ b/init/main.c
@@ -117,6 +117,8 @@ extern void radix_tree_init(void);
  */
 bool early_boot_irqs_disabled __read_mostly;
 
+u8 early_boot_done __read_mostly;
+
 enum system_states system_state __read_mostly;
 EXPORT_SYMBOL(system_state);
 
@@ -735,6 +737,8 @@ asmlinkage __visible void __init start_kernel(void)
 		efi_free_boot_services();
 	}
 
+	early_boot_done = true;
+
 	/* Do the rest non-__init'ed, we're now alive */
 	rest_init();
 }
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v3 3/7] x86/mm: temporary mm struct
  2018-11-02 23:29 [PATCH v3 0/7] x86/alternatives: text_poke() fixes Nadav Amit
  2018-11-02 23:29 ` [PATCH v3 1/7] Fix "x86/alternatives: Lockdep-enforce text_mutex in text_poke*()" Nadav Amit
  2018-11-02 23:29 ` [PATCH v3 2/7] x86/jump_label: Use text_poke_early() during early_init Nadav Amit
@ 2018-11-02 23:29 ` Nadav Amit
  2018-11-02 23:29 ` [PATCH v3 4/7] fork: provide a function for copying init_mm Nadav Amit
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 30+ messages in thread
From: Nadav Amit @ 2018-11-02 23:29 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, x86, H. Peter Anvin, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Andy Lutomirski, Kees Cook,
	Peter Zijlstra, Dave Hansen, Nadav Amit

From: Andy Lutomirski <luto@kernel.org>

Sometimes we want to set temporary page-table entries (PTEs) on one of
the cores, without allowing other cores to use - even speculatively -
these mappings. There are two benefits to doing so:

(1) Security: if sensitive PTEs are set, a temporary mm prevents their
use on other cores. This hardens security, as it prevents exploiting a
dangling pointer to overwrite sensitive data using the sensitive PTE.

(2) Avoiding TLB shootdowns: the PTEs do not need to be flushed in
remote page-tables.

To do so, a temporary mm_struct can be used. Mappings which are private
for this mm can be set in the userspace part of the address-space.
During the whole time in which the temporary mm is loaded, interrupts
must be disabled.
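
The intended usage pattern is roughly the following (a simplified
sketch; temp_mm is a placeholder for a previously prepared mm, e.g. the
poking mm introduced later in this series):

        temporary_mm_state_t prev;
        unsigned long flags;

        local_irq_save(flags);                  /* IRQs must be disabled */
        prev = use_temporary_mm(temp_mm);

        /* ... access the mappings that are private to temp_mm ... */

        unuse_temporary_mm(prev);
        local_irq_restore(flags);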

The first use-case for temporary PTEs, which will follow, is for poking
the kernel text.

[ Commit message was written by Nadav ]

Cc: Kees Cook <keescook@chromium.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
Tested-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
---
 arch/x86/include/asm/mmu_context.h | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
index 0ca50611e8ce..7cc8e5c50bf6 100644
--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -338,4 +338,24 @@ static inline unsigned long __get_current_cr3_fast(void)
 	return cr3;
 }
 
+typedef struct {
+	struct mm_struct *prev;
+} temporary_mm_state_t;
+
+static inline temporary_mm_state_t use_temporary_mm(struct mm_struct *mm)
+{
+	temporary_mm_state_t state;
+
+	lockdep_assert_irqs_disabled();
+	state.prev = this_cpu_read(cpu_tlbstate.loaded_mm);
+	switch_mm_irqs_off(NULL, mm, current);
+	return state;
+}
+
+static inline void unuse_temporary_mm(temporary_mm_state_t prev)
+{
+	lockdep_assert_irqs_disabled();
+	switch_mm_irqs_off(NULL, prev.prev, current);
+}
+
 #endif /* _ASM_X86_MMU_CONTEXT_H */
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v3 4/7] fork: provide a function for copying init_mm
  2018-11-02 23:29 [PATCH v3 0/7] x86/alternatives: text_poke() fixes Nadav Amit
                   ` (2 preceding siblings ...)
  2018-11-02 23:29 ` [PATCH v3 3/7] x86/mm: temporary mm struct Nadav Amit
@ 2018-11-02 23:29 ` Nadav Amit
  2018-11-02 23:29 ` [PATCH v3 5/7] x86/alternatives: initializing temporary mm for patching Nadav Amit
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 30+ messages in thread
From: Nadav Amit @ 2018-11-02 23:29 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, x86, H. Peter Anvin, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Nadav Amit, Andy Lutomirski,
	Kees Cook, Peter Zijlstra, Dave Hansen

Provide a function for copying init_mm. This function will later be used
for setting up a temporary mm.
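
The expected use, as introduced later in this series when the poking mm
is initialized, is simply:

        poking_mm = copy_init_mm();
        if (!poking_mm) {
                /* report the error and fall back to the existing behavior */
        }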

Cc: Andy Lutomirski <luto@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
Tested-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
---
 include/linux/sched/task.h |  1 +
 kernel/fork.c              | 24 ++++++++++++++++++------
 2 files changed, 19 insertions(+), 6 deletions(-)

diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h
index 108ede99e533..ac0a675678f5 100644
--- a/include/linux/sched/task.h
+++ b/include/linux/sched/task.h
@@ -74,6 +74,7 @@ extern void exit_itimers(struct signal_struct *);
 extern long _do_fork(unsigned long, unsigned long, unsigned long, int __user *, int __user *, unsigned long);
 extern long do_fork(unsigned long, unsigned long, unsigned long, int __user *, int __user *);
 struct task_struct *fork_idle(int);
+struct mm_struct *copy_init_mm(void);
 extern pid_t kernel_thread(int (*fn)(void *), void *arg, unsigned long flags);
 extern long kernel_wait4(pid_t, int __user *, int, struct rusage *);
 
diff --git a/kernel/fork.c b/kernel/fork.c
index f0b58479534f..11233c370157 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1253,13 +1253,20 @@ void mm_release(struct task_struct *tsk, struct mm_struct *mm)
 		complete_vfork_done(tsk);
 }
 
-/*
- * Allocate a new mm structure and copy contents from the
- * mm structure of the passed in task structure.
+/**
+ * dup_mm() - duplicates an existing mm structure
+ * @tsk: the task_struct with which the new mm will be associated.
+ * @oldmm: the mm to duplicate.
+ *
+ * Allocates a new mm structure and copy contents from the provided
+ * @oldmm structure.
+ *
+ * Return: the duplicated mm or NULL on failure.
  */
-static struct mm_struct *dup_mm(struct task_struct *tsk)
+static struct mm_struct *dup_mm(struct task_struct *tsk,
+				struct mm_struct *oldmm)
 {
-	struct mm_struct *mm, *oldmm = current->mm;
+	struct mm_struct *mm;
 	int err;
 
 	mm = allocate_mm();
@@ -1326,7 +1333,7 @@ static int copy_mm(unsigned long clone_flags, struct task_struct *tsk)
 	}
 
 	retval = -ENOMEM;
-	mm = dup_mm(tsk);
+	mm = dup_mm(tsk, current->mm);
 	if (!mm)
 		goto fail_nomem;
 
@@ -2126,6 +2133,11 @@ struct task_struct *fork_idle(int cpu)
 	return task;
 }
 
+struct mm_struct *copy_init_mm(void)
+{
+	return dup_mm(NULL, &init_mm);
+}
+
 /*
  *  Ok, this is the main fork-routine.
  *
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v3 5/7] x86/alternatives: initializing temporary mm for patching
  2018-11-02 23:29 [PATCH v3 0/7] x86/alternatives: text_poke() fixes Nadav Amit
                   ` (3 preceding siblings ...)
  2018-11-02 23:29 ` [PATCH v3 4/7] fork: provide a function for copying init_mm Nadav Amit
@ 2018-11-02 23:29 ` Nadav Amit
  2018-11-02 23:29 ` [PATCH v3 6/7] x86/alternatives: use temporary mm for text poking Nadav Amit
  2018-11-02 23:29 ` [PATCH v3 7/7] x86/alternatives: remove text_poke() return value Nadav Amit
  6 siblings, 0 replies; 30+ messages in thread
From: Nadav Amit @ 2018-11-02 23:29 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, x86, H. Peter Anvin, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Nadav Amit, Kees Cook,
	Peter Zijlstra, Dave Hansen

To prevent improper use of the PTEs that are used for text patching, we
want to use a temporary mm struct. We initialize it by copying init_mm.

The address that will be used for patching is taken from the lower area
that is usually used for task memory. Doing so avoids the need to
frequently synchronize the temporary mm (e.g., when BPF programs are
installed), since different PGDs are used for task memory.

Finally, we randomize the address of the PTEs to harden against exploits
that use these PTEs.
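
To illustrate the PMD-boundary adjustment that is done below
(hypothetical example addresses, assuming 4 KiB pages and 2 MiB PMDs):

        /*
         * Suppose the randomized poking_addr is 0x5555553ff000. Then
         * poking_addr + PAGE_SIZE is 0x555555400000, which is PMD-aligned,
         * so the two poking pages would straddle a PMD boundary. After the
         * adjustment, poking_addr is 0x555555400000 and both pages live in
         * the same PMD.
         */
        if (((poking_addr + PAGE_SIZE) & ~PMD_MASK) == 0)
                poking_addr += PAGE_SIZE;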

Cc: Kees Cook <keescook@chromium.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
Tested-by: Masami Hiramatsu <mhiramat@kernel.org>
Suggested-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
---
 arch/x86/include/asm/pgtable.h       |  3 +++
 arch/x86/include/asm/text-patching.h |  2 ++
 arch/x86/kernel/alternative.c        |  3 +++
 arch/x86/mm/init_64.c                | 39 ++++++++++++++++++++++++++++
 init/main.c                          |  3 +++
 5 files changed, 50 insertions(+)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 40616e805292..e8f630d9a2ed 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -1021,6 +1021,9 @@ static inline void __meminit init_trampoline_default(void)
 	/* Default trampoline pgd value */
 	trampoline_pgd_entry = init_top_pgt[pgd_index(__PAGE_OFFSET)];
 }
+
+void __init poking_init(void);
+
 # ifdef CONFIG_RANDOMIZE_MEMORY
 void __meminit init_trampoline(void);
 # else
diff --git a/arch/x86/include/asm/text-patching.h b/arch/x86/include/asm/text-patching.h
index e85ff65c43c3..ffe7902cc326 100644
--- a/arch/x86/include/asm/text-patching.h
+++ b/arch/x86/include/asm/text-patching.h
@@ -38,5 +38,7 @@ extern void *text_poke(void *addr, const void *opcode, size_t len);
 extern int poke_int3_handler(struct pt_regs *regs);
 extern void *text_poke_bp(void *addr, const void *opcode, size_t len, void *handler);
 extern int after_bootmem;
+extern __ro_after_init struct mm_struct *poking_mm;
+extern __ro_after_init unsigned long poking_addr;
 
 #endif /* _ASM_X86_TEXT_PATCHING_H */
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 1511d96d2e69..9ceae28db1af 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -678,6 +678,9 @@ void *__init_or_module text_poke_early(void *addr, const void *opcode,
 	return addr;
 }
 
+__ro_after_init struct mm_struct *poking_mm;
+__ro_after_init unsigned long poking_addr;
+
 /**
  * text_poke - Update instructions on a live kernel
  * @addr: address to modify
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index dd519f372169..612d17760e20 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -54,6 +54,7 @@
 #include <asm/init.h>
 #include <asm/uv/uv.h>
 #include <asm/setup.h>
+#include <asm/text-patching.h>
 
 #include "mm_internal.h"
 
@@ -1389,6 +1390,44 @@ unsigned long memory_block_size_bytes(void)
 	return memory_block_size_probed;
 }
 
+/*
+ * Initialize an mm_struct to be used during poking and a pointer to be used
+ * during patching. If anything fails during initialization, poking will be done
+ * using the fixmap, which is unsafe, so warn the user about it.
+ */
+void __init poking_init(void)
+{
+	spinlock_t *ptl;
+	pte_t *ptep;
+
+	poking_mm = copy_init_mm();
+	if (!poking_mm) {
+		pr_err("x86/mm: error setting a separate poking address space");
+		return;
+	}
+
+	/*
+	 * Randomize the poking address, but make sure that the following page
+	 * will be mapped at the same PMD. We need 2 pages, so find space for 3,
+	 * and adjust the address if the PMD ends after the first one.
+	 */
+	poking_addr = TASK_UNMAPPED_BASE +
+		(kaslr_get_random_long("Poking") & PAGE_MASK) %
+		(TASK_SIZE - TASK_UNMAPPED_BASE - 3 * PAGE_SIZE);
+
+	if (((poking_addr + PAGE_SIZE) & ~PMD_MASK) == 0)
+		poking_addr += PAGE_SIZE;
+
+	/*
+	 * We need to trigger the allocation of the page-tables that will be
+	 * needed for poking now. Later, poking may be performed in an atomic
+	 * section, which might cause allocation to fail.
+	 */
+	ptep = get_locked_pte(poking_mm, poking_addr, &ptl);
+	if (!WARN_ON(!ptep))
+		pte_unmap_unlock(ptep, ptl);
+}
+
 #ifdef CONFIG_SPARSEMEM_VMEMMAP
 /*
  * Initialise the sparsemem vmemmap using huge-pages at the PMD level.
diff --git a/init/main.c b/init/main.c
index b0fa26637496..2c7ceffcf805 100644
--- a/init/main.c
+++ b/init/main.c
@@ -498,6 +498,8 @@ void __init __weak thread_stack_cache_init(void)
 
 void __init __weak mem_encrypt_init(void) { }
 
+void __init __weak poking_init(void) { }
+
 bool initcall_debug;
 core_param(initcall_debug, initcall_debug, bool, 0644);
 
@@ -727,6 +729,7 @@ asmlinkage __visible void __init start_kernel(void)
 	taskstats_init_early();
 	delayacct_init();
 
+	poking_init();
 	check_bugs();
 
 	acpi_subsystem_init();
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v3 6/7] x86/alternatives: use temporary mm for text poking
  2018-11-02 23:29 [PATCH v3 0/7] x86/alternatives: text_poke() fixes Nadav Amit
                   ` (4 preceding siblings ...)
  2018-11-02 23:29 ` [PATCH v3 5/7] x86/alternatives: initializing temporary mm for patching Nadav Amit
@ 2018-11-02 23:29 ` Nadav Amit
  2018-11-05 13:19   ` Peter Zijlstra
  2018-11-05 13:30   ` Peter Zijlstra
  2018-11-02 23:29 ` [PATCH v3 7/7] x86/alternatives: remove text_poke() return value Nadav Amit
  6 siblings, 2 replies; 30+ messages in thread
From: Nadav Amit @ 2018-11-02 23:29 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, x86, H. Peter Anvin, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Nadav Amit, Andy Lutomirski,
	Kees Cook, Peter Zijlstra, Dave Hansen, Masami Hiramatsu

text_poke() can potentially compromise security as it sets temporary
PTEs in the fixmap. These PTEs might be used to rewrite the kernel code
from other cores, accidentally or maliciously, if an attacker gains the
ability to write into kernel memory.

Moreover, since remote TLBs are not flushed after the temporary PTEs are
removed, the time-window in which the code is writable is not limited if
the fixmap PTEs - maliciously or accidentally - are cached in the TLB.
To address these potential security hazards, we use a temporary mm for
patching the code.

More adventurous developers can try to reorder the init sequence or use
text_poke_early() instead of text_poke() to remove the use of fixmap for
patching completely.

Finally, text_poke() is also not conservative enough when mapping pages,
as it always tries to map 2 pages, even when a single one is sufficient.
So try to be more conservative, and do not map more than needed.
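
For example, the check that decides whether a second page is needed (as
used below) is simply:

        bool cross_page_boundary = offset_in_page(addr) + len > PAGE_SIZE;

        /*
         * With 4 KiB pages, a write of 16 bytes at page offset 0xff8 crosses
         * into the next page, while a write of 8 bytes at the same offset
         * does not, and needs only a single mapping.
         */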

Cc: Andy Lutomirski <luto@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
---
 arch/x86/include/asm/fixmap.h |   2 -
 arch/x86/kernel/alternative.c | 112 +++++++++++++++++++++++++++-------
 2 files changed, 91 insertions(+), 23 deletions(-)

diff --git a/arch/x86/include/asm/fixmap.h b/arch/x86/include/asm/fixmap.h
index 50ba74a34a37..9da8cccdf3fb 100644
--- a/arch/x86/include/asm/fixmap.h
+++ b/arch/x86/include/asm/fixmap.h
@@ -103,8 +103,6 @@ enum fixed_addresses {
 #ifdef CONFIG_PARAVIRT
 	FIX_PARAVIRT_BOOTMAP,
 #endif
-	FIX_TEXT_POKE1,	/* reserve 2 pages for text_poke() */
-	FIX_TEXT_POKE0, /* first page is last, because allocation is backward */
 #ifdef	CONFIG_X86_INTEL_MID
 	FIX_LNW_VRTC,
 #endif
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 9ceae28db1af..1a40df4db450 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -11,6 +11,7 @@
 #include <linux/stop_machine.h>
 #include <linux/slab.h>
 #include <linux/kdebug.h>
+#include <linux/mmu_context.h>
 #include <asm/text-patching.h>
 #include <asm/alternative.h>
 #include <asm/sections.h>
@@ -699,41 +700,110 @@ __ro_after_init unsigned long poking_addr;
  */
 void *text_poke(void *addr, const void *opcode, size_t len)
 {
-	unsigned long flags;
-	char *vaddr;
+	bool cross_page_boundary = offset_in_page(addr) + len > PAGE_SIZE;
+	temporary_mm_state_t prev;
 	struct page *pages[2];
-	int i;
+	unsigned long flags;
+	pte_t pte, *ptep;
+	spinlock_t *ptl;
 
 	/*
-	 * While boot memory allocator is runnig we cannot use struct
-	 * pages as they are not yet initialized.
+	 * While boot memory allocator is running we cannot use struct pages as
+	 * they are not yet initialized.
 	 */
 	BUG_ON(!after_bootmem);
 
 	if (!core_kernel_text((unsigned long)addr)) {
 		pages[0] = vmalloc_to_page(addr);
-		pages[1] = vmalloc_to_page(addr + PAGE_SIZE);
+		if (cross_page_boundary)
+			pages[1] = vmalloc_to_page(addr + PAGE_SIZE);
 	} else {
 		pages[0] = virt_to_page(addr);
 		WARN_ON(!PageReserved(pages[0]));
-		pages[1] = virt_to_page(addr + PAGE_SIZE);
+		if (cross_page_boundary)
+			pages[1] = virt_to_page(addr + PAGE_SIZE);
 	}
+
+	/* TODO: let the caller deal with a failure and fail gracefully. */
 	BUG_ON(!pages[0]);
+	BUG_ON(cross_page_boundary && !pages[1]);
 	local_irq_save(flags);
-	set_fixmap(FIX_TEXT_POKE0, page_to_phys(pages[0]));
-	if (pages[1])
-		set_fixmap(FIX_TEXT_POKE1, page_to_phys(pages[1]));
-	vaddr = (char *)fix_to_virt(FIX_TEXT_POKE0);
-	memcpy(&vaddr[(unsigned long)addr & ~PAGE_MASK], opcode, len);
-	clear_fixmap(FIX_TEXT_POKE0);
-	if (pages[1])
-		clear_fixmap(FIX_TEXT_POKE1);
-	local_flush_tlb();
-	sync_core();
-	/* Could also do a CLFLUSH here to speed up CPU recovery; but
-	   that causes hangs on some VIA CPUs. */
-	for (i = 0; i < len; i++)
-		BUG_ON(((char *)addr)[i] != ((char *)opcode)[i]);
+
+	/*
+	 * The lock is not really needed, but this allows to avoid open-coding.
+	 */
+	ptep = get_locked_pte(poking_mm, poking_addr, &ptl);
+
+	/*
+	 * If we failed to allocate a PTE, fail silently. The caller (text_poke)
+	 * will detect that the write failed when it compares the memory with
+	 * the new opcode.
+	 */
+	if (unlikely(!ptep))
+		goto out;
+
+	pte = mk_pte(pages[0], PAGE_KERNEL);
+	set_pte_at(poking_mm, poking_addr, ptep, pte);
+
+	if (cross_page_boundary) {
+		pte = mk_pte(pages[1], PAGE_KERNEL);
+		set_pte_at(poking_mm, poking_addr + PAGE_SIZE, ptep + 1, pte);
+	}
+
+	/*
+	 * Loading the temporary mm behaves as a compiler barrier, which
+	 * guarantees that the PTE will be set at the time memcpy() is done.
+	 */
+	prev = use_temporary_mm(poking_mm);
+
+	kasan_disable_current();
+	memcpy((u8 *)poking_addr + offset_in_page(addr), opcode, len);
+	kasan_enable_current();
+
+	/*
+	 * Ensure that the PTE is only cleared after the instructions of memcpy
+	 * were issued by using a compiler barrier.
+	 */
+	barrier();
+
+	pte_clear(poking_mm, poking_addr, ptep);
+
+	/*
+	 * __flush_tlb_one_user() performs a redundant TLB flush when PTI is on,
+	 * as it also flushes the corresponding "user" address spaces, which
+	 * does not exist.
+	 *
+	 * Poking, however, is already very inefficient since it does not try to
+	 * batch updates, so we ignore this problem for the time being.
+	 *
+	 * Since the PTEs do not exist in other kernel address-spaces, we do
+	 * not use __flush_tlb_one_kernel(), which when PTI is on would cause
+	 * more unwarranted TLB flushes.
+	 *
+	 * There is a slight anomaly here: the PTE is a supervisor-only and
+	 * (potentially) global and we use __flush_tlb_one_user() but this
+	 * should be fine.
+	 */
+	__flush_tlb_one_user(poking_addr);
+	if (cross_page_boundary) {
+		pte_clear(poking_mm, poking_addr + PAGE_SIZE, ptep + 1);
+		__flush_tlb_one_user(poking_addr + PAGE_SIZE);
+	}
+
+	/*
+	 * Loading the previous page-table hierarchy requires a serializing
+	 * instruction that already allows the core to see the updated version.
+	 * Xen-PV is assumed to serialize execution in a similar manner.
+	 */
+	unuse_temporary_mm(prev);
+
+	pte_unmap_unlock(ptep, ptl);
+out:
+	/*
+	 * TODO: allow the callers to deal with potential failures and do not
+	 * panic so easily.
+	 */
+	BUG_ON(memcmp(addr, opcode, len));
 	local_irq_restore(flags);
 	return addr;
 }
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v3 7/7] x86/alternatives: remove text_poke() return value
  2018-11-02 23:29 [PATCH v3 0/7] x86/alternatives: text_poke() fixes Nadav Amit
                   ` (5 preceding siblings ...)
  2018-11-02 23:29 ` [PATCH v3 6/7] x86/alternatives: use temporary mm for text poking Nadav Amit
@ 2018-11-02 23:29 ` Nadav Amit
  6 siblings, 0 replies; 30+ messages in thread
From: Nadav Amit @ 2018-11-02 23:29 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, x86, H. Peter Anvin, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Nadav Amit, Andy Lutomirski,
	Kees Cook, Peter Zijlstra

The return value of text_poke() is meaningless - it is one of the
function inputs. One day someone may allow the callers to deal with
text_poke() failures, if those actually happen.

In the meantime, remove the return value.

Cc: Andy Lutomirski <luto@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
Tested-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
---
 arch/x86/include/asm/text-patching.h | 2 +-
 arch/x86/kernel/alternative.c        | 3 +--
 2 files changed, 2 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/text-patching.h b/arch/x86/include/asm/text-patching.h
index ffe7902cc326..1f73f71b4de2 100644
--- a/arch/x86/include/asm/text-patching.h
+++ b/arch/x86/include/asm/text-patching.h
@@ -34,7 +34,7 @@ extern void *text_poke_early(void *addr, const void *opcode, size_t len);
  * On the local CPU you need to be protected again NMI or MCE handlers seeing an
  * inconsistent instruction while you patch.
  */
-extern void *text_poke(void *addr, const void *opcode, size_t len);
+extern void text_poke(void *addr, const void *opcode, size_t len);
 extern int poke_int3_handler(struct pt_regs *regs);
 extern void *text_poke_bp(void *addr, const void *opcode, size_t len, void *handler);
 extern int after_bootmem;
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 1a40df4db450..6d64d7f8c2ed 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -698,7 +698,7 @@ __ro_after_init unsigned long poking_addr;
  *	    holding the mutex and ensures that none of them will acquire the
  *	    mutex while the code runs.
  */
-void *text_poke(void *addr, const void *opcode, size_t len)
+void text_poke(void *addr, const void *opcode, size_t len)
 {
 	bool cross_page_boundary = offset_in_page(addr) + len > PAGE_SIZE;
 	temporary_mm_state_t prev;
@@ -805,7 +805,6 @@ void *text_poke(void *addr, const void *opcode, size_t len)
 	 */
 	BUG_ON(memcmp(addr, opcode, len));
 	local_irq_restore(flags);
-	return addr;
 }
 
 static void do_sync_core(void *info)
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* Re: [PATCH v3 1/7] Fix "x86/alternatives: Lockdep-enforce text_mutex in text_poke*()"
  2018-11-02 23:29 ` [PATCH v3 1/7] Fix "x86/alternatives: Lockdep-enforce text_mutex in text_poke*()" Nadav Amit
@ 2018-11-03 10:11   ` Jiri Kosina
  2018-11-04 20:58   ` Thomas Gleixner
  1 sibling, 0 replies; 30+ messages in thread
From: Jiri Kosina @ 2018-11-03 10:11 UTC (permalink / raw)
  To: Nadav Amit
  Cc: Ingo Molnar, linux-kernel, x86, H. Peter Anvin, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Andy Lutomirski, Kees Cook,
	Dave Hansen

On Fri, 2 Nov 2018, Nadav Amit wrote:

> text_mutex is expected to be held before text_poke() is called, but we
> cannot add a lockdep assertion since kgdb does not take it, and instead
> *supposedly* ensures the lock is not taken and will not be acquired by
> any other core while text_poke() is running.
> 
> The reason for the "supposedly" comment is that it is not entirely clear
> that this would be the case if gdb_do_roundup is zero.
> 
> Add a comment to clarify this behavior, and restore the assertions as
> they were before the recent commit.
> 
> This partially reverts commit 9222f606506c ("x86/alternatives:
> Lockdep-enforce text_mutex in text_poke*()")

Alright, what can we do. It's probably better to have this, rather than
trying to work around this in kgdb to accommodate the rest of the world.

> Cc: Jiri Kosina <jkosina@suse.cz>
> Cc: Andy Lutomirski <luto@kernel.org>
> Cc: Kees Cook <keescook@chromium.org>
> Cc: Dave Hansen <dave.hansen@intel.com>
> Fixes: 9222f606506c ("x86/alternatives: Lockdep-enforce text_mutex in text_poke*()")
> Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
> Tested-by: Masami Hiramatsu <mhiramat@kernel.org>
> Suggested-by: Peter Zijlstra <peterz@infradead.org>
> Signed-off-by: Nadav Amit <namit@vmware.com>

Acked-by: Jiri Kosina <jkosina@suse.cz>

Thanks,

-- 
Jiri Kosina
SUSE Labs


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v3 1/7] Fix "x86/alternatives: Lockdep-enforce text_mutex in text_poke*()"
  2018-11-02 23:29 ` [PATCH v3 1/7] Fix "x86/alternatives: Lockdep-enforce text_mutex in text_poke*()" Nadav Amit
  2018-11-03 10:11   ` Jiri Kosina
@ 2018-11-04 20:58   ` Thomas Gleixner
  2018-11-05 18:14     ` Nadav Amit
  1 sibling, 1 reply; 30+ messages in thread
From: Thomas Gleixner @ 2018-11-04 20:58 UTC (permalink / raw)
  To: Nadav Amit
  Cc: Ingo Molnar, linux-kernel, x86, H. Peter Anvin, Borislav Petkov,
	Dave Hansen, Jiri Kosina, Andy Lutomirski, Kees Cook,
	Dave Hansen

On Fri, 2 Nov 2018, Nadav Amit wrote:

> text_mutex is expected to be held before text_poke() is called, but we
> cannot add a lockdep assertion since kgdb does not take it, and instead
> *supposedly* ensures the lock is not taken and will not be acquired by
> any other core while text_poke() is running.
> 
> The reason for the "supposedly" comment is that it is not entirely clear
> that this would be the case if kgdb_do_roundup is zero.
> 
> Add a comment to clarify this behavior, and restore the assertions as
> they were before the recent commit.

It restores nothing. It just removes the assertion.

> This partially reverts commit 9222f606506c ("x86/alternatives:
> Lockdep-enforce text_mutex in text_poke*()")

That opens up the same can of worms again, which took us a while to close.

Can we please instead split out the text_poke() code into a helper function
and have two callers:

    text_poke() which contains the assert

    text_poke_kgdb() which does not
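
Something along these lines, perhaps (a rough sketch, using the names
suggested above; the actual split would move the current body of
text_poke() into the helper):

    static void *__text_poke(void *addr, const void *opcode, size_t len)
    {
            /* ... current text_poke() body, without the lockdep assertion ... */
            return addr;
    }

    void *text_poke(void *addr, const void *opcode, size_t len)
    {
            lockdep_assert_held(&text_mutex);
            return __text_poke(addr, opcode, len);
    }

    void *text_poke_kgdb(void *addr, const void *opcode, size_t len)
    {
            return __text_poke(addr, opcode, len);
    }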

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v3 2/7] x86/jump_label: Use text_poke_early() during early_init
  2018-11-02 23:29 ` [PATCH v3 2/7] x86/jump_label: Use text_poke_early() during early_init Nadav Amit
@ 2018-11-05 12:39   ` Peter Zijlstra
  2018-11-05 13:33     ` Peter Zijlstra
  2018-11-05 14:09   ` Peter Zijlstra
  1 sibling, 1 reply; 30+ messages in thread
From: Peter Zijlstra @ 2018-11-05 12:39 UTC (permalink / raw)
  To: Nadav Amit
  Cc: Ingo Molnar, linux-kernel, x86, H. Peter Anvin, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Andy Lutomirski, Kees Cook,
	Dave Hansen, Masami Hiramatsu

On Fri, Nov 02, 2018 at 04:29:41PM -0700, Nadav Amit wrote:
> diff --git a/init/main.c b/init/main.c
> index a664246450d1..b0fa26637496 100644
> --- a/init/main.c
> +++ b/init/main.c
> @@ -117,6 +117,8 @@ extern void radix_tree_init(void);
>   */
>  bool early_boot_irqs_disabled __read_mostly;
>  
> +u8 early_boot_done __read_mostly;
> +
>  enum system_states system_state __read_mostly;
>  EXPORT_SYMBOL(system_state);

Should this not be using system_state ^ ? The site is very close to
SYSTEM_SCHEDULING, can we use that or should we add another state ?

> @@ -735,6 +737,8 @@ asmlinkage __visible void __init start_kernel(void)
>  		efi_free_boot_services();
>  	}
>  
> +	early_boot_done = true;
> +
>  	/* Do the rest non-__init'ed, we're now alive */
>  	rest_init();
>  }
> -- 
> 2.17.1
> 

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v3 6/7] x86/alternatives: use temporary mm for text poking
  2018-11-02 23:29 ` [PATCH v3 6/7] x86/alternatives: use temporary mm for text poking Nadav Amit
@ 2018-11-05 13:19   ` Peter Zijlstra
  2018-11-05 13:30   ` Peter Zijlstra
  1 sibling, 0 replies; 30+ messages in thread
From: Peter Zijlstra @ 2018-11-05 13:19 UTC (permalink / raw)
  To: Nadav Amit
  Cc: Ingo Molnar, linux-kernel, x86, H. Peter Anvin, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Andy Lutomirski, Kees Cook,
	Dave Hansen, Masami Hiramatsu

On Fri, Nov 02, 2018 at 04:29:45PM -0700, Nadav Amit wrote:
> diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
> index 9ceae28db1af..1a40df4db450 100644
> --- a/arch/x86/kernel/alternative.c
> +++ b/arch/x86/kernel/alternative.c

> @@ -699,41 +700,110 @@ __ro_after_init unsigned long poking_addr;
>   */
>  void *text_poke(void *addr, const void *opcode, size_t len)
>  {
> +	bool cross_page_boundary = offset_in_page(addr) + len > PAGE_SIZE;
> +	temporary_mm_state_t prev;
>  	struct page *pages[2];
> +	unsigned long flags;
> +	pte_t pte, *ptep;
> +	spinlock_t *ptl;
>  
>  	/*
> +	 * While boot memory allocator is running we cannot use struct pages as
> +	 * they are not yet initialized.
>  	 */
>  	BUG_ON(!after_bootmem);
>  
>  	if (!core_kernel_text((unsigned long)addr)) {
>  		pages[0] = vmalloc_to_page(addr);
> +		if (cross_page_boundary)
> +			pages[1] = vmalloc_to_page(addr + PAGE_SIZE);
>  	} else {
>  		pages[0] = virt_to_page(addr);
>  		WARN_ON(!PageReserved(pages[0]));
> +		if (cross_page_boundary)
> +			pages[1] = virt_to_page(addr + PAGE_SIZE);
>  	}
> +
> +	/* TODO: let the caller deal with a failure and fail gracefully. */
>  	BUG_ON(!pages[0]);
> +	BUG_ON(cross_page_boundary && !pages[1]);
>  	local_irq_save(flags);
> +
> +	/*
> +	 * The lock is not really needed, but this allows to avoid open-coding.
> +	 */
> +	ptep = get_locked_pte(poking_mm, poking_addr, &ptl);
> +
> +	/*
> +	 * If we failed to allocate a PTE, fail silently. The caller (text_poke)

we _are_ text_poke()..

> +	 * will detect that the write failed when it compares the memory with
> +	 * the new opcode.
> +	 */
> +	if (unlikely(!ptep))
> +		goto out;

This is the one site I'm a little uncomfortable with; OTOH it really
never should happen, since we explicitly instantiate these page-tables
earlier.

Can't we simply assume ptep will not be zero here? Like with so many
boot time memory allocations, we mostly assume they'll work.

> +	pte = mk_pte(pages[0], PAGE_KERNEL);
> +	set_pte_at(poking_mm, poking_addr, ptep, pte);
> +
> +	if (cross_page_boundary) {
> +		pte = mk_pte(pages[1], PAGE_KERNEL);
> +		set_pte_at(poking_mm, poking_addr + PAGE_SIZE, ptep + 1, pte);
> +	}
> +
> +	/*
> +	 * Loading the temporary mm behaves as a compiler barrier, which
> +	 * guarantees that the PTE will be set at the time memcpy() is done.
> +	 */
> +	prev = use_temporary_mm(poking_mm);
> +
> +	kasan_disable_current();
> +	memcpy((u8 *)poking_addr + offset_in_page(addr), opcode, len);
> +	kasan_enable_current();
> +
> +	/*
> +	 * Ensure that the PTE is only cleared after the instructions of memcpy
> +	 * were issued by using a compiler barrier.
> +	 */
> +	barrier();
> +
> +	pte_clear(poking_mm, poking_addr, ptep);
> +
> +	/*
> +	 * __flush_tlb_one_user() performs a redundant TLB flush when PTI is on,
> +	 * as it also flushes the corresponding "user" address spaces, which
> +	 * does not exist.
> +	 *
> +	 * Poking, however, is already very inefficient since it does not try to
> +	 * batch updates, so we ignore this problem for the time being.
> +	 *
> +	 * Since the PTEs do not exist in other kernel address-spaces, we do
> +	 * not use __flush_tlb_one_kernel(), which when PTI is on would cause
> +	 * more unwarranted TLB flushes.
> +	 *
> +	 * There is a slight anomaly here: the PTE is a supervisor-only and
> +	 * (potentially) global and we use __flush_tlb_one_user() but this
> +	 * should be fine.
> +	 */
> +	__flush_tlb_one_user(poking_addr);
> +	if (cross_page_boundary) {
> +		pte_clear(poking_mm, poking_addr + PAGE_SIZE, ptep + 1);
> +		__flush_tlb_one_user(poking_addr + PAGE_SIZE);
> +	}
> +
> +	/*
> +	 * Loading the previous page-table hierarchy requires a serializing
> +	 * instruction that already allows the core to see the updated version.
> +	 * Xen-PV is assumed to serialize execution in a similar manner.
> +	 */
> +	unuse_temporary_mm(prev);
> +
> +	pte_unmap_unlock(ptep, ptl);
> +out:
> +	/*
> +	 * TODO: allow the callers to deal with potential failures and do not
> +	 * panic so easily.
> +	 */
> +	BUG_ON(memcmp(addr, opcode, len));
>  	local_irq_restore(flags);
>  	return addr;
>  }

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v3 6/7] x86/alternatives: use temporary mm for text poking
  2018-11-02 23:29 ` [PATCH v3 6/7] x86/alternatives: use temporary mm for text poking Nadav Amit
  2018-11-05 13:19   ` Peter Zijlstra
@ 2018-11-05 13:30   ` Peter Zijlstra
  2018-11-05 18:04     ` Nadav Amit
  1 sibling, 1 reply; 30+ messages in thread
From: Peter Zijlstra @ 2018-11-05 13:30 UTC (permalink / raw)
  To: Nadav Amit
  Cc: Ingo Molnar, linux-kernel, x86, H. Peter Anvin, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Andy Lutomirski, Kees Cook,
	Dave Hansen, Masami Hiramatsu

On Fri, Nov 02, 2018 at 04:29:45PM -0700, Nadav Amit wrote:
> +	unuse_temporary_mm(prev);
> +
> +	pte_unmap_unlock(ptep, ptl);

That; that does kunmap_atomic() on 32bit.

I've been thinking that the whole kmap_atomic thing on x86_32 is
terminally broken, and with that most of x86_32 is.

kmap_atomic does the per-cpu fixmap pte fun-and-games we're here saying
is broken. Yes, only the one CPU will (explicitly) use those fixmap PTEs
and thus the local invalidate _should_ work. However nothing prohibits
speculation on another CPU from using our fixmap addresses. Which can
lead to the remote CPU populating its TLBs for our fixmap entry.

And, as we've found, there are AMD parts that #MC when there are
mis-matched TLB entries.

So what do we do? mark x86_32 SMP broken?

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v3 2/7] x86/jump_label: Use text_poke_early() during early_init
  2018-11-05 12:39   ` Peter Zijlstra
@ 2018-11-05 13:33     ` Peter Zijlstra
  0 siblings, 0 replies; 30+ messages in thread
From: Peter Zijlstra @ 2018-11-05 13:33 UTC (permalink / raw)
  To: Nadav Amit
  Cc: Ingo Molnar, linux-kernel, x86, H. Peter Anvin, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Andy Lutomirski, Kees Cook,
	Dave Hansen, Masami Hiramatsu

On Mon, Nov 05, 2018 at 01:39:53PM +0100, Peter Zijlstra wrote:
> On Fri, Nov 02, 2018 at 04:29:41PM -0700, Nadav Amit wrote:
> > diff --git a/init/main.c b/init/main.c
> > index a664246450d1..b0fa26637496 100644
> > --- a/init/main.c
> > +++ b/init/main.c
> > @@ -117,6 +117,8 @@ extern void radix_tree_init(void);
> >   */
> >  bool early_boot_irqs_disabled __read_mostly;
> >  
> > +u8 early_boot_done __read_mostly;
> > +
> >  enum system_states system_state __read_mostly;
> >  EXPORT_SYMBOL(system_state);
> 
> Should this not be using system_state ^ ? The site is very close to
> SYSTEM_SCHEDULING, can we use that or should we add another state ?

We must be before kernel_init() -> kernel_init_freeable() -> smp_init().

So we need another state, something like SYSTEM_BOOTING_SMP I suppose ?

> > @@ -735,6 +737,8 @@ asmlinkage __visible void __init start_kernel(void)
> >  		efi_free_boot_services();
> >  	}
> >  
> > +	early_boot_done = true;
> > +
> >  	/* Do the rest non-__init'ed, we're now alive */
> >  	rest_init();
> >  }
> > -- 
> > 2.17.1
> > 

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v3 2/7] x86/jump_label: Use text_poke_early() during early_init
  2018-11-02 23:29 ` [PATCH v3 2/7] x86/jump_label: Use text_poke_early() during early_init Nadav Amit
  2018-11-05 12:39   ` Peter Zijlstra
@ 2018-11-05 14:09   ` Peter Zijlstra
  2018-11-05 17:22     ` Andy Lutomirski
  2018-11-07 19:13     ` Nadav Amit
  1 sibling, 2 replies; 30+ messages in thread
From: Peter Zijlstra @ 2018-11-05 14:09 UTC (permalink / raw)
  To: Nadav Amit
  Cc: Ingo Molnar, linux-kernel, x86, H. Peter Anvin, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Andy Lutomirski, Kees Cook,
	Dave Hansen, Masami Hiramatsu

On Fri, Nov 02, 2018 at 04:29:41PM -0700, Nadav Amit wrote:
> diff --git a/arch/x86/kernel/jump_label.c b/arch/x86/kernel/jump_label.c
> index aac0c1f7e354..367c1d0c20a3 100644
> --- a/arch/x86/kernel/jump_label.c
> +++ b/arch/x86/kernel/jump_label.c
> @@ -52,7 +52,13 @@ static void __ref __jump_label_transform(struct jump_entry *entry,
>  	jmp.offset = jump_entry_target(entry) -
>  		     (jump_entry_code(entry) + JUMP_LABEL_NOP_SIZE);
>  
> -	if (early_boot_irqs_disabled)
> +	/*
> +	 * As long as we are in early boot, we can use text_poke_early(), which
> +	 * is more efficient: the memory was still not marked as read-only (it
> +	 * is only marked after poking_init()). This also prevents us from using
> +	 * text_poke() before poking_init() is called.
> +	 */
> +	if (!early_boot_done)
>  		poker = text_poke_early;
>  
>  	if (type == JUMP_LABEL_JMP) {

It took me a while to untangle init/maze^H^Hin.c... but I think this
is all we need:

diff --git a/arch/x86/kernel/jump_label.c b/arch/x86/kernel/jump_label.c
index aac0c1f7e354..ed5fe274a7d8 100644
--- a/arch/x86/kernel/jump_label.c
+++ b/arch/x86/kernel/jump_label.c
@@ -52,7 +52,12 @@ static void __ref __jump_label_transform(struct jump_entry *entry,
 	jmp.offset = jump_entry_target(entry) -
 		     (jump_entry_code(entry) + JUMP_LABEL_NOP_SIZE);
 
-	if (early_boot_irqs_disabled)
+	/*
+	 * As long as we're UP and not yet marked RO, we can use
+	 * text_poke_early; SYSTEM_BOOTING guarantees both, as we switch to
+	 * SYSTEM_SCHEDULING before going either.
+	 */
+	if (system_state == SYSTEM_BOOTING)
 		poker = text_poke_early;
 
 	if (type == JUMP_LABEL_JMP) {

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* Re: [PATCH v3 2/7] x86/jump_label: Use text_poke_early() during early_init
  2018-11-05 14:09   ` Peter Zijlstra
@ 2018-11-05 17:22     ` Andy Lutomirski
  2018-11-05 17:49       ` Nadav Amit
  2018-11-07 19:13     ` Nadav Amit
  1 sibling, 1 reply; 30+ messages in thread
From: Andy Lutomirski @ 2018-11-05 17:22 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Nadav Amit, Ingo Molnar, linux-kernel, x86, H. Peter Anvin,
	Thomas Gleixner, Borislav Petkov, Dave Hansen, Andy Lutomirski,
	Kees Cook, Dave Hansen, Masami Hiramatsu


> On Nov 5, 2018, at 6:09 AM, Peter Zijlstra <peterz@infradead.org> wrote:
> 
>> On Fri, Nov 02, 2018 at 04:29:41PM -0700, Nadav Amit wrote:
>> diff --git a/arch/x86/kernel/jump_label.c b/arch/x86/kernel/jump_label.c
>> index aac0c1f7e354..367c1d0c20a3 100644
>> --- a/arch/x86/kernel/jump_label.c
>> +++ b/arch/x86/kernel/jump_label.c
>> @@ -52,7 +52,13 @@ static void __ref __jump_label_transform(struct jump_entry *entry,
>>    jmp.offset = jump_entry_target(entry) -
>>             (jump_entry_code(entry) + JUMP_LABEL_NOP_SIZE);
>> 
>> -    if (early_boot_irqs_disabled)
>> +    /*
>> +     * As long as we are in early boot, we can use text_poke_early(), which
>> +     * is more efficient: the memory was still not marked as read-only (it
>> +     * is only marked after poking_init()). This also prevents us from using
>> +     * text_poke() before poking_init() is called.
>> +     */
>> +    if (!early_boot_done)
>>        poker = text_poke_early;
>> 
>>    if (type == JUMP_LABEL_JMP) {
> 
> It took me a while to untangle init/maze^H^Hin.c... but I think this
> is all we need:
> 
> diff --git a/arch/x86/kernel/jump_label.c b/arch/x86/kernel/jump_label.c
> index aac0c1f7e354..ed5fe274a7d8 100644
> --- a/arch/x86/kernel/jump_label.c
> +++ b/arch/x86/kernel/jump_label.c
> @@ -52,7 +52,12 @@ static void __ref __jump_label_transform(struct jump_entry *entry,
>    jmp.offset = jump_entry_target(entry) -
>             (jump_entry_code(entry) + JUMP_LABEL_NOP_SIZE);
> 
> -    if (early_boot_irqs_disabled)
> +    /*
> +     * As long as we're UP and not yet marked RO, we can use
> +     * text_poke_early; SYSTEM_BOOTING guarantees both, as we switch to
> +     * SYSTEM_SCHEDULING before going either.
> +     */
> +    if (system_state == SYSTEM_BOOTING)
>        poker = text_poke_early;
> 
>    if (type == JUMP_LABEL_JMP) {

Can we move this logic into text_poke() and get rid of text_poke_early()?

FWIW, alternative patching was, at some point, a significant fraction of total boot time in some cases. This was probably mostly due to unnecessary sync_core() calls. I think this was reported on a VM, and sync_core() used to be *extremely* expensive on a VM, but that’s fixed now, and it even got backported, I think.

(Hmm. Maybe we can also make jump label patching work in early boot, too!)

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v3 2/7] x86/jump_label: Use text_poke_early() during early_init
  2018-11-05 17:22     ` Andy Lutomirski
@ 2018-11-05 17:49       ` Nadav Amit
  2018-11-05 19:03         ` Andy Lutomirski
  0 siblings, 1 reply; 30+ messages in thread
From: Nadav Amit @ 2018-11-05 17:49 UTC (permalink / raw)
  To: Andy Lutomirski, Peter Zijlstra
  Cc: Ingo Molnar, LKML, X86 ML, H. Peter Anvin, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Andy Lutomirski, Kees Cook,
	Dave Hansen, Masami Hiramatsu

From: Andy Lutomirski
Sent: November 5, 2018 at 5:22:32 PM GMT
> To: Peter Zijlstra <peterz@infradead.org>
> Cc: Nadav Amit <namit@vmware.com>, Ingo Molnar <mingo@redhat.com>, linux-kernel@vger.kernel.org, x86@kernel.org, H. Peter Anvin <hpa@zytor.com>, Thomas Gleixner <tglx@linutronix.de>, Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>, Andy Lutomirski <luto@kernel.org>, Kees Cook <keescook@chromium.org>, Dave Hansen <dave.hansen@intel.com>, Masami Hiramatsu <mhiramat@kernel.org>
> Subject: Re: [PATCH v3 2/7] x86/jump_label: Use text_poke_early() during early_init
> 
> 
> 
>> On Nov 5, 2018, at 6:09 AM, Peter Zijlstra <peterz@infradead.org> wrote:
>> 
>>> On Fri, Nov 02, 2018 at 04:29:41PM -0700, Nadav Amit wrote:
>>> diff --git a/arch/x86/kernel/jump_label.c b/arch/x86/kernel/jump_label.c
>>> index aac0c1f7e354..367c1d0c20a3 100644
>>> --- a/arch/x86/kernel/jump_label.c
>>> +++ b/arch/x86/kernel/jump_label.c
>>> @@ -52,7 +52,13 @@ static void __ref __jump_label_transform(struct jump_entry *entry,
>>>   jmp.offset = jump_entry_target(entry) -
>>>            (jump_entry_code(entry) + JUMP_LABEL_NOP_SIZE);
>>> 
>>> -    if (early_boot_irqs_disabled)
>>> +    /*
>>> +     * As long as we are in early boot, we can use text_poke_early(), which
>>> +     * is more efficient: the memory was still not marked as read-only (it
>>> +     * is only marked after poking_init()). This also prevents us from using
>>> +     * text_poke() before poking_init() is called.
>>> +     */
>>> +    if (!early_boot_done)
>>>       poker = text_poke_early;
>>> 
>>>   if (type == JUMP_LABEL_JMP) {
>> 
>> It took me a while to untangle init/maze^H^Hin.c... but I think this
>> is all we need:
>> 
>> diff --git a/arch/x86/kernel/jump_label.c b/arch/x86/kernel/jump_label.c
>> index aac0c1f7e354..ed5fe274a7d8 100644
>> --- a/arch/x86/kernel/jump_label.c
>> +++ b/arch/x86/kernel/jump_label.c
>> @@ -52,7 +52,12 @@ static void __ref __jump_label_transform(struct jump_entry *entry,
>>   jmp.offset = jump_entry_target(entry) -
>>            (jump_entry_code(entry) + JUMP_LABEL_NOP_SIZE);
>> 
>> -    if (early_boot_irqs_disabled)
>> +    /*
>> +     * As long as we're UP and not yet marked RO, we can use
>> +     * text_poke_early; SYSTEM_BOOTING guarantees both, as we switch to
>> +     * SYSTEM_SCHEDULING before going either.
>> +     */
>> +    if (system_state == SYSTEM_BOOTING)
>>       poker = text_poke_early;
>> 
>>   if (type == JUMP_LABEL_JMP) {
> 
> Can we move this logic into text_poke() and get rid of text_poke_early()?

This will negatively affect the poking of modules during module loading,
e.g., apply_paravirt(). This can be resolved by keeping track of when the
module is write-protected and giving a module parameter to text_poke().
Is it worth the complexity?

> FWIW, alternative patching was, at some point, a significant fraction of
> total boot time in some cases. This was probably mostly due to unnecessary
> sync_core() calls. Although I think this was reported on a VM, and
> sync_core() used to be *extremely* expensive on a VM, but that’s fixed
> now, and it even got backported, I think.
> 
> (Hmm. Maybe we can also make jump label patching work in early boot, too!)

It may be possible to resolve the dependencies between poking_init() and
the other *_init() calls. I first considered doing that, yet it makes the
code very fragile, and I don’t see the value in getting rid of
text_poke_early() from a security or simplicity point of view. Let me
know if you think otherwise.

Regards,
Nadav

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v3 6/7] x86/alternatives: use temporary mm for text poking
  2018-11-05 13:30   ` Peter Zijlstra
@ 2018-11-05 18:04     ` Nadav Amit
  2018-11-06  8:20       ` Peter Zijlstra
  0 siblings, 1 reply; 30+ messages in thread
From: Nadav Amit @ 2018-11-05 18:04 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Ingo Molnar, LKML, X86 ML, H. Peter Anvin, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Andy Lutomirski, Kees Cook,
	Dave Hansen, Masami Hiramatsu

From: Peter Zijlstra
Sent: November 5, 2018 at 1:30:41 PM GMT
> To: Nadav Amit <namit@vmware.com>
> Cc: Ingo Molnar <mingo@redhat.com>, linux-kernel@vger.kernel.org, x86@kernel.org, H. Peter Anvin <hpa@zytor.com>, Thomas Gleixner <tglx@linutronix.de>, Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>, Andy Lutomirski <luto@kernel.org>, Kees Cook <keescook@chromium.org>, Dave Hansen <dave.hansen@intel.com>, Masami Hiramatsu <mhiramat@kernel.org>
> Subject: Re: [PATCH v3 6/7] x86/alternatives: use temporary mm for text poking
> 
> 
> On Fri, Nov 02, 2018 at 04:29:45PM -0700, Nadav Amit wrote:
>> +	unuse_temporary_mm(prev);
>> +
>> +	pte_unmap_unlock(ptep, ptl);
> 
> That; that does kunmap_atomic() on 32bit.
> 
> I've been thinking that the whole kmap_atomic thing on x86_32 is
> terminally broken, and with that most of x86_32 is.
> 
> kmap_atomic does the per-cpu fixmap pte fun-and-games we're here saying
> is broken. Yes, only the one CPU will (explicitly) use those fixmap PTEs
> and thus the local invalidate _should_ work. However nothing prohibits
> speculation on another CPU from using our fixmap addresses. Which can
> lead to the remote CPU populating its TLBs for our fixmap entry.
> 
> And, as we've found, there are AMD parts that #MC when there are
> mis-matched TLB entries.
> 
> So what do we do? mark x86_32 SMP broken?

pte_unmap() seems to only use kunmap_atomic() when CONFIG_HIGHPTE is set, no?

Do most distributions run with CONFIG_HIGHPTE?


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v3 1/7] Fix "x86/alternatives: Lockdep-enforce text_mutex in text_poke*()"
  2018-11-04 20:58   ` Thomas Gleixner
@ 2018-11-05 18:14     ` Nadav Amit
  0 siblings, 0 replies; 30+ messages in thread
From: Nadav Amit @ 2018-11-05 18:14 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Ingo Molnar, LKML, X86 ML, H. Peter Anvin, Borislav Petkov,
	Dave Hansen, Jiri Kosina, Andy Lutomirski, Kees Cook,
	Dave Hansen

From: Thomas Gleixner
Sent: November 4, 2018 at 8:58:20 PM GMT
> To: Nadav Amit <namit@vmware.com>
> Cc: Ingo Molnar <mingo@redhat.com>, linux-kernel@vger.kernel.org, x86@kernel.org, H. Peter Anvin <hpa@zytor.com>, Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>, Jiri Kosina <jkosina@suse.cz>, Andy Lutomirski <luto@kernel.org>, Kees Cook <keescook@chromium.org>, Dave Hansen <dave.hansen@intel.com>
> Subject: Re: [PATCH v3 1/7] Fix "x86/alternatives: Lockdep-enforce text_mutex in text_poke*()"
> 
> 
> On Fri, 2 Nov 2018, Nadav Amit wrote:
> 
>> text_mutex is expected to be held before text_poke() is called, but we
>> cannot add a lockdep assertion since kgdb does not take it, and instead
>> *supposedly* ensures the lock is not taken and will not be acquired by
>> any other core while text_poke() is running.
>> 
>> The reason for the "supposedly" comment is that it is not entirely clear
>> that this would be the case if gdb_do_roundup is zero.
>> 
>> Add a comment to clarify this behavior, and restore the assertions as
>> they were before the recent commit.
> 
> It restores nothing. It just removes the assertion.

Sorry - wrong commit log. There were no other assertions before. 

> 
>> This partially reverts commit 9222f606506c ("x86/alternatives:
>> Lockdep-enforce text_mutex in text_poke*()")
> 
> That opens up the same can of worms again, which took us a while to close.

I’m surprised. This patch only removes one assertion that was added two
months ago.

> Can we please instead split out the text_poke() code into a helper function
> and have two callers:
> 
>    text_poke() which contains the assert
> 
>    text_poke_kgdb() which does not

Sure. I will send another version once I figure out how to deal with the
other concerns that Peter and Andy raised.

Regards,
Nadav


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v3 2/7] x86/jump_label: Use text_poke_early() during early_init
  2018-11-05 17:49       ` Nadav Amit
@ 2018-11-05 19:03         ` Andy Lutomirski
  2018-11-05 19:25           ` Nadav Amit
  0 siblings, 1 reply; 30+ messages in thread
From: Andy Lutomirski @ 2018-11-05 19:03 UTC (permalink / raw)
  To: Nadav Amit
  Cc: Peter Zijlstra, Ingo Molnar, LKML, X86 ML, H. Peter Anvin,
	Thomas Gleixner, Borislav Petkov, Dave Hansen, Andy Lutomirski,
	Kees Cook, Dave Hansen, Masami Hiramatsu



> On Nov 5, 2018, at 9:49 AM, Nadav Amit <namit@vmware.com> wrote:
> 
> From: Andy Lutomirski
> Sent: November 5, 2018 at 5:22:32 PM GMT
>> To: Peter Zijlstra <peterz@infradead.org>
>> Cc: Nadav Amit <namit@vmware.com>, Ingo Molnar <mingo@redhat.com>, linux-kernel@vger.kernel.org, x86@kernel.org, H. Peter Anvin <hpa@zytor.com>, Thomas Gleixner <tglx@linutronix.de>, Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>, Andy Lutomirski <luto@kernel.org>, Kees Cook <keescook@chromium.org>, Dave Hansen <dave.hansen@intel.com>, Masami Hiramatsu <mhiramat@kernel.org>
>> Subject: Re: [PATCH v3 2/7] x86/jump_label: Use text_poke_early() during early_init
>> 
>> 
>> 
>>>> On Nov 5, 2018, at 6:09 AM, Peter Zijlstra <peterz@infradead.org> wrote:
>>>> 
>>>> On Fri, Nov 02, 2018 at 04:29:41PM -0700, Nadav Amit wrote:
>>>> diff --git a/arch/x86/kernel/jump_label.c b/arch/x86/kernel/jump_label.c
>>>> index aac0c1f7e354..367c1d0c20a3 100644
>>>> --- a/arch/x86/kernel/jump_label.c
>>>> +++ b/arch/x86/kernel/jump_label.c
>>>> @@ -52,7 +52,13 @@ static void __ref __jump_label_transform(struct jump_entry *entry,
>>>>  jmp.offset = jump_entry_target(entry) -
>>>>           (jump_entry_code(entry) + JUMP_LABEL_NOP_SIZE);
>>>> 
>>>> -    if (early_boot_irqs_disabled)
>>>> +    /*
>>>> +     * As long as we are in early boot, we can use text_poke_early(), which
>>>> +     * is more efficient: the memory was still not marked as read-only (it
>>>> +     * is only marked after poking_init()). This also prevents us from using
>>>> +     * text_poke() before poking_init() is called.
>>>> +     */
>>>> +    if (!early_boot_done)
>>>>      poker = text_poke_early;
>>>> 
>>>>  if (type == JUMP_LABEL_JMP) {
>>> 
>>> It took me a while to untangle init/maze^H^Hin.c... but I think this
>>> is all we need:
>>> 
>>> diff --git a/arch/x86/kernel/jump_label.c b/arch/x86/kernel/jump_label.c
>>> index aac0c1f7e354..ed5fe274a7d8 100644
>>> --- a/arch/x86/kernel/jump_label.c
>>> +++ b/arch/x86/kernel/jump_label.c
>>> @@ -52,7 +52,12 @@ static void __ref __jump_label_transform(struct jump_entry *entry,
>>>  jmp.offset = jump_entry_target(entry) -
>>>           (jump_entry_code(entry) + JUMP_LABEL_NOP_SIZE);
>>> 
>>> -    if (early_boot_irqs_disabled)
>>> +    /*
>>> +     * As long as we're UP and not yet marked RO, we can use
>>> +     * text_poke_early; SYSTEM_BOOTING guarantees both, as we switch to
>>> +     * SYSTEM_SCHEDULING before going either.
>>> +     */
>>> +    if (system_state == SYSTEM_BOOTING)
>>>      poker = text_poke_early;
>>> 
>>>  if (type == JUMP_LABEL_JMP) {
>> 
>> Can we move this logic into text_poke() and get rid of text_poke_early()?
> 
> This will negatively affect poking of modules doing module loading, e.g.,
> apply_paravirt(). This can be resolved by keeping track when the module is
> write-protected and giving a module parameter to text_poke(). Does it worth
> the complexity?

Probably not.

OTOH, why does alternative patching need text_poke() at all?  Can’t it just write to the text?

> 
>> FWIW, alternative patching was, at some point, a significant fraction of
>> total boot time in some cases. This was probably mostly due to unnecessary
>> sync_core() calls. Although I think this was reported on a VM, and
>> sync_core() used to be *extremely* expensive on a VM, but that’s fixed
>> now, and it even got backported, I think.
>> 
>> (Hmm. Maybe we can also make jump label patching work in early boot, too!)
> 
> It may be possible to resolve the dependencies between poking_init() and the
> other *_init(). I first considered doing that, yet, it makes the code very
> fragile, and I don’t see the value in getting rid of text_poke_early() from
> security or simplicity point of views. Let me know if you think otherwise.
> 
> Regards,
> Nadav

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v3 2/7] x86/jump_label: Use text_poke_early() during early_init
  2018-11-05 19:03         ` Andy Lutomirski
@ 2018-11-05 19:25           ` Nadav Amit
  2018-11-05 20:05             ` Andy Lutomirski
  0 siblings, 1 reply; 30+ messages in thread
From: Nadav Amit @ 2018-11-05 19:25 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: Peter Zijlstra, Ingo Molnar, LKML, X86 ML, H. Peter Anvin,
	Thomas Gleixner, Borislav Petkov, Dave Hansen, Andy Lutomirski,
	Kees Cook, Dave Hansen, Masami Hiramatsu

From: Andy Lutomirski
Sent: November 5, 2018 at 7:03:49 PM GMT
> To: Nadav Amit <namit@vmware.com>
> Cc: Peter Zijlstra <peterz@infradead.org>, Ingo Molnar <mingo@redhat.com>, LKML <linux-kernel@vger.kernel.org>, X86 ML <x86@kernel.org>, H. Peter Anvin <hpa@zytor.com>, Thomas Gleixner <tglx@linutronix.de>, Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>, Andy Lutomirski <luto@kernel.org>, Kees Cook <keescook@chromium.org>, Dave Hansen <dave.hansen@intel.com>, Masami Hiramatsu <mhiramat@kernel.org>
> Subject: Re: [PATCH v3 2/7] x86/jump_label: Use text_poke_early() during early_init
> 
> 
> 
> 
>> On Nov 5, 2018, at 9:49 AM, Nadav Amit <namit@vmware.com> wrote:
>> 
>> From: Andy Lutomirski
>> Sent: November 5, 2018 at 5:22:32 PM GMT
>>> To: Peter Zijlstra <peterz@infradead.org>
>>> Cc: Nadav Amit <namit@vmware.com>, Ingo Molnar <mingo@redhat.com>, linux-kernel@vger.kernel.org, x86@kernel.org, H. Peter Anvin <hpa@zytor.com>, Thomas Gleixner <tglx@linutronix.de>, Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>, Andy Lutomirski <luto@kernel.org>, Kees Cook <keescook@chromium.org>, Dave Hansen <dave.hansen@intel.com>, Masami Hiramatsu <mhiramat@kernel.org>
>>> Subject: Re: [PATCH v3 2/7] x86/jump_label: Use text_poke_early() during early_init
>>> 
>>> 
>>> 
>>>>> On Nov 5, 2018, at 6:09 AM, Peter Zijlstra <peterz@infradead.org> wrote:
>>>>> 
>>>>> On Fri, Nov 02, 2018 at 04:29:41PM -0700, Nadav Amit wrote:
>>>>> diff --git a/arch/x86/kernel/jump_label.c b/arch/x86/kernel/jump_label.c
>>>>> index aac0c1f7e354..367c1d0c20a3 100644
>>>>> --- a/arch/x86/kernel/jump_label.c
>>>>> +++ b/arch/x86/kernel/jump_label.c
>>>>> @@ -52,7 +52,13 @@ static void __ref __jump_label_transform(struct jump_entry *entry,
>>>>> jmp.offset = jump_entry_target(entry) -
>>>>>          (jump_entry_code(entry) + JUMP_LABEL_NOP_SIZE);
>>>>> 
>>>>> -    if (early_boot_irqs_disabled)
>>>>> +    /*
>>>>> +     * As long as we are in early boot, we can use text_poke_early(), which
>>>>> +     * is more efficient: the memory was still not marked as read-only (it
>>>>> +     * is only marked after poking_init()). This also prevents us from using
>>>>> +     * text_poke() before poking_init() is called.
>>>>> +     */
>>>>> +    if (!early_boot_done)
>>>>>     poker = text_poke_early;
>>>>> 
>>>>> if (type == JUMP_LABEL_JMP) {
>>>> 
>>>> It took me a while to untangle init/maze^H^Hin.c... but I think this
>>>> is all we need:
>>>> 
>>>> diff --git a/arch/x86/kernel/jump_label.c b/arch/x86/kernel/jump_label.c
>>>> index aac0c1f7e354..ed5fe274a7d8 100644
>>>> --- a/arch/x86/kernel/jump_label.c
>>>> +++ b/arch/x86/kernel/jump_label.c
>>>> @@ -52,7 +52,12 @@ static void __ref __jump_label_transform(struct jump_entry *entry,
>>>> jmp.offset = jump_entry_target(entry) -
>>>>          (jump_entry_code(entry) + JUMP_LABEL_NOP_SIZE);
>>>> 
>>>> -    if (early_boot_irqs_disabled)
>>>> +    /*
>>>> +     * As long as we're UP and not yet marked RO, we can use
>>>> +     * text_poke_early; SYSTEM_BOOTING guarantees both, as we switch to
>>>> +     * SYSTEM_SCHEDULING before going either.
>>>> +     */
>>>> +    if (system_state == SYSTEM_BOOTING)
>>>>     poker = text_poke_early;
>>>> 
>>>> if (type == JUMP_LABEL_JMP) {
>>> 
>>> Can we move this logic into text_poke() and get rid of text_poke_early()?
>> 
>> This will negatively affect poking of modules doing module loading, e.g.,
>> apply_paravirt(). This can be resolved by keeping track when the module is
>> write-protected and giving a module parameter to text_poke(). Does it worth
>> the complexity?
> 
> Probably not.
> 
> OTOH, why does alternative patching need text_poke() at all? Can’t it just
> write to the text?

Good question. According to my understanding, these games of
text_poke_early() are not needed, at least for modules (on Intel).

Intel SDM 11.6 "SELF-MODIFYING CODE” says: 

"A write to a memory location in a code segment that is currently cached in
the processor causes the associated cache line (or lines) to be invalidated.
This check is based on the physical address of the instruction.”

Then the manual talks about prefetched instructions, but the module’s code is
presumably not “prefetchable” at this point. So I think it should be safe,
but I guess you reviewed the Intel/AMD manuals more closely when you wrote
sync_core().

Anyhow, there should be a function that wraps the memcpy() to keep track of
when someone changes the text (for potential future use).
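
Something minimal along these lines, say (the name and the tracking hook are
made up for illustration; this is not an existing kernel function):

/*
 * Illustrative sketch only: write to text that is known to still be RW
 * (early boot, or a module that has not been mapped RX yet), while keeping
 * a single place to hook text-modification tracking later.
 */
static void *text_poke_rw(void *addr, const void *opcode, size_t len)
{
	/* Future: record that [addr, addr + len) was modified. */
	memcpy(addr, opcode, len);
	return addr;
}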

Does it make sense? Do you want me to give it a spin?

Thanks,
Nadav

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v3 2/7] x86/jump_label: Use text_poke_early() during early_init
  2018-11-05 19:25           ` Nadav Amit
@ 2018-11-05 20:05             ` Andy Lutomirski
  2018-11-05 20:28               ` Thomas Gleixner
  0 siblings, 1 reply; 30+ messages in thread
From: Andy Lutomirski @ 2018-11-05 20:05 UTC (permalink / raw)
  To: Nadav Amit, Linus Torvalds, H. Peter Anvin
  Cc: Peter Zijlstra, Ingo Molnar, LKML, X86 ML, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Andrew Lutomirski, Kees Cook,
	Dave Hansen, Masami Hiramatsu

On Mon, Nov 5, 2018 at 11:25 AM Nadav Amit <namit@vmware.com> wrote:
>
> From: Andy Lutomirski
> Sent: November 5, 2018 at 7:03:49 PM GMT
> > To: Nadav Amit <namit@vmware.com>
> > Cc: Peter Zijlstra <peterz@infradead.org>, Ingo Molnar <mingo@redhat.com>, LKML <linux-kernel@vger.kernel.org>, X86 ML <x86@kernel.org>, H. Peter Anvin <hpa@zytor.com>, Thomas Gleixner <tglx@linutronix.de>, Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>, Andy Lutomirski <luto@kernel.org>, Kees Cook <keescook@chromium.org>, Dave Hansen <dave.hansen@intel.com>, Masami Hiramatsu <mhiramat@kernel.org>
> > Subject: Re: [PATCH v3 2/7] x86/jump_label: Use text_poke_early() during early_init
> >
> >
> >
> >
> >> On Nov 5, 2018, at 9:49 AM, Nadav Amit <namit@vmware.com> wrote:
> >>
> >> From: Andy Lutomirski
> >> Sent: November 5, 2018 at 5:22:32 PM GMT
> >>> To: Peter Zijlstra <peterz@infradead.org>
> >>> Cc: Nadav Amit <namit@vmware.com>, Ingo Molnar <mingo@redhat.com>, linux-kernel@vger.kernel.org, x86@kernel.org, H. Peter Anvin <hpa@zytor.com>, Thomas Gleixner <tglx@linutronix.de>, Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>, Andy Lutomirski <luto@kernel.org>, Kees Cook <keescook@chromium.org>, Dave Hansen <dave.hansen@intel.com>, Masami Hiramatsu <mhiramat@kernel.org>
> >>> Subject: Re: [PATCH v3 2/7] x86/jump_label: Use text_poke_early() during early_init
> >>>
> >>>
> >>>
> >>>>> On Nov 5, 2018, at 6:09 AM, Peter Zijlstra <peterz@infradead.org> wrote:
> >>>>>
> >>>>> On Fri, Nov 02, 2018 at 04:29:41PM -0700, Nadav Amit wrote:
> >>>>> diff --git a/arch/x86/kernel/jump_label.c b/arch/x86/kernel/jump_label.c
> >>>>> index aac0c1f7e354..367c1d0c20a3 100644
> >>>>> --- a/arch/x86/kernel/jump_label.c
> >>>>> +++ b/arch/x86/kernel/jump_label.c
> >>>>> @@ -52,7 +52,13 @@ static void __ref __jump_label_transform(struct jump_entry *entry,
> >>>>> jmp.offset = jump_entry_target(entry) -
> >>>>>          (jump_entry_code(entry) + JUMP_LABEL_NOP_SIZE);
> >>>>>
> >>>>> -    if (early_boot_irqs_disabled)
> >>>>> +    /*
> >>>>> +     * As long as we are in early boot, we can use text_poke_early(), which
> >>>>> +     * is more efficient: the memory was still not marked as read-only (it
> >>>>> +     * is only marked after poking_init()). This also prevents us from using
> >>>>> +     * text_poke() before poking_init() is called.
> >>>>> +     */
> >>>>> +    if (!early_boot_done)
> >>>>>     poker = text_poke_early;
> >>>>>
> >>>>> if (type == JUMP_LABEL_JMP) {
> >>>>
> >>>> It took me a while to untangle init/maze^H^Hin.c... but I think this
> >>>> is all we need:
> >>>>
> >>>> diff --git a/arch/x86/kernel/jump_label.c b/arch/x86/kernel/jump_label.c
> >>>> index aac0c1f7e354..ed5fe274a7d8 100644
> >>>> --- a/arch/x86/kernel/jump_label.c
> >>>> +++ b/arch/x86/kernel/jump_label.c
> >>>> @@ -52,7 +52,12 @@ static void __ref __jump_label_transform(struct jump_entry *entry,
> >>>> jmp.offset = jump_entry_target(entry) -
> >>>>          (jump_entry_code(entry) + JUMP_LABEL_NOP_SIZE);
> >>>>
> >>>> -    if (early_boot_irqs_disabled)
> >>>> +    /*
> >>>> +     * As long as we're UP and not yet marked RO, we can use
> >>>> +     * text_poke_early; SYSTEM_BOOTING guarantees both, as we switch to
> >>>> +     * SYSTEM_SCHEDULING before going either.
> >>>> +     */
> >>>> +    if (system_state == SYSTEM_BOOTING)
> >>>>     poker = text_poke_early;
> >>>>
> >>>> if (type == JUMP_LABEL_JMP) {
> >>>
> >>> Can we move this logic into text_poke() and get rid of text_poke_early()?
> >>
> >> This will negatively affect poking of modules doing module loading, e.g.,
> >> apply_paravirt(). This can be resolved by keeping track when the module is
> >> write-protected and giving a module parameter to text_poke(). Does it worth
> >> the complexity?
> >
> > Probably not.
> >
> > OTOH, why does alternative patching need text_poke() at all? Can’t it just
> > write to the text?
>
> Good question. According to my understanding, these games of
> text_poke_early() are not needed, at least for modules (on Intel).
>
> Intel SDM 11.6 "SELF-MODIFYING CODE” says:
>
> "A write to a memory location in a code segment that is currently cached in
> the processor causes the associated cache line (or lines) to be invalidated.
> This check is based on the physical address of the instruction.”
>
> Then the manual talks about prefetched instructions, but the modules code is
> presumably not be “prefetchable” at this point. So I think it should be
> safe, but I guess that you reviewed Intel/AMD manuals better when you wrote
> sync_core().

Beats the heck out of me.

Linus, hpa, or Dave, a question for you: suppose I map some page
writably, write to it, then upgrade permissions to allow execute.
Must I force all CPUs that might execute from it to serialize before they
do so?  I suspect this doesn't really affect user code, but it may affect
the module loader.

To be safe, shouldn't the module loader broadcast an IPI to
sync_core() everywhere after loading a module and before making it
runnable, regardless of alternative patching?

IOW, the right sequence of events probably ought to be:

1. Allocate the memory and map it.
2. Copy in the text.
3. Patch alternatives, etc.  This is logically just like (2) from an
architectural perspective -- we're just writing to memory that won't
be executed.
4. Serialize everything.
5. Run it!

>
> Anyhow, there should be a function that wraps the memcpy() to keep track
> when someone changes the text (for potential future use).
>
> Does it make sense? Do you want me to give it a spin?

Sure, I guess.  Linus, what do you think?

>
> Thanks,
> Nadav



-- 
Andy Lutomirski
AMA Capital Management, LLC

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v3 2/7] x86/jump_label: Use text_poke_early() during early_init
  2018-11-05 20:05             ` Andy Lutomirski
@ 2018-11-05 20:28               ` Thomas Gleixner
  2018-11-05 21:31                 ` Nadav Amit
  0 siblings, 1 reply; 30+ messages in thread
From: Thomas Gleixner @ 2018-11-05 20:28 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: Nadav Amit, Linus Torvalds, H. Peter Anvin, Peter Zijlstra,
	Ingo Molnar, LKML, X86 ML, Borislav Petkov, Dave Hansen,
	Andrew Lutomirski, Kees Cook, Dave Hansen, Masami Hiramatsu

On Mon, 5 Nov 2018, Andy Lutomirski wrote:
> On Mon, Nov 5, 2018 at 11:25 AM Nadav Amit <namit@vmware.com> wrote:
> Linus, hpa, or Dave, a question for you: suppose I map some page
> writably, write to it, then upgrade permissions to allow execute.
> Must I force all CPUs that might execute from it without first
> serializing to serialize?  I suspect this doesn't really affect user
> code, but it may affect the module loader.
> 
> To be safe, shouldn't the module loader broadcast an IPI to
> sync_core() everywhere after loading a module and before making it
> runnable, regardless of alternative patching?
> 
> IOW, the right sequence of events probably ought to me:
> 
> 1. Allocate the memory and map it.
> 2. Copy in the text.
> 3. Patch alternatives, etc.  This is logically just like (2) from an
> architectural perspective -- we're just writing to memory that won't
> be executed.
> 4. Serialize everything.
> 5. Run it!

I'd make that:

1. Allocate the memory and map it RW
2. Copy in the text.
3. Patch alternatives, etc.  This is logically just like (2) from an
   architectural perspective -- we're just writing to memory that won't
   be executed.
4. Map it RX
5. Serialize everything.
6. Run it!
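
In rough (and purely illustrative) code, that flow would look something like
the sketch below. apply_all_patches() is a made-up stand-in for the
alternatives/paravirt/jump-label patching, and the other helpers are used
loosely; this is not the actual module loader:

static void sync_core_ipi(void *info)
{
	sync_core();
}

static int load_text_sketch(const void *image, size_t size, int npages)
{
	void *text = module_alloc(size);		/* 1. allocate, mapped RW  */

	if (!text)
		return -ENOMEM;

	memcpy(text, image, size);			/* 2. copy in the text     */
	apply_all_patches(text, size);			/* 3. patch while still RW */

	set_memory_ro((unsigned long)text, npages);	/* 4. map it RX            */
	set_memory_x((unsigned long)text, npages);

	on_each_cpu(sync_core_ipi, NULL, 1);		/* 5. serialize everything */

	return 0;					/* 6. run it!              */
}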

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v3 2/7] x86/jump_label: Use text_poke_early() during early_init
  2018-11-05 20:28               ` Thomas Gleixner
@ 2018-11-05 21:31                 ` Nadav Amit
  0 siblings, 0 replies; 30+ messages in thread
From: Nadav Amit @ 2018-11-05 21:31 UTC (permalink / raw)
  To: Thomas Gleixner, Andy Lutomirski
  Cc: Linus Torvalds, H. Peter Anvin, Peter Zijlstra, Ingo Molnar,
	LKML, X86 ML, Borislav Petkov, Dave Hansen, Andrew Lutomirski,
	Kees Cook, Dave Hansen, Masami Hiramatsu

From: Thomas Gleixner
Sent: November 5, 2018 at 8:28:29 PM GMT
> To: Andy Lutomirski <luto@amacapital.net>
> Cc: Nadav Amit <namit@vmware.com>, Linus Torvalds <torvalds@linux-foundation.org>, H. Peter Anvin <hpa@zytor.com>, Peter Zijlstra <peterz@infradead.org>, Ingo Molnar <mingo@redhat.com>, LKML <linux-kernel@vger.kernel.org>, X86 ML <x86@kernel.org>, Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>, Andrew Lutomirski <luto@kernel.org>, Kees Cook <keescook@chromium.org>, Dave Hansen <dave.hansen@intel.com>, Masami Hiramatsu <mhiramat@kernel.org>
> Subject: Re: [PATCH v3 2/7] x86/jump_label: Use text_poke_early() during early_init
> 
> 
> On Mon, 5 Nov 2018, Andy Lutomirski wrote:
>> On Mon, Nov 5, 2018 at 11:25 AM Nadav Amit <namit@vmware.com> wrote:
>> Linus, hpa, or Dave, a question for you: suppose I map some page
>> writably, write to it, then upgrade permissions to allow execute.
>> Must I force all CPUs that might execute from it without first
>> serializing to serialize?  I suspect this doesn't really affect user
>> code, but it may affect the module loader.
>> 
>> To be safe, shouldn't the module loader broadcast an IPI to
>> sync_core() everywhere after loading a module and before making it
>> runnable, regardless of alternative patching?
>> 
>> IOW, the right sequence of events probably ought to me:
>> 
>> 1. Allocate the memory and map it.
>> 2. Copy in the text.
>> 3. Patch alternatives, etc.  This is logically just like (2) from an
>> architectural perspective -- we're just writing to memory that won't
>> be executed.
>> 4. Serialize everything.
>> 5. Run it!
> 
> I'd make that:
> 
> 1. Allocate the memory and map it RW
> 2. Copy in the text.
> 3. Patch alternatives, etc.  This is logically just like (2) from an
>   architectural perspective -- we're just writing to memory that won't
>   be executed.
> 4. Map it RX
> 5. Serialize everything.
> 6. Run it!

Thanks. I will do something along these lines. This can improve module
loading time (saving IRQ save/restore time), but it will not make things
much prettier, since two code-paths for “early init kernel” and “early init
module” would be needed.


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v3 6/7] x86/alternatives: use temporary mm for text poking
  2018-11-05 18:04     ` Nadav Amit
@ 2018-11-06  8:20       ` Peter Zijlstra
  2018-11-06 13:11         ` Peter Zijlstra
  0 siblings, 1 reply; 30+ messages in thread
From: Peter Zijlstra @ 2018-11-06  8:20 UTC (permalink / raw)
  To: Nadav Amit
  Cc: Ingo Molnar, LKML, X86 ML, H. Peter Anvin, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Andy Lutomirski, Kees Cook,
	Dave Hansen, Masami Hiramatsu

On Mon, Nov 05, 2018 at 06:04:42PM +0000, Nadav Amit wrote:
> From: Peter Zijlstra
> Sent: November 5, 2018 at 1:30:41 PM GMT
> > To: Nadav Amit <namit@vmware.com>
> > Cc: Ingo Molnar <mingo@redhat.com>, linux-kernel@vger.kernel.org, x86@kernel.org, H. Peter Anvin <hpa@zytor.com>, Thomas Gleixner <tglx@linutronix.de>, Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>, Andy Lutomirski <luto@kernel.org>, Kees Cook <keescook@chromium.org>, Dave Hansen <dave.hansen@intel.com>, Masami Hiramatsu <mhiramat@kernel.org>
> > Subject: Re: [PATCH v3 6/7] x86/alternatives: use temporary mm for text poking
> > 
> > 
> > On Fri, Nov 02, 2018 at 04:29:45PM -0700, Nadav Amit wrote:
> >> +	unuse_temporary_mm(prev);
> >> +
> >> +	pte_unmap_unlock(ptep, ptl);
> > 
> > That; that does kunmap_atomic() on 32bit.
> > 
> > I've been thinking that the whole kmap_atomic thing on x86_32 is
> > terminally broken, and with that most of x86_32 is.
> > 
> > kmap_atomic does the per-cpu fixmap pte fun-and-games we're here saying
> > is broken. Yes, only the one CPU will (explicitly) use those fixmap PTEs
> > and thus the local invalidate _should_ work. However nothing prohibits
> > speculation on another CPU from using our fixmap addresses. Which can
> > lead to the remote CPU populating its TLBs for our fixmap entry.
> > 
> > And, as we've found, there are AMD parts that #MC when there are
> > mis-matched TLB entries.
> > 
> > So what do we do? mark x86_32 SMP broken?
> 
> pte_unmap() seems to only use kunmap_atomic() when CONFIG_HIGHPTE is set, no?
> 
> Do most distributions run with CONFIG_HIGHPTE?

Sure; but all of x86_32 relies on kmap_atomic. This was just the one way I
ran into it again.

By our current way of thinking, kmap_atomic simply is not correct.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v3 6/7] x86/alternatives: use temporary mm for text poking
  2018-11-06  8:20       ` Peter Zijlstra
@ 2018-11-06 13:11         ` Peter Zijlstra
  2018-11-06 18:11           ` Nadav Amit
  0 siblings, 1 reply; 30+ messages in thread
From: Peter Zijlstra @ 2018-11-06 13:11 UTC (permalink / raw)
  To: Nadav Amit
  Cc: Ingo Molnar, LKML, X86 ML, H. Peter Anvin, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Andy Lutomirski, Kees Cook,
	Dave Hansen, Masami Hiramatsu

On Tue, Nov 06, 2018 at 09:20:19AM +0100, Peter Zijlstra wrote:

> By our current way of thinking, kmap_atomic simply is not correct.

Something like the below, which weirdly builds an x86_32 kernel, although I
imagine a very sad one.

---

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index ba7e3464ee92..e273f3879d04 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1449,6 +1449,16 @@ config PAGE_OFFSET
 config HIGHMEM
 	def_bool y
 	depends on X86_32 && (HIGHMEM64G || HIGHMEM4G)
+	depends on !SMP || BROKEN
+	help
+	  By current thinking kmap_atomic() is broken, since it relies on per
+	  CPU PTEs in the global (kernel) address space and relies on CPU local
+	  TLB invalidates to completely invalidate these PTEs. However there is
+	  nothing that guarantees other CPUs will not speculatively touch upon
+	  'our' fixmap PTEs and load them into their TLBs, after which our
+	  local TLB invalidate will not invalidate them.
+
+	  There are AMD chips that will #MC on inconsistent TLB states.
 
 config X86_PAE
 	bool "PAE (Physical Address Extension) Support"

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* Re: [PATCH v3 6/7] x86/alternatives: use temporary mm for text poking
  2018-11-06 13:11         ` Peter Zijlstra
@ 2018-11-06 18:11           ` Nadav Amit
  2018-11-06 19:08             ` Peter Zijlstra
  0 siblings, 1 reply; 30+ messages in thread
From: Nadav Amit @ 2018-11-06 18:11 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Ingo Molnar, LKML, X86 ML, H. Peter Anvin, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Andy Lutomirski, Kees Cook,
	Dave Hansen, Masami Hiramatsu

From: Peter Zijlstra
Sent: November 6, 2018 at 1:11:19 PM GMT
> To: Nadav Amit <namit@vmware.com>
> Cc: Ingo Molnar <mingo@redhat.com>, LKML <linux-kernel@vger.kernel.org>, X86 ML <x86@kernel.org>, H. Peter Anvin <hpa@zytor.com>, Thomas Gleixner <tglx@linutronix.de>, Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>, Andy Lutomirski <luto@kernel.org>, Kees Cook <keescook@chromium.org>, Dave Hansen <dave.hansen@intel.com>, Masami Hiramatsu <mhiramat@kernel.org>
> Subject: Re: [PATCH v3 6/7] x86/alternatives: use temporary mm for text poking
> 
> 
> On Tue, Nov 06, 2018 at 09:20:19AM +0100, Peter Zijlstra wrote:
> 
>> By our current way of thinking, kmap_atomic simply is not correct.
> 
> Something like the below; which weirdly builds an x86_32 kernel.
> Although I imagine a very sad one.
> 
> ---
> 
> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> index ba7e3464ee92..e273f3879d04 100644
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -1449,6 +1449,16 @@ config PAGE_OFFSET
> config HIGHMEM
> 	def_bool y
> 	depends on X86_32 && (HIGHMEM64G || HIGHMEM4G)
> +	depends on !SMP || BROKEN
> +	help
> +	  By current thinking kmap_atomic() is broken, since it relies on per
> +	  CPU PTEs in the global (kernel) address space and relies on CPU local
> +	  TLB invalidates to completely invalidate these PTEs. However there is
> +	  nothing that guarantees other CPUs will not speculatively touch upon
> +	  'our' fixmap PTEs and load then into their TLBs, after which our
> +	  local TLB invalidate will not invalidate them.
> +
> +	  There are AMD chips that will #MC on inconsistent TLB states.
> 
> config X86_PAE
> 	bool "PAE (Physical Address Extension) Support”

Please help me understand the scenario you are worried about. I see several
(potentially) concerning situations due to long lived mappings:

1. Inconsistent cachability in the PAT (between two different mappings of
the same physical memory), causing memory ordering issues.

2. Inconsistent access-control (between two different mappings of the same
physical memory), allowing to circumvent security hardening mechanisms.

3. Invalid cachability in the PAT for MMIO, causing #MC

4. Faulty memory being mapped, causing #MC

5. Some potential data leakage due to long lived mappings

The #MC you mention, I think, refers to something that resembles (3):
speculative page-walks using cachable memory caused an #MC when this memory
was set on an MMIO region. This memory, IIUC, was mistakenly presumed to be
used by page-tables, so I don’t see how it is relevant to kmap_atomic().

As for the other situations, excluding (2), which this series is intended to
deal with, I don’t see a huge problem that cannot be resolved by other
means.


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v3 6/7] x86/alternatives: use temporary mm for text poking
  2018-11-06 18:11           ` Nadav Amit
@ 2018-11-06 19:08             ` Peter Zijlstra
  0 siblings, 0 replies; 30+ messages in thread
From: Peter Zijlstra @ 2018-11-06 19:08 UTC (permalink / raw)
  To: Nadav Amit
  Cc: Ingo Molnar, LKML, X86 ML, H. Peter Anvin, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Andy Lutomirski, Kees Cook,
	Dave Hansen, Masami Hiramatsu

On Tue, Nov 06, 2018 at 06:11:18PM +0000, Nadav Amit wrote:
> From: Peter Zijlstra
> > On Tue, Nov 06, 2018 at 09:20:19AM +0100, Peter Zijlstra wrote:
> > 
> >> By our current way of thinking, kmap_atomic simply is not correct.
> > 
> > Something like the below; which weirdly builds an x86_32 kernel.
> > Although I imagine a very sad one.
> > 
> > ---
> > 
> > diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> > index ba7e3464ee92..e273f3879d04 100644
> > --- a/arch/x86/Kconfig
> > +++ b/arch/x86/Kconfig
> > @@ -1449,6 +1449,16 @@ config PAGE_OFFSET
> > config HIGHMEM
> > 	def_bool y
> > 	depends on X86_32 && (HIGHMEM64G || HIGHMEM4G)
> > +	depends on !SMP || BROKEN
> > +	help
> > +	  By current thinking kmap_atomic() is broken, since it relies on per
> > +	  CPU PTEs in the global (kernel) address space and relies on CPU local
> > +	  TLB invalidates to completely invalidate these PTEs. However there is
> > +	  nothing that guarantees other CPUs will not speculatively touch upon
> > +	  'our' fixmap PTEs and load then into their TLBs, after which our
> > +	  local TLB invalidate will not invalidate them.
> > +
> > +	  There are AMD chips that will #MC on inconsistent TLB states.
> > 
> > config X86_PAE
> > 	bool "PAE (Physical Address Extension) Support"
> 
> Please help me understand the scenario you are worried about. I see several
> (potentially) concerning situations due to long lived mappings:
> 
> 1. Inconsistent cachability in the PAT (between two different mappings of
> the same physical memory), causing memory ordering issues.
> 
> 2. Inconsistent access-control (between two different mappings of the same
> physical memory), allowing to circumvent security hardening mechanisms.
> 
> 3. Invalid cachability in the PAT for MMIO, causing #MC
> 
> 4. Faulty memory being mapped, causing #MC
> 
> 5. Some potential data leakage due to long lived mappings
> 
> The #MC you mention, I think, regards something that resembles (3) -
> speculative page-walks using cachable memory caused #MC when this memory was
> set on MMIO region. This memory, IIUC, was mistakenly presumed to be used by
> page-tables, so I don’t see how it is relevant for kmap_atomic().
> 
> As for the other situations, excluding (2), which this series is intended to
> deal with, I don’t see a huge problem which cannot be resolved in different
> means.

Mostly #3 and related, I think; kmap_atomic is a stack and any entry can
be used for whatever is needed. When the remote CPU takes a speculative
hit on our fixmap entry, that translation will get populated.

When we then unmap and flush (locally) and re-establish that mapping for
something else, the CPU might #MC because the translations are
incompatible.

Imagine one being some MMIO mapping for i915 and another being a regular
user address with incompatible cacheability or something.

Now the remote CPU will never actually use those translations except for
speculation. But I'm terribly uncomfortable with this.

It might all just work; but not doing global flushes for global mapping
changes makes me itch.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v3 2/7] x86/jump_label: Use text_poke_early() during early_init
  2018-11-05 14:09   ` Peter Zijlstra
  2018-11-05 17:22     ` Andy Lutomirski
@ 2018-11-07 19:13     ` Nadav Amit
  2018-11-08 10:41       ` Peter Zijlstra
  1 sibling, 1 reply; 30+ messages in thread
From: Nadav Amit @ 2018-11-07 19:13 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Ingo Molnar, LKML, X86 ML, H. Peter Anvin, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Andy Lutomirski, Kees Cook,
	Dave Hansen, Masami Hiramatsu

From: Peter Zijlstra
Sent: November 5, 2018 at 2:09:25 PM GMT
> To: Nadav Amit <namit@vmware.com>
> Cc: Ingo Molnar <mingo@redhat.com>, linux-kernel@vger.kernel.org, x86@kernel.org, H. Peter Anvin <hpa@zytor.com>, Thomas Gleixner <tglx@linutronix.de>, Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>, Andy Lutomirski <luto@kernel.org>, Kees Cook <keescook@chromium.org>, Dave Hansen <dave.hansen@intel.com>, Masami Hiramatsu <mhiramat@kernel.org>
> Subject: Re: [PATCH v3 2/7] x86/jump_label: Use text_poke_early() during early_init
> 
> 
> On Fri, Nov 02, 2018 at 04:29:41PM -0700, Nadav Amit wrote:
>> diff --git a/arch/x86/kernel/jump_label.c b/arch/x86/kernel/jump_label.c
>> index aac0c1f7e354..367c1d0c20a3 100644
>> --- a/arch/x86/kernel/jump_label.c
>> +++ b/arch/x86/kernel/jump_label.c
>> @@ -52,7 +52,13 @@ static void __ref __jump_label_transform(struct jump_entry *entry,
>> 	jmp.offset = jump_entry_target(entry) -
>> 		     (jump_entry_code(entry) + JUMP_LABEL_NOP_SIZE);
>> 
>> -	if (early_boot_irqs_disabled)
>> +	/*
>> +	 * As long as we are in early boot, we can use text_poke_early(), which
>> +	 * is more efficient: the memory was still not marked as read-only (it
>> +	 * is only marked after poking_init()). This also prevents us from using
>> +	 * text_poke() before poking_init() is called.
>> +	 */
>> +	if (!early_boot_done)
>> 		poker = text_poke_early;
>> 
>> 	if (type == JUMP_LABEL_JMP) {
> 
> It took me a while to untangle init/maze^H^Hin.c... but I think this
> is all we need:
> 
> diff --git a/arch/x86/kernel/jump_label.c b/arch/x86/kernel/jump_label.c
> index aac0c1f7e354..ed5fe274a7d8 100644
> --- a/arch/x86/kernel/jump_label.c
> +++ b/arch/x86/kernel/jump_label.c
> @@ -52,7 +52,12 @@ static void __ref __jump_label_transform(struct jump_entry *entry,
> 	jmp.offset = jump_entry_target(entry) -
> 		     (jump_entry_code(entry) + JUMP_LABEL_NOP_SIZE);
> 
> -	if (early_boot_irqs_disabled)
> +	/*
> +	 * As long as we're UP and not yet marked RO, we can use
> +	 * text_poke_early; SYSTEM_BOOTING guarantees both, as we switch to
> +	 * SYSTEM_SCHEDULING before going either.
> +	 */
> +	if (system_state == SYSTEM_BOOTING)
> 		poker = text_poke_early;
> 
> 	if (type == JUMP_LABEL_JMP) {

Thanks for this change, I will incorporate it.

I wanted to point out a small difference from my version. Although this
version ensures we are UP and the kernel is still RW, it allows preemption.
I presume that should not affect jump-labels, since the patching just
switches between a JMP and multi-byte NOPs.

Thanks,
Nadav

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v3 2/7] x86/jump_label: Use text_poke_early() during early_init
  2018-11-07 19:13     ` Nadav Amit
@ 2018-11-08 10:41       ` Peter Zijlstra
  0 siblings, 0 replies; 30+ messages in thread
From: Peter Zijlstra @ 2018-11-08 10:41 UTC (permalink / raw)
  To: Nadav Amit
  Cc: Ingo Molnar, LKML, X86 ML, H. Peter Anvin, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Andy Lutomirski, Kees Cook,
	Dave Hansen, Masami Hiramatsu

On Wed, Nov 07, 2018 at 07:13:03PM +0000, Nadav Amit wrote:
> > diff --git a/arch/x86/kernel/jump_label.c b/arch/x86/kernel/jump_label.c
> > index aac0c1f7e354..ed5fe274a7d8 100644
> > --- a/arch/x86/kernel/jump_label.c
> > +++ b/arch/x86/kernel/jump_label.c
> > @@ -52,7 +52,12 @@ static void __ref __jump_label_transform(struct jump_entry *entry,
> > 	jmp.offset = jump_entry_target(entry) -
> > 		     (jump_entry_code(entry) + JUMP_LABEL_NOP_SIZE);
> > 
> > -	if (early_boot_irqs_disabled)
> > +	/*
> > +	 * As long as we're UP and not yet marked RO, we can use
> > +	 * text_poke_early; SYSTEM_BOOTING guarantees both, as we switch to
> > +	 * SYSTEM_SCHEDULING before going either.
> > +	 */
> > +	if (system_state == SYSTEM_BOOTING)
> > 		poker = text_poke_early;
> > 
> > 	if (type == JUMP_LABEL_JMP) {
> 
> Thanks for this change, I will incorporate it.
> 
> I wanted to point a small difference from my version. Although this version
> ensures we are UP and the kernel is still RW, preemption is possible with
> this version. I presume that it should not affect jump-labels, since it
> switches between JMP and multi-byte NOPs.

Right, we're never running the code we're going to change on UP.

^ permalink raw reply	[flat|nested] 30+ messages in thread

end of thread, other threads:[~2018-11-08 11:06 UTC | newest]

Thread overview: 30+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-11-02 23:29 [PATCH v3 0/7] x86/alternatives: text_poke() fixes Nadav Amit
2018-11-02 23:29 ` [PATCH v3 1/7] Fix "x86/alternatives: Lockdep-enforce text_mutex in text_poke*()" Nadav Amit
2018-11-03 10:11   ` Jiri Kosina
2018-11-04 20:58   ` Thomas Gleixner
2018-11-05 18:14     ` Nadav Amit
2018-11-02 23:29 ` [PATCH v3 2/7] x86/jump_label: Use text_poke_early() during early_init Nadav Amit
2018-11-05 12:39   ` Peter Zijlstra
2018-11-05 13:33     ` Peter Zijlstra
2018-11-05 14:09   ` Peter Zijlstra
2018-11-05 17:22     ` Andy Lutomirski
2018-11-05 17:49       ` Nadav Amit
2018-11-05 19:03         ` Andy Lutomirski
2018-11-05 19:25           ` Nadav Amit
2018-11-05 20:05             ` Andy Lutomirski
2018-11-05 20:28               ` Thomas Gleixner
2018-11-05 21:31                 ` Nadav Amit
2018-11-07 19:13     ` Nadav Amit
2018-11-08 10:41       ` Peter Zijlstra
2018-11-02 23:29 ` [PATCH v3 3/7] x86/mm: temporary mm struct Nadav Amit
2018-11-02 23:29 ` [PATCH v3 4/7] fork: provide a function for copying init_mm Nadav Amit
2018-11-02 23:29 ` [PATCH v3 5/7] x86/alternatives: initializing temporary mm for patching Nadav Amit
2018-11-02 23:29 ` [PATCH v3 6/7] x86/alternatives: use temporary mm for text poking Nadav Amit
2018-11-05 13:19   ` Peter Zijlstra
2018-11-05 13:30   ` Peter Zijlstra
2018-11-05 18:04     ` Nadav Amit
2018-11-06  8:20       ` Peter Zijlstra
2018-11-06 13:11         ` Peter Zijlstra
2018-11-06 18:11           ` Nadav Amit
2018-11-06 19:08             ` Peter Zijlstra
2018-11-02 23:29 ` [PATCH v3 7/7] x86/alternatives: remove text_poke() return value Nadav Amit
