Linux-Integrity Archive on lore.kernel.org
* [PATCH 00/17] Merge text_poke fixes and executable lockdowns
@ 2019-01-17  0:32 Rick Edgecombe
  2019-01-17  0:32 ` [PATCH 01/17] Fix "x86/alternatives: Lockdep-enforce text_mutex in text_poke*()" Rick Edgecombe
                   ` (17 more replies)
  0 siblings, 18 replies; 51+ messages in thread
From: Rick Edgecombe @ 2019-01-17  0:32 UTC (permalink / raw)
  To: Andy Lutomirski, Ingo Molnar
  Cc: linux-kernel, x86, hpa, Thomas Gleixner, Borislav Petkov,
	Nadav Amit, Dave Hansen, Peter Zijlstra, linux_dti,
	linux-integrity, linux-security-module, akpm, kernel-hardening,
	linux-mm, will.deacon, ard.biesheuvel, kristen, deneen.t.dock,
	Rick Edgecombe

This patchset addresses several overlapping issues around stale TLB
entries and W^X violations. It combines a slightly tweaked
"x86/alternative: text_poke() enhancements v7" [1] with the next version
of the "Don’t leave executable TLB entries to freed pages v2" [2]
patchset; the two were conflicting.

The related issues that this fixes:
1. Fixmap PTEs that are used for patching are available for access from
   other cores and might be exploited. They are not even flushed from
   the TLB in remote cores, so the risk is even higher. Address this
   issue by introducing a temporary mm that is only used during
   patching. Unfortunately, due to init ordering, fixmap is still used
   during boot-time patching. Future patches can eliminate the need for
   it.
2. Missing lockdep assertion to ensure text_mutex is taken. It is
   actually not always taken, so fix the instances that were found not
   to take it (although they should be safe even without the lock).
3. module_alloc() returning memory that is RWX until a module has
   finished loading.
4. Sometimes when memory is freed via the module subsystem, a TLB entry
   with executable permissions can remain pointing to the freed page. If
   the page is re-used to back an address that will receive data from
   userspace, user data can end up mapped as executable in the kernel.
   The root cause is that vfree lazily flushes the TLB, but does not
   lazily free the underlying pages; a sketch of the hazard follows this
   list.
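
To make issue 4 concrete, here is a minimal editorial sketch (not part
of the series; module_alloc() and vfree() are the real APIs involved):

	#include <linux/moduleloader.h>
	#include <linux/vmalloc.h>

	static void stale_exec_tlb_sketch(void)
	{
		void *p = module_alloc(PAGE_SIZE);	/* RWX at this point */

		if (!p)
			return;

		/* ... executable code is written here and run ... */

		vfree(p);	/*
				 * The alias is unmapped, but the TLB flush
				 * is lazy: a core may keep an executable
				 * translation to the freed page until the
				 * flush actually happens.
				 */
	}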

The new changes from "Don’t leave executable TLB entries to freed pages
v2":
 - Add support for the case of hibernate trying to save an unmapped page
   on the directmap. (Ard Biesheuvel)
 - No weak arch breakout for vfree-ing special memory (Andy Lutomirski)
 - Avoid changing deferred free code by moving modules init free to work
   queue (Andy Lutomirski)
 - Plug in new flag for kprobes and ftrace
 - More arch generic names for set_pages functions (Ard Biesheuvel)
 - Fix for TLB not always flushing the directmap (Nadav Amit)
 
New changes from "x86/alternative: text_poke() enhancements v7":
 - Fix build failure on CONFIG_RANDOMIZE_BASE=n (Rick)
 - Remove text_poke usage from ftrace (Nadav)
 
[1] https://lkml.org/lkml/2018/12/5/200
[2] https://lkml.org/lkml/2018/12/11/1571

Andy Lutomirski (1):
  x86/mm: temporary mm struct

Nadav Amit (12):
  Fix "x86/alternatives: Lockdep-enforce text_mutex in text_poke*()"
  x86/jump_label: Use text_poke_early() during early init
  fork: provide a function for copying init_mm
  x86/alternative: initializing temporary mm for patching
  x86/alternative: use temporary mm for text poking
  x86/kgdb: avoid redundant comparison of patched code
  x86/ftrace: set trampoline pages as executable
  x86/kprobes: Instruction pages initialization enhancements
  x86: avoid W^X being broken during modules loading
  x86/jump-label: remove support for custom poker
  x86/alternative: Remove the return value of text_poke_*()
  module: Prevent module removal racing with text_poke()

Rick Edgecombe (4):
  Add set_alias_ function and x86 implementation
  mm: Make hibernate handle unmapped pages
  vmalloc: New flags for safe vfree on special perms
  Plug in new special vfree flag

 arch/Kconfig                         |   4 +
 arch/x86/Kconfig                     |   1 +
 arch/x86/include/asm/fixmap.h        |   2 -
 arch/x86/include/asm/mmu_context.h   |  32 +++++
 arch/x86/include/asm/pgtable.h       |   3 +
 arch/x86/include/asm/set_memory.h    |   3 +
 arch/x86/include/asm/text-patching.h |   7 +-
 arch/x86/kernel/alternative.c        | 197 ++++++++++++++++++++-------
 arch/x86/kernel/ftrace.c             |  15 +-
 arch/x86/kernel/jump_label.c         |  19 ++-
 arch/x86/kernel/kgdb.c               |  25 +---
 arch/x86/kernel/kprobes/core.c       |  19 ++-
 arch/x86/kernel/module.c             |   2 +-
 arch/x86/mm/init_64.c                |  36 +++++
 arch/x86/mm/pageattr.c               |  16 ++-
 arch/x86/xen/mmu_pv.c                |   2 -
 include/linux/filter.h               |  18 +--
 include/linux/mm.h                   |  18 +--
 include/linux/sched/task.h           |   1 +
 include/linux/set_memory.h           |  10 ++
 include/linux/vmalloc.h              |  13 ++
 init/main.c                          |   3 +
 kernel/bpf/core.c                    |   1 -
 kernel/fork.c                        |  24 +++-
 kernel/module.c                      |  87 ++++++------
 mm/page_alloc.c                      |   6 +-
 mm/vmalloc.c                         | 122 ++++++++++++++---
 27 files changed, 497 insertions(+), 189 deletions(-)

-- 
2.17.1



* [PATCH 01/17] Fix "x86/alternatives: Lockdep-enforce text_mutex in text_poke*()"
  2019-01-17  0:32 [PATCH 00/17] Merge text_poke fixes and executable lockdowns Rick Edgecombe
@ 2019-01-17  0:32 ` Rick Edgecombe
  2019-01-17  6:47   ` Masami Hiramatsu
  2019-01-25  9:30   ` Borislav Petkov
  2019-01-17  0:32 ` [PATCH 02/17] x86/jump_label: Use text_poke_early() during early init Rick Edgecombe
                   ` (16 subsequent siblings)
  17 siblings, 2 replies; 51+ messages in thread
From: Rick Edgecombe @ 2019-01-17  0:32 UTC (permalink / raw)
  To: Andy Lutomirski, Ingo Molnar
  Cc: linux-kernel, x86, hpa, Thomas Gleixner, Borislav Petkov,
	Nadav Amit, Dave Hansen, Peter Zijlstra, linux_dti,
	linux-integrity, linux-security-module, akpm, kernel-hardening,
	linux-mm, will.deacon, ard.biesheuvel, kristen, deneen.t.dock,
	Nadav Amit, Kees Cook, Dave Hansen, Masami Hiramatsu,
	Rick Edgecombe

From: Nadav Amit <namit@vmware.com>

text_mutex is currently expected to be held before text_poke() is
called, but kgdb does not take the mutex; instead it *supposedly*
ensures that the lock is not taken and will not be acquired by any other
core while text_poke() is running.

The reason for the "supposedly" comment is that it is not entirely clear
that this would be the case if gdb_do_roundup is zero.

This patch creates two wrapper functions, text_poke() and
text_poke_kgdb(), which do and do not run the lockdep assertion,
respectively.

While we are at it, change the return code of text_poke() to something
meaningful. One day, callers might actually respect it and the existing
BUG_ON() when patching fails could be removed. For kgdb, the return
value can actually be used.
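
For illustration only (this helper is hypothetical, not from the patch),
the expected calling convention after this change:

	/* Normal path: callers serialize on text_mutex, now lockdep-checked. */
	static void poke_one_byte(void *addr, u8 byte)
	{
		mutex_lock(&text_mutex);
		text_poke(addr, &byte, 1);
		mutex_unlock(&text_mutex);
	}

	/*
	 * kgdb path: all other cores are stopped, so it only checks that
	 * nobody holds the mutex and calls the assertion-free variant,
	 * text_poke_kgdb().
	 */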

Cc: Andy Lutomirski <luto@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Fixes: 9222f606506c ("x86/alternatives: Lockdep-enforce text_mutex in text_poke*()")
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Nadav Amit <namit@vmware.com>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---
 arch/x86/include/asm/text-patching.h |  1 +
 arch/x86/kernel/alternative.c        | 52 ++++++++++++++++++++--------
 arch/x86/kernel/kgdb.c               | 11 +++---
 3 files changed, 45 insertions(+), 19 deletions(-)

diff --git a/arch/x86/include/asm/text-patching.h b/arch/x86/include/asm/text-patching.h
index e85ff65c43c3..f8fc8e86cf01 100644
--- a/arch/x86/include/asm/text-patching.h
+++ b/arch/x86/include/asm/text-patching.h
@@ -35,6 +35,7 @@ extern void *text_poke_early(void *addr, const void *opcode, size_t len);
  * inconsistent instruction while you patch.
  */
 extern void *text_poke(void *addr, const void *opcode, size_t len);
+extern void *text_poke_kgdb(void *addr, const void *opcode, size_t len);
 extern int poke_int3_handler(struct pt_regs *regs);
 extern void *text_poke_bp(void *addr, const void *opcode, size_t len, void *handler);
 extern int after_bootmem;
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index ebeac487a20c..c6a3a10a2fd5 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -678,18 +678,7 @@ void *__init_or_module text_poke_early(void *addr, const void *opcode,
 	return addr;
 }
 
-/**
- * text_poke - Update instructions on a live kernel
- * @addr: address to modify
- * @opcode: source of the copy
- * @len: length to copy
- *
- * Only atomic text poke/set should be allowed when not doing early patching.
- * It means the size must be writable atomically and the address must be aligned
- * in a way that permits an atomic write. It also makes sure we fit on a single
- * page.
- */
-void *text_poke(void *addr, const void *opcode, size_t len)
+static void *__text_poke(void *addr, const void *opcode, size_t len)
 {
 	unsigned long flags;
 	char *vaddr;
@@ -702,8 +691,6 @@ void *text_poke(void *addr, const void *opcode, size_t len)
 	 */
 	BUG_ON(!after_bootmem);
 
-	lockdep_assert_held(&text_mutex);
-
 	if (!core_kernel_text((unsigned long)addr)) {
 		pages[0] = vmalloc_to_page(addr);
 		pages[1] = vmalloc_to_page(addr + PAGE_SIZE);
@@ -732,6 +719,43 @@ void *text_poke(void *addr, const void *opcode, size_t len)
 	return addr;
 }
 
+/**
+ * text_poke - Update instructions on a live kernel
+ * @addr: address to modify
+ * @opcode: source of the copy
+ * @len: length to copy
+ *
+ * Only atomic text poke/set should be allowed when not doing early patching.
+ * It means the size must be writable atomically and the address must be aligned
+ * in a way that permits an atomic write. It also makes sure we fit on a single
+ * page.
+ */
+void *text_poke(void *addr, const void *opcode, size_t len)
+{
+	lockdep_assert_held(&text_mutex);
+
+	return __text_poke(addr, opcode, len);
+}
+
+/**
+ * text_poke_kgdb - Update instructions on a live kernel by kgdb
+ * @addr: address to modify
+ * @opcode: source of the copy
+ * @len: length to copy
+ *
+ * Only atomic text poke/set should be allowed when not doing early patching.
+ * It means the size must be writable atomically and the address must be aligned
+ * in a way that permits an atomic write. It also makes sure we fit on a single
+ * page.
+ *
+ * Context: should only be used by kgdb, which ensures no other core is running,
+ *	    despite the fact it does not hold the text_mutex.
+ */
+void *text_poke_kgdb(void *addr, const void *opcode, size_t len)
+{
+	return __text_poke(addr, opcode, len);
+}
+
 static void do_sync_core(void *info)
 {
 	sync_core();
diff --git a/arch/x86/kernel/kgdb.c b/arch/x86/kernel/kgdb.c
index 5db08425063e..1461544cba8b 100644
--- a/arch/x86/kernel/kgdb.c
+++ b/arch/x86/kernel/kgdb.c
@@ -758,13 +758,13 @@ int kgdb_arch_set_breakpoint(struct kgdb_bkpt *bpt)
 	if (!err)
 		return err;
 	/*
-	 * It is safe to call text_poke() because normal kernel execution
+	 * It is safe to call text_poke_kgdb() because normal kernel execution
 	 * is stopped on all cores, so long as the text_mutex is not locked.
 	 */
 	if (mutex_is_locked(&text_mutex))
 		return -EBUSY;
-	text_poke((void *)bpt->bpt_addr, arch_kgdb_ops.gdb_bpt_instr,
-		  BREAK_INSTR_SIZE);
+	text_poke_kgdb((void *)bpt->bpt_addr, arch_kgdb_ops.gdb_bpt_instr,
+		       BREAK_INSTR_SIZE);
 	err = probe_kernel_read(opc, (char *)bpt->bpt_addr, BREAK_INSTR_SIZE);
 	if (err)
 		return err;
@@ -783,12 +783,13 @@ int kgdb_arch_remove_breakpoint(struct kgdb_bkpt *bpt)
 	if (bpt->type != BP_POKE_BREAKPOINT)
 		goto knl_write;
 	/*
-	 * It is safe to call text_poke() because normal kernel execution
+	 * It is safe to call text_poke_kgdb() because normal kernel execution
 	 * is stopped on all cores, so long as the text_mutex is not locked.
 	 */
 	if (mutex_is_locked(&text_mutex))
 		goto knl_write;
-	text_poke((void *)bpt->bpt_addr, bpt->saved_instr, BREAK_INSTR_SIZE);
+	text_poke_kgdb((void *)bpt->bpt_addr, bpt->saved_instr,
+		       BREAK_INSTR_SIZE);
 	err = probe_kernel_read(opc, (char *)bpt->bpt_addr, BREAK_INSTR_SIZE);
 	if (err || memcmp(opc, bpt->saved_instr, BREAK_INSTR_SIZE))
 		goto knl_write;
-- 
2.17.1



* [PATCH 02/17] x86/jump_label: Use text_poke_early() during early init
  2019-01-17  0:32 [PATCH 00/17] Merge text_poke fixes and executable lockdowns Rick Edgecombe
  2019-01-17  0:32 ` [PATCH 01/17] Fix "x86/alternatives: Lockdep-enforce text_mutex in text_poke*()" Rick Edgecombe
@ 2019-01-17  0:32 ` Rick Edgecombe
  2019-01-17  0:32 ` [PATCH 03/17] x86/mm: temporary mm struct Rick Edgecombe
                   ` (15 subsequent siblings)
  17 siblings, 0 replies; 51+ messages in thread
From: Rick Edgecombe @ 2019-01-17  0:32 UTC (permalink / raw)
  To: Andy Lutomirski, Ingo Molnar
  Cc: linux-kernel, x86, hpa, Thomas Gleixner, Borislav Petkov,
	Nadav Amit, Dave Hansen, Peter Zijlstra, linux_dti,
	linux-integrity, linux-security-module, akpm, kernel-hardening,
	linux-mm, will.deacon, ard.biesheuvel, kristen, deneen.t.dock,
	Nadav Amit, Kees Cook, Dave Hansen, Masami Hiramatsu,
	Rick Edgecombe

From: Nadav Amit <namit@vmware.com>

There is no apparent reason not to use text_poke_early() during early
init, as long as we do not patch code that might be on the call stack
(i.e., code that we would return into the middle of after patching).
This appears to be the case for jump labels, so do so.

This is required by the next patches, which set up a temporary mm for
patching; that mm is initialized only after some static keys have been
enabled/disabled.

Cc: Andy Lutomirski <luto@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---
 arch/x86/kernel/jump_label.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/jump_label.c b/arch/x86/kernel/jump_label.c
index f99bd26bd3f1..e36cfec0f35e 100644
--- a/arch/x86/kernel/jump_label.c
+++ b/arch/x86/kernel/jump_label.c
@@ -50,7 +50,12 @@ static void __ref __jump_label_transform(struct jump_entry *entry,
 	jmp.offset = jump_entry_target(entry) -
 		     (jump_entry_code(entry) + JUMP_LABEL_NOP_SIZE);
 
-	if (early_boot_irqs_disabled)
+	/*
+	 * As long as we're UP and not yet marked RO, we can use
+	 * text_poke_early; SYSTEM_BOOTING guarantees both, as we switch to
+	 * SYSTEM_SCHEDULING before going either.
+	 */
+	if (system_state == SYSTEM_BOOTING)
 		poker = text_poke_early;
 
 	if (type == JUMP_LABEL_JMP) {
-- 
2.17.1



* [PATCH 03/17] x86/mm: temporary mm struct
  2019-01-17  0:32 [PATCH 00/17] Merge text_poke fixes and executable lockdowns Rick Edgecombe
  2019-01-17  0:32 ` [PATCH 01/17] Fix "x86/alternatives: Lockdep-enforce text_mutex in text_poke*()" Rick Edgecombe
  2019-01-17  0:32 ` [PATCH 02/17] x86/jump_label: Use text_poke_early() during early init Rick Edgecombe
@ 2019-01-17  0:32 ` Rick Edgecombe
  2019-01-17  0:32 ` [PATCH 04/17] fork: provide a function for copying init_mm Rick Edgecombe
                   ` (14 subsequent siblings)
  17 siblings, 0 replies; 51+ messages in thread
From: Rick Edgecombe @ 2019-01-17  0:32 UTC (permalink / raw)
  To: Andy Lutomirski, Ingo Molnar
  Cc: linux-kernel, x86, hpa, Thomas Gleixner, Borislav Petkov,
	Nadav Amit, Dave Hansen, Peter Zijlstra, linux_dti,
	linux-integrity, linux-security-module, akpm, kernel-hardening,
	linux-mm, will.deacon, ard.biesheuvel, kristen, deneen.t.dock,
	Kees Cook, Dave Hansen, Nadav Amit, Rick Edgecombe

From: Andy Lutomirski <luto@kernel.org>

Sometimes we want to set temporary page-table entries (PTEs) on one of
the cores, without allowing other cores to use these mappings - even
speculatively. There are two benefits to doing so:

(1) Security: if sensitive PTEs are set, the temporary mm prevents their
use by other cores. This hardens security, as it prevents exploiting a
dangling pointer to overwrite sensitive data using the sensitive PTE.

(2) Avoiding TLB shootdowns: the PTEs do not need to be flushed from
remote page-tables.

To do so, a temporary mm_struct can be used. Mappings that are private
to this mm can be set in the userspace part of the address space.
During the whole time in which the temporary mm is loaded, interrupts
must be disabled.

The first use-case for temporary PTEs, which will follow, is for poking
the kernel text.
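
A minimal usage sketch (mirroring the text-poking patch later in this
series, which introduces poking_mm):

	temporary_mm_state_t prev;
	unsigned long flags;

	local_irq_save(flags);			/* IRQs must be off */
	prev = use_temporary_mm(poking_mm);
	/* ... access the mappings private to poking_mm ... */
	unuse_temporary_mm(prev);
	local_irq_restore(flags);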

[ Commit message was written by Nadav ]

Cc: Kees Cook <keescook@chromium.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
Tested-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---
 arch/x86/include/asm/mmu_context.h | 32 ++++++++++++++++++++++++++++++
 1 file changed, 32 insertions(+)

diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
index 0ca50611e8ce..0141b7fa6d01 100644
--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -338,4 +338,36 @@ static inline unsigned long __get_current_cr3_fast(void)
 	return cr3;
 }
 
+typedef struct {
+	struct mm_struct *prev;
+} temporary_mm_state_t;
+
+/*
+ * Using a temporary mm allows setting temporary mappings that are not accessible
+ * by other cores. Such mappings are needed to perform sensitive memory writes
+ * that override the kernel memory protections (e.g., W^X), without exposing the
+ * temporary page-table mappings that are required for these write operations to
+ * other cores.
+ *
+ * Context: The temporary mm needs to be used exclusively by a single core. To
+ *          harden security IRQs must be disabled while the temporary mm is
+ *          loaded, thereby preventing interrupt handler bugs from override the
+ *          kernel memory protection.
+ */
+static inline temporary_mm_state_t use_temporary_mm(struct mm_struct *mm)
+{
+	temporary_mm_state_t state;
+
+	lockdep_assert_irqs_disabled();
+	state.prev = this_cpu_read(cpu_tlbstate.loaded_mm);
+	switch_mm_irqs_off(NULL, mm, current);
+	return state;
+}
+
+static inline void unuse_temporary_mm(temporary_mm_state_t prev)
+{
+	lockdep_assert_irqs_disabled();
+	switch_mm_irqs_off(NULL, prev.prev, current);
+}
+
 #endif /* _ASM_X86_MMU_CONTEXT_H */
-- 
2.17.1



* [PATCH 04/17] fork: provide a function for copying init_mm
  2019-01-17  0:32 [PATCH 00/17] Merge text_poke fixes and executable lockdowns Rick Edgecombe
                   ` (2 preceding siblings ...)
  2019-01-17  0:32 ` [PATCH 03/17] x86/mm: temporary mm struct Rick Edgecombe
@ 2019-01-17  0:32 ` Rick Edgecombe
  2019-01-17  0:32 ` [PATCH 05/17] x86/alternative: initializing temporary mm for patching Rick Edgecombe
                   ` (13 subsequent siblings)
  17 siblings, 0 replies; 51+ messages in thread
From: Rick Edgecombe @ 2019-01-17  0:32 UTC (permalink / raw)
  To: Andy Lutomirski, Ingo Molnar
  Cc: linux-kernel, x86, hpa, Thomas Gleixner, Borislav Petkov,
	Nadav Amit, Dave Hansen, Peter Zijlstra, linux_dti,
	linux-integrity, linux-security-module, akpm, kernel-hardening,
	linux-mm, will.deacon, ard.biesheuvel, kristen, deneen.t.dock,
	Nadav Amit, Kees Cook, Dave Hansen, Rick Edgecombe

From: Nadav Amit <namit@vmware.com>

Provide a function for copying init_mm. This function will later be used
for setting up a temporary mm.
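
For context, a later patch in this series (poking_init()) consumes it
roughly as follows:

	poking_mm = copy_init_mm();
	BUG_ON(!poking_mm);	/* needed before any text poking */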

Cc: Andy Lutomirski <luto@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
Tested-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---
 include/linux/sched/task.h |  1 +
 kernel/fork.c              | 24 ++++++++++++++++++------
 2 files changed, 19 insertions(+), 6 deletions(-)

diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h
index 44c6f15800ff..c5a00a7b3beb 100644
--- a/include/linux/sched/task.h
+++ b/include/linux/sched/task.h
@@ -76,6 +76,7 @@ extern void exit_itimers(struct signal_struct *);
 extern long _do_fork(unsigned long, unsigned long, unsigned long, int __user *, int __user *, unsigned long);
 extern long do_fork(unsigned long, unsigned long, unsigned long, int __user *, int __user *);
 struct task_struct *fork_idle(int);
+struct mm_struct *copy_init_mm(void);
 extern pid_t kernel_thread(int (*fn)(void *), void *arg, unsigned long flags);
 extern long kernel_wait4(pid_t, int __user *, int, struct rusage *);
 
diff --git a/kernel/fork.c b/kernel/fork.c
index b69248e6f0e0..d7b156c49f29 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1299,13 +1299,20 @@ void mm_release(struct task_struct *tsk, struct mm_struct *mm)
 		complete_vfork_done(tsk);
 }
 
-/*
- * Allocate a new mm structure and copy contents from the
- * mm structure of the passed in task structure.
+/**
+ * dup_mm() - duplicates an existing mm structure
+ * @tsk: the task_struct with which the new mm will be associated.
+ * @oldmm: the mm to duplicate.
+ *
+ * Allocates a new mm structure and copies the contents of the provided
+ * @oldmm structure.
+ *
+ * Return: the duplicated mm or NULL on failure.
  */
-static struct mm_struct *dup_mm(struct task_struct *tsk)
+static struct mm_struct *dup_mm(struct task_struct *tsk,
+				struct mm_struct *oldmm)
 {
-	struct mm_struct *mm, *oldmm = current->mm;
+	struct mm_struct *mm;
 	int err;
 
 	mm = allocate_mm();
@@ -1372,7 +1379,7 @@ static int copy_mm(unsigned long clone_flags, struct task_struct *tsk)
 	}
 
 	retval = -ENOMEM;
-	mm = dup_mm(tsk);
+	mm = dup_mm(tsk, current->mm);
 	if (!mm)
 		goto fail_nomem;
 
@@ -2187,6 +2194,11 @@ struct task_struct *fork_idle(int cpu)
 	return task;
 }
 
+struct mm_struct *copy_init_mm(void)
+{
+	return dup_mm(NULL, &init_mm);
+}
+
 /*
  *  Ok, this is the main fork-routine.
  *
-- 
2.17.1



* [PATCH 05/17] x86/alternative: initializing temporary mm for patching
  2019-01-17  0:32 [PATCH 00/17] Merge text_poke fixes and executable lockdowns Rick Edgecombe
                   ` (3 preceding siblings ...)
  2019-01-17  0:32 ` [PATCH 04/17] fork: provide a function for copying init_mm Rick Edgecombe
@ 2019-01-17  0:32 ` Rick Edgecombe
  2019-01-17  0:32 ` [PATCH 06/17] x86/alternative: use temporary mm for text poking Rick Edgecombe
                   ` (12 subsequent siblings)
  17 siblings, 0 replies; 51+ messages in thread
From: Rick Edgecombe @ 2019-01-17  0:32 UTC (permalink / raw)
  To: Andy Lutomirski, Ingo Molnar
  Cc: linux-kernel, x86, hpa, Thomas Gleixner, Borislav Petkov,
	Nadav Amit, Dave Hansen, Peter Zijlstra, linux_dti,
	linux-integrity, linux-security-module, akpm, kernel-hardening,
	linux-mm, will.deacon, ard.biesheuvel, kristen, deneen.t.dock,
	Nadav Amit, Kees Cook, Dave Hansen, Rick Edgecombe

From: Nadav Amit <namit@vmware.com>

To prevent improper use of the PTEs that are used for text patching, we
want to use a temporary mm struct. We initialize it by copying init_mm.

The address that will be used for patching is taken from the lower part
of the address space that is usually used for task memory. Doing so
avoids the need to frequently synchronize the temporary mm (e.g., when
BPF programs are installed), since different PGDs are used for task
memory.

Finally, we randomize the address of the PTEs to harden against exploits
that use these PTEs.

Cc: Kees Cook <keescook@chromium.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
Tested-by: Masami Hiramatsu <mhiramat@kernel.org>
Suggested-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---
 arch/x86/include/asm/pgtable.h       |  3 +++
 arch/x86/include/asm/text-patching.h |  2 ++
 arch/x86/kernel/alternative.c        |  3 +++
 arch/x86/mm/init_64.c                | 36 ++++++++++++++++++++++++++++
 init/main.c                          |  3 +++
 5 files changed, 47 insertions(+)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 40616e805292..e8f630d9a2ed 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -1021,6 +1021,9 @@ static inline void __meminit init_trampoline_default(void)
 	/* Default trampoline pgd value */
 	trampoline_pgd_entry = init_top_pgt[pgd_index(__PAGE_OFFSET)];
 }
+
+void __init poking_init(void);
+
 # ifdef CONFIG_RANDOMIZE_MEMORY
 void __meminit init_trampoline(void);
 # else
diff --git a/arch/x86/include/asm/text-patching.h b/arch/x86/include/asm/text-patching.h
index f8fc8e86cf01..a75eed841eed 100644
--- a/arch/x86/include/asm/text-patching.h
+++ b/arch/x86/include/asm/text-patching.h
@@ -39,5 +39,7 @@ extern void *text_poke_kgdb(void *addr, const void *opcode, size_t len);
 extern int poke_int3_handler(struct pt_regs *regs);
 extern void *text_poke_bp(void *addr, const void *opcode, size_t len, void *handler);
 extern int after_bootmem;
+extern __ro_after_init struct mm_struct *poking_mm;
+extern __ro_after_init unsigned long poking_addr;
 
 #endif /* _ASM_X86_TEXT_PATCHING_H */
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index c6a3a10a2fd5..57fdde308bb6 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -678,6 +678,9 @@ void *__init_or_module text_poke_early(void *addr, const void *opcode,
 	return addr;
 }
 
+__ro_after_init struct mm_struct *poking_mm;
+__ro_after_init unsigned long poking_addr;
+
 static void *__text_poke(void *addr, const void *opcode, size_t len)
 {
 	unsigned long flags;
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index bccff68e3267..125c8c48aa24 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -53,6 +53,7 @@
 #include <asm/init.h>
 #include <asm/uv/uv.h>
 #include <asm/setup.h>
+#include <asm/text-patching.h>
 
 #include "mm_internal.h"
 
@@ -1383,6 +1384,41 @@ unsigned long memory_block_size_bytes(void)
 	return memory_block_size_probed;
 }
 
+/*
+ * Initialize an mm_struct to be used during poking and a pointer to be used
+ * during patching.
+ */
+void __init poking_init(void)
+{
+	spinlock_t *ptl;
+	pte_t *ptep;
+
+	poking_mm = copy_init_mm();
+	BUG_ON(!poking_mm);
+
+	/*
+	 * Randomize the poking address, but make sure that the following page
+	 * will be mapped at the same PMD. We need 2 pages, so find space for 3,
+	 * and adjust the address if the PMD ends after the first one.
+	 */
+	poking_addr = TASK_UNMAPPED_BASE;
+	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE))
+		poking_addr += (kaslr_get_random_long("Poking") & PAGE_MASK) %
+			(TASK_SIZE - TASK_UNMAPPED_BASE - 3 * PAGE_SIZE);
+
+	if (((poking_addr + PAGE_SIZE) & ~PMD_MASK) == 0)
+		poking_addr += PAGE_SIZE;
+
+	/*
+	 * We need to trigger the allocation of the page-tables that will be
+	 * needed for poking now. Later, poking may be performed in an atomic
+	 * section, which might cause allocation to fail.
+	 */
+	ptep = get_locked_pte(poking_mm, poking_addr, &ptl);
+	BUG_ON(!ptep);
+	pte_unmap_unlock(ptep, ptl);
+}
+
 #ifdef CONFIG_SPARSEMEM_VMEMMAP
 /*
  * Initialise the sparsemem vmemmap using huge-pages at the PMD level.
diff --git a/init/main.c b/init/main.c
index e2e80ca3165a..f5947ba53bb4 100644
--- a/init/main.c
+++ b/init/main.c
@@ -496,6 +496,8 @@ void __init __weak thread_stack_cache_init(void)
 
 void __init __weak mem_encrypt_init(void) { }
 
+void __init __weak poking_init(void) { }
+
 bool initcall_debug;
 core_param(initcall_debug, initcall_debug, bool, 0644);
 
@@ -730,6 +732,7 @@ asmlinkage __visible void __init start_kernel(void)
 	taskstats_init_early();
 	delayacct_init();
 
+	poking_init();
 	check_bugs();
 
 	acpi_subsystem_init();
-- 
2.17.1



* [PATCH 06/17] x86/alternative: use temporary mm for text poking
  2019-01-17  0:32 [PATCH 00/17] Merge text_poke fixes and executable lockdowns Rick Edgecombe
                   ` (4 preceding siblings ...)
  2019-01-17  0:32 ` [PATCH 05/17] x86/alternative: initializing temporary mm for patching Rick Edgecombe
@ 2019-01-17  0:32 ` Rick Edgecombe
  2019-01-17 20:27   ` Andy Lutomirski
  2019-01-17  0:32 ` [PATCH 07/17] x86/kgdb: avoid redundant comparison of patched code Rick Edgecombe
                   ` (11 subsequent siblings)
  17 siblings, 1 reply; 51+ messages in thread
From: Rick Edgecombe @ 2019-01-17  0:32 UTC (permalink / raw)
  To: Andy Lutomirski, Ingo Molnar
  Cc: linux-kernel, x86, hpa, Thomas Gleixner, Borislav Petkov,
	Nadav Amit, Dave Hansen, Peter Zijlstra, linux_dti,
	linux-integrity, linux-security-module, akpm, kernel-hardening,
	linux-mm, will.deacon, ard.biesheuvel, kristen, deneen.t.dock,
	Nadav Amit, Kees Cook, Dave Hansen, Masami Hiramatsu,
	Rick Edgecombe

From: Nadav Amit <namit@vmware.com>

text_poke() can potentially compromise security, as it sets temporary
PTEs in the fixmap. These PTEs might be used, accidentally or
maliciously, to rewrite kernel code from other cores if an attacker
gains the ability to write to kernel memory.

Moreover, since remote TLBs are not flushed after the temporary PTEs are
removed, the time-window in which the code is writable is not limited if
the fixmap PTEs - maliciously or accidentally - are cached in the TLB.
To address these potential security hazards, we use a temporary mm for
patching the code.

Finally, text_poke() is also not conservative enough when mapping pages,
as it always tries to map 2 pages, even when a single one is sufficient.
So try to be more conservative, and do not map more than needed.
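
As an editorial outline, the new __text_poke() flow (all names taken
from the diff below) boils down to:

	ptep = get_locked_pte(poking_mm, poking_addr, &ptl);
	set_pte_at(poking_mm, poking_addr, ptep, mk_pte(pages[0], PAGE_KERNEL));

	prev = use_temporary_mm(poking_mm);	/* IRQs already disabled */
	memcpy((u8 *)poking_addr + offset_in_page(addr), opcode, len);
	pte_clear(poking_mm, poking_addr, ptep);
	__flush_tlb_one_user(poking_addr);	/* local flush only */
	unuse_temporary_mm(prev);

	pte_unmap_unlock(ptep, ptl);
	BUG_ON(memcmp(addr, opcode, len));	/* verify the write */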

Cc: Andy Lutomirski <luto@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---
 arch/x86/include/asm/fixmap.h |   2 -
 arch/x86/kernel/alternative.c | 109 +++++++++++++++++++++++++++-------
 arch/x86/xen/mmu_pv.c         |   2 -
 3 files changed, 87 insertions(+), 26 deletions(-)

diff --git a/arch/x86/include/asm/fixmap.h b/arch/x86/include/asm/fixmap.h
index 50ba74a34a37..9da8cccdf3fb 100644
--- a/arch/x86/include/asm/fixmap.h
+++ b/arch/x86/include/asm/fixmap.h
@@ -103,8 +103,6 @@ enum fixed_addresses {
 #ifdef CONFIG_PARAVIRT
 	FIX_PARAVIRT_BOOTMAP,
 #endif
-	FIX_TEXT_POKE1,	/* reserve 2 pages for text_poke() */
-	FIX_TEXT_POKE0, /* first page is last, because allocation is backward */
 #ifdef	CONFIG_X86_INTEL_MID
 	FIX_LNW_VRTC,
 #endif
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 57fdde308bb6..8fc4685f3117 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -11,6 +11,7 @@
 #include <linux/stop_machine.h>
 #include <linux/slab.h>
 #include <linux/kdebug.h>
+#include <linux/mmu_context.h>
 #include <asm/text-patching.h>
 #include <asm/alternative.h>
 #include <asm/sections.h>
@@ -683,41 +684,105 @@ __ro_after_init unsigned long poking_addr;
 
 static void *__text_poke(void *addr, const void *opcode, size_t len)
 {
+	bool cross_page_boundary = offset_in_page(addr) + len > PAGE_SIZE;
+	temporary_mm_state_t prev;
+	struct page *pages[2] = {NULL};
 	unsigned long flags;
-	char *vaddr;
-	struct page *pages[2];
-	int i;
+	pte_t pte, *ptep;
+	spinlock_t *ptl;
 
 	/*
-	 * While boot memory allocator is runnig we cannot use struct
-	 * pages as they are not yet initialized.
+	 * While boot memory allocator is running we cannot use struct pages as
+	 * they are not yet initialized.
 	 */
 	BUG_ON(!after_bootmem);
 
 	if (!core_kernel_text((unsigned long)addr)) {
 		pages[0] = vmalloc_to_page(addr);
-		pages[1] = vmalloc_to_page(addr + PAGE_SIZE);
+		if (cross_page_boundary)
+			pages[1] = vmalloc_to_page(addr + PAGE_SIZE);
 	} else {
 		pages[0] = virt_to_page(addr);
 		WARN_ON(!PageReserved(pages[0]));
-		pages[1] = virt_to_page(addr + PAGE_SIZE);
+		if (cross_page_boundary)
+			pages[1] = virt_to_page(addr + PAGE_SIZE);
 	}
-	BUG_ON(!pages[0]);
+	BUG_ON(!pages[0] || (cross_page_boundary && !pages[1]));
+
 	local_irq_save(flags);
-	set_fixmap(FIX_TEXT_POKE0, page_to_phys(pages[0]));
-	if (pages[1])
-		set_fixmap(FIX_TEXT_POKE1, page_to_phys(pages[1]));
-	vaddr = (char *)fix_to_virt(FIX_TEXT_POKE0);
-	memcpy(&vaddr[(unsigned long)addr & ~PAGE_MASK], opcode, len);
-	clear_fixmap(FIX_TEXT_POKE0);
-	if (pages[1])
-		clear_fixmap(FIX_TEXT_POKE1);
-	local_flush_tlb();
-	sync_core();
-	/* Could also do a CLFLUSH here to speed up CPU recovery; but
-	   that causes hangs on some VIA CPUs. */
-	for (i = 0; i < len; i++)
-		BUG_ON(((char *)addr)[i] != ((char *)opcode)[i]);
+
+	/*
+	 * The lock is not really needed, but this avoids open-coding.
+	 */
+	ptep = get_locked_pte(poking_mm, poking_addr, &ptl);
+
+	/*
+	 * This must not fail; preallocated in poking_init().
+	 */
+	VM_BUG_ON(!ptep);
+
+	pte = mk_pte(pages[0], PAGE_KERNEL);
+	set_pte_at(poking_mm, poking_addr, ptep, pte);
+
+	if (cross_page_boundary) {
+		pte = mk_pte(pages[1], PAGE_KERNEL);
+		set_pte_at(poking_mm, poking_addr + PAGE_SIZE, ptep + 1, pte);
+	}
+
+	/*
+	 * Loading the temporary mm behaves as a compiler barrier, which
+	 * guarantees that the PTE will be set at the time memcpy() is done.
+	 */
+	prev = use_temporary_mm(poking_mm);
+
+	kasan_disable_current();
+	memcpy((u8 *)poking_addr + offset_in_page(addr), opcode, len);
+	kasan_enable_current();
+
+	/*
+	 * Ensure that the PTE is only cleared after the instructions of memcpy
+	 * were issued by using a compiler barrier.
+	 */
+	barrier();
+
+	pte_clear(poking_mm, poking_addr, ptep);
+
+	/*
+	 * __flush_tlb_one_user() performs a redundant TLB flush when PTI is on,
+	 * as it also flushes the corresponding "user" address space, which
+	 * does not exist.
+	 *
+	 * Poking, however, is already very inefficient since it does not try to
+	 * batch updates, so we ignore this problem for the time being.
+	 *
+	 * Since the PTEs do not exist in other kernel address-spaces, we do
+	 * not use __flush_tlb_one_kernel(), which when PTI is on would cause
+	 * more unwarranted TLB flushes.
+	 *
+	 * There is a slight anomaly here: the PTE is supervisor-only and
+	 * (potentially) global and we use __flush_tlb_one_user() but this
+	 * should be fine.
+	 */
+	__flush_tlb_one_user(poking_addr);
+	if (cross_page_boundary) {
+		pte_clear(poking_mm, poking_addr + PAGE_SIZE, ptep + 1);
+		__flush_tlb_one_user(poking_addr + PAGE_SIZE);
+	}
+
+	/*
+	 * Loading the previous page-table hierarchy requires a serializing
+	 * instruction that already allows the core to see the updated version.
+	 * Xen-PV is assumed to serialize execution in a similar manner.
+	 */
+	unuse_temporary_mm(prev);
+
+	pte_unmap_unlock(ptep, ptl);
+	/*
+	 * If the text doesn't match what we just wrote, something is
+	 * fundamentally screwy, there's nothing we can really do about that.
+	 */
+	BUG_ON(memcmp(addr, opcode, len));
+
 	local_irq_restore(flags);
 	return addr;
 }
diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
index 0f4fe206dcc2..82b181fcefe5 100644
--- a/arch/x86/xen/mmu_pv.c
+++ b/arch/x86/xen/mmu_pv.c
@@ -2319,8 +2319,6 @@ static void xen_set_fixmap(unsigned idx, phys_addr_t phys, pgprot_t prot)
 #elif defined(CONFIG_X86_VSYSCALL_EMULATION)
 	case VSYSCALL_PAGE:
 #endif
-	case FIX_TEXT_POKE0:
-	case FIX_TEXT_POKE1:
 		/* All local page mappings */
 		pte = pfn_pte(phys, prot);
 		break;
-- 
2.17.1



* [PATCH 07/17] x86/kgdb: avoid redundant comparison of patched code
  2019-01-17  0:32 [PATCH 00/17] Merge text_poke fixes and executable lockdowns Rick Edgecombe
                   ` (5 preceding siblings ...)
  2019-01-17  0:32 ` [PATCH 06/17] x86/alternative: use temporary mm for text poking Rick Edgecombe
@ 2019-01-17  0:32 ` Rick Edgecombe
  2019-01-17  0:32 ` [PATCH 08/17] x86/ftrace: set trampoline pages as executable Rick Edgecombe
                   ` (10 subsequent siblings)
  17 siblings, 0 replies; 51+ messages in thread
From: Rick Edgecombe @ 2019-01-17  0:32 UTC (permalink / raw)
  To: Andy Lutomirski, Ingo Molnar
  Cc: linux-kernel, x86, hpa, Thomas Gleixner, Borislav Petkov,
	Nadav Amit, Dave Hansen, Peter Zijlstra, linux_dti,
	linux-integrity, linux-security-module, akpm, kernel-hardening,
	linux-mm, will.deacon, ard.biesheuvel, kristen, deneen.t.dock,
	Nadav Amit, Rick Edgecombe

From: Nadav Amit <namit@vmware.com>

text_poke() already ensures that the written value is the correct one
and fails if that is not the case. There is no need for an additional
comparison. Remove it.

Signed-off-by: Nadav Amit <namit@vmware.com>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---
 arch/x86/kernel/kgdb.c | 14 +-------------
 1 file changed, 1 insertion(+), 13 deletions(-)

diff --git a/arch/x86/kernel/kgdb.c b/arch/x86/kernel/kgdb.c
index 1461544cba8b..057af9187a04 100644
--- a/arch/x86/kernel/kgdb.c
+++ b/arch/x86/kernel/kgdb.c
@@ -746,7 +746,6 @@ void kgdb_arch_set_pc(struct pt_regs *regs, unsigned long ip)
 int kgdb_arch_set_breakpoint(struct kgdb_bkpt *bpt)
 {
 	int err;
-	char opc[BREAK_INSTR_SIZE];
 
 	bpt->type = BP_BREAKPOINT;
 	err = probe_kernel_read(bpt->saved_instr, (char *)bpt->bpt_addr,
@@ -765,11 +764,6 @@ int kgdb_arch_set_breakpoint(struct kgdb_bkpt *bpt)
 		return -EBUSY;
 	text_poke_kgdb((void *)bpt->bpt_addr, arch_kgdb_ops.gdb_bpt_instr,
 		       BREAK_INSTR_SIZE);
-	err = probe_kernel_read(opc, (char *)bpt->bpt_addr, BREAK_INSTR_SIZE);
-	if (err)
-		return err;
-	if (memcmp(opc, arch_kgdb_ops.gdb_bpt_instr, BREAK_INSTR_SIZE))
-		return -EINVAL;
 	bpt->type = BP_POKE_BREAKPOINT;
 
 	return err;
@@ -777,9 +771,6 @@ int kgdb_arch_set_breakpoint(struct kgdb_bkpt *bpt)
 
 int kgdb_arch_remove_breakpoint(struct kgdb_bkpt *bpt)
 {
-	int err;
-	char opc[BREAK_INSTR_SIZE];
-
 	if (bpt->type != BP_POKE_BREAKPOINT)
 		goto knl_write;
 	/*
@@ -790,10 +781,7 @@ int kgdb_arch_remove_breakpoint(struct kgdb_bkpt *bpt)
 		goto knl_write;
 	text_poke_kgdb((void *)bpt->bpt_addr, bpt->saved_instr,
 		       BREAK_INSTR_SIZE);
-	err = probe_kernel_read(opc, (char *)bpt->bpt_addr, BREAK_INSTR_SIZE);
-	if (err || memcmp(opc, bpt->saved_instr, BREAK_INSTR_SIZE))
-		goto knl_write;
-	return err;
+	return 0;
 
 knl_write:
 	return probe_kernel_write((char *)bpt->bpt_addr,
-- 
2.17.1



* [PATCH 08/17] x86/ftrace: set trampoline pages as executable
  2019-01-17  0:32 [PATCH 00/17] Merge text_poke fixes and executable lockdowns Rick Edgecombe
                   ` (6 preceding siblings ...)
  2019-01-17  0:32 ` [PATCH 07/17] x86/kgdb: avoid redundant comparison of patched code Rick Edgecombe
@ 2019-01-17  0:32 ` Rick Edgecombe
  2019-02-06 16:22   ` Steven Rostedt
  2019-01-17  0:32 ` [PATCH 09/17] x86/kprobes: Instruction pages initialization enhancements Rick Edgecombe
                   ` (9 subsequent siblings)
  17 siblings, 1 reply; 51+ messages in thread
From: Rick Edgecombe @ 2019-01-17  0:32 UTC (permalink / raw)
  To: Andy Lutomirski, Ingo Molnar
  Cc: linux-kernel, x86, hpa, Thomas Gleixner, Borislav Petkov,
	Nadav Amit, Dave Hansen, Peter Zijlstra, linux_dti,
	linux-integrity, linux-security-module, akpm, kernel-hardening,
	linux-mm, will.deacon, ard.biesheuvel, kristen, deneen.t.dock,
	Nadav Amit, Steven Rostedt, Rick Edgecombe

From: Nadav Amit <namit@vmware.com>

Since module_alloc() will soon stop setting pages as executable, we need
to do so for ftrace trampoline pages after they are allocated.

For the time being, we do not change ftrace to use the text_poke()
interface. As a result, ftrace still breaks W^X.

Cc: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---
 arch/x86/kernel/ftrace.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
index 8257a59704ae..eb4a1937e72c 100644
--- a/arch/x86/kernel/ftrace.c
+++ b/arch/x86/kernel/ftrace.c
@@ -742,6 +742,7 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
 	unsigned long end_offset;
 	unsigned long op_offset;
 	unsigned long offset;
+	unsigned long npages;
 	unsigned long size;
 	unsigned long retq;
 	unsigned long *ptr;
@@ -774,6 +775,7 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
 		return 0;
 
 	*tramp_size = size + RET_SIZE + sizeof(void *);
+	npages = DIV_ROUND_UP(*tramp_size, PAGE_SIZE);
 
 	/* Copy ftrace_caller onto the trampoline memory */
 	ret = probe_kernel_read(trampoline, (void *)start_offset, size);
@@ -818,6 +820,13 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
 	/* ALLOC_TRAMP flags lets us know we created it */
 	ops->flags |= FTRACE_OPS_FL_ALLOC_TRAMP;
 
+	/*
+	 * Module allocation needs to be completed by making the page
+	 * executable. The page is still writable, which is a security hazard,
+	 * but anyhow ftrace breaks W^X completely.
+	 */
+	set_memory_x((unsigned long)trampoline, npages);
+
 	return (unsigned long)trampoline;
 fail:
 	tramp_free(trampoline, *tramp_size);
-- 
2.17.1



* [PATCH 09/17] x86/kprobes: Instruction pages initialization enhancements
  2019-01-17  0:32 [PATCH 00/17] Merge text_poke fixes and executable lockdowns Rick Edgecombe
                   ` (7 preceding siblings ...)
  2019-01-17  0:32 ` [PATCH 08/17] x86/ftrace: set trampoline pages as executable Rick Edgecombe
@ 2019-01-17  0:32 ` Rick Edgecombe
  2019-01-17  6:51   ` Masami Hiramatsu
  2019-01-17  0:32 ` [PATCH 10/17] x86: avoid W^X being broken during modules loading Rick Edgecombe
                   ` (8 subsequent siblings)
  17 siblings, 1 reply; 51+ messages in thread
From: Rick Edgecombe @ 2019-01-17  0:32 UTC (permalink / raw)
  To: Andy Lutomirski, Ingo Molnar
  Cc: linux-kernel, x86, hpa, Thomas Gleixner, Borislav Petkov,
	Nadav Amit, Dave Hansen, Peter Zijlstra, linux_dti,
	linux-integrity, linux-security-module, akpm, kernel-hardening,
	linux-mm, will.deacon, ard.biesheuvel, kristen, deneen.t.dock,
	Nadav Amit, Masami Hiramatsu, Rick Edgecombe

From: Nadav Amit <namit@vmware.com>

This is a preparatory patch for a later patch that makes module-allocated
pages non-executable. It explicitly sets an instruction page as
executable after allocation.

In the future, we may get better protection of executables. For example,
by using hypercalls to request the hypervisor to protect VM executable
pages from modifications using nested page-tables. This would allow
us to ensure the executable has not changed between allocation and
its write-protection.

While at it, do some small cleanup of what appears to be unnecessary
masking.
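
The masking cleanup relies on a simple fact (editorial note, not from
the patch): module_alloc() is backed by __vmalloc_node_range() and
always returns page-aligned memory, so masking the address with
PAGE_MASK was a no-op:

	page = module_alloc(PAGE_SIZE);
	WARN_ON((unsigned long)page & ~PAGE_MASK);	/* never fires */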

Cc: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---
 arch/x86/kernel/kprobes/core.c | 24 ++++++++++++++++++++----
 1 file changed, 20 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index 4ba75afba527..fac692e36833 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -431,8 +431,20 @@ void *alloc_insn_page(void)
 	void *page;
 
 	page = module_alloc(PAGE_SIZE);
-	if (page)
-		set_memory_ro((unsigned long)page & PAGE_MASK, 1);
+	if (page == NULL)
+		return NULL;
+
+	/*
+	 * First make the page read-only, and only then make it executable
+	 * to prevent it from being W+X in between.
+	 */
+	set_memory_ro((unsigned long)page, 1);
+
+	/*
+	 * TODO: Once additional kernel code protection mechanisms are set, ensure
+	 * that the page was not maliciously altered and it is still zeroed.
+	 */
+	set_memory_x((unsigned long)page, 1);
 
 	return page;
 }
@@ -440,8 +452,12 @@ void *alloc_insn_page(void)
 /* Recover page to RW mode before releasing it */
 void free_insn_page(void *page)
 {
-	set_memory_nx((unsigned long)page & PAGE_MASK, 1);
-	set_memory_rw((unsigned long)page & PAGE_MASK, 1);
+	/*
+	 * First make the page non-executable, and only then make it
+	 * writable to prevent it from being W+X in between.
+	 */
+	set_memory_nx((unsigned long)page, 1);
+	set_memory_rw((unsigned long)page, 1);
 	module_memfree(page);
 }
 
-- 
2.17.1



* [PATCH 10/17] x86: avoid W^X being broken during modules loading
  2019-01-17  0:32 [PATCH 00/17] Merge text_poke fixes and executable lockdowns Rick Edgecombe
                   ` (8 preceding siblings ...)
  2019-01-17  0:32 ` [PATCH 09/17] x86/kprobes: Instruction pages initialization enhancements Rick Edgecombe
@ 2019-01-17  0:32 ` Rick Edgecombe
  2019-01-17  0:32 ` [PATCH 11/17] x86/jump-label: remove support for custom poker Rick Edgecombe
                   ` (7 subsequent siblings)
  17 siblings, 0 replies; 51+ messages in thread
From: Rick Edgecombe @ 2019-01-17  0:32 UTC (permalink / raw)
  To: Andy Lutomirski, Ingo Molnar
  Cc: linux-kernel, x86, hpa, Thomas Gleixner, Borislav Petkov,
	Nadav Amit, Dave Hansen, Peter Zijlstra, linux_dti,
	linux-integrity, linux-security-module, akpm, kernel-hardening,
	linux-mm, will.deacon, ard.biesheuvel, kristen, deneen.t.dock,
	Nadav Amit, Kees Cook, Dave Hansen, Masami Hiramatsu,
	Rick Edgecombe

From: Nadav Amit <namit@vmware.com>

When modules and BPF filters are loaded, there is a time window in
which some memory is both writable and executable. An attacker that has
already found another vulnerability (e.g., a dangling pointer) might be
able to exploit this behavior to overwrite kernel code. This patch
prevents having writable executable PTEs in this stage.

In addition, avoiding R+X mappings can also slightly simplify the
patching of module code during initialization (e.g., by alternatives and
static keys), as is done in the next patch. This was actually the main
motivation for this patch.

To avoid W+X mappings, set the pages initially as RW (NX), and only
after they have been set as RO, set them as X as well; see the sketch
below. Making them executable in a separate step avoids a window in
which one core still has the old PTE cached (hence writable) while
another sees the updated PTE (executable), which would break the W^X
protection.
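
The ordering can be summarized with a short sketch (set_memory_ro() and
set_memory_x() as used in the diff; addr and npages are placeholders):

	/* Pages start out RW and non-executable; W+X is never visible. */
	set_memory_ro(addr, npages);	/* 1) drop write while still NX */
	set_memory_x(addr, npages);	/* 2) only then allow execution */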

Cc: Kees Cook <keescook@chromium.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Suggested-by: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Nadav Amit <namit@vmware.com>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---
 arch/x86/kernel/alternative.c | 28 +++++++++++++++++++++-------
 arch/x86/kernel/module.c      |  2 +-
 include/linux/filter.h        |  4 ++--
 kernel/module.c               |  5 +++++
 4 files changed, 29 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 8fc4685f3117..18415e3b6000 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -667,15 +667,29 @@ void __init alternative_instructions(void)
  * handlers seeing an inconsistent instruction while you patch.
  */
 void *__init_or_module text_poke_early(void *addr, const void *opcode,
-					      size_t len)
+				       size_t len)
 {
 	unsigned long flags;
-	local_irq_save(flags);
-	memcpy(addr, opcode, len);
-	local_irq_restore(flags);
-	sync_core();
-	/* Could also do a CLFLUSH here to speed up CPU recovery; but
-	   that causes hangs on some VIA CPUs. */
+
+	if (static_cpu_has(X86_FEATURE_NX) &&
+	    is_module_text_address((unsigned long)addr)) {
+		/*
+		 * Modules text is marked initially as non-executable, so the
+		 * code cannot be running and speculative code-fetches are
+		 * prevented. We can just change the code.
+		 */
+		memcpy(addr, opcode, len);
+	} else {
+		local_irq_save(flags);
+		memcpy(addr, opcode, len);
+		local_irq_restore(flags);
+		sync_core();
+
+		/*
+		 * Could also do a CLFLUSH here to speed up CPU recovery; but
+		 * that causes hangs on some VIA CPUs.
+		 */
+	}
 	return addr;
 }
 
diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
index b052e883dd8c..cfa3106faee4 100644
--- a/arch/x86/kernel/module.c
+++ b/arch/x86/kernel/module.c
@@ -87,7 +87,7 @@ void *module_alloc(unsigned long size)
 	p = __vmalloc_node_range(size, MODULE_ALIGN,
 				    MODULES_VADDR + get_module_load_offset(),
 				    MODULES_END, GFP_KERNEL,
-				    PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
+				    PAGE_KERNEL, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 	if (p && (kasan_module_alloc(p, size) < 0)) {
 		vfree(p);
diff --git a/include/linux/filter.h b/include/linux/filter.h
index ad106d845b22..f18cd317faf8 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -483,7 +483,7 @@ struct bpf_prog {
 	u16			pages;		/* Number of allocated pages */
 	u16			jited:1,	/* Is our filter JIT'ed? */
 				jit_requested:1,/* archs need to JIT the prog */
-				undo_set_mem:1,	/* Passed set_memory_ro() checkpoint */
+				undo_set_mem:1, /* Passed set_memory_ro() checkpoint */
 				gpl_compatible:1, /* Is filter GPL compatible? */
 				cb_access:1,	/* Is control block accessed? */
 				dst_needed:1,	/* Do we need dst entry? */
@@ -681,7 +681,6 @@ bpf_ctx_narrow_access_ok(u32 off, u32 size, u32 size_default)
 
 static inline void bpf_prog_lock_ro(struct bpf_prog *fp)
 {
-	fp->undo_set_mem = 1;
 	set_memory_ro((unsigned long)fp, fp->pages);
 }
 
@@ -694,6 +693,7 @@ static inline void bpf_prog_unlock_ro(struct bpf_prog *fp)
 static inline void bpf_jit_binary_lock_ro(struct bpf_binary_header *hdr)
 {
 	set_memory_ro((unsigned long)hdr, hdr->pages);
+	set_memory_x((unsigned long)hdr, hdr->pages);
 }
 
 static inline void bpf_jit_binary_unlock_ro(struct bpf_binary_header *hdr)
diff --git a/kernel/module.c b/kernel/module.c
index 2ad1b5239910..ae1b77da6a20 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -1950,8 +1950,13 @@ void module_enable_ro(const struct module *mod, bool after_init)
 		return;
 
 	frob_text(&mod->core_layout, set_memory_ro);
+	frob_text(&mod->core_layout, set_memory_x);
+
 	frob_rodata(&mod->core_layout, set_memory_ro);
+
 	frob_text(&mod->init_layout, set_memory_ro);
+	frob_text(&mod->init_layout, set_memory_x);
+
 	frob_rodata(&mod->init_layout, set_memory_ro);
 
 	if (after_init)
-- 
2.17.1



* [PATCH 11/17] x86/jump-label: remove support for custom poker
  2019-01-17  0:32 [PATCH 00/17] Merge text_poke fixes and executable lockdowns Rick Edgecombe
                   ` (9 preceding siblings ...)
  2019-01-17  0:32 ` [PATCH 10/17] x86: avoid W^X being broken during modules loading Rick Edgecombe
@ 2019-01-17  0:32 ` Rick Edgecombe
  2019-01-17  0:32 ` [PATCH 12/17] x86/alternative: Remove the return value of text_poke_*() Rick Edgecombe
                   ` (6 subsequent siblings)
  17 siblings, 0 replies; 51+ messages in thread
From: Rick Edgecombe @ 2019-01-17  0:32 UTC (permalink / raw)
  To: Andy Lutomirski, Ingo Molnar
  Cc: linux-kernel, x86, hpa, Thomas Gleixner, Borislav Petkov,
	Nadav Amit, Dave Hansen, Peter Zijlstra, linux_dti,
	linux-integrity, linux-security-module, akpm, kernel-hardening,
	linux-mm, will.deacon, ard.biesheuvel, kristen, deneen.t.dock,
	Nadav Amit, Kees Cook, Dave Hansen, Masami Hiramatsu,
	Rick Edgecombe

From: Nadav Amit <namit@vmware.com>

There are only two types of poking: early and breakpoint-based. The use
of a function pointer to perform poking complicates the code and is
probably inefficient due to the use of indirect branches; remove support
for a custom poker.

Cc: Andy Lutomirski <luto@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---
 arch/x86/kernel/jump_label.c | 24 ++++++++----------------
 1 file changed, 8 insertions(+), 16 deletions(-)

diff --git a/arch/x86/kernel/jump_label.c b/arch/x86/kernel/jump_label.c
index e36cfec0f35e..427facef8aff 100644
--- a/arch/x86/kernel/jump_label.c
+++ b/arch/x86/kernel/jump_label.c
@@ -37,7 +37,6 @@ static void bug_at(unsigned char *ip, int line)
 
 static void __ref __jump_label_transform(struct jump_entry *entry,
 					 enum jump_label_type type,
-					 void *(*poker)(void *, const void *, size_t),
 					 int init)
 {
 	union jump_code_union jmp;
@@ -50,14 +49,6 @@ static void __ref __jump_label_transform(struct jump_entry *entry,
 	jmp.offset = jump_entry_target(entry) -
 		     (jump_entry_code(entry) + JUMP_LABEL_NOP_SIZE);
 
-	/*
-	 * As long as we're UP and not yet marked RO, we can use
-	 * text_poke_early; SYSTEM_BOOTING guarantees both, as we switch to
-	 * SYSTEM_SCHEDULING before going either.
-	 */
-	if (system_state == SYSTEM_BOOTING)
-		poker = text_poke_early;
-
 	if (type == JUMP_LABEL_JMP) {
 		if (init) {
 			expect = default_nop; line = __LINE__;
@@ -80,16 +71,17 @@ static void __ref __jump_label_transform(struct jump_entry *entry,
 		bug_at((void *)jump_entry_code(entry), line);
 
 	/*
-	 * Make text_poke_bp() a default fallback poker.
+	 * As long as we're UP and not yet marked RO, we can use
+	 * text_poke_early; SYSTEM_BOOTING guarantees both, as we switch to
+	 * SYSTEM_SCHEDULING before going either.
 	 *
 	 * At the time the change is being done, just ignore whether we
 	 * are doing nop -> jump or jump -> nop transition, and assume
 	 * always nop being the 'currently valid' instruction
-	 *
 	 */
-	if (poker) {
-		(*poker)((void *)jump_entry_code(entry), code,
-			 JUMP_LABEL_NOP_SIZE);
+	if (init || system_state == SYSTEM_BOOTING) {
+		text_poke_early((void *)jump_entry_code(entry), code,
+				JUMP_LABEL_NOP_SIZE);
 		return;
 	}
 
@@ -101,7 +93,7 @@ void arch_jump_label_transform(struct jump_entry *entry,
 			       enum jump_label_type type)
 {
 	mutex_lock(&text_mutex);
-	__jump_label_transform(entry, type, NULL, 0);
+	__jump_label_transform(entry, type, 0);
 	mutex_unlock(&text_mutex);
 }
 
@@ -131,5 +123,5 @@ __init_or_module void arch_jump_label_transform_static(struct jump_entry *entry,
 			jlstate = JL_STATE_NO_UPDATE;
 	}
 	if (jlstate == JL_STATE_UPDATE)
-		__jump_label_transform(entry, type, text_poke_early, 1);
+		__jump_label_transform(entry, type, 1);
 }
-- 
2.17.1



* [PATCH 12/17] x86/alternative: Remove the return value of text_poke_*()
  2019-01-17  0:32 [PATCH 00/17] Merge text_poke fixes and executable lockdowns Rick Edgecombe
                   ` (10 preceding siblings ...)
  2019-01-17  0:32 ` [PATCH 11/17] x86/jump-label: remove support for custom poker Rick Edgecombe
@ 2019-01-17  0:32 ` Rick Edgecombe
  2019-01-17  0:32 ` [PATCH 13/17] Add set_alias_ function and x86 implementation Rick Edgecombe
                   ` (5 subsequent siblings)
  17 siblings, 0 replies; 51+ messages in thread
From: Rick Edgecombe @ 2019-01-17  0:32 UTC (permalink / raw)
  To: Andy Lutomirski, Ingo Molnar
  Cc: linux-kernel, x86, hpa, Thomas Gleixner, Borislav Petkov,
	Nadav Amit, Dave Hansen, Peter Zijlstra, linux_dti,
	linux-integrity, linux-security-module, akpm, kernel-hardening,
	linux-mm, will.deacon, ard.biesheuvel, kristen, deneen.t.dock,
	Nadav Amit, Kees Cook, Dave Hansen, Masami Hiramatsu,
	Rick Edgecombe

From: Nadav Amit <namit@vmware.com>

The return value of text_poke_early() and text_poke_bp() is useless.
Remove it.

Cc: Andy Lutomirski <luto@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---
 arch/x86/include/asm/text-patching.h |  4 ++--
 arch/x86/kernel/alternative.c        | 11 ++++-------
 2 files changed, 6 insertions(+), 9 deletions(-)

diff --git a/arch/x86/include/asm/text-patching.h b/arch/x86/include/asm/text-patching.h
index a75eed841eed..c90678fd391a 100644
--- a/arch/x86/include/asm/text-patching.h
+++ b/arch/x86/include/asm/text-patching.h
@@ -18,7 +18,7 @@ static inline void apply_paravirt(struct paravirt_patch_site *start,
 #define __parainstructions_end	NULL
 #endif
 
-extern void *text_poke_early(void *addr, const void *opcode, size_t len);
+extern void text_poke_early(void *addr, const void *opcode, size_t len);
 
 /*
  * Clear and restore the kernel write-protection flag on the local CPU.
@@ -37,7 +37,7 @@ extern void *text_poke_early(void *addr, const void *opcode, size_t len);
 extern void *text_poke(void *addr, const void *opcode, size_t len);
 extern void *text_poke_kgdb(void *addr, const void *opcode, size_t len);
 extern int poke_int3_handler(struct pt_regs *regs);
-extern void *text_poke_bp(void *addr, const void *opcode, size_t len, void *handler);
+extern void text_poke_bp(void *addr, const void *opcode, size_t len, void *handler);
 extern int after_bootmem;
 extern __ro_after_init struct mm_struct *poking_mm;
 extern __ro_after_init unsigned long poking_addr;
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 18415e3b6000..2740ad2c6f21 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -264,7 +264,7 @@ static void __init_or_module add_nops(void *insns, unsigned int len)
 
 extern struct alt_instr __alt_instructions[], __alt_instructions_end[];
 extern s32 __smp_locks[], __smp_locks_end[];
-void *text_poke_early(void *addr, const void *opcode, size_t len);
+void text_poke_early(void *addr, const void *opcode, size_t len);
 
 /*
  * Are we looking at a near JMP with a 1 or 4-byte displacement.
@@ -666,8 +666,8 @@ void __init alternative_instructions(void)
  * instructions. And on the local CPU you need to be protected again NMI or MCE
  * handlers seeing an inconsistent instruction while you patch.
  */
-void *__init_or_module text_poke_early(void *addr, const void *opcode,
-				       size_t len)
+void __init_or_module text_poke_early(void *addr, const void *opcode,
+				      size_t len)
 {
 	unsigned long flags;
 
@@ -690,7 +690,6 @@ void *__init_or_module text_poke_early(void *addr, const void *opcode,
 		 * that causes hangs on some VIA CPUs.
 		 */
 	}
-	return addr;
 }
 
 __ro_after_init struct mm_struct *poking_mm;
@@ -893,7 +892,7 @@ int poke_int3_handler(struct pt_regs *regs)
  *	  replacing opcode
  *	- sync cores
  */
-void *text_poke_bp(void *addr, const void *opcode, size_t len, void *handler)
+void text_poke_bp(void *addr, const void *opcode, size_t len, void *handler)
 {
 	unsigned char int3 = 0xcc;
 
@@ -935,7 +934,5 @@ void *text_poke_bp(void *addr, const void *opcode, size_t len, void *handler)
 	 * the writing of the new instruction.
 	 */
 	bp_patching_in_progress = false;
-
-	return addr;
 }
 
-- 
2.17.1



* [PATCH 13/17] Add set_alias_ function and x86 implementation
  2019-01-17  0:32 [PATCH 00/17] Merge text_poke fixes and executable lockdowns Rick Edgecombe
                   ` (11 preceding siblings ...)
  2019-01-17  0:32 ` [PATCH 12/17] x86/alternative: Remove the return value of text_poke_*() Rick Edgecombe
@ 2019-01-17  0:32 ` Rick Edgecombe
  2019-01-17  0:32 ` [PATCH 14/17] mm: Make hibernate handle unmapped pages Rick Edgecombe
                   ` (4 subsequent siblings)
  17 siblings, 0 replies; 51+ messages in thread
From: Rick Edgecombe @ 2019-01-17  0:32 UTC (permalink / raw)
  To: Andy Lutomirski, Ingo Molnar
  Cc: linux-kernel, x86, hpa, Thomas Gleixner, Borislav Petkov,
	Nadav Amit, Dave Hansen, Peter Zijlstra, linux_dti,
	linux-integrity, linux-security-module, akpm, kernel-hardening,
	linux-mm, will.deacon, ard.biesheuvel, kristen, deneen.t.dock,
	Rick Edgecombe

This adds two new functions, set_alias_default_noflush and set_alias_nv_noflush,
for setting the alias mapping for a page to its default valid permissions or
to an invalid state that cannot be cached in a TLB, respectively. These
functions do not flush the TLB.

Note, __kernel_map_pages does something similar but flushes the TLB and doesn't
reset the permission bits to default on all architectures.

There is also an ARCH config ARCH_HAS_SET_ALIAS for specifying whether these
have an actual implementation or a default empty one.
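
For illustration only, a minimal sketch of the intended calling pattern,
assuming a hypothetical caller that wants to briefly invalidate a page's
direct map alias (cycle_page_alias is not part of this patch):

/*
 * Invalidate the direct map alias of @page, flush once, then restore the
 * default permissions. Neither set_alias_ call flushes on its own, so
 * stale TLB entries are only removed by the explicit flush between the
 * two calls.
 */
static void cycle_page_alias(struct page *page)
{
	unsigned long addr = (unsigned long)page_address(page);

	set_alias_nv_noflush(page);
	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
	set_alias_default_noflush(page);
}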

Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---
 arch/Kconfig                      |  4 ++++
 arch/x86/Kconfig                  |  1 +
 arch/x86/include/asm/set_memory.h |  3 +++
 arch/x86/mm/pageattr.c            | 14 +++++++++++---
 include/linux/set_memory.h        | 10 ++++++++++
 5 files changed, 29 insertions(+), 3 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index 4cfb6de48f79..4ef9db190f2d 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -249,6 +249,10 @@ config ARCH_HAS_FORTIFY_SOURCE
 config ARCH_HAS_SET_MEMORY
 	bool
 
+# Select if arch has all set_alias_nv/default() functions
+config ARCH_HAS_SET_ALIAS
+	bool
+
 # Select if arch init_task must go in the __init_task_data section
 config ARCH_TASK_STRUCT_ON_STACK
        bool
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 15af091611e2..14ad28769256 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -66,6 +66,7 @@ config X86
 	select ARCH_HAS_UACCESS_FLUSHCACHE	if X86_64
 	select ARCH_HAS_UACCESS_MCSAFE		if X86_64 && X86_MCE
 	select ARCH_HAS_SET_MEMORY
+	select ARCH_HAS_SET_ALIAS
 	select ARCH_HAS_STRICT_KERNEL_RWX
 	select ARCH_HAS_STRICT_MODULE_RWX
 	select ARCH_HAS_SYNC_CORE_BEFORE_USERMODE
diff --git a/arch/x86/include/asm/set_memory.h b/arch/x86/include/asm/set_memory.h
index 07a25753e85c..2ef4e4222df1 100644
--- a/arch/x86/include/asm/set_memory.h
+++ b/arch/x86/include/asm/set_memory.h
@@ -85,6 +85,9 @@ int set_pages_nx(struct page *page, int numpages);
 int set_pages_ro(struct page *page, int numpages);
 int set_pages_rw(struct page *page, int numpages);
 
+int set_alias_nv_noflush(struct page *page);
+int set_alias_default_noflush(struct page *page);
+
 extern int kernel_set_to_readonly;
 void set_kernel_text_rw(void);
 void set_kernel_text_ro(void);
diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index 4f8972311a77..3a51915a1410 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -2209,8 +2209,6 @@ int set_pages_rw(struct page *page, int numpages)
 	return set_memory_rw(addr, numpages);
 }
 
-#ifdef CONFIG_DEBUG_PAGEALLOC
-
 static int __set_pages_p(struct page *page, int numpages)
 {
 	unsigned long tempaddr = (unsigned long) page_address(page);
@@ -2249,6 +2247,17 @@ static int __set_pages_np(struct page *page, int numpages)
 	return __change_page_attr_set_clr(&cpa, 0);
 }
 
+int set_alias_nv_noflush(struct page *page)
+{
+	return __set_pages_np(page, 1);
+}
+
+int set_alias_default_noflush(struct page *page)
+{
+	return __set_pages_p(page, 1);
+}
+
+#ifdef CONFIG_DEBUG_PAGEALLOC
 void __kernel_map_pages(struct page *page, int numpages, int enable)
 {
 	if (PageHighMem(page))
@@ -2282,7 +2291,6 @@ void __kernel_map_pages(struct page *page, int numpages, int enable)
 }
 
 #ifdef CONFIG_HIBERNATION
-
 bool kernel_page_present(struct page *page)
 {
 	unsigned int level;
diff --git a/include/linux/set_memory.h b/include/linux/set_memory.h
index 2a986d282a97..d19481ac6a8f 100644
--- a/include/linux/set_memory.h
+++ b/include/linux/set_memory.h
@@ -10,6 +10,16 @@
 
 #ifdef CONFIG_ARCH_HAS_SET_MEMORY
 #include <asm/set_memory.h>
+#ifndef CONFIG_ARCH_HAS_SET_ALIAS
+static inline int set_alias_nv_noflush(struct page *page)
+{
+	return 0;
+}
+static inline int set_alias_default_noflush(struct page *page)
+{
+	return 0;
+}
+#endif
 #else
 static inline int set_memory_ro(unsigned long addr, int numpages) { return 0; }
 static inline int set_memory_rw(unsigned long addr, int numpages) { return 0; }
-- 
2.17.1



* [PATCH 14/17] mm: Make hibernate handle unmapped pages
  2019-01-17  0:32 [PATCH 00/17] Merge text_poke fixes and executable lockdowns Rick Edgecombe
                   ` (12 preceding siblings ...)
  2019-01-17  0:32 ` [PATCH 13/17] Add set_alias_ function and x86 implementation Rick Edgecombe
@ 2019-01-17  0:32 ` Rick Edgecombe
  2019-01-17  9:39   ` Pavel Machek
  2019-01-17  0:32 ` [PATCH 15/17] vmalloc: New flags for safe vfree on special perms Rick Edgecombe
                   ` (3 subsequent siblings)
  17 siblings, 1 reply; 51+ messages in thread
From: Rick Edgecombe @ 2019-01-17  0:32 UTC (permalink / raw)
  To: Andy Lutomirski, Ingo Molnar
  Cc: linux-kernel, x86, hpa, Thomas Gleixner, Borislav Petkov,
	Nadav Amit, Dave Hansen, Peter Zijlstra, linux_dti,
	linux-integrity, linux-security-module, akpm, kernel-hardening,
	linux-mm, will.deacon, ard.biesheuvel, kristen, deneen.t.dock,
	Rick Edgecombe, Rafael J. Wysocki, Pavel Machek

For architectures with CONFIG_ARCH_HAS_SET_ALIAS, pages can be unmapped
briefly on the directmap, even when CONFIG_DEBUG_PAGEALLOC is not configured.
So this changes kernel_map_pages and kernel_page_present to be defined when
CONFIG_ARCH_HAS_SET_ALIAS is defined as well. It also changes places
(page_alloc.c) where those functions were assumed to be implemented only when
CONFIG_DEBUG_PAGEALLOC is defined.

So now when CONFIG_ARCH_HAS_SET_ALIAS=y, hibernate will handle not-present
pages when saving. Previously this was already done when CONFIG_DEBUG_PAGEALLOC
was configured. It does not appear to have a large impact on hibernate
performance.

Before:
[    4.670938] PM: Wrote 171996 kbytes in 0.21 seconds (819.02 MB/s)

After:
[    4.504714] PM: Wrote 178932 kbytes in 0.22 seconds (813.32 MB/s)
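
For reference, the way hibernate copies a page that may be unmapped in the
direct map is roughly the following sketch (simplified from the snapshot
code; the helper name is illustrative):

/*
 * Copy a source page that may be not-present in the direct map:
 * temporarily map it back, copy, then unmap it again.
 */
static void copy_maybe_unmapped_page(struct page *dst, struct page *src)
{
	if (kernel_page_present(src)) {
		copy_page(page_address(dst), page_address(src));
	} else {
		kernel_map_pages(src, 1, 1);
		copy_page(page_address(dst), page_address(src));
		kernel_map_pages(src, 1, 0);
	}
}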

Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---
 arch/x86/mm/pageattr.c |  4 ----
 include/linux/mm.h     | 18 ++++++------------
 mm/page_alloc.c        |  6 ++++--
 3 files changed, 10 insertions(+), 18 deletions(-)

diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index 3a51915a1410..717bdc188aab 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -2257,7 +2257,6 @@ int set_alias_default_noflush(struct page *page)
 	return __set_pages_p(page, 1);
 }
 
-#ifdef CONFIG_DEBUG_PAGEALLOC
 void __kernel_map_pages(struct page *page, int numpages, int enable)
 {
 	if (PageHighMem(page))
@@ -2302,11 +2301,8 @@ bool kernel_page_present(struct page *page)
 	pte = lookup_address((unsigned long)page_address(page), &level);
 	return (pte_val(*pte) & _PAGE_PRESENT);
 }
-
 #endif /* CONFIG_HIBERNATION */
 
-#endif /* CONFIG_DEBUG_PAGEALLOC */
-
 int __init kernel_map_pages_in_pgd(pgd_t *pgd, u64 pfn, unsigned long address,
 				   unsigned numpages, unsigned long page_flags)
 {
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 80bb6408fe73..b362a280a919 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2642,37 +2642,31 @@ static inline void kernel_poison_pages(struct page *page, int numpages,
 					int enable) { }
 #endif
 
-#ifdef CONFIG_DEBUG_PAGEALLOC
 extern bool _debug_pagealloc_enabled;
-extern void __kernel_map_pages(struct page *page, int numpages, int enable);
 
 static inline bool debug_pagealloc_enabled(void)
 {
-	return _debug_pagealloc_enabled;
+	return IS_ENABLED(CONFIG_DEBUG_PAGEALLOC) && _debug_pagealloc_enabled;
 }
 
+#if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_ARCH_HAS_SET_ALIAS)
+extern void __kernel_map_pages(struct page *page, int numpages, int enable);
+
 static inline void
 kernel_map_pages(struct page *page, int numpages, int enable)
 {
-	if (!debug_pagealloc_enabled())
-		return;
-
 	__kernel_map_pages(page, numpages, enable);
 }
 #ifdef CONFIG_HIBERNATION
 extern bool kernel_page_present(struct page *page);
 #endif	/* CONFIG_HIBERNATION */
-#else	/* CONFIG_DEBUG_PAGEALLOC */
+#else	/* CONFIG_DEBUG_PAGEALLOC || CONFIG_ARCH_HAS_SET_ALIAS */
 static inline void
 kernel_map_pages(struct page *page, int numpages, int enable) {}
 #ifdef CONFIG_HIBERNATION
 static inline bool kernel_page_present(struct page *page) { return true; }
 #endif	/* CONFIG_HIBERNATION */
-static inline bool debug_pagealloc_enabled(void)
-{
-	return false;
-}
-#endif	/* CONFIG_DEBUG_PAGEALLOC */
+#endif	/* CONFIG_DEBUG_PAGEALLOC || CONFIG_ARCH_HAS_SET_ALIAS */
 
 #ifdef __HAVE_ARCH_GATE_AREA
 extern struct vm_area_struct *get_gate_vma(struct mm_struct *mm);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d295c9bc01a8..c10a0d484aa6 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1074,7 +1074,8 @@ static __always_inline bool free_pages_prepare(struct page *page,
 	}
 	arch_free_page(page, order);
 	kernel_poison_pages(page, 1 << order, 0);
-	kernel_map_pages(page, 1 << order, 0);
+	if (debug_pagealloc_enabled())
+		kernel_map_pages(page, 1 << order, 0);
 	kasan_free_nondeferred_pages(page, order);
 
 	return true;
@@ -1944,7 +1945,8 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
 	set_page_refcounted(page);
 
 	arch_alloc_page(page, order);
-	kernel_map_pages(page, 1 << order, 1);
+	if (debug_pagealloc_enabled())
+		kernel_map_pages(page, 1 << order, 1);
 	kernel_poison_pages(page, 1 << order, 1);
 	kasan_alloc_pages(page, order);
 	set_page_owner(page, order, gfp_flags);
-- 
2.17.1



* [PATCH 15/17] vmalloc: New flags for safe vfree on special perms
  2019-01-17  0:32 [PATCH 00/17] Merge text_poke fixes and executable lockdowns Rick Edgecombe
                   ` (13 preceding siblings ...)
  2019-01-17  0:32 ` [PATCH 14/17] mm: Make hibernate handle unmapped pages Rick Edgecombe
@ 2019-01-17  0:32 ` Rick Edgecombe
  2019-01-17  0:32 ` [PATCH 16/17] Plug in new special vfree flag Rick Edgecombe
                   ` (2 subsequent siblings)
  17 siblings, 0 replies; 51+ messages in thread
From: Rick Edgecombe @ 2019-01-17  0:32 UTC (permalink / raw)
  To: Andy Lutomirski, Ingo Molnar
  Cc: linux-kernel, x86, hpa, Thomas Gleixner, Borislav Petkov,
	Nadav Amit, Dave Hansen, Peter Zijlstra, linux_dti,
	linux-integrity, linux-security-module, akpm, kernel-hardening,
	linux-mm, will.deacon, ard.biesheuvel, kristen, deneen.t.dock,
	Rick Edgecombe

This adds a new flag, VM_HAS_SPECIAL_PERMS, for enabling vfree operations to
immediately clear executable TLB entries to freed pages, and to handle freeing
memory with special permissions. It also takes care of resetting the direct map
permissions for the pages being unmapped. So this flag is useful for any kind
of memory with elevated permissions, or where there can be related permission
changes on the directmap. Today this is RO+X and RO memory.

Although this enables directly vfreeing RO memory now, RO memory cannot be
freed in an interrupt because the allocation itself is used as a node on the
deferred free list. So when RO memory needs to be freed in an interrupt, the
code doing the vfree needs to have its own work queue, as was the case before
the deferred vfree list handling was added. Today there is only one case where
this happens.

For architectures with set_alias_ implementations this whole operation can be
done with one TLB flush when centralized like this. For others with directmap
permissions, currently only arm64, a backup method using set_memory functions
is used to reset the directmap. When arm64 adds set_alias_ functions, this
backup can be removed.

When the TLB is flushed to remove both the vmalloc range mapping and the
stale direct map permissions, the lazy purge operation can be done at the
same time to try to save a TLB flush later. However, today vm_unmap_aliases
could flush a TLB range that does not include the directmap. So a helper is
added with extra parameters that allow both the vmalloc address range and the
direct mapping to be flushed during this operation. The behavior of the
normal vm_unmap_aliases function is unchanged.
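
The resulting free path for special memory, condensed from the
vm_remove_mappings code below, is:

/*
 * 1. Invalidate the direct map aliases without flushing,
 * 2. flush the TLB once over both the vmalloc range and the direct map
 *    range,
 * 3. restore default direct map permissions, again without flushing.
 */
set_area_alias(area, set_alias_nv_noflush);
_vm_unmap_aliases(start, end, 1);
set_area_alias(area, set_alias_default_noflush);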

Suggested-by: Dave Hansen <dave.hansen@intel.com>
Suggested-by: Andy Lutomirski <luto@kernel.org>
Suggested-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---
 include/linux/vmalloc.h |  13 +++++
 mm/vmalloc.c            | 122 +++++++++++++++++++++++++++++++++-------
 2 files changed, 116 insertions(+), 19 deletions(-)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 398e9c95cd61..9f643f917360 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -21,6 +21,11 @@ struct notifier_block;		/* in notifier.h */
 #define VM_UNINITIALIZED	0x00000020	/* vm_struct is not fully initialized */
 #define VM_NO_GUARD		0x00000040      /* don't add guard page */
 #define VM_KASAN		0x00000080      /* has allocated kasan shadow memory */
+/*
+ * Memory with VM_HAS_SPECIAL_PERMS cannot be freed in an interrupt or with
+ * vfree_atomic.
+ */
+#define VM_HAS_SPECIAL_PERMS	0x00000200      /* Reset directmap and flush TLB on unmap */
 /* bits [20..32] reserved for arch specific ioremap internals */
 
 /*
@@ -135,6 +140,14 @@ extern struct vm_struct *__get_vm_area_caller(unsigned long size,
 extern struct vm_struct *remove_vm_area(const void *addr);
 extern struct vm_struct *find_vm_area(const void *addr);
 
+static inline void set_vm_special(void *addr)
+{
+	struct vm_struct *vm = find_vm_area(addr);
+
+	if (vm)
+		vm->flags |= VM_HAS_SPECIAL_PERMS;
+}
+
 extern int map_vm_area(struct vm_struct *area, pgprot_t prot,
 			struct page **pages);
 #ifdef CONFIG_MMU
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 871e41c55e23..d459b5b9649b 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -18,6 +18,7 @@
 #include <linux/interrupt.h>
 #include <linux/proc_fs.h>
 #include <linux/seq_file.h>
+#include <linux/set_memory.h>
 #include <linux/debugobjects.h>
 #include <linux/kallsyms.h>
 #include <linux/list.h>
@@ -1055,24 +1056,11 @@ static void vb_free(const void *addr, unsigned long size)
 		spin_unlock(&vb->lock);
 }
 
-/**
- * vm_unmap_aliases - unmap outstanding lazy aliases in the vmap layer
- *
- * The vmap/vmalloc layer lazily flushes kernel virtual mappings primarily
- * to amortize TLB flushing overheads. What this means is that any page you
- * have now, may, in a former life, have been mapped into kernel virtual
- * address by the vmap layer and so there might be some CPUs with TLB entries
- * still referencing that page (additional to the regular 1:1 kernel mapping).
- *
- * vm_unmap_aliases flushes all such lazy mappings. After it returns, we can
- * be sure that none of the pages we have control over will have any aliases
- * from the vmap layer.
- */
-void vm_unmap_aliases(void)
+static void _vm_unmap_aliases(unsigned long start, unsigned long end,
+				int must_flush)
 {
-	unsigned long start = ULONG_MAX, end = 0;
 	int cpu;
-	int flush = 0;
+	int flush = must_flush;
 
 	if (unlikely(!vmap_initialized))
 		return;
@@ -1109,6 +1097,27 @@ void vm_unmap_aliases(void)
 		flush_tlb_kernel_range(start, end);
 	mutex_unlock(&vmap_purge_lock);
 }
+
+/**
+ * vm_unmap_aliases - unmap outstanding lazy aliases in the vmap layer
+ *
+ * The vmap/vmalloc layer lazily flushes kernel virtual mappings primarily
+ * to amortize TLB flushing overheads. What this means is that any page you
+ * have now, may, in a former life, have been mapped into kernel virtual
+ * address by the vmap layer and so there might be some CPUs with TLB entries
+ * still referencing that page (additional to the regular 1:1 kernel mapping).
+ *
+ * vm_unmap_aliases flushes all such lazy mappings. After it returns, we can
+ * be sure that none of the pages we have control over will have any aliases
+ * from the vmap layer.
+ */
+void vm_unmap_aliases(void)
+{
+	unsigned long start = ULONG_MAX, end = 0;
+	int must_flush = 0;
+
+	_vm_unmap_aliases(start, end, must_flush);
+}
 EXPORT_SYMBOL_GPL(vm_unmap_aliases);
 
 /**
@@ -1494,6 +1503,79 @@ struct vm_struct *remove_vm_area(const void *addr)
 	return NULL;
 }
 
+static inline void set_area_alias(const struct vm_struct *area,
+			int (*set_alias)(struct page *page))
+{
+	int i;
+
+	for (i = 0; i < area->nr_pages; i++) {
+		unsigned long addr =
+			(unsigned long)page_address(area->pages[i]);
+
+		if (addr)
+			set_alias(area->pages[i]);
+	}
+}
+
+/* This handles removing and resetting vm mappings related to the vm_struct. */
+static void vm_remove_mappings(struct vm_struct *area, int deallocate_pages)
+{
+	unsigned long addr = (unsigned long)area->addr;
+	unsigned long start = ULONG_MAX, end = 0;
+	int special = area->flags & VM_HAS_SPECIAL_PERMS;
+	int i;
+
+	/*
+	 * The below block can be removed when all architectures that have
+	 * direct map permissions also have set_alias_ implementations. This is
+	 * to do resetting on the directmap for any special permissions (today
+	 * only X), without leaving a RW+X window.
+	 */
+	if (special && !IS_ENABLED(CONFIG_ARCH_HAS_SET_ALIAS)) {
+		set_memory_nx(addr, area->nr_pages);
+		set_memory_rw(addr, area->nr_pages);
+	}
+
+	remove_vm_area(area->addr);
+
+	/* If this is not special memory, we can skip the below. */
+	if (!special)
+		return;
+
+	/*
+	 * If we are not deallocating pages, we can just do the flush of the VM
+	 * area and return.
+	 */
+	if (!deallocate_pages) {
+		vm_unmap_aliases();
+		return;
+	}
+
+	/*
+	 * If we are here, we need to flush the vm mapping and reset the direct
+	 * map.
+	 * First find the start and end range of the direct mappings to make
+	 * sure the vm_unmap_aliases flush includes the direct map.
+	 */
+	for (i = 0; i < area->nr_pages; i++) {
+		unsigned long addr =
+			(unsigned long)page_address(area->pages[i]);
+		if (addr) {
+			start = min(addr, start);
+			end = max(addr, end);
+		}
+	}
+
+	/*
+	 * First we set direct map to something not valid so that it won't be
+	 * cached if there are any accesses after the TLB flush, then we flush
+	 * the TLB, and reset the directmap permissions to the default.
+	 */
+	set_area_alias(area, set_alias_nv_noflush);
+	_vm_unmap_aliases(start, end, 1);
+	set_area_alias(area, set_alias_default_noflush);
+}
+
 static void __vunmap(const void *addr, int deallocate_pages)
 {
 	struct vm_struct *area;
@@ -1515,7 +1597,8 @@ static void __vunmap(const void *addr, int deallocate_pages)
 	debug_check_no_locks_freed(area->addr, get_vm_area_size(area));
 	debug_check_no_obj_freed(area->addr, get_vm_area_size(area));
 
-	remove_vm_area(addr);
+	vm_remove_mappings(area, deallocate_pages);
+
 	if (deallocate_pages) {
 		int i;
 
@@ -1925,8 +2008,9 @@ EXPORT_SYMBOL(vzalloc_node);
 
 void *vmalloc_exec(unsigned long size)
 {
-	return __vmalloc_node(size, 1, GFP_KERNEL, PAGE_KERNEL_EXEC,
-			      NUMA_NO_NODE, __builtin_return_address(0));
+	return __vmalloc_node_range(size, 1, VMALLOC_START, VMALLOC_END,
+			GFP_KERNEL, PAGE_KERNEL_EXEC, VM_HAS_SPECIAL_PERMS,
+			NUMA_NO_NODE, __builtin_return_address(0));
 }
 
 #if defined(CONFIG_64BIT) && defined(CONFIG_ZONE_DMA32)
-- 
2.17.1



* [PATCH 16/17] Plug in new special vfree flag
  2019-01-17  0:32 [PATCH 00/17] Merge text_poke fixes and executable lockdowns Rick Edgecombe
                   ` (14 preceding siblings ...)
  2019-01-17  0:32 ` [PATCH 15/17] vmalloc: New flags for safe vfree on special perms Rick Edgecombe
@ 2019-01-17  0:32 ` Rick Edgecombe
  2019-02-06 16:23   ` Steven Rostedt
  2019-01-17  0:32 ` [PATCH 17/17] module: Prevent module removal racing with text_poke() Rick Edgecombe
  2019-01-17 13:21 ` [PATCH 00/17] Merge text_poke fixes and executable lockdowns Peter Zijlstra
  17 siblings, 1 reply; 51+ messages in thread
From: Rick Edgecombe @ 2019-01-17  0:32 UTC (permalink / raw)
  To: Andy Lutomirski, Ingo Molnar
  Cc: linux-kernel, x86, hpa, Thomas Gleixner, Borislav Petkov,
	Nadav Amit, Dave Hansen, Peter Zijlstra, linux_dti,
	linux-integrity, linux-security-module, akpm, kernel-hardening,
	linux-mm, will.deacon, ard.biesheuvel, kristen, deneen.t.dock,
	Rick Edgecombe, Rusty Russell, Masami Hiramatsu, Daniel Borkmann,
	Alexei Starovoitov, Jessica Yu, Steven Rostedt,
	Paul E . McKenney

Add the new flag for handling the freeing of special-permissioned memory in
vmalloc, and remove the places where memory was set RW before freeing, which
is no longer needed.

In kprobes, bpf and ftrace this just adds the flag and removes the now
unneeded set_memory_ calls before calling vfree.

In modules, the freeing of init sections is moved to a work queue, since
freeing of RO memory is not supported in an interrupt by vmalloc.
Instead of call_rcu, it now uses synchronize_rcu() in the work queue.
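
The opt-in for a subsystem is small. A sketch of the allocation side, based
on the kprobes change below (the function name is illustrative):

/*
 * Allocate a page for code, mark its vm_struct special so that vfree()
 * will reset the direct map and flush the TLB, then apply the final
 * RO+X permissions. Freeing later is then just module_memfree()/vfree()
 * with no set_memory_nx/rw calls beforehand.
 */
static void *alloc_special_exec_page(void)
{
	void *page = module_alloc(PAGE_SIZE);

	if (!page)
		return NULL;

	set_vm_special(page);
	set_memory_ro((unsigned long)page, 1);
	set_memory_x((unsigned long)page, 1);
	return page;
}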

Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Jessica Yu <jeyu@kernel.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Paul E. McKenney <paulmck@linux.ibm.com>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---
 arch/x86/kernel/ftrace.c       |  6 +--
 arch/x86/kernel/kprobes/core.c |  7 +---
 include/linux/filter.h         | 16 ++-----
 kernel/bpf/core.c              |  1 -
 kernel/module.c                | 77 +++++++++++++++++-----------------
 5 files changed, 45 insertions(+), 62 deletions(-)

diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
index eb4a1937e72c..47597e028346 100644
--- a/arch/x86/kernel/ftrace.c
+++ b/arch/x86/kernel/ftrace.c
@@ -692,10 +692,6 @@ static inline void *alloc_tramp(unsigned long size)
 }
 static inline void tramp_free(void *tramp, int size)
 {
-	int npages = PAGE_ALIGN(size) >> PAGE_SHIFT;
-
-	set_memory_nx((unsigned long)tramp, npages);
-	set_memory_rw((unsigned long)tramp, npages);
 	module_memfree(tramp);
 }
 #else
@@ -820,6 +816,8 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
 	/* ALLOC_TRAMP flags lets us know we created it */
 	ops->flags |= FTRACE_OPS_FL_ALLOC_TRAMP;
 
+	set_vm_special(trampoline);
+
 	/*
 	 * Module allocation needs to be completed by making the page
 	 * executable. The page is still writable, which is a security hazard,
diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index fac692e36833..f2fab35bcb82 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -434,6 +434,7 @@ void *alloc_insn_page(void)
 	if (page == NULL)
 		return NULL;
 
+	set_vm_special(page);
 	/*
 	 * First make the page read-only, and then only then make it executable
 	 * to prevent it from being W+X in between.
@@ -452,12 +453,6 @@ void *alloc_insn_page(void)
 /* Recover page to RW mode before releasing it */
 void free_insn_page(void *page)
 {
-	/*
-	 * First make the page non-executable, and then only then make it
-	 * writable to prevent it from being W+X in between.
-	 */
-	set_memory_nx((unsigned long)page, 1);
-	set_memory_rw((unsigned long)page, 1);
 	module_memfree(page);
 }
 
diff --git a/include/linux/filter.h b/include/linux/filter.h
index f18cd317faf8..0abe812e7b75 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -20,6 +20,7 @@
 #include <linux/set_memory.h>
 #include <linux/kallsyms.h>
 #include <linux/if_vlan.h>
+#include <linux/vmalloc.h>
 
 #include <net/sch_generic.h>
 
@@ -483,7 +484,6 @@ struct bpf_prog {
 	u16			pages;		/* Number of allocated pages */
 	u16			jited:1,	/* Is our filter JIT'ed? */
 				jit_requested:1,/* archs need to JIT the prog */
-				undo_set_mem:1, /* Passed set_memory_ro() checkpoint */
 				gpl_compatible:1, /* Is filter GPL compatible? */
 				cb_access:1,	/* Is control block accessed? */
 				dst_needed:1,	/* Do we need dst entry? */
@@ -681,26 +681,17 @@ bpf_ctx_narrow_access_ok(u32 off, u32 size, u32 size_default)
 
 static inline void bpf_prog_lock_ro(struct bpf_prog *fp)
 {
+	set_vm_special(fp);
 	set_memory_ro((unsigned long)fp, fp->pages);
 }
 
-static inline void bpf_prog_unlock_ro(struct bpf_prog *fp)
-{
-	if (fp->undo_set_mem)
-		set_memory_rw((unsigned long)fp, fp->pages);
-}
-
 static inline void bpf_jit_binary_lock_ro(struct bpf_binary_header *hdr)
 {
+	set_vm_special(hdr);
 	set_memory_ro((unsigned long)hdr, hdr->pages);
 	set_memory_x((unsigned long)hdr, hdr->pages);
 }
 
-static inline void bpf_jit_binary_unlock_ro(struct bpf_binary_header *hdr)
-{
-	set_memory_rw((unsigned long)hdr, hdr->pages);
-}
-
 static inline struct bpf_binary_header *
 bpf_jit_binary_hdr(const struct bpf_prog *fp)
 {
@@ -735,7 +726,6 @@ void __bpf_prog_free(struct bpf_prog *fp);
 
 static inline void bpf_prog_unlock_free(struct bpf_prog *fp)
 {
-	bpf_prog_unlock_ro(fp);
 	__bpf_prog_free(fp);
 }
 
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index f908b9356025..a1a4d6f4253c 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -804,7 +804,6 @@ void __weak bpf_jit_free(struct bpf_prog *fp)
 	if (fp->jited) {
 		struct bpf_binary_header *hdr = bpf_jit_binary_hdr(fp);
 
-		bpf_jit_binary_unlock_ro(hdr);
 		bpf_jit_binary_free(hdr);
 
 		WARN_ON_ONCE(!bpf_prog_kallsyms_verify_off(fp));
diff --git a/kernel/module.c b/kernel/module.c
index ae1b77da6a20..1af5c8e19086 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -98,6 +98,10 @@ DEFINE_MUTEX(module_mutex);
 EXPORT_SYMBOL_GPL(module_mutex);
 static LIST_HEAD(modules);
 
+/* Work queue for freeing init sections in success case */
+static struct work_struct init_free_wq;
+static struct llist_head init_free_list;
+
 #ifdef CONFIG_MODULES_TREE_LOOKUP
 
 /*
@@ -1949,6 +1953,8 @@ void module_enable_ro(const struct module *mod, bool after_init)
 	if (!rodata_enabled)
 		return;
 
+	set_vm_special(mod->core_layout.base);
+	set_vm_special(mod->init_layout.base);
 	frob_text(&mod->core_layout, set_memory_ro);
 	frob_text(&mod->core_layout, set_memory_x);
 
@@ -1972,15 +1978,6 @@ static void module_enable_nx(const struct module *mod)
 	frob_writable_data(&mod->init_layout, set_memory_nx);
 }
 
-static void module_disable_nx(const struct module *mod)
-{
-	frob_rodata(&mod->core_layout, set_memory_x);
-	frob_ro_after_init(&mod->core_layout, set_memory_x);
-	frob_writable_data(&mod->core_layout, set_memory_x);
-	frob_rodata(&mod->init_layout, set_memory_x);
-	frob_writable_data(&mod->init_layout, set_memory_x);
-}
-
 /* Iterate through all modules and set each module's text as RW */
 void set_all_modules_text_rw(void)
 {
@@ -2024,23 +2021,8 @@ void set_all_modules_text_ro(void)
 	}
 	mutex_unlock(&module_mutex);
 }
-
-static void disable_ro_nx(const struct module_layout *layout)
-{
-	if (rodata_enabled) {
-		frob_text(layout, set_memory_rw);
-		frob_rodata(layout, set_memory_rw);
-		frob_ro_after_init(layout, set_memory_rw);
-	}
-	frob_rodata(layout, set_memory_x);
-	frob_ro_after_init(layout, set_memory_x);
-	frob_writable_data(layout, set_memory_x);
-}
-
 #else
-static void disable_ro_nx(const struct module_layout *layout) { }
 static void module_enable_nx(const struct module *mod) { }
-static void module_disable_nx(const struct module *mod) { }
 #endif
 
 #ifdef CONFIG_LIVEPATCH
@@ -2120,6 +2102,11 @@ static void free_module_elf(struct module *mod)
 
 void __weak module_memfree(void *module_region)
 {
+	/*
+	 * This memory may be RO, and freeing RO memory in an interrupt is not
+	 * supported by vmalloc.
+	 */
+	WARN_ON(in_interrupt());
 	vfree(module_region);
 }
 
@@ -2171,7 +2158,6 @@ static void free_module(struct module *mod)
 	mutex_unlock(&module_mutex);
 
 	/* This may be empty, but that's OK */
-	disable_ro_nx(&mod->init_layout);
 	module_arch_freeing_init(mod);
 	module_memfree(mod->init_layout.base);
 	kfree(mod->args);
@@ -2181,7 +2167,6 @@ static void free_module(struct module *mod)
 	lockdep_free_key_range(mod->core_layout.base, mod->core_layout.size);
 
 	/* Finally, free the core (containing the module structure) */
-	disable_ro_nx(&mod->core_layout);
 	module_memfree(mod->core_layout.base);
 }
 
@@ -3424,17 +3409,34 @@ static void do_mod_ctors(struct module *mod)
 
 /* For freeing module_init on success, in case kallsyms traversing */
 struct mod_initfree {
-	struct rcu_head rcu;
+	struct llist_node node;
 	void *module_init;
 };
 
-static void do_free_init(struct rcu_head *head)
+static void do_free_init(struct work_struct *w)
 {
-	struct mod_initfree *m = container_of(head, struct mod_initfree, rcu);
-	module_memfree(m->module_init);
-	kfree(m);
+	struct llist_node *pos, *n, *list;
+	struct mod_initfree *initfree;
+
+	list = llist_del_all(&init_free_list);
+
+	synchronize_rcu();
+
+	llist_for_each_safe(pos, n, list) {
+		initfree = container_of(pos, struct mod_initfree, node);
+		module_memfree(initfree->module_init);
+		kfree(initfree);
+	}
 }
 
+static int __init modules_wq_init(void)
+{
+	INIT_WORK(&init_free_wq, do_free_init);
+	init_llist_head(&init_free_list);
+	return 0;
+}
+module_init(modules_wq_init);
+
 /*
  * This is where the real work happens.
  *
@@ -3511,7 +3513,6 @@ static noinline int do_init_module(struct module *mod)
 #endif
 	module_enable_ro(mod, true);
 	mod_tree_remove_init(mod);
-	disable_ro_nx(&mod->init_layout);
 	module_arch_freeing_init(mod);
 	mod->init_layout.base = NULL;
 	mod->init_layout.size = 0;
@@ -3522,14 +3523,18 @@ static noinline int do_init_module(struct module *mod)
 	 * We want to free module_init, but be aware that kallsyms may be
 	 * walking this with preempt disabled.  In all the failure paths, we
 	 * call synchronize_rcu(), but we don't want to slow down the success
-	 * path, so use actual RCU here.
+	 * path. We can't do module_memfree in an interrupt, so we do the work
+	 * and call synchronize_rcu() in a work queue.
+	 *
 	 * Note that module_alloc() on most architectures creates W+X page
 	 * mappings which won't be cleaned up until do_free_init() runs.  Any
 	 * code such as mark_rodata_ro() which depends on those mappings to
 	 * be cleaned up needs to sync with the queued work - ie
 	 * rcu_barrier()
 	 */
-	call_rcu(&freeinit->rcu, do_free_init);
+	if (llist_add(&freeinit->node, &init_free_list))
+		schedule_work(&init_free_wq);
+
 	mutex_unlock(&module_mutex);
 	wake_up_all(&module_wq);
 
@@ -3826,10 +3831,6 @@ static int load_module(struct load_info *info, const char __user *uargs,
 	module_bug_cleanup(mod);
 	mutex_unlock(&module_mutex);
 
-	/* we can't deallocate the module until we clear memory protection */
-	module_disable_ro(mod);
-	module_disable_nx(mod);
-
  ddebug_cleanup:
 	ftrace_release_mod(mod);
 	dynamic_debug_remove(mod, info->debug);
-- 
2.17.1



* [PATCH 17/17] module: Prevent module removal racing with text_poke()
  2019-01-17  0:32 [PATCH 00/17] Merge text_poke fixes and executable lockdowns Rick Edgecombe
                   ` (15 preceding siblings ...)
  2019-01-17  0:32 ` [PATCH 16/17] Plug in new special vfree flag Rick Edgecombe
@ 2019-01-17  0:32 ` Rick Edgecombe
  2019-01-17  7:54   ` Masami Hiramatsu
  2019-01-17 13:21 ` [PATCH 00/17] Merge text_poke fixes and executable lockdowns Peter Zijlstra
  17 siblings, 1 reply; 51+ messages in thread
From: Rick Edgecombe @ 2019-01-17  0:32 UTC (permalink / raw)
  To: Andy Lutomirski, Ingo Molnar
  Cc: linux-kernel, x86, hpa, Thomas Gleixner, Borislav Petkov,
	Nadav Amit, Dave Hansen, Peter Zijlstra, linux_dti,
	linux-integrity, linux-security-module, akpm, kernel-hardening,
	linux-mm, will.deacon, ard.biesheuvel, kristen, deneen.t.dock,
	Nadav Amit, Rick Edgecombe

From: Nadav Amit <namit@vmware.com>

It seems dangerous to allow code modifications to take place
concurrently with module unloading. So take the text_mutex while the
memory of the module is freed.

Signed-off-by: Nadav Amit <namit@vmware.com>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---
 kernel/module.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/kernel/module.c b/kernel/module.c
index 1af5c8e19086..90cfc4988d98 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -64,6 +64,7 @@
 #include <linux/bsearch.h>
 #include <linux/dynamic_debug.h>
 #include <linux/audit.h>
+#include <linux/memory.h>
 #include <uapi/linux/module.h>
 #include "module-internal.h"
 
@@ -2157,6 +2158,9 @@ static void free_module(struct module *mod)
 	synchronize_rcu();
 	mutex_unlock(&module_mutex);
 
+	/* Protect against patching of the module while it is being removed */
+	mutex_lock(&text_mutex);
+
 	/* This may be empty, but that's OK */
 	module_arch_freeing_init(mod);
 	module_memfree(mod->init_layout.base);
@@ -2168,6 +2172,7 @@ static void free_module(struct module *mod)
 
 	/* Finally, free the core (containing the module structure) */
 	module_memfree(mod->core_layout.base);
+	mutex_unlock(&text_mutex);
 }
 
 void *__symbol_get(const char *symbol)
-- 
2.17.1



* Re: [PATCH 01/17] Fix "x86/alternatives: Lockdep-enforce text_mutex in text_poke*()"
  2019-01-17  0:32 ` [PATCH 01/17] Fix "x86/alternatives: Lockdep-enforce text_mutex in text_poke*()" Rick Edgecombe
@ 2019-01-17  6:47   ` Masami Hiramatsu
  2019-01-17 21:15     ` hpa
  2019-01-25  9:30   ` Borislav Petkov
  1 sibling, 1 reply; 51+ messages in thread
From: Masami Hiramatsu @ 2019-01-17  6:47 UTC (permalink / raw)
  To: Rick Edgecombe
  Cc: Andy Lutomirski, Ingo Molnar, linux-kernel, x86, hpa,
	Thomas Gleixner, Borislav Petkov, Nadav Amit, Dave Hansen,
	Peter Zijlstra, linux_dti, linux-integrity,
	linux-security-module, akpm, kernel-hardening, linux-mm,
	will.deacon, ard.biesheuvel, kristen, deneen.t.dock, Nadav Amit,
	Kees Cook, Dave Hansen, Masami Hiramatsu

On Wed, 16 Jan 2019 16:32:43 -0800
Rick Edgecombe <rick.p.edgecombe@intel.com> wrote:

> From: Nadav Amit <namit@vmware.com>
> 
> text_mutex is currently expected to be held before text_poke() is
> called, but kgdb does not take the mutex, and instead *supposedly*
> ensures the lock is not taken and will not be acquired by any other core
> while text_poke() is running.
> 
> The reason for the "supposedly" comment is that it is not entirely clear
> that this would be the case if gdb_do_roundup is zero.
> 
> This patch creates two wrapper functions, text_poke() and
> text_poke_kgdb() which do or do not run the lockdep assertion
> respectively.
> 
> While we are at it, change the return code of text_poke() to something
> meaningful. One day, callers might actually respect it and the existing
> BUG_ON() when patching fails could be removed. For kgdb, the return
> value can actually be used.

Looks good to me.

Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>

Thank you,

> 
> Cc: Andy Lutomirski <luto@kernel.org>
> Cc: Kees Cook <keescook@chromium.org>
> Cc: Dave Hansen <dave.hansen@intel.com>
> Cc: Masami Hiramatsu <mhiramat@kernel.org>
> Fixes: 9222f606506c ("x86/alternatives: Lockdep-enforce text_mutex in text_poke*()")
> Suggested-by: Peter Zijlstra <peterz@infradead.org>
> Acked-by: Jiri Kosina <jkosina@suse.cz>
> Signed-off-by: Nadav Amit <namit@vmware.com>
> Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
> ---
>  arch/x86/include/asm/text-patching.h |  1 +
>  arch/x86/kernel/alternative.c        | 52 ++++++++++++++++++++--------
>  arch/x86/kernel/kgdb.c               | 11 +++---
>  3 files changed, 45 insertions(+), 19 deletions(-)
> 
> diff --git a/arch/x86/include/asm/text-patching.h b/arch/x86/include/asm/text-patching.h
> index e85ff65c43c3..f8fc8e86cf01 100644
> --- a/arch/x86/include/asm/text-patching.h
> +++ b/arch/x86/include/asm/text-patching.h
> @@ -35,6 +35,7 @@ extern void *text_poke_early(void *addr, const void *opcode, size_t len);
>   * inconsistent instruction while you patch.
>   */
>  extern void *text_poke(void *addr, const void *opcode, size_t len);
> +extern void *text_poke_kgdb(void *addr, const void *opcode, size_t len);
>  extern int poke_int3_handler(struct pt_regs *regs);
>  extern void *text_poke_bp(void *addr, const void *opcode, size_t len, void *handler);
>  extern int after_bootmem;
> diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
> index ebeac487a20c..c6a3a10a2fd5 100644
> --- a/arch/x86/kernel/alternative.c
> +++ b/arch/x86/kernel/alternative.c
> @@ -678,18 +678,7 @@ void *__init_or_module text_poke_early(void *addr, const void *opcode,
>  	return addr;
>  }
>  
> -/**
> - * text_poke - Update instructions on a live kernel
> - * @addr: address to modify
> - * @opcode: source of the copy
> - * @len: length to copy
> - *
> - * Only atomic text poke/set should be allowed when not doing early patching.
> - * It means the size must be writable atomically and the address must be aligned
> - * in a way that permits an atomic write. It also makes sure we fit on a single
> - * page.
> - */
> -void *text_poke(void *addr, const void *opcode, size_t len)
> +static void *__text_poke(void *addr, const void *opcode, size_t len)
>  {
>  	unsigned long flags;
>  	char *vaddr;
> @@ -702,8 +691,6 @@ void *text_poke(void *addr, const void *opcode, size_t len)
>  	 */
>  	BUG_ON(!after_bootmem);
>  
> -	lockdep_assert_held(&text_mutex);
> -
>  	if (!core_kernel_text((unsigned long)addr)) {
>  		pages[0] = vmalloc_to_page(addr);
>  		pages[1] = vmalloc_to_page(addr + PAGE_SIZE);
> @@ -732,6 +719,43 @@ void *text_poke(void *addr, const void *opcode, size_t len)
>  	return addr;
>  }
>  
> +/**
> + * text_poke - Update instructions on a live kernel
> + * @addr: address to modify
> + * @opcode: source of the copy
> + * @len: length to copy
> + *
> + * Only atomic text poke/set should be allowed when not doing early patching.
> + * It means the size must be writable atomically and the address must be aligned
> + * in a way that permits an atomic write. It also makes sure we fit on a single
> + * page.
> + */
> +void *text_poke(void *addr, const void *opcode, size_t len)
> +{
> +	lockdep_assert_held(&text_mutex);
> +
> +	return __text_poke(addr, opcode, len);
> +}
> +
> +/**
> + * text_poke_kgdb - Update instructions on a live kernel by kgdb
> + * @addr: address to modify
> + * @opcode: source of the copy
> + * @len: length to copy
> + *
> + * Only atomic text poke/set should be allowed when not doing early patching.
> + * It means the size must be writable atomically and the address must be aligned
> + * in a way that permits an atomic write. It also makes sure we fit on a single
> + * page.
> + *
> + * Context: should only be used by kgdb, which ensures no other core is running,
> + *	    despite the fact it does not hold the text_mutex.
> + */
> +void *text_poke_kgdb(void *addr, const void *opcode, size_t len)
> +{
> +	return __text_poke(addr, opcode, len);
> +}
> +
>  static void do_sync_core(void *info)
>  {
>  	sync_core();
> diff --git a/arch/x86/kernel/kgdb.c b/arch/x86/kernel/kgdb.c
> index 5db08425063e..1461544cba8b 100644
> --- a/arch/x86/kernel/kgdb.c
> +++ b/arch/x86/kernel/kgdb.c
> @@ -758,13 +758,13 @@ int kgdb_arch_set_breakpoint(struct kgdb_bkpt *bpt)
>  	if (!err)
>  		return err;
>  	/*
> -	 * It is safe to call text_poke() because normal kernel execution
> +	 * It is safe to call text_poke_kgdb() because normal kernel execution
>  	 * is stopped on all cores, so long as the text_mutex is not locked.
>  	 */
>  	if (mutex_is_locked(&text_mutex))
>  		return -EBUSY;
> -	text_poke((void *)bpt->bpt_addr, arch_kgdb_ops.gdb_bpt_instr,
> -		  BREAK_INSTR_SIZE);
> +	text_poke_kgdb((void *)bpt->bpt_addr, arch_kgdb_ops.gdb_bpt_instr,
> +		       BREAK_INSTR_SIZE);
>  	err = probe_kernel_read(opc, (char *)bpt->bpt_addr, BREAK_INSTR_SIZE);
>  	if (err)
>  		return err;
> @@ -783,12 +783,13 @@ int kgdb_arch_remove_breakpoint(struct kgdb_bkpt *bpt)
>  	if (bpt->type != BP_POKE_BREAKPOINT)
>  		goto knl_write;
>  	/*
> -	 * It is safe to call text_poke() because normal kernel execution
> +	 * It is safe to call text_poke_kgdb() because normal kernel execution
>  	 * is stopped on all cores, so long as the text_mutex is not locked.
>  	 */
>  	if (mutex_is_locked(&text_mutex))
>  		goto knl_write;
> -	text_poke((void *)bpt->bpt_addr, bpt->saved_instr, BREAK_INSTR_SIZE);
> +	text_poke_kgdb((void *)bpt->bpt_addr, bpt->saved_instr,
> +		       BREAK_INSTR_SIZE);
>  	err = probe_kernel_read(opc, (char *)bpt->bpt_addr, BREAK_INSTR_SIZE);
>  	if (err || memcmp(opc, bpt->saved_instr, BREAK_INSTR_SIZE))
>  		goto knl_write;
> -- 
> 2.17.1
> 


-- 
Masami Hiramatsu <mhiramat@kernel.org>


* Re: [PATCH 09/17] x86/kprobes: Instruction pages initialization enhancements
  2019-01-17  0:32 ` [PATCH 09/17] x86/kprobes: Instruction pages initialization enhancements Rick Edgecombe
@ 2019-01-17  6:51   ` Masami Hiramatsu
  0 siblings, 0 replies; 51+ messages in thread
From: Masami Hiramatsu @ 2019-01-17  6:51 UTC (permalink / raw)
  To: Rick Edgecombe
  Cc: Andy Lutomirski, Ingo Molnar, linux-kernel, x86, hpa,
	Thomas Gleixner, Borislav Petkov, Nadav Amit, Dave Hansen,
	Peter Zijlstra, linux_dti, linux-integrity,
	linux-security-module, akpm, kernel-hardening, linux-mm,
	will.deacon, ard.biesheuvel, kristen, deneen.t.dock, Nadav Amit,
	Masami Hiramatsu

On Wed, 16 Jan 2019 16:32:51 -0800
Rick Edgecombe <rick.p.edgecombe@intel.com> wrote:

> From: Nadav Amit <namit@vmware.com>
> 
> This patch is a preparatory patch for a following patch that makes
> module allocated pages non-executable. The patch sets the page as
> executable after allocation.
> 
> In the future, we may get better protection of executables. For example,
> by using hypercalls to request the hypervisor to protect VM executable
> pages from modifications using nested page-tables. This would allow
> us to ensure the executable has not changed between allocation and
> its write-protection.
> 
> While at it, do some small cleanup of what appears to be unnecessary
> masking.
> 

OK, then this should be done.

Acked-by: Masami Hiramatsu <mhiramat@kernel.org>

Thank you!


> Cc: Masami Hiramatsu <mhiramat@kernel.org>
> Signed-off-by: Nadav Amit <namit@vmware.com>
> Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
> ---
>  arch/x86/kernel/kprobes/core.c | 24 ++++++++++++++++++++----
>  1 file changed, 20 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
> index 4ba75afba527..fac692e36833 100644
> --- a/arch/x86/kernel/kprobes/core.c
> +++ b/arch/x86/kernel/kprobes/core.c
> @@ -431,8 +431,20 @@ void *alloc_insn_page(void)
>  	void *page;
>  
>  	page = module_alloc(PAGE_SIZE);
> -	if (page)
> -		set_memory_ro((unsigned long)page & PAGE_MASK, 1);
> +	if (page == NULL)
> +		return NULL;
> +
> +	/*
> +	 * First make the page read-only, and then only then make it executable
> +	 * to prevent it from being W+X in between.
> +	 */
> +	set_memory_ro((unsigned long)page, 1);
> +
> +	/*
> +	 * TODO: Once additional kernel code protection mechanisms are set, ensure
> +	 * that the page was not maliciously altered and it is still zeroed.
> +	 */
> +	set_memory_x((unsigned long)page, 1);
>  
>  	return page;
>  }
> @@ -440,8 +452,12 @@ void *alloc_insn_page(void)
>  /* Recover page to RW mode before releasing it */
>  void free_insn_page(void *page)
>  {
> -	set_memory_nx((unsigned long)page & PAGE_MASK, 1);
> -	set_memory_rw((unsigned long)page & PAGE_MASK, 1);
> +	/*
> +	 * First make the page non-executable, and then only then make it
> +	 * writable to prevent it from being W+X in between.
> +	 */
> +	set_memory_nx((unsigned long)page, 1);
> +	set_memory_rw((unsigned long)page, 1);
>  	module_memfree(page);
>  }
>  
> -- 
> 2.17.1
> 


-- 
Masami Hiramatsu <mhiramat@kernel.org>


* Re: [PATCH 17/17] module: Prevent module removal racing with text_poke()
  2019-01-17  0:32 ` [PATCH 17/17] module: Prevent module removal racing with text_poke() Rick Edgecombe
@ 2019-01-17  7:54   ` Masami Hiramatsu
  2019-01-17 18:07     ` Nadav Amit
  2019-01-17 23:58     ` H. Peter Anvin
  0 siblings, 2 replies; 51+ messages in thread
From: Masami Hiramatsu @ 2019-01-17  7:54 UTC (permalink / raw)
  To: Rick Edgecombe
  Cc: Andy Lutomirski, Ingo Molnar, linux-kernel, x86, hpa,
	Thomas Gleixner, Borislav Petkov, Nadav Amit, Dave Hansen,
	Peter Zijlstra, linux_dti, linux-integrity,
	linux-security-module, akpm, kernel-hardening, linux-mm,
	will.deacon, ard.biesheuvel, kristen, deneen.t.dock, Nadav Amit

On Wed, 16 Jan 2019 16:32:59 -0800
Rick Edgecombe <rick.p.edgecombe@intel.com> wrote:

> From: Nadav Amit <namit@vmware.com>
> 
> It seems dangerous to allow code modifications to take place
> concurrently with module unloading. So take the text_mutex while the
> memory of the module is freed.

At that point, since the module itself has been removed from the module list,
it seems there is no actual harm. Or do you have any concern?

Thank you,

> 
> Signed-off-by: Nadav Amit <namit@vmware.com>
> Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
> ---
>  kernel/module.c | 5 +++++
>  1 file changed, 5 insertions(+)
> 
> diff --git a/kernel/module.c b/kernel/module.c
> index 1af5c8e19086..90cfc4988d98 100644
> --- a/kernel/module.c
> +++ b/kernel/module.c
> @@ -64,6 +64,7 @@
>  #include <linux/bsearch.h>
>  #include <linux/dynamic_debug.h>
>  #include <linux/audit.h>
> +#include <linux/memory.h>
>  #include <uapi/linux/module.h>
>  #include "module-internal.h"
>  
> @@ -2157,6 +2158,9 @@ static void free_module(struct module *mod)
>  	synchronize_rcu();
>  	mutex_unlock(&module_mutex);
>  
> +	/* Protect against patching of the module while it is being removed */
> +	mutex_lock(&text_mutex);
> +
>  	/* This may be empty, but that's OK */
>  	module_arch_freeing_init(mod);
>  	module_memfree(mod->init_layout.base);
> @@ -2168,6 +2172,7 @@ static void free_module(struct module *mod)
>  
>  	/* Finally, free the core (containing the module structure) */
>  	module_memfree(mod->core_layout.base);
> +	mutex_unlock(&text_mutex);
>  }
>  
>  void *__symbol_get(const char *symbol)
> -- 
> 2.17.1
> 


-- 
Masami Hiramatsu <mhiramat@kernel.org>


* Re: [PATCH 14/17] mm: Make hibernate handle unmapped pages
  2019-01-17  0:32 ` [PATCH 14/17] mm: Make hibernate handle unmapped pages Rick Edgecombe
@ 2019-01-17  9:39   ` Pavel Machek
  2019-01-17 22:16     ` Edgecombe, Rick P
  0 siblings, 1 reply; 51+ messages in thread
From: Pavel Machek @ 2019-01-17  9:39 UTC (permalink / raw)
  To: Rick Edgecombe
  Cc: Andy Lutomirski, Ingo Molnar, linux-kernel, x86, hpa,
	Thomas Gleixner, Borislav Petkov, Nadav Amit, Dave Hansen,
	Peter Zijlstra, linux_dti, linux-integrity,
	linux-security-module, akpm, kernel-hardening, linux-mm,
	will.deacon, ard.biesheuvel, kristen, deneen.t.dock,
	Rafael J. Wysocki


Hi!

> For architectures with CONFIG_ARCH_HAS_SET_ALIAS, pages can be unmapped
> briefly on the directmap, even when CONFIG_DEBUG_PAGEALLOC is not configured.
> So this changes kernel_map_pages and kernel_page_present to be defined when
> CONFIG_ARCH_HAS_SET_ALIAS is defined as well. It also changes places
> (page_alloc.c) where those functions are assumed to only be implemented when
> CONFIG_DEBUG_PAGEALLOC is defined.

Which architectures are those?

Should this be merged into the patch where HAS_SET_ALIAS is introduced? We
don't want broken hibernation in between...


> -#ifdef CONFIG_DEBUG_PAGEALLOC
>  extern bool _debug_pagealloc_enabled;
> -extern void __kernel_map_pages(struct page *page, int numpages, int enable);
>  
>  static inline bool debug_pagealloc_enabled(void)
>  {
> -	return _debug_pagealloc_enabled;
> +	return IS_ENABLED(CONFIG_DEBUG_PAGEALLOC) && _debug_pagealloc_enabled;
>  }

This will break the build AFAICT. The _debug_pagealloc_enabled variable does
not exist in the !CONFIG_DEBUG_PAGEALLOC case.

									Pavel

-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html



* Re: [PATCH 00/17] Merge text_poke fixes and executable lockdowns
  2019-01-17  0:32 [PATCH 00/17] Merge text_poke fixes and executable lockdowns Rick Edgecombe
                   ` (16 preceding siblings ...)
  2019-01-17  0:32 ` [PATCH 17/17] module: Prevent module removal racing with text_poke() Rick Edgecombe
@ 2019-01-17 13:21 ` Peter Zijlstra
  17 siblings, 0 replies; 51+ messages in thread
From: Peter Zijlstra @ 2019-01-17 13:21 UTC (permalink / raw)
  To: Rick Edgecombe
  Cc: Andy Lutomirski, Ingo Molnar, linux-kernel, x86, hpa,
	Thomas Gleixner, Borislav Petkov, Nadav Amit, Dave Hansen,
	linux_dti, linux-integrity, linux-security-module, akpm,
	kernel-hardening, linux-mm, will.deacon, ard.biesheuvel, kristen,
	deneen.t.dock



1-7,11-12

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>


* Re: [PATCH 17/17] module: Prevent module removal racing with text_poke()
  2019-01-17  7:54   ` Masami Hiramatsu
@ 2019-01-17 18:07     ` Nadav Amit
  2019-01-17 23:44       ` H. Peter Anvin
  2019-01-18  8:23       ` Masami Hiramatsu
  2019-01-17 23:58     ` H. Peter Anvin
  1 sibling, 2 replies; 51+ messages in thread
From: Nadav Amit @ 2019-01-17 18:07 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Rick Edgecombe, Andy Lutomirski, Ingo Molnar,
	Linux List Kernel Mailing, the arch/x86 maintainers,
	H. Peter Anvin, Thomas Gleixner, Borislav Petkov, Dave Hansen,
	Peter Zijlstra, Damian Tometzki, linux-integrity, LSM List,
	Andrew Morton, Kernel Hardening, Linux-MM, Will Deacon,
	Ard Biesheuvel, kristen, deneen.t.dock

> On Jan 16, 2019, at 11:54 PM, Masami Hiramatsu <mhiramat@kernel.org> wrote:
> 
> On Wed, 16 Jan 2019 16:32:59 -0800
> Rick Edgecombe <rick.p.edgecombe@intel.com> wrote:
> 
>> From: Nadav Amit <namit@vmware.com>
>> 
>> It seems dangerous to allow code modifications to take place
>> concurrently with module unloading. So take the text_mutex while the
>> memory of the module is freed.
> 
> At that point, since the module itself is removed from module list,
> it seems no actual harm. Or would you have any concern?

So it appears that you are right and all the users of text_poke() and
text_poke_bp() do install module notifiers, and remove the module from their
internal data structures when they are done (*). As long as they prevent
text_poke*() to be called concurrently (e.g., using jump_label_lock()),
everything is fine.

Having said that, the question is whether you “trust” text_poke*() users to
do so. The text_poke() description does not say explicitly that you need to
prevent modules from being removed.

What do you say?


(*) I am not sure about kgdb, but it probably does not matter much
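
For context, the convention those users follow is roughly this sketch
(illustrative; my_patch_lock and forget_patch_sites_in are hypothetical
placeholders, not code from any one user):

/*
 * Drop references to a dying module's text under the same lock that
 * serializes the text_poke*() calls, so no poke can target memory that
 * is about to be freed.
 */
static int my_module_notify(struct notifier_block *nb,
			    unsigned long action, void *data)
{
	struct module *mod = data;

	if (action == MODULE_STATE_GOING) {
		mutex_lock(&my_patch_lock);
		forget_patch_sites_in(mod);
		mutex_unlock(&my_patch_lock);
	}
	return NOTIFY_OK;
}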


* Re: [PATCH 06/17] x86/alternative: use temporary mm for text poking
  2019-01-17  0:32 ` [PATCH 06/17] x86/alternative: use temporary mm for text poking Rick Edgecombe
@ 2019-01-17 20:27   ` Andy Lutomirski
  2019-01-17 20:47     ` Andy Lutomirski
  0 siblings, 1 reply; 51+ messages in thread
From: Andy Lutomirski @ 2019-01-17 20:27 UTC (permalink / raw)
  To: Rick Edgecombe
  Cc: Andy Lutomirski, Ingo Molnar, LKML, X86 ML, H. Peter Anvin,
	Thomas Gleixner, Borislav Petkov, Nadav Amit, Dave Hansen,
	Peter Zijlstra, linux_dti, linux-integrity, LSM List,
	Andrew Morton, Kernel Hardening, Linux-MM, Will Deacon,
	Ard Biesheuvel, Kristen Carlson Accardi, Dock, Deneen T,
	Nadav Amit, Kees Cook, Dave Hansen, Masami Hiramatsu

On Wed, Jan 16, 2019 at 4:33 PM Rick Edgecombe
<rick.p.edgecombe@intel.com> wrote:
>
> From: Nadav Amit <namit@vmware.com>
>
> text_poke() can potentially compromise the security as it sets temporary
> PTEs in the fixmap. These PTEs might be used to rewrite the kernel code
> from other cores accidentally or maliciously, if an attacker gains the
> ability to write onto kernel memory.

I think this may be sufficient, but barely.

> +       pte_clear(poking_mm, poking_addr, ptep);
> +
> +       /*
> +        * __flush_tlb_one_user() performs a redundant TLB flush when PTI is on,
> +        * as it also flushes the corresponding "user" address spaces, which
> +        * does not exist.
> +        *
> +        * Poking, however, is already very inefficient since it does not try to
> +        * batch updates, so we ignore this problem for the time being.
> +        *
> +        * Since the PTEs do not exist in other kernel address-spaces, we do
> +        * not use __flush_tlb_one_kernel(), which when PTI is on would cause
> +        * more unwarranted TLB flushes.
> +        *
> +        * There is a slight anomaly here: the PTE is a supervisor-only and
> +        * (potentially) global and we use __flush_tlb_one_user() but this
> +        * should be fine.
> +        */
> +       __flush_tlb_one_user(poking_addr);
> +       if (cross_page_boundary) {
> +               pte_clear(poking_mm, poking_addr + PAGE_SIZE, ptep + 1);
> +               __flush_tlb_one_user(poking_addr + PAGE_SIZE);
> +       }

In principle, another CPU could still have the old translation.  Your
mutex probably makes this impossible, but it makes me nervous.
Ideally you'd use flush_tlb_mm_range(), but I guess you can't do that
with IRQs off.  Hmm.  I think you should add an inc_mm_tlb_gen() here.
Arguably, if you did that, you could omit the flushes, but maybe
that's silly.
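
For reference, a sketch of that suggestion (inc_mm_tlb_gen() is the existing
x86 helper; the placement here is only what is being proposed, not merged
code):

	/*
	 * Bump poking_mm's TLB generation so that the next switch_mm()
	 * into poking_mm treats any cached translations as stale.
	 */
	inc_mm_tlb_gen(poking_mm);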

If we start getting new users of use_temporary_mm(), we should give
some serious thought to the SMP semantics.

Also, you're using PAGE_KERNEL.  Please tell me that the global bit
isn't set in there.

--Andy

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 06/17] x86/alternative: use temporary mm for text poking
  2019-01-17 20:27   ` Andy Lutomirski
@ 2019-01-17 20:47     ` Andy Lutomirski
  2019-01-17 21:43       ` Nadav Amit
  0 siblings, 1 reply; 51+ messages in thread
From: Andy Lutomirski @ 2019-01-17 20:47 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: Rick Edgecombe, Ingo Molnar, LKML, X86 ML, H. Peter Anvin,
	Thomas Gleixner, Borislav Petkov, Nadav Amit, Dave Hansen,
	Peter Zijlstra, linux_dti, linux-integrity, LSM List,
	Andrew Morton, Kernel Hardening, Linux-MM, Will Deacon,
	Ard Biesheuvel, Kristen Carlson Accardi, Dock, Deneen T,
	Nadav Amit, Kees Cook, Dave Hansen, Masami Hiramatsu

On Thu, Jan 17, 2019 at 12:27 PM Andy Lutomirski <luto@kernel.org> wrote:
>
> On Wed, Jan 16, 2019 at 4:33 PM Rick Edgecombe
> <rick.p.edgecombe@intel.com> wrote:
> >
> > From: Nadav Amit <namit@vmware.com>
> >
> > text_poke() can potentially compromise security as it sets temporary
> > PTEs in the fixmap. These PTEs might be used to rewrite the kernel code
> > from other cores accidentally or maliciously, if an attacker gains the
> > ability to write onto kernel memory.
>
> I think this may be sufficient, but barely.
>
> > +       pte_clear(poking_mm, poking_addr, ptep);
> > +
> > +       /*
> > +        * __flush_tlb_one_user() performs a redundant TLB flush when PTI is on,
> > +        * as it also flushes the corresponding "user" address spaces, which
> > +        * does not exist.
> > +        *
> > +        * Poking, however, is already very inefficient since it does not try to
> > +        * batch updates, so we ignore this problem for the time being.
> > +        *
> > +        * Since the PTEs do not exist in other kernel address-spaces, we do
> > +        * not use __flush_tlb_one_kernel(), which when PTI is on would cause
> > +        * more unwarranted TLB flushes.
> > +        *
> > +        * There is a slight anomaly here: the PTE is a supervisor-only and
> > +        * (potentially) global and we use __flush_tlb_one_user() but this
> > +        * should be fine.
> > +        */
> > +       __flush_tlb_one_user(poking_addr);
> > +       if (cross_page_boundary) {
> > +               pte_clear(poking_mm, poking_addr + PAGE_SIZE, ptep + 1);
> > +               __flush_tlb_one_user(poking_addr + PAGE_SIZE);
> > +       }
>
> In principle, another CPU could still have the old translation.  Your
> mutex probably makes this impossible, but it makes me nervous.
> Ideally you'd use flush_tlb_mm_range(), but I guess you can't do that
> with IRQs off.  Hmm.  I think you should add an inc_mm_tlb_gen() here.
> Arguably, if you did that, you could omit the flushes, but maybe
> that's silly.
>
> If we start getting new users of use_temporary_mm(), we should give
> some serious thought to the SMP semantics.
>
> Also, you're using PAGE_KERNEL.  Please tell me that the global bit
> isn't set in there.
>

Much better solution: do unuse_temporary_mm() and *then*
flush_tlb_mm_range().  This is entirely non-sketchy and should be just
about optimal, too.
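
A minimal sketch of that ordering, assuming the patch's poking_mm/poking_addr
and the flush_tlb_mm_range() signature of this kernel generation
(illustrative only, not the final merged code):

	pte_clear(poking_mm, poking_addr, ptep);
	if (cross_page_boundary)
		pte_clear(poking_mm, poking_addr + PAGE_SIZE, ptep + 1);

	/* Switch away first: after this, no CPU has poking_mm loaded. */
	unuse_temporary_mm(prev);

	/*
	 * Now flush. Since poking_mm is not loaded anywhere, this mainly
	 * bumps its TLB generation, so the stale entries are invalidated
	 * before anyone can switch to poking_mm again.
	 */
	flush_tlb_mm_range(poking_mm, poking_addr,
			   poking_addr + 2 * PAGE_SIZE, PAGE_SHIFT, false);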

--Andy

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 01/17] Fix "x86/alternatives: Lockdep-enforce text_mutex in text_poke*()"
  2019-01-17  6:47   ` Masami Hiramatsu
@ 2019-01-17 21:15     ` hpa
  2019-01-17 22:39       ` Nadav Amit
  0 siblings, 1 reply; 51+ messages in thread
From: hpa @ 2019-01-17 21:15 UTC (permalink / raw)
  To: Masami Hiramatsu, Rick Edgecombe
  Cc: Andy Lutomirski, Ingo Molnar, linux-kernel, x86, Thomas Gleixner,
	Borislav Petkov, Nadav Amit, Dave Hansen, Peter Zijlstra,
	linux_dti, linux-integrity, linux-security-module, akpm,
	kernel-hardening, linux-mm, will.deacon, ard.biesheuvel, kristen,
	deneen.t.dock, Nadav Amit, Kees Cook, Dave Hansen

On January 16, 2019 10:47:01 PM PST, Masami Hiramatsu <mhiramat@kernel.org> wrote:
>On Wed, 16 Jan 2019 16:32:43 -0800
>Rick Edgecombe <rick.p.edgecombe@intel.com> wrote:
>
>> From: Nadav Amit <namit@vmware.com>
>> 
>> text_mutex is currently expected to be held before text_poke() is
>> called, but kgdb does not take the mutex, and instead *supposedly*
>> ensures the lock is not taken and will not be acquired by any other
>core
>> while text_poke() is running.
>> 
>> The reason for the "supposedly" comment is that it is not entirely
>clear
>> that this would be the case if gdb_do_roundup is zero.
>> 
>> This patch creates two wrapper functions, text_poke() and
>> text_poke_kgdb() which do or do not run the lockdep assertion
>> respectively.
>> 
>> While we are at it, change the return code of text_poke() to
>something
>> meaningful. One day, callers might actually respect it and the
>existing
>> BUG_ON() when patching fails could be removed. For kgdb, the return
>> value can actually be used.
>
>Looks good to me.
>
>Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
>
>Thank you,
>
>> 
>> Cc: Andy Lutomirski <luto@kernel.org>
>> Cc: Kees Cook <keescook@chromium.org>
>> Cc: Dave Hansen <dave.hansen@intel.com>
>> Cc: Masami Hiramatsu <mhiramat@kernel.org>
>> Fixes: 9222f606506c ("x86/alternatives: Lockdep-enforce text_mutex in
>text_poke*()")
>> Suggested-by: Peter Zijlstra <peterz@infradead.org>
>> Acked-by: Jiri Kosina <jkosina@suse.cz>
>> Signed-off-by: Nadav Amit <namit@vmware.com>
>> Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
>> ---
>>  arch/x86/include/asm/text-patching.h |  1 +
>>  arch/x86/kernel/alternative.c        | 52
>++++++++++++++++++++--------
>>  arch/x86/kernel/kgdb.c               | 11 +++---
>>  3 files changed, 45 insertions(+), 19 deletions(-)
>> 
>> diff --git a/arch/x86/include/asm/text-patching.h
>b/arch/x86/include/asm/text-patching.h
>> index e85ff65c43c3..f8fc8e86cf01 100644
>> --- a/arch/x86/include/asm/text-patching.h
>> +++ b/arch/x86/include/asm/text-patching.h
>> @@ -35,6 +35,7 @@ extern void *text_poke_early(void *addr, const void
>*opcode, size_t len);
>>   * inconsistent instruction while you patch.
>>   */
>>  extern void *text_poke(void *addr, const void *opcode, size_t len);
>> +extern void *text_poke_kgdb(void *addr, const void *opcode, size_t
>len);
>>  extern int poke_int3_handler(struct pt_regs *regs);
>>  extern void *text_poke_bp(void *addr, const void *opcode, size_t
>len, void *handler);
>>  extern int after_bootmem;
>> diff --git a/arch/x86/kernel/alternative.c
>b/arch/x86/kernel/alternative.c
>> index ebeac487a20c..c6a3a10a2fd5 100644
>> --- a/arch/x86/kernel/alternative.c
>> +++ b/arch/x86/kernel/alternative.c
>> @@ -678,18 +678,7 @@ void *__init_or_module text_poke_early(void
>*addr, const void *opcode,
>>  	return addr;
>>  }
>>  
>> -/**
>> - * text_poke - Update instructions on a live kernel
>> - * @addr: address to modify
>> - * @opcode: source of the copy
>> - * @len: length to copy
>> - *
>> - * Only atomic text poke/set should be allowed when not doing early
>patching.
>> - * It means the size must be writable atomically and the address
>must be aligned
>> - * in a way that permits an atomic write. It also makes sure we fit
>on a single
>> - * page.
>> - */
>> -void *text_poke(void *addr, const void *opcode, size_t len)
>> +static void *__text_poke(void *addr, const void *opcode, size_t len)
>>  {
>>  	unsigned long flags;
>>  	char *vaddr;
>> @@ -702,8 +691,6 @@ void *text_poke(void *addr, const void *opcode,
>size_t len)
>>  	 */
>>  	BUG_ON(!after_bootmem);
>>  
>> -	lockdep_assert_held(&text_mutex);
>> -
>>  	if (!core_kernel_text((unsigned long)addr)) {
>>  		pages[0] = vmalloc_to_page(addr);
>>  		pages[1] = vmalloc_to_page(addr + PAGE_SIZE);
>> @@ -732,6 +719,43 @@ void *text_poke(void *addr, const void *opcode,
>size_t len)
>>  	return addr;
>>  }
>>  
>> +/**
>> + * text_poke - Update instructions on a live kernel
>> + * @addr: address to modify
>> + * @opcode: source of the copy
>> + * @len: length to copy
>> + *
>> + * Only atomic text poke/set should be allowed when not doing early
>patching.
>> + * It means the size must be writable atomically and the address
>must be aligned
>> + * in a way that permits an atomic write. It also makes sure we fit
>on a single
>> + * page.
>> + */
>> +void *text_poke(void *addr, const void *opcode, size_t len)
>> +{
>> +	lockdep_assert_held(&text_mutex);
>> +
>> +	return __text_poke(addr, opcode, len);
>> +}
>> +
>> +/**
>> + * text_poke_kgdb - Update instructions on a live kernel by kgdb
>> + * @addr: address to modify
>> + * @opcode: source of the copy
>> + * @len: length to copy
>> + *
>> + * Only atomic text poke/set should be allowed when not doing early
>patching.
>> + * It means the size must be writable atomically and the address
>must be aligned
>> + * in a way that permits an atomic write. It also makes sure we fit
>on a single
>> + * page.
>> + *
>> + * Context: should only be used by kgdb, which ensures no other core
>is running,
>> + *	    despite the fact it does not hold the text_mutex.
>> + */
>> +void *text_poke_kgdb(void *addr, const void *opcode, size_t len)
>> +{
>> +	return __text_poke(addr, opcode, len);
>> +}
>> +
>>  static void do_sync_core(void *info)
>>  {
>>  	sync_core();
>> diff --git a/arch/x86/kernel/kgdb.c b/arch/x86/kernel/kgdb.c
>> index 5db08425063e..1461544cba8b 100644
>> --- a/arch/x86/kernel/kgdb.c
>> +++ b/arch/x86/kernel/kgdb.c
>> @@ -758,13 +758,13 @@ int kgdb_arch_set_breakpoint(struct kgdb_bkpt
>*bpt)
>>  	if (!err)
>>  		return err;
>>  	/*
>> -	 * It is safe to call text_poke() because normal kernel execution
>> +	 * It is safe to call text_poke_kgdb() because normal kernel
>execution
>>  	 * is stopped on all cores, so long as the text_mutex is not
>locked.
>>  	 */
>>  	if (mutex_is_locked(&text_mutex))
>>  		return -EBUSY;
>> -	text_poke((void *)bpt->bpt_addr, arch_kgdb_ops.gdb_bpt_instr,
>> -		  BREAK_INSTR_SIZE);
>> +	text_poke_kgdb((void *)bpt->bpt_addr, arch_kgdb_ops.gdb_bpt_instr,
>> +		       BREAK_INSTR_SIZE);
>>  	err = probe_kernel_read(opc, (char *)bpt->bpt_addr,
>BREAK_INSTR_SIZE);
>>  	if (err)
>>  		return err;
>> @@ -783,12 +783,13 @@ int kgdb_arch_remove_breakpoint(struct
>kgdb_bkpt *bpt)
>>  	if (bpt->type != BP_POKE_BREAKPOINT)
>>  		goto knl_write;
>>  	/*
>> -	 * It is safe to call text_poke() because normal kernel execution
>> +	 * It is safe to call text_poke_kgdb() because normal kernel
>execution
>>  	 * is stopped on all cores, so long as the text_mutex is not
>locked.
>>  	 */
>>  	if (mutex_is_locked(&text_mutex))
>>  		goto knl_write;
>> -	text_poke((void *)bpt->bpt_addr, bpt->saved_instr,
>BREAK_INSTR_SIZE);
>> +	text_poke_kgdb((void *)bpt->bpt_addr, bpt->saved_instr,
>> +		       BREAK_INSTR_SIZE);
>>  	err = probe_kernel_read(opc, (char *)bpt->bpt_addr,
>BREAK_INSTR_SIZE);
>>  	if (err || memcmp(opc, bpt->saved_instr, BREAK_INSTR_SIZE))
>>  		goto knl_write;
>> -- 
>> 2.17.1
>> 

If you are reorganizing this code, please do so in a way that the caller doesn't have to worry about whether it should call text_poke_bp() or text_poke_early(). Right now the caller has to know that, which makes no sense.
-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 06/17] x86/alternative: use temporary mm for text poking
  2019-01-17 20:47     ` Andy Lutomirski
@ 2019-01-17 21:43       ` Nadav Amit
  2019-01-17 22:29         ` Nadav Amit
  2019-01-17 22:31         ` hpa
  0 siblings, 2 replies; 51+ messages in thread
From: Nadav Amit @ 2019-01-17 21:43 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: Rick Edgecombe, Ingo Molnar, LKML, X86 ML, H. Peter Anvin,
	Thomas Gleixner, Borislav Petkov, Dave Hansen, Peter Zijlstra,
	linux_dti, linux-integrity, LSM List, Andrew Morton,
	Kernel Hardening, Linux-MM, Will Deacon, Ard Biesheuvel,
	Kristen Carlson Accardi, Dock, Deneen T, Kees Cook, Dave Hansen,
	Masami Hiramatsu

> On Jan 17, 2019, at 12:47 PM, Andy Lutomirski <luto@kernel.org> wrote:
> 
> On Thu, Jan 17, 2019 at 12:27 PM Andy Lutomirski <luto@kernel.org> wrote:
>> On Wed, Jan 16, 2019 at 4:33 PM Rick Edgecombe
>> <rick.p.edgecombe@intel.com> wrote:
>>> From: Nadav Amit <namit@vmware.com>
>>> 
>>> text_poke() can potentially compromise security as it sets temporary
>>> PTEs in the fixmap. These PTEs might be used to rewrite the kernel code
>>> from other cores accidentally or maliciously, if an attacker gains the
>>> ability to write onto kernel memory.
>> 
>> I think this may be sufficient, but barely.
>> 
>>> +       pte_clear(poking_mm, poking_addr, ptep);
>>> +
>>> +       /*
>>> +        * __flush_tlb_one_user() performs a redundant TLB flush when PTI is on,
>>> +        * as it also flushes the corresponding "user" address spaces, which
>>> +        * does not exist.
>>> +        *
>>> +        * Poking, however, is already very inefficient since it does not try to
>>> +        * batch updates, so we ignore this problem for the time being.
>>> +        *
>>> +        * Since the PTEs do not exist in other kernel address-spaces, we do
>>> +        * not use __flush_tlb_one_kernel(), which when PTI is on would cause
>>> +        * more unwarranted TLB flushes.
>>> +        *
>>> +        * There is a slight anomaly here: the PTE is a supervisor-only and
>>> +        * (potentially) global and we use __flush_tlb_one_user() but this
>>> +        * should be fine.
>>> +        */
>>> +       __flush_tlb_one_user(poking_addr);
>>> +       if (cross_page_boundary) {
>>> +               pte_clear(poking_mm, poking_addr + PAGE_SIZE, ptep + 1);
>>> +               __flush_tlb_one_user(poking_addr + PAGE_SIZE);
>>> +       }
>> 
>> In principle, another CPU could still have the old translation.  Your
>> mutex probably makes this impossible, but it makes me nervous.
>> Ideally you'd use flush_tlb_mm_range(), but I guess you can't do that
>> with IRQs off.  Hmm.  I think you should add an inc_mm_tlb_gen() here.
>> Arguably, if you did that, you could omit the flushes, but maybe
>> that's silly.
>> 
>> If we start getting new users of use_temporary_mm(), we should give
>> some serious thought to the SMP semantics.
>> 
>> Also, you're using PAGE_KERNEL.  Please tell me that the global bit
>> isn't set in there.
> 
> Much better solution: do unuse_temporary_mm() and *then*
> flush_tlb_mm_range().  This is entirely non-sketchy and should be just
> about optimal, too.

This solution sounds nice and clean. The fact that the global bit was set
didn’t matter before (since __flush_tlb_one_user() would get rid of it no
matter what), but it would matter now, so I’ll change it too.
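
A sketch of the change being agreed to here (assuming the x86
_PAGE_GLOBAL/PAGE_KERNEL definitions; the exact form in the eventual patch
may differ):

	/*
	 * Map the poking page(s) without the global bit, so the PTE is
	 * tied to poking_mm and flushed like any other entry of that mm.
	 */
	pgprot_t prot = __pgprot(pgprot_val(PAGE_KERNEL) & ~_PAGE_GLOBAL);

	set_pte_at(poking_mm, poking_addr, ptep, mk_pte(pages[0], prot));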

Thanks!

Nadav


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 14/17] mm: Make hibernate handle unmapped pages
  2019-01-17  9:39   ` Pavel Machek
@ 2019-01-17 22:16     ` Edgecombe, Rick P
  2019-01-17 23:41       ` Pavel Machek
  0 siblings, 1 reply; 51+ messages in thread
From: Edgecombe, Rick P @ 2019-01-17 22:16 UTC (permalink / raw)
  To: pavel
  Cc: linux-kernel, peterz, linux-integrity, ard.biesheuvel, tglx,
	linux-mm, nadav.amit, dave.hansen, Dock, Deneen T,
	linux-security-module, x86, akpm, hpa, kristen, mingo, linux_dti,
	luto, will.deacon, bp, kernel-hardening, rjw

On Thu, 2019-01-17 at 10:39 +0100, Pavel Machek wrote:
> Hi!
> 
> > For architectures with CONFIG_ARCH_HAS_SET_ALIAS, pages can be unmapped
> > briefly on the directmap, even when CONFIG_DEBUG_PAGEALLOC is not
> > configured.
> > So this changes kernel_map_pages and kernel_page_present to be defined when
> > CONFIG_ARCH_HAS_SET_ALIAS is defined as well. It also changes places
> > (page_alloc.c) where those functions are assumed to only be implemented when
> > CONFIG_DEBUG_PAGEALLOC is defined.
> 
> Which architectures are that?
> 
> Should this be merged to the patch where HAS_SET_ALIAS is introduced? We
> don't want broken hibernation in between....
Thanks for taking a look. It was added for x86 in patch 13 of this patchset,
and there was interest expressed in adding it for arm64. If you didn't get the
whole set and want to see it, let me know and I can send it.

> 
> > -#ifdef CONFIG_DEBUG_PAGEALLOC
> >  extern bool _debug_pagealloc_enabled;
> > -extern void __kernel_map_pages(struct page *page, int numpages, int
> > enable);
> >  
> >  static inline bool debug_pagealloc_enabled(void)
> >  {
> > -	return _debug_pagealloc_enabled;
> > +	return IS_ENABLED(CONFIG_DEBUG_PAGEALLOC) && _debug_pagealloc_enabled;
> >  }
> 
> This will break build AFAICT. _debug_pagealloc_enabled variable does
> not exist in !CONFIG_DEBUG_PAGEALLOC case.
> 
> 									Pavel
After adding the CONFIG_ARCH_HAS_SET_ALIAS condition to the ifdefs in this
area, the result looked a little hard to read to me, so I moved
debug_pagealloc_enabled() and the extern bool _debug_pagealloc_enabled
declaration outside to make it easier. I think you are right that the actual
non-extern variable cannot be defined there, but the reference here gets
optimized out in that case.

I just double-checked, and it builds for both CONFIG_DEBUG_PAGEALLOC=n and
CONFIG_DEBUG_PAGEALLOC=y for me.
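
A small userspace analog of why that links (illustrative only, not the kernel
code): when the IS_ENABLED()-style condition folds to the constant 0, the &&
is resolved at compile time and the extern is never actually referenced, so
it needs no definition.

	/* build with: cc -O2 -DENABLED=0 demo.c */
	extern int _flag;		/* no definition anywhere */

	static inline int feature_enabled(void)
	{
		return ENABLED && _flag;  /* folds to 0; _flag not emitted */
	}

	int main(void)
	{
		return feature_enabled();
	}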

Thanks,

Rick

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 06/17] x86/alternative: use temporary mm for text poking
  2019-01-17 21:43       ` Nadav Amit
@ 2019-01-17 22:29         ` Nadav Amit
  2019-01-17 22:31         ` hpa
  1 sibling, 0 replies; 51+ messages in thread
From: Nadav Amit @ 2019-01-17 22:29 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: Rick Edgecombe, Ingo Molnar, LKML, X86 ML, H. Peter Anvin,
	Thomas Gleixner, Borislav Petkov, Dave Hansen, Peter Zijlstra,
	Damian Tometzki, linux-integrity, LSM List, Andrew Morton,
	Kernel Hardening, Linux-MM, Will Deacon, Ard Biesheuvel,
	Kristen Carlson Accardi, Dock, Deneen T, Kees Cook, Dave Hansen,
	Masami Hiramatsu

> On Jan 17, 2019, at 1:43 PM, Nadav Amit <nadav.amit@gmail.com> wrote:
> 
>> On Jan 17, 2019, at 12:47 PM, Andy Lutomirski <luto@kernel.org> wrote:
>> 
>> On Thu, Jan 17, 2019 at 12:27 PM Andy Lutomirski <luto@kernel.org> wrote:
>>> On Wed, Jan 16, 2019 at 4:33 PM Rick Edgecombe
>>> <rick.p.edgecombe@intel.com> wrote:
>>>> From: Nadav Amit <namit@vmware.com>
>>>> 
>>>> text_poke() can potentially compromise security as it sets temporary
>>>> PTEs in the fixmap. These PTEs might be used to rewrite the kernel code
>>>> from other cores accidentally or maliciously, if an attacker gains the
>>>> ability to write onto kernel memory.
>>> 
>>> I think this may be sufficient, but barely.
>>> 
>>>> +       pte_clear(poking_mm, poking_addr, ptep);
>>>> +
>>>> +       /*
>>>> +        * __flush_tlb_one_user() performs a redundant TLB flush when PTI is on,
>>>> +        * as it also flushes the corresponding "user" address spaces, which
>>>> +        * does not exist.
>>>> +        *
>>>> +        * Poking, however, is already very inefficient since it does not try to
>>>> +        * batch updates, so we ignore this problem for the time being.
>>>> +        *
>>>> +        * Since the PTEs do not exist in other kernel address-spaces, we do
>>>> +        * not use __flush_tlb_one_kernel(), which when PTI is on would cause
>>>> +        * more unwarranted TLB flushes.
>>>> +        *
>>>> +        * There is a slight anomaly here: the PTE is a supervisor-only and
>>>> +        * (potentially) global and we use __flush_tlb_one_user() but this
>>>> +        * should be fine.
>>>> +        */
>>>> +       __flush_tlb_one_user(poking_addr);
>>>> +       if (cross_page_boundary) {
>>>> +               pte_clear(poking_mm, poking_addr + PAGE_SIZE, ptep + 1);
>>>> +               __flush_tlb_one_user(poking_addr + PAGE_SIZE);
>>>> +       }
>>> 
>>> In principle, another CPU could still have the old translation.  Your
>>> mutex probably makes this impossible, but it makes me nervous.
>>> Ideally you'd use flush_tlb_mm_range(), but I guess you can't do that
>>> with IRQs off.  Hmm.  I think you should add an inc_mm_tlb_gen() here.
>>> Arguably, if you did that, you could omit the flushes, but maybe
>>> that's silly.
>>> 
>>> If we start getting new users of use_temporary_mm(), we should give
>>> some serious thought to the SMP semantics.
>>> 
>>> Also, you're using PAGE_KERNEL.  Please tell me that the global bit
>>> isn't set in there.
>> 
>> Much better solution: do unuse_temporary_mm() and *then*
>> flush_tlb_mm_range().  This is entirely non-sketchy and should be just
>> about optimal, too.
> 
> This solution sounds nice and clean. The fact the global-bit was set didn’t
> matter before (since __flush_tlb_one_user would get rid of it no matter
> what), but would matter now, so I’ll change it too.

Err... so actually text_poke() might be called with IRQs disabled (by kgdb).
flush_tlb_mm_range() should still work fine even with IRQs disabled, since no
core would be using poking_mm at that point. I can add a comment to
flush_tlb_mm_range(), but all in all it is not very pretty.


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 06/17] x86/alternative: use temporary mm for text poking
  2019-01-17 21:43       ` Nadav Amit
  2019-01-17 22:29         ` Nadav Amit
@ 2019-01-17 22:31         ` hpa
  1 sibling, 0 replies; 51+ messages in thread
From: hpa @ 2019-01-17 22:31 UTC (permalink / raw)
  To: Nadav Amit, Andy Lutomirski
  Cc: Rick Edgecombe, Ingo Molnar, LKML, X86 ML, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Peter Zijlstra, linux_dti,
	linux-integrity, LSM List, Andrew Morton, Kernel Hardening,
	Linux-MM, Will Deacon, Ard Biesheuvel, Kristen Carlson Accardi,
	Dock, Deneen T, Kees Cook, Dave Hansen, Masami Hiramatsu

On January 17, 2019 1:43:54 PM PST, Nadav Amit <nadav.amit@gmail.com> wrote:
>> On Jan 17, 2019, at 12:47 PM, Andy Lutomirski <luto@kernel.org>
>wrote:
>> 
>> On Thu, Jan 17, 2019 at 12:27 PM Andy Lutomirski <luto@kernel.org>
>wrote:
>>> On Wed, Jan 16, 2019 at 4:33 PM Rick Edgecombe
>>> <rick.p.edgecombe@intel.com> wrote:
>>>> From: Nadav Amit <namit@vmware.com>
>>>> 
>>>> text_poke() can potentially compromise security as it sets
>temporary
>>>> PTEs in the fixmap. These PTEs might be used to rewrite the kernel
>code
>>>> from other cores accidentally or maliciously, if an attacker gains
>the
>>>> ability to write onto kernel memory.
>>> 
>>> I think this may be sufficient, but barely.
>>> 
>>>> +       pte_clear(poking_mm, poking_addr, ptep);
>>>> +
>>>> +       /*
>>>> +        * __flush_tlb_one_user() performs a redundant TLB flush
>when PTI is on,
>>>> +        * as it also flushes the corresponding "user" address
>spaces, which
>>>> +        * does not exist.
>>>> +        *
>>>> +        * Poking, however, is already very inefficient since it
>does not try to
>>>> +        * batch updates, so we ignore this problem for the time
>being.
>>>> +        *
>>>> +        * Since the PTEs do not exist in other kernel
>address-spaces, we do
>>>> +        * not use __flush_tlb_one_kernel(), which when PTI is on
>would cause
>>>> +        * more unwarranted TLB flushes.
>>>> +        *
>>>> +        * There is a slight anomaly here: the PTE is a
>supervisor-only and
>>>> +        * (potentially) global and we use __flush_tlb_one_user()
>but this
>>>> +        * should be fine.
>>>> +        */
>>>> +       __flush_tlb_one_user(poking_addr);
>>>> +       if (cross_page_boundary) {
>>>> +               pte_clear(poking_mm, poking_addr + PAGE_SIZE, ptep
>+ 1);
>>>> +               __flush_tlb_one_user(poking_addr + PAGE_SIZE);
>>>> +       }
>>> 
>>> In principle, another CPU could still have the old translation. 
>Your
>>> mutex probably makes this impossible, but it makes me nervous.
>>> Ideally you'd use flush_tlb_mm_range(), but I guess you can't do
>that
>>> with IRQs off.  Hmm.  I think you should add an inc_mm_tlb_gen()
>here.
>>> Arguably, if you did that, you could omit the flushes, but maybe
>>> that's silly.
>>> 
>>> If we start getting new users of use_temporary_mm(), we should give
>>> some serious thought to the SMP semantics.
>>> 
>>> Also, you're using PAGE_KERNEL.  Please tell me that the global bit
>>> isn't set in there.
>> 
>> Much better solution: do unuse_temporary_mm() and *then*
>> flush_tlb_mm_range().  This is entirely non-sketchy and should be
>just
>> about optimal, too.
>
>This solution sounds nice and clean. The fact that the global bit was set
>didn’t matter before (since __flush_tlb_one_user() would get rid of it no
>matter what), but it would matter now, so I’ll change it too.
>
>Thanks!
>
>Nadav

You can just disable the global bit at the top level, obviously.

This approach should also make it far easier to do batching if desired.
-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 01/17] Fix "x86/alternatives: Lockdep-enforce text_mutex in text_poke*()"
  2019-01-17 21:15     ` hpa
@ 2019-01-17 22:39       ` Nadav Amit
  2019-01-17 22:59         ` hpa
  0 siblings, 1 reply; 51+ messages in thread
From: Nadav Amit @ 2019-01-17 22:39 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Masami Hiramatsu, Rick Edgecombe, Andy Lutomirski, Ingo Molnar,
	LKML, X86 ML, Thomas Gleixner, Borislav Petkov, Dave Hansen,
	Peter Zijlstra, Damian Tometzki, linux-integrity, LSM List,
	Andrew Morton, Kernel Hardening, Linux-MM, Will Deacon,
	Ard Biesheuvel, Kristen Carlson Accardi, Dock, Deneen T,
	Kees Cook, Dave Hansen

> On Jan 17, 2019, at 1:15 PM, hpa@zytor.com wrote:
> 
> On January 16, 2019 10:47:01 PM PST, Masami Hiramatsu <mhiramat@kernel.org> wrote:
>> On Wed, 16 Jan 2019 16:32:43 -0800
>> Rick Edgecombe <rick.p.edgecombe@intel.com> wrote:
>> 
>>> From: Nadav Amit <namit@vmware.com>
>>> 
>>> text_mutex is currently expected to be held before text_poke() is
>>> called, but kgdb does not take the mutex, and instead *supposedly*
>>> ensures the lock is not taken and will not be acquired by any other
>> core
>>> while text_poke() is running.
>>> 
>>> The reason for the "supposedly" comment is that it is not entirely
>> clear
>>> that this would be the case if gdb_do_roundup is zero.
>>> 
>>> This patch creates two wrapper functions, text_poke() and
>>> text_poke_kgdb() which do or do not run the lockdep assertion
>>> respectively.
>>> 
>>> While we are at it, change the return code of text_poke() to
>> something
>>> meaningful. One day, callers might actually respect it and the
>> existing
>>> BUG_ON() when patching fails could be removed. For kgdb, the return
>>> value can actually be used.
>> 
>> Looks good to me.
>> 
>> Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
>> 
>> Thank you,
>> 
>>> Cc: Andy Lutomirski <luto@kernel.org>
>>> Cc: Kees Cook <keescook@chromium.org>
>>> Cc: Dave Hansen <dave.hansen@intel.com>
>>> Cc: Masami Hiramatsu <mhiramat@kernel.org>
>>> Fixes: 9222f606506c ("x86/alternatives: Lockdep-enforce text_mutex in
>> text_poke*()")
>>> Suggested-by: Peter Zijlstra <peterz@infradead.org>
>>> Acked-by: Jiri Kosina <jkosina@suse.cz>
>>> Signed-off-by: Nadav Amit <namit@vmware.com>
>>> Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
>>> ---
>>> arch/x86/include/asm/text-patching.h |  1 +
>>> arch/x86/kernel/alternative.c        | 52
>> ++++++++++++++++++++--------
>>> arch/x86/kernel/kgdb.c               | 11 +++---
>>> 3 files changed, 45 insertions(+), 19 deletions(-)
>>> 
>>> diff --git a/arch/x86/include/asm/text-patching.h
>> b/arch/x86/include/asm/text-patching.h
>>> index e85ff65c43c3..f8fc8e86cf01 100644
>>> --- a/arch/x86/include/asm/text-patching.h
>>> +++ b/arch/x86/include/asm/text-patching.h
>>> @@ -35,6 +35,7 @@ extern void *text_poke_early(void *addr, const void
>> *opcode, size_t len);
>>>  * inconsistent instruction while you patch.
>>>  */
>>> extern void *text_poke(void *addr, const void *opcode, size_t len);
>>> +extern void *text_poke_kgdb(void *addr, const void *opcode, size_t
>> len);
>>> extern int poke_int3_handler(struct pt_regs *regs);
>>> extern void *text_poke_bp(void *addr, const void *opcode, size_t
>> len, void *handler);
>>> extern int after_bootmem;
>>> diff --git a/arch/x86/kernel/alternative.c
>> b/arch/x86/kernel/alternative.c
>>> index ebeac487a20c..c6a3a10a2fd5 100644
>>> --- a/arch/x86/kernel/alternative.c
>>> +++ b/arch/x86/kernel/alternative.c
>>> @@ -678,18 +678,7 @@ void *__init_or_module text_poke_early(void
>> *addr, const void *opcode,
>>> return addr;
>>> }
>>> 
>>> -/**
>>> - * text_poke - Update instructions on a live kernel
>>> - * @addr: address to modify
>>> - * @opcode: source of the copy
>>> - * @len: length to copy
>>> - *
>>> - * Only atomic text poke/set should be allowed when not doing early
>> patching.
>>> - * It means the size must be writable atomically and the address
>> must be aligned
>>> - * in a way that permits an atomic write. It also makes sure we fit
>> on a single
>>> - * page.
>>> - */
>>> -void *text_poke(void *addr, const void *opcode, size_t len)
>>> +static void *__text_poke(void *addr, const void *opcode, size_t len)
>>> {
>>> 	unsigned long flags;
>>> 	char *vaddr;
>>> @@ -702,8 +691,6 @@ void *text_poke(void *addr, const void *opcode,
>> size_t len)
>>>  */
>>> 	BUG_ON(!after_bootmem);
>>> 
>>> -	lockdep_assert_held(&text_mutex);
>>> -
>>> 	if (!core_kernel_text((unsigned long)addr)) {
>>> 		pages[0] = vmalloc_to_page(addr);
>>> 		pages[1] = vmalloc_to_page(addr + PAGE_SIZE);
>>> @@ -732,6 +719,43 @@ void *text_poke(void *addr, const void *opcode,
>> size_t len)
>>> return addr;
>>> }
>>> 
>>> +/**
>>> + * text_poke - Update instructions on a live kernel
>>> + * @addr: address to modify
>>> + * @opcode: source of the copy
>>> + * @len: length to copy
>>> + *
>>> + * Only atomic text poke/set should be allowed when not doing early
>> patching.
>>> + * It means the size must be writable atomically and the address
>> must be aligned
>>> + * in a way that permits an atomic write. It also makes sure we fit
>> on a single
>>> + * page.
>>> + */
>>> +void *text_poke(void *addr, const void *opcode, size_t len)
>>> +{
>>> +	lockdep_assert_held(&text_mutex);
>>> +
>>> +	return __text_poke(addr, opcode, len);
>>> +}
>>> +
>>> +/**
>>> + * text_poke_kgdb - Update instructions on a live kernel by kgdb
>>> + * @addr: address to modify
>>> + * @opcode: source of the copy
>>> + * @len: length to copy
>>> + *
>>> + * Only atomic text poke/set should be allowed when not doing early
>> patching.
>>> + * It means the size must be writable atomically and the address
>> must be aligned
>>> + * in a way that permits an atomic write. It also makes sure we fit
>> on a single
>>> + * page.
>>> + *
>>> + * Context: should only be used by kgdb, which ensures no other core
>> is running,
>>> + *	    despite the fact it does not hold the text_mutex.
>>> + */
>>> +void *text_poke_kgdb(void *addr, const void *opcode, size_t len)
>>> +{
>>> +	return __text_poke(addr, opcode, len);
>>> +}
>>> +
>>> static void do_sync_core(void *info)
>>> {
>>> 	sync_core();
>>> diff --git a/arch/x86/kernel/kgdb.c b/arch/x86/kernel/kgdb.c
>>> index 5db08425063e..1461544cba8b 100644
>>> --- a/arch/x86/kernel/kgdb.c
>>> +++ b/arch/x86/kernel/kgdb.c
>>> @@ -758,13 +758,13 @@ int kgdb_arch_set_breakpoint(struct kgdb_bkpt
>> *bpt)
>>> if (!err)
>>> 		return err;
>>> 	/*
>>> -	 * It is safe to call text_poke() because normal kernel execution
>>> +	 * It is safe to call text_poke_kgdb() because normal kernel
>> execution
>>>  * is stopped on all cores, so long as the text_mutex is not
>> locked.
>>>  */
>>> 	if (mutex_is_locked(&text_mutex))
>>> 		return -EBUSY;
>>> -	text_poke((void *)bpt->bpt_addr, arch_kgdb_ops.gdb_bpt_instr,
>>> -		  BREAK_INSTR_SIZE);
>>> +	text_poke_kgdb((void *)bpt->bpt_addr, arch_kgdb_ops.gdb_bpt_instr,
>>> +		       BREAK_INSTR_SIZE);
>>> 	err = probe_kernel_read(opc, (char *)bpt->bpt_addr,
>> BREAK_INSTR_SIZE);
>>> if (err)
>>> 		return err;
>>> @@ -783,12 +783,13 @@ int kgdb_arch_remove_breakpoint(struct
>> kgdb_bkpt *bpt)
>>> if (bpt->type != BP_POKE_BREAKPOINT)
>>> 		goto knl_write;
>>> 	/*
>>> -	 * It is safe to call text_poke() because normal kernel execution
>>> +	 * It is safe to call text_poke_kgdb() because normal kernel
>> execution
>>>  * is stopped on all cores, so long as the text_mutex is not
>> locked.
>>>  */
>>> 	if (mutex_is_locked(&text_mutex))
>>> 		goto knl_write;
>>> -	text_poke((void *)bpt->bpt_addr, bpt->saved_instr,
>> BREAK_INSTR_SIZE);
>>> +	text_poke_kgdb((void *)bpt->bpt_addr, bpt->saved_instr,
>>> +		       BREAK_INSTR_SIZE);
>>> 	err = probe_kernel_read(opc, (char *)bpt->bpt_addr,
>> BREAK_INSTR_SIZE);
>>> if (err || memcmp(opc, bpt->saved_instr, BREAK_INSTR_SIZE))
>>> 		goto knl_write;
>>> -- 
>>> 2.17.1
> 
> If you are reorganizing this code, please do so in a way that the caller
> doesn’t have to worry about whether it should call text_poke_bp() or
> text_poke_early(). Right now the caller has to know that, which makes no
> sense.

Did you look at "[11/17] x86/jump-label: remove support for custom poker”?

https://lore.kernel.org/patchwork/patch/1032857/

If this is not what you are referring to, please be more concrete.
text_poke_early() is still used directly at init time and while modules are
loaded, which might not be great, but that is outside the scope of this
patch-set.


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 01/17] Fix "x86/alternatives: Lockdep-enforce text_mutex in text_poke*()"
  2019-01-17 22:39       ` Nadav Amit
@ 2019-01-17 22:59         ` hpa
  2019-01-17 23:14           ` Nadav Amit
  0 siblings, 1 reply; 51+ messages in thread
From: hpa @ 2019-01-17 22:59 UTC (permalink / raw)
  To: Nadav Amit
  Cc: Masami Hiramatsu, Rick Edgecombe, Andy Lutomirski, Ingo Molnar,
	LKML, X86 ML, Thomas Gleixner, Borislav Petkov, Dave Hansen,
	Peter Zijlstra, Damian Tometzki, linux-integrity, LSM List,
	Andrew Morton, Kernel Hardening, Linux-MM, Will Deacon,
	Ard Biesheuvel, Kristen Carlson Accardi, Dock, Deneen T,
	Kees Cook, Dave Hansen

On January 17, 2019 2:39:15 PM PST, Nadav Amit <namit@vmware.com> wrote:
>> On Jan 17, 2019, at 1:15 PM, hpa@zytor.com wrote:
>> 
>> On January 16, 2019 10:47:01 PM PST, Masami Hiramatsu
><mhiramat@kernel.org> wrote:
>>> On Wed, 16 Jan 2019 16:32:43 -0800
>>> Rick Edgecombe <rick.p.edgecombe@intel.com> wrote:
>>> 
>>>> From: Nadav Amit <namit@vmware.com>
>>>> 
>>>> text_mutex is currently expected to be held before text_poke() is
>>>> called, but kgdb does not take the mutex, and instead
>*supposedly*
>>>> ensures the lock is not taken and will not be acquired by any other
>>> core
>>>> while text_poke() is running.
>>>> 
>>>> The reason for the "supposedly" comment is that it is not entirely
>>> clear
>>>> that this would be the case if gdb_do_roundup is zero.
>>>> 
>>>> This patch creates two wrapper functions, text_poke() and
>>>> text_poke_kgdb() which do or do not run the lockdep assertion
>>>> respectively.
>>>> 
>>>> While we are at it, change the return code of text_poke() to
>>> something
>>>> meaningful. One day, callers might actually respect it and the
>>> existing
>>>> BUG_ON() when patching fails could be removed. For kgdb, the return
>>>> value can actually be used.
>>> 
>>> Looks good to me.
>>> 
>>> Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
>>> 
>>> Thank you,
>>> 
>>>> Cc: Andy Lutomirski <luto@kernel.org>
>>>> Cc: Kees Cook <keescook@chromium.org>
>>>> Cc: Dave Hansen <dave.hansen@intel.com>
>>>> Cc: Masami Hiramatsu <mhiramat@kernel.org>
>>>> Fixes: 9222f606506c ("x86/alternatives: Lockdep-enforce text_mutex
>in
>>> text_poke*()")
>>>> Suggested-by: Peter Zijlstra <peterz@infradead.org>
>>>> Acked-by: Jiri Kosina <jkosina@suse.cz>
>>>> Signed-off-by: Nadav Amit <namit@vmware.com>
>>>> Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
>>>> ---
>>>> arch/x86/include/asm/text-patching.h |  1 +
>>>> arch/x86/kernel/alternative.c        | 52
>>> ++++++++++++++++++++--------
>>>> arch/x86/kernel/kgdb.c               | 11 +++---
>>>> 3 files changed, 45 insertions(+), 19 deletions(-)
>>>> 
>>>> diff --git a/arch/x86/include/asm/text-patching.h
>>> b/arch/x86/include/asm/text-patching.h
>>>> index e85ff65c43c3..f8fc8e86cf01 100644
>>>> --- a/arch/x86/include/asm/text-patching.h
>>>> +++ b/arch/x86/include/asm/text-patching.h
>>>> @@ -35,6 +35,7 @@ extern void *text_poke_early(void *addr, const
>void
>>> *opcode, size_t len);
>>>>  * inconsistent instruction while you patch.
>>>>  */
>>>> extern void *text_poke(void *addr, const void *opcode, size_t len);
>>>> +extern void *text_poke_kgdb(void *addr, const void *opcode, size_t
>>> len);
>>>> extern int poke_int3_handler(struct pt_regs *regs);
>>>> extern void *text_poke_bp(void *addr, const void *opcode, size_t
>>> len, void *handler);
>>>> extern int after_bootmem;
>>>> diff --git a/arch/x86/kernel/alternative.c
>>> b/arch/x86/kernel/alternative.c
>>>> index ebeac487a20c..c6a3a10a2fd5 100644
>>>> --- a/arch/x86/kernel/alternative.c
>>>> +++ b/arch/x86/kernel/alternative.c
>>>> @@ -678,18 +678,7 @@ void *__init_or_module text_poke_early(void
>>> *addr, const void *opcode,
>>>> return addr;
>>>> }
>>>> 
>>>> -/**
>>>> - * text_poke - Update instructions on a live kernel
>>>> - * @addr: address to modify
>>>> - * @opcode: source of the copy
>>>> - * @len: length to copy
>>>> - *
>>>> - * Only atomic text poke/set should be allowed when not doing
>early
>>> patching.
>>>> - * It means the size must be writable atomically and the address
>>> must be aligned
>>>> - * in a way that permits an atomic write. It also makes sure we
>fit
>>> on a single
>>>> - * page.
>>>> - */
>>>> -void *text_poke(void *addr, const void *opcode, size_t len)
>>>> +static void *__text_poke(void *addr, const void *opcode, size_t
>len)
>>>> {
>>>> 	unsigned long flags;
>>>> 	char *vaddr;
>>>> @@ -702,8 +691,6 @@ void *text_poke(void *addr, const void *opcode,
>>> size_t len)
>>>>  */
>>>> 	BUG_ON(!after_bootmem);
>>>> 
>>>> -	lockdep_assert_held(&text_mutex);
>>>> -
>>>> 	if (!core_kernel_text((unsigned long)addr)) {
>>>> 		pages[0] = vmalloc_to_page(addr);
>>>> 		pages[1] = vmalloc_to_page(addr + PAGE_SIZE);
>>>> @@ -732,6 +719,43 @@ void *text_poke(void *addr, const void
>*opcode,
>>> size_t len)
>>>> return addr;
>>>> }
>>>> 
>>>> +/**
>>>> + * text_poke - Update instructions on a live kernel
>>>> + * @addr: address to modify
>>>> + * @opcode: source of the copy
>>>> + * @len: length to copy
>>>> + *
>>>> + * Only atomic text poke/set should be allowed when not doing
>early
>>> patching.
>>>> + * It means the size must be writable atomically and the address
>>> must be aligned
>>>> + * in a way that permits an atomic write. It also makes sure we
>fit
>>> on a single
>>>> + * page.
>>>> + */
>>>> +void *text_poke(void *addr, const void *opcode, size_t len)
>>>> +{
>>>> +	lockdep_assert_held(&text_mutex);
>>>> +
>>>> +	return __text_poke(addr, opcode, len);
>>>> +}
>>>> +
>>>> +/**
>>>> + * text_poke_kgdb - Update instructions on a live kernel by kgdb
>>>> + * @addr: address to modify
>>>> + * @opcode: source of the copy
>>>> + * @len: length to copy
>>>> + *
>>>> + * Only atomic text poke/set should be allowed when not doing
>early
>>> patching.
>>>> + * It means the size must be writable atomically and the address
>>> must be aligned
>>>> + * in a way that permits an atomic write. It also makes sure we
>fit
>>> on a single
>>>> + * page.
>>>> + *
>>>> + * Context: should only be used by kgdb, which ensures no other
>core
>>> is running,
>>>> + *	    despite the fact it does not hold the text_mutex.
>>>> + */
>>>> +void *text_poke_kgdb(void *addr, const void *opcode, size_t len)
>>>> +{
>>>> +	return __text_poke(addr, opcode, len);
>>>> +}
>>>> +
>>>> static void do_sync_core(void *info)
>>>> {
>>>> 	sync_core();
>>>> diff --git a/arch/x86/kernel/kgdb.c b/arch/x86/kernel/kgdb.c
>>>> index 5db08425063e..1461544cba8b 100644
>>>> --- a/arch/x86/kernel/kgdb.c
>>>> +++ b/arch/x86/kernel/kgdb.c
>>>> @@ -758,13 +758,13 @@ int kgdb_arch_set_breakpoint(struct kgdb_bkpt
>>> *bpt)
>>>> if (!err)
>>>> 		return err;
>>>> 	/*
>>>> -	 * It is safe to call text_poke() because normal kernel execution
>>>> +	 * It is safe to call text_poke_kgdb() because normal kernel
>>> execution
>>>>  * is stopped on all cores, so long as the text_mutex is not
>>> locked.
>>>>  */
>>>> 	if (mutex_is_locked(&text_mutex))
>>>> 		return -EBUSY;
>>>> -	text_poke((void *)bpt->bpt_addr, arch_kgdb_ops.gdb_bpt_instr,
>>>> -		  BREAK_INSTR_SIZE);
>>>> +	text_poke_kgdb((void *)bpt->bpt_addr,
>arch_kgdb_ops.gdb_bpt_instr,
>>>> +		       BREAK_INSTR_SIZE);
>>>> 	err = probe_kernel_read(opc, (char *)bpt->bpt_addr,
>>> BREAK_INSTR_SIZE);
>>>> if (err)
>>>> 		return err;
>>>> @@ -783,12 +783,13 @@ int kgdb_arch_remove_breakpoint(struct
>>> kgdb_bkpt *bpt)
>>>> if (bpt->type != BP_POKE_BREAKPOINT)
>>>> 		goto knl_write;
>>>> 	/*
>>>> -	 * It is safe to call text_poke() because normal kernel execution
>>>> +	 * It is safe to call text_poke_kgdb() because normal kernel
>>> execution
>>>>  * is stopped on all cores, so long as the text_mutex is not
>>> locked.
>>>>  */
>>>> 	if (mutex_is_locked(&text_mutex))
>>>> 		goto knl_write;
>>>> -	text_poke((void *)bpt->bpt_addr, bpt->saved_instr,
>>> BREAK_INSTR_SIZE);
>>>> +	text_poke_kgdb((void *)bpt->bpt_addr, bpt->saved_instr,
>>>> +		       BREAK_INSTR_SIZE);
>>>> 	err = probe_kernel_read(opc, (char *)bpt->bpt_addr,
>>> BREAK_INSTR_SIZE);
>>>> if (err || memcmp(opc, bpt->saved_instr, BREAK_INSTR_SIZE))
>>>> 		goto knl_write;
>>>> -- 
>>>> 2.17.1
>> 
>> If you are reorganizing this code, please do so in a way that the caller
>> doesn’t have to worry about whether it should call text_poke_bp() or
>> text_poke_early(). Right now the caller has to know that, which makes no
>> sense.
>
>Did you look at "[11/17] x86/jump-label: remove support for custom
>poker”?
>
>https://lore.kernel.org/patchwork/patch/1032857/
>
>If this is not what you are referring to, please be more concrete.
>text_poke_early() is still used directly at init time and while modules are
>loaded, which might not be great, but that is outside the scope of this
>patch-set.

I don't think it is out of scope, although that patch is a huge step in the right direction.

text_poke_{early,bp,...}, however, should be fully internal, that is, static functions, and we should present a single interface, preferably called text_poke(), to the outside world.

I think we have three subcases:

1. Early, UP, or under stop_machine();
2. Atomic and aligned;
3. Breakpoint.

My proposed algorithm should remove the need for a fixup, which should help this interface, too.

The specific alignment needed for #2 is stated by the hardware people to be not crossing 16 bytes (NOT a cache line) on any CPU we support SMP on and, of course, the store must be possible to do atomically on the specific CPU (note that we *can* do a redundantly large store of existing bytes, which adds flexibility).

To the best of my knowledge any CPU supporting SSE can do an atomic (for our purposes) aligned 16-byte store via MOVAPS; of course any CPU with cx16 can do it without SSE registers. For older CPUs we may be limited to 8-byte stores (cx8) or even 4-byte stores before we need to use the breakpoint algorithm.
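
As a userspace illustration of such a store (this only shows the instruction;
in the kernel, SSE register use would additionally need
kernel_fpu_begin()/kernel_fpu_end(), and the function name here is just a
placeholder):

	#include <emmintrin.h>	/* SSE2 intrinsics */
	#include <stdint.h>

	/* dst must be 16-byte aligned, e.g. an instruction slot that does
	 * not cross a 16-byte boundary. */
	static void store16_aligned(void *dst, const uint8_t src[16])
	{
		__m128i v = _mm_loadu_si128((const __m128i *)src);

		/* MOVDQA: a single aligned 16-byte store (the integer form
		 * of the MOVAPS store discussed above). */
		_mm_store_si128((__m128i *)dst, v);
	}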

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 01/17] Fix "x86/alternatives: Lockdep-enforce text_mutex in text_poke*()"
  2019-01-17 22:59         ` hpa
@ 2019-01-17 23:14           ` Nadav Amit
  0 siblings, 0 replies; 51+ messages in thread
From: Nadav Amit @ 2019-01-17 23:14 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Masami Hiramatsu, Rick Edgecombe, Andy Lutomirski, Ingo Molnar,
	LKML, X86 ML, Thomas Gleixner, Borislav Petkov, Dave Hansen,
	Peter Zijlstra, Damian Tometzki, linux-integrity, LSM List,
	Andrew Morton, Kernel Hardening, Linux-MM, Will Deacon,
	Ard Biesheuvel, Kristen Carlson Accardi, Dock, Deneen T,
	Kees Cook, Dave Hansen

> On Jan 17, 2019, at 2:59 PM, hpa@zytor.com wrote:
> 
> On January 17, 2019 2:39:15 PM PST, Nadav Amit <namit@vmware.com> wrote:
>>> On Jan 17, 2019, at 1:15 PM, hpa@zytor.com wrote:
>>> 
>>> On January 16, 2019 10:47:01 PM PST, Masami Hiramatsu
>> <mhiramat@kernel.org> wrote:
>>>> On Wed, 16 Jan 2019 16:32:43 -0800
>>>> Rick Edgecombe <rick.p.edgecombe@intel.com> wrote:
>>>> 
>>>>> From: Nadav Amit <namit@vmware.com>
>>>>> 
>>>>> text_mutex is currently expected to be held before text_poke() is
>>>>> called, but we kgdb does not take the mutex, and instead
>> *supposedly*
>>>>> ensures the lock is not taken and will not be acquired by any other
>>>> core
>>>>> while text_poke() is running.
>>>>> 
>>>>> The reason for the "supposedly" comment is that it is not entirely
>>>> clear
>>>>> that this would be the case if gdb_do_roundup is zero.
>>>>> 
>>>>> This patch creates two wrapper functions, text_poke() and
>>>>> text_poke_kgdb() which do or do not run the lockdep assertion
>>>>> respectively.
>>>>> 
>>>>> While we are at it, change the return code of text_poke() to
>>>> something
>>>>> meaningful. One day, callers might actually respect it and the
>>>> existing
>>>>> BUG_ON() when patching fails could be removed. For kgdb, the return
>>>>> value can actually be used.
>>>> 
>>>> Looks good to me.
>>>> 
>>>> Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
>>>> 
>>>> Thank you,
>>>> 
>>>>> Cc: Andy Lutomirski <luto@kernel.org>
>>>>> Cc: Kees Cook <keescook@chromium.org>
>>>>> Cc: Dave Hansen <dave.hansen@intel.com>
>>>>> Cc: Masami Hiramatsu <mhiramat@kernel.org>
>>>>> Fixes: 9222f606506c ("x86/alternatives: Lockdep-enforce text_mutex
>> in
>>>> text_poke*()")
>>>>> Suggested-by: Peter Zijlstra <peterz@infradead.org>
>>>>> Acked-by: Jiri Kosina <jkosina@suse.cz>
>>>>> Signed-off-by: Nadav Amit <namit@vmware.com>
>>>>> Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
>>>>> ---
>>>>> arch/x86/include/asm/text-patching.h |  1 +
>>>>> arch/x86/kernel/alternative.c        | 52
>>>> ++++++++++++++++++++--------
>>>>> arch/x86/kernel/kgdb.c               | 11 +++---
>>>>> 3 files changed, 45 insertions(+), 19 deletions(-)
>>>>> 
>>>>> diff --git a/arch/x86/include/asm/text-patching.h
>>>> b/arch/x86/include/asm/text-patching.h
>>>>> index e85ff65c43c3..f8fc8e86cf01 100644
>>>>> --- a/arch/x86/include/asm/text-patching.h
>>>>> +++ b/arch/x86/include/asm/text-patching.h
>>>>> @@ -35,6 +35,7 @@ extern void *text_poke_early(void *addr, const
>> void
>>>> *opcode, size_t len);
>>>>> * inconsistent instruction while you patch.
>>>>> */
>>>>> extern void *text_poke(void *addr, const void *opcode, size_t len);
>>>>> +extern void *text_poke_kgdb(void *addr, const void *opcode, size_t
>>>> len);
>>>>> extern int poke_int3_handler(struct pt_regs *regs);
>>>>> extern void *text_poke_bp(void *addr, const void *opcode, size_t
>>>> len, void *handler);
>>>>> extern int after_bootmem;
>>>>> diff --git a/arch/x86/kernel/alternative.c
>>>> b/arch/x86/kernel/alternative.c
>>>>> index ebeac487a20c..c6a3a10a2fd5 100644
>>>>> --- a/arch/x86/kernel/alternative.c
>>>>> +++ b/arch/x86/kernel/alternative.c
>>>>> @@ -678,18 +678,7 @@ void *__init_or_module text_poke_early(void
>>>> *addr, const void *opcode,
>>>>> return addr;
>>>>> }
>>>>> 
>>>>> -/**
>>>>> - * text_poke - Update instructions on a live kernel
>>>>> - * @addr: address to modify
>>>>> - * @opcode: source of the copy
>>>>> - * @len: length to copy
>>>>> - *
>>>>> - * Only atomic text poke/set should be allowed when not doing
>> early
>>>> patching.
>>>>> - * It means the size must be writable atomically and the address
>>>> must be aligned
>>>>> - * in a way that permits an atomic write. It also makes sure we
>> fit
>>>> on a single
>>>>> - * page.
>>>>> - */
>>>>> -void *text_poke(void *addr, const void *opcode, size_t len)
>>>>> +static void *__text_poke(void *addr, const void *opcode, size_t
>> len)
>>>>> {
>>>>> 	unsigned long flags;
>>>>> 	char *vaddr;
>>>>> @@ -702,8 +691,6 @@ void *text_poke(void *addr, const void *opcode,
>>>> size_t len)
>>>>> */
>>>>> 	BUG_ON(!after_bootmem);
>>>>> 
>>>>> -	lockdep_assert_held(&text_mutex);
>>>>> -
>>>>> 	if (!core_kernel_text((unsigned long)addr)) {
>>>>> 		pages[0] = vmalloc_to_page(addr);
>>>>> 		pages[1] = vmalloc_to_page(addr + PAGE_SIZE);
>>>>> @@ -732,6 +719,43 @@ void *text_poke(void *addr, const void
>> *opcode,
>>>> size_t len)
>>>>> return addr;
>>>>> }
>>>>> 
>>>>> +/**
>>>>> + * text_poke - Update instructions on a live kernel
>>>>> + * @addr: address to modify
>>>>> + * @opcode: source of the copy
>>>>> + * @len: length to copy
>>>>> + *
>>>>> + * Only atomic text poke/set should be allowed when not doing
>> early
>>>> patching.
>>>>> + * It means the size must be writable atomically and the address
>>>> must be aligned
>>>>> + * in a way that permits an atomic write. It also makes sure we
>> fit
>>>> on a single
>>>>> + * page.
>>>>> + */
>>>>> +void *text_poke(void *addr, const void *opcode, size_t len)
>>>>> +{
>>>>> +	lockdep_assert_held(&text_mutex);
>>>>> +
>>>>> +	return __text_poke(addr, opcode, len);
>>>>> +}
>>>>> +
>>>>> +/**
>>>>> + * text_poke_kgdb - Update instructions on a live kernel by kgdb
>>>>> + * @addr: address to modify
>>>>> + * @opcode: source of the copy
>>>>> + * @len: length to copy
>>>>> + *
>>>>> + * Only atomic text poke/set should be allowed when not doing
>> early
>>>> patching.
>>>>> + * It means the size must be writable atomically and the address
>>>> must be aligned
>>>>> + * in a way that permits an atomic write. It also makes sure we
>> fit
>>>> on a single
>>>>> + * page.
>>>>> + *
>>>>> + * Context: should only be used by kgdb, which ensures no other
>> core
>>>> is running,
>>>>> + *	    despite the fact it does not hold the text_mutex.
>>>>> + */
>>>>> +void *text_poke_kgdb(void *addr, const void *opcode, size_t len)
>>>>> +{
>>>>> +	return __text_poke(addr, opcode, len);
>>>>> +}
>>>>> +
>>>>> static void do_sync_core(void *info)
>>>>> {
>>>>> 	sync_core();
>>>>> diff --git a/arch/x86/kernel/kgdb.c b/arch/x86/kernel/kgdb.c
>>>>> index 5db08425063e..1461544cba8b 100644
>>>>> --- a/arch/x86/kernel/kgdb.c
>>>>> +++ b/arch/x86/kernel/kgdb.c
>>>>> @@ -758,13 +758,13 @@ int kgdb_arch_set_breakpoint(struct kgdb_bkpt
>>>> *bpt)
>>>>> if (!err)
>>>>> 		return err;
>>>>> 	/*
>>>>> -	 * It is safe to call text_poke() because normal kernel execution
>>>>> +	 * It is safe to call text_poke_kgdb() because normal kernel
>>>> execution
>>>>> * is stopped on all cores, so long as the text_mutex is not
>>>> locked.
>>>>> */
>>>>> 	if (mutex_is_locked(&text_mutex))
>>>>> 		return -EBUSY;
>>>>> -	text_poke((void *)bpt->bpt_addr, arch_kgdb_ops.gdb_bpt_instr,
>>>>> -		  BREAK_INSTR_SIZE);
>>>>> +	text_poke_kgdb((void *)bpt->bpt_addr,
>> arch_kgdb_ops.gdb_bpt_instr,
>>>>> +		       BREAK_INSTR_SIZE);
>>>>> 	err = probe_kernel_read(opc, (char *)bpt->bpt_addr,
>>>> BREAK_INSTR_SIZE);
>>>>> if (err)
>>>>> 		return err;
>>>>> @@ -783,12 +783,13 @@ int kgdb_arch_remove_breakpoint(struct
>>>> kgdb_bkpt *bpt)
>>>>> if (bpt->type != BP_POKE_BREAKPOINT)
>>>>> 		goto knl_write;
>>>>> 	/*
>>>>> -	 * It is safe to call text_poke() because normal kernel execution
>>>>> +	 * It is safe to call text_poke_kgdb() because normal kernel
>>>> execution
>>>>> * is stopped on all cores, so long as the text_mutex is not
>>>> locked.
>>>>> */
>>>>> 	if (mutex_is_locked(&text_mutex))
>>>>> 		goto knl_write;
>>>>> -	text_poke((void *)bpt->bpt_addr, bpt->saved_instr,
>>>> BREAK_INSTR_SIZE);
>>>>> +	text_poke_kgdb((void *)bpt->bpt_addr, bpt->saved_instr,
>>>>> +		       BREAK_INSTR_SIZE);
>>>>> 	err = probe_kernel_read(opc, (char *)bpt->bpt_addr,
>>>> BREAK_INSTR_SIZE);
>>>>> if (err || memcmp(opc, bpt->saved_instr, BREAK_INSTR_SIZE))
>>>>> 		goto knl_write;
>>>>> -- 
>>>>> 2.17.1
>>> 
>>> If you are reorganizing this code, please do so in a way that the caller
>>> doesn’t have to worry about whether it should call text_poke_bp() or
>>> text_poke_early(). Right now the caller has to know that, which makes no
>>> sense.
>> 
>> Did you look at "[11/17] x86/jump-label: remove support for custom
>> poker”?
>> 
>> https://lore.kernel.org/patchwork/patch/1032857/
>> 
>> If this is not what you are referring to, please be more concrete.
>> text_poke_early() is still used directly at init time and while modules are
>> loaded, which might not be great, but that is outside the scope of this
>> patch-set.
> 
> I don't think it is out of scope, although that patch is a huge step in the right direction.
> 
> text_poke_{early,bp,...}, however, should be fully internal, that is, static functions, and we should present a single interface, preferably called text_poke(), to the outside world.
> 
> I think we have three subcases:
> 
> 1. Early, UP, or under stop_machine();
> 2. Atomic and aligned;
> 3. Breakpoint.
> 
> My proposed algorithm should remove the need for a fixup, which should help this interface, too.

That’s another reason why such a change might be done later (after your
changes are merged). The main reason is that Rick was kind enough to deal
with the whole patch-set.

> The specific alignment needed for #2 is stated by the hardware people to be not crossing 16 bytes (NOT a cache line) on any CPU we support SMP on and, of course, the store must be possible to do atomically on the specific CPU (note that we *can* do a redundantly large store of existing bytes, which adds flexibility).
> 
> To the best of my knowledge any CPU supporting SSE can do an atomic (for our purposes) aligned 16-byte store via MOVAPS; of course any CPU with cx16 can do it without SSE registers. For older CPUs we may be limited to 8-byte stores (cx8) or even 4-byte stores before we need to use the breakpoint algorithm.

So the last time we had this discussion, I could not be convinced that
hypervisors (e.g., KVM) that do not follow this undocumented behavior would
not break. I also don’t remember an official confirmation of this behavior
on Intel and AMD CPUs.


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 14/17] mm: Make hibernate handle unmapped pages
  2019-01-17 22:16     ` Edgecombe, Rick P
@ 2019-01-17 23:41       ` Pavel Machek
  2019-01-17 23:48         ` Edgecombe, Rick P
  0 siblings, 1 reply; 51+ messages in thread
From: Pavel Machek @ 2019-01-17 23:41 UTC (permalink / raw)
  To: Edgecombe, Rick P
  Cc: linux-kernel, peterz, linux-integrity, ard.biesheuvel, tglx,
	linux-mm, nadav.amit, dave.hansen, Dock, Deneen T,
	linux-security-module, x86, akpm, hpa, kristen, mingo, linux_dti,
	luto, will.deacon, bp, kernel-hardening, rjw

[-- Attachment #1: Type: text/plain, Size: 2179 bytes --]

Hi!

> > > For architectures with CONFIG_ARCH_HAS_SET_ALIAS, pages can be unmapped
> > > briefly on the directmap, even when CONFIG_DEBUG_PAGEALLOC is not
> > > configured.
> > > So this changes kernel_map_pages and kernel_page_present to be defined when
> > > CONFIG_ARCH_HAS_SET_ALIAS is defined as well. It also changes places
> > > (page_alloc.c) where those functions are assumed to only be implemented when
> > > CONFIG_DEBUG_PAGEALLOC is defined.
> > 
> > Which architectures are that?
> > 
> > Should this be merged to the patch where HAS_SET_ALIAS is introduced? We
> > don't want broken hibernation in between....
> Thanks for taking a look. It was added for x86 in patch 13 of this patchset,
> and there was interest expressed in adding it for arm64. If you didn't get
> the whole set and want to see it, let me know and I can send it.

I googled it in the meantime.

Anyway, if something is broken between patches 13 and 14, then they
should be the same patch.

> > > -#ifdef CONFIG_DEBUG_PAGEALLOC
> > >  extern bool _debug_pagealloc_enabled;
> > > -extern void __kernel_map_pages(struct page *page, int numpages, int
> > > enable);
> > >  
> > >  static inline bool debug_pagealloc_enabled(void)
> > >  {
> > > -	return _debug_pagealloc_enabled;
> > > +	return IS_ENABLED(CONFIG_DEBUG_PAGEALLOC) && _debug_pagealloc_enabled;
> > >  }
> > 
> > This will break build AFAICT. _debug_pagealloc_enabled variable does
> > not exist in !CONFIG_DEBUG_PAGEALLOC case.
> > 
> > 									Pavel
> After adding in the CONFIG_ARCH_HAS_SET_ALIAS condition to the ifdefs in this
> area it looked a little hard to read to me, so I moved debug_pagealloc_enabled
> and extern bool _debug_pagealloc_enabled outside to make it easier. I think you
> are right, the actual non-extern variable cannot be there, but the reference
> here gets optimized out in that case.
> 
> Just double checked and it builds for both CONFIG_DEBUG_PAGEALLOC=n and
> CONFIG_DEBUG_PAGEALLOC=y for me.
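
The optimization described above can be illustrated in miniature with a
hypothetical CONFIG_FOO (names here are illustrative, not from the
patch): when the option is off, IS_ENABLED() folds to the constant 0,
the && short-circuits at compile time, and no reference to the undefined
extern is ever emitted, so there is nothing left for the linker to
resolve.

	extern bool _foo_enabled;	/* defined only when CONFIG_FOO=y */

	static inline bool foo_enabled(void)
	{
		/* Compiles to "return false" when CONFIG_FOO is off. */
		return IS_ENABLED(CONFIG_FOO) && _foo_enabled;
	}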

Ok.

Thanks,
									Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 17/17] module: Prevent module removal racing with text_poke()
  2019-01-17 18:07     ` Nadav Amit
@ 2019-01-17 23:44       ` H. Peter Anvin
  2019-01-18  8:23       ` Masami Hiramatsu
  1 sibling, 0 replies; 51+ messages in thread
From: H. Peter Anvin @ 2019-01-17 23:44 UTC (permalink / raw)
  To: Nadav Amit, Masami Hiramatsu
  Cc: Rick Edgecombe, Andy Lutomirski, Ingo Molnar,
	Linux List Kernel Mailing, the arch/x86 maintainers,
	Thomas Gleixner, Borislav Petkov, Dave Hansen, Peter Zijlstra,
	Damian Tometzki, linux-integrity, LSM List, Andrew Morton,
	Kernel Hardening, Linux-MM, Will Deacon, Ard Biesheuvel, kristen,
	deneen.t.dock

On 1/17/19 10:07 AM, Nadav Amit wrote:
>> On Jan 16, 2019, at 11:54 PM, Masami Hiramatsu <mhiramat@kernel.org> wrote:
>>
>> On Wed, 16 Jan 2019 16:32:59 -0800
>> Rick Edgecombe <rick.p.edgecombe@intel.com> wrote:
>>
>>> From: Nadav Amit <namit@vmware.com>
>>>
>>> It seems dangerous to allow code modifications to take place
>>> concurrently with module unloading. So take the text_mutex while the
>>> memory of the module is freed.
>>
>> At that point, since the module itself is removed from the module list,
>> it seems to cause no actual harm. Or do you have any concern?
> 
> So it appears that you are right and all the users of text_poke() and
> text_poke_bp() do install module notifiers, and remove the module from their
> internal data structure when they are done (*). As long as they prevent
> text_poke*() from being called concurrently (e.g., using jump_label_lock()),
> everything is fine.
> 
> Having said that, the question is whether you “trust” text_poke*() users to
> do so. text_poke() description does not say explicitly that you need to
> prevent modules from being removed.
> 
> What do you say?
> 

Please make it explicit.

	-hpa


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 14/17] mm: Make hibernate handle unmapped pages
  2019-01-17 23:41       ` Pavel Machek
@ 2019-01-17 23:48         ` Edgecombe, Rick P
  2019-01-18  8:16           ` Pavel Machek
  0 siblings, 1 reply; 51+ messages in thread
From: Edgecombe, Rick P @ 2019-01-17 23:48 UTC (permalink / raw)
  To: pavel
  Cc: linux-kernel, peterz, ard.biesheuvel, Dock, Deneen T,
	linux-integrity, tglx, linux-mm, nadav.amit, dave.hansen,
	linux-security-module, x86, akpm, hpa, kristen, mingo, linux_dti,
	luto, will.deacon, bp, kernel-hardening, rjw

On Fri, 2019-01-18 at 00:41 +0100, Pavel Machek wrote:
> Hi!
> 
> > > > For architectures with CONFIG_ARCH_HAS_SET_ALIAS, pages can be unmapped
> > > > briefly on the directmap, even when CONFIG_DEBUG_PAGEALLOC is not
> > > > configured. So this changes kernel_map_pages and kernel_page_present to
> > > > be defined when CONFIG_ARCH_HAS_SET_ALIAS is defined as well. It also
> > > > changes places (page_alloc.c) where those functions are assumed to only
> > > > be implemented when CONFIG_DEBUG_PAGEALLOC is defined.
> > > 
> > > Which architectures are that?
> > > 
> > > Should this be merged to the patch where HAS_SET_ALIAS is introduced? We
> > > don't want broken hibernation in between....
> > 
> > Thanks for taking a look. It was added for x86 in patch 13 of this
> > patchset, and there was interest expressed in adding it for arm64. If you
> > didn't get the whole set and want to see it, let me know and I can send it.
> 
> I googled it in the meantime.
> 
> Anyway, if something is broken between patch 13 and 14, then they
> should be the same patch.
Great. It should be ok because the new functions are not used anywhere until
after this patch.

Thanks,

Rick

> > > > -#ifdef CONFIG_DEBUG_PAGEALLOC
> > > >  extern bool _debug_pagealloc_enabled;
> > > > -extern void __kernel_map_pages(struct page *page, int numpages, int
> > > > enable);
> > > >  
> > > >  static inline bool debug_pagealloc_enabled(void)
> > > >  {
> > > > -	return _debug_pagealloc_enabled;
> > > > +	return IS_ENABLED(CONFIG_DEBUG_PAGEALLOC) &&
> > > > _debug_pagealloc_enabled;
> > > >  }
> > > 
> > > This will break build AFAICT. _debug_pagealloc_enabled variable does
> > > not exist in !CONFIG_DEBUG_PAGEALLOC case.
> > > 
> > > 									Pavel
> > 
> > After adding in the CONFIG_ARCH_HAS_SET_ALIAS condition to the ifdefs in
> > this area it looked a little hard to read to me, so I moved
> > debug_pagealloc_enabled and extern bool _debug_pagealloc_enabled outside to
> > make it easier. I think you are right, the actual non-extern variable
> > cannot be there, but the reference here gets optimized out in that case.
> > 
> > Just double checked and it builds for both CONFIG_DEBUG_PAGEALLOC=n and
> > CONFIG_DEBUG_PAGEALLOC=y for me.
> 
> Ok.
> 
> Thanks,
> 									Pavel

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 17/17] module: Prevent module removal racing with text_poke()
  2019-01-17  7:54   ` Masami Hiramatsu
  2019-01-17 18:07     ` Nadav Amit
@ 2019-01-17 23:58     ` H. Peter Anvin
  2019-01-18  1:15       ` Nadav Amit
  1 sibling, 1 reply; 51+ messages in thread
From: H. Peter Anvin @ 2019-01-17 23:58 UTC (permalink / raw)
  To: Masami Hiramatsu, Rick Edgecombe
  Cc: Andy Lutomirski, Ingo Molnar, linux-kernel, x86, Thomas Gleixner,
	Borislav Petkov, Nadav Amit, Dave Hansen, Peter Zijlstra,
	linux_dti, linux-integrity, linux-security-module, akpm,
	kernel-hardening, linux-mm, will.deacon, ard.biesheuvel, kristen,
	deneen.t.dock, Nadav Amit

On 1/16/19 11:54 PM, Masami Hiramatsu wrote:
> On Wed, 16 Jan 2019 16:32:59 -0800
> Rick Edgecombe <rick.p.edgecombe@intel.com> wrote:
> 
>> From: Nadav Amit <namit@vmware.com>
>>
>> It seems dangerous to allow code modifications to take place
>> concurrently with module unloading. So take the text_mutex while the
>> memory of the module is freed.
> 
> At that point, since the module itself is removed from the module list,
> it seems to cause no actual harm. Or do you have any concern?
> 

The issue isn't the module list, but rather when it is safe to free the
contents, so we don't clobber anything. We absolutely need to enforce
that we can't text_poke() something that might have already been freed.

That being said, we *also* really would prefer to enforce that we can't
text_poke() memory that doesn't actually contain code; as far as I can
tell we don't currently do that check.
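
For reference, a minimal sketch of the approach the patch under
discussion takes, as described in its commit message (the exact call
site in kernel/module.c may differ):

	/* Serialize the free against text_poke() via text_mutex. */
	mutex_lock(&text_mutex);
	module_memfree(mod->core_layout.base);
	mutex_unlock(&text_mutex);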

This, again, is a good use for a separate mm context. We can enforce
that that context will only ever contain valid page mappings for actual
code pages.

(Note: in my proposed algorithm, with a separate mm, replace INVLPG with
switching CR3 if we have to do a rollback or roll forward in the
breakpoint handler.)
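
For readers following along, a rough sketch of the separate-mm poking
pattern this series introduces (poking_mm, poking_addr and
use_temporary_mm() are the names used in the patch descriptions; the
final code may differ in detail):

	pte_t *ptep;
	spinlock_t *ptl;
	temp_mm_state_t prev;

	/* Map the target page at a fixed address in the patching mm. */
	ptep = get_locked_pte(poking_mm, poking_addr, &ptl);
	set_pte_at(poking_mm, poking_addr, ptep, mk_pte(page, PAGE_KERNEL));

	prev = use_temporary_mm(poking_mm);	/* load the patching CR3 */
	memcpy((void *)poking_addr, opcode, len);
	unuse_temporary_mm(prev);		/* restore the previous mm */

	/* Tear down the alias; only this CPU ever had it in its TLB. */
	pte_clear(poking_mm, poking_addr, ptep);
	flush_tlb_mm_range(poking_mm, poking_addr, poking_addr + PAGE_SIZE,
			   PAGE_SHIFT, false);
	pte_unmap_unlock(ptep, ptl);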

	-hpa

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 17/17] module: Prevent module removal racing with text_poke()
  2019-01-17 23:58     ` H. Peter Anvin
@ 2019-01-18  1:15       ` Nadav Amit
  2019-01-18 13:32         ` Masami Hiramatsu
  0 siblings, 1 reply; 51+ messages in thread
From: Nadav Amit @ 2019-01-18  1:15 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Masami Hiramatsu, Rick Edgecombe, Andy Lutomirski, Ingo Molnar,
	LKML, X86 ML, Thomas Gleixner, Borislav Petkov, Dave Hansen,
	Peter Zijlstra, Damian Tometzki, linux-integrity, LSM List,
	Andrew Morton, Kernel Hardening, Linux-MM, Will Deacon,
	ard.biesheuvel, kristen, deneen.t.dock

> On Jan 17, 2019, at 3:58 PM, H. Peter Anvin <hpa@zytor.com> wrote:
> 
> On 1/16/19 11:54 PM, Masami Hiramatsu wrote:
>> On Wed, 16 Jan 2019 16:32:59 -0800
>> Rick Edgecombe <rick.p.edgecombe@intel.com> wrote:
>> 
>>> From: Nadav Amit <namit@vmware.com>
>>> 
>>> It seems dangerous to allow code modifications to take place
>>> concurrently with module unloading. So take the text_mutex while the
>>> memory of the module is freed.
>> 
>> At that point, since the module itself is removed from the module list,
>> it seems to cause no actual harm. Or do you have any concern?
> 
> The issue isn't the module list, but rather when it is safe to free the
> contents, so we don't clobber anything. We absolutely need to enforce
> that we can't text_poke() something that might have already been freed.
> 
> That being said, we *also* really would prefer to enforce that we can't
> text_poke() memory that doesn't actually contain code; as far as I can
> tell we don't currently do that check.

Yes, that’s what the mutex was supposed to achieve. It’s not supposed just
to check whether it is a code page, but also that it is the same code
page that you wanted to patch. 

> This, again, is a good use for a separate mm context. We can enforce
> that that context will only ever contain valid page mappings for actual
> code pages.

This will not tell you that you have the *right* code-page. The module
notifiers help to do so, since they synchronize the text poking with
the module removal.

> (Note: in my proposed algorithm, with a separate mm, replace INVLPG with
> switching CR3 if we have to do a rollback or roll forward in the
> breakpoint handler.)

I really need to read your patches more carefully to see what you mean.

Anyhow, so what do you prefer? I’m ok with any of these:
	1. Keep this patch
	2. Remove this patch and change into a comment on text_poke()
	3. Just drop the patch


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 14/17] mm: Make hibernate handle unmapped pages
  2019-01-17 23:48         ` Edgecombe, Rick P
@ 2019-01-18  8:16           ` Pavel Machek
  0 siblings, 0 replies; 51+ messages in thread
From: Pavel Machek @ 2019-01-18  8:16 UTC (permalink / raw)
  To: Edgecombe, Rick P
  Cc: linux-kernel, peterz, ard.biesheuvel, Dock, Deneen T,
	linux-integrity, tglx, linux-mm, nadav.amit, dave.hansen,
	linux-security-module, x86, akpm, hpa, kristen, mingo, linux_dti,
	luto, will.deacon, bp, kernel-hardening, rjw

On Thu 2019-01-17 23:48:30, Edgecombe, Rick P wrote:
> On Fri, 2019-01-18 at 00:41 +0100, Pavel Machek wrote:
> > Hi!
> > 
> > > > > For architectures with CONFIG_ARCH_HAS_SET_ALIAS, pages can be unmapped
> > > > > briefly on the directmap, even when CONFIG_DEBUG_PAGEALLOC is not
> > > > > configured. So this changes kernel_map_pages and kernel_page_present to
> > > > > be defined when CONFIG_ARCH_HAS_SET_ALIAS is defined as well. It also
> > > > > changes places (page_alloc.c) where those functions are assumed to only
> > > > > be implemented when CONFIG_DEBUG_PAGEALLOC is defined.
> > > > 
> > > > Which architectures are that?
> > > > 
> > > > Should this be merged to the patch where HAS_SET_ALIAS is introduced? We
> > > > don't want broken hibernation in between....
> > > 
> > > Thanks for taking a look. It was added for x86 in patch 13 of this
> > > patchset, and there was interest expressed in adding it for arm64. If you
> > > didn't get the whole set and want to see it, let me know and I can send it.
> > 
> > I googled it in the meantime.
> > 
> > Anyway, if something is broken between patch 13 and 14, then they
> > should be the same patch.
> Great. It should be ok because the new functions are not used anywhere until
> after this patch.

Ok, that makes sense.

Acked-by: Pavel Machek <pavel@ucw.cz>
									Pavel

-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 17/17] module: Prevent module removal racing with text_poke()
  2019-01-17 18:07     ` Nadav Amit
  2019-01-17 23:44       ` H. Peter Anvin
@ 2019-01-18  8:23       ` Masami Hiramatsu
  1 sibling, 0 replies; 51+ messages in thread
From: Masami Hiramatsu @ 2019-01-18  8:23 UTC (permalink / raw)
  To: Nadav Amit
  Cc: Rick Edgecombe, Andy Lutomirski, Ingo Molnar,
	Linux List Kernel Mailing, the arch/x86 maintainers,
	H. Peter Anvin, Thomas Gleixner, Borislav Petkov, Dave Hansen,
	Peter Zijlstra, Damian Tometzki, linux-integrity, LSM List,
	Andrew Morton, Kernel Hardening, Linux-MM, Will Deacon,
	Ard Biesheuvel, kristen, deneen.t.dock

On Thu, 17 Jan 2019 18:07:03 +0000
Nadav Amit <namit@vmware.com> wrote:

> > On Jan 16, 2019, at 11:54 PM, Masami Hiramatsu <mhiramat@kernel.org> wrote:
> > 
> > On Wed, 16 Jan 2019 16:32:59 -0800
> > Rick Edgecombe <rick.p.edgecombe@intel.com> wrote:
> > 
> >> From: Nadav Amit <namit@vmware.com>
> >> 
> >> It seems dangerous to allow code modifications to take place
> >> concurrently with module unloading. So take the text_mutex while the
> >> memory of the module is freed.
> > 
> > At that point, since the module itself is removed from the module list,
> > it seems to cause no actual harm. Or do you have any concern?
> 
> So it appears that you are right and all the users of text_poke() and
> text_poke_bp() do install module notifiers, and remove the module from their
> internal data structure when they are done (*). As long as they prevent
> text_poke*() from being called concurrently (e.g., using jump_label_lock()),
> everything is fine.
> 
> Having said that, the question is whether you “trust” text_poke*() users to
> do so. text_poke() description does not say explicitly that you need to
> prevent modules from being removed.
> 
> What do you say?

I agree, but in that case, this is just fool-proofing. I think we should
prevent this kind of bug by review, and should note it in a comment on
text_poke(), instead of locking text_mutex.

What I thought was that even if we take text_mutex here, such a user can
modify the (released) module code right after we exit this section.

Maybe we'd better make text_poke() smarter?

> (*) I am not sure about kgdb, but it probably does not matter much

I think we don't need to care about kgdb. It is a tool which should be able
to shoot you in the foot, and we cannot prevent that. Only an expert can
avoid it. :)

Thank you,

-- 
Masami Hiramatsu <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 17/17] module: Prevent module removal racing with text_poke()
  2019-01-18  1:15       ` Nadav Amit
@ 2019-01-18 13:32         ` Masami Hiramatsu
  0 siblings, 0 replies; 51+ messages in thread
From: Masami Hiramatsu @ 2019-01-18 13:32 UTC (permalink / raw)
  To: Nadav Amit
  Cc: H. Peter Anvin, Masami Hiramatsu, Rick Edgecombe,
	Andy Lutomirski, Ingo Molnar, LKML, X86 ML, Thomas Gleixner,
	Borislav Petkov, Dave Hansen, Peter Zijlstra, Damian Tometzki,
	linux-integrity, LSM List, Andrew Morton, Kernel Hardening,
	Linux-MM, Will Deacon, ard.biesheuvel, kristen, deneen.t.dock

On Thu, 17 Jan 2019 17:15:27 -0800
Nadav Amit <nadav.amit@gmail.com> wrote:

> > On Jan 17, 2019, at 3:58 PM, H. Peter Anvin <hpa@zytor.com> wrote:
> > 
> > On 1/16/19 11:54 PM, Masami Hiramatsu wrote:
> >> On Wed, 16 Jan 2019 16:32:59 -0800
> >> Rick Edgecombe <rick.p.edgecombe@intel.com> wrote:
> >> 
> >>> From: Nadav Amit <namit@vmware.com>
> >>> 
> >>> It seems dangerous to allow code modifications to take place
> >>> concurrently with module unloading. So take the text_mutex while the
> >>> memory of the module is freed.
> >> 
> >> At that point, since the module itself is removed from the module list,
> >> it seems to cause no actual harm. Or do you have any concern?
> > 
> > The issue isn't the module list, but rather when it is safe to free the
> > contents, so we don't clobber anything. We absolutely need to enforce
> > that we can't text_poke() something that might have already been freed.
> > 
> > That being said, we *also* really would prefer to enforce that we can't
> > text_poke() memory that doesn't actually contain code; as far as I can
> > tell we don't currently do that check.
> 
> Yes, that’s what the mutex was supposed to achieve. It’s not supposed just
> to check whether it is a code page, but also that it is the same code
> page that you wanted to patch. 
> 
> > This, again, is a good use for a separate mm context. We can enforce
> > that that context will only ever contain valid page mappings for actual
> > code pages.
> 
> This will not tell you that you have the *right* code-page. The module
> notifiers help to do so, since they synchronize the text poking with
> the module removal.
> 
> > (Note: in my proposed algorithm, with a separate mm, replace INVLPG with
> > switching CR3 if we have to do a rollback or roll forward in the
> > breakpoint handler.)
> 
> I really need to read your patches more carefully to see what you mean.
> 
> Anyhow, so what do you prefer? I’m ok with any of these:
> 	1. Keep this patch
> 	2. Remove this patch and change into a comment on text_poke()
> 	3. Just drop the patch

I would prefer 2. so at least we should add a comment to text_poke().
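
Illustrative wording for such a comment (a sketch only, not taken from a
posted patch):

	/*
	 * Note: text_poke() does not pin the memory it patches. Callers
	 * must ensure the target cannot be freed concurrently, e.g. by
	 * holding a module reference, or by flushing their pending pokes
	 * from a module notifier before the module's text is released.
	 */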

Thank you,


-- 
Masami Hiramatsu <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 01/17] Fix "x86/alternatives: Lockdep-enforce text_mutex in text_poke*()"
  2019-01-17  0:32 ` [PATCH 01/17] Fix "x86/alternatives: Lockdep-enforce text_mutex in text_poke*()" Rick Edgecombe
  2019-01-17  6:47   ` Masami Hiramatsu
@ 2019-01-25  9:30   ` Borislav Petkov
  2019-01-25 18:28     ` Nadav Amit
  1 sibling, 1 reply; 51+ messages in thread
From: Borislav Petkov @ 2019-01-25  9:30 UTC (permalink / raw)
  To: Rick Edgecombe
  Cc: Andy Lutomirski, Ingo Molnar, linux-kernel, x86, hpa,
	Thomas Gleixner, Nadav Amit, Dave Hansen, Peter Zijlstra,
	linux_dti, linux-integrity, linux-security-module, akpm,
	kernel-hardening, linux-mm, will.deacon, ard.biesheuvel, kristen,
	deneen.t.dock, Nadav Amit, Kees Cook, Dave Hansen,
	Masami Hiramatsu

On Wed, Jan 16, 2019 at 04:32:43PM -0800, Rick Edgecombe wrote:
> From: Nadav Amit <namit@vmware.com>
> 
> text_mutex is currently expected to be held before text_poke() is
> called, but kgdb does not take the mutex, and instead *supposedly*
> ensures the lock is not taken and will not be acquired by any other core
> while text_poke() is running.
> 
> The reason for the "supposedly" comment is that it is not entirely clear
> that this would be the case if gdb_do_roundup is zero.

I guess that variable name is "kgdb_do_roundup"?

> This patch creates two wrapper functions, text_poke() and

Avoid having "This patch" or "This commit" in the commit message. It is
tautologically useless.

Also, do

$ git grep 'This patch' Documentation/process

for more details.

> text_poke_kgdb() which do or do not run the lockdep assertion
> respectively.
> 
> While we are at it, change the return code of text_poke() to something
> meaningful. One day, callers might actually respect it and the existing
> BUG_ON() when patching fails could be removed. For kgdb, the return
> value can actually be used.
> 
> Cc: Andy Lutomirski <luto@kernel.org>
> Cc: Kees Cook <keescook@chromium.org>
> Cc: Dave Hansen <dave.hansen@intel.com>
> Cc: Masami Hiramatsu <mhiramat@kernel.org>
> Fixes: 9222f606506c ("x86/alternatives: Lockdep-enforce text_mutex in text_poke*()")
> Suggested-by: Peter Zijlstra <peterz@infradead.org>
> Acked-by: Jiri Kosina <jkosina@suse.cz>
> Signed-off-by: Nadav Amit <namit@vmware.com>
> Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
> ---
>  arch/x86/include/asm/text-patching.h |  1 +
>  arch/x86/kernel/alternative.c        | 52 ++++++++++++++++++++--------
>  arch/x86/kernel/kgdb.c               | 11 +++---
>  3 files changed, 45 insertions(+), 19 deletions(-)

...

> +/**
> + * text_poke_kgdb - Update instructions on a live kernel by kgdb
> + * @addr: address to modify
> + * @opcode: source of the copy
> + * @len: length to copy
> + *
> + * Only atomic text poke/set should be allowed when not doing early patching.
> + * It means the size must be writable atomically and the address must be aligned
> + * in a way that permits an atomic write. It also makes sure we fit on a single
> + * page.
> + *
> + * Context: should only be used by kgdb, which ensures no other core is running,
> + *	    despite the fact it does not hold the text_mutex.
> + */
> +void *text_poke_kgdb(void *addr, const void *opcode, size_t len)

text_poke_unlocked() I guess. I don't think kgdb is that special that it
needs its own function flavor.

-- 
Regards/Gruss,
    Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 01/17] Fix "x86/alternatives: Lockdep-enforce text_mutex in text_poke*()"
  2019-01-25  9:30   ` Borislav Petkov
@ 2019-01-25 18:28     ` Nadav Amit
  0 siblings, 0 replies; 51+ messages in thread
From: Nadav Amit @ 2019-01-25 18:28 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: Rick Edgecombe, Andy Lutomirski, Ingo Molnar, LKML, X86 ML,
	H. Peter Anvin, Thomas Gleixner, Dave Hansen, Peter Zijlstra,
	Damian Tometzki, linux-integrity, LSM List, Andrew Morton,
	Kernel Hardening, Linux-MM, Will Deacon, Ard Biesheuvel, kristen,
	deneen.t.dock, Kees Cook, Dave Hansen, Masami Hiramatsu

> On Jan 25, 2019, at 1:30 AM, Borislav Petkov <bp@alien8.de> wrote:
> 
> On Wed, Jan 16, 2019 at 04:32:43PM -0800, Rick Edgecombe wrote:
>> From: Nadav Amit <namit@vmware.com>
>> 
>> text_mutex is currently expected to be held before text_poke() is
>> called, but kgdb does not take the mutex, and instead *supposedly*
>> ensures the lock is not taken and will not be acquired by any other core
>> while text_poke() is running.
>> 
>> The reason for the "supposedly" comment is that it is not entirely clear
>> that this would be the case if gdb_do_roundup is zero.
> 
> I guess that variable name is "kgdb_do_roundup"?

Yes. Will fix.

> 
>> This patch creates two wrapper functions, text_poke() and
> 
> Avoid having "This patch" or "This commit" in the commit message. It is
> tautologically useless.
> 
> Also, do
> 
> $ git grep 'This patch' Documentation/process
> 
> for more details.

Ok.

>> 
>> +void *text_poke_kgdb(void *addr, const void *opcode, size_t len)
> 
> text_poke_unlocked() I guess. I don't think kgdb is that special that it
> needs its own function flavor.

Tglx suggested this naming to prevent anyone from misusing text_poke_kgdb().
This is a very specific use-case that nobody else should need.

Regards,
Nadav

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 08/17] x86/ftrace: set trampoline pages as executable
  2019-01-17  0:32 ` [PATCH 08/17] x86/ftrace: set trampoline pages as executable Rick Edgecombe
@ 2019-02-06 16:22   ` Steven Rostedt
  2019-02-06 17:33     ` Nadav Amit
  0 siblings, 1 reply; 51+ messages in thread
From: Steven Rostedt @ 2019-02-06 16:22 UTC (permalink / raw)
  To: Rick Edgecombe
  Cc: Andy Lutomirski, Ingo Molnar, linux-kernel, x86, hpa,
	Thomas Gleixner, Borislav Petkov, Nadav Amit, Dave Hansen,
	Peter Zijlstra, linux_dti, linux-integrity,
	linux-security-module, akpm, kernel-hardening, linux-mm,
	will.deacon, ard.biesheuvel, kristen, deneen.t.dock, Nadav Amit

On Wed, 16 Jan 2019 16:32:50 -0800
Rick Edgecombe <rick.p.edgecombe@intel.com> wrote:

> From: Nadav Amit <namit@vmware.com>
> 
> Since module_alloc() will not set the pages as executable soon, we need
> to do so for ftrace trampoline pages after they are allocated.
> 
> For the time being, we do not change ftrace to use the text_poke()
> interface. As a result, ftrace still breaks W^X.
> 
> Cc: Steven Rostedt <rostedt@goodmis.org>
> Signed-off-by: Nadav Amit <namit@vmware.com>
> Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
> ---
>  arch/x86/kernel/ftrace.c | 9 +++++++++
>  1 file changed, 9 insertions(+)
> 
> diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
> index 8257a59704ae..eb4a1937e72c 100644
> --- a/arch/x86/kernel/ftrace.c
> +++ b/arch/x86/kernel/ftrace.c
> @@ -742,6 +742,7 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
>  	unsigned long end_offset;
>  	unsigned long op_offset;
>  	unsigned long offset;
> +	unsigned long npages;
>  	unsigned long size;
>  	unsigned long retq;
>  	unsigned long *ptr;
> @@ -774,6 +775,7 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
>  		return 0;
>  
>  	*tramp_size = size + RET_SIZE + sizeof(void *);
> +	npages = DIV_ROUND_UP(*tramp_size, PAGE_SIZE);
>  
>  	/* Copy ftrace_caller onto the trampoline memory */
>  	ret = probe_kernel_read(trampoline, (void *)start_offset, size);
> @@ -818,6 +820,13 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
>  	/* ALLOC_TRAMP flags lets us know we created it */
>  	ops->flags |= FTRACE_OPS_FL_ALLOC_TRAMP;
>  
> +	/*
> +	 * Module allocation needs to be completed by making the page
> +	 * executable. The page is still writable, which is a security hazard,
> +	 * but anyhow ftrace breaks W^X completely.
> +	 */

Perhaps we should set the page to non writable after the page is
updated? And set it to writable only when we need to update it.
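
A sketch of that suggestion (assuming the usual x86 set_memory_*()
semantics; this is not part of the patch):

	/* Once the trampoline is fully populated, make it RO+X ... */
	set_memory_x((unsigned long)trampoline, npages);
	set_memory_ro((unsigned long)trampoline, npages);

	/* ... and lift RO only for the duration of a later update. */
	set_memory_rw((unsigned long)trampoline, npages);
	/* patch the trampoline */
	set_memory_ro((unsigned long)trampoline, npages);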

As for this patch:

Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>

-- Steve

> +	set_memory_x((unsigned long)trampoline, npages);
> +
>  	return (unsigned long)trampoline;
>  fail:
>  	tramp_free(trampoline, *tramp_size);


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 16/17] Plug in new special vfree flag
  2019-01-17  0:32 ` [PATCH 16/17] Plug in new special vfree flag Rick Edgecombe
@ 2019-02-06 16:23   ` Steven Rostedt
  2019-02-07 17:33     ` Edgecombe, Rick P
  0 siblings, 1 reply; 51+ messages in thread
From: Steven Rostedt @ 2019-02-06 16:23 UTC (permalink / raw)
  To: Rick Edgecombe
  Cc: Andy Lutomirski, Ingo Molnar, linux-kernel, x86, hpa,
	Thomas Gleixner, Borislav Petkov, Nadav Amit, Dave Hansen,
	Peter Zijlstra, linux_dti, linux-integrity,
	linux-security-module, akpm, kernel-hardening, linux-mm,
	will.deacon, ard.biesheuvel, kristen, deneen.t.dock,
	Rusty Russell, Masami Hiramatsu, Daniel Borkmann,
	Alexei Starovoitov, Jessica Yu, Paul E . McKenney

On Wed, 16 Jan 2019 16:32:58 -0800
Rick Edgecombe <rick.p.edgecombe@intel.com> wrote:

> Add a new flag for handling freeing of special permissioned memory in vmalloc,
> and remove places where memory was set RW before freeing, which is no longer
> needed.
> 
> In kprobes, bpf and ftrace this just adds the flag, and removes the now
> unneeded set_memory_ calls before calling vfree.
> 
> In modules, the freeing of init sections is moved to a work queue, since
> freeing of RO memory is not supported by vmalloc in interrupt context.
> Instead of call_rcu, it now uses synchronize_rcu() in the work queue.
> 
> Cc: Rusty Russell <rusty@rustcorp.com.au>
> Cc: Masami Hiramatsu <mhiramat@kernel.org>
> Cc: Daniel Borkmann <daniel@iogearbox.net>
> Cc: Alexei Starovoitov <ast@kernel.org>
> Cc: Jessica Yu <jeyu@kernel.org>
> Cc: Steven Rostedt <rostedt@goodmis.org>
> Cc: Paul E. McKenney <paulmck@linux.ibm.com>
> Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
> ---
>  arch/x86/kernel/ftrace.c       |  6 +--

For the ftrace code.

Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>

-- Steve

>  arch/x86/kernel/kprobes/core.c |  7 +---
>  include/linux/filter.h         | 16 ++-----
>  kernel/bpf/core.c              |  1 -
>  kernel/module.c                | 77 +++++++++++++++++-----------------
>  5 files changed, 45 insertions(+), 62 deletions(-)
>
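
A rough sketch of the deferred-free pattern the quoted commit message
describes (do_free_init() and init_free_work are illustrative names, not
necessarily the patch's):

	static void *init_region_to_free;

	static void do_free_init(struct work_struct *work)
	{
		/* Process context: sleeping is fine, unlike call_rcu(). */
		synchronize_rcu();
		/* vfree of RO memory is allowed here, not in an interrupt. */
		module_memfree(init_region_to_free);
	}
	static DECLARE_WORK(init_free_work, do_free_init);

	/* From the path that may run in atomic context: */
	schedule_work(&init_free_work);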

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 08/17] x86/ftrace: set trampoline pages as executable
  2019-02-06 16:22   ` Steven Rostedt
@ 2019-02-06 17:33     ` Nadav Amit
  2019-02-06 17:41       ` Steven Rostedt
  0 siblings, 1 reply; 51+ messages in thread
From: Nadav Amit @ 2019-02-06 17:33 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Rick Edgecombe, Andy Lutomirski, Ingo Molnar, LKML, X86 ML,
	H. Peter Anvin, Thomas Gleixner, Borislav Petkov, Dave Hansen,
	Peter Zijlstra, Damian Tometzki, linux-integrity, LSM List,
	Andrew Morton, Kernel Hardening, Linux-MM, Will Deacon,
	Ard Biesheuvel, Kristen Carlson Accardi, deneen.t.dock

> On Feb 6, 2019, at 8:22 AM, Steven Rostedt <rostedt@goodmis.org> wrote:
> 
> On Wed, 16 Jan 2019 16:32:50 -0800
> Rick Edgecombe <rick.p.edgecombe@intel.com> wrote:
> 
>> From: Nadav Amit <namit@vmware.com>
>> 
>> Since module_alloc() will not set the pages as executable soon, we need
>> to do so for ftrace trampoline pages after they are allocated.
>> 
>> For the time being, we do not change ftrace to use the text_poke()
>> interface. As a result, ftrace still breaks W^X.
>> 
>> Cc: Steven Rostedt <rostedt@goodmis.org>
>> Signed-off-by: Nadav Amit <namit@vmware.com>
>> Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
>> ---
>> arch/x86/kernel/ftrace.c | 9 +++++++++
>> 1 file changed, 9 insertions(+)
>> 
>> diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
>> index 8257a59704ae..eb4a1937e72c 100644
>> --- a/arch/x86/kernel/ftrace.c
>> +++ b/arch/x86/kernel/ftrace.c
>> @@ -742,6 +742,7 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
>> 	unsigned long end_offset;
>> 	unsigned long op_offset;
>> 	unsigned long offset;
>> +	unsigned long npages;
>> 	unsigned long size;
>> 	unsigned long retq;
>> 	unsigned long *ptr;
>> @@ -774,6 +775,7 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
>> 		return 0;
>> 
>> 	*tramp_size = size + RET_SIZE + sizeof(void *);
>> +	npages = DIV_ROUND_UP(*tramp_size, PAGE_SIZE);
>> 
>> 	/* Copy ftrace_caller onto the trampoline memory */
>> 	ret = probe_kernel_read(trampoline, (void *)start_offset, size);
>> @@ -818,6 +820,13 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
>> 	/* ALLOC_TRAMP flags lets us know we created it */
>> 	ops->flags |= FTRACE_OPS_FL_ALLOC_TRAMP;
>> 
>> +	/*
>> +	 * Module allocation needs to be completed by making the page
>> +	 * executable. The page is still writable, which is a security hazard,
>> +	 * but anyhow ftrace breaks W^X completely.
>> +	 */
> 
> Perhaps we should set the page to non writable after the page is
> updated? And set it to writable only when we need to update it.

You remember that I sent you a patch that changed all these writes into
text_poke() and you said that I should defer it until this series is merged?

> As for this patch:
> 
> Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>

Thanks!


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 08/17] x86/ftrace: set trampoline pages as executable
  2019-02-06 17:33     ` Nadav Amit
@ 2019-02-06 17:41       ` Steven Rostedt
  0 siblings, 0 replies; 51+ messages in thread
From: Steven Rostedt @ 2019-02-06 17:41 UTC (permalink / raw)
  To: Nadav Amit
  Cc: Rick Edgecombe, Andy Lutomirski, Ingo Molnar, LKML, X86 ML,
	H. Peter Anvin, Thomas Gleixner, Borislav Petkov, Dave Hansen,
	Peter Zijlstra, Damian Tometzki, linux-integrity, LSM List,
	Andrew Morton, Kernel Hardening, Linux-MM, Will Deacon,
	Ard Biesheuvel, Kristen Carlson Accardi, deneen.t.dock

On Wed, 6 Feb 2019 09:33:35 -0800
Nadav Amit <nadav.amit@gmail.com> wrote:


> >> 	/* Copy ftrace_caller onto the trampoline memory */
> >> 	ret = probe_kernel_read(trampoline, (void *)start_offset, size);
> >> @@ -818,6 +820,13 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
> >> 	/* ALLOC_TRAMP flags lets us know we created it */
> >> 	ops->flags |= FTRACE_OPS_FL_ALLOC_TRAMP;
> >> 
> >> +	/*
> >> +	 * Module allocation needs to be completed by making the page
> >> +	 * executable. The page is still writable, which is a security hazard,
> >> +	 * but anyhow ftrace breaks W^X completely.
> >> +	 */  
> > 
> > Perhaps we should set the page to non writable after the page is
> > updated? And set it to writable only when we need to update it.  
> 
> You remember that I sent you a patch that changed all these writes into
> text_poke() and you said that I should defer it until this series is merged?
> 

And I notice that it is set to RO after this call anyway.

-- Steve

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 16/17] Plug in new special vfree flag
  2019-02-06 16:23   ` Steven Rostedt
@ 2019-02-07 17:33     ` Edgecombe, Rick P
  2019-02-07 17:49       ` Steven Rostedt
  0 siblings, 1 reply; 51+ messages in thread
From: Edgecombe, Rick P @ 2019-02-07 17:33 UTC (permalink / raw)
  To: rostedt
  Cc: linux-kernel, peterz, linux-integrity, ard.biesheuvel, daniel,
	jeyu, tglx, linux-mm, nadav.amit, dave.hansen, Dock, Deneen T,
	rusty, linux-security-module, x86, akpm, hpa, kristen, mingo,
	linux_dti, luto, will.deacon, bp, kernel-hardening, mhiramat,
	ast, paulmck

On Wed, 2019-02-06 at 11:23 -0500, Steven Rostedt wrote:
> On Wed, 16 Jan 2019 16:32:58 -0800
> Rick Edgecombe <rick.p.edgecombe@intel.com> wrote:
> 
> > Add a new flag for handling freeing of special permissioned memory in vmalloc,
> > and remove places where memory was set RW before freeing, which is no longer
> > needed.
> > 
> > In kprobes, bpf and ftrace this just adds the flag, and removes the now
> > unneeded set_memory_ calls before calling vfree.
> > 
> > In modules, the freeing of init sections is moved to a work queue, since
> > freeing of RO memory is not supported by vmalloc in interrupt context.
> > Instead of call_rcu, it now uses synchronize_rcu() in the work queue.
> > 
> > Cc: Rusty Russell <rusty@rustcorp.com.au>
> > Cc: Masami Hiramatsu <mhiramat@kernel.org>
> > Cc: Daniel Borkmann <daniel@iogearbox.net>
> > Cc: Alexei Starovoitov <ast@kernel.org>
> > Cc: Jessica Yu <jeyu@kernel.org>
> > Cc: Steven Rostedt <rostedt@goodmis.org>
> > Cc: Paul E. McKenney <paulmck@linux.ibm.com>
> > Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
> > ---
> >  arch/x86/kernel/ftrace.c       |  6 +--
> 
> For the ftrace code.
> 
> Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
> 
> -- Steve
> 
Thanks!

Rick
> >  arch/x86/kernel/kprobes/core.c |  7 +---
> >  include/linux/filter.h         | 16 ++-----
> >  kernel/bpf/core.c              |  1 -
> >  kernel/module.c                | 77 +++++++++++++++++-----------------
> >  5 files changed, 45 insertions(+), 62 deletions(-)
> > 

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 16/17] Plug in new special vfree flag
  2019-02-07 17:33     ` Edgecombe, Rick P
@ 2019-02-07 17:49       ` Steven Rostedt
  2019-02-07 18:20         ` Edgecombe, Rick P
  0 siblings, 1 reply; 51+ messages in thread
From: Steven Rostedt @ 2019-02-07 17:49 UTC (permalink / raw)
  To: Edgecombe, Rick P
  Cc: linux-kernel, peterz, linux-integrity, ard.biesheuvel, daniel,
	jeyu, tglx, linux-mm, nadav.amit, dave.hansen, Dock, Deneen T,
	rusty, linux-security-module, x86, akpm, hpa, kristen, mingo,
	linux_dti, luto, will.deacon, bp, kernel-hardening, mhiramat,
	ast, paulmck

On Thu, 7 Feb 2019 17:33:37 +0000
"Edgecombe, Rick P" <rick.p.edgecombe@intel.com> wrote:


> > > ---
> > >  arch/x86/kernel/ftrace.c       |  6 +--  
> > 
> > For the ftrace code.
> > 
> > Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
> > 
> > -- Steve
> >   
> Thanks!

I just noticed that the subject is incorrect; it is missing the
"subsystem:" part. See Documentation/process/submitting-patches.rst

-- Steve

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 16/17] Plug in new special vfree flag
  2019-02-07 17:49       ` Steven Rostedt
@ 2019-02-07 18:20         ` Edgecombe, Rick P
  0 siblings, 0 replies; 51+ messages in thread
From: Edgecombe, Rick P @ 2019-02-07 18:20 UTC (permalink / raw)
  To: rostedt
  Cc: linux-kernel, daniel, peterz, ard.biesheuvel, linux-integrity,
	jeyu, linux-mm, tglx, nadav.amit, dave.hansen, Dock, Deneen T,
	rusty, linux-security-module, x86, akpm, hpa, kristen, mingo,
	linux_dti, luto, will.deacon, bp, kernel-hardening, mhiramat,
	ast, paulmck

On Thu, 2019-02-07 at 12:49 -0500, Steven Rostedt wrote:
> On Thu, 7 Feb 2019 17:33:37 +0000
> "Edgecombe, Rick P" <rick.p.edgecombe@intel.com> wrote:
> 
> 
> > > > ---
> > > >  arch/x86/kernel/ftrace.c       |  6 +--  
> > > 
> > > For the ftrace code.
> > > 
> > > Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
> > > 
> > > -- Steve
> > >   
> > 
> > Thanks!
> 
> I just noticed that the subject is incorrect; it is missing the
> "subsystem:" part. See Documentation/process/submitting-patches.rst
> 
> -- Steve
Sorry about that. There is actually a v2 of this patchset out there, where there
are no code changes for this patch, but it is split into separate patches for
each subsystem. It has "x86/ftrace: " for the ftrace patch.

Rick

^ permalink raw reply	[flat|nested] 51+ messages in thread

end of thread, back to index

Thread overview: 51+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-01-17  0:32 [PATCH 00/17] Merge text_poke fixes and executable lockdowns Rick Edgecombe
2019-01-17  0:32 ` [PATCH 01/17] Fix "x86/alternatives: Lockdep-enforce text_mutex in text_poke*()" Rick Edgecombe
2019-01-17  6:47   ` Masami Hiramatsu
2019-01-17 21:15     ` hpa
2019-01-17 22:39       ` Nadav Amit
2019-01-17 22:59         ` hpa
2019-01-17 23:14           ` Nadav Amit
2019-01-25  9:30   ` Borislav Petkov
2019-01-25 18:28     ` Nadav Amit
2019-01-17  0:32 ` [PATCH 02/17] x86/jump_label: Use text_poke_early() during early init Rick Edgecombe
2019-01-17  0:32 ` [PATCH 03/17] x86/mm: temporary mm struct Rick Edgecombe
2019-01-17  0:32 ` [PATCH 04/17] fork: provide a function for copying init_mm Rick Edgecombe
2019-01-17  0:32 ` [PATCH 05/17] x86/alternative: initializing temporary mm for patching Rick Edgecombe
2019-01-17  0:32 ` [PATCH 06/17] x86/alternative: use temporary mm for text poking Rick Edgecombe
2019-01-17 20:27   ` Andy Lutomirski
2019-01-17 20:47     ` Andy Lutomirski
2019-01-17 21:43       ` Nadav Amit
2019-01-17 22:29         ` Nadav Amit
2019-01-17 22:31         ` hpa
2019-01-17  0:32 ` [PATCH 07/17] x86/kgdb: avoid redundant comparison of patched code Rick Edgecombe
2019-01-17  0:32 ` [PATCH 08/17] x86/ftrace: set trampoline pages as executable Rick Edgecombe
2019-02-06 16:22   ` Steven Rostedt
2019-02-06 17:33     ` Nadav Amit
2019-02-06 17:41       ` Steven Rostedt
2019-01-17  0:32 ` [PATCH 09/17] x86/kprobes: Instruction pages initialization enhancements Rick Edgecombe
2019-01-17  6:51   ` Masami Hiramatsu
2019-01-17  0:32 ` [PATCH 10/17] x86: avoid W^X being broken during modules loading Rick Edgecombe
2019-01-17  0:32 ` [PATCH 11/17] x86/jump-label: remove support for custom poker Rick Edgecombe
2019-01-17  0:32 ` [PATCH 12/17] x86/alternative: Remove the return value of text_poke_*() Rick Edgecombe
2019-01-17  0:32 ` [PATCH 13/17] Add set_alias_ function and x86 implementation Rick Edgecombe
2019-01-17  0:32 ` [PATCH 14/17] mm: Make hibernate handle unmapped pages Rick Edgecombe
2019-01-17  9:39   ` Pavel Machek
2019-01-17 22:16     ` Edgecombe, Rick P
2019-01-17 23:41       ` Pavel Machek
2019-01-17 23:48         ` Edgecombe, Rick P
2019-01-18  8:16           ` Pavel Machek
2019-01-17  0:32 ` [PATCH 15/17] vmalloc: New flags for safe vfree on special perms Rick Edgecombe
2019-01-17  0:32 ` [PATCH 16/17] Plug in new special vfree flag Rick Edgecombe
2019-02-06 16:23   ` Steven Rostedt
2019-02-07 17:33     ` Edgecombe, Rick P
2019-02-07 17:49       ` Steven Rostedt
2019-02-07 18:20         ` Edgecombe, Rick P
2019-01-17  0:32 ` [PATCH 17/17] module: Prevent module removal racing with text_poke() Rick Edgecombe
2019-01-17  7:54   ` Masami Hiramatsu
2019-01-17 18:07     ` Nadav Amit
2019-01-17 23:44       ` H. Peter Anvin
2019-01-18  8:23       ` Masami Hiramatsu
2019-01-17 23:58     ` H. Peter Anvin
2019-01-18  1:15       ` Nadav Amit
2019-01-18 13:32         ` Masami Hiramatsu
2019-01-17 13:21 ` [PATCH 00/17] Merge text_poke fixes and executable lockdowns Peter Zijlstra

Linux-Integrity Archive on lore.kernel.org

Archives are clonable:
	git clone --mirror https://lore.kernel.org/linux-integrity/0 linux-integrity/git/0.git

	# If you have public-inbox 1.1+ installed, you may
	# initialize and index your mirror using the following commands:
	public-inbox-init -V2 linux-integrity linux-integrity/ https://lore.kernel.org/linux-integrity \
		linux-integrity@vger.kernel.org linux-integrity@archiver.kernel.org
	public-inbox-index linux-integrity


Newsgroup available over NNTP:
	nntp://nntp.lore.kernel.org/org.kernel.vger.linux-integrity


AGPL code for this site: git clone https://public-inbox.org/ public-inbox