* [PATCH v3 00/20] Merge text_poke fixes and executable lockdowns
@ 2019-02-21 23:44 Rick Edgecombe
  2019-02-21 23:44 ` [PATCH v3 01/20] x86/jump_label: Use text_poke_early() during early init Rick Edgecombe
                   ` (20 more replies)
  0 siblings, 21 replies; 27+ messages in thread
From: Rick Edgecombe @ 2019-02-21 23:44 UTC (permalink / raw)
  To: Andy Lutomirski, Ingo Molnar
  Cc: linux-kernel, x86, hpa, Thomas Gleixner, Borislav Petkov,
	Nadav Amit, Dave Hansen, Peter Zijlstra, linux_dti,
	linux-integrity, linux-security-module, akpm, kernel-hardening,
	linux-mm, will.deacon, ard.biesheuvel, kristen, deneen.t.dock,
	Rick Edgecombe

This patchset addresses several overlapping issues around stale TLB entries
and W^X violations. It combines the "x86/alternative: text_poke() enhancements
v7" [1] and "Don’t leave executable TLB entries to freed pages v2" [2]
patchsets, which were conflicting.

The related issues that this series fixes:
1. Fixmap PTEs that are used for patching are accessible from other
   cores and might be exploited. They are not even flushed from the
   TLB on remote cores, so the risk is even higher. Address this
   issue by introducing a temporary mm that is only used during
   patching. Unfortunately, due to init ordering, fixmap is still used
   during boot-time patching. Future patches can eliminate the need for
   it.
2. Missing lockdep assertion to ensure text_mutex is taken. It is
   actually not always taken, so fix the instances that were found not
   to take the lock (although they should be safe even without taking
   the lock).
3. module_alloc() returning memory that is RWX until a module is finished
   loading.
4. Sometimes when memory is freed via the module subsystem, a TLB entry
   with executable permissions can remain pointing to a freed page. If
   the page is re-used to back an address that will receive data from
   userspace, user data can end up mapped as executable in the kernel,
   as sketched below. The root cause is that vfree() flushes the TLB
   lazily, but does not lazily free the underlying pages.
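
To make issue 4 concrete, here is a minimal sketch of the hazard
(illustrative only; the comments describe the pre-series behavior):

	void *p = module_alloc(PAGE_SIZE);  /* mapped RWX before this series */

	/* ... JITed/module code is written to p and executed ... */

	vfree(p);  /* the TLB is flushed lazily, so a stale executable TLB
	            * entry for the freed page may survive on some CPUs */

	/*
	 * If the freed page is later reused to back data copied in from
	 * userspace, that user data is reachable through the stale
	 * executable kernel mapping.
	 */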


Changes v2 to v3:
 - Fix commit messages and comments [Boris]
 - Rename VM_HAS_SPECIAL_PERMS [Boris]
 - Remove unnecessary local variables [Boris]
 - Rename set_alias_*() functions [Boris, Andy]
 - Save/restore DR registers when using temporary mm
 - Move line deletion from patch 10 to patch 17

Changes v1 to v2:
 - Add “Reviewed-by” tag [Masami]
 - Comment instead of code to warn against module removal while
   patching [Masami]
 - Avoid open-coded TLB flush [Andy]
 - Remove "This patch" [Borislav Petkov]
 - Do not set the global bit during text poking [Andy, hpa]
 - Add Ack from [Pavel Machek]
 - Split patch 16 "Plug in new special vfree flag" into 4 patches (16-19)
   to make it easier to review. There were no code changes.

Changes from "Don’t leave executable TLB entries to freed pages
v2" [2] to v1 of this patchset:
 - Add support for case of hibernate trying to save an unmapped page
   on the directmap. (Ard Biesheuvel)
 - No weak arch breakout for vfree-ing special memory (Andy Lutomirski)
 - Avoid changing deferred free code by moving modules init free to work
   queue (Andy Lutomirski)
 - Plug in new flag for kprobes and ftrace
 - More arch generic names for set_pages functions (Ard Biesheuvel)
 - Fix for the directmap not always being flushed from the TLB (Nadav Amit)
 
Changes from "x86/alternative: text_poke() enhancements v7" to v1
 - Fix build failure on CONFIG_RANDOMIZE_BASE=n (Rick)
 - Remove text_poke usage from ftrace (Nadav)
 
[1] https://lkml.org/lkml/2018/12/5/200
[2] https://lkml.org/lkml/2018/12/11/1571

Andy Lutomirski (1):
  x86/mm: Introduce temporary mm structs

Nadav Amit (12):
  x86/jump_label: Use text_poke_early() during early init
  x86/mm: Save DRs when loading a temporary mm
  fork: Provide a function for copying init_mm
  x86/alternative: Initialize temporary mm for patching
  x86/alternative: Use temporary mm for text poking
  x86/kgdb: Avoid redundant comparison of patched code
  x86/ftrace: Set trampoline pages as executable
  x86/kprobes: Set instruction page as executable
  x86/module: Avoid breaking W^X while loading modules
  x86/jump-label: Remove support for custom poker
  x86/alternative: Remove the return value of text_poke_*()
  x86/alternative: Comment about module removal races

Rick Edgecombe (7):
  x86/mm/cpa: Add set_direct_map_ functions
  mm: Make hibernate handle unmapped pages
  vmalloc: Add flag for free of special permissions
  modules: Use vmalloc special flag
  bpf: Use vmalloc special flag
  x86/ftrace: Use vmalloc special flag
  x86/kprobes: Use vmalloc special flag

 arch/Kconfig                         |   4 +
 arch/x86/Kconfig                     |   1 +
 arch/x86/include/asm/fixmap.h        |   2 -
 arch/x86/include/asm/mmu_context.h   |  58 ++++++++++
 arch/x86/include/asm/pgtable.h       |   3 +
 arch/x86/include/asm/set_memory.h    |   3 +
 arch/x86/include/asm/text-patching.h |   6 +-
 arch/x86/kernel/alternative.c        | 153 +++++++++++++++++++++------
 arch/x86/kernel/ftrace.c             |  14 ++-
 arch/x86/kernel/jump_label.c         |  21 ++--
 arch/x86/kernel/kgdb.c               |  14 +--
 arch/x86/kernel/kprobes/core.c       |  19 +++-
 arch/x86/kernel/module.c             |   2 +-
 arch/x86/mm/init_64.c                |  36 +++++++
 arch/x86/mm/pageattr.c               |  16 +--
 arch/x86/xen/mmu_pv.c                |   2 -
 include/linux/filter.h               |  18 +---
 include/linux/mm.h                   |  18 ++--
 include/linux/sched/task.h           |   1 +
 include/linux/set_memory.h           |  10 ++
 include/linux/vmalloc.h              |  13 +++
 init/main.c                          |   3 +
 kernel/bpf/core.c                    |   1 -
 kernel/fork.c                        |  24 +++--
 kernel/module.c                      |  82 +++++++-------
 kernel/power/snapshot.c              |   5 +-
 mm/page_alloc.c                      |   7 +-
 mm/vmalloc.c                         | 113 ++++++++++++++++----
 28 files changed, 475 insertions(+), 174 deletions(-)

-- 
2.17.1



* [PATCH v3 01/20] x86/jump_label: Use text_poke_early() during early init
  2019-02-21 23:44 [PATCH v3 00/20] Merge text_poke fixes and executable lockdowns Rick Edgecombe
@ 2019-02-21 23:44 ` Rick Edgecombe
  2019-02-21 23:44 ` [PATCH v3 02/20] x86/mm: Introduce temporary mm structs Rick Edgecombe
                   ` (19 subsequent siblings)
  20 siblings, 0 replies; 27+ messages in thread
From: Rick Edgecombe @ 2019-02-21 23:44 UTC (permalink / raw)
  To: Andy Lutomirski, Ingo Molnar
  Cc: linux-kernel, x86, hpa, Thomas Gleixner, Borislav Petkov,
	Nadav Amit, Dave Hansen, Peter Zijlstra, linux_dti,
	linux-integrity, linux-security-module, akpm, kernel-hardening,
	linux-mm, will.deacon, ard.biesheuvel, kristen, deneen.t.dock,
	Nadav Amit, Kees Cook, Dave Hansen, Masami Hiramatsu,
	Rick Edgecombe

From: Nadav Amit <namit@vmware.com>

There is no apparent reason not to use text_poke_early() during
early init, since no patching of code that might be on the stack is done
and only a single core is running.

This is required for the following patches, which set up a temporary mm
for text poking; that mm is only initialized after some static keys are
enabled/disabled.

Cc: Andy Lutomirski <luto@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---
 arch/x86/kernel/jump_label.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/jump_label.c b/arch/x86/kernel/jump_label.c
index f99bd26bd3f1..e7d8c636b228 100644
--- a/arch/x86/kernel/jump_label.c
+++ b/arch/x86/kernel/jump_label.c
@@ -50,7 +50,12 @@ static void __ref __jump_label_transform(struct jump_entry *entry,
 	jmp.offset = jump_entry_target(entry) -
 		     (jump_entry_code(entry) + JUMP_LABEL_NOP_SIZE);
 
-	if (early_boot_irqs_disabled)
+	/*
+	 * As long as only a single processor is running and the code is still
+	 * not marked as RO, text_poke_early() can be used; Checking that
+	 * system_state is SYSTEM_BOOTING guarantees it.
+	 */
+	if (system_state == SYSTEM_BOOTING)
 		poker = text_poke_early;
 
 	if (type == JUMP_LABEL_JMP) {
-- 
2.17.1



* [PATCH v3 02/20] x86/mm: Introduce temporary mm structs
  2019-02-21 23:44 [PATCH v3 00/20] Merge text_poke fixes and executable lockdowns Rick Edgecombe
  2019-02-21 23:44 ` [PATCH v3 01/20] x86/jump_label: Use text_poke_early() during early init Rick Edgecombe
@ 2019-02-21 23:44 ` Rick Edgecombe
  2019-02-21 23:44 ` [PATCH v3 03/20] x86/mm: Save DRs when loading a temporary mm Rick Edgecombe
                   ` (18 subsequent siblings)
  20 siblings, 0 replies; 27+ messages in thread
From: Rick Edgecombe @ 2019-02-21 23:44 UTC (permalink / raw)
  To: Andy Lutomirski, Ingo Molnar
  Cc: linux-kernel, x86, hpa, Thomas Gleixner, Borislav Petkov,
	Nadav Amit, Dave Hansen, Peter Zijlstra, linux_dti,
	linux-integrity, linux-security-module, akpm, kernel-hardening,
	linux-mm, will.deacon, ard.biesheuvel, kristen, deneen.t.dock,
	Kees Cook, Dave Hansen, Nadav Amit, Rick Edgecombe

From: Andy Lutomirski <luto@kernel.org>

Using a dedicated page-table for temporary PTEs prevents other cores
from using - even speculatively - these PTEs, thereby providing two
benefits:

(1) Security hardening: an attacker that gains kernel memory writing
abilities cannot easily overwrite sensitive data.

(2) Avoiding TLB shootdowns: the PTEs do not need to be flushed in
remote page-tables.

To do so, a temporary mm_struct can be used. Mappings which are private
to this mm can be set in the userspace part of the address space.
During the whole time in which the temporary mm is loaded, interrupts
must be disabled.

The first use case for the temporary mm struct, which will follow, is
poking the kernel text.

[ Commit message was written by Nadav Amit ]
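
For illustration, the intended usage pattern is roughly the following
(a sketch; poking_mm is the patching mm introduced later in this series):

	temp_mm_state_t prev;
	unsigned long flags;

	local_irq_save(flags);              /* IRQs must stay disabled */
	prev = use_temporary_mm(poking_mm);

	/* ... access the CPU-local temporary mappings ... */

	unuse_temporary_mm(prev);
	local_irq_restore(flags);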

Cc: Kees Cook <keescook@chromium.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
Tested-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---
 arch/x86/include/asm/mmu_context.h | 33 ++++++++++++++++++++++++++++++
 1 file changed, 33 insertions(+)

diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
index 19d18fae6ec6..d684b954f3c0 100644
--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -356,4 +356,37 @@ static inline unsigned long __get_current_cr3_fast(void)
 	return cr3;
 }
 
+typedef struct {
+	struct mm_struct *prev;
+} temp_mm_state_t;
+
+/*
+ * Using a temporary mm allows to set temporary mappings that are not accessible
+ * by other cores. Such mappings are needed to perform sensitive memory writes
+ * that override the kernel memory protections (e.g., W^X), without exposing the
+ * temporary page-table mappings that are required for these write operations to
+ * other cores. Using temporary mm also allows to avoid TLB shootdowns when the
+ * mapping is torn down.
+ *
+ * Context: The temporary mm needs to be used exclusively by a single core. To
+ *          harden security IRQs must be disabled while the temporary mm is
+ *          loaded, thereby preventing interrupt handler bugs from overriding
+ *          the kernel memory protection.
+ */
+static inline temp_mm_state_t use_temporary_mm(struct mm_struct *mm)
+{
+	temp_mm_state_t state;
+
+	lockdep_assert_irqs_disabled();
+	state.prev = this_cpu_read(cpu_tlbstate.loaded_mm);
+	switch_mm_irqs_off(NULL, mm, current);
+	return state;
+}
+
+static inline void unuse_temporary_mm(temp_mm_state_t prev)
+{
+	lockdep_assert_irqs_disabled();
+	switch_mm_irqs_off(NULL, prev.prev, current);
+}
+
 #endif /* _ASM_X86_MMU_CONTEXT_H */
-- 
2.17.1



* [PATCH v3 03/20] x86/mm: Save DRs when loading a temporary mm
  2019-02-21 23:44 [PATCH v3 00/20] Merge text_poke fixes and executable lockdowns Rick Edgecombe
  2019-02-21 23:44 ` [PATCH v3 01/20] x86/jump_label: Use text_poke_early() during early init Rick Edgecombe
  2019-02-21 23:44 ` [PATCH v3 02/20] x86/mm: Introduce temporary mm structs Rick Edgecombe
@ 2019-02-21 23:44 ` Rick Edgecombe
  2019-02-22  0:07   ` Sean Christopherson
  2019-02-21 23:44 ` [PATCH v3 04/20] fork: Provide a function for copying init_mm Rick Edgecombe
                   ` (17 subsequent siblings)
  20 siblings, 1 reply; 27+ messages in thread
From: Rick Edgecombe @ 2019-02-21 23:44 UTC (permalink / raw)
  To: Andy Lutomirski, Ingo Molnar
  Cc: linux-kernel, x86, hpa, Thomas Gleixner, Borislav Petkov,
	Nadav Amit, Dave Hansen, Peter Zijlstra, linux_dti,
	linux-integrity, linux-security-module, akpm, kernel-hardening,
	linux-mm, will.deacon, ard.biesheuvel, kristen, deneen.t.dock,
	Nadav Amit

From: Nadav Amit <namit@vmware.com>

Prevent user watchpoints from mistakenly firing while the temporary mm
is being used. As the addresses of the temporary mm might overlap those
of the user process, this is necessary to prevent wrong signals or worse
things from happening.

Cc: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
---
 arch/x86/include/asm/mmu_context.h | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
index d684b954f3c0..0d6c72ece750 100644
--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -13,6 +13,7 @@
 #include <asm/tlbflush.h>
 #include <asm/paravirt.h>
 #include <asm/mpx.h>
+#include <asm/debugreg.h>
 
 extern atomic64_t last_mm_ctx_id;
 
@@ -358,6 +359,7 @@ static inline unsigned long __get_current_cr3_fast(void)
 
 typedef struct {
 	struct mm_struct *prev;
+	unsigned short bp_enabled : 1;
 } temp_mm_state_t;
 
 /*
@@ -380,6 +382,22 @@ static inline temp_mm_state_t use_temporary_mm(struct mm_struct *mm)
 	lockdep_assert_irqs_disabled();
 	state.prev = this_cpu_read(cpu_tlbstate.loaded_mm);
 	switch_mm_irqs_off(NULL, mm, current);
+
+	/*
+	 * If breakpoints are enabled, disable them while the temporary mm is
+	 * used. Userspace might set up watchpoints on addresses that are used
+	 * in the temporary mm, which would lead to wrong signals being sent or
+	 * crashes.
+	 *
+	 * Note that breakpoints are not disabled selectively, which also causes
+	 * kernel breakpoints (e.g., perf's) to be disabled. This might be
+	 * undesirable, but still seems reasonable as the code that runs in the
+	 * temporary mm should be short.
+	 */
+	state.bp_enabled = hw_breakpoint_active();
+	if (state.bp_enabled)
+		hw_breakpoint_disable();
+
 	return state;
 }
 
@@ -387,6 +405,13 @@ static inline void unuse_temporary_mm(temp_mm_state_t prev)
 {
 	lockdep_assert_irqs_disabled();
 	switch_mm_irqs_off(NULL, prev.prev, current);
+
+	/*
+	 * Restore the breakpoints if they were disabled before the temporary mm
+	 * was loaded.
+	 */
+	if (prev.bp_enabled)
+		hw_breakpoint_restore();
 }
 
 #endif /* _ASM_X86_MMU_CONTEXT_H */
-- 
2.17.1



* [PATCH v3 04/20] fork: Provide a function for copying init_mm
  2019-02-21 23:44 [PATCH v3 00/20] Merge text_poke fixes and executable lockdowns Rick Edgecombe
                   ` (2 preceding siblings ...)
  2019-02-21 23:44 ` [PATCH v3 03/20] x86/mm: Save DRs when loading a temporary mm Rick Edgecombe
@ 2019-02-21 23:44 ` Rick Edgecombe
  2019-02-21 23:44 ` [PATCH v3 05/20] x86/alternative: Initialize temporary mm for patching Rick Edgecombe
                   ` (16 subsequent siblings)
  20 siblings, 0 replies; 27+ messages in thread
From: Rick Edgecombe @ 2019-02-21 23:44 UTC (permalink / raw)
  To: Andy Lutomirski, Ingo Molnar
  Cc: linux-kernel, x86, hpa, Thomas Gleixner, Borislav Petkov,
	Nadav Amit, Dave Hansen, Peter Zijlstra, linux_dti,
	linux-integrity, linux-security-module, akpm, kernel-hardening,
	linux-mm, will.deacon, ard.biesheuvel, kristen, deneen.t.dock,
	Nadav Amit, Kees Cook, Dave Hansen, Rick Edgecombe

From: Nadav Amit <namit@vmware.com>

Provide a function for copying init_mm. This function will later be used
for setting up a temporary mm, as sketched below.
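
A minimal sketch of the intended use (mirroring the poking_init() added
later in this series):

	struct mm_struct *poking_mm;

	void __init poking_init(void)
	{
		/* Duplicate init_mm to get a private mm for text patching. */
		poking_mm = copy_init_mm();
		BUG_ON(!poking_mm);
	}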

Cc: Andy Lutomirski <luto@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
Tested-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---
 include/linux/sched/task.h |  1 +
 kernel/fork.c              | 24 ++++++++++++++++++------
 2 files changed, 19 insertions(+), 6 deletions(-)

diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h
index 44c6f15800ff..c5a00a7b3beb 100644
--- a/include/linux/sched/task.h
+++ b/include/linux/sched/task.h
@@ -76,6 +76,7 @@ extern void exit_itimers(struct signal_struct *);
 extern long _do_fork(unsigned long, unsigned long, unsigned long, int __user *, int __user *, unsigned long);
 extern long do_fork(unsigned long, unsigned long, unsigned long, int __user *, int __user *);
 struct task_struct *fork_idle(int);
+struct mm_struct *copy_init_mm(void);
 extern pid_t kernel_thread(int (*fn)(void *), void *arg, unsigned long flags);
 extern long kernel_wait4(pid_t, int __user *, int, struct rusage *);
 
diff --git a/kernel/fork.c b/kernel/fork.c
index b69248e6f0e0..1b43753c1884 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1299,13 +1299,20 @@ void mm_release(struct task_struct *tsk, struct mm_struct *mm)
 		complete_vfork_done(tsk);
 }
 
-/*
- * Allocate a new mm structure and copy contents from the
- * mm structure of the passed in task structure.
+/**
+ * dup_mm() - duplicates an existing mm structure
+ * @tsk: the task_struct with which the new mm will be associated.
+ * @oldmm: the mm to duplicate.
+ *
+ * Allocates a new mm structure and duplicates the provided @oldmm structure
+ * content into it.
+ *
+ * Return: the duplicated mm or NULL on failure.
  */
-static struct mm_struct *dup_mm(struct task_struct *tsk)
+static struct mm_struct *dup_mm(struct task_struct *tsk,
+				struct mm_struct *oldmm)
 {
-	struct mm_struct *mm, *oldmm = current->mm;
+	struct mm_struct *mm;
 	int err;
 
 	mm = allocate_mm();
@@ -1372,7 +1379,7 @@ static int copy_mm(unsigned long clone_flags, struct task_struct *tsk)
 	}
 
 	retval = -ENOMEM;
-	mm = dup_mm(tsk);
+	mm = dup_mm(tsk, current->mm);
 	if (!mm)
 		goto fail_nomem;
 
@@ -2187,6 +2194,11 @@ struct task_struct *fork_idle(int cpu)
 	return task;
 }
 
+struct mm_struct *copy_init_mm(void)
+{
+	return dup_mm(NULL, &init_mm);
+}
+
 /*
  *  Ok, this is the main fork-routine.
  *
-- 
2.17.1



* [PATCH v3 05/20] x86/alternative: Initialize temporary mm for patching
  2019-02-21 23:44 [PATCH v3 00/20] Merge text_poke fixes and executable lockdowns Rick Edgecombe
                   ` (3 preceding siblings ...)
  2019-02-21 23:44 ` [PATCH v3 04/20] fork: Provide a function for copying init_mm Rick Edgecombe
@ 2019-02-21 23:44 ` Rick Edgecombe
  2019-02-21 23:44 ` [PATCH v3 06/20] x86/alternative: Use temporary mm for text poking Rick Edgecombe
                   ` (15 subsequent siblings)
  20 siblings, 0 replies; 27+ messages in thread
From: Rick Edgecombe @ 2019-02-21 23:44 UTC (permalink / raw)
  To: Andy Lutomirski, Ingo Molnar
  Cc: linux-kernel, x86, hpa, Thomas Gleixner, Borislav Petkov,
	Nadav Amit, Dave Hansen, Peter Zijlstra, linux_dti,
	linux-integrity, linux-security-module, akpm, kernel-hardening,
	linux-mm, will.deacon, ard.biesheuvel, kristen, deneen.t.dock,
	Nadav Amit, Kees Cook, Dave Hansen, Rick Edgecombe

From: Nadav Amit <namit@vmware.com>

To prevent improper use of the PTEs that are used for text patching, the
next patches will use a temporary mm struct. Initialize it by copying
the init mm.

The address that will be used for patching is taken from the lower area
that is usually used for the task memory. Doing so prevents the need to
frequently synchronize the temporary-mm (e.g., when BPF programs are
installed), since different PGDs are used for the task memory.

Finally, randomize the address of the PTEs to harden against exploits
that use these PTEs.

Cc: Kees Cook <keescook@chromium.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
Tested-by: Masami Hiramatsu <mhiramat@kernel.org>
Suggested-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---
 arch/x86/include/asm/pgtable.h       |  3 +++
 arch/x86/include/asm/text-patching.h |  2 ++
 arch/x86/kernel/alternative.c        |  3 +++
 arch/x86/mm/init_64.c                | 36 ++++++++++++++++++++++++++++
 init/main.c                          |  3 +++
 5 files changed, 47 insertions(+)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 40616e805292..e8f630d9a2ed 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -1021,6 +1021,9 @@ static inline void __meminit init_trampoline_default(void)
 	/* Default trampoline pgd value */
 	trampoline_pgd_entry = init_top_pgt[pgd_index(__PAGE_OFFSET)];
 }
+
+void __init poking_init(void);
+
 # ifdef CONFIG_RANDOMIZE_MEMORY
 void __meminit init_trampoline(void);
 # else
diff --git a/arch/x86/include/asm/text-patching.h b/arch/x86/include/asm/text-patching.h
index f8fc8e86cf01..a75eed841eed 100644
--- a/arch/x86/include/asm/text-patching.h
+++ b/arch/x86/include/asm/text-patching.h
@@ -39,5 +39,7 @@ extern void *text_poke_kgdb(void *addr, const void *opcode, size_t len);
 extern int poke_int3_handler(struct pt_regs *regs);
 extern void *text_poke_bp(void *addr, const void *opcode, size_t len, void *handler);
 extern int after_bootmem;
+extern __ro_after_init struct mm_struct *poking_mm;
+extern __ro_after_init unsigned long poking_addr;
 
 #endif /* _ASM_X86_TEXT_PATCHING_H */
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 12fddbc8c55b..ae05fbb50171 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -678,6 +678,9 @@ void *__init_or_module text_poke_early(void *addr, const void *opcode,
 	return addr;
 }
 
+__ro_after_init struct mm_struct *poking_mm;
+__ro_after_init unsigned long poking_addr;
+
 static void *__text_poke(void *addr, const void *opcode, size_t len)
 {
 	unsigned long flags;
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index bccff68e3267..125c8c48aa24 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -53,6 +53,7 @@
 #include <asm/init.h>
 #include <asm/uv/uv.h>
 #include <asm/setup.h>
+#include <asm/text-patching.h>
 
 #include "mm_internal.h"
 
@@ -1383,6 +1384,41 @@ unsigned long memory_block_size_bytes(void)
 	return memory_block_size_probed;
 }
 
+/*
+ * Initialize an mm_struct to be used during poking and a pointer to be used
+ * during patching.
+ */
+void __init poking_init(void)
+{
+	spinlock_t *ptl;
+	pte_t *ptep;
+
+	poking_mm = copy_init_mm();
+	BUG_ON(!poking_mm);
+
+	/*
+	 * Randomize the poking address, but make sure that the following page
+	 * will be mapped at the same PMD. We need 2 pages, so find space for 3,
+	 * and adjust the address if the PMD ends after the first one.
+	 */
+	poking_addr = TASK_UNMAPPED_BASE;
+	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE))
+		poking_addr += (kaslr_get_random_long("Poking") & PAGE_MASK) %
+			(TASK_SIZE - TASK_UNMAPPED_BASE - 3 * PAGE_SIZE);
+
+	if (((poking_addr + PAGE_SIZE) & ~PMD_MASK) == 0)
+		poking_addr += PAGE_SIZE;
+
+	/*
+	 * We need to trigger the allocation of the page-tables that will be
+	 * needed for poking now. Later, poking may be performed in an atomic
+	 * section, which might cause allocation to fail.
+	 */
+	ptep = get_locked_pte(poking_mm, poking_addr, &ptl);
+	BUG_ON(!ptep);
+	pte_unmap_unlock(ptep, ptl);
+}
+
 #ifdef CONFIG_SPARSEMEM_VMEMMAP
 /*
  * Initialise the sparsemem vmemmap using huge-pages at the PMD level.
diff --git a/init/main.c b/init/main.c
index e2e80ca3165a..f5947ba53bb4 100644
--- a/init/main.c
+++ b/init/main.c
@@ -496,6 +496,8 @@ void __init __weak thread_stack_cache_init(void)
 
 void __init __weak mem_encrypt_init(void) { }
 
+void __init __weak poking_init(void) { }
+
 bool initcall_debug;
 core_param(initcall_debug, initcall_debug, bool, 0644);
 
@@ -730,6 +732,7 @@ asmlinkage __visible void __init start_kernel(void)
 	taskstats_init_early();
 	delayacct_init();
 
+	poking_init();
 	check_bugs();
 
 	acpi_subsystem_init();
-- 
2.17.1



* [PATCH v3 06/20] x86/alternative: Use temporary mm for text poking
  2019-02-21 23:44 [PATCH v3 00/20] Merge text_poke fixes and executable lockdowns Rick Edgecombe
                   ` (4 preceding siblings ...)
  2019-02-21 23:44 ` [PATCH v3 05/20] x86/alternative: Initialize temporary mm for patching Rick Edgecombe
@ 2019-02-21 23:44 ` Rick Edgecombe
  2019-02-21 23:44 ` [PATCH v3 07/20] x86/kgdb: Avoid redundant comparison of patched code Rick Edgecombe
                   ` (14 subsequent siblings)
  20 siblings, 0 replies; 27+ messages in thread
From: Rick Edgecombe @ 2019-02-21 23:44 UTC (permalink / raw)
  To: Andy Lutomirski, Ingo Molnar
  Cc: linux-kernel, x86, hpa, Thomas Gleixner, Borislav Petkov,
	Nadav Amit, Dave Hansen, Peter Zijlstra, linux_dti,
	linux-integrity, linux-security-module, akpm, kernel-hardening,
	linux-mm, will.deacon, ard.biesheuvel, kristen, deneen.t.dock,
	Nadav Amit, Kees Cook, Dave Hansen, Masami Hiramatsu,
	Rick Edgecombe

From: Nadav Amit <namit@vmware.com>

text_poke() can potentially compromise security as it sets temporary
PTEs in the fixmap. These PTEs might be used to rewrite the kernel code
from other cores accidentally or maliciously, if an attacker gains the
ability to write onto kernel memory.

Moreover, since remote TLBs are not flushed after the temporary PTEs are
removed, the time-window in which the code is writable is not limited if
the fixmap PTEs - maliciously or accidentally - are cached in the TLB.
To address these potential security hazards, use a temporary mm for
patching the code.

Finally, text_poke() is also not conservative enough when mapping pages,
as it always tries to map 2 pages, even when a single one is sufficient.
So try to be more conservative, and do not map more than needed.
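
For callers nothing changes; a hedged sketch of the contract (text_mutex
must be held around text_poke(), as elsewhere in this series):

	mutex_lock(&text_mutex);
	text_poke(addr, new_opcode, len);  /* now patches via the temporary mm */
	mutex_unlock(&text_mutex);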

Cc: Andy Lutomirski <luto@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---
 arch/x86/include/asm/fixmap.h |   2 -
 arch/x86/kernel/alternative.c | 108 +++++++++++++++++++++++++++-------
 arch/x86/xen/mmu_pv.c         |   2 -
 3 files changed, 86 insertions(+), 26 deletions(-)

diff --git a/arch/x86/include/asm/fixmap.h b/arch/x86/include/asm/fixmap.h
index 50ba74a34a37..9da8cccdf3fb 100644
--- a/arch/x86/include/asm/fixmap.h
+++ b/arch/x86/include/asm/fixmap.h
@@ -103,8 +103,6 @@ enum fixed_addresses {
 #ifdef CONFIG_PARAVIRT
 	FIX_PARAVIRT_BOOTMAP,
 #endif
-	FIX_TEXT_POKE1,	/* reserve 2 pages for text_poke() */
-	FIX_TEXT_POKE0, /* first page is last, because allocation is backward */
 #ifdef	CONFIG_X86_INTEL_MID
 	FIX_LNW_VRTC,
 #endif
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index ae05fbb50171..cfe5bfe06f9d 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -11,6 +11,7 @@
 #include <linux/stop_machine.h>
 #include <linux/slab.h>
 #include <linux/kdebug.h>
+#include <linux/mmu_context.h>
 #include <asm/text-patching.h>
 #include <asm/alternative.h>
 #include <asm/sections.h>
@@ -683,41 +684,104 @@ __ro_after_init unsigned long poking_addr;
 
 static void *__text_poke(void *addr, const void *opcode, size_t len)
 {
+	bool cross_page_boundary = offset_in_page(addr) + len > PAGE_SIZE;
+	struct page *pages[2] = {NULL};
+	temp_mm_state_t prev;
 	unsigned long flags;
-	char *vaddr;
-	struct page *pages[2];
-	int i;
+	pte_t pte, *ptep;
+	spinlock_t *ptl;
+	pgprot_t pgprot;
 
 	/*
-	 * While boot memory allocator is runnig we cannot use struct
-	 * pages as they are not yet initialized.
+	 * While boot memory allocator is running we cannot use struct pages as
+	 * they are not yet initialized. There is no way to recover.
 	 */
 	BUG_ON(!after_bootmem);
 
 	if (!core_kernel_text((unsigned long)addr)) {
 		pages[0] = vmalloc_to_page(addr);
-		pages[1] = vmalloc_to_page(addr + PAGE_SIZE);
+		if (cross_page_boundary)
+			pages[1] = vmalloc_to_page(addr + PAGE_SIZE);
 	} else {
 		pages[0] = virt_to_page(addr);
 		WARN_ON(!PageReserved(pages[0]));
-		pages[1] = virt_to_page(addr + PAGE_SIZE);
+		if (cross_page_boundary)
+			pages[1] = virt_to_page(addr + PAGE_SIZE);
 	}
-	BUG_ON(!pages[0]);
+	/*
+	 * If something went wrong, crash and burn since recovery paths are not
+	 * implemented.
+	 */
+	BUG_ON(!pages[0] || (cross_page_boundary && !pages[1]));
+
 	local_irq_save(flags);
-	set_fixmap(FIX_TEXT_POKE0, page_to_phys(pages[0]));
-	if (pages[1])
-		set_fixmap(FIX_TEXT_POKE1, page_to_phys(pages[1]));
-	vaddr = (char *)fix_to_virt(FIX_TEXT_POKE0);
-	memcpy(&vaddr[(unsigned long)addr & ~PAGE_MASK], opcode, len);
-	clear_fixmap(FIX_TEXT_POKE0);
-	if (pages[1])
-		clear_fixmap(FIX_TEXT_POKE1);
-	local_flush_tlb();
-	sync_core();
-	/* Could also do a CLFLUSH here to speed up CPU recovery; but
-	   that causes hangs on some VIA CPUs. */
-	for (i = 0; i < len; i++)
-		BUG_ON(((char *)addr)[i] != ((char *)opcode)[i]);
+
+	/*
+	 * Map the page without the global bit, as TLB flushing is done with
+	 * flush_tlb_mm_range(), which is intended for non-global PTEs.
+	 */
+	pgprot = __pgprot(pgprot_val(PAGE_KERNEL) & ~_PAGE_GLOBAL);
+
+	/*
+	 * The lock is not really needed, but this allows to avoid open-coding.
+	 */
+	ptep = get_locked_pte(poking_mm, poking_addr, &ptl);
+
+	/*
+	 * This must not fail; preallocated in poking_init().
+	 */
+	VM_BUG_ON(!ptep);
+
+	pte = mk_pte(pages[0], pgprot);
+	set_pte_at(poking_mm, poking_addr, ptep, pte);
+
+	if (cross_page_boundary) {
+		pte = mk_pte(pages[1], pgprot);
+		set_pte_at(poking_mm, poking_addr + PAGE_SIZE, ptep + 1, pte);
+	}
+
+	/*
+	 * Loading the temporary mm behaves as a compiler barrier, which
+	 * guarantees that the PTE will be set at the time memcpy() is done.
+	 */
+	prev = use_temporary_mm(poking_mm);
+
+	kasan_disable_current();
+	memcpy((u8 *)poking_addr + offset_in_page(addr), opcode, len);
+	kasan_enable_current();
+
+	/*
+	 * Ensure that the PTE is only cleared after the instructions of memcpy
+	 * were issued by using a compiler barrier.
+	 */
+	barrier();
+
+	pte_clear(poking_mm, poking_addr, ptep);
+	if (cross_page_boundary)
+		pte_clear(poking_mm, poking_addr + PAGE_SIZE, ptep + 1);
+
+	/*
+	 * Loading the previous page-table hierarchy requires a serializing
+	 * instruction that already allows the core to see the updated version.
+	 * Xen-PV is assumed to serialize execution in a similar manner.
+	 */
+	unuse_temporary_mm(prev);
+
+	/*
+	 * Flushing the TLB might involve IPIs, which would require enabled
+	 * IRQs, but not if the mm is not used, as it is in this point.
+	 */
+	flush_tlb_mm_range(poking_mm, poking_addr, poking_addr +
+			   (cross_page_boundary ? 2 : 1) * PAGE_SIZE,
+			   PAGE_SHIFT, false);
+
+	/*
+	 * If the text does not match what we just wrote then something is
+	 * fundamentally screwy; there's nothing we can really do about that.
+	 */
+	BUG_ON(memcmp(addr, opcode, len));
+
+	pte_unmap_unlock(ptep, ptl);
 	local_irq_restore(flags);
 	return addr;
 }
diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
index 0f4fe206dcc2..82b181fcefe5 100644
--- a/arch/x86/xen/mmu_pv.c
+++ b/arch/x86/xen/mmu_pv.c
@@ -2319,8 +2319,6 @@ static void xen_set_fixmap(unsigned idx, phys_addr_t phys, pgprot_t prot)
 #elif defined(CONFIG_X86_VSYSCALL_EMULATION)
 	case VSYSCALL_PAGE:
 #endif
-	case FIX_TEXT_POKE0:
-	case FIX_TEXT_POKE1:
 		/* All local page mappings */
 		pte = pfn_pte(phys, prot);
 		break;
-- 
2.17.1



* [PATCH v3 07/20] x86/kgdb: Avoid redundant comparison of patched code
  2019-02-21 23:44 [PATCH v3 00/20] Merge text_poke fixes and executable lockdowns Rick Edgecombe
                   ` (5 preceding siblings ...)
  2019-02-21 23:44 ` [PATCH v3 06/20] x86/alternative: Use temporary mm for text poking Rick Edgecombe
@ 2019-02-21 23:44 ` Rick Edgecombe
  2019-02-21 23:44 ` [PATCH v3 08/20] x86/ftrace: Set trampoline pages as executable Rick Edgecombe
                   ` (13 subsequent siblings)
  20 siblings, 0 replies; 27+ messages in thread
From: Rick Edgecombe @ 2019-02-21 23:44 UTC (permalink / raw)
  To: Andy Lutomirski, Ingo Molnar
  Cc: linux-kernel, x86, hpa, Thomas Gleixner, Borislav Petkov,
	Nadav Amit, Dave Hansen, Peter Zijlstra, linux_dti,
	linux-integrity, linux-security-module, akpm, kernel-hardening,
	linux-mm, will.deacon, ard.biesheuvel, kristen, deneen.t.dock,
	Nadav Amit, Rick Edgecombe

From: Nadav Amit <namit@vmware.com>

text_poke() already ensures that the written value is the correct one
and fails if that is not the case. There is no need for an additional
comparison. Remove it.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---
 arch/x86/kernel/kgdb.c | 14 +-------------
 1 file changed, 1 insertion(+), 13 deletions(-)

diff --git a/arch/x86/kernel/kgdb.c b/arch/x86/kernel/kgdb.c
index 1461544cba8b..057af9187a04 100644
--- a/arch/x86/kernel/kgdb.c
+++ b/arch/x86/kernel/kgdb.c
@@ -746,7 +746,6 @@ void kgdb_arch_set_pc(struct pt_regs *regs, unsigned long ip)
 int kgdb_arch_set_breakpoint(struct kgdb_bkpt *bpt)
 {
 	int err;
-	char opc[BREAK_INSTR_SIZE];
 
 	bpt->type = BP_BREAKPOINT;
 	err = probe_kernel_read(bpt->saved_instr, (char *)bpt->bpt_addr,
@@ -765,11 +764,6 @@ int kgdb_arch_set_breakpoint(struct kgdb_bkpt *bpt)
 		return -EBUSY;
 	text_poke_kgdb((void *)bpt->bpt_addr, arch_kgdb_ops.gdb_bpt_instr,
 		       BREAK_INSTR_SIZE);
-	err = probe_kernel_read(opc, (char *)bpt->bpt_addr, BREAK_INSTR_SIZE);
-	if (err)
-		return err;
-	if (memcmp(opc, arch_kgdb_ops.gdb_bpt_instr, BREAK_INSTR_SIZE))
-		return -EINVAL;
 	bpt->type = BP_POKE_BREAKPOINT;
 
 	return err;
@@ -777,9 +771,6 @@ int kgdb_arch_set_breakpoint(struct kgdb_bkpt *bpt)
 
 int kgdb_arch_remove_breakpoint(struct kgdb_bkpt *bpt)
 {
-	int err;
-	char opc[BREAK_INSTR_SIZE];
-
 	if (bpt->type != BP_POKE_BREAKPOINT)
 		goto knl_write;
 	/*
@@ -790,10 +781,7 @@ int kgdb_arch_remove_breakpoint(struct kgdb_bkpt *bpt)
 		goto knl_write;
 	text_poke_kgdb((void *)bpt->bpt_addr, bpt->saved_instr,
 		       BREAK_INSTR_SIZE);
-	err = probe_kernel_read(opc, (char *)bpt->bpt_addr, BREAK_INSTR_SIZE);
-	if (err || memcmp(opc, bpt->saved_instr, BREAK_INSTR_SIZE))
-		goto knl_write;
-	return err;
+	return 0;
 
 knl_write:
 	return probe_kernel_write((char *)bpt->bpt_addr,
-- 
2.17.1



* [PATCH v3 08/20] x86/ftrace: Set trampoline pages as executable
  2019-02-21 23:44 [PATCH v3 00/20] Merge text_poke fixes and executable lockdowns Rick Edgecombe
                   ` (6 preceding siblings ...)
  2019-02-21 23:44 ` [PATCH v3 07/20] x86/kgdb: Avoid redundant comparison of patched code Rick Edgecombe
@ 2019-02-21 23:44 ` Rick Edgecombe
  2019-02-21 23:44 ` [PATCH v3 09/20] x86/kprobes: Set instruction page " Rick Edgecombe
                   ` (12 subsequent siblings)
  20 siblings, 0 replies; 27+ messages in thread
From: Rick Edgecombe @ 2019-02-21 23:44 UTC (permalink / raw)
  To: Andy Lutomirski, Ingo Molnar
  Cc: linux-kernel, x86, hpa, Thomas Gleixner, Borislav Petkov,
	Nadav Amit, Dave Hansen, Peter Zijlstra, linux_dti,
	linux-integrity, linux-security-module, akpm, kernel-hardening,
	linux-mm, will.deacon, ard.biesheuvel, kristen, deneen.t.dock,
	Nadav Amit, Rick Edgecombe

From: Nadav Amit <namit@vmware.com>

Since module_alloc() will soon stop setting the pages as executable, set
the ftrace trampoline pages as executable after they are allocated.

For the time being, do not change ftrace to use the text_poke()
interface. As a result, ftrace still breaks W^X.

Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---
 arch/x86/kernel/ftrace.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
index 8257a59704ae..13c8249b197f 100644
--- a/arch/x86/kernel/ftrace.c
+++ b/arch/x86/kernel/ftrace.c
@@ -742,6 +742,7 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
 	unsigned long end_offset;
 	unsigned long op_offset;
 	unsigned long offset;
+	unsigned long npages;
 	unsigned long size;
 	unsigned long retq;
 	unsigned long *ptr;
@@ -774,6 +775,7 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
 		return 0;
 
 	*tramp_size = size + RET_SIZE + sizeof(void *);
+	npages = DIV_ROUND_UP(*tramp_size, PAGE_SIZE);
 
 	/* Copy ftrace_caller onto the trampoline memory */
 	ret = probe_kernel_read(trampoline, (void *)start_offset, size);
@@ -818,6 +820,12 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
 	/* ALLOC_TRAMP flags lets us know we created it */
 	ops->flags |= FTRACE_OPS_FL_ALLOC_TRAMP;
 
+	/*
+	 * Module allocation needs to be completed by making the page
+	 * executable. The page is still writable, which is a security hazard,
+	 * but anyhow ftrace breaks W^X completely.
+	 */
+	set_memory_x((unsigned long)trampoline, npages);
 	return (unsigned long)trampoline;
 fail:
 	tramp_free(trampoline, *tramp_size);
-- 
2.17.1



* [PATCH v3 09/20] x86/kprobes: Set instruction page as executable
  2019-02-21 23:44 [PATCH v3 00/20] Merge text_poke fixes and executable lockdowns Rick Edgecombe
                   ` (7 preceding siblings ...)
  2019-02-21 23:44 ` [PATCH v3 08/20] x86/ftrace: Set trampoline pages as executable Rick Edgecombe
@ 2019-02-21 23:44 ` Rick Edgecombe
  2019-02-21 23:44 ` [PATCH v3 10/20] x86/module: Avoid breaking W^X while loading modules Rick Edgecombe
                   ` (11 subsequent siblings)
  20 siblings, 0 replies; 27+ messages in thread
From: Rick Edgecombe @ 2019-02-21 23:44 UTC (permalink / raw)
  To: Andy Lutomirski, Ingo Molnar
  Cc: linux-kernel, x86, hpa, Thomas Gleixner, Borislav Petkov,
	Nadav Amit, Dave Hansen, Peter Zijlstra, linux_dti,
	linux-integrity, linux-security-module, akpm, kernel-hardening,
	linux-mm, will.deacon, ard.biesheuvel, kristen, deneen.t.dock,
	Nadav Amit, Rick Edgecombe

From: Nadav Amit <namit@vmware.com>

This is a preparatory change for a following patch that makes module
allocated pages non-executable: set the kprobes instruction page as
executable after allocation.

While at it, do some small cleanup of what appears to be unnecessary
masking.

Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---
 arch/x86/kernel/kprobes/core.c | 24 ++++++++++++++++++++----
 1 file changed, 20 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index 4ba75afba527..98c671e89889 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -431,8 +431,20 @@ void *alloc_insn_page(void)
 	void *page;
 
 	page = module_alloc(PAGE_SIZE);
-	if (page)
-		set_memory_ro((unsigned long)page & PAGE_MASK, 1);
+	if (!page)
+		return NULL;
+
+	/*
+	 * First make the page read-only, and only then make it executable to
+	 * prevent it from being W+X in between.
+	 */
+	set_memory_ro((unsigned long)page, 1);
+
+	/*
+	 * TODO: Once additional kernel code protection mechanisms are set, ensure
+	 * that the page was not maliciously altered and it is still zeroed.
+	 */
+	set_memory_x((unsigned long)page, 1);
 
 	return page;
 }
@@ -440,8 +452,12 @@ void *alloc_insn_page(void)
 /* Recover page to RW mode before releasing it */
 void free_insn_page(void *page)
 {
-	set_memory_nx((unsigned long)page & PAGE_MASK, 1);
-	set_memory_rw((unsigned long)page & PAGE_MASK, 1);
+	/*
+	 * First make the page non-executable, and only then make it writable to
+	 * prevent it from being W+X in between.
+	 */
+	set_memory_nx((unsigned long)page, 1);
+	set_memory_rw((unsigned long)page, 1);
 	module_memfree(page);
 }
 
-- 
2.17.1



* [PATCH v3 10/20] x86/module: Avoid breaking W^X while loading modules
  2019-02-21 23:44 [PATCH v3 00/20] Merge text_poke fixes and executable lockdowns Rick Edgecombe
                   ` (8 preceding siblings ...)
  2019-02-21 23:44 ` [PATCH v3 09/20] x86/kprobes: Set instruction page " Rick Edgecombe
@ 2019-02-21 23:44 ` Rick Edgecombe
  2019-02-21 23:44 ` [PATCH v3 11/20] x86/jump-label: Remove support for custom poker Rick Edgecombe
                   ` (10 subsequent siblings)
  20 siblings, 0 replies; 27+ messages in thread
From: Rick Edgecombe @ 2019-02-21 23:44 UTC (permalink / raw)
  To: Andy Lutomirski, Ingo Molnar
  Cc: linux-kernel, x86, hpa, Thomas Gleixner, Borislav Petkov,
	Nadav Amit, Dave Hansen, Peter Zijlstra, linux_dti,
	linux-integrity, linux-security-module, akpm, kernel-hardening,
	linux-mm, will.deacon, ard.biesheuvel, kristen, deneen.t.dock,
	Nadav Amit, Kees Cook, Dave Hansen, Masami Hiramatsu, Jessica Yu,
	Rick Edgecombe

From: Nadav Amit <namit@vmware.com>

When modules and BPF filters are loaded, there is a time window in
which some memory is both writable and executable. An attacker that has
already found another vulnerability (e.g., a dangling pointer) might be
able to exploit this behavior to overwrite kernel code. Prevent having
writable and executable PTEs at this stage.

In addition, avoiding having W+X mappings can also slightly simplify the
patching of module code on initialization (e.g., by alternatives and
static keys), as will be done in the next patch. This was actually the
main motivation for this patch.

To avoid W+X mappings, set the pages initially as RW (NX), and only after
they are set as RO set them as X as well. Setting them as executable in a
separate step avoids a situation in which one core still has the old
(writable) PTE cached in its TLB while another core already sees the
updated (executable) PTE, which would break the W^X protection. The
resulting ordering is sketched below.
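
The resulting ordering is roughly (a sketch using the existing
set_memory_*() helpers):

	/* pages are allocated RW (non-executable) */
	set_memory_ro(addr, numpages);  /* 1) drop write permission        */
	set_memory_x(addr, numpages);   /* 2) only then allow execution,
	                                 *    so no PTE is ever W+X        */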

Cc: Kees Cook <keescook@chromium.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Jessica Yu <jeyu@kernel.org>
Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Suggested-by: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Nadav Amit <namit@vmware.com>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---
 arch/x86/kernel/alternative.c | 28 +++++++++++++++++++++-------
 arch/x86/kernel/module.c      |  2 +-
 include/linux/filter.h        |  1 +
 kernel/module.c               |  5 +++++
 4 files changed, 28 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index cfe5bfe06f9d..b75bfeda021e 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -667,15 +667,29 @@ void __init alternative_instructions(void)
  * handlers seeing an inconsistent instruction while you patch.
  */
 void *__init_or_module text_poke_early(void *addr, const void *opcode,
-					      size_t len)
+				       size_t len)
 {
 	unsigned long flags;
-	local_irq_save(flags);
-	memcpy(addr, opcode, len);
-	local_irq_restore(flags);
-	sync_core();
-	/* Could also do a CLFLUSH here to speed up CPU recovery; but
-	   that causes hangs on some VIA CPUs. */
+
+	if (boot_cpu_has(X86_FEATURE_NX) &&
+	    is_module_text_address((unsigned long)addr)) {
+		/*
+		 * Modules text is marked initially as non-executable, so the
+		 * code cannot be running and speculative code-fetches are
+		 * prevented. Just change the code.
+		 */
+		memcpy(addr, opcode, len);
+	} else {
+		local_irq_save(flags);
+		memcpy(addr, opcode, len);
+		local_irq_restore(flags);
+		sync_core();
+
+		/*
+		 * Could also do a CLFLUSH here to speed up CPU recovery; but
+		 * that causes hangs on some VIA CPUs.
+		 */
+	}
 	return addr;
 }
 
diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
index b052e883dd8c..cfa3106faee4 100644
--- a/arch/x86/kernel/module.c
+++ b/arch/x86/kernel/module.c
@@ -87,7 +87,7 @@ void *module_alloc(unsigned long size)
 	p = __vmalloc_node_range(size, MODULE_ALIGN,
 				    MODULES_VADDR + get_module_load_offset(),
 				    MODULES_END, GFP_KERNEL,
-				    PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
+				    PAGE_KERNEL, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 	if (p && (kasan_module_alloc(p, size) < 0)) {
 		vfree(p);
diff --git a/include/linux/filter.h b/include/linux/filter.h
index d531d4250bff..b9f93e62db96 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -694,6 +694,7 @@ static inline void bpf_prog_unlock_ro(struct bpf_prog *fp)
 static inline void bpf_jit_binary_lock_ro(struct bpf_binary_header *hdr)
 {
 	set_memory_ro((unsigned long)hdr, hdr->pages);
+	set_memory_x((unsigned long)hdr, hdr->pages);
 }
 
 static inline void bpf_jit_binary_unlock_ro(struct bpf_binary_header *hdr)
diff --git a/kernel/module.c b/kernel/module.c
index 2ad1b5239910..ae1b77da6a20 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -1950,8 +1950,13 @@ void module_enable_ro(const struct module *mod, bool after_init)
 		return;
 
 	frob_text(&mod->core_layout, set_memory_ro);
+	frob_text(&mod->core_layout, set_memory_x);
+
 	frob_rodata(&mod->core_layout, set_memory_ro);
+
 	frob_text(&mod->init_layout, set_memory_ro);
+	frob_text(&mod->init_layout, set_memory_x);
+
 	frob_rodata(&mod->init_layout, set_memory_ro);
 
 	if (after_init)
-- 
2.17.1



* [PATCH v3 11/20] x86/jump-label: Remove support for custom poker
  2019-02-21 23:44 [PATCH v3 00/20] Merge text_poke fixes and executable lockdowns Rick Edgecombe
                   ` (9 preceding siblings ...)
  2019-02-21 23:44 ` [PATCH v3 10/20] x86/module: Avoid breaking W^X while loading modules Rick Edgecombe
@ 2019-02-21 23:44 ` Rick Edgecombe
  2019-02-21 23:44 ` [PATCH v3 12/20] x86/alternative: Remove the return value of text_poke_*() Rick Edgecombe
                   ` (9 subsequent siblings)
  20 siblings, 0 replies; 27+ messages in thread
From: Rick Edgecombe @ 2019-02-21 23:44 UTC (permalink / raw)
  To: Andy Lutomirski, Ingo Molnar
  Cc: linux-kernel, x86, hpa, Thomas Gleixner, Borislav Petkov,
	Nadav Amit, Dave Hansen, Peter Zijlstra, linux_dti,
	linux-integrity, linux-security-module, akpm, kernel-hardening,
	linux-mm, will.deacon, ard.biesheuvel, kristen, deneen.t.dock,
	Nadav Amit, Kees Cook, Dave Hansen, Masami Hiramatsu,
	Rick Edgecombe

From: Nadav Amit <namit@vmware.com>

There are only two types of poking: early poking and breakpoint-based
poking. The use of a function pointer to perform the poking complicates
the code and is probably inefficient due to the use of indirect branches.
Remove the custom poker and call text_poke_early() or text_poke_bp()
directly instead.

Cc: Andy Lutomirski <luto@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---
 arch/x86/kernel/jump_label.c | 26 ++++++++++----------------
 1 file changed, 10 insertions(+), 16 deletions(-)

diff --git a/arch/x86/kernel/jump_label.c b/arch/x86/kernel/jump_label.c
index e7d8c636b228..e631c358f7f4 100644
--- a/arch/x86/kernel/jump_label.c
+++ b/arch/x86/kernel/jump_label.c
@@ -37,7 +37,6 @@ static void bug_at(unsigned char *ip, int line)
 
 static void __ref __jump_label_transform(struct jump_entry *entry,
 					 enum jump_label_type type,
-					 void *(*poker)(void *, const void *, size_t),
 					 int init)
 {
 	union jump_code_union jmp;
@@ -50,14 +49,6 @@ static void __ref __jump_label_transform(struct jump_entry *entry,
 	jmp.offset = jump_entry_target(entry) -
 		     (jump_entry_code(entry) + JUMP_LABEL_NOP_SIZE);
 
-	/*
-	 * As long as only a single processor is running and the code is still
-	 * not marked as RO, text_poke_early() can be used; Checking that
-	 * system_state is SYSTEM_BOOTING guarantees it.
-	 */
-	if (system_state == SYSTEM_BOOTING)
-		poker = text_poke_early;
-
 	if (type == JUMP_LABEL_JMP) {
 		if (init) {
 			expect = default_nop; line = __LINE__;
@@ -80,16 +71,19 @@ static void __ref __jump_label_transform(struct jump_entry *entry,
 		bug_at((void *)jump_entry_code(entry), line);
 
 	/*
-	 * Make text_poke_bp() a default fallback poker.
+	 * As long as only a single processor is running and the code is still
+	 * not marked as RO, text_poke_early() can be used; Checking that
+	 * system_state is SYSTEM_BOOTING guarantees it. It will be set to
+	 * SYSTEM_SCHEDULING before other cores are awaken and before the
+	 * code is write-protected.
 	 *
 	 * At the time the change is being done, just ignore whether we
 	 * are doing nop -> jump or jump -> nop transition, and assume
 	 * always nop being the 'currently valid' instruction
-	 *
 	 */
-	if (poker) {
-		(*poker)((void *)jump_entry_code(entry), code,
-			 JUMP_LABEL_NOP_SIZE);
+	if (init || system_state == SYSTEM_BOOTING) {
+		text_poke_early((void *)jump_entry_code(entry), code,
+				JUMP_LABEL_NOP_SIZE);
 		return;
 	}
 
@@ -101,7 +95,7 @@ void arch_jump_label_transform(struct jump_entry *entry,
 			       enum jump_label_type type)
 {
 	mutex_lock(&text_mutex);
-	__jump_label_transform(entry, type, NULL, 0);
+	__jump_label_transform(entry, type, 0);
 	mutex_unlock(&text_mutex);
 }
 
@@ -131,5 +125,5 @@ __init_or_module void arch_jump_label_transform_static(struct jump_entry *entry,
 			jlstate = JL_STATE_NO_UPDATE;
 	}
 	if (jlstate == JL_STATE_UPDATE)
-		__jump_label_transform(entry, type, text_poke_early, 1);
+		__jump_label_transform(entry, type, 1);
 }
-- 
2.17.1



* [PATCH v3 12/20] x86/alternative: Remove the return value of text_poke_*()
  2019-02-21 23:44 [PATCH v3 00/20] Merge text_poke fixes and executable lockdowns Rick Edgecombe
                   ` (10 preceding siblings ...)
  2019-02-21 23:44 ` [PATCH v3 11/20] x86/jump-label: Remove support for custom poker Rick Edgecombe
@ 2019-02-21 23:44 ` Rick Edgecombe
  2019-02-21 23:44 ` [PATCH v3 13/20] x86/mm/cpa: Add set_direct_map_ functions Rick Edgecombe
                   ` (8 subsequent siblings)
  20 siblings, 0 replies; 27+ messages in thread
From: Rick Edgecombe @ 2019-02-21 23:44 UTC (permalink / raw)
  To: Andy Lutomirski, Ingo Molnar
  Cc: linux-kernel, x86, hpa, Thomas Gleixner, Borislav Petkov,
	Nadav Amit, Dave Hansen, Peter Zijlstra, linux_dti,
	linux-integrity, linux-security-module, akpm, kernel-hardening,
	linux-mm, will.deacon, ard.biesheuvel, kristen, deneen.t.dock,
	Nadav Amit, Kees Cook, Dave Hansen, Masami Hiramatsu,
	Rick Edgecombe

From: Nadav Amit <namit@vmware.com>

The return value of text_poke_early() and text_poke_bp() is useless.
Remove it.

Cc: Andy Lutomirski <luto@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---
 arch/x86/include/asm/text-patching.h |  4 ++--
 arch/x86/kernel/alternative.c        | 11 ++++-------
 2 files changed, 6 insertions(+), 9 deletions(-)

diff --git a/arch/x86/include/asm/text-patching.h b/arch/x86/include/asm/text-patching.h
index a75eed841eed..c90678fd391a 100644
--- a/arch/x86/include/asm/text-patching.h
+++ b/arch/x86/include/asm/text-patching.h
@@ -18,7 +18,7 @@ static inline void apply_paravirt(struct paravirt_patch_site *start,
 #define __parainstructions_end	NULL
 #endif
 
-extern void *text_poke_early(void *addr, const void *opcode, size_t len);
+extern void text_poke_early(void *addr, const void *opcode, size_t len);
 
 /*
  * Clear and restore the kernel write-protection flag on the local CPU.
@@ -37,7 +37,7 @@ extern void *text_poke_early(void *addr, const void *opcode, size_t len);
 extern void *text_poke(void *addr, const void *opcode, size_t len);
 extern void *text_poke_kgdb(void *addr, const void *opcode, size_t len);
 extern int poke_int3_handler(struct pt_regs *regs);
-extern void *text_poke_bp(void *addr, const void *opcode, size_t len, void *handler);
+extern void text_poke_bp(void *addr, const void *opcode, size_t len, void *handler);
 extern int after_bootmem;
 extern __ro_after_init struct mm_struct *poking_mm;
 extern __ro_after_init unsigned long poking_addr;
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index b75bfeda021e..c63707e7ed3d 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -264,7 +264,7 @@ static void __init_or_module add_nops(void *insns, unsigned int len)
 
 extern struct alt_instr __alt_instructions[], __alt_instructions_end[];
 extern s32 __smp_locks[], __smp_locks_end[];
-void *text_poke_early(void *addr, const void *opcode, size_t len);
+void text_poke_early(void *addr, const void *opcode, size_t len);
 
 /*
  * Are we looking at a near JMP with a 1 or 4-byte displacement.
@@ -666,8 +666,8 @@ void __init alternative_instructions(void)
  * instructions. And on the local CPU you need to be protected again NMI or MCE
  * handlers seeing an inconsistent instruction while you patch.
  */
-void *__init_or_module text_poke_early(void *addr, const void *opcode,
-				       size_t len)
+void __init_or_module text_poke_early(void *addr, const void *opcode,
+				      size_t len)
 {
 	unsigned long flags;
 
@@ -690,7 +690,6 @@ void *__init_or_module text_poke_early(void *addr, const void *opcode,
 		 * that causes hangs on some VIA CPUs.
 		 */
 	}
-	return addr;
 }
 
 __ro_after_init struct mm_struct *poking_mm;
@@ -892,7 +891,7 @@ int poke_int3_handler(struct pt_regs *regs)
  *	  replacing opcode
  *	- sync cores
  */
-void *text_poke_bp(void *addr, const void *opcode, size_t len, void *handler)
+void text_poke_bp(void *addr, const void *opcode, size_t len, void *handler)
 {
 	unsigned char int3 = 0xcc;
 
@@ -934,7 +933,5 @@ void *text_poke_bp(void *addr, const void *opcode, size_t len, void *handler)
 	 * the writing of the new instruction.
 	 */
 	bp_patching_in_progress = false;
-
-	return addr;
 }
 
-- 
2.17.1



* [PATCH v3 13/20] x86/mm/cpa: Add set_direct_map_ functions
  2019-02-21 23:44 [PATCH v3 00/20] Merge text_poke fixes and executable lockdowns Rick Edgecombe
                   ` (11 preceding siblings ...)
  2019-02-21 23:44 ` [PATCH v3 12/20] x86/alternative: Remove the return value of text_poke_*() Rick Edgecombe
@ 2019-02-21 23:44 ` Rick Edgecombe
  2019-02-21 23:44 ` [PATCH v3 14/20] mm: Make hibernate handle unmapped pages Rick Edgecombe
                   ` (7 subsequent siblings)
  20 siblings, 0 replies; 27+ messages in thread
From: Rick Edgecombe @ 2019-02-21 23:44 UTC (permalink / raw)
  To: Andy Lutomirski, Ingo Molnar
  Cc: linux-kernel, x86, hpa, Thomas Gleixner, Borislav Petkov,
	Nadav Amit, Dave Hansen, Peter Zijlstra, linux_dti,
	linux-integrity, linux-security-module, akpm, kernel-hardening,
	linux-mm, will.deacon, ard.biesheuvel, kristen, deneen.t.dock,
	Rick Edgecombe

Add two new functions set_direct_map_default_noflush() and
set_direct_map_invalid_noflush() for setting the direct map alias for the
page to its default valid permissions and to an invalid state that cannot
be cached in a TLB, respectively. These functions do not flush the TLB.

Note, __kernel_map_pages() does something similar but flushes the TLB and
doesn't reset the permission bits to default on all architectures.

Also add an ARCH config ARCH_HAS_SET_DIRECT_MAP for specifying whether
these have an actual implementation or a default empty one.
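
As a rough illustration of the intended usage (a sketch, not part of this
patch; flush_tlb_kernel_range() is just the generic TLB flush helper), a
caller that wants to temporarily remove a page from the direct map could do
something like:

	unsigned long daddr = (unsigned long)page_address(page);

	/* Make the direct map alias invalid so it can't be cached in a TLB */
	set_direct_map_invalid_noflush(page);
	flush_tlb_kernel_range(daddr, daddr + PAGE_SIZE);

	/* ... the page has no valid direct map alias here ... */

	/*
	 * Restore the default permissions. No flush is needed, since an
	 * invalid entry cannot have been cached in the TLB.
	 */
	set_direct_map_default_noflush(page);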

Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---
 arch/Kconfig                      |  4 ++++
 arch/x86/Kconfig                  |  1 +
 arch/x86/include/asm/set_memory.h |  3 +++
 arch/x86/mm/pageattr.c            | 14 +++++++++++---
 include/linux/set_memory.h        | 10 ++++++++++
 5 files changed, 29 insertions(+), 3 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index 4cfb6de48f79..79a9ec371964 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -249,6 +249,10 @@ config ARCH_HAS_FORTIFY_SOURCE
 config ARCH_HAS_SET_MEMORY
 	bool
 
+# Select if arch has all set_direct_map_invalid/default() functions
+config ARCH_HAS_SET_DIRECT_MAP
+	bool
+
 # Select if arch init_task must go in the __init_task_data section
 config ARCH_TASK_STRUCT_ON_STACK
        bool
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 26387c7bf305..291c6566cf88 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -66,6 +66,7 @@ config X86
 	select ARCH_HAS_UACCESS_FLUSHCACHE	if X86_64
 	select ARCH_HAS_UACCESS_MCSAFE		if X86_64 && X86_MCE
 	select ARCH_HAS_SET_MEMORY
+	select ARCH_HAS_SET_DIRECT_MAP
 	select ARCH_HAS_STRICT_KERNEL_RWX
 	select ARCH_HAS_STRICT_MODULE_RWX
 	select ARCH_HAS_SYNC_CORE_BEFORE_USERMODE
diff --git a/arch/x86/include/asm/set_memory.h b/arch/x86/include/asm/set_memory.h
index 07a25753e85c..ae7b909dc242 100644
--- a/arch/x86/include/asm/set_memory.h
+++ b/arch/x86/include/asm/set_memory.h
@@ -85,6 +85,9 @@ int set_pages_nx(struct page *page, int numpages);
 int set_pages_ro(struct page *page, int numpages);
 int set_pages_rw(struct page *page, int numpages);
 
+int set_direct_map_invalid_noflush(struct page *page);
+int set_direct_map_default_noflush(struct page *page);
+
 extern int kernel_set_to_readonly;
 void set_kernel_text_rw(void);
 void set_kernel_text_ro(void);
diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index 4f8972311a77..fff9c91ad177 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -2209,8 +2209,6 @@ int set_pages_rw(struct page *page, int numpages)
 	return set_memory_rw(addr, numpages);
 }
 
-#ifdef CONFIG_DEBUG_PAGEALLOC
-
 static int __set_pages_p(struct page *page, int numpages)
 {
 	unsigned long tempaddr = (unsigned long) page_address(page);
@@ -2249,6 +2247,17 @@ static int __set_pages_np(struct page *page, int numpages)
 	return __change_page_attr_set_clr(&cpa, 0);
 }
 
+int set_direct_map_invalid_noflush(struct page *page)
+{
+	return __set_pages_np(page, 1);
+}
+
+int set_direct_map_default_noflush(struct page *page)
+{
+	return __set_pages_p(page, 1);
+}
+
+#ifdef CONFIG_DEBUG_PAGEALLOC
 void __kernel_map_pages(struct page *page, int numpages, int enable)
 {
 	if (PageHighMem(page))
@@ -2282,7 +2291,6 @@ void __kernel_map_pages(struct page *page, int numpages, int enable)
 }
 
 #ifdef CONFIG_HIBERNATION
-
 bool kernel_page_present(struct page *page)
 {
 	unsigned int level;
diff --git a/include/linux/set_memory.h b/include/linux/set_memory.h
index 2a986d282a97..82477e934b1a 100644
--- a/include/linux/set_memory.h
+++ b/include/linux/set_memory.h
@@ -10,6 +10,16 @@
 
 #ifdef CONFIG_ARCH_HAS_SET_MEMORY
 #include <asm/set_memory.h>
+#ifndef CONFIG_ARCH_HAS_SET_DIRECT_MAP
+static inline int set_direct_map_invalid_noflush(struct page *page)
+{
+	return 0;
+}
+static inline int set_direct_map_default_noflush(struct page *page)
+{
+	return 0;
+}
+#endif
 #else
 static inline int set_memory_ro(unsigned long addr, int numpages) { return 0; }
 static inline int set_memory_rw(unsigned long addr, int numpages) { return 0; }
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH v3 14/20] mm: Make hibernate handle unmapped pages
  2019-02-21 23:44 [PATCH v3 00/20] Merge text_poke fixes and executable lockdowns Rick Edgecombe
                   ` (12 preceding siblings ...)
  2019-02-21 23:44 ` [PATCH v3 13/20] x86/mm/cpa: Add set_direct_map_ functions Rick Edgecombe
@ 2019-02-21 23:44 ` Rick Edgecombe
  2019-02-21 23:44 ` [PATCH v3 15/20] vmalloc: Add flag for freeing of special permissions Rick Edgecombe
                   ` (6 subsequent siblings)
  20 siblings, 0 replies; 27+ messages in thread
From: Rick Edgecombe @ 2019-02-21 23:44 UTC (permalink / raw)
  To: Andy Lutomirski, Ingo Molnar
  Cc: linux-kernel, x86, hpa, Thomas Gleixner, Borislav Petkov,
	Nadav Amit, Dave Hansen, Peter Zijlstra, linux_dti,
	linux-integrity, linux-security-module, akpm, kernel-hardening,
	linux-mm, will.deacon, ard.biesheuvel, kristen, deneen.t.dock,
	Rick Edgecombe, Rafael J. Wysocki, Pavel Machek

Make hibernate handle unmapped pages on the direct map when
CONFIG_ARCH_HAS_SET_DIRECT_MAP is set. The set_direct_map_*() functions
allow pages to be left in an invalid (unmapped) state, so hibernate should
now check whether a page has a valid mapping and handle the case where it
is unmapped when doing a hibernate save operation.

Previously this checking was only done when CONFIG_DEBUG_PAGEALLOC was
configured. It does not appear to have a significant impact on hibernate
performance: the speed of the saving operation was measured at 819.02 MB/s
before this change and at 813.32 MB/s after.

Before:
[    4.670938] PM: Wrote 171996 kbytes in 0.21 seconds (819.02 MB/s)

After:
[    4.504714] PM: Wrote 178932 kbytes in 0.22 seconds (813.32 MB/s)
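
For reference, the check in question is in kernel/power/snapshot.c and
roughly looks like the below (paraphrased from the existing code; the
function body is not changed by this patch):

	static void safe_copy_page(void *dst, struct page *s_page)
	{
		if (kernel_page_present(s_page)) {
			do_copy_page(dst, page_address(s_page));
		} else {
			/* Map the page, copy it, and unmap it again */
			kernel_map_pages(s_page, 1, 1);
			do_copy_page(dst, page_address(s_page));
			kernel_map_pages(s_page, 1, 0);
		}
	}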

Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: Pavel Machek <pavel@ucw.cz>
Cc: Borislav Petkov <bp@alien8.de>
Acked-by: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---
 arch/x86/mm/pageattr.c  |  4 ----
 include/linux/mm.h      | 18 ++++++------------
 kernel/power/snapshot.c |  5 +++--
 mm/page_alloc.c         |  7 +++++--
 4 files changed, 14 insertions(+), 20 deletions(-)

diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index fff9c91ad177..1cffee05f987 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -2257,7 +2257,6 @@ int set_direct_map_default_noflush(struct page *page)
 	return __set_pages_p(page, 1);
 }
 
-#ifdef CONFIG_DEBUG_PAGEALLOC
 void __kernel_map_pages(struct page *page, int numpages, int enable)
 {
 	if (PageHighMem(page))
@@ -2302,11 +2301,8 @@ bool kernel_page_present(struct page *page)
 	pte = lookup_address((unsigned long)page_address(page), &level);
 	return (pte_val(*pte) & _PAGE_PRESENT);
 }
-
 #endif /* CONFIG_HIBERNATION */
 
-#endif /* CONFIG_DEBUG_PAGEALLOC */
-
 int __init kernel_map_pages_in_pgd(pgd_t *pgd, u64 pfn, unsigned long address,
 				   unsigned numpages, unsigned long page_flags)
 {
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 80bb6408fe73..5748e9ce133e 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2642,37 +2642,31 @@ static inline void kernel_poison_pages(struct page *page, int numpages,
 					int enable) { }
 #endif
 
-#ifdef CONFIG_DEBUG_PAGEALLOC
 extern bool _debug_pagealloc_enabled;
-extern void __kernel_map_pages(struct page *page, int numpages, int enable);
 
 static inline bool debug_pagealloc_enabled(void)
 {
-	return _debug_pagealloc_enabled;
+	return IS_ENABLED(CONFIG_DEBUG_PAGEALLOC) && _debug_pagealloc_enabled;
 }
 
+#if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_ARCH_HAS_SET_DIRECT_MAP)
+extern void __kernel_map_pages(struct page *page, int numpages, int enable);
+
 static inline void
 kernel_map_pages(struct page *page, int numpages, int enable)
 {
-	if (!debug_pagealloc_enabled())
-		return;
-
 	__kernel_map_pages(page, numpages, enable);
 }
 #ifdef CONFIG_HIBERNATION
 extern bool kernel_page_present(struct page *page);
 #endif	/* CONFIG_HIBERNATION */
-#else	/* CONFIG_DEBUG_PAGEALLOC */
+#else	/* CONFIG_DEBUG_PAGEALLOC || CONFIG_ARCH_HAS_SET_DIRECT_MAP */
 static inline void
 kernel_map_pages(struct page *page, int numpages, int enable) {}
 #ifdef CONFIG_HIBERNATION
 static inline bool kernel_page_present(struct page *page) { return true; }
 #endif	/* CONFIG_HIBERNATION */
-static inline bool debug_pagealloc_enabled(void)
-{
-	return false;
-}
-#endif	/* CONFIG_DEBUG_PAGEALLOC */
+#endif	/* CONFIG_DEBUG_PAGEALLOC || CONFIG_ARCH_HAS_SET_DIRECT_MAP */
 
 #ifdef __HAVE_ARCH_GATE_AREA
 extern struct vm_area_struct *get_gate_vma(struct mm_struct *mm);
diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
index 640b2034edd6..f69b2920d4f4 100644
--- a/kernel/power/snapshot.c
+++ b/kernel/power/snapshot.c
@@ -1334,8 +1334,9 @@ static inline void do_copy_page(long *dst, long *src)
  * safe_copy_page - Copy a page in a safe way.
  *
  * Check if the page we are going to copy is marked as present in the kernel
- * page tables (this always is the case if CONFIG_DEBUG_PAGEALLOC is not set
- * and in that case kernel_page_present() always returns 'true').
+ * page tables. This always is the case if CONFIG_DEBUG_PAGEALLOC or
+ * CONFIG_ARCH_HAS_SET_DIRECT_MAP is not set. In that case kernel_page_present()
+ * always returns 'true'.
  */
 static void safe_copy_page(void *dst, struct page *s_page)
 {
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d295c9bc01a8..92d0a0934274 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1074,7 +1074,9 @@ static __always_inline bool free_pages_prepare(struct page *page,
 	}
 	arch_free_page(page, order);
 	kernel_poison_pages(page, 1 << order, 0);
-	kernel_map_pages(page, 1 << order, 0);
+	if (debug_pagealloc_enabled())
+		kernel_map_pages(page, 1 << order, 0);
+
 	kasan_free_nondeferred_pages(page, order);
 
 	return true;
@@ -1944,7 +1946,8 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
 	set_page_refcounted(page);
 
 	arch_alloc_page(page, order);
-	kernel_map_pages(page, 1 << order, 1);
+	if (debug_pagealloc_enabled())
+		kernel_map_pages(page, 1 << order, 1);
 	kernel_poison_pages(page, 1 << order, 1);
 	kasan_alloc_pages(page, order);
 	set_page_owner(page, order, gfp_flags);
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH v3 15/20] vmalloc: Add flag for freeing of special permissions
  2019-02-21 23:44 [PATCH v3 00/20] Merge text_poke fixes and executable lockdowns Rick Edgecombe
                   ` (13 preceding siblings ...)
  2019-02-21 23:44 ` [PATCH v3 14/20] mm: Make hibernate handle unmapped pages Rick Edgecombe
@ 2019-02-21 23:44 ` Rick Edgecombe
  2019-02-21 23:44 ` [PATCH v3 16/20] modules: Use vmalloc special flag Rick Edgecombe
                   ` (5 subsequent siblings)
  20 siblings, 0 replies; 27+ messages in thread
From: Rick Edgecombe @ 2019-02-21 23:44 UTC (permalink / raw)
  To: Andy Lutomirski, Ingo Molnar
  Cc: linux-kernel, x86, hpa, Thomas Gleixner, Borislav Petkov,
	Nadav Amit, Dave Hansen, Peter Zijlstra, linux_dti,
	linux-integrity, linux-security-module, akpm, kernel-hardening,
	linux-mm, will.deacon, ard.biesheuvel, kristen, deneen.t.dock,
	Rick Edgecombe

Add a new flag, VM_FLUSH_RESET_PERMS, that makes vfree operations
immediately clear executable TLB entries before freeing pages, and handle
resetting permissions on the direct map. This flag is useful for any kind
of memory with elevated permissions, or where there can be related
permission changes on the direct map. Today this is RO+X and RO memory.

Although this now enables vfreeing non-writable memory directly, such
memory still cannot be freed from an interrupt, because the allocation
itself is used as a node on the deferred free list. So when RO memory needs
to be freed from an interrupt, the code doing the vfree needs its own work
queue, as was the case before the deferred vfree list was added to
vmalloc.

For architectures with set_direct_map_*() implementations, this whole
operation can be done with one TLB flush when centralized like this. For
other architectures that set direct map permissions, currently only arm64,
a backup method using the set_memory_*() functions is used to reset the
direct map. When arm64 adds set_direct_map_*() functions, this backup can
be removed.

When the TLB is flushed to both remove TLB entries for the vmalloc range
mapping and for the direct map permission change, the lazy purge operation
can be done at the same time to try to save a TLB flush later. However,
today vm_unmap_aliases() could flush a TLB range that does not include the
direct map. So a helper is added with extra parameters that allow both the
vmalloc address range and the direct mapping to be flushed during this
operation. The behavior of the normal vm_unmap_aliases() function is
unchanged.
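
A minimal sketch of how a user of the new flag is expected to look (names
and sizes here are illustrative only):

	void *buf = __vmalloc(size, GFP_KERNEL, PAGE_KERNEL);
	unsigned int npages = PAGE_ALIGN(size) >> PAGE_SHIFT;

	set_vm_flush_reset_perms(buf);
	set_memory_ro((unsigned long)buf, npages);
	set_memory_x((unsigned long)buf, npages);

	/* ... use the executable mapping ... */

	vfree(buf);	/* TLB flushed and direct map reset before the pages are freed */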

Cc: Borislav Petkov <bp@alien8.de>
Suggested-by: Dave Hansen <dave.hansen@intel.com>
Suggested-by: Andy Lutomirski <luto@kernel.org>
Suggested-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---
 include/linux/vmalloc.h |  13 +++++
 mm/vmalloc.c            | 113 +++++++++++++++++++++++++++++++++-------
 2 files changed, 107 insertions(+), 19 deletions(-)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 398e9c95cd61..345bb9d2f578 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -21,6 +21,11 @@ struct notifier_block;		/* in notifier.h */
 #define VM_UNINITIALIZED	0x00000020	/* vm_struct is not fully initialized */
 #define VM_NO_GUARD		0x00000040      /* don't add guard page */
 #define VM_KASAN		0x00000080      /* has allocated kasan shadow memory */
+/*
+ * Memory with VM_FLUSH_RESET_PERMS cannot be freed in an interrupt or with
+ * vfree_atomic().
+ */
+#define VM_FLUSH_RESET_PERMS	0x00000100      /* Reset direct map and flush TLB on unmap */
 /* bits [20..32] reserved for arch specific ioremap internals */
 
 /*
@@ -135,6 +140,14 @@ extern struct vm_struct *__get_vm_area_caller(unsigned long size,
 extern struct vm_struct *remove_vm_area(const void *addr);
 extern struct vm_struct *find_vm_area(const void *addr);
 
+static inline void set_vm_flush_reset_perms(void *addr)
+{
+	struct vm_struct *vm = find_vm_area(addr);
+
+	if (vm)
+		vm->flags |= VM_FLUSH_RESET_PERMS;
+}
+
 extern int map_vm_area(struct vm_struct *area, pgprot_t prot,
 			struct page **pages);
 #ifdef CONFIG_MMU
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 871e41c55e23..0e341581832e 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -18,6 +18,7 @@
 #include <linux/interrupt.h>
 #include <linux/proc_fs.h>
 #include <linux/seq_file.h>
+#include <linux/set_memory.h>
 #include <linux/debugobjects.h>
 #include <linux/kallsyms.h>
 #include <linux/list.h>
@@ -1055,24 +1056,9 @@ static void vb_free(const void *addr, unsigned long size)
 		spin_unlock(&vb->lock);
 }
 
-/**
- * vm_unmap_aliases - unmap outstanding lazy aliases in the vmap layer
- *
- * The vmap/vmalloc layer lazily flushes kernel virtual mappings primarily
- * to amortize TLB flushing overheads. What this means is that any page you
- * have now, may, in a former life, have been mapped into kernel virtual
- * address by the vmap layer and so there might be some CPUs with TLB entries
- * still referencing that page (additional to the regular 1:1 kernel mapping).
- *
- * vm_unmap_aliases flushes all such lazy mappings. After it returns, we can
- * be sure that none of the pages we have control over will have any aliases
- * from the vmap layer.
- */
-void vm_unmap_aliases(void)
+static void _vm_unmap_aliases(unsigned long start, unsigned long end, int flush)
 {
-	unsigned long start = ULONG_MAX, end = 0;
 	int cpu;
-	int flush = 0;
 
 	if (unlikely(!vmap_initialized))
 		return;
@@ -1109,6 +1095,27 @@ void vm_unmap_aliases(void)
 		flush_tlb_kernel_range(start, end);
 	mutex_unlock(&vmap_purge_lock);
 }
+
+/**
+ * vm_unmap_aliases - unmap outstanding lazy aliases in the vmap layer
+ *
+ * The vmap/vmalloc layer lazily flushes kernel virtual mappings primarily
+ * to amortize TLB flushing overheads. What this means is that any page you
+ * have now, may, in a former life, have been mapped into kernel virtual
+ * address by the vmap layer and so there might be some CPUs with TLB entries
+ * still referencing that page (additional to the regular 1:1 kernel mapping).
+ *
+ * vm_unmap_aliases flushes all such lazy mappings. After it returns, we can
+ * be sure that none of the pages we have control over will have any aliases
+ * from the vmap layer.
+ */
+void vm_unmap_aliases(void)
+{
+	unsigned long start = ULONG_MAX, end = 0;
+	int flush = 0;
+
+	_vm_unmap_aliases(start, end, flush);
+}
 EXPORT_SYMBOL_GPL(vm_unmap_aliases);
 
 /**
@@ -1494,6 +1501,72 @@ struct vm_struct *remove_vm_area(const void *addr)
 	return NULL;
 }
 
+static inline void set_area_direct_map(const struct vm_struct *area,
+				       int (*set_direct_map)(struct page *page))
+{
+	int i;
+
+	for (i = 0; i < area->nr_pages; i++)
+		if (page_address(area->pages[i]))
+			set_direct_map(area->pages[i]);
+}
+
+/* Handle removing and resetting vm mappings related to the vm_struct. */
+static void vm_remove_mappings(struct vm_struct *area, int deallocate_pages)
+{
+	unsigned long addr = (unsigned long)area->addr;
+	unsigned long start = ULONG_MAX, end = 0;
+	int flush_reset = area->flags & VM_FLUSH_RESET_PERMS;
+	int i;
+
+	/*
+	 * The below block can be removed when all architectures that have
+	 * direct map permissions also have set_direct_map_() implementations.
+	 * This is concerned with resetting the direct map for any vm alias with
+	 * execute permissions, without leaving a RW+X window.
+	 */
+	if (flush_reset && !IS_ENABLED(CONFIG_ARCH_HAS_SET_DIRECT_MAP)) {
+		set_memory_nx(addr, area->nr_pages);
+		set_memory_rw(addr, area->nr_pages);
+	}
+
+	remove_vm_area(area->addr);
+
+	/* If this is not VM_FLUSH_RESET_PERMS memory, no need for the below. */
+	if (!flush_reset)
+		return;
+
+	/*
+	 * If not deallocating pages, just do the flush of the VM area and
+	 * return.
+	 */
+	if (!deallocate_pages) {
+		vm_unmap_aliases();
+		return;
+	}
+
+	/*
+	 * If execution gets here, flush the vm mapping and reset the direct
+	 * map. Find the start and end range of the direct mappings to make sure
+	 * the vm_unmap_aliases() flush includes the direct map.
+	 */
+	for (i = 0; i < area->nr_pages; i++) {
+		if (page_address(area->pages[i])) {
+			start = min(addr, start);
+			end = max(addr, end);
+		}
+	}
+
+	/*
+	 * Set direct map to something invalid so that it won't be cached if
+	 * there are any accesses after the TLB flush, then flush the TLB and
+	 * reset the direct map permissions to the default.
+	 */
+	set_area_direct_map(area, set_direct_map_invalid_noflush);
+	_vm_unmap_aliases(start, end, 1);
+	set_area_direct_map(area, set_direct_map_default_noflush);
+}
+
 static void __vunmap(const void *addr, int deallocate_pages)
 {
 	struct vm_struct *area;
@@ -1515,7 +1588,8 @@ static void __vunmap(const void *addr, int deallocate_pages)
 	debug_check_no_locks_freed(area->addr, get_vm_area_size(area));
 	debug_check_no_obj_freed(area->addr, get_vm_area_size(area));
 
-	remove_vm_area(addr);
+	vm_remove_mappings(area, deallocate_pages);
+
 	if (deallocate_pages) {
 		int i;
 
@@ -1925,8 +1999,9 @@ EXPORT_SYMBOL(vzalloc_node);
 
 void *vmalloc_exec(unsigned long size)
 {
-	return __vmalloc_node(size, 1, GFP_KERNEL, PAGE_KERNEL_EXEC,
-			      NUMA_NO_NODE, __builtin_return_address(0));
+	return __vmalloc_node_range(size, 1, VMALLOC_START, VMALLOC_END,
+			GFP_KERNEL, PAGE_KERNEL_EXEC, VM_FLUSH_RESET_PERMS,
+			NUMA_NO_NODE, __builtin_return_address(0));
 }
 
 #if defined(CONFIG_64BIT) && defined(CONFIG_ZONE_DMA32)
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH v3 16/20] modules: Use vmalloc special flag
  2019-02-21 23:44 [PATCH v3 00/20] Merge text_poke fixes and executable lockdowns Rick Edgecombe
                   ` (14 preceding siblings ...)
  2019-02-21 23:44 ` [PATCH v3 15/20] vmalloc: Add flag for freeing of special permissions Rick Edgecombe
@ 2019-02-21 23:44 ` Rick Edgecombe
  2019-02-21 23:44 ` [PATCH v3 17/20] bpf: " Rick Edgecombe
                   ` (4 subsequent siblings)
  20 siblings, 0 replies; 27+ messages in thread
From: Rick Edgecombe @ 2019-02-21 23:44 UTC (permalink / raw)
  To: Andy Lutomirski, Ingo Molnar
  Cc: linux-kernel, x86, hpa, Thomas Gleixner, Borislav Petkov,
	Nadav Amit, Dave Hansen, Peter Zijlstra, linux_dti,
	linux-integrity, linux-security-module, akpm, kernel-hardening,
	linux-mm, will.deacon, ard.biesheuvel, kristen, deneen.t.dock,
	Rick Edgecombe, Jessica Yu, Steven Rostedt

Use the new VM_FLUSH_RESET_PERMS flag for handling the freeing of special
permissioned memory in vmalloc, and remove the places where memory was set
RW before freeing, which are no longer needed.

Since vmalloc does not support freeing VM_FLUSH_RESET_PERMS memory from an
interrupt, the freeing of init sections is moved to a work queue. Instead
of call_rcu(), it now uses synchronize_rcu() in the work queue.

Lastly, there is now a WARN_ON in module_memfree() since it should not be
called from an interrupt with special permissioned memory, as is required
for VM_FLUSH_RESET_PERMS.
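
Condensed from the diff below, the new free path for init sections is:

	/* do_init_module(), success path */
	if (llist_add(&freeinit->node, &init_free_list))
		schedule_work(&init_free_wq);

	/* do_free_init(), running from the work queue */
	list = llist_del_all(&init_free_list);
	synchronize_rcu();
	llist_for_each_safe(pos, n, list) {
		initfree = container_of(pos, struct mod_initfree, node);
		module_memfree(initfree->module_init);
		kfree(initfree);
	}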

Cc: Jessica Yu <jeyu@kernel.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---
 kernel/module.c | 77 +++++++++++++++++++++++++------------------------
 1 file changed, 39 insertions(+), 38 deletions(-)

diff --git a/kernel/module.c b/kernel/module.c
index ae1b77da6a20..3b97dfb47afb 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -98,6 +98,10 @@ DEFINE_MUTEX(module_mutex);
 EXPORT_SYMBOL_GPL(module_mutex);
 static LIST_HEAD(modules);
 
+/* Work queue for freeing init sections in success case */
+static struct work_struct init_free_wq;
+static struct llist_head init_free_list;
+
 #ifdef CONFIG_MODULES_TREE_LOOKUP
 
 /*
@@ -1949,6 +1953,8 @@ void module_enable_ro(const struct module *mod, bool after_init)
 	if (!rodata_enabled)
 		return;
 
+	set_vm_flush_reset_perms(mod->core_layout.base);
+	set_vm_flush_reset_perms(mod->init_layout.base);
 	frob_text(&mod->core_layout, set_memory_ro);
 	frob_text(&mod->core_layout, set_memory_x);
 
@@ -1972,15 +1978,6 @@ static void module_enable_nx(const struct module *mod)
 	frob_writable_data(&mod->init_layout, set_memory_nx);
 }
 
-static void module_disable_nx(const struct module *mod)
-{
-	frob_rodata(&mod->core_layout, set_memory_x);
-	frob_ro_after_init(&mod->core_layout, set_memory_x);
-	frob_writable_data(&mod->core_layout, set_memory_x);
-	frob_rodata(&mod->init_layout, set_memory_x);
-	frob_writable_data(&mod->init_layout, set_memory_x);
-}
-
 /* Iterate through all modules and set each module's text as RW */
 void set_all_modules_text_rw(void)
 {
@@ -2024,23 +2021,8 @@ void set_all_modules_text_ro(void)
 	}
 	mutex_unlock(&module_mutex);
 }
-
-static void disable_ro_nx(const struct module_layout *layout)
-{
-	if (rodata_enabled) {
-		frob_text(layout, set_memory_rw);
-		frob_rodata(layout, set_memory_rw);
-		frob_ro_after_init(layout, set_memory_rw);
-	}
-	frob_rodata(layout, set_memory_x);
-	frob_ro_after_init(layout, set_memory_x);
-	frob_writable_data(layout, set_memory_x);
-}
-
 #else
-static void disable_ro_nx(const struct module_layout *layout) { }
 static void module_enable_nx(const struct module *mod) { }
-static void module_disable_nx(const struct module *mod) { }
 #endif
 
 #ifdef CONFIG_LIVEPATCH
@@ -2120,6 +2102,11 @@ static void free_module_elf(struct module *mod)
 
 void __weak module_memfree(void *module_region)
 {
+	/*
+	 * This memory may be RO, and freeing RO memory in an interrupt is not
+	 * supported by vmalloc.
+	 */
+	WARN_ON(in_interrupt());
 	vfree(module_region);
 }
 
@@ -2171,7 +2158,6 @@ static void free_module(struct module *mod)
 	mutex_unlock(&module_mutex);
 
 	/* This may be empty, but that's OK */
-	disable_ro_nx(&mod->init_layout);
 	module_arch_freeing_init(mod);
 	module_memfree(mod->init_layout.base);
 	kfree(mod->args);
@@ -2181,7 +2167,6 @@ static void free_module(struct module *mod)
 	lockdep_free_key_range(mod->core_layout.base, mod->core_layout.size);
 
 	/* Finally, free the core (containing the module structure) */
-	disable_ro_nx(&mod->core_layout);
 	module_memfree(mod->core_layout.base);
 }
 
@@ -3424,17 +3409,34 @@ static void do_mod_ctors(struct module *mod)
 
 /* For freeing module_init on success, in case kallsyms traversing */
 struct mod_initfree {
-	struct rcu_head rcu;
+	struct llist_node node;
 	void *module_init;
 };
 
-static void do_free_init(struct rcu_head *head)
+static void do_free_init(struct work_struct *w)
 {
-	struct mod_initfree *m = container_of(head, struct mod_initfree, rcu);
-	module_memfree(m->module_init);
-	kfree(m);
+	struct llist_node *pos, *n, *list;
+	struct mod_initfree *initfree;
+
+	list = llist_del_all(&init_free_list);
+
+	synchronize_rcu();
+
+	llist_for_each_safe(pos, n, list) {
+		initfree = container_of(pos, struct mod_initfree, node);
+		module_memfree(initfree->module_init);
+		kfree(initfree);
+	}
 }
 
+static int __init modules_wq_init(void)
+{
+	INIT_WORK(&init_free_wq, do_free_init);
+	init_llist_head(&init_free_list);
+	return 0;
+}
+module_init(modules_wq_init);
+
 /*
  * This is where the real work happens.
  *
@@ -3511,7 +3513,6 @@ static noinline int do_init_module(struct module *mod)
 #endif
 	module_enable_ro(mod, true);
 	mod_tree_remove_init(mod);
-	disable_ro_nx(&mod->init_layout);
 	module_arch_freeing_init(mod);
 	mod->init_layout.base = NULL;
 	mod->init_layout.size = 0;
@@ -3522,14 +3523,18 @@ static noinline int do_init_module(struct module *mod)
 	 * We want to free module_init, but be aware that kallsyms may be
 	 * walking this with preempt disabled.  In all the failure paths, we
 	 * call synchronize_rcu(), but we don't want to slow down the success
-	 * path, so use actual RCU here.
+	 * path. module_memfree() cannot be called in an interrupt, so do the
+	 * work and call synchronize_rcu() in a work queue.
+	 *
 	 * Note that module_alloc() on most architectures creates W+X page
 	 * mappings which won't be cleaned up until do_free_init() runs.  Any
 	 * code such as mark_rodata_ro() which depends on those mappings to
 	 * be cleaned up needs to sync with the queued work - ie
 	 * rcu_barrier()
 	 */
-	call_rcu(&freeinit->rcu, do_free_init);
+	if (llist_add(&freeinit->node, &init_free_list))
+		schedule_work(&init_free_wq);
+
 	mutex_unlock(&module_mutex);
 	wake_up_all(&module_wq);
 
@@ -3826,10 +3831,6 @@ static int load_module(struct load_info *info, const char __user *uargs,
 	module_bug_cleanup(mod);
 	mutex_unlock(&module_mutex);
 
-	/* we can't deallocate the module until we clear memory protection */
-	module_disable_ro(mod);
-	module_disable_nx(mod);
-
  ddebug_cleanup:
 	ftrace_release_mod(mod);
 	dynamic_debug_remove(mod, info->debug);
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH v3 17/20] bpf: Use vmalloc special flag
  2019-02-21 23:44 [PATCH v3 00/20] Merge text_poke fixes and executable lockdowns Rick Edgecombe
                   ` (15 preceding siblings ...)
  2019-02-21 23:44 ` [PATCH v3 16/20] modules: Use vmalloc special flag Rick Edgecombe
@ 2019-02-21 23:44 ` Rick Edgecombe
  2019-02-21 23:44 ` [PATCH v3 18/20] x86/ftrace: " Rick Edgecombe
                   ` (3 subsequent siblings)
  20 siblings, 0 replies; 27+ messages in thread
From: Rick Edgecombe @ 2019-02-21 23:44 UTC (permalink / raw)
  To: Andy Lutomirski, Ingo Molnar
  Cc: linux-kernel, x86, hpa, Thomas Gleixner, Borislav Petkov,
	Nadav Amit, Dave Hansen, Peter Zijlstra, linux_dti,
	linux-integrity, linux-security-module, akpm, kernel-hardening,
	linux-mm, will.deacon, ard.biesheuvel, kristen, deneen.t.dock,
	Rick Edgecombe, Daniel Borkmann, Alexei Starovoitov

Use the new flag VM_FLUSH_RESET_PERMS for handling the freeing of special
permissioned memory in vmalloc, and remove the places where memory was set
RW before freeing, which are no longer needed. Don't track whether the
memory is RO anymore because that is now tracked in vmalloc.

Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---
 include/linux/filter.h | 17 +++--------------
 kernel/bpf/core.c      |  1 -
 2 files changed, 3 insertions(+), 15 deletions(-)

diff --git a/include/linux/filter.h b/include/linux/filter.h
index b9f93e62db96..f7b6c8a2e591 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -20,6 +20,7 @@
 #include <linux/set_memory.h>
 #include <linux/kallsyms.h>
 #include <linux/if_vlan.h>
+#include <linux/vmalloc.h>
 
 #include <net/sch_generic.h>
 
@@ -483,7 +484,6 @@ struct bpf_prog {
 	u16			pages;		/* Number of allocated pages */
 	u16			jited:1,	/* Is our filter JIT'ed? */
 				jit_requested:1,/* archs need to JIT the prog */
-				undo_set_mem:1,	/* Passed set_memory_ro() checkpoint */
 				gpl_compatible:1, /* Is filter GPL compatible? */
 				cb_access:1,	/* Is control block accessed? */
 				dst_needed:1,	/* Do we need dst entry? */
@@ -681,27 +681,17 @@ bpf_ctx_narrow_access_ok(u32 off, u32 size, u32 size_default)
 
 static inline void bpf_prog_lock_ro(struct bpf_prog *fp)
 {
-	fp->undo_set_mem = 1;
+	set_vm_flush_reset_perms(fp);
 	set_memory_ro((unsigned long)fp, fp->pages);
 }
 
-static inline void bpf_prog_unlock_ro(struct bpf_prog *fp)
-{
-	if (fp->undo_set_mem)
-		set_memory_rw((unsigned long)fp, fp->pages);
-}
-
 static inline void bpf_jit_binary_lock_ro(struct bpf_binary_header *hdr)
 {
+	set_vm_flush_reset_perms(hdr);
 	set_memory_ro((unsigned long)hdr, hdr->pages);
 	set_memory_x((unsigned long)hdr, hdr->pages);
 }
 
-static inline void bpf_jit_binary_unlock_ro(struct bpf_binary_header *hdr)
-{
-	set_memory_rw((unsigned long)hdr, hdr->pages);
-}
-
 static inline struct bpf_binary_header *
 bpf_jit_binary_hdr(const struct bpf_prog *fp)
 {
@@ -736,7 +726,6 @@ void __bpf_prog_free(struct bpf_prog *fp);
 
 static inline void bpf_prog_unlock_free(struct bpf_prog *fp)
 {
-	bpf_prog_unlock_ro(fp);
 	__bpf_prog_free(fp);
 }
 
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 19c49313c709..465c1c3623e8 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -804,7 +804,6 @@ void __weak bpf_jit_free(struct bpf_prog *fp)
 	if (fp->jited) {
 		struct bpf_binary_header *hdr = bpf_jit_binary_hdr(fp);
 
-		bpf_jit_binary_unlock_ro(hdr);
 		bpf_jit_binary_free(hdr);
 
 		WARN_ON_ONCE(!bpf_prog_kallsyms_verify_off(fp));
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH v3 18/20] x86/ftrace: Use vmalloc special flag
  2019-02-21 23:44 [PATCH v3 00/20] Merge text_poke fixes and executable lockdowns Rick Edgecombe
                   ` (16 preceding siblings ...)
  2019-02-21 23:44 ` [PATCH v3 17/20] bpf: " Rick Edgecombe
@ 2019-02-21 23:44 ` Rick Edgecombe
  2019-02-22  0:22   ` Steven Rostedt
  2019-02-21 23:44 ` [PATCH v3 19/20] x86/kprobes: " Rick Edgecombe
                   ` (2 subsequent siblings)
  20 siblings, 1 reply; 27+ messages in thread
From: Rick Edgecombe @ 2019-02-21 23:44 UTC (permalink / raw)
  To: Andy Lutomirski, Ingo Molnar
  Cc: linux-kernel, x86, hpa, Thomas Gleixner, Borislav Petkov,
	Nadav Amit, Dave Hansen, Peter Zijlstra, linux_dti,
	linux-integrity, linux-security-module, akpm, kernel-hardening,
	linux-mm, will.deacon, ard.biesheuvel, kristen, deneen.t.dock,
	Rick Edgecombe, Steven Rostedt

Use the new flag VM_FLUSH_RESET_PERMS for handling the freeing of special
permissioned memory in vmalloc, and remove the places where memory was set
NX and RW before freeing, which are no longer needed.

Cc: Steven Rostedt <rostedt@goodmis.org>
Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---
 arch/x86/kernel/ftrace.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
index 13c8249b197f..93efe3955333 100644
--- a/arch/x86/kernel/ftrace.c
+++ b/arch/x86/kernel/ftrace.c
@@ -692,10 +692,6 @@ static inline void *alloc_tramp(unsigned long size)
 }
 static inline void tramp_free(void *tramp, int size)
 {
-	int npages = PAGE_ALIGN(size) >> PAGE_SHIFT;
-
-	set_memory_nx((unsigned long)tramp, npages);
-	set_memory_rw((unsigned long)tramp, npages);
 	module_memfree(tramp);
 }
 #else
@@ -820,6 +816,8 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
 	/* ALLOC_TRAMP flags lets us know we created it */
 	ops->flags |= FTRACE_OPS_FL_ALLOC_TRAMP;
 
+	set_vm_flush_reset_perms(trampoline);
+
 	/*
 	 * Module allocation needs to be completed by making the page
 	 * executable. The page is still writable, which is a security hazard,
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH v3 19/20] x86/kprobes: Use vmalloc special flag
  2019-02-21 23:44 [PATCH v3 00/20] Merge text_poke fixes and executable lockdowns Rick Edgecombe
                   ` (17 preceding siblings ...)
  2019-02-21 23:44 ` [PATCH v3 18/20] x86/ftrace: " Rick Edgecombe
@ 2019-02-21 23:44 ` Rick Edgecombe
  2019-02-21 23:44 ` [PATCH v3 20/20] x86/alternative: Comment about module removal races Rick Edgecombe
  2019-02-22 16:14 ` [PATCH v3 00/20] Merge text_poke fixes and executable lockdowns Borislav Petkov
  20 siblings, 0 replies; 27+ messages in thread
From: Rick Edgecombe @ 2019-02-21 23:44 UTC (permalink / raw)
  To: Andy Lutomirski, Ingo Molnar
  Cc: linux-kernel, x86, hpa, Thomas Gleixner, Borislav Petkov,
	Nadav Amit, Dave Hansen, Peter Zijlstra, linux_dti,
	linux-integrity, linux-security-module, akpm, kernel-hardening,
	linux-mm, will.deacon, ard.biesheuvel, kristen, deneen.t.dock,
	Rick Edgecombe, Masami Hiramatsu

Use the new flag VM_FLUSH_RESET_PERMS for handling the freeing of special
permissioned memory in vmalloc, and remove the places where memory was set
NX and RW before freeing, which are no longer needed.

Cc: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---
 arch/x86/kernel/kprobes/core.c | 7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index 98c671e89889..8b56935d7b53 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -434,6 +434,7 @@ void *alloc_insn_page(void)
 	if (!page)
 		return NULL;
 
+	set_vm_flush_reset_perms(page);
 	/*
 	 * First make the page read-only, and only then make it executable to
 	 * prevent it from being W+X in between.
@@ -452,12 +453,6 @@ void *alloc_insn_page(void)
 /* Recover page to RW mode before releasing it */
 void free_insn_page(void *page)
 {
-	/*
-	 * First make the page non-executable, and only then make it writable to
-	 * prevent it from being W+X in between.
-	 */
-	set_memory_nx((unsigned long)page, 1);
-	set_memory_rw((unsigned long)page, 1);
 	module_memfree(page);
 }
 
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH v3 20/20] x86/alternative: Comment about module removal races
  2019-02-21 23:44 [PATCH v3 00/20] Merge text_poke fixes and executable lockdowns Rick Edgecombe
                   ` (18 preceding siblings ...)
  2019-02-21 23:44 ` [PATCH v3 19/20] x86/kprobes: " Rick Edgecombe
@ 2019-02-21 23:44 ` Rick Edgecombe
  2019-02-22 16:14 ` [PATCH v3 00/20] Merge text_poke fixes and executable lockdowns Borislav Petkov
  20 siblings, 0 replies; 27+ messages in thread
From: Rick Edgecombe @ 2019-02-21 23:44 UTC (permalink / raw)
  To: Andy Lutomirski, Ingo Molnar
  Cc: linux-kernel, x86, hpa, Thomas Gleixner, Borislav Petkov,
	Nadav Amit, Dave Hansen, Peter Zijlstra, linux_dti,
	linux-integrity, linux-security-module, akpm, kernel-hardening,
	linux-mm, will.deacon, ard.biesheuvel, kristen, deneen.t.dock,
	Nadav Amit, Masami Hiramatsu, Rick Edgecombe

From: Nadav Amit <namit@vmware.com>

Add a comment to clarify that users of text_poke() must ensure that
no races with module removal take place.
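
A sketch of one way a text_poke() user could provide this ordering (all of
the names here are hypothetical; it is not something this patch adds):

	static DEFINE_MUTEX(patch_mutex);

	static int patch_module_notify(struct notifier_block *nb,
				       unsigned long action, void *data)
	{
		if (action == MODULE_STATE_GOING) {
			mutex_lock(&patch_mutex);
			/* Invalidate any recorded patch sites in the module */
			mutex_unlock(&patch_mutex);
		}
		return NOTIFY_DONE;
	}

	/* Patching side */
	mutex_lock(&patch_mutex);
	if (patch_site_is_valid(site))		/* hypothetical helper */
		text_poke(site->addr, opcode, len);
	mutex_unlock(&patch_mutex);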

Cc: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---
 arch/x86/kernel/alternative.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index c63707e7ed3d..a1335b9486bf 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -809,6 +809,11 @@ static void *__text_poke(void *addr, const void *opcode, size_t len)
  * It means the size must be writable atomically and the address must be aligned
  * in a way that permits an atomic write. It also makes sure we fit on a single
  * page.
+ *
+ * Note that the caller must ensure that if the modified code is part of a
+ * module, the module would not be removed during poking. This can be achieved
+ * by registering a module notifier, and ordering module removal and patching
+ * through a mutex.
  */
 void *text_poke(void *addr, const void *opcode, size_t len)
 {
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* Re: [PATCH v3 03/20] x86/mm: Save DRs when loading a temporary mm
  2019-02-21 23:44 ` [PATCH v3 03/20] x86/mm: Save DRs when loading a temporary mm Rick Edgecombe
@ 2019-02-22  0:07   ` Sean Christopherson
  2019-02-22  0:17     ` Nadav Amit
  0 siblings, 1 reply; 27+ messages in thread
From: Sean Christopherson @ 2019-02-22  0:07 UTC (permalink / raw)
  To: Rick Edgecombe
  Cc: Andy Lutomirski, Ingo Molnar, linux-kernel, x86, hpa,
	Thomas Gleixner, Borislav Petkov, Nadav Amit, Dave Hansen,
	Peter Zijlstra, linux_dti, linux-integrity,
	linux-security-module, akpm, kernel-hardening, linux-mm,
	will.deacon, ard.biesheuvel, kristen, deneen.t.dock, Nadav Amit

On Thu, Feb 21, 2019 at 03:44:34PM -0800, Rick Edgecombe wrote:
> From: Nadav Amit <namit@vmware.com>
> 
> Prevent user watchpoints from mistakenly firing while the temporary mm
> is being used. As the addresses that of the temporary mm might overlap
> those of the user-process, this is necessary to prevent wrong signals
> or worse things from happening.
> 
> Cc: Andy Lutomirski <luto@kernel.org>
> Signed-off-by: Nadav Amit <namit@vmware.com>
> ---
>  arch/x86/include/asm/mmu_context.h | 25 +++++++++++++++++++++++++
>  1 file changed, 25 insertions(+)
> 
> diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
> index d684b954f3c0..0d6c72ece750 100644
> --- a/arch/x86/include/asm/mmu_context.h
> +++ b/arch/x86/include/asm/mmu_context.h
> @@ -13,6 +13,7 @@
>  #include <asm/tlbflush.h>
>  #include <asm/paravirt.h>
>  #include <asm/mpx.h>
> +#include <asm/debugreg.h>
>  
>  extern atomic64_t last_mm_ctx_id;
>  
> @@ -358,6 +359,7 @@ static inline unsigned long __get_current_cr3_fast(void)
>  
>  typedef struct {
>  	struct mm_struct *prev;
> +	unsigned short bp_enabled : 1;
>  } temp_mm_state_t;
>  
>  /*
> @@ -380,6 +382,22 @@ static inline temp_mm_state_t use_temporary_mm(struct mm_struct *mm)
>  	lockdep_assert_irqs_disabled();
>  	state.prev = this_cpu_read(cpu_tlbstate.loaded_mm);
>  	switch_mm_irqs_off(NULL, mm, current);
> +
> +	/*
> +	 * If breakpoints are enabled, disable them while the temporary mm is
> +	 * used. Userspace might set up watchpoints on addresses that are used
> +	 * in the temporary mm, which would lead to wrong signals being sent or
> +	 * crashes.
> +	 *
> +	 * Note that breakpoints are not disabled selectively, which also causes
> +	 * kernel breakpoints (e.g., perf's) to be disabled. This might be
> +	 * undesirable, but still seems reasonable as the code that runs in the
> +	 * temporary mm should be short.
> +	 */
> +	state.bp_enabled = hw_breakpoint_active();

Pretty sure caching hw_breakpoint_active() is unnecessary.  It queries a
per-cpu value, not hardware's DR7 register, and that same value is
consumed by hw_breakpoint_restore().  No idea if breakpoints can be
disabled while using a temp mm, but even if that can happen, there's no
need to restore breakpoints if they've all been disabled, i.e. if
hw_breakpoint_active() returns false in unuse_temporary_mm().
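
I.e., keying both sides off hw_breakpoint_active() directly, something like
(a sketch based on the quoted patch, untested):

	/* use_temporary_mm() */
	if (hw_breakpoint_active())
		hw_breakpoint_disable();

	/* unuse_temporary_mm() */
	if (hw_breakpoint_active())
		hw_breakpoint_restore();

without the cached bp_enabled bit.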

> +	if (state.bp_enabled)
> +		hw_breakpoint_disable();
> +
>  	return state;
>  }
>  
> @@ -387,6 +405,13 @@ static inline void unuse_temporary_mm(temp_mm_state_t prev)
>  {
>  	lockdep_assert_irqs_disabled();
>  	switch_mm_irqs_off(NULL, prev.prev, current);
> +
> +	/*
> +	 * Restore the breakpoints if they were disabled before the temporary mm
> +	 * was loaded.
> +	 */
> +	if (prev.bp_enabled)
> +		hw_breakpoint_restore();
>  }
>  
>  #endif /* _ASM_X86_MMU_CONTEXT_H */
> -- 
> 2.17.1
> 

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v3 03/20] x86/mm: Save DRs when loading a temporary mm
  2019-02-22  0:07   ` Sean Christopherson
@ 2019-02-22  0:17     ` Nadav Amit
  0 siblings, 0 replies; 27+ messages in thread
From: Nadav Amit @ 2019-02-22  0:17 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Rick Edgecombe, Andy Lutomirski, Ingo Molnar, LKML, X86 ML,
	H. Peter Anvin, Thomas Gleixner, Borislav Petkov, Dave Hansen,
	Peter Zijlstra, Damian Tometzki, linux-integrity, LSM List,
	Andrew Morton, Kernel Hardening, Linux-MM, Will Deacon,
	Ard Biesheuvel, Kristen Carlson Accardi, deneen.t.dock

> On Feb 21, 2019, at 4:07 PM, Sean Christopherson <sean.j.christopherson@intel.com> wrote:
> 
> On Thu, Feb 21, 2019 at 03:44:34PM -0800, Rick Edgecombe wrote:
>> From: Nadav Amit <namit@vmware.com>
>> 
>> Prevent user watchpoints from mistakenly firing while the temporary mm
>> is being used. As the addresses that of the temporary mm might overlap
>> those of the user-process, this is necessary to prevent wrong signals
>> or worse things from happening.
>> 
>> Cc: Andy Lutomirski <luto@kernel.org>
>> Signed-off-by: Nadav Amit <namit@vmware.com>
>> ---
>> arch/x86/include/asm/mmu_context.h | 25 +++++++++++++++++++++++++
>> 1 file changed, 25 insertions(+)
>> 
>> diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
>> index d684b954f3c0..0d6c72ece750 100644
>> --- a/arch/x86/include/asm/mmu_context.h
>> +++ b/arch/x86/include/asm/mmu_context.h
>> @@ -13,6 +13,7 @@
>> #include <asm/tlbflush.h>
>> #include <asm/paravirt.h>
>> #include <asm/mpx.h>
>> +#include <asm/debugreg.h>
>> 
>> extern atomic64_t last_mm_ctx_id;
>> 
>> @@ -358,6 +359,7 @@ static inline unsigned long __get_current_cr3_fast(void)
>> 
>> typedef struct {
>> 	struct mm_struct *prev;
>> +	unsigned short bp_enabled : 1;
>> } temp_mm_state_t;
>> 
>> /*
>> @@ -380,6 +382,22 @@ static inline temp_mm_state_t use_temporary_mm(struct mm_struct *mm)
>> 	lockdep_assert_irqs_disabled();
>> 	state.prev = this_cpu_read(cpu_tlbstate.loaded_mm);
>> 	switch_mm_irqs_off(NULL, mm, current);
>> +
>> +	/*
>> +	 * If breakpoints are enabled, disable them while the temporary mm is
>> +	 * used. Userspace might set up watchpoints on addresses that are used
>> +	 * in the temporary mm, which would lead to wrong signals being sent or
>> +	 * crashes.
>> +	 *
>> +	 * Note that breakpoints are not disabled selectively, which also causes
>> +	 * kernel breakpoints (e.g., perf's) to be disabled. This might be
>> +	 * undesirable, but still seems reasonable as the code that runs in the
>> +	 * temporary mm should be short.
>> +	 */
>> +	state.bp_enabled = hw_breakpoint_active();
> 
> Pretty sure caching hw_breakpoint_active() is unnecessary.  It queries a
> per-cpu value, not hardware's DR7 register, and that same value is
> consumed by hw_breakpoint_restore().  No idea if breakpoints can be
> disabled while using a temp mm, but even if that can happen, there's no
> need to restore breakpoints if they've all been disabled, i.e. if
> hw_breakpoint_active() returns false in unuse_temporary_mm().

Good point. I will fix it for next version.

Thanks,
Nadav


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v3 18/20] x86/ftrace: Use vmalloc special flag
  2019-02-21 23:44 ` [PATCH v3 18/20] x86/ftrace: " Rick Edgecombe
@ 2019-02-22  0:22   ` Steven Rostedt
  2019-02-22  0:55     ` Edgecombe, Rick P
  0 siblings, 1 reply; 27+ messages in thread
From: Steven Rostedt @ 2019-02-22  0:22 UTC (permalink / raw)
  To: Rick Edgecombe
  Cc: Andy Lutomirski, Ingo Molnar, linux-kernel, x86, hpa,
	Thomas Gleixner, Borislav Petkov, Nadav Amit, Dave Hansen,
	Peter Zijlstra, linux_dti, linux-integrity,
	linux-security-module, akpm, kernel-hardening, linux-mm,
	will.deacon, ard.biesheuvel, kristen, deneen.t.dock

On Thu, 21 Feb 2019 15:44:49 -0800
Rick Edgecombe <rick.p.edgecombe@intel.com> wrote:

> Use new flag VM_FLUSH_RESET_PERMS for handling freeing of special
> permissioned memory in vmalloc and remove places where memory was set NX
> and RW before freeing which is no longer needed.
> 
> Cc: Steven Rostedt <rostedt@goodmis.org>
> Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
> Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
> ---
>  arch/x86/kernel/ftrace.c | 6 ++----
>  1 file changed, 2 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
> index 13c8249b197f..93efe3955333 100644
> --- a/arch/x86/kernel/ftrace.c
> +++ b/arch/x86/kernel/ftrace.c
> @@ -692,10 +692,6 @@ static inline void *alloc_tramp(unsigned long size)
>  }
>  static inline void tramp_free(void *tramp, int size)

As size is no longer used within the function, can you remove that too.

Thanks,

-- Steve

>  {
> -	int npages = PAGE_ALIGN(size) >> PAGE_SHIFT;
> -
> -	set_memory_nx((unsigned long)tramp, npages);
> -	set_memory_rw((unsigned long)tramp, npages);
>  	module_memfree(tramp);
>  }
>  #else
> @@ -820,6 +816,8 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
>  	/* ALLOC_TRAMP flags lets us know we created it */
>  	ops->flags |= FTRACE_OPS_FL_ALLOC_TRAMP;
>  
> +	set_vm_flush_reset_perms(trampoline);
> +
>  	/*
>  	 * Module allocation needs to be completed by making the page
>  	 * executable. The page is still writable, which is a security hazard,


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v3 18/20] x86/ftrace: Use vmalloc special flag
  2019-02-22  0:22   ` Steven Rostedt
@ 2019-02-22  0:55     ` Edgecombe, Rick P
  0 siblings, 0 replies; 27+ messages in thread
From: Edgecombe, Rick P @ 2019-02-22  0:55 UTC (permalink / raw)
  To: rostedt
  Cc: linux-kernel, peterz, linux-integrity, ard.biesheuvel, tglx,
	linux-mm, nadav.amit, dave.hansen, Dock, Deneen T,
	linux-security-module, x86, akpm, hpa, kristen, mingo, linux_dti,
	luto, will.deacon, bp, kernel-hardening

On Thu, 2019-02-21 at 19:22 -0500, Steven Rostedt wrote:
> On Thu, 21 Feb 2019 15:44:49 -0800
> Rick Edgecombe <rick.p.edgecombe@intel.com> wrote:
> 
> > Use new flag VM_FLUSH_RESET_PERMS for handling freeing of special
> > permissioned memory in vmalloc and remove places where memory was set NX
> > and RW before freeing which is no longer needed.
> > 
> > Cc: Steven Rostedt <rostedt@goodmis.org>
> > Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
> > Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
> > ---
> >  arch/x86/kernel/ftrace.c | 6 ++----
> >  1 file changed, 2 insertions(+), 4 deletions(-)
> > 
> > diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
> > index 13c8249b197f..93efe3955333 100644
> > --- a/arch/x86/kernel/ftrace.c
> > +++ b/arch/x86/kernel/ftrace.c
> > @@ -692,10 +692,6 @@ static inline void *alloc_tramp(unsigned long size)
> >  }
> >  static inline void tramp_free(void *tramp, int size)
> 
> As size is no longer used within the function, can you remove that too.
> 
> Thanks,
> 
> -- Steve
> 
Good point, I'll remove it.

Thanks,

Rick

[snip]

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v3 00/20] Merge text_poke fixes and executable lockdowns
  2019-02-21 23:44 [PATCH v3 00/20] Merge text_poke fixes and executable lockdowns Rick Edgecombe
                   ` (19 preceding siblings ...)
  2019-02-21 23:44 ` [PATCH v3 20/20] x86/alternative: Comment about module removal races Rick Edgecombe
@ 2019-02-22 16:14 ` Borislav Petkov
  2019-02-22 18:32   ` Edgecombe, Rick P
  20 siblings, 1 reply; 27+ messages in thread
From: Borislav Petkov @ 2019-02-22 16:14 UTC (permalink / raw)
  To: Rick Edgecombe
  Cc: Andy Lutomirski, Ingo Molnar, linux-kernel, x86, hpa,
	Thomas Gleixner, Nadav Amit, Dave Hansen, Peter Zijlstra,
	linux_dti, linux-integrity, linux-security-module, akpm,
	kernel-hardening, linux-mm, will.deacon, ard.biesheuvel, kristen,
	deneen.t.dock

On Thu, Feb 21, 2019 at 03:44:31PM -0800, Rick Edgecombe wrote:
> Changes v2 to v3:
>  - Fix commit messages and comments [Boris]
>  - Rename VM_HAS_SPECIAL_PERMS [Boris]
>  - Remove unnecessary local variables [Boris]
>  - Rename set_alias_*() functions [Boris, Andy]
>  - Save/restore DR registers when using temporary mm
>  - Move line deletion from patch 10 to patch 17

In your previous submission there was a patch called

Subject: [PATCH v2 01/20] Fix "x86/alternatives: Lockdep-enforce text_mutex in text_poke*()"

What happened to it?

It did introduce a function text_poke_kgdb(), a.o., and I see this
function in the diff contexts in some of the patches in this submission
so it looks to me like you missed that first patch when submitting v3?

Or am *I* missing something?

Thx.

-- 
Regards/Gruss,
    Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v3 00/20] Merge text_poke fixes and executable lockdowns
  2019-02-22 16:14 ` [PATCH v3 00/20] Merge text_poke fixes and executable lockdowns Borislav Petkov
@ 2019-02-22 18:32   ` Edgecombe, Rick P
  0 siblings, 0 replies; 27+ messages in thread
From: Edgecombe, Rick P @ 2019-02-22 18:32 UTC (permalink / raw)
  To: bp
  Cc: linux-kernel, peterz, linux-integrity, ard.biesheuvel, tglx,
	linux-mm, dave.hansen, nadav.amit, Dock, Deneen T,
	linux-security-module, x86, akpm, hpa, kristen, mingo, linux_dti,
	luto, will.deacon, kernel-hardening

On Fri, 2019-02-22 at 17:14 +0100, Borislav Petkov wrote:
> On Thu, Feb 21, 2019 at 03:44:31PM -0800, Rick Edgecombe wrote:
> > Changes v2 to v3:
> >  - Fix commit messages and comments [Boris]
> >  - Rename VM_HAS_SPECIAL_PERMS [Boris]
> >  - Remove unnecessary local variables [Boris]
> >  - Rename set_alias_*() functions [Boris, Andy]
> >  - Save/restore DR registers when using temporary mm
> >  - Move line deletion from patch 10 to patch 17
> 
> In your previous submission there was a patch called
> 
> Subject: [PATCH v2 01/20] Fix "x86/alternatives: Lockdep-enforce text_mutex in
> text_poke*()"
> 
> What happened to it?
> 
> It did introduce a function text_poke_kgdb(), a.o., and I see this
> function in the diff contexts in some of the patches in this submission
> so it looks to me like you missed that first patch when submitting v3?
> 
> Or am *I* missing something?
> 
> Thx.
> 
Oh, you are right! Sorry about that. I'll just send a new version with fixes for
other comments instead of a resend of this one.

Thanks,

Rick

^ permalink raw reply	[flat|nested] 27+ messages in thread

end of thread, other threads:[~2019-02-22 18:32 UTC | newest]

Thread overview: 27+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-02-21 23:44 [PATCH v3 00/20] Merge text_poke fixes and executable lockdowns Rick Edgecombe
2019-02-21 23:44 ` [PATCH v3 01/20] x86/jump_label: Use text_poke_early() during early init Rick Edgecombe
2019-02-21 23:44 ` [PATCH v3 02/20] x86/mm: Introduce temporary mm structs Rick Edgecombe
2019-02-21 23:44 ` [PATCH v3 03/20] x86/mm: Save DRs when loading a temporary mm Rick Edgecombe
2019-02-22  0:07   ` Sean Christopherson
2019-02-22  0:17     ` Nadav Amit
2019-02-21 23:44 ` [PATCH v3 04/20] fork: Provide a function for copying init_mm Rick Edgecombe
2019-02-21 23:44 ` [PATCH v3 05/20] x86/alternative: Initialize temporary mm for patching Rick Edgecombe
2019-02-21 23:44 ` [PATCH v3 06/20] x86/alternative: Use temporary mm for text poking Rick Edgecombe
2019-02-21 23:44 ` [PATCH v3 07/20] x86/kgdb: Avoid redundant comparison of patched code Rick Edgecombe
2019-02-21 23:44 ` [PATCH v3 08/20] x86/ftrace: Set trampoline pages as executable Rick Edgecombe
2019-02-21 23:44 ` [PATCH v3 09/20] x86/kprobes: Set instruction page " Rick Edgecombe
2019-02-21 23:44 ` [PATCH v3 10/20] x86/module: Avoid breaking W^X while loading modules Rick Edgecombe
2019-02-21 23:44 ` [PATCH v3 11/20] x86/jump-label: Remove support for custom poker Rick Edgecombe
2019-02-21 23:44 ` [PATCH v3 12/20] x86/alternative: Remove the return value of text_poke_*() Rick Edgecombe
2019-02-21 23:44 ` [PATCH v3 13/20] x86/mm/cpa: Add set_direct_map_ functions Rick Edgecombe
2019-02-21 23:44 ` [PATCH v3 14/20] mm: Make hibernate handle unmapped pages Rick Edgecombe
2019-02-21 23:44 ` [PATCH v3 15/20] vmalloc: Add flag for freeing of special permissions Rick Edgecombe
2019-02-21 23:44 ` [PATCH v3 16/20] modules: Use vmalloc special flag Rick Edgecombe
2019-02-21 23:44 ` [PATCH v3 17/20] bpf: " Rick Edgecombe
2019-02-21 23:44 ` [PATCH v3 18/20] x86/ftrace: " Rick Edgecombe
2019-02-22  0:22   ` Steven Rostedt
2019-02-22  0:55     ` Edgecombe, Rick P
2019-02-21 23:44 ` [PATCH v3 19/20] x86/kprobes: " Rick Edgecombe
2019-02-21 23:44 ` [PATCH v3 20/20] x86/alternative: Comment about module removal races Rick Edgecombe
2019-02-22 16:14 ` [PATCH v3 00/20] Merge text_poke fixes and executable lockdowns Borislav Petkov
2019-02-22 18:32   ` Edgecombe, Rick P

This is a public inbox; see mirroring instructions
for how to clone and mirror all data and code used for this inbox,
as well as URLs for NNTP newsgroup(s).