linux-kernel.vger.kernel.org archive mirror
* [RFC PATCH 0/4] Allow tasks to have their user stack pointer sanity checked
@ 2019-02-11 17:59 Will Deacon
  2019-02-11 17:59 ` [RFC PATCH 1/4] mm: Check user stack pointer is mapped with MAP_STACK Will Deacon
                   ` (4 more replies)
  0 siblings, 5 replies; 7+ messages in thread
From: Will Deacon @ 2019-02-11 17:59 UTC (permalink / raw)
  To: linux-kernel
  Cc: Will Deacon, Kees Cook, Jann Horn, Andrew Morton, Matthew Wilcox,
	Michal Hocko, Peter Zijlstra

Hi all,

I attended an interesting talk at LCA last month that described some of the
security features deployed in OpenBSD [1]. One hardening feature that piqued
my interest was, on syscall entry and page faults from userspace, checking
that the user stack pointer for a task points at pages that were either
allocated by the kernel for the initial process stack or mapped with mmap()
using the MAP_STACK flag. This acts as a basic defense against stack
pivoting attacks.

The problem with this checking is that it is a retrospective tightening
of the ABI, but that hasn't stopped me hacking it together behind a couple
of prctl() options.
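
To make the opt-in concrete, here is roughly what an application (or a
thread library) would do. This is only an illustrative sketch; the
PR_SP_CHECK* constants are the ones introduced by patch 2 of this series
and are not in any released uapi headers:

  #include <sys/mman.h>
  #include <sys/prctl.h>

  /* Illustrative: values taken from patch 2 of this series. */
  #define PR_SP_CHECK             55
  #define PR_SP_CHECK_PAGE_FAULT  (1UL << 0)
  #define PR_SP_CHECK_SYSCALL     (1UL << 1)

  /* Thread stacks must be mapped with MAP_STACK for the check to pass. */
  void *stack = mmap(NULL, 8 * 1024 * 1024, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS | MAP_STACK, -1, 0);

  /* Opt in to checking on both page faults and syscall entry. */
  prctl(PR_SP_CHECK, PR_SP_CHECK_PAGE_FAULT | PR_SP_CHECK_SYSCALL, 0, 0, 0);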

Anyway, it was fun to implement so I figured I'd post it as an RFC.

Will

[1] https://2019.linux.conf.au/schedule/presentation/164/

Cc: Kees Cook <keescook@chromium.org>
Cc: Jann Horn <jannh@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Peter Zijlstra <peterz@infradead.org>

--->8

Will Deacon (4):
  mm: Check user stack pointer is mapped with MAP_STACK
  mm: Expose user stack pointer checking via prctl()
  mm: Add kconfig entries for user stack pointer checking
  arm64: Check user stack pointer on syscall entry

 arch/arm64/Kconfig          |  1 +
 arch/arm64/kernel/syscall.c |  4 +++
 include/linux/mm.h          | 15 +++++++++-
 include/linux/mman.h        |  3 +-
 include/linux/sched.h       |  4 +++
 include/uapi/linux/prctl.h  |  5 ++++
 kernel/sys.c                |  5 ++++
 mm/Kconfig                  | 17 ++++++++++++
 mm/memory.c                 | 67 +++++++++++++++++++++++++++++++++++++++++++++
 9 files changed, 119 insertions(+), 2 deletions(-)

-- 
2.11.0



* [RFC PATCH 1/4] mm: Check user stack pointer is mapped with MAP_STACK
  2019-02-11 17:59 [RFC PATCH 0/4] Allow tasks to have their user stack pointer sanity checked Will Deacon
@ 2019-02-11 17:59 ` Will Deacon
  2019-02-11 17:59 ` [RFC PATCH 2/4] mm: Expose user stack pointer checking via prctl() Will Deacon
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 7+ messages in thread
From: Will Deacon @ 2019-02-11 17:59 UTC (permalink / raw)
  To: linux-kernel
  Cc: Will Deacon, Kees Cook, Jann Horn, Andrew Morton, Matthew Wilcox,
	Michal Hocko, Peter Zijlstra

By marking stack VMAs with VM_USERSTACK, we can perform optional checks
on entry to the kernel from system calls and user faults to ensure that
the user stack pointer does indeed point to a stack VMA. If the stack
pointer is found to point elsewhere, a SIGSEGV can be delivered to the
current application.

This acts as a best-effort defense against stack-pivoting attacks.
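
For illustration only (not part of the patch): with the page-fault check
enabled for a task, the first mapping below satisfies the new test since
MAP_STACK translates to VM_USERSTACK, whereas pivoting the stack pointer
into the second one results in SIGSEGV on the next user fault:

  /* Passes: explicitly marked as a stack at mmap() time. */
  void *good = mmap(NULL, 65536, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS | MAP_STACK, -1, 0);

  /* Fails: an ordinary anonymous mapping (e.g. backing a heap arena),
   * so it is not marked VM_USERSTACK. */
  void *bad = mmap(NULL, 65536, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);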

Cc: Kees Cook <keescook@chromium.org>
Cc: Jann Horn <jannh@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 include/linux/mm.h    | 10 +++++++++-
 include/linux/mman.h  |  3 ++-
 include/linux/sched.h |  4 ++++
 mm/memory.c           | 45 +++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 60 insertions(+), 2 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 80bb6408fe73..9fa02d47a270 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -233,6 +233,7 @@ extern unsigned int kobjsize(const void *objp);
 #define VM_ARCH_1	0x01000000	/* Architecture-specific flag */
 #define VM_WIPEONFORK	0x02000000	/* Wipe VMA contents in child. */
 #define VM_DONTDUMP	0x04000000	/* Do not include in the core dump */
+#define VM_USERSTACK	0x08000000	/* User stack VM */
 
 #ifdef CONFIG_MEM_SOFT_DIRTY
 # define VM_SOFTDIRTY	0x08000000	/* Not soft dirty clean area */
@@ -310,7 +311,8 @@ extern unsigned int kobjsize(const void *objp);
 #define VM_STACK	VM_GROWSDOWN
 #endif
 
-#define VM_STACK_FLAGS	(VM_STACK | VM_STACK_DEFAULT_FLAGS | VM_ACCOUNT)
+#define VM_STACK_FLAGS	(VM_STACK | VM_STACK_DEFAULT_FLAGS | VM_ACCOUNT | \
+			 VM_USERSTACK)
 
 /*
  * Special vmas that are non-mergable, non-mlock()able.
@@ -1480,6 +1482,12 @@ int truncate_inode_page(struct address_space *mapping, struct page *page);
 int generic_error_remove_page(struct address_space *mapping, struct page *page);
 int invalidate_inode_page(struct page *page);
 
+#ifdef CONFIG_USER_STACK_POINTER_CHECKS
+bool usp_check_syscall(void);
+#else
+static inline bool usp_check_syscall(void) { return true; }
+#endif
+
 #ifdef CONFIG_MMU
 extern vm_fault_t handle_mm_fault(struct vm_area_struct *vma,
 			unsigned long address, unsigned int flags);
diff --git a/include/linux/mman.h b/include/linux/mman.h
index 4b08e9c9c538..d4f2d39fca70 100644
--- a/include/linux/mman.h
+++ b/include/linux/mman.h
@@ -131,7 +131,8 @@ calc_vm_flag_bits(unsigned long flags)
 	return _calc_vm_trans(flags, MAP_GROWSDOWN,  VM_GROWSDOWN ) |
 	       _calc_vm_trans(flags, MAP_DENYWRITE,  VM_DENYWRITE ) |
 	       _calc_vm_trans(flags, MAP_LOCKED,     VM_LOCKED    ) |
-	       _calc_vm_trans(flags, MAP_SYNC,	     VM_SYNC      );
+	       _calc_vm_trans(flags, MAP_SYNC,	     VM_SYNC      ) |
+	       _calc_vm_trans(flags, MAP_STACK,      VM_USERSTACK ) ;
 }
 
 unsigned long vm_commit_limit(void);
diff --git a/include/linux/sched.h b/include/linux/sched.h
index bba3afb4e9bf..2e6766301645 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1208,6 +1208,10 @@ struct task_struct {
 	unsigned long			prev_lowest_stack;
 #endif
 
+#ifdef CONFIG_USER_STACK_POINTER_CHECKS
+	unsigned int			usp_checks;
+#endif
+
 	/*
 	 * New fields for task_struct should be added above here, so that
 	 * they are included in the randomized portion of task_struct.
diff --git a/mm/memory.c b/mm/memory.c
index e11ca9dd823f..e0b449f520da 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -44,6 +44,7 @@
 #include <linux/sched/coredump.h>
 #include <linux/sched/numa_balancing.h>
 #include <linux/sched/task.h>
+#include <linux/sched/task_stack.h>
 #include <linux/hugetlb.h>
 #include <linux/mman.h>
 #include <linux/swap.h>
@@ -63,6 +64,7 @@
 #include <linux/elf.h>
 #include <linux/gfp.h>
 #include <linux/migrate.h>
+#include <linux/ptrace.h>
 #include <linux/string.h>
 #include <linux/dma-debug.h>
 #include <linux/debugfs.h>
@@ -3911,6 +3913,46 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
 	return handle_pte_fault(&vmf);
 }
 
+#ifdef CONFIG_USER_STACK_POINTER_CHECKS
+#define USP_CHECK_FAULT		(1U << 0)
+#define USP_CHECK_SYSCALL	(1U << 1)
+
+static bool __usp_check(void)
+{
+	struct vm_area_struct *vma;
+
+	vma = find_vma(current->mm, current_user_stack_pointer());
+	return vma && (vma->vm_flags & VM_USERSTACK);
+}
+
+static bool usp_check_fault(unsigned int flags)
+{
+	if (!(flags & FAULT_FLAG_USER))
+		return true;
+
+	if (!(current->usp_checks & USP_CHECK_FAULT))
+		return true;
+
+	return __usp_check();
+}
+
+bool usp_check_syscall(void)
+{
+	bool ret;
+	struct mm_struct *mm = current->mm;
+
+	if (!(current->usp_checks & USP_CHECK_SYSCALL))
+		return true;
+
+	down_read(&mm->mmap_sem);
+	ret = __usp_check();
+	up_read(&mm->mmap_sem);
+	return ret;
+}
+#else
+static bool usp_check_fault(unsigned int flags) { return true; }
+#endif
+
 /*
  * By the time we get here, we already hold the mm semaphore
  *
@@ -3930,6 +3972,9 @@ vm_fault_t handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
 	/* do counter updates before entering really critical section. */
 	check_sync_rss_stat(current);
 
+	if (!usp_check_fault(flags))
+		return VM_FAULT_SIGSEGV;
+
 	if (!arch_vma_access_permitted(vma, flags & FAULT_FLAG_WRITE,
 					    flags & FAULT_FLAG_INSTRUCTION,
 					    flags & FAULT_FLAG_REMOTE))
-- 
2.11.0



* [RFC PATCH 2/4] mm: Expose user stack pointer checking via prctl()
  2019-02-11 17:59 [RFC PATCH 0/4] Allow tasks to have their user stack pointer sanity checked Will Deacon
  2019-02-11 17:59 ` [RFC PATCH 1/4] mm: Check user stack pointer is mapped with MAP_STACK Will Deacon
@ 2019-02-11 17:59 ` Will Deacon
  2019-02-11 17:59 ` [RFC PATCH 3/4] mm: Add kconfig entries for user stack pointer checking Will Deacon
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 7+ messages in thread
From: Will Deacon @ 2019-02-11 17:59 UTC (permalink / raw)
  To: linux-kernel
  Cc: Will Deacon, Kees Cook, Jann Horn, Andrew Morton, Matthew Wilcox,
	Michal Hocko, Peter Zijlstra

Hook up a prctl() option to control the level of user stack pointer
checking for the current task. By default, no checking is performed, but
checks can be independently controlled for system calls and page faults.

The option is inherited across fork() and preserved across exec().
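
Intended usage looks something like this (illustrative only; arg3-5 must
be zero):

  /* Enable only the page-fault check. */
  prctl(PR_SP_CHECK, PR_SP_CHECK_PAGE_FAULT, 0, 0, 0);

  /* Enable both checks; fails with EINVAL on architectures that do not
   * select ARCH_HAS_USP_CHECK_SYSCALL (added by the next patch). */
  prctl(PR_SP_CHECK, PR_SP_CHECK_PAGE_FAULT | PR_SP_CHECK_SYSCALL, 0, 0, 0);

  /* Disable checking again. */
  prctl(PR_SP_CHECK, 0, 0, 0, 0);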

Cc: Kees Cook <keescook@chromium.org>
Cc: Jann Horn <jannh@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 include/linux/mm.h         |  5 +++++
 include/uapi/linux/prctl.h |  5 +++++
 kernel/sys.c               |  5 +++++
 mm/memory.c                | 22 ++++++++++++++++++++++
 4 files changed, 37 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 9fa02d47a270..7a668447c01f 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1483,8 +1483,13 @@ int generic_error_remove_page(struct address_space *mapping, struct page *page);
 int invalidate_inode_page(struct page *page);
 
 #ifdef CONFIG_USER_STACK_POINTER_CHECKS
+long prctl_sp_check(struct task_struct *tsk, unsigned long flags);
 bool usp_check_syscall(void);
 #else
+static inline long prctl_sp_check(struct task_struct *tsk, unsigned long flags)
+{
+	return -EINVAL;
+}
 static inline bool usp_check_syscall(void) { return true; }
 #endif
 
diff --git a/include/uapi/linux/prctl.h b/include/uapi/linux/prctl.h
index b4875a93363a..3c4d93856f2a 100644
--- a/include/uapi/linux/prctl.h
+++ b/include/uapi/linux/prctl.h
@@ -228,4 +228,9 @@ struct prctl_mm_map {
 # define PR_PAC_APDBKEY			(1UL << 3)
 # define PR_PAC_APGAKEY			(1UL << 4)
 
+/* User stack pointer sanity checking */
+#define PR_SP_CHECK			55
+# define PR_SP_CHECK_PAGE_FAULT		(1UL << 0)
+# define PR_SP_CHECK_SYSCALL		(1UL << 1)
+
 #endif /* _LINUX_PRCTL_H */
diff --git a/kernel/sys.c b/kernel/sys.c
index f7eb62eceb24..bd507eebed54 100644
--- a/kernel/sys.c
+++ b/kernel/sys.c
@@ -2485,6 +2485,11 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
 			return -EINVAL;
 		error = PAC_RESET_KEYS(me, arg2);
 		break;
+	case PR_SP_CHECK:
+		if (arg3 || arg4 || arg5)
+			return -EINVAL;
+		error = prctl_sp_check(me, arg2);
+		break;
 	default:
 		error = -EINVAL;
 		break;
diff --git a/mm/memory.c b/mm/memory.c
index e0b449f520da..700d9fd03c88 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -71,6 +71,7 @@
 #include <linux/userfaultfd_k.h>
 #include <linux/dax.h>
 #include <linux/oom.h>
+#include <linux/prctl.h>
 
 #include <asm/io.h>
 #include <asm/mmu_context.h>
@@ -3949,6 +3950,27 @@ bool usp_check_syscall(void)
 	up_read(&mm->mmap_sem);
 	return ret;
 }
+
+long prctl_sp_check(struct task_struct *tsk, unsigned long flags)
+{
+	if (flags & ~(PR_SP_CHECK_PAGE_FAULT | PR_SP_CHECK_SYSCALL))
+		return -EINVAL;
+
+	if (flags & PR_SP_CHECK_PAGE_FAULT)
+		tsk->usp_checks |= USP_CHECK_FAULT;
+	else
+		tsk->usp_checks &= ~USP_CHECK_FAULT;
+
+	if (flags & PR_SP_CHECK_SYSCALL) {
+		if (!IS_ENABLED(CONFIG_ARCH_HAS_USP_CHECK_SYSCALL))
+			return -EINVAL;
+		tsk->usp_checks |= USP_CHECK_SYSCALL;
+	} else {
+		tsk->usp_checks &= ~USP_CHECK_SYSCALL;
+	}
+
+	return 0;
+}
 #else
 static bool usp_check_fault(unsigned int flags) { return true; }
 #endif
-- 
2.11.0



* [RFC PATCH 3/4] mm: Add kconfig entries for user stack pointer checking
  2019-02-11 17:59 [RFC PATCH 0/4] Allow tasks to have their user stack pointer sanity checked Will Deacon
  2019-02-11 17:59 ` [RFC PATCH 1/4] mm: Check user stack pointer is mapped with MAP_STACK Will Deacon
  2019-02-11 17:59 ` [RFC PATCH 2/4] mm: Expose user stack pointer checking via prctl() Will Deacon
@ 2019-02-11 17:59 ` Will Deacon
  2019-02-11 17:59 ` [RFC PATCH 4/4] arm64: Check user stack pointer on syscall entry Will Deacon
  2019-02-11 19:12 ` [RFC PATCH 0/4] Allow tasks to have their user stack pointer sanity checked Kees Cook
  4 siblings, 0 replies; 7+ messages in thread
From: Will Deacon @ 2019-02-11 17:59 UTC (permalink / raw)
  To: linux-kernel
  Cc: Will Deacon, Kees Cook, Jann Horn, Andrew Morton, Matthew Wilcox,
	Michal Hocko, Peter Zijlstra

Provide Kconfig entries to enable/disable user stack pointer checking
and also for architectures to expose the system call controls via
prctl() once they have augmented their system call entry path to perform
the necessary checks.

Cc: Kees Cook <keescook@chromium.org>
Cc: Jann Horn <jannh@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 mm/Kconfig | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/mm/Kconfig b/mm/Kconfig
index 25c71eb8a7db..35f044162501 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -758,4 +758,21 @@ config GUP_BENCHMARK
 config ARCH_HAS_PTE_SPECIAL
 	bool
 
+config USER_STACK_POINTER_CHECKS
+	bool "Check user stack pointer points to stack pages"
+	depends on MMU
+	default y
+	help
+	  This feature can be used to enforce that the user stack pointer
+	  points to either the kernel-allocated user stack or a mapping
+	  created with the MAP_STACK flag.
+
+	  By default, no checks are performed, and an application must
+	  opt-in via the PR_SP_CHECK prctl() system call if it wishes to
+	  enable checking. Checking can be independently controlled for
+	  system calls and page fault handling.
+
+config ARCH_HAS_USP_CHECK_SYSCALL
+	bool
+
 endmenu
-- 
2.11.0



* [RFC PATCH 4/4] arm64: Check user stack pointer on syscall entry
  2019-02-11 17:59 [RFC PATCH 0/4] Allow tasks to have their user stack pointer sanity checked Will Deacon
                   ` (2 preceding siblings ...)
  2019-02-11 17:59 ` [RFC PATCH 3/4] mm: Add kconfig entries for user stack pointer checking Will Deacon
@ 2019-02-11 17:59 ` Will Deacon
  2019-02-11 19:12 ` [RFC PATCH 0/4] Allow tasks to have their user stack pointer sanity checked Kees Cook
  4 siblings, 0 replies; 7+ messages in thread
From: Will Deacon @ 2019-02-11 17:59 UTC (permalink / raw)
  To: linux-kernel
  Cc: Will Deacon, Kees Cook, Jann Horn, Andrew Morton, Matthew Wilcox,
	Michal Hocko, Peter Zijlstra

Allow the user stack pointer value to be checked on system call entry
and deliver a SIGSEGV if the check does not pass.

Cc: Kees Cook <keescook@chromium.org>
Cc: Jann Horn <jannh@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/Kconfig          | 1 +
 arch/arm64/kernel/syscall.c | 4 ++++
 2 files changed, 5 insertions(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index a4168d366127..e87304a06a85 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -29,6 +29,7 @@ config ARM64
 	select ARCH_HAS_SYNC_DMA_FOR_CPU
 	select ARCH_HAS_SYSCALL_WRAPPER
 	select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
+	select ARCH_HAS_USP_CHECK_SYSCALL
 	select ARCH_HAVE_NMI_SAFE_CMPXCHG
 	select ARCH_INLINE_READ_LOCK if !PREEMPT
 	select ARCH_INLINE_READ_LOCK_BH if !PREEMPT
diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c
index 5610ac01c1ec..06566010f6d1 100644
--- a/arch/arm64/kernel/syscall.c
+++ b/arch/arm64/kernel/syscall.c
@@ -11,6 +11,7 @@
 #include <asm/fpsimd.h>
 #include <asm/syscall.h>
 #include <asm/thread_info.h>
+#include <asm/traps.h>
 #include <asm/unistd.h>
 
 long compat_arm_syscall(struct pt_regs *regs, int scno);
@@ -71,6 +72,9 @@ static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
 	local_daif_restore(DAIF_PROCCTX);
 	user_exit();
 
+	if (!usp_check_syscall())
+		force_signal_inject(SIGSEGV, SEGV_MAPERR, GET_USP(regs));
+
 	if (has_syscall_work(flags)) {
 		/* set default errno for user-issued syscall(-1) */
 		if (scno == NO_SYSCALL)
-- 
2.11.0



* Re: [RFC PATCH 0/4] Allow tasks to have their user stack pointer sanity checked
  2019-02-11 17:59 [RFC PATCH 0/4] Allow tasks to have their user stack pointer sanity checked Will Deacon
                   ` (3 preceding siblings ...)
  2019-02-11 17:59 ` [RFC PATCH 4/4] arm64: Check user stack pointer on syscall entry Will Deacon
@ 2019-02-11 19:12 ` Kees Cook
  2019-02-13 13:19   ` Will Deacon
  4 siblings, 1 reply; 7+ messages in thread
From: Kees Cook @ 2019-02-11 19:12 UTC (permalink / raw)
  To: Will Deacon
  Cc: LKML, Jann Horn, Andrew Morton, Matthew Wilcox, Michal Hocko,
	Peter Zijlstra

On Mon, Feb 11, 2019 at 9:59 AM Will Deacon <will.deacon@arm.com> wrote:
>
> Hi all,
>
> I attended an interesting talk at LCA last month that described some of the
> security features deployed in OpenBSD [1]. One hardening feature that piqued
> my interest was, on syscall entry and page faults from userspace, checking
> that the user stack pointer for a task points at pages that were either
> allocated by the kernel for the initial process stack or mapped with mmap()
> using the MAP_STACK flag. This acts as a basic defense against stack
> pivoting attacks.

I think this is nice to have, yes! Thanks for working on it. It seems
like this blocks pivots to heap -- relocating to a groomed stack area
would still be allowed. Regardless, this does narrow the scope of such
attacks quite nicely.

> The problem with this checking is that it is a retrospective tightening
> of the ABI, but that hasn't stopped me hacking it together behind a couple
> of prctl() options.

MAP_STACK has been around for a long time, so I think anything using
threads via glibc should be "covered". I would assume this would mean
that glibc could set the prctl() for such users. I suspect there are a
lot of open-coded threading implementations, though. It'd be
interesting to see how many need modification.

Given that this is behind a prctl(), it seems the CONFIG isn't needed?

> Anyway, it was fun to implement so I figured I'd post it as an RFC.

Thanks! I'd love to see an x86 counterpart to the syscall check too.

Did you try bringing up a full userspace and windowing environment
with this enabled by default (i.e. forcing init to set the prctls)?
I'd be curious to see how much (if anything) goes boom. :)

Reviewed-by: Kees Cook <keescook@chromium.org>

-Kees

>
> Will
>
> [1] https://2019.linux.conf.au/schedule/presentation/164/
>
> Cc: Kees Cook <keescook@chromium.org>
> Cc: Jann Horn <jannh@google.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Matthew Wilcox <willy@infradead.org>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
>
> --->8
>
> Will Deacon (4):
>   mm: Check user stack pointer is mapped with MAP_STACK
>   mm: Expose user stack pointer checking via prctl()
>   mm: Add kconfig entries for user stack pointer checking
>   arm64: Check user stack pointer on syscall entry
>
>  arch/arm64/Kconfig          |  1 +
>  arch/arm64/kernel/syscall.c |  4 +++
>  include/linux/mm.h          | 15 +++++++++-
>  include/linux/mman.h        |  3 +-
>  include/linux/sched.h       |  4 +++
>  include/uapi/linux/prctl.h  |  5 ++++
>  kernel/sys.c                |  5 ++++
>  mm/Kconfig                  | 17 ++++++++++++
>  mm/memory.c                 | 67 +++++++++++++++++++++++++++++++++++++++++++++
>  9 files changed, 119 insertions(+), 2 deletions(-)
>
> --
> 2.11.0
>


-- 
Kees Cook


* Re: [RFC PATCH 0/4] Allow tasks to have their user stack pointer sanity checked
  2019-02-11 19:12 ` [RFC PATCH 0/4] Allow tasks to have their user stack pointer sanity checked Kees Cook
@ 2019-02-13 13:19   ` Will Deacon
  0 siblings, 0 replies; 7+ messages in thread
From: Will Deacon @ 2019-02-13 13:19 UTC (permalink / raw)
  To: Kees Cook
  Cc: LKML, Jann Horn, Andrew Morton, Matthew Wilcox, Michal Hocko,
	Peter Zijlstra

Hi Kees,

On Mon, Feb 11, 2019 at 11:12:19AM -0800, Kees Cook wrote:
> On Mon, Feb 11, 2019 at 9:59 AM Will Deacon <will.deacon@arm.com> wrote:
> > I attended an interesting talk at LCA last month that described some of the
> > security features deployed in OpenBSD [1]. One hardening feature that piqued
> > my interest was, on syscall entry and page faults from userspace, checking
> > that the user stack pointer for a task points at pages that were either
> > allocated by the kernel for the initial process stack or mapped with mmap()
> > using the MAP_STACK flag. This acts as a basic defense against stack
> > pivoting attacks.
> 
> I think this is nice to have, yes! Thanks for working on it. It seems
> like this blocks pivots to heap -- relocating to a groomed stack area
> would still be allowed. Regardless, this does narrow the scope of such
> attacks quite nicely.
> 
> > The problem with this checking is that it is a retrospective tightening
> > of the ABI, but that hasn't stopped me hacking it together behind a couple
> > of prctl() options.
> 
> MAP_STACK has been around for a long time, so I think anything using
> threads via glibc should be "covered". I would assume this would mean
> that glibc could set the prctl() for such users. I suspect there are a
> lot of open-coded threading implementations, though. It'd be
> interesting to see how many need modification.
> 
> Given that this is behind a prctl(), it seems the CONFIG isn't needed?

I wanted to keep the CONFIG because we grow task_struct and maybe somebody
cares about that (many of the other fields in there are guarded).

> > Anyway, it was fun to implement so I figured I'd post it as an RFC.
> 
> Thanks! I'd love to see an x86 counterpart to the syscall check too.

I'll take a quick look. I think that, like arm64, x86 has moved much of its
entry code into C, so it might be really straightforward.

> Did you try bringing up a full userspace and windowing environment
> with this enabled by default (i.e. forcing init to set the prctls)?
> I'd be curious to see how much (if anything) goes boom. :)

So far I haven't found anything other than my targeted test case which
explodes. However, I would fully expect some JITs to go wrong and probably
also some uses of sigaltstack().
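
(As a sketch of the sigaltstack() case: a common pattern is to point the
alternate signal stack at heap memory, as below, which leaves the stack
pointer outside any MAP_STACK mapping once a signal is delivered.)

  stack_t ss = {
          .ss_sp    = malloc(SIGSTKSZ),   /* heap, so no VM_USERSTACK */
          .ss_size  = SIGSTKSZ,
          .ss_flags = 0,
  };
  sigaltstack(&ss, NULL);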

> Reviewed-by: Kees Cook <keescook@chromium.org>

Thanks!

Will



Thread overview: 7+ messages
2019-02-11 17:59 [RFC PATCH 0/4] Allow tasks to have their user stack pointer sanity checked Will Deacon
2019-02-11 17:59 ` [RFC PATCH 1/4] mm: Check user stack pointer is mapped with MAP_STACK Will Deacon
2019-02-11 17:59 ` [RFC PATCH 2/4] mm: Expose user stack pointer checking via prctl() Will Deacon
2019-02-11 17:59 ` [RFC PATCH 3/4] mm: Add kconfig entries for user stack pointer checking Will Deacon
2019-02-11 17:59 ` [RFC PATCH 4/4] arm64: Check user stack pointer on syscall entry Will Deacon
2019-02-11 19:12 ` [RFC PATCH 0/4] Allow tasks to have their user stack pointer sanity checked Kees Cook
2019-02-13 13:19   ` Will Deacon
