From: Laura Abbott <labbott@redhat.com>
To: Kees Cook <keescook@chromium.org>, kernel-hardening@lists.openwall.com
Cc: Laura Abbott <labbott@fedoraproject.org>,
	Balbir Singh <bsingharora@gmail.com>,
	Daniel Micay <danielmicay@gmail.com>,
	Josh Poimboeuf <jpoimboe@redhat.com>,
	Rik van Riel <riel@redhat.com>,
	Casey Schaufler <casey@schaufler-ca.com>,
	PaX Team <pageexec@freemail.hu>,
	Brad Spengler <spender@grsecurity.net>,
	Russell King <linux@armlinux.org.uk>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will.deacon@arm.com>,
	Ard Biesheuvel <ard.biesheuvel@linaro.org>,
	Benjamin Herrenschmidt <benh@kernel.crashing.org>,
	Michael Ellerman <mpe@ellerman.id.au>,
	Tony Luck <tony.luck@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	"David S. Miller" <davem@davemloft.net>,
	x86@kernel.org, Christoph Lameter <cl@linux.com>,
	Pekka Enberg <penberg@kernel.org>,
	David Rientjes <rientjes@google.com>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Andy Lutomirski <luto@kernel.org>, Borislav Petkov <bp@suse.de>,
	Mathias Krause <minipli@googlemail.com>, Jan Kara <jack@suse.cz>,
	Vitaly Wool <vitalywool@gmail.com>,
	Andrea Arcangeli <aarcange@redhat.com>,
	Dmitry Vyukov <dvyukov@google.com>,
	linux-arm-kernel@lists.infradead.org, linux-ia64@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, sparclinux@vger.kernel.org,
	linux-arch@vger.kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH v4 12/12] mm: SLUB hardened usercopy support
Date: Mon, 25 Jul 2016 12:16:24 -0700	[thread overview]
Message-ID: <0f980e84-b587-3d9e-3c26-ad57f947c08b@redhat.com> (raw)
In-Reply-To: <1469046427-12696-13-git-send-email-keescook@chromium.org>

On 07/20/2016 01:27 PM, Kees Cook wrote:
> Under CONFIG_HARDENED_USERCOPY, this adds object size checking to the
> SLUB allocator to catch any copies that may span objects. Includes a
> redzone handling fix discovered by Michael Ellerman.
>
> Based on code from PaX and grsecurity.
>
> Signed-off-by: Kees Cook <keescook@chromium.org>
> Tested-by: Michael Ellerman <mpe@ellerman.id.au>
> ---
>  init/Kconfig |  1 +
>  mm/slub.c    | 36 ++++++++++++++++++++++++++++++++++++
>  2 files changed, 37 insertions(+)
>
> diff --git a/init/Kconfig b/init/Kconfig
> index 798c2020ee7c..1c4711819dfd 100644
> --- a/init/Kconfig
> +++ b/init/Kconfig
> @@ -1765,6 +1765,7 @@ config SLAB
>
>  config SLUB
>  	bool "SLUB (Unqueued Allocator)"
> +	select HAVE_HARDENED_USERCOPY_ALLOCATOR
>  	help
>  	   SLUB is a slab allocator that minimizes cache line usage
>  	   instead of managing queues of cached objects (SLAB approach).
> diff --git a/mm/slub.c b/mm/slub.c
> index 825ff4505336..7dee3d9a5843 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3614,6 +3614,42 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
>  EXPORT_SYMBOL(__kmalloc_node);
>  #endif
>
> +#ifdef CONFIG_HARDENED_USERCOPY
> +/*
> + * Rejects objects that are incorrectly sized.
> + *
> + * Returns NULL if check passes, otherwise const char * to name of cache
> + * to indicate an error.
> + */
> +const char *__check_heap_object(const void *ptr, unsigned long n,
> +				struct page *page)
> +{
> +	struct kmem_cache *s;
> +	unsigned long offset;
> +	size_t object_size;
> +
> +	/* Find object and usable object size. */
> +	s = page->slab_cache;
> +	object_size = slab_ksize(s);
> +
> +	/* Find offset within object. */
> +	offset = (ptr - page_address(page)) % s->size;
> +
> +	/* Adjust for redzone and reject if within the redzone. */
> +	if (kmem_cache_debug(s) && s->flags & SLAB_RED_ZONE) {
> +		if (offset < s->red_left_pad)
> +			return s->name;
> +		offset -= s->red_left_pad;
> +	}
> +
> +	/* Allow address range falling entirely within object size. */
> +	if (offset <= object_size && n <= object_size - offset)
> +		return NULL;
> +
> +	return s->name;
> +}
> +#endif /* CONFIG_HARDENED_USERCOPY */
> +

I compared this against what check_valid_pointer does for SLUB_DEBUG
checking. I was hoping we could reuse that function to avoid
duplication, but (a) __check_heap_object needs to allow accesses
anywhere in the object, not just at the beginning, and (b) accessing
page->objects is racy without the locking used under SLUB_DEBUG.
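
For reference, check_valid_pointer() in mm/slub.c looks roughly like the
sketch below; this is a paraphrase, not copied verbatim, so details may
differ from the exact tree. It only validates the start of an object,
and it is where the page->objects access in (b) comes from:

/*
 * Rough paraphrase of mm/slub.c:check_valid_pointer(); not verbatim,
 * shown only to illustrate points (a) and (b) above.
 */
static inline int check_valid_pointer(struct kmem_cache *s,
				      struct page *page, void *object)
{
	void *base;

	if (!object)
		return 1;

	base = page_address(page);
	object = restore_red_left(s, object);
	/* Start-of-object check only; note the page->objects read from (b). */
	if (object < base || object >= base + page->objects * s->size ||
	    (object - base) % s->size)
		return 0;

	return 1;
}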

Still, adding a ptr < page_address(page) check to __check_heap_object
(as check_valid_pointer does) would be good, to avoid generating garbage
large offsets from an out-of-range pointer and then trying to reason
about the resulting C arithmetic.
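
To make that concrete, here is a minimal userspace sketch with made-up
addresses (nothing below comes from the patch or a real slab layout)
showing how a pointer just below page_address(page) wraps into a small,
plausible-looking offset:

#include <stdio.h>

int main(void)
{
	/* Hypothetical stand-ins for page_address(page), ptr and s->size. */
	unsigned long page_start = 0xffff880000001000UL;
	unsigned long ptr        = 0xffff880000000ff0UL; /* 16 bytes before the page */
	unsigned long size       = 64;

	unsigned long delta  = ptr - page_start; /* wraps to 0xfffffffffffffff0 */
	unsigned long offset = delta % size;     /* 48: looks like a valid offset */

	printf("delta=%#lx offset=%lu\n", delta, offset);
	return 0;
}

An early "ptr < page_address(page)" return, as in the hunk below, rejects
that case before any of the modular arithmetic runs.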

diff --git a/mm/slub.c b/mm/slub.c
index 7dee3d9..5370e4f 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3632,6 +3632,9 @@ const char *__check_heap_object(const void *ptr, unsigned long n,
         s = page->slab_cache;
         object_size = slab_ksize(s);
  
+       if (ptr < page_address(page))
+               return s->name;
+
         /* Find offset within object. */
         offset = (ptr - page_address(page)) % s->size;
  

With that, you can add

Reviewed-by: Laura Abbott <labbott@redhat.com>

>  static size_t __ksize(const void *object)
>  {
>  	struct page *page;
>

Thanks,
Laura
