From: Michael Ellerman <mpe@ellerman.id.au>
To: linux-kernel@vger.kernel.org
Cc: Jan Kara <jack@suse.cz>,
kernel-hardening@lists.openwall.com,
Catalin Marinas <catalin.marinas@arm.com>,
Will Deacon <will.deacon@arm.com>,
linux-mm@kvack.org, sparclinux@vger.kernel.org,
linux-ia64@vger.kernel.org, Christoph Lameter <cl@linux.com>,
Andrea Arcangeli <aarcange@redhat.com>,
linux-arch@vger.kernel.org, x86@kernel.org,
Russell King <linux@armlinux.org.uk>,
linux-arm-kernel@lists.infradead.org,
PaX Team <pageexec@freemail.hu>, Borislav Petkov <bp@suse.de>,
Mathias Krause <minipli@googlemail.com>,
Fenghua Yu <fenghua.yu@intel.com>, Rik van Riel <riel@redhat.com>,
Kees Cook <keescook@chromium.org>,
David Rientjes <rientjes@google.com>,
Tony Luck <tony.luck@intel.com>,
Andy Lutomirski <luto@kernel.org>,
Joonsoo Kim <iamjoonsoo.kim@lge.com>,
Dmitry Vyukov <dvyukov@google.com>,
Laura Abbott <labbott@fedoraproject.org>,
Brad Spengler <spender@grsecurity.net>,
Ard Biesheuvel
Subject: Re: [PATCH 9/9] mm: SLUB hardened usercopy support
Date: Thu, 07 Jul 2016 14:35:17 +1000 [thread overview]
Message-ID: <20403.5479379401$1467866197@news.gmane.org> (raw)
In-Reply-To: <1467843928-29351-10-git-send-email-keescook@chromium.org>
Kees Cook <keescook@chromium.org> writes:
> Under CONFIG_HARDENED_USERCOPY, this adds object size checking to the
> SLUB allocator to catch any copies that may span objects.
>
> Based on code from PaX and grsecurity.
>
> Signed-off-by: Kees Cook <keescook@chromium.org>
> diff --git a/mm/slub.c b/mm/slub.c
> index 825ff4505336..0c8ace04f075 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3614,6 +3614,33 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
> EXPORT_SYMBOL(__kmalloc_node);
> #endif
>
> +#ifdef CONFIG_HARDENED_USERCOPY
> +/*
> + * Rejects objects that are incorrectly sized.
> + *
> + * Returns NULL if check passes, otherwise const char * to name of cache
> + * to indicate an error.
> + */
> +const char *__check_heap_object(const void *ptr, unsigned long n,
> +				struct page *page)
> +{
> +	struct kmem_cache *s;
> +	unsigned long offset;
> +
> +	/* Find object. */
> +	s = page->slab_cache;
> +
> +	/* Find offset within object. */
> +	offset = (ptr - page_address(page)) % s->size;
> +
> +	/* Allow address range falling entirely within object size. */
> +	if (offset <= s->object_size && n <= s->object_size - offset)
> +		return NULL;
> +
> +	return s->name;
> +}
I gave this a quick spin on powerpc, and it blew up immediately :)
Brought up 16 CPUs
usercopy: kernel memory overwrite attempt detected to c0000001fe023868 (kmalloc-16) (9 bytes)
CPU: 8 PID: 103 Comm: kdevtmpfs Not tainted 4.7.0-rc3-00098-g09d9556ae5d1 #55
Call Trace:
[c0000001fa0cfb40] [c0000000009bdbe8] dump_stack+0xb0/0xf0 (unreliable)
[c0000001fa0cfb80] [c00000000029cf44] __check_object_size+0x74/0x320
[c0000001fa0cfc00] [c00000000005d4d0] copy_from_user+0x60/0xd4
[c0000001fa0cfc40] [c00000000022b6cc] memdup_user+0x5c/0xf0
[c0000001fa0cfc80] [c00000000022b90c] strndup_user+0x7c/0x110
[c0000001fa0cfcc0] [c0000000002d6c28] SyS_mount+0x58/0x180
[c0000001fa0cfd10] [c0000000005ee908] devtmpfsd+0x98/0x210
[c0000001fa0cfd80] [c0000000000df810] kthread+0x110/0x130
[c0000001fa0cfe30] [c0000000000095e8] ret_from_kernel_thread+0x5c/0x74
SLUB tracing says:
TRACE kmalloc-16 alloc 0xc0000001fe023868 inuse=186 fp=0x (null)
Which is not 16-byte aligned, which seems to be caused by the red zone?
The following patch fixes it for me, but I don't know SLUB well enough
to say whether it's always correct.
diff --git a/mm/slub.c b/mm/slub.c
index 0c8ace04f075..66191ea4545a 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3630,6 +3630,9 @@ const char *__check_heap_object(const void *ptr, unsigned long n,
/* Find object. */
s = page->slab_cache;
+ /* Subtract red zone if enabled */
+ ptr = restore_red_left(s, ptr);
+
/* Find offset within object. */
offset = (ptr - page_address(page)) % s->size;
cheers
_______________________________________________
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev
Thread overview: 56+ messages
2016-07-06 22:25 [PATCH 0/9] mm: Hardened usercopy Kees Cook
2016-07-06 22:25 ` [PATCH 1/9] " Kees Cook
2016-07-07 5:37 ` Baruch Siach
2016-07-07 17:25 ` Kees Cook
2016-07-07 18:35 ` Baruch Siach
2016-07-07 7:42 ` Thomas Gleixner
2016-07-07 17:29 ` Kees Cook
2016-07-07 19:34 ` Thomas Gleixner
2016-07-07 8:01 ` Arnd Bergmann
2016-07-07 17:37 ` Kees Cook
2016-07-08 5:34 ` Michael Ellerman
2016-07-08 9:22 ` Arnd Bergmann
2016-07-07 16:19 ` Rik van Riel
2016-07-07 16:35 ` Rik van Riel
2016-07-07 17:41 ` Kees Cook
2016-07-06 22:25 ` [PATCH 2/9] x86/uaccess: Enable hardened usercopy Kees Cook
2016-07-06 22:25 ` [PATCH 3/9] ARM: uaccess: " Kees Cook
2016-07-06 22:25 ` [PATCH 4/9] arm64/uaccess: " Kees Cook
2016-07-07 10:07 ` Mark Rutland
2016-07-07 17:19 ` Kees Cook
2016-07-06 22:25 ` [PATCH 5/9] ia64/uaccess: " Kees Cook
2016-07-06 22:25 ` [PATCH 6/9] powerpc/uaccess: " Kees Cook
2016-07-06 22:25 ` [PATCH 7/9] sparc/uaccess: " Kees Cook
2016-07-06 22:25 ` [PATCH 8/9] mm: SLAB hardened usercopy support Kees Cook
2016-07-06 22:25 ` [PATCH 9/9] mm: SLUB " Kees Cook
2016-07-07 4:35 ` Michael Ellerman [this message]
[not found] ` <577ddc18.d351190a.1fa54.ffffbe79SMTPIN_ADDED_BROKEN@mx.google.com>
2016-07-07 18:56 ` [kernel-hardening] " Kees Cook
2016-07-08 10:19 ` Michael Ellerman
2016-07-07 7:30 ` [PATCH 0/9] mm: Hardened usercopy Christian Borntraeger
2016-07-07 17:27 ` Kees Cook
2016-07-08 8:46 ` Ingo Molnar
2016-07-08 16:19 ` Linus Torvalds
2016-07-08 18:23 ` Ingo Molnar
2016-07-09 2:22 ` Laura Abbott
2016-07-09 2:44 ` Rik van Riel
2016-07-09 7:55 ` Ingo Molnar
2016-07-09 8:25 ` Ard Biesheuvel
2016-07-09 12:58 ` Laura Abbott
2016-07-09 17:03 ` Kees Cook
2016-07-09 17:01 ` Kees Cook
2016-07-09 21:27 ` Andy Lutomirski
2016-07-09 23:16 ` PaX Team
2016-07-10 9:16 ` Ingo Molnar
2016-07-10 12:03 ` PaX Team
2016-07-10 12:38 ` Andy Lutomirski
2016-07-11 18:40 ` Kees Cook
2016-07-11 18:34 ` Kees Cook