Date: Fri, 08 Jul 2016 20:19:58 +1000
From: Michael Ellerman
Subject: Re: [kernel-hardening] Re: [PATCH 9/9] mm: SLUB hardened usercopy support
Message-ID: <02bf4f996a3a37d4729969a7a4bc28d61dbef79d.camel@ellerman.id.au>
To: Kees Cook, "kernel-hardening@lists.openwall.com"
Cc: Jan Kara, Catalin Marinas, Will Deacon, Linux-MM, sparclinux,
    linux-ia64@vger.kernel.org, Christoph Lameter, Andrea Arcangeli,
    linux-arch, "x86@kernel.org", Russell King, PaX Team,
    Borislav Petkov, lin, Mathias Krause, Fenghua Yu, Rik van Riel,
    David Rientjes, Tony Luck, Andy Lutomirski, Joonsoo Kim,
    Dmitry Vyukov, Laura Abbott, Brad Spengler, Ard Biesheuvel, LKML,
    Pekka Enberg, Casey Schaufler, Andrew Morton,
    "linuxppc-dev@lists.ozlabs.org", "David S. Miller"

Kees Cook writes:

> On Thu, Jul 7, 2016 at 12:35 AM, Michael Ellerman wrote:
>> I gave this a quick spin on powerpc, it blew up immediately :)
>
> Wheee :) This series is rather easy to test: blows up REALLY quickly
> if it's wrong. ;)

Better than subtle race conditions, which is the usual. :)

>> diff --git a/mm/slub.c b/mm/slub.c
>> index 0c8ace04f075..66191ea4545a 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -3630,6 +3630,9 @@ const char *__check_heap_object(const void *ptr, unsigned long n,
>>  	/* Find object. */
>>  	s = page->slab_cache;
>>
>> +	/* Subtract red zone if enabled */
>> +	ptr = restore_red_left(s, ptr);
>> +
>
> Ah, interesting. Just to make sure: you've built with
> CONFIG_SLUB_DEBUG and either CONFIG_SLUB_DEBUG_ON, or booted with
> either slub_debug or slub_debug=z?

Yeah, built with CONFIG_SLUB_DEBUG_ON, and booted with and without the
slub_debug options.

> Thanks for the slub fix!
>
> I wonder if this code should be using size_from_object() instead of
> s->size?

Hmm, not sure. Who's the SLUB maintainer? :)

I was modelling it on the logic in check_valid_pointer(), which also
does the restore_red_left() and then checks the offset modulo s->size:

static inline int check_valid_pointer(struct kmem_cache *s,
				struct page *page, void *object)
{
	void *base;

	if (!object)
		return 1;

	base = page_address(page);
	object = restore_red_left(s, object);
	if (object < base || object >= base + page->objects * s->size ||
	    (object - base) % s->size) {
		return 0;
	}

	return 1;
}

cheers
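
A note for readers following the restore_red_left() /
size_from_object() discussion above: both helpers live in mm/slub.c
and came in with the left-redzone debugging support. The sketch below
is paraphrased from memory of roughly v4.7-era code, so treat the
exact return types and spelling as assumptions and consult the tree
for the authoritative definitions. The point is that with
SLAB_RED_ZONE enabled, s->size covers the left red zone padding too,
which is why the usercopy check has to back the pointer up before
doing object-boundary arithmetic:

	/*
	 * Undo the left red zone so 'p' points at the true start of
	 * the slab object. (Illustrative sketch, not verbatim.)
	 */
	static inline void *restore_red_left(struct kmem_cache *s, void *p)
	{
		if (s->flags & SLAB_RED_ZONE)
			p -= s->red_left_pad;

		return p;
	}

	/*
	 * Per-object stride excluding the left red zone -- the value
	 * Kees suggests using in place of bare s->size.
	 */
	static inline size_t size_from_object(struct kmem_cache *s)
	{
		if (s->flags & SLAB_RED_ZONE)
			return s->size - s->red_left_pad;

		return s->size;
	}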