Date: Fri, 08 Jul 2016 20:19:58 +1000
From: Michael Ellerman
To: Kees Cook, kernel-hardening@lists.openwall.com
Cc: Jan Kara, Catalin Marinas, Will Deacon, Linux-MM, sparclinux, linux-ia64@vger.kernel.org, Christoph Lameter, Andrea Arcangeli, linux-arch, x86@kernel.org, Russell King, PaX Team, Borislav Petkov, Mathias Krause, Fenghua Yu, Rik van Riel, David Rientjes, Tony Luck, Andy Lutomirski, Joonsoo Kim, Dmitry Vyukov, Laura Abbott, Brad Spengler, Ard Biesheuvel, LKML, Pekka Enberg, Casey Schaufler, Andrew Morton, linuxppc-dev@lists.ozlabs.org, "David S. Miller"
Subject: Re: [kernel-hardening] Re: [PATCH 9/9] mm: SLUB hardened usercopy support
Message-Id: <577f7e55.4668420a.84f17.5cb9SMTPIN_ADDED_MISSING@mx.google.com>
List-Id: Linux on PowerPC Developers Mail List

Kees Cook writes:

> On Thu, Jul 7, 2016 at 12:35 AM, Michael Ellerman wrote:
>> I gave this a quick spin on powerpc, it blew up immediately :)
>
> Wheee :) This series is rather easy to test: blows up REALLY quickly
> if it's wrong. ;)

Better than subtle race conditions, which is the usual :)

>> diff --git a/mm/slub.c b/mm/slub.c
>> index 0c8ace04f075..66191ea4545a 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -3630,6 +3630,9 @@ const char *__check_heap_object(const void *ptr, unsigned long n,
>>  	/* Find object. */
>>  	s = page->slab_cache;
>>
>> +	/* Subtract red zone if enabled */
>> +	ptr = restore_red_left(s, ptr);
>> +
>
> Ah, interesting.
> Just to make sure: you've built with CONFIG_SLUB_DEBUG and either
> CONFIG_SLUB_DEBUG_ON or booted with either slub_debug or slub_debug=z?

Yeah, built with CONFIG_SLUB_DEBUG_ON, and booted with and without
slub_debug options.

> Thanks for the slub fix!
>
> I wonder if this code should be using size_from_object() instead of
> s->size?

Hmm, not sure. Who's the SLUB maintainer? :)

I was modelling it on the logic in check_valid_pointer(), which also
does the restore_red_left() adjustment before checking the offset
modulo s->size:

static inline int check_valid_pointer(struct kmem_cache *s,
				struct page *page, void *object)
{
	void *base;

	if (!object)
		return 1;

	base = page_address(page);
	object = restore_red_left(s, object);
	if (object < base || object >= base + page->objects * s->size ||
	    (object - base) % s->size) {
		return 0;
	}

	return 1;
}

cheers