From mboxrd@z Thu Jan  1 00:00:00 1970
Reply-To: kernel-hardening@lists.openwall.com
MIME-Version: 1.0
Sender: keescook@google.com
In-Reply-To:
References: <1468619065-3222-1-git-send-email-keescook@chromium.org>
 <1468619065-3222-3-git-send-email-keescook@chromium.org>
From: Kees Cook
Date: Tue, 19 Jul 2016 15:55:54 -0700
Message-ID:
Content-Type: text/plain; charset=UTF-8
Subject: [kernel-hardening] Re: [PATCH v3 02/11] mm: Hardened usercopy
To: Laura Abbott
Cc: LKML, Balbir Singh, Daniel Micay, Josh Poimboeuf, Rik van Riel,
 Casey Schaufler, PaX Team, Brad Spengler, Russell King, Catalin Marinas,
 Will Deacon, Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman,
 Tony Luck, Fenghua Yu, "David S. Miller", "x86@kernel.org",
 Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
 Andrew Morton, Andy Lutomirski, Borislav Petkov, Mathias Krause,
 Jan Kara, Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov, Laura Abbott,
 "linux-arm-kernel@lists.infradead.org", linux-ia64@vger.kernel.org,
 "linuxppc-dev@lists.ozlabs.org", sparclinux, linux-arch, Linux-MM,
 "kernel-hardening@lists.openwall.com"
List-ID:

On Tue, Jul 19, 2016 at 12:12 PM, Kees Cook wrote:
> On Mon, Jul 18, 2016 at 6:52 PM, Laura Abbott wrote:
>> On 07/15/2016 02:44 PM, Kees Cook wrote:
>>> +static inline const char *check_heap_object(const void *ptr, unsigned long n,
>>> +                                            bool to_user)
>>> +{
>>> +       struct page *page, *endpage;
>>> +       const void *end = ptr + n - 1;
>>> +
>>> +       if (!virt_addr_valid(ptr))
>>> +               return NULL;
>>> +
>>
>> virt_addr_valid returns true on vmalloc addresses on arm64 which causes some
>> intermittent false positives (tab completion in a qemu buildroot environment
>> was showing it fairly reliably). I think this is an arm64 bug because
>> virt_addr_valid should return true if and only if virt_to_page returns the
>> corresponding page. We can work around this for now by explicitly
>> checking against is_vmalloc_addr.
>
> Hrm, that's weird. Sounds like a bug too, but I'll add a check for
> is_vmalloc_addr() to catch it for now.

BTW, if you were testing against -next, KASAN moved things around in
copy_*_user() in a way I wasn't expecting (__copy* and copy* now both
call __arch_copy* instead of copy* calling __copy*). I'll have this
fixed in the next version.

-Kees

-- 
Kees Cook
Chrome OS & Brillo Security
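
[Editorial note: below is a minimal, illustrative sketch of the workaround
discussed above, assuming the v3 shape of check_heap_object() quoted in the
review. It is not the actual follow-up patch; it only shows where an
is_vmalloc_addr() test could reject vmalloc addresses before
virt_addr_valid() can misreport them on arm64. Kernel context is assumed;
is_vmalloc_addr() and virt_addr_valid() come from <linux/mm.h>.]

    static inline const char *check_heap_object(const void *ptr, unsigned long n,
                                                bool to_user)
    {
            /*
             * Sketch of the workaround for the arm64 issue reported above:
             * virt_addr_valid() can wrongly accept vmalloc addresses there,
             * so reject them explicitly. Returning NULL means "no heap
             * object found, nothing to check", matching the existing
             * !virt_addr_valid() path.
             */
            if (is_vmalloc_addr(ptr))
                    return NULL;

            if (!virt_addr_valid(ptr))
                    return NULL;

            /*
             * ... remainder of the v3 per-allocator checks (page lookup,
             * SLAB/SLUB object bounds) would continue here ...
             */

            return NULL;
    }

With this ordering, a vmalloc address is simply skipped by the heap checks,
the same behavior the function already applies to any pointer outside the
kernel linear map, rather than tripping a false positive.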