From: Kees Cook
Date: Mon, 25 Jul 2016 10:50:05 -0700
Subject: Re: [PATCH v4 00/12] mm: Hardened usercopy
To: Laura Abbott
Cc: "kernel-hardening@lists.openwall.com", Balbir Singh, Daniel Micay,
    Josh Poimboeuf, Rik van Riel, Casey Schaufler, PaX Team, Brad Spengler,
    Russell King, Catalin Marinas, Will Deacon, Ard Biesheuvel,
    Benjamin Herrenschmidt, Michael Ellerman, Tony Luck, Fenghua Yu,
    "David S. Miller", x86@kernel.org, Christoph Lameter, Pekka Enberg,
    David Rientjes, Joonsoo Kim, Andrew Morton, Andy Lutomirski,
    Borislav Petkov, Mathias Krause, Jan Kara, Vitaly Wool,
    Andrea Arcangeli, Dmitry Vyukov, linux-arm-kernel@lists.infradead.org,
    linux-ia64@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, sparclinux,
    linux-arch, Linux-MM, LKML

On Fri, Jul 22, 2016 at 5:36 PM, Laura Abbott wrote:
> On 07/20/2016 01:26 PM, Kees Cook wrote:
>>
>> Hi,
>>
>> [This is now in my kspp -next tree, though I'd really love to add some
>> additional explicit Tested-bys, Reviewed-bys, or Acked-bys. If you've
>> looked through any part of this or have done any testing, please consider
>> sending an email with your "*-by:" line. :)]
>>
>> This is a start of the mainline port of PAX_USERCOPY[1]. After writing
>> tests (now in lkdtm in -next) for Casey's earlier port[2], I kept tweaking
>> things further and further until I ended up with a whole new patch series.
>> To that end, I took Rik's, Laura's, and other people's feedback along
>> with additional changes and clean-ups.
>>
>> Based on my understanding, PAX_USERCOPY was designed to catch a
>> few classes of flaws (mainly bad bounds checking) around the use of
>> copy_to_user()/copy_from_user(). These changes don't touch get_user()
>> and put_user(), since those operate on constant-sized lengths and tend
>> to be much less vulnerable. There are effectively three distinct
>> protections in the whole series, each of which I've given a separate
>> CONFIG, though this patch set is only the first of the three intended
>> protections. (Generally speaking, PAX_USERCOPY covers what I'm calling
>> CONFIG_HARDENED_USERCOPY (this) and CONFIG_HARDENED_USERCOPY_WHITELIST
>> (future), and PAX_USERCOPY_SLABS covers
>> CONFIG_HARDENED_USERCOPY_SPLIT_KMALLOC (future).)
>>
>> This series, which adds CONFIG_HARDENED_USERCOPY, checks that objects
>> being copied to/from userspace meet certain criteria:
>> - if the address is a heap object, the size must not exceed the object's
>>   allocated size. (This will catch all kinds of heap overflow flaws.)
>> - if the address range is in the current process stack, it must be
>>   within a valid stack frame (if such checking is possible) or at least
>>   entirely within the current process's stack. (This could catch large
>>   lengths that would have extended beyond the current process stack, or
>>   overflows if their length extends back into the original stack.)
>> - if the address range is part of kernel data, rodata, or bss, allow it.
>> - if the address range is page-allocated, it must not span multiple
>>   allocations (excepting Reserved and CMA pages).
>> - if the address is within the kernel text, reject it.
>> - everything else is accepted.
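>>
>> As a rough sketch (not the literal code from patch 3 -- the helper
>> names below just mirror the list above, with each helper filled in by
>> later patches in the series), the check sequence amounts to:
>>
>>         /* Sketch: run before the copy in copy_to/from_user(). */
>>         void __check_object_size(const void *ptr, unsigned long n,
>>                                  bool to_user)
>>         {
>>                 if (!n)
>>                         return;
>>
>>                 /* Heap object: n must fit in the allocated size
>>                  * (the SLAB/SLUB hooks at the end of the series). */
>>                 if (check_heap_object(ptr, n, to_user))
>>                         BUG();
>>
>>                 /* Stack object: inside a valid frame if the arch can
>>                  * tell, else entirely inside the process stack. */
>>                 if (check_stack_object(ptr, n) == BAD_STACK)
>>                         BUG();
>>
>>                 /* Kernel text is never a valid copy target. */
>>                 if (check_kernel_text_object(ptr, n))
>>                         BUG();
>>
>>                 /* data/rodata/bss and everything else falls through. */
>>         }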
>>
>> The patches in the series are:
>> - Support for examination of CMA page types:
>>         1- mm: Add is_migrate_cma_page
>> - Support for arch-specific stack frame checking (which will likely be
>>   replaced in the future by Josh's more comprehensive unwinder):
>>         2- mm: Implement stack frame object validation
>> - The core copy_to/from_user() checks, without the slab object checks:
>>         3- mm: Hardened usercopy
>> - Per-arch enablement of the protection:
>>         4- x86/uaccess: Enable hardened usercopy
>>         5- ARM: uaccess: Enable hardened usercopy
>>         6- arm64/uaccess: Enable hardened usercopy
>>         7- ia64/uaccess: Enable hardened usercopy
>>         8- powerpc/uaccess: Enable hardened usercopy
>>         9- sparc/uaccess: Enable hardened usercopy
>>         10- s390/uaccess: Enable hardened usercopy
>> - The heap allocator implementation of object size checking:
>>         11- mm: SLAB hardened usercopy support
>>         12- mm: SLUB hardened usercopy support
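>>
>> For patch 2, the x86 frame checker walks the saved frame pointers
>> (under CONFIG_FRAME_POINTER) and requires the object to sit entirely
>> within one frame's local-variable area. Roughly (a sketch, not the
>> patch itself; enum names approximate the v3 "enums for the stack
>> check return"):
>>
>>         enum { NOT_STACK, GOOD_FRAME, BAD_STACK };
>>
>>         static int arch_within_stack_frames(const void *stack,
>>                                             const void *stackend,
>>                                             const void *obj,
>>                                             unsigned long len)
>>         {
>>                 const void *oldframe = __builtin_frame_address(1);
>>                 const void *frame = oldframe ?
>>                                     __builtin_frame_address(2) : NULL;
>>
>>                 /*
>>                  * low ------------------------------------------> high
>>                  * [saved bp][saved ip][args][locals][saved bp][saved ip]
>>                  *                     ^-----------^
>>                  *             allow copies only within here
>>                  */
>>                 while (stack <= frame && frame < stackend) {
>>                         /* Object ends inside this frame? Then it must
>>                          * also start past the previous frame's saved
>>                          * bp/ip pair. */
>>                         if (obj + len <= frame)
>>                                 return obj >= oldframe + 2 * sizeof(void *)
>>                                         ? GOOD_FRAME : BAD_STACK;
>>                         oldframe = frame;
>>                         frame = *(const void * const *)frame;
>>                 }
>>                 return BAD_STACK;
>>         }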
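>>
>> And for patches 11/12, the allocator hook boils down to finding the
>> pointer's offset within its slab object and checking that the copy
>> fits inside that object. A SLUB-flavored sketch (red-zone adjustment
>> from the v2 notes elided; field names approximate):
>>
>>         /* Returns NULL if the copy is fine, else a cache name to
>>          * report in the Oops. */
>>         const char *__check_heap_object(const void *ptr, unsigned long n,
>>                                         struct page *page)
>>         {
>>                 struct kmem_cache *s = page->slab_cache;
>>                 unsigned long offset;
>>
>>                 /* Reject pointers below the slab page. */
>>                 if (ptr < page_address(page))
>>                         return s->name;
>>
>>                 /* Offset of the pointer within its object. */
>>                 offset = (ptr - page_address(page)) % s->size;
>>
>>                 /* Allow only ranges entirely within the object. */
>>                 if (offset <= s->object_size &&
>>                     n <= s->object_size - offset)
>>                         return NULL;
>>
>>                 return s->name;
>>         }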
>>
>> Some notes:
>>
>> - This is expected to apply on top of -next, which contains fixes for
>>   the position of _etext on both arm and arm64, though it has some
>>   conflicts with KASAN that should be trivial to fix up. Also in -next
>>   are the tests for this protection (in lkdtm), prefixed with USERCOPY_.
>>
>> - I couldn't detect a measurable performance change with these features
>>   enabled. Kernel build times were unchanged, hackbench was unchanged,
>>   etc. I think we could flip this to "on by default" at some point, but
>>   for now I'm leaving it off until I can get some more definitive
>>   measurements. I would love it if someone with greater familiarity
>>   with perf could give this a spin and report results.
>>
>> - The SLOB support extracted from grsecurity seems entirely broken. I
>>   have no idea what's going on there; I spent my time testing SLAB and
>>   SLUB. Having someone else look at SLOB would be nice, but this series
>>   doesn't depend on it.
>>
>> Additional features that would be nice, but aren't blocking this series:
>>
>> - Needs more architecture support for stack frame checking (only x86
>>   now, but it seems Josh will have a good solution for this soon).
>>
>> Thanks!
>>
>> -Kees
>>
>> [1] https://grsecurity.net/download.php "grsecurity - test kernel patch"
>> [2] http://www.openwall.com/lists/kernel-hardening/2016/05/19/5
>>
>> v4:
>> - handle CMA pages, labbott
>> - update stack checker comments, labbott
>> - check for vmalloc addresses, labbott
>> - deal with KASAN in -next changing arm64 copy*user calls
>> - check for linear mappings at runtime instead of via CONFIG
>>
>> v3:
>> - switch to using BUG for better Oops integration
>> - when checking page allocations, check each for Reserved
>> - use enums for the stack check return for readability
>>
>> v2:
>> - added s390 support
>> - handle slub red zone
>> - disallow writes to rodata area
>> - stack frame walker now CONFIG-controlled arch-specific helper
>>
>
> Do you have/plan to have LKDTM or the like tests for this? I started
> reviewing the slub code and was about to write some test cases for
> myself. I did that for CMA as well, which is a decent indicator these
> should all go somewhere.

Yeah, there is an entire section of tests in lkdtm for the usercopy
protection. I didn't add anything for CMA or multipage allocations yet,
though. Feel free to add those if you have a moment! :) It's on my todo
list.

-Kees

-- 
Kees Cook
Chrome OS & Brillo Security