Date: Sun, 31 Jul 2016 22:24:16 +0000
From: Jason Cooper
To: kernel-hardening@lists.openwall.com
Cc: Nick Kralevich, "Roberts, William C", "linux-mm@kvack.org",
	"linux-kernel@vger.kernel.org", "akpm@linux-foundation.org",
	"keescook@chromium.org", "gregkh@linuxfoundation.org",
	"jeffv@google.com", "salyzyn@android.com", "dcashman@android.com"
Subject: Re: [kernel-hardening] Re: [PATCH] [RFC] Introduce mmap randomization
Message-ID: <20160731222416.GZ4541@io.lakedaemon.net>
In-Reply-To: <1469787002.10626.34.camel@gmail.com>
References: <1469557346-5534-1-git-send-email-william.c.roberts@intel.com>
	<1469557346-5534-2-git-send-email-william.c.roberts@intel.com>
	<20160726200309.GJ4541@io.lakedaemon.net>
	<476DC76E7D1DF2438D32BFADF679FC560125F29C@ORSMSX103.amr.corp.intel.com>
	<20160726205944.GM4541@io.lakedaemon.net>
	<20160728210734.GU4541@io.lakedaemon.net>
	<1469787002.10626.34.camel@gmail.com>

Hi Daniel,

On Fri, Jul 29, 2016 at 06:10:02AM -0400, Daniel Micay wrote:
> > > In the Project Zero Stagefright post
> > > (http://googleprojectzero.blogspot.com/2015/09/stagefrightened.html),
> > > we see that the linear allocation of memory combined with the low
> > > number of bits in the initial mmap offset resulted in a much more
> > > predictable layout which aided the attacker.  The initial random
> > > mmap base range was increased by Daniel Cashman in
> > > d07e22597d1d355829b7b18ac19afa912cf758d1, but we've done nothing
> > > to address page-relative attacks.
> > >
> > > Inter-mmap randomization will decrease the predictability of later
> > > mmap() allocations, which should help make data structures harder
> > > to find in memory.  In addition, this patch will also introduce
> > > unmapped gaps between pages, preventing linear overruns from one
> > > mapping to another.  I am unable to quantify how much this will
> > > improve security, but it should be > 0.
> >
> > One person calls "unmapped gaps between pages" a feature, others
> > call it a mess. ;-)
>
> It's very hard to quantify the benefits of fine-grained randomization,

N = # of possible addresses.  The bigger N is, the more chances the
attacker has to trip up before finding what they were looking for.

> but there are other useful guarantees you could provide.  It would be
> quite helpful for the kernel to expose the option to force a PROT_NONE
> mapping after every allocation.  The gaps should actually be enforced.
>
> So perhaps 3 things, simply exposed as off-by-default sysctl options
> (no need for special treatment on 32-bit):

I'm certainly not an mm developer, but this looks to me like we're
pushing the work of creating efficient, random mappings out to
userspace.
:-/

> a) configurable minimum gap size in pages (for protection against
> linear and small {under,over}flows)
>
> b) configurable minimum gap size based on a ratio to allocation size
> (for making the heap sparse to mitigate heap sprays, especially when
> mixed with fine-grained randomization - for example 2x would add a 2M
> gap after a 1M mapping)

mmm, this looks like an information leak.  Best to set a range of pages
and pick a random number within that range for each call.

> c) configurable maximum random gap size (the random gap would be in
> addition to the enforced minimums)
>
> The randomization could just be considered an extra with minor
> benefits rather than the whole feature.  A full fine-grained
> randomization implementation would need a higher-level form of
> randomization than gaps in the kernel, along with cooperation from
> userspace allocators.  This would make sense as one part of it, though.

Ok, so here's an idea.  It could be used in conjunction with random
gaps, or on its own, and it would be enhanced by userspace random load
order.  The benefit is that with a 32-bit address space and no random
gapping, it still doesn't waste much space.

Given a memory space, break it up into X bands such that there are 2*X
possible addresses:

  |A       B|C       D|E       F|G       H| ... |2*X-2 2*X-1|
  |-->   <--|-->   <--|-->   <--|-->   <--| ... |-->     <--|
  min                                                     max

For each call to mmap(), we randomly pick a value within [0, 2*X).
Assuming A=0 in the diagram above, even values grow up and odd values
grow down, gradually consuming the single free gap in the middle of
each band.

How many bands to use would depend on:

 * 32- vs 64-bit address space
 * average number of mmap calls
 * largest single mmap call usually seen
 * if using random gaps, the range used

If the free gap in the chosen band is too small for the request, pick
again among the other bands.

Again, I'm not an mm dev, so I might be totally smoking crack on this
one...

thx,

Jason.