From: "Roberts, William C" <william.c.roberts@intel.com>
To: Jason Cooper <jason@lakedaemon.net>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    kernel-hardening@lists.openwall.com, akpm@linux-foundation.org,
    keescook@chromium.org, gregkh@linuxfoundation.org, nnk@google.com,
    jeffv@google.com, salyzyn@android.com, dcashman@android.com
Subject: RE: [PATCH] [RFC] Introduce mmap randomization
Date: Tue, 26 Jul 2016 20:13:23 +0000
Message-ID: <476DC76E7D1DF2438D32BFADF679FC560125F29C@ORSMSX103.amr.corp.intel.com>
References: <1469557346-5534-1-git-send-email-william.c.roberts@intel.com>
 <1469557346-5534-2-git-send-email-william.c.roberts@intel.com>
 <20160726200309.GJ4541@io.lakedaemon.net>

RESEND fixing mm-list email....

> > -----Original Message-----
> > From: Jason Cooper [mailto:jason@lakedaemon.net]
> > Sent: Tuesday, July 26, 2016 1:03 PM
> > To: Roberts, William C
> > Cc: linux-mm@vger.kernel.org; linux-kernel@vger.kernel.org;
> > kernel-hardening@lists.openwall.com; akpm@linux-foundation.org;
> > keescook@chromium.org; gregkh@linuxfoundation.org; nnk@google.com;
> > jeffv@google.com; salyzyn@android.com; dcashman@android.com
> > Subject: Re: [PATCH] [RFC] Introduce mmap randomization
> >
> > Hi William!
> >
> > On Tue, Jul 26, 2016 at 11:22:26AM -0700, william.c.roberts@intel.com wrote:
> > > From: William Roberts
> > >
> > > This patch introduces the ability to randomize mmap locations where the
> > > address is not requested, for instance when ld is allocating pages
> > > for shared libraries. It chooses to randomize based on the current
> > > personality for ASLR.
> >
> > Now I see how you found the randomize_range() fix. :-P
> >
> > > Currently, allocations are done sequentially within unmapped address
> > > space gaps. This may happen top down or bottom up depending on scheme.
> > >
> > > For instance, these mmap calls produce contiguous mappings:
> > >   int size = getpagesize();
> > >   mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40026000
> > >   mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40027000
> > >
> > > Note: no gap between them.
> > >
> > > After patches:
> > >   int size = getpagesize();
> > >   mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x400b4000
> > >   mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40055000
> > >
> > > Note: a gap between them.
> > >
> > > Using the test program mentioned here, which allocates fixed-size
> > > blocks until exhaustion:
> > > https://www.linux-mips.org/archives/linux-mips/2011-05/msg00252.html
> > > no difference was noticed in the number of allocations. Results
> > > varied from run to run, but were always within a few allocations of
> > > one another between patched and un-patched runs.
> >
> > Did you test this with different allocation sizes?
>
> No, I didn't. I wasn't sure of the best way to test this; any ideas?
>
> > > Performance Measurements:
> > > Using strace with the -T option and filtering for mmap on the program ls
> > > shows a slowdown of approximately 3.7%.
> >
> > I think it would be helpful to show the effect on the resulting object code.
>
> Do you mean the maps of the process? I have some captures for whoopsie on my
> Ubuntu system I can share.
>
> One thing I didn't make clear in my commit message is why this is good. Right
> now, if you know an address within a process, you know all offsets done with
> mmap(). For instance, an offset to libX can yield libY by adding/subtracting a
> constant offset. This is meant to make ROP a bit harder, or in general to make
> any mapping offset more difficult to find/guess.
>
> > > Signed-off-by: William Roberts
> > > ---
> > >  mm/mmap.c | 24 ++++++++++++++++++++++++
> > >  1 file changed, 24 insertions(+)
> > >
> > > diff --git a/mm/mmap.c b/mm/mmap.c
> > > index de2c176..7891272 100644
> > > --- a/mm/mmap.c
> > > +++ b/mm/mmap.c
> > > @@ -43,6 +43,7 @@
> > >  #include
> > >  #include
> > >  #include
> > > +#include
> > >
> > >  #include
> > >  #include
> > > @@ -1582,6 +1583,24 @@ unacct_error:
> > >  	return error;
> > >  }
> > >
> > > +/*
> > > + * Generate a random address within a range. This differs from randomize_addr()
> > > + * by randomizing on len sized chunks. This helps prevent fragmentation of the
> > > + * virtual memory map.
> > > + */
> > > +static unsigned long randomize_mmap(unsigned long start, unsigned long end,
> > > +				    unsigned long len)
> > > +{
> > > +	unsigned long slots;
> > > +
> > > +	if ((current->personality & ADDR_NO_RANDOMIZE) || !randomize_va_space)
> > > +		return 0;
> >
> > Couldn't we avoid checking this every time?  Say, by assigning a
> > function pointer during init?
>
> Yeah, that could be done. I just copied the way others check elsewhere in the
> kernel :-P
>
> > > +
> > > +	slots = (end - start)/len;
> > > +	if (!slots)
> > > +		return 0;
> > > +
> > > +	return PAGE_ALIGN(start + ((get_random_long() % slots) * len));
> > > +}
> > > +
> >
> > Personally, I'd prefer this function noop out based on a configuration option.
>
> Me too.
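>
> To make the slot math above concrete, here is a rough userspace mock of
> randomize_mmap() (illustration only: the PAGE_SIZE/PAGE_ALIGN defines, the
> example gap, and random() standing in for get_random_long() are all just
> stand-ins for the demo, not the kernel code):
>
>   #include <stdio.h>
>   #include <stdlib.h>
>
>   #define PAGE_SIZE     4096UL
>   #define PAGE_ALIGN(x) (((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))
>
>   /* Userspace mock: pick a random len-sized slot in [start, end). */
>   static unsigned long mock_randomize_mmap(unsigned long start,
>                                            unsigned long end,
>                                            unsigned long len)
>   {
>           unsigned long slots = (end - start) / len;
>
>           if (!slots)
>                   return 0;
>           /* random() stands in for get_random_long() here */
>           return PAGE_ALIGN(start + ((unsigned long)random() % slots) * len);
>   }
>
>   int main(void)
>   {
>           /* A 16 MiB gap and one-page requests => 4096 candidate slots. */
>           unsigned long start = 0x40000000UL, end = 0x41000000UL;
>           int i;
>
>           for (i = 0; i < 4; i++)
>                   printf("0x%lx\n", mock_randomize_mmap(start, end, PAGE_SIZE));
>           return 0;
>   }
>
> Every candidate address sits a whole number of len-sized chunks from the
> start of the gap, which is what the comment above means by preventing
> fragmentation of the virtual memory map.
>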
> > >  unsigned long unmapped_area(struct vm_unmapped_area_info *info)
> > >  {
> > >  	/*
> > > @@ -1676,6 +1695,8 @@ found:
> > >  	if (gap_start < info->low_limit)
> > >  		gap_start = info->low_limit;
> > >
> > > +	gap_start = randomize_mmap(gap_start, gap_end, length) ? : gap_start;
> > > +
> > >  	/* Adjust gap address to the desired alignment */
> > >  	gap_start += (info->align_offset - gap_start) & info->align_mask;
> > >
> > > @@ -1775,6 +1796,9 @@ found:
> > >  found_highest:
> > >  	/* Compute highest gap address at the desired alignment */
> > >  	gap_end -= info->length;
> > > +
> > > +	gap_end = randomize_mmap(gap_start, gap_end, length) ? : gap_end;
> > > +
> > >  	gap_end -= (gap_end - info->align_offset) & info->align_mask;
> > >
> > >  	VM_BUG_ON(gap_end < info->low_limit);
> >
> > I'll have to dig into the mm code more before I can comment intelligently on this.
> >
> > thx,
> >
> > Jason.
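
Regarding testing with different allocation sizes: the linux-mips test linked
above boils down to something like the sketch below (my rough reconstruction,
not the exact program from that post; the 64 KiB size and the ulimit note are
just example knobs). It mmaps fixed-size anonymous blocks until mmap() fails,
and the final counts are what get compared between patched and un-patched runs.

  #include <stdio.h>
  #include <sys/mman.h>

  int main(void)
  {
          /* Example size: rerun with 4K, 64K, 1M, ... to cover different
           * allocation sizes. Run under a modest "ulimit -v" (or on 32-bit)
           * so exhaustion happens quickly. */
          const size_t size = 64 * 1024;
          unsigned long count = 0;

          while (mmap(NULL, size, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0) != MAP_FAILED)
                  count++;

          printf("mapped %lu blocks of %zu bytes\n", count, size);
          return 0;
  }

That count comparison is what the "within a few allocations" observation in
the commit message refers to.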