kernel-hardening.lists.openwall.com archive mirror
From: Andy Lutomirski <luto@amacapital.net>
To: "Reshetova, Elena" <elena.reshetova@intel.com>
Cc: Andy Lutomirski <luto@kernel.org>, Jann Horn <jannh@google.com>,
	"Perla, Enrico" <enrico.perla@intel.com>,
	Peter Zijlstra <peterz@infradead.org>,
	"kernel-hardening@lists.openwall.com"
	<kernel-hardening@lists.openwall.com>,
	"tglx@linutronix.de" <tglx@linutronix.de>,
	"mingo@redhat.com" <mingo@redhat.com>,
	"bp@alien8.de" <bp@alien8.de>,
	"keescook@chromium.org" <keescook@chromium.org>,
	"tytso@mit.edu" <tytso@mit.edu>
Subject: Re: [RFC PATCH] x86/entry/64: randomize kernel stack offset upon system call
Date: Mon, 11 Feb 2019 07:54:59 -0800	[thread overview]
Message-ID: <EBF96DB2-B25D-437B-A067-F187E031BC3E@amacapital.net> (raw)
In-Reply-To: <2236FBA76BA1254E88B949DDB74E612BA4BBA73C@IRSMSX102.ger.corp.intel.com>



On Feb 10, 2019, at 10:39 PM, Reshetova, Elena <elena.reshetova@intel.com> wrote:

>> On Sat, Feb 9, 2019 at 3:13 AM Reshetova, Elena
>> <elena.reshetova@intel.com> wrote:
>>> 
>>>> On Fri, Feb 08, 2019 at 01:20:09PM +0000, Reshetova, Elena wrote:
>>>>>> On Fri, Feb 08, 2019 at 02:15:49PM +0200, Elena Reshetova wrote:
>>>> 
>>>>>> 
>>>>>> Why can't we change the stack offset periodically from an interrupt or
>>>>>> so, and then have every later entry use that.
>>>>> 
>>>>> Hm... This sounds more complex conceptually - we cannot touch
>>>>> stack when it is in use, so we have to periodically probe for a
>>>>> good time (when process is in userspace I guess) to change it from an
>>>>> interrupt?
>>>>> IMO trampoline stack provides such a good clean place for doing it and we
>>>>> have stackleak there doing stack cleanup, so would make sense to keep
>>>>> these features operating together.
>>>> 
>>>> The idea was to just change a per-cpu (possible per-task if you ctxsw
>>>> it) offset that is used on entry to offset the stack.
>>>> So only entries after the change will have the updated offset, any
>>>> in-progress syscalls will continue with their current offset and will be
>>>> unaffected.
>>> 
>>> Let me try to write this into simple steps to make sure I understand your
>>> approach:
>>> 
>>> - create a new per-stack value (and potentially its per-cpu "shadow") called
>>>   stack_offset = 0
>>> - periodically issue an interrupt, and inside it walk the process tree and
>>>  update stack_offset randomly for each process
>>> - when a process makes a new syscall, it subtracts stack_offset value from
>>>   top_of_stack(), and that becomes its new top_of_stack() for that system call.
>>> 
>>> Something like this?
>> 
>> I'm proposing something that is conceptually different.
> 
> OK, it looks like I fully misunderstood what you meant indeed.
> The reason I didn’t reply to your earlier answer is that I started to look
> into the unwinder code & logic to get at least a slight clue about how things
> can be done, since I hadn't looked into it at all before (I wasn't changing
> anything with regard to it, so I didn't have to). So, I meant to come back
> with a more rigorous answer than just "let me study this first"...

Fair enough.

> 
>> You are, conceptually, changing the location of the stack.  I'm suggesting
>> that you leave the stack alone and, instead, randomize how you use the
>> stack.
> 
> 
> So, yes, instead of having:
> 
> allocated_stack_top
> random_offset
> actual_stack_top
> pt_regs
> ...
> and so on
> 
> We will have something like:
> 
> allocated_stack_top = actual_stack_top
> pt_regs
> random_offset
> ...
> 
> So, conceptually we have the same amount of randomization with 
> both approaches, but it is applied very differently. 

Exactly.

> 
> Security-wise, I will have to think more about whether the second approach has
> any negative consequences, in addition to positive ones. As a paranoid security
> person, you might want to merge both approaches and randomize both places
> (before and after pt_regs) with different offsets, but I guess this would be
> out of the question, right?

It’s not out of the question, but it needs some amount of cost vs benefit analysis.  The costs are complexity, speed, and a reduction in available randomness for any given amount of memory consumed.

> 
> I am not that experienced with exploits, but we have been
> talking about this with Jann and Enrico, so I think it is best if they comment
> directly here. I am just wondering if having pt_regs in a fixed place can
> be an advantage for an attacker under any scenario...

If an attacker has write-what-where (i.e. can write controlled values to controlled absolute virtual addresses), then I expect that pt_regs is a pretty low ranking target.  But it may be a fairly juicy target if you have a stack buffer overflow that lets an attacker write to a controlled *offset* from the stack. We used to keep thread_info at the bottom of the stack, and that was a great attack target.

But there’s an easier mitigation: just do regs->cs |= 3 or something like that in the exit code. Then any such attack can only corrupt *user* state.  The performance impact would be *very* low, since this could go in the asm path that’s only used for IRET to user mode.


Thread overview: 34+ messages
2019-02-08 12:15 [RFC PATCH] Early version of thread stack randomization Elena Reshetova
2019-02-08 12:15 ` [RFC PATCH] x86/entry/64: randomize kernel stack offset upon system call Elena Reshetova
2019-02-08 13:05   ` Peter Zijlstra
2019-02-08 13:20     ` Reshetova, Elena
2019-02-08 14:26       ` Peter Zijlstra
2019-02-09 11:13         ` Reshetova, Elena
2019-02-09 18:25           ` Andy Lutomirski
2019-02-11  6:39             ` Reshetova, Elena
2019-02-11 15:54               ` Andy Lutomirski [this message]
2019-02-12 10:16                 ` Perla, Enrico
2019-02-14  7:52                   ` Reshetova, Elena
2019-02-19 14:47                     ` Jann Horn
2019-02-20 22:20                     ` Kees Cook
2019-02-21  6:37                       ` Andy Lutomirski
2019-02-21 13:20                         ` Jann Horn
2019-02-21 15:49                           ` Andy Lutomirski
2019-02-20 22:15                   ` Kees Cook
2019-02-20 22:53                     ` Kees Cook
2019-02-21 23:29                       ` Kees Cook
2019-02-27 11:03                         ` Reshetova, Elena
2019-02-21  9:35                     ` Perla, Enrico
2019-02-21 17:23                       ` Kees Cook
2019-02-21 17:48                         ` Perla, Enrico
2019-02-21 19:18                           ` Kees Cook
2019-02-20 21:51         ` Kees Cook
2019-02-08 15:15       ` Peter Zijlstra
2019-02-09 11:38         ` Reshetova, Elena
2019-02-09 12:09           ` Greg KH
2019-02-11  6:05             ` Reshetova, Elena
2019-02-08 16:34   ` Andy Lutomirski
2019-02-20 22:03     ` Kees Cook
2019-02-08 21:28   ` Kees Cook
2019-02-11 12:47     ` Reshetova, Elena
2019-02-20 22:04   ` Kees Cook
