kernel-hardening.lists.openwall.com archive mirror
From: Jann Horn <jannh@google.com>
To: Andy Lutomirski <luto@kernel.org>
Cc: Kees Cook <keescook@chromium.org>,
	"Reshetova, Elena" <elena.reshetova@intel.com>,
	"Perla, Enrico" <enrico.perla@intel.com>,
	Peter Zijlstra <peterz@infradead.org>,
	"kernel-hardening@lists.openwall.com"
	<kernel-hardening@lists.openwall.com>,
	"tglx@linutronix.de" <tglx@linutronix.de>,
	"mingo@redhat.com" <mingo@redhat.com>,
	"bp@alien8.de" <bp@alien8.de>, "tytso@mit.edu" <tytso@mit.edu>
Subject: Re: [RFC PATCH] x86/entry/64: randomize kernel stack offset upon system call
Date: Thu, 21 Feb 2019 14:20:02 +0100
Message-ID: <CAG48ez3sh+qcw9X6u2M0apRdN2TJR5Z-MGQS_UcmDhje+44CSA@mail.gmail.com>
In-Reply-To: <CALCETrW84mH-jfKOCKmdODL06Bhzugck+vVt-vDe9byA27V=Jg@mail.gmail.com>

On Thu, Feb 21, 2019 at 7:38 AM Andy Lutomirski <luto@kernel.org> wrote:
> > On Feb 20, 2019, at 2:20 PM, Kees Cook <keescook@chromium.org> wrote:
> >
> > On Wed, Feb 13, 2019 at 11:52 PM Reshetova, Elena
> > <elena.reshetova@intel.com> wrote:
> >> Now back to our proposed countermeasures, given that the attacker has found a way
> >> to do a crafted overflow and overwrite:
> >>
> >>  1) The location of pt_regs is not predictable, but it can be discovered in a
> >>     ptrace-style scenario or via cache probing. If discovered, the attack succeeds
> >>     as things stand now.
> >>  2) The relative stack offset is randomized and not predictable, and cannot easily be
> >>      probed via cache or ptrace. So this is an additional hurdle in the attacker's way,
> >>      since the stack is now non-deterministic.
> >>  3) Nothing changes for this type of attack, given that the attacker's goal is not to
> >>      overwrite CS in the adjusted pt_regs. If that is the goal, then it does help.
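
For context, a very rough sketch of what randomizing the relative stack offset can
look like at syscall entry: eat a small random amount of stack with alloca() before
dispatching, so the distance from pt_regs to the handler's locals varies per syscall.
The macro name and the entropy source here are assumptions for illustration, not the
RFC patch itself:

#include <linux/random.h>

/* invoked from the syscall dispatch path, before calling the handler */
#define randomize_kstack_offset() do {					\
	u32 __offset = get_random_u32() & 0x3f0; /* 0-1008 bytes, 16-byte aligned */ \
	asm volatile("" :: "r"(__builtin_alloca(__offset)) : "memory"); \
} while (0)
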
> >>
> >>
> >> Now, to summarize:
> >>
> >> It would seem to me that:
> >>
> >> - regs->cs |= 3 on exit is a thing worth doing anyway, just because it is cheap, as Andy said, and it
> >> might make a positive difference in two out of three attack scenarios. Objections?
> >
> > I would agree, let's just do this.
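
For reference, a minimal sketch of what the regs->cs |= 3 idea could look like on the
return-to-user path -- the helper name and call site here are illustrative assumptions,
not the actual entry-code change under discussion:

#include <asm/ptrace.h>

/* called late on the exit-to-usermode path, after all pt_regs fixups */
static __always_inline void force_user_cs(struct pt_regs *regs)
{
	regs->cs |= 3;	/* force RPL 3 so IRET can only return to user mode */
}
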
>
> Thinking slightly more about this, it’s an incomplete protection.  It
> keeps an attacker from returning to kernel mode, but it does not
> protect the privileged flag bits.  I think that IOPL is the only thing
> we really care about, and doing anything useful about IOPL would be
> rather more complex, unfortunately.  I suppose we could just zero it
> and guard that with a static branch that is switched off the first
> time anyone uses iopl(3).
>
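
A rough sketch (not a real patch) of that static-branch idea -- the key and helper
names here are made up for illustration:

#include <linux/jump_label.h>
#include <asm/ptrace.h>
#include <asm/processor-flags.h>

DEFINE_STATIC_KEY_FALSE(iopl_in_use);

/* on the exit path: clear IOPL in the saved EFLAGS until iopl() is ever used */
static __always_inline void sanitize_iopl(struct pt_regs *regs)
{
	if (!static_branch_unlikely(&iopl_in_use))
		regs->flags &= ~X86_EFLAGS_IOPL;
}

/* ...and sys_iopl() would flip the key on first legitimate use: */
/*	static_branch_enable(&iopl_in_use);	*/
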
> I suppose we could also add a config option to straight-up disable
> IOPL.  I sincerely hope that no one uses it any more. Even the small
> number of semi-legit users really ought to be using ioperm() instead.

/me raises hand. iopl(3) is useful for making CLI and STI work from
userspace; I've used it for that (for testing stuff, not for anything
that has been shipped to people). Of course, that's probably a reason
to get rid of it, not to keep it. ^^
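
A toy illustration of that use (x86-only, needs root/CAP_SYS_RAWIO; purely a test
snippet, not from any shipped code):

#include <sys/io.h>
#include <err.h>

int main(void)
{
	if (iopl(3))		/* raise the I/O privilege level to 3 */
		err(1, "iopl");
	asm volatile("cli");	/* interrupts off on this CPU, from ring 3... */
	asm volatile("sti");	/* ...and back on */
	return 0;
}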

Thread overview: 34+ messages
2019-02-08 12:15 [RFC PATCH] Early version of thread stack randomization Elena Reshetova
2019-02-08 12:15 ` [RFC PATCH] x86/entry/64: randomize kernel stack offset upon system call Elena Reshetova
2019-02-08 13:05   ` Peter Zijlstra
2019-02-08 13:20     ` Reshetova, Elena
2019-02-08 14:26       ` Peter Zijlstra
2019-02-09 11:13         ` Reshetova, Elena
2019-02-09 18:25           ` Andy Lutomirski
2019-02-11  6:39             ` Reshetova, Elena
2019-02-11 15:54               ` Andy Lutomirski
2019-02-12 10:16                 ` Perla, Enrico
2019-02-14  7:52                   ` Reshetova, Elena
2019-02-19 14:47                     ` Jann Horn
2019-02-20 22:20                     ` Kees Cook
2019-02-21  6:37                       ` Andy Lutomirski
2019-02-21 13:20                         ` Jann Horn [this message]
2019-02-21 15:49                           ` Andy Lutomirski
2019-02-20 22:15                   ` Kees Cook
2019-02-20 22:53                     ` Kees Cook
2019-02-21 23:29                       ` Kees Cook
2019-02-27 11:03                         ` Reshetova, Elena
2019-02-21  9:35                     ` Perla, Enrico
2019-02-21 17:23                       ` Kees Cook
2019-02-21 17:48                         ` Perla, Enrico
2019-02-21 19:18                           ` Kees Cook
2019-02-20 21:51         ` Kees Cook
2019-02-08 15:15       ` Peter Zijlstra
2019-02-09 11:38         ` Reshetova, Elena
2019-02-09 12:09           ` Greg KH
2019-02-11  6:05             ` Reshetova, Elena
2019-02-08 16:34   ` Andy Lutomirski
2019-02-20 22:03     ` Kees Cook
2019-02-08 21:28   ` Kees Cook
2019-02-11 12:47     ` Reshetova, Elena
2019-02-20 22:04   ` Kees Cook
