From: Linus Torvalds <torvalds@linux-foundation.org>
To: Peter Zijlstra <peterz@infradead.org>
Cc: "Theodore Y. Ts'o" <tytso@mit.edu>,
	Thomas Gleixner <tglx@linutronix.de>,
	"Ahmed S. Darwish" <darwish.07@gmail.com>,
	LKML <linux-kernel@vger.kernel.org>,
	Nicholas Mc Guire <hofrat@opentech.at>,
	"the arch/x86 maintainers" <x86@kernel.org>,
	Andy Lutomirski <luto@kernel.org>,
	Kees Cook <keescook@chromium.org>
Subject: Re: x86/random: Speculation to the rescue
Date: Mon, 30 Sep 2019 10:03:46 -0700	[thread overview]
Message-ID: <CAHk-=wjC0vTOgFU=dhX5NQxF84MBaGNpoQ1M6wD=yzBEy4tzTw@mail.gmail.com> (raw)
In-Reply-To: <20190930163215.GH4519@hirez.programming.kicks-ass.net>

On Mon, Sep 30, 2019 at 9:32 AM Peter Zijlstra <peterz@infradead.org> wrote:
>
> In my experience LFSRs are good at defeating branch predictors, which
> would make even in-order cores suffer lots of branch misses. And that
> might be enough, maybe.

Agreed, branch mis-prediction is likely fairly hard to take into
account ahead of time, even on an in-order CPU.

But when you know the LFSR, and you know the architecture, you could
just re-create the timing, and have a fairly high chance of getting
the same complex pattern.
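
Just to make that concrete: an LFSR step is a pure function of its
state, so two runs from the same seed take the exact same branches in
the exact same order. A toy sketch (not anybody's actual proposal,
just an illustration):

    /* Toy 16-bit Galois LFSR: same seed => same sequence, and
     * therefore the same taken/not-taken branch pattern, run
     * after run. */
    static unsigned short lfsr16(unsigned short s)
    {
            unsigned int lsb = s & 1;

            s >>= 1;
            if (lsb)                /* fully determined by the seed */
                    s ^= 0xB400;    /* taps 16,14,13,11 */
            return s;
    }

The branch predictor may mis-predict that "if (lsb)" in complicated
ways, but it will mis-predict it in the _same_ complicated ways every
time you start from the same seed.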

And in the simple enough (i.e. bad) case - the embedded world - you
don't need to "know" anything, or do any deep analysis, or try to
predict anything ahead of time. You just look at what another
identical machine does when given the identical starting point.

So I don't think an LFSR is all that great on its own. It's
complicated to predict, and it gives odd patterns, but on an in-order
core I'm not convinced it gives sufficiently _different_ odd patterns
across boots.

This, btw, is why you shouldn't trust the "I ran the thing a billion
times" result from my PC, even if you were to have an old in-order
Atom CPU available to you. If you don't restart the whole CPU from an
identical state for each re-run, the differences you see may simply
not be real. They may be an artifact of cumulative changes to the
CPU's internal branch-prediction arrays and cache tags.

I don't think it's a huge issue if you have a real load, and you have
_any_ source of entropy at all, but I really do not think that an LFSR
is necessarily a good idea. It's just _too_ identical across reboots,
and will have very similar (but yes, complex due to branch prediction)
behavior across different runs.

Of course, in the "completely and utterly identical state and
absolutely no timing differences anywhere" situation, even my "take
timer interrupts and force at least cache misses on SMP" model doesn't
protect you from just re-running the 100% identical sequence.
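
For the record, here's the rough shape of that model as a user-space
toy (hypothetical, x86 + Linux only, made-up constants - not the
actual kernel patch): a periodic timer "interrupt" samples the cycle
counter while the main loop does cache-unfriendly work, and the
samples get folded into a small pool.

    #include <signal.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/time.h>
    #include <x86intrin.h>          /* __rdtsc() */

    static volatile uint64_t pool;

    static void tick(int sig)
    {
            (void)sig;
            /* Mix in the absolute TSC value, not a delta. */
            pool = pool * 0x9E3779B97F4A7C15ULL + __rdtsc();
    }

    int main(void)
    {
            static unsigned char buf[1 << 22];  /* bigger than L2 */
            struct itimerval it = { { 0, 1000 }, { 0, 1000 } };
            unsigned int i, j = 0;

            signal(SIGALRM, tick);
            setitimer(ITIMER_REAL, &it, NULL);  /* tick every 1ms */

            for (i = 0; i < (1u << 24); i++) {
                    /* Scattered accesses: cache misses perturb
                     * where each tick lands relative to the work. */
                    j = (j * 65537 + 1) & (sizeof(buf) - 1u);
                    buf[j]++;
            }
            printf("pool: %016llx\n", (unsigned long long)pool);
            return 0;
    }

The interesting question is exactly the one above: how different the
sampled values actually end up being, run to run, on a small in-order
core.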

But when it's a more complex load than an LFSR, I personally at least
feel better about it. An LFSR, I can well imagine, will give the
exact same (odd) timing patterns across boots even if there were
minor differences earlier. But hopefully a bigger load with a more
complex footprint will show more variation: more cache misses, more
DRAM accesses, more branch mispredicts, more "pipeline was broken in
a _slightly_ different place due to the timer".

It is also, btw, why I don't mix in TSC _differences_ when I mix
things in. I think it's better to actually mix in the TSC value
itself. Even if you re-run the LFSR, and it has the exact same branch
mis-predicts (because it's the same LFSR), if there were any timing
differences from _anything_ else before you ran that LFSR, then the
bits you'll be mixing in are different across boots. But if you mix in
the relative difference, you might be mixing in the identical bits.
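
Put differently, as a toy fragment (mix() here is just a stand-in for
whatever the real mixing function is):

    #include <stdint.h>
    #include <x86intrin.h>

    static uint64_t pool;

    static void mix(uint64_t v)     /* stand-in for the real mixer */
    {
            pool = pool * 0x9E3779B97F4A7C15ULL + v;
    }

    static void sample(void)
    {
            /* Absolute value: if *anything* earlier in the boot
             * differed by even one cycle, t0 differs, and so do
             * the mixed-in bits. */
            uint64_t t0 = __rdtsc();
            mix(t0);

            /* Difference: a shift common to both samples cancels
             * out, so two boots that diverged earlier can still
             * mix in bit-identical values. */
            uint64_t t1 = __rdtsc();
            mix(t1 - t0);
    }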

The only real difference is the initial TSC value, of course, so the
added entropy is small. But when we're talking about trying to get to
a total of 256 bits, a couple of bits here and there end up
mattering.

But no. Never any _guarantees_. There is no absolute security. Only best effort.

An OoO CPU will have a _lot_ more internal state, and a lot of things
that perturb that internal state, and that will make small changes in
timing cause more chaos in the end. Much less to worry about.

An in-order CPU will have less internal state, and so less
perturbations and sources of real entropy from small differences. We
can only hope there is _some_.

It's not like our existing "depend on external interrupt timing" is
any hard guarantee either, regardless of how long we wait or how many
external interrupts we'd get.

It's always a case of "at some point you have to make a judgement call".

And we all have different levels of comfort about where that point
ends up being.

               Linus
