* [PATCH] random: avoid mis-detecting a slow counter as a cycle counter
@ 2022-04-21 19:29 Eric Biggers
  2022-04-21 20:20 ` Jason A. Donenfeld
From: Eric Biggers @ 2022-04-21 19:29 UTC
  To: Theodore Ts'o, Jason A. Donenfeld; +Cc: linux-kernel, linux-crypto

From: Eric Biggers <ebiggers@google.com>

The method that try_to_generate_entropy() uses to detect a cycle counter
is to check whether two calls to random_get_entropy() return different
values.  This is uncomfortably prone to false positives if
random_get_entropy() is a slow counter, as the two calls could return
different values if the counter happens to be on the cusp of a change.
Making things worse, the task can be preempted between the calls.

This is problematic because try_to_generate_entropy() doesn't do any
real entropy estimation later; it always credits 1 bit per loop
iteration.  To avoid crediting garbage, it relies entirely on the
preceding check for whether a cycle counter is present.

Therefore, increase the number of counter comparisons from 1 to 3, to
greatly reduce the rate of false positive cycle counter detections.

Fixes: 50ee7529ec45 ("random: try to actively add entropy rather than passively wait for it")
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 drivers/char/random.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index bf89c6f27a192..9647c61345573 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1382,12 +1382,20 @@ static void try_to_generate_entropy(void)
 		unsigned long entropy;
 		struct timer_list timer;
 	} stack;
+	int i;
 
 	stack.entropy = random_get_entropy();
 
-	/* Slow counter - or none. Don't even bother */
-	if (stack.entropy == random_get_entropy())
-		return;
+	/*
+	 * We must not proceed if we don't actually have a cycle counter.  To
+	 * detect a cycle counter, check whether random_get_entropy() returns a
+	 * different value each time.  Check it multiple times to avoid false
+	 * positives where a slow counter could be just on the cusp of a change.
+	 */
+	for (i = 0; i < 3; i++) {
+		if (stack.entropy == random_get_entropy())
+			return;
+	}
 
 	timer_setup_on_stack(&stack.timer, entropy_timer, 0);
 	while (!crng_ready() && !signal_pending(current)) {

base-commit: 939ee380b17589d026e132a1be91199409c3c934
-- 
2.35.2



* Re: [PATCH] random: avoid mis-detecting a slow counter as a cycle counter
  2022-04-21 19:29 [PATCH] random: avoid mis-detecting a slow counter as a cycle counter Eric Biggers
@ 2022-04-21 20:20 ` Jason A. Donenfeld
  2022-04-21 20:49   ` Eric Biggers
From: Jason A. Donenfeld @ 2022-04-21 20:20 UTC
  To: Eric Biggers; +Cc: Theodore Ts'o, linux-kernel, linux-crypto, Andy Polyakov

Hi Eric,

On Thu, Apr 21, 2022 at 9:30 PM Eric Biggers <ebiggers@kernel.org> wrote:
> The method that try_to_generate_entropy() uses to detect a cycle counter
> is to check whether two calls to random_get_entropy() return different
> values.  This is uncomfortably prone to false positives if
> random_get_entropy() is a slow counter, as the two calls could return
> different values if the counter happens to be on the cusp of a change.
> Making things worse, the task can be preempted between the calls.
>
> This is problematic because try_to_generate_entropy() doesn't do any
> real entropy estimation later; it always credits 1 bit per loop
> iteration.  To avoid crediting garbage, it relies entirely on the
> preceding check for whether a cycle counter is present.
>
> Therefore, increase the number of counter comparisons from 1 to 3, to
> greatly reduce the rate of false positive cycle counter detections.

Thanks for the patch. It seems like this at least is not worse than
before. But before I commit this and we forget about the problem for a
while, I was also wondering if we can do much, much better than before,
and actually make this "work" with slow counters. Right now, the core
algorithm is:

    while (!crng_ready()) {
        if (no timer) mod_timer(jiffies + 1);
        mix(sample);
        schedule();    // <---- calls the timer, which does credit_entropy_bits(1)
        sample = rdtsc;
    }

So we credit 1 bit every time that timer fires. What if the timer
instead did this:

    static void entropy_timer(struct timer_list *t)
    {
        struct timer_state *s = container_of(...t...);
        if (++s->samples == s->samples_per_bit) {
            credit_entropy_bits(1);
            s->samples = 0;
        }
    }

Currently, samples_per_bit is 1. What if we make it >1 on systems with
slow cycle counters? The question then is: how do we relate some
information about cycle counter samples to the samples_per_bit estimate?
The jitter stuff in crypto/ does something. Andy (CC'd) mentioned to me
last week that he did something some time ago computing FFTs on the fly
or something like that. And maybe there are other ideas still. I wonder
if we can find something appropriate for the kernel here.
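
Purely to make the shape of that question concrete, and not as a proposal:
a naive, untested sketch could measure how many back-to-back reads fit
inside one counter tick and scale samples_per_bit from that. The helper
name and the mapping below are invented placeholders, not an entropy
estimate.

    /*
     * Hypothetical placeholder: count back-to-back reads returning the
     * same value, so a coarser counter demands more samples per credited
     * bit.  The bound of 64 is arbitrary.
     */
    static unsigned int estimate_samples_per_bit(void)
    {
        unsigned long start = random_get_entropy();
        unsigned int same = 0;

        while (random_get_entropy() == start && same < 64)
            same++;

        return clamp(same, 1U, 64U);
    }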

Any thoughts on that direction?

Jason


* Re: [PATCH] random: avoid mis-detecting a slow counter as a cycle counter
  2022-04-21 20:20 ` Jason A. Donenfeld
@ 2022-04-21 20:49   ` Eric Biggers
  2022-04-21 22:59     ` Jason A. Donenfeld
From: Eric Biggers @ 2022-04-21 20:49 UTC
  To: Jason A. Donenfeld
  Cc: Theodore Ts'o, linux-kernel, linux-crypto, Andy Polyakov

On Thu, Apr 21, 2022 at 10:20:35PM +0200, Jason A. Donenfeld wrote:
> Hi Eric,
> 
> On Thu, Apr 21, 2022 at 9:30 PM Eric Biggers <ebiggers@kernel.org> wrote:
> > The method that try_to_generate_entropy() uses to detect a cycle counter
> > is to check whether two calls to random_get_entropy() return different
> > values.  This is uncomfortably prone to false positives if
> > random_get_entropy() is a slow counter, as the two calls could return
> > different values if the counter happens to be on the cusp of a change.
> > Making things worse, the task can be preempted between the calls.
> >
> > This is problematic because try_to_generate_entropy() doesn't do any
> > real entropy estimation later; it always credits 1 bit per loop
> > iteration.  To avoid crediting garbage, it relies entirely on the
> > preceding check for whether a cycle counter is present.
> >
> > Therefore, increase the number of counter comparisons from 1 to 3, to
> > greatly reduce the rate of false positive cycle counter detections.
> 
> Thanks for the patch. It seems like this at least is not worse than
> before. But before I commit this and we forget about the problem for a
> while, I was also wondering if we can do much, much better than before,
> and actually make this "work" with slow counters. Right now, the core
> algorithm is:
> 
>     while (!crng_ready()) {
>         if (no timer) mod_timer(jiffies + 1);
>         mix(sample);
>         schedule();    // <---- calls the timer, which does credit_entropy_bits(1)
>         sample = rdtsc;
>     }
> 
> So we credit 1 bit every time that timer fires. What if the timer
> instead did this:
> 
>     static void entropy_timer(struct timer_list *t)
>     {
>         struct timer_state *s = container_of(...t...);
>         if (++s->samples == s->samples_per_bit) {
>             credit_entropy_bits(1);
>             s->samples = 0;
>         }
>     }
> 
> Currently, samples_per_bit is 1. What if we make it >1 on systems with
> slow cycle counters? The question then is: how do we relate some
> information about cycle counter samples to the samples_per_bit estimate?
> The jitter stuff in crypto/ does something. Andy (CC'd) mentioned to me
> last week that he did something some time ago computing FFTs on the fly
> or something like that. And maybe there are other ideas still. I wonder
> if we can find something appropriate for the kernel here.
> 
> Any thoughts on that direction?
> 

I think we'll need to go there eventually, along with fixing
add_timer_randomness() and add_interrupt_randomness() to credit entropy more
accurately.  I do not think there is an easy fix, though; this is mostly an open
research area.  Looking into research papers and what has been done for other
jitter entropy implementations would be useful.

- Eric


* Re: [PATCH] random: avoid mis-detecting a slow counter as a cycle counter
  2022-04-21 20:49   ` Eric Biggers
@ 2022-04-21 22:59     ` Jason A. Donenfeld
  2022-04-21 23:14       ` Eric Biggers
From: Jason A. Donenfeld @ 2022-04-21 22:59 UTC
  To: Eric Biggers; +Cc: Theodore Ts'o, linux-kernel, linux-crypto, Andy Polyakov

Hey Eric,

On Thu, Apr 21, 2022 at 01:49:54PM -0700, Eric Biggers wrote:
> I think we'll need to go there eventually, along with fixing
> add_timer_randomness() and add_interrupt_randomness() to credit entropy more
> accurately.  I do not think there is an easy fix, though; this is mostly an open
> research area.  Looking into research papers and what has been done for other
> jitter entropy implementations would be useful.

Alright, so my feeble attempt at nerd sniping you into working on this
inside of a mailing list thread didn't catch, alas. :)) But yea, I guess
this is something we'll have to look at. For add_timer_randomness(), I
actually wonder whether we could just get rid of all the estimation
stuff and credit either 1 or 0 bits per event, like all other sources.
Food for thought.

Anyway, onto your actual patch. I was just looking at this and something
didn't look right:

> +       for (i = 0; i < 3; i++) {
> +               if (stack.entropy == random_get_entropy())
> +                       return;
> +       }

So stack.entropy is set once when the function starts. Then, three times
in a row, we check whether a fresh counter reading is equal to it. But if
it's not equal on the first try, it's probably not equal on the second and
third either, right?

I suspect what you actually meant to do here is check adjacent counters,
the rationale being that on a system with a slow counter, you might be
[un]lucky and read the counter _just_ before it changes, and then the
new one differs, even though there's usually quite a large period of
time in between the two. For example:

| real time | cycle counter |
| --------- | ------------- |
| 3         | 5             |
| 4         | 5             |
| 5         | 5             |
| 6         | 5             |
| 7         | 5             | <--- a
| 8         | 6             | <--- b
| 9         | 6             | <--- c
| 10        | 6             | <--- d

If we read the counter at (a) and compare it to (b), we might be fooled
into thinking that it's a fast counter, when in reality it is not. The
solution is to also compare counter (b) to counter (c), on the theory
that if the counter is _actually_ slow, and (a)!=(b), then certainly
(b)==(c). And for this we probably only need two comparisons, not three.
What your code does is compare (a)==(b), (a)==(c), (a)==(d), but I don't
think that gives us much.

So maybe a different way of writing this is just:

    if (random_get_entropy() == (stack.entropy = random_get_entropy()) ||
        stack.entropy == (stack.entropy = random_get_entropy()))
            return;

Or at least something to that extent.
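
Spelled out as a loop, purely for illustration (untested, and not what the
patch above actually does), the same adjacent-readings idea could take
three readings in total, reusing the i and stack.entropy variables from
the patch context:

    /*
     * Illustrative sketch only: bail out unless every adjacent pair of
     * readings differs, so a single boundary crossing of a slow counter
     * is not mistaken for a fast counter.
     */
    stack.entropy = random_get_entropy();
    for (i = 0; i < 2; i++) {
        unsigned long next = random_get_entropy();

        if (stack.entropy == next)
            return;
        stack.entropy = next;
    }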

Jason


* Re: [PATCH] random: avoid mis-detecting a slow counter as a cycle counter
  2022-04-21 22:59     ` Jason A. Donenfeld
@ 2022-04-21 23:14       ` Eric Biggers
From: Eric Biggers @ 2022-04-21 23:14 UTC
  To: Jason A. Donenfeld
  Cc: Theodore Ts'o, linux-kernel, linux-crypto, Andy Polyakov

On Fri, Apr 22, 2022 at 12:59:19AM +0200, Jason A. Donenfeld wrote:
> 
> I suspect what you actually meant to do here is check adjacent counters,

Yeah, I messed that up.  I'll send a v2.

- Eric
