* [PATCH v2] random: avoid mis-detecting a slow counter as a cycle counter
@ 2022-04-21 23:31 Eric Biggers
  2022-04-21 23:40 ` Jason A. Donenfeld
  0 siblings, 1 reply; 5+ messages in thread
From: Eric Biggers @ 2022-04-21 23:31 UTC (permalink / raw)
  To: Jason A. Donenfeld, Theodore Ts'o; +Cc: linux-kernel, linux-crypto

From: Eric Biggers <ebiggers@google.com>

The method that try_to_generate_entropy() uses to detect a cycle counter
is to check whether two calls to random_get_entropy() return different
values.  This is uncomfortably prone to false positives if
random_get_entropy() is a slow counter, as the two calls could return
different values if the counter happens to be on the cusp of a change.
Making things worse, the task can be preempted between the calls.

This is problematic because try_to_generate_entropy() doesn't do any
real entropy estimation later; it always credits 1 bit per loop
iteration.  To avoid crediting garbage, it relies entirely on the
preceding check for whether a cycle counter is present.

Therefore, increase the number of counter comparisons from 1 to 3, to
greatly reduce the rate of false positive cycle counter detections.
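
(Editorial illustration, not part of this patch: a minimal Linux userspace sketch in which CLOCK_MONOTONIC_COARSE stands in for a slow random_get_entropy(), comparing how often a single back-to-back comparison spuriously sees a change versus the three comparisons used below.)

    #include <stdio.h>
    #include <time.h>

    /* Millisecond-granularity stand-in for a slow random_get_entropy(). */
    static unsigned long slow_counter(void)
    {
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC_COARSE, &ts);
        return (unsigned long)ts.tv_sec * 1000UL + ts.tv_nsec / 1000000UL;
    }

    int main(void)
    {
        unsigned long one_cmp = 0, three_cmp = 0, i, trials = 1000000;

        for (i = 0; i < trials; i++) {
            unsigned long a = slow_counter(), b = slow_counter();
            unsigned long c = slow_counter(), d = slow_counter();

            if (a != b)                     /* the single comparison being replaced */
                one_cmp++;
            if (a != b && b != c && c != d) /* the patch's three comparisons */
                three_cmp++;
        }
        printf("false positives: 1 comparison %lu/%lu, 3 comparisons %lu/%lu\n",
               one_cmp, trials, three_cmp, trials);
        return 0;
    }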

Fixes: 50ee7529ec45 ("random: try to actively add entropy rather than passively wait for it")
Signed-off-by: Eric Biggers <ebiggers@google.com>
---

v2: compare with previous value rather than first one.

 drivers/char/random.c | 16 +++++++++++++---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index bf89c6f27a192..18d2d1f959683 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1382,12 +1382,22 @@ static void try_to_generate_entropy(void)
 		unsigned long entropy;
 		struct timer_list timer;
 	} stack;
+	int i;
 
+	/*
+	 * We must not proceed if we don't actually have a cycle counter.  To
+	 * detect a cycle counter, check whether random_get_entropy() returns a
+	 * new value each time.  Check this multiple times to avoid false
+	 * positives where a slow counter could be just on the cusp of a change.
+	 */
 	stack.entropy = random_get_entropy();
+	for (i = 0; i < 3; i++) {
+		unsigned long entropy = random_get_entropy();
 
-	/* Slow counter - or none. Don't even bother */
-	if (stack.entropy == random_get_entropy())
-		return;
+		if (stack.entropy == entropy)
+			return;
+		stack.entropy = entropy;
+	}
 
 	timer_setup_on_stack(&stack.timer, entropy_timer, 0);
 	while (!crng_ready() && !signal_pending(current)) {

base-commit: 939ee380b17589d026e132a1be91199409c3c934
-- 
2.35.2


* Re: [PATCH v2] random: avoid mis-detecting a slow counter as a cycle counter
  2022-04-21 23:31 [PATCH v2] random: avoid mis-detecting a slow counter as a cycle counter Eric Biggers
@ 2022-04-21 23:40 ` Jason A. Donenfeld
  2022-04-22  0:34   ` Eric Biggers
  0 siblings, 1 reply; 5+ messages in thread
From: Jason A. Donenfeld @ 2022-04-21 23:40 UTC (permalink / raw)
  To: Eric Biggers; +Cc: Theodore Ts'o, linux-kernel, linux-crypto

Hi Eric,

Thanks. This looks better.

On Thu, Apr 21, 2022 at 04:31:52PM -0700, Eric Biggers wrote:
> Therefore, increase the number of counter comparisons from 1 to 3, to
> greatly reduce the rate of false positive cycle counter detections.
> +	for (i = 0; i < 3; i++) {
> +		unsigned long entropy = random_get_entropy();
 
Wondering: why do you do 3 comparisons rather than 2? What does 3 get
you that 2 doesn't already? I thought the only real requirement was that
in the event where (a)!=(b), (b) is read as meaningfully close as
possible to when the counter changes.

Jason

* Re: [PATCH v2] random: avoid mis-detecting a slow counter as a cycle counter
  2022-04-21 23:40 ` Jason A. Donenfeld
@ 2022-04-22  0:34   ` Eric Biggers
  2022-04-22  9:42     ` Jason A. Donenfeld
  0 siblings, 1 reply; 5+ messages in thread
From: Eric Biggers @ 2022-04-22  0:34 UTC (permalink / raw)
  To: Jason A. Donenfeld; +Cc: Theodore Ts'o, linux-kernel, linux-crypto

On Fri, Apr 22, 2022 at 01:40:25AM +0200, Jason A. Donenfeld wrote:
> Hi Eric,
> 
> Thanks. This looks better.
> 
> On Thu, Apr 21, 2022 at 04:31:52PM -0700, Eric Biggers wrote:
> > Therefore, increase the number of counter comparisons from 1 to 3, to
> > greatly reduce the rate of false positive cycle counter detections.
> > +	for (i = 0; i < 3; i++) {
> > +		unsigned long entropy = random_get_entropy();
>  
> Wondering: why do you do 3 comparisons rather than 2? What does 3 get
> you that 2 doesn't already? I thought the only real requirement was that
> in the event where (a)!=(b), (b) is read as meaningfully close as
> possible to when the counter changes.
> 

On CONFIG_PREEMPT kernels this code usually runs with preemption enabled, so I
don't think it's guaranteed that any particular number of comparisons will be
sufficient, since the task could get preempted for a long time between each call
to random_get_entropy().  However, the chance of a false positive should
decrease exponentially, and should be pretty small in the first place, so 3
comparisons seems like a good number.
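
(A rough editorial back-of-the-envelope on that exponential drop, assuming the three comparisons behave more or less independently: if a single back-to-back comparison of a slow counter spuriously observes a change with probability p, then all three observing one is about p^3, so even a fairly high p = 1/100 per comparison works out to roughly one false detection per million attempts.)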

We could also disable IRQs while checking, if you'd prefer to go that route.  We
would still need 2 comparisons.
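
(For concreteness, a rough sketch of the IRQ-disabled variant described above; this is editorial, not code proposed in this thread. It keeps two comparisons, i.e. three reads, since even with interrupts off a slow counter can still straddle one change:)

    /* Editorial sketch only: IRQ-disabled variant of the slow-counter check. */
    unsigned long a, b, c, flags;

    local_irq_save(flags);
    a = random_get_entropy();
    b = random_get_entropy();
    c = random_get_entropy();
    local_irq_restore(flags);

    /* Slow counter - or none. Don't even bother. */
    if (a == b || b == c)
        return;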

- Eric

* Re: [PATCH v2] random: avoid mis-detecting a slow counter as a cycle counter
  2022-04-22  0:34   ` Eric Biggers
@ 2022-04-22  9:42     ` Jason A. Donenfeld
  2022-04-22 13:24       ` Jason A. Donenfeld
  0 siblings, 1 reply; 5+ messages in thread
From: Jason A. Donenfeld @ 2022-04-22  9:42 UTC (permalink / raw)
  To: Eric Biggers
  Cc: Theodore Ts'o, linux-kernel, linux-crypto, Linus Torvalds

Hi Eric,

On Thu, Apr 21, 2022 at 05:34:58PM -0700, Eric Biggers wrote:
> On Fri, Apr 22, 2022 at 01:40:25AM +0200, Jason A. Donenfeld wrote:
> > Hi Eric,
> > 
> > Thanks. This looks better.
> > 
> > On Thu, Apr 21, 2022 at 04:31:52PM -0700, Eric Biggers wrote:
> > > Therefore, increase the number of counter comparisons from 1 to 3, to
> > > greatly reduce the rate of false positive cycle counter detections.
> > > +	for (i = 0; i < 3; i++) {
> > > +		unsigned long entropy = random_get_entropy();
> >  
> > Wondering: why do you do 3 comparisons rather than 2? What does 3 get
> > you that 2 doesn't already? I thought the only real requirement was that
> > in the event where (a)!=(b), (b) is read as meaningfully close as
> > possible to when the counter changes.
> > 
> 
> On CONFIG_PREEMPT kernels this code usually runs with preemption enabled, so I
> don't think it's guaranteed that any particular number of comparisons will be
> sufficient, since the task could get preempted for a long time between each call
> to random_get_entropy().  However, the chance of a false positive should
> decrease exponentially, and should be pretty small in the first place, so 3
> comparisons seems like a good number.

Ahh, I see. So you check three times instead of disabling
preemption/irqs, which would be awfully heavyweight. Seems like a
reasonable compromise.

By the way, I was thinking about the assumptions we're making with this
comparison ("two adjacent counters shouldn't be the same") in the
context of this idea from my first reply to you:

    static void entropy_timer(struct timer_list *t)
    {
        struct timer_state *s = container_of(...t...);
        if (++s->samples == s->samples_per_bit) {
            credit_entropy_bits(1);
            s->samples = 0;
        }
    }

A naive approach that strikes me as strictly _no worse_ than what we
currently have would be to say that right now we require every counter
to be different in order to credit every time. If every other counter is
different, then we should credit every other time. If every third
counter is different, we should credit every third time. And so forth.
While that simple logic isn't some sort of fancy realtime FFT thing, it
also doesn't appear on its surface to be relying on assumptions that
we're not already making. I think? It has flaws -- it doesn't account
for the possibility that while the counter changes, it's way too uniform
in how it changes -- but neither does the current technique. So while
it's not the end goal of actually looking at this through some
statistical lens, it feels like an improvement on what we have now with
little complication.

If that seems convincing, what do you make of the below snippet?

Jason

------------8<--------------------------------------------------------------

diff --git a/drivers/char/random.c b/drivers/char/random.c
index bf89c6f27a19..cabba031cbaf 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1354,6 +1354,12 @@ void add_interrupt_randomness(int irq)
 }
 EXPORT_SYMBOL_GPL(add_interrupt_randomness);
 
+struct entropy_timer_state {
+	unsigned long entropy;
+	struct timer_list timer;
+	unsigned int samples, samples_per_bit;
+};
+
 /*
  * Each time the timer fires, we expect that we got an unpredictable
  * jump in the cycle counter. Even if the timer is running on another
@@ -1367,9 +1373,14 @@ EXPORT_SYMBOL_GPL(add_interrupt_randomness);
  *
  * So the re-arming always happens in the entropy loop itself.
  */
-static void entropy_timer(struct timer_list *t)
+static void entropy_timer(struct timer_list *timer)
 {
-	credit_entropy_bits(1);
+	struct entropy_timer_state *state = container_of(timer, struct entropy_timer_state, timer);
+
+	if (++state->samples == state->samples_per_bit) {
+		credit_entropy_bits(1);
+		state->samples = 0;
+	}
 }
 
 /*
@@ -1378,16 +1389,26 @@ static void entropy_timer(struct timer_list *t)
  */
 static void try_to_generate_entropy(void)
 {
-	struct {
-		unsigned long entropy;
-		struct timer_list timer;
-	} stack;
+	enum { NUM_TRIALS = 2048, MAX_BITS_PER_SAMPLE = 256 };
+	struct entropy_timer_state stack;
+	unsigned int i, num_different = 1;
 
-	stack.entropy = random_get_entropy();
-
-	/* Slow counter - or none. Don't even bother */
-	if (stack.entropy == random_get_entropy())
+	unsigned long *trials = kmalloc_array(NUM_TRIALS, sizeof(*trials), GFP_KERNEL);
+	if (!trials)
 		return;
+	for (i = 0; i < NUM_TRIALS; ++i)
+		trials[i] = random_get_entropy();
+	for (i = 0; i < NUM_TRIALS - 1; ++i) {
+		if (trials[i] != trials[i + 1])
+			++num_different;
+	}
+	mix_pool_bytes(trials, NUM_TRIALS * sizeof(*trials));
+	kfree(trials);
+	stack.samples_per_bit = DIV_ROUND_UP(NUM_TRIALS, num_different);
+	if (stack.samples_per_bit > MAX_BITS_PER_SAMPLE)
+		return;
+	stack.samples = 0;
+	stack.entropy = random_get_entropy();
 
 	timer_setup_on_stack(&stack.timer, entropy_timer, 0);
 	while (!crng_ready() && !signal_pending(current)) {
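
(Editorial worked example of the crediting rate the snippet produces, using its own constants: if roughly one in four adjacent trial pairs differ, num_different lands near 512, samples_per_bit = DIV_ROUND_UP(2048, 512) = 4, and entropy_timer() then credits one bit on every fourth firing rather than on every firing, matching the "credit every Nth time" idea above.)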


* Re: [PATCH v2] random: avoid mis-detecting a slow counter as a cycle counter
  2022-04-22  9:42     ` Jason A. Donenfeld
@ 2022-04-22 13:24       ` Jason A. Donenfeld
  0 siblings, 0 replies; 5+ messages in thread
From: Jason A. Donenfeld @ 2022-04-22 13:24 UTC (permalink / raw)
  To: Eric Biggers
  Cc: Theodore Ts'o, linux-kernel, linux-crypto, Linus Torvalds

On Fri, Apr 22, 2022 at 11:42:04AM +0200, Jason A. Donenfeld wrote:
> Hi Eric,
> 
> On Thu, Apr 21, 2022 at 05:34:58PM -0700, Eric Biggers wrote:
> > On Fri, Apr 22, 2022 at 01:40:25AM +0200, Jason A. Donenfeld wrote:
> > > Hi Eric,
> > > 
> > > Thanks. This looks better.
> > > 
> > > On Thu, Apr 21, 2022 at 04:31:52PM -0700, Eric Biggers wrote:
> > > > Therefore, increase the number of counter comparisons from 1 to 3, to
> > > > greatly reduce the rate of false positive cycle counter detections.
> > > > +	for (i = 0; i < 3; i++) {
> > > > +		unsigned long entropy = random_get_entropy();
> > >  
> > > Wondering: why do you do 3 comparisons rather than 2? What does 3 get
> > > you that 2 doesn't already? I thought the only real requirement was that
> > > in the event where (a)!=(b), (b) is read as meaningfully close as
> > > possible to when the counter changes.
> > > 
> > 
> > On CONFIG_PREEMPT kernels this code usually runs with preemption enabled, so I
> > don't think it's guaranteed that any particular number of comparisons will be
> > sufficient, since the task could get preempted for a long time between each call
> > to random_get_entropy().  However, the chance of a false positive should
> > decrease exponentially, and should be pretty small in the first place, so 3
> > comparisons seems like a good number.
> 
> Ahh, I see. So you check three times instead of disabling
> preemption/irqs, which would be awfully heavyweight. Seems like a
> reasonable compromise.
> 
> By the way, I was thinking about the assumptions we're making with this
> comparison ("two adjacent counters shouldn't be the same") in the
> context of this idea from my first reply to you:

Rather than buggy inline email code, I made a real patch out of it for
your consideration:
https://lore.kernel.org/linux-crypto/20220422132027.1267060-1-Jason@zx2c4.com/

Jason
