linux-kernel.vger.kernel.org archive mirror
* [PATCH] random: fix locking for crng_init in crng_reseed()
@ 2022-02-09 18:57 Dominik Brodowski
  2022-02-09 21:39 ` Jason A. Donenfeld
  2022-02-21  4:05 ` Eric Biggers
  0 siblings, 2 replies; 11+ messages in thread
From: Dominik Brodowski @ 2022-02-09 18:57 UTC (permalink / raw)
  To: Jason A . Donenfeld, tytso; +Cc: linux-kernel, linux-crypto

crng_init is protected by primary_crng->lock. Therefore, we need
to hold this lock when increasing crng_init to 2. As we shouldn't
hold this lock for too long, only hold it for those parts which
require protection.

Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
---
 drivers/char/random.c |    9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index cc4d9d414df2..aee56032ebb4 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -497,6 +497,7 @@ static void crng_slow_load(const void *cp, unsigned int len)
 
 static void crng_reseed(void)
 {
+	bool complete_init = false;
 	unsigned long flags;
 	int entropy_count;
 	unsigned long next_gen;
@@ -526,12 +527,14 @@ static void crng_reseed(void)
 		++next_gen;
 	WRITE_ONCE(base_crng.generation, next_gen);
 	base_crng.birth = jiffies;
-	spin_unlock_irqrestore(&base_crng.lock, flags);
-	memzero_explicit(key, sizeof(key));
-
 	if (crng_init < 2) {
 		invalidate_batched_entropy();
 		crng_init = 2;
+		complete_init = true;
+	}
+	spin_unlock_irqrestore(&base_crng.lock, flags);
+	memzero_explicit(key, sizeof(key));
+	if (complete_init) {
 		process_random_ready_list();
 		wake_up_interruptible(&crng_init_wait);
 		kill_fasync(&fasync, SIGIO, POLL_IN);

^ permalink raw reply related	[flat|nested] 11+ messages in thread

* Re: [PATCH] random: fix locking for crng_init in crng_reseed()
  2022-02-09 18:57 [PATCH] random: fix locking for crng_init in crng_reseed() Dominik Brodowski
@ 2022-02-09 21:39 ` Jason A. Donenfeld
  2022-02-09 21:54   ` [PATCH] random: tie batched entropy generation to base_crng generation Jason A. Donenfeld
  2022-02-10  5:43   ` [PATCH] random: fix locking for crng_init in crng_reseed() Dominik Brodowski
  2022-02-21  4:05 ` Eric Biggers
  1 sibling, 2 replies; 11+ messages in thread
From: Jason A. Donenfeld @ 2022-02-09 21:39 UTC (permalink / raw)
  To: Dominik Brodowski; +Cc: Theodore Ts'o, LKML, Linux Crypto Mailing List

Hi Dominik,

Thanks, applied. I changed complete_init to finalize_init to match our
naming scheme from earlier. I also moved invalidate_batched_entropy()
outside the lock and after crng_init = 2: it uses atomics now, and it
should probably be ordered after crng_init = 2 so that the new batch gets
the new entropy.

Actually, though, come to think of it: shouldn't we always call
invalidate_batched_entropy() after reseeding? More generally, we can
instead probably tie the entropy generation counter to the base_crng
counter, and have this all done automatically. That might be something
interesting to do in the future.

Jason


* [PATCH] random: tie batched entropy generation to base_crng generation
  2022-02-09 21:39 ` Jason A. Donenfeld
@ 2022-02-09 21:54   ` Jason A. Donenfeld
  2022-02-10  6:00     ` Dominik Brodowski
  2022-02-10  5:43   ` [PATCH] random: fix locking for crng_init in crng_reseed() Dominik Brodowski
  1 sibling, 1 reply; 11+ messages in thread
From: Jason A. Donenfeld @ 2022-02-09 21:54 UTC (permalink / raw)
  To: linux-kernel; +Cc: Jason A. Donenfeld, Dominik Brodowski, Theodore Ts'o

Now that we have an explicit base_crng generation counter, we don't need
a separate one for batched entropy. Rather, we can just move the
generation forward every time we change crng_init state.

Cc: Dominik Brodowski <linux@dominikbrodowski.net>
Cc: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
---
 drivers/char/random.c | 28 +++++++---------------------
 1 file changed, 7 insertions(+), 21 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 999f1d164e72..f4d432305869 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -431,8 +431,6 @@ static DEFINE_PER_CPU(struct crng, crngs) = {
 
 static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait);
 
-static void invalidate_batched_entropy(void);
-
 /*
  * crng_fast_load() can be called by code in the interrupt service
  * path.  So we can't afford to dilly-dally. Returns the number of
@@ -455,7 +453,7 @@ static size_t crng_fast_load(const void *cp, size_t len)
 		src++; crng_init_cnt++; len--; ret++;
 	}
 	if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) {
-		invalidate_batched_entropy();
+		++base_crng.generation;
 		crng_init = 1;
 	}
 	spin_unlock_irqrestore(&base_crng.lock, flags);
@@ -536,7 +534,6 @@ static void crng_reseed(void)
 	spin_unlock_irqrestore(&base_crng.lock, flags);
 	memzero_explicit(key, sizeof(key));
 	if (finalize_init) {
-		invalidate_batched_entropy();
 		process_random_ready_list();
 		wake_up_interruptible(&crng_init_wait);
 		kill_fasync(&fasync, SIGIO, POLL_IN);
@@ -1278,7 +1275,7 @@ int __init rand_initialize(void)
 
 	extract_entropy(base_crng.key, sizeof(base_crng.key));
 	if (arch_init && trust_cpu && crng_init < 2) {
-		invalidate_batched_entropy();
+		++base_crng.generation;
 		crng_init = 2;
 		pr_notice("crng init done (trusting CPU's manufacturer)\n");
 	}
@@ -1628,8 +1625,6 @@ static int __init random_sysctls_init(void)
 device_initcall(random_sysctls_init);
 #endif	/* CONFIG_SYSCTL */
 
-static atomic_t batch_generation = ATOMIC_INIT(0);
-
 struct batched_entropy {
 	union {
 		/* We make this 1.5x a ChaCha block, so that we get the
@@ -1642,8 +1637,8 @@ struct batched_entropy {
 		u32 entropy_u32[CHACHA_BLOCK_SIZE * 3 / (2 * sizeof(u32))];
 	};
 	local_lock_t lock;
+	unsigned long generation;
 	unsigned int position;
-	int generation;
 };
 
 /*
@@ -1662,14 +1657,14 @@ u64 get_random_u64(void)
 	unsigned long flags;
 	struct batched_entropy *batch;
 	static void *previous;
-	int next_gen;
+	unsigned long next_gen;
 
 	warn_unseeded_randomness(&previous);
 
 	local_lock_irqsave(&batched_entropy_u64.lock, flags);
 	batch = raw_cpu_ptr(&batched_entropy_u64);
 
-	next_gen = atomic_read(&batch_generation);
+	next_gen = READ_ONCE(base_crng.generation);
 	if (batch->position % ARRAY_SIZE(batch->entropy_u64) == 0 ||
 	    next_gen != batch->generation) {
 		_get_random_bytes(batch->entropy_u64, sizeof(batch->entropy_u64));
@@ -1695,14 +1690,14 @@ u32 get_random_u32(void)
 	unsigned long flags;
 	struct batched_entropy *batch;
 	static void *previous;
-	int next_gen;
+	unsigned long next_gen;
 
 	warn_unseeded_randomness(&previous);
 
 	local_lock_irqsave(&batched_entropy_u32.lock, flags);
 	batch = raw_cpu_ptr(&batched_entropy_u32);
 
-	next_gen = atomic_read(&batch_generation);
+	next_gen = READ_ONCE(base_crng.generation);
 	if (batch->position % ARRAY_SIZE(batch->entropy_u32) == 0 ||
 	    next_gen != batch->generation) {
 		_get_random_bytes(batch->entropy_u32, sizeof(batch->entropy_u32));
@@ -1718,15 +1713,6 @@ u32 get_random_u32(void)
 }
 EXPORT_SYMBOL(get_random_u32);
 
-/* It's important to invalidate all potential batched entropy that might
- * be stored before the crng is initialized, which we can do lazily by
- * bumping the generation counter.
- */
-static void invalidate_batched_entropy(void)
-{
-	atomic_inc(&batch_generation);
-}
-
 /**
  * randomize_page - Generate a random, page aligned address
  * @start:	The smallest acceptable address the caller will take.
-- 
2.35.0



* Re: [PATCH] random: fix locking for crng_init in crng_reseed()
  2022-02-09 21:39 ` Jason A. Donenfeld
  2022-02-09 21:54   ` [PATCH] random: tie batched entropy generation to base_crng generation Jason A. Donenfeld
@ 2022-02-10  5:43   ` Dominik Brodowski
  2022-02-10 13:07     ` Jason A. Donenfeld
  1 sibling, 1 reply; 11+ messages in thread
From: Dominik Brodowski @ 2022-02-10  5:43 UTC (permalink / raw)
  To: Jason A. Donenfeld; +Cc: Theodore Ts'o, LKML, Linux Crypto Mailing List

Hi Jason,

On Wed, Feb 09, 2022 at 10:39:17PM +0100, Jason A. Donenfeld wrote:
> Thanks, applied. I changed complete_init to finalize_init, to match
> our naming scheme from earlier, and I moved
> invalidate_batched_entropy() to outside the lock and after
> crng_init=2, since now it uses atomics, and it should probably be
> ordered after crng_init = 2, so the new batch gets the new entropy.

Doesn't that mean that there is a small window where crng_init == 2, but
get_random_u64/get_random_u32 still return old data, with potentially
insufficient entropy (as obtained at a time when crng_init was still < 2)?
That's why I moved invalidate_batched_entropy() under the lock.

But with your subsequent patch, it doesn't matter any more.

Thanks,
	Dominik


* Re: [PATCH] random: tie batched entropy generation to base_crng generation
  2022-02-09 21:54   ` [PATCH] random: tie batched entropy generation to base_crng generation Jason A. Donenfeld
@ 2022-02-10  6:00     ` Dominik Brodowski
  2022-02-10 13:09       ` Jason A. Donenfeld
  0 siblings, 1 reply; 11+ messages in thread
From: Dominik Brodowski @ 2022-02-10  6:00 UTC (permalink / raw)
  To: Jason A. Donenfeld; +Cc: linux-kernel, Theodore Ts'o

On Wed, Feb 09, 2022 at 10:54:06PM +0100, Jason A. Donenfeld wrote:
> Now that we have an explicit base_crng generation counter, we don't need
> a separate one for batched entropy. Rather, we can just move the
> generation forward every time we change crng_init state.
> 
> Cc: Dominik Brodowski <linux@dominikbrodowski.net>
> Cc: Theodore Ts'o <tytso@mit.edu>
> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
> ---
>  drivers/char/random.c | 28 +++++++---------------------
>  1 file changed, 7 insertions(+), 21 deletions(-)
> 
> diff --git a/drivers/char/random.c b/drivers/char/random.c
> index 999f1d164e72..f4d432305869 100644
> --- a/drivers/char/random.c
> +++ b/drivers/char/random.c
> @@ -431,8 +431,6 @@ static DEFINE_PER_CPU(struct crng, crngs) = {
>  
>  static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait);
>  
> -static void invalidate_batched_entropy(void);
> -
>  /*
>   * crng_fast_load() can be called by code in the interrupt service
>   * path.  So we can't afford to dilly-dally. Returns the number of
> @@ -455,7 +453,7 @@ static size_t crng_fast_load(const void *cp, size_t len)
>  		src++; crng_init_cnt++; len--; ret++;
>  	}
>  	if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) {
> -		invalidate_batched_entropy();
> +		++base_crng.generation;
>  		crng_init = 1;
>  	}
>  	spin_unlock_irqrestore(&base_crng.lock, flags);

This will only ever increase base_crng.generation from 0 to 1, and the
proper lock is held. The base_crng.key has changed, so it's appropriate
to state that it has reached a new generation.

> @@ -536,7 +534,6 @@ static void crng_reseed(void)
>  	spin_unlock_irqrestore(&base_crng.lock, flags);
>  	memzero_explicit(key, sizeof(key));
>  	if (finalize_init) {
> -		invalidate_batched_entropy();
>  		process_random_ready_list();
>  		wake_up_interruptible(&crng_init_wait);
>  		kill_fasync(&fasync, SIGIO, POLL_IN);

In crng_reseed(), base_crng.generation is incremented above while holding
the lock, and a check ensures it never lands on ULONG_MAX. OK.

> @@ -1278,7 +1275,7 @@ int __init rand_initialize(void)
>  
>  	extract_entropy(base_crng.key, sizeof(base_crng.key));
>  	if (arch_init && trust_cpu && crng_init < 2) {
> -		invalidate_batched_entropy();
> +		++base_crng.generation;
>  		crng_init = 2;
>  		pr_notice("crng init done (trusting CPU's manufacturer)\n");
>  	}

Here we do not need to take a lock (single-threaded operation), can only be
at generation 0 or 1, and the base_crng.key has changed. Which leads me to
ask: shouldn't we increase the generation counter always (or at least if
arch_init is true)? And just make incrementing crng_init to 2 depend on
trust_cpu?

To sum it up:

	Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>

Thanks,
	Dominik


* Re: [PATCH] random: fix locking for crng_init in crng_reseed()
  2022-02-10  5:43   ` [PATCH] random: fix locking for crng_init in crng_reseed() Dominik Brodowski
@ 2022-02-10 13:07     ` Jason A. Donenfeld
  0 siblings, 0 replies; 11+ messages in thread
From: Jason A. Donenfeld @ 2022-02-10 13:07 UTC (permalink / raw)
  To: Dominik Brodowski; +Cc: Theodore Ts'o, LKML, Linux Crypto Mailing List

Hi Dominik,

Thanks, I see your point. I'll do it the way you suggested (which, as
you pointed out, goes away anyway right after).

Jason


* Re: [PATCH] random: tie batched entropy generation to base_crng generation
  2022-02-10  6:00     ` Dominik Brodowski
@ 2022-02-10 13:09       ` Jason A. Donenfeld
  2022-02-10 13:13         ` [PATCH v2] " Jason A. Donenfeld
  0 siblings, 1 reply; 11+ messages in thread
From: Jason A. Donenfeld @ 2022-02-10 13:09 UTC (permalink / raw)
  To: Dominik Brodowski; +Cc: LKML, Theodore Ts'o

Hi Dominik,

On Thu, Feb 10, 2022 at 7:04 AM Dominik Brodowski
<linux@dominikbrodowski.net> wrote:
> Here we do not need to take a lock (single-threaded operation), can only be
> at generation 0 or 1, and the base_crng.key has changed. Which leads me to
> ask: shouldn't we increase the generation counter always (or at least if
> arch_init is true)? And just make incrementing crng_init to 2 depend on
> trust_cpu?

Interesting consideration. I think incrementing the generation counter
there unconditionally can't hurt. It should be done every time the
base_crng key changes, which it clearly does there, since we're
extracting into it. I'll go ahead and do that.

Jason


* [PATCH v2] random: tie batched entropy generation to base_crng generation
  2022-02-10 13:09       ` Jason A. Donenfeld
@ 2022-02-10 13:13         ` Jason A. Donenfeld
  2022-02-21  4:13           ` Eric Biggers
  0 siblings, 1 reply; 11+ messages in thread
From: Jason A. Donenfeld @ 2022-02-10 13:13 UTC (permalink / raw)
  To: linux-kernel; +Cc: Jason A. Donenfeld, Theodore Ts'o, Dominik Brodowski

Now that we have an explicit base_crng generation counter, we don't need
a separate one for batched entropy. Rather, we can just move the
generation forward every time we change crng_init state or update the
base_crng key.

Cc: Theodore Ts'o <tytso@mit.edu>
Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
---
v2 always increments the generation after extraction, as suggested by
Dominik.

 drivers/char/random.c | 29 ++++++++---------------------
 1 file changed, 8 insertions(+), 21 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 5beb421ec12b..57d36f13e3a6 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -431,8 +431,6 @@ static DEFINE_PER_CPU(struct crng, crngs) = {
 
 static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait);
 
-static void invalidate_batched_entropy(void);
-
 /*
  * crng_fast_load() can be called by code in the interrupt service
  * path.  So we can't afford to dilly-dally. Returns the number of
@@ -455,7 +453,7 @@ static size_t crng_fast_load(const void *cp, size_t len)
 		src++; crng_init_cnt++; len--; ret++;
 	}
 	if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) {
-		invalidate_batched_entropy();
+		++base_crng.generation;
 		crng_init = 1;
 	}
 	spin_unlock_irqrestore(&base_crng.lock, flags);
@@ -530,7 +528,6 @@ static void crng_reseed(void)
 	WRITE_ONCE(base_crng.generation, next_gen);
 	base_crng.birth = jiffies;
 	if (crng_init < 2) {
-		invalidate_batched_entropy();
 		crng_init = 2;
 		finalize_init = true;
 	}
@@ -1277,8 +1274,9 @@ int __init rand_initialize(void)
 	mix_pool_bytes(utsname(), sizeof(*(utsname())));
 
 	extract_entropy(base_crng.key, sizeof(base_crng.key));
+	++base_crng.generation;
+
 	if (arch_init && trust_cpu && crng_init < 2) {
-		invalidate_batched_entropy();
 		crng_init = 2;
 		pr_notice("crng init done (trusting CPU's manufacturer)\n");
 	}
@@ -1628,8 +1626,6 @@ static int __init random_sysctls_init(void)
 device_initcall(random_sysctls_init);
 #endif	/* CONFIG_SYSCTL */
 
-static atomic_t batch_generation = ATOMIC_INIT(0);
-
 struct batched_entropy {
 	union {
 		/* We make this 1.5x a ChaCha block, so that we get the
@@ -1642,8 +1638,8 @@ struct batched_entropy {
 		u32 entropy_u32[CHACHA_BLOCK_SIZE * 3 / (2 * sizeof(u32))];
 	};
 	local_lock_t lock;
+	unsigned long generation;
 	unsigned int position;
-	int generation;
 };
 
 /*
@@ -1662,14 +1658,14 @@ u64 get_random_u64(void)
 	unsigned long flags;
 	struct batched_entropy *batch;
 	static void *previous;
-	int next_gen;
+	unsigned long next_gen;
 
 	warn_unseeded_randomness(&previous);
 
 	local_lock_irqsave(&batched_entropy_u64.lock, flags);
 	batch = raw_cpu_ptr(&batched_entropy_u64);
 
-	next_gen = atomic_read(&batch_generation);
+	next_gen = READ_ONCE(base_crng.generation);
 	if (batch->position % ARRAY_SIZE(batch->entropy_u64) == 0 ||
 	    next_gen != batch->generation) {
 		_get_random_bytes(batch->entropy_u64, sizeof(batch->entropy_u64));
@@ -1695,14 +1691,14 @@ u32 get_random_u32(void)
 	unsigned long flags;
 	struct batched_entropy *batch;
 	static void *previous;
-	int next_gen;
+	unsigned long next_gen;
 
 	warn_unseeded_randomness(&previous);
 
 	local_lock_irqsave(&batched_entropy_u32.lock, flags);
 	batch = raw_cpu_ptr(&batched_entropy_u32);
 
-	next_gen = atomic_read(&batch_generation);
+	next_gen = READ_ONCE(base_crng.generation);
 	if (batch->position % ARRAY_SIZE(batch->entropy_u32) == 0 ||
 	    next_gen != batch->generation) {
 		_get_random_bytes(batch->entropy_u32, sizeof(batch->entropy_u32));
@@ -1718,15 +1714,6 @@ u32 get_random_u32(void)
 }
 EXPORT_SYMBOL(get_random_u32);
 
-/* It's important to invalidate all potential batched entropy that might
- * be stored before the crng is initialized, which we can do lazily by
- * bumping the generation counter.
- */
-static void invalidate_batched_entropy(void)
-{
-	atomic_inc(&batch_generation);
-}
-
 /**
  * randomize_page - Generate a random, page aligned address
  * @start:	The smallest acceptable address the caller will take.
-- 
2.35.0



* Re: [PATCH] random: fix locking for crng_init in crng_reseed()
  2022-02-09 18:57 [PATCH] random: fix locking for crng_init in crng_reseed() Dominik Brodowski
  2022-02-09 21:39 ` Jason A. Donenfeld
@ 2022-02-21  4:05 ` Eric Biggers
  1 sibling, 0 replies; 11+ messages in thread
From: Eric Biggers @ 2022-02-21  4:05 UTC (permalink / raw)
  To: Dominik Brodowski; +Cc: Jason A . Donenfeld, tytso, linux-kernel, linux-crypto

On Wed, Feb 09, 2022 at 07:57:06PM +0100, Dominik Brodowski wrote:
> crng_init is protected by primary_crng->lock. Therefore, we need
> to hold this lock when increasing crng_init to 2. As we shouldn't
> hold this lock for too long, only hold it for those parts which
> require protection.
> 
> Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
> ---
>  drivers/char/random.c |    9 ++++++---
>  1 file changed, 6 insertions(+), 3 deletions(-)
> 

Reviewed-by: Eric Biggers <ebiggers@google.com>

Though to bikeshed on the variable name, I think that 'became_ready' would be
more self-explanatory than 'complete_init' (this patch) and 'finalize_init'
(the version committed).

- Eric


* Re: [PATCH v2] random: tie batched entropy generation to base_crng generation
  2022-02-10 13:13         ` [PATCH v2] " Jason A. Donenfeld
@ 2022-02-21  4:13           ` Eric Biggers
  2022-02-21 14:35             ` Jason A. Donenfeld
  0 siblings, 1 reply; 11+ messages in thread
From: Eric Biggers @ 2022-02-21  4:13 UTC (permalink / raw)
  To: Jason A. Donenfeld; +Cc: linux-kernel, Theodore Ts'o, Dominik Brodowski

On Thu, Feb 10, 2022 at 02:13:04PM +0100, Jason A. Donenfeld wrote:
> Now that we have an explicit base_crng generation counter, we don't need
> a separate one for batched entropy. Rather, we can just move the
> generation forward every time we change crng_init state or update the
> base_crng key.
> 
> Cc: Theodore Ts'o <tytso@mit.edu>
> Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>
> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
> ---
> v2 always increments the generation after extraction, as suggested by
> Dominik.
> 
>  drivers/char/random.c | 29 ++++++++---------------------
>  1 file changed, 8 insertions(+), 21 deletions(-)

Reviewed-by: Eric Biggers <ebiggers@google.com>

One comment below:

> @@ -455,7 +453,7 @@ static size_t crng_fast_load(const void *cp, size_t len)
>  		src++; crng_init_cnt++; len--; ret++;
>  	}
>  	if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) {
> -		invalidate_batched_entropy();
> +		++base_crng.generation;
>  		crng_init = 1;
>  	}

This is an existing issue, but why doesn't crng_slow_load() do this too?

- Eric


* Re: [PATCH v2] random: tie batched entropy generation to base_crng generation
  2022-02-21  4:13           ` Eric Biggers
@ 2022-02-21 14:35             ` Jason A. Donenfeld
  0 siblings, 0 replies; 11+ messages in thread
From: Jason A. Donenfeld @ 2022-02-21 14:35 UTC (permalink / raw)
  To: Eric Biggers; +Cc: LKML, Theodore Ts'o, Dominik Brodowski

On Mon, Feb 21, 2022 at 5:13 AM Eric Biggers <ebiggers@kernel.org> wrote:
> > @@ -455,7 +453,7 @@ static size_t crng_fast_load(const void *cp, size_t len)
> >               src++; crng_init_cnt++; len--; ret++;
> >       }
> >       if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) {
> > -             invalidate_batched_entropy();
> > +             ++base_crng.generation;
> >               crng_init = 1;
> >       }
>
> This is an existing issue, but why doesn't crng_slow_load() do this too?

Because it's called by add_device_randomness(), which is mostly
ingesting static bytes, akin to what you get by running `dmidecode`
and such. The idea is that this is something that's good to mix, but
bad to credit. I think there was a CVE a few years back about this,
precipitating the change.

Jason
