linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/3] cleanup init-time fast/slow key loading
@ 2022-02-12 23:10 Jason A. Donenfeld
  2022-02-12 23:10 ` [PATCH 1/3] random: unify early init crng load accounting Jason A. Donenfeld
                   ` (2 more replies)
  0 siblings, 3 replies; 12+ messages in thread
From: Jason A. Donenfeld @ 2022-02-12 23:10 UTC (permalink / raw)
  To: linux-kernel, linux; +Cc: Jason A. Donenfeld

The crng direct key loading functions used at early boot were
inconsistent and there were a few subtle bugs around their usage. This
short series cleans that up.
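
Concretely, after patch 1 the three early init call sites all go
through one helper, crng_pre_init_inject(buf, len, fast, account):

	/* add_device_randomness(): hash in, don't credit crng_init_cnt */
	crng_pre_init_inject(buf, size, false, false);

	/* add_hwgenerator_randomness(): hash in, credit crng_init_cnt */
	crng_pre_init_inject(buffer, count, false, true);

	/* add_interrupt_randomness(): xor in under a trylock, credit */
	crng_pre_init_inject(fast_pool->pool, sizeof(fast_pool->pool),
			     true, true);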

Jason A. Donenfeld (3):
  random: unify early init crng load accounting
  random: check for crng_init == 0, not crng_ready() in
    add_device_randomness()
  random: use trylock in irq handler rather than spinning

 drivers/char/random.c | 120 +++++++++++++++++++++---------------------
 1 file changed, 60 insertions(+), 60 deletions(-)

-- 
2.35.0



* [PATCH 1/3] random: unify early init crng load accounting
  2022-02-12 23:10 [PATCH 0/3] cleanup init-time fast/slow key loading Jason A. Donenfeld
@ 2022-02-12 23:10 ` Jason A. Donenfeld
  2022-02-13  6:55   ` Dominik Brodowski
  2022-02-21  5:50   ` Eric Biggers
  2022-02-12 23:10 ` [PATCH 2/3] random: check for crng_init == 0, not crng_ready() in add_device_randomness() Jason A. Donenfeld
  2022-02-12 23:10 ` [PATCH 3/3] random: use trylock in irq handler rather than spinning Jason A. Donenfeld
  2 siblings, 2 replies; 12+ messages in thread
From: Jason A. Donenfeld @ 2022-02-12 23:10 UTC (permalink / raw)
  To: linux-kernel, linux; +Cc: Jason A. Donenfeld, Theodore Ts'o

crng_fast_load() and crng_slow_load() have different semantics:

- crng_fast_load() xors and accounts with crng_init_cnt.
- crng_slow_load() hashes and doesn't account.

However add_hwgenerator_randomness() can afford to hash (it's called
from a kthread), and it should account. Additionally, callers that can
afford to hash don't need a trylock and can take a normal lock. So we
combine these into one function, crng_pre_init_inject(), which lets us
control hashing, locking, and accounting uniformly. This will also make
it easier to simplify all of this down later, when the time comes.

Cc: Dominik Brodowski <linux@dominikbrodowski.net>
Cc: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
---
 drivers/char/random.c | 107 ++++++++++++++++++++++--------------------
 1 file changed, 55 insertions(+), 52 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 8088348190e6..a128bb947bd4 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -423,72 +423,74 @@ static void crng_make_state(u32 chacha_state[CHACHA_STATE_WORDS],
 }
 
 /*
- * This function is for crng_init < 2 only.
+ * This function is for crng_init < 2 only. It loads entropy directly
+ * into the crng's key, without going through the input pool. It is,
+ * generally speaking, not very safe, but we use this only at early
+ * boot time when it's better to have something there rather than
+ * nothing.
  *
- * crng_fast_load() can be called by code in the interrupt service
- * path.  So we can't afford to dilly-dally. Returns the number of
- * bytes processed from cp.
+ * There are two paths, a slow one and a fast one. The slow one
+ * hashes the input along with the current key. The fast one simply
+ * xors it in, and should only be used from interrupt context.
+ *
+ * If account is set, then the crng_init_cnt counter is incremented.
+ * This shouldn't be set by functions like add_device_randomness(),
+ * where we can't trust the buffer passed to it is guaranteed to be
+ * unpredictable (so it might not have any entropy at all).
+ *
+ * Returns the number of bytes processed from cp, which is bounded by
+ * CRNG_INIT_CNT_THRESH if account is true.
  */
-static size_t crng_fast_load(const void *cp, size_t len)
+static size_t crng_pre_init_inject(const void *cp, size_t len,
+				   bool fast, bool account)
 {
 	static int crng_init_cnt = 0;
 	unsigned long flags;
-	u8 *src = (u8 *)cp;
-	size_t ret = 0;
+	const u8 *src = cp;
+
+	if (fast) {
+		if (!spin_trylock_irqsave(&base_crng.lock, flags))
+			return 0;
+	} else
+		spin_lock_irqsave(&base_crng.lock, flags);
 
-	if (!spin_trylock_irqsave(&base_crng.lock, flags))
-		return 0;
 	if (crng_init != 0) {
 		spin_unlock_irqrestore(&base_crng.lock, flags);
 		return 0;
 	}
-	while (len > 0 && crng_init_cnt < CRNG_INIT_CNT_THRESH) {
-		base_crng.key[crng_init_cnt % sizeof(base_crng.key)] ^= *src;
-		src++; crng_init_cnt++; len--; ret++;
-	}
-	if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) {
-		++base_crng.generation;
-		crng_init = 1;
-	}
-	spin_unlock_irqrestore(&base_crng.lock, flags);
-	if (crng_init == 1)
-		pr_notice("fast init done\n");
-	return ret;
-}
 
-/*
- * This function is for crng_init < 2 only.
- *
- * crng_slow_load() is called by add_device_randomness, which has two
- * attributes.  (1) We can't trust the buffer passed to it is
- * guaranteed to be unpredictable (so it might not have any entropy at
- * all), and (2) it doesn't have the performance constraints of
- * crng_fast_load().
- *
- * So, we simply hash the contents in with the current key. Finally,
- * we do *not* advance crng_init_cnt since buffer we may get may be
- * something like a fixed DMI table (for example), which might very
- * well be unique to the machine, but is otherwise unvarying.
- */
-static void crng_slow_load(const void *cp, size_t len)
-{
-	unsigned long flags;
-	struct blake2s_state hash;
+	if (account)
+		len = min_t(size_t, len, CRNG_INIT_CNT_THRESH - crng_init_cnt);
 
-	blake2s_init(&hash, sizeof(base_crng.key));
+	if (fast) {
+		size_t i;
 
-	if (!spin_trylock_irqsave(&base_crng.lock, flags))
-		return;
-	if (crng_init != 0) {
-		spin_unlock_irqrestore(&base_crng.lock, flags);
-		return;
+		for (i = 0; i < len; ++i)
+			base_crng.key[(crng_init_cnt + i) %
+				      sizeof(base_crng.key)] ^= *src++;
+	} else {
+		struct blake2s_state hash;
+
+		blake2s_init(&hash, sizeof(base_crng.key));
+		blake2s_update(&hash, base_crng.key, sizeof(base_crng.key));
+		blake2s_update(&hash, cp, len);
+		blake2s_final(&hash, base_crng.key);
 	}
 
-	blake2s_update(&hash, base_crng.key, sizeof(base_crng.key));
-	blake2s_update(&hash, cp, len);
-	blake2s_final(&hash, base_crng.key);
+	if (account) {
+		crng_init_cnt += len;
+		if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) {
+			++base_crng.generation;
+			crng_init = 1;
+		}
+	}
 
 	spin_unlock_irqrestore(&base_crng.lock, flags);
+
+	if (crng_init == 1)
+		pr_notice("fast init done\n");
+
+	return len;
 }
 
 static void _get_random_bytes(void *buf, size_t nbytes)
@@ -1006,7 +1008,7 @@ void add_device_randomness(const void *buf, size_t size)
 	unsigned long flags;
 
 	if (!crng_ready() && size)
-		crng_slow_load(buf, size);
+		crng_pre_init_inject(buf, size, false, false);
 
 	spin_lock_irqsave(&input_pool.lock, flags);
 	_mix_pool_bytes(buf, size);
@@ -1123,7 +1125,7 @@ void add_hwgenerator_randomness(const void *buffer, size_t count,
 				size_t entropy)
 {
 	if (unlikely(crng_init == 0)) {
-		size_t ret = crng_fast_load(buffer, count);
+		size_t ret = crng_pre_init_inject(buffer, count, false, true);
 		mix_pool_bytes(buffer, ret);
 		count -= ret;
 		buffer += ret;
@@ -1279,7 +1281,8 @@ void add_interrupt_randomness(int irq)
 
 	if (unlikely(crng_init == 0)) {
 		if (new_count >= 64 &&
-		    crng_fast_load(fast_pool->pool, sizeof(fast_pool->pool)) > 0) {
+		    crng_pre_init_inject(fast_pool->pool, sizeof(fast_pool->pool),
+					 true, true) > 0) {
 			atomic_set(&fast_pool->count, 0);
 			fast_pool->last = now;
 
-- 
2.35.0



* [PATCH 2/3] random: check for crng_init == 0, not crng_ready() in add_device_randomness()
  2022-02-12 23:10 [PATCH 0/3] cleanup init-time fast/slow key loading Jason A. Donenfeld
  2022-02-12 23:10 ` [PATCH 1/3] random: unify early init crng load accounting Jason A. Donenfeld
@ 2022-02-12 23:10 ` Jason A. Donenfeld
  2022-02-13  6:55   ` Dominik Brodowski
  2022-02-21  5:38   ` Eric Biggers
  2022-02-12 23:10 ` [PATCH 3/3] random: use trylock in irq handler rather than spinning Jason A. Donenfeld
  2 siblings, 2 replies; 12+ messages in thread
From: Jason A. Donenfeld @ 2022-02-12 23:10 UTC (permalink / raw)
  To: linux-kernel, linux; +Cc: Jason A. Donenfeld, Theodore Ts'o

This has no real functional change, as crng_pre_init_inject() (and
before that, crng_slow_load()) always checks for crng_init == 0,
not >= 2. So correct the outer unlocked check to reflect that.
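
For reference, the distinction is roughly (as the code defines these
states):

	crng_init == 0	/* pre-init: inject directly into base_crng.key */
	crng_init == 1	/* "fast init done": CRNG_INIT_CNT_THRESH bytes injected */
	crng_ready()	/* crng_init > 1: the crng is fully initialized */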

Cc: Dominik Brodowski <linux@dominikbrodowski.net>
Cc: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
---
 drivers/char/random.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index a128bb947bd4..9a8e1bb9845d 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1007,7 +1007,7 @@ void add_device_randomness(const void *buf, size_t size)
 	unsigned long time = random_get_entropy() ^ jiffies;
 	unsigned long flags;
 
-	if (!crng_ready() && size)
+	if (crng_init == 0 && size)
 		crng_pre_init_inject(buf, size, false, false);
 
 	spin_lock_irqsave(&input_pool.lock, flags);
-- 
2.35.0



* [PATCH 3/3] random: use trylock in irq handler rather than spinning
  2022-02-12 23:10 [PATCH 0/3] cleanup init-time fast/slow key loading Jason A. Donenfeld
  2022-02-12 23:10 ` [PATCH 1/3] random: unify early init crng load accounting Jason A. Donenfeld
  2022-02-12 23:10 ` [PATCH 2/3] random: check for crng_init == 0, not crng_ready() in add_device_randomness() Jason A. Donenfeld
@ 2022-02-12 23:10 ` Jason A. Donenfeld
  2022-02-13  6:55   ` Dominik Brodowski
  2 siblings, 1 reply; 12+ messages in thread
From: Jason A. Donenfeld @ 2022-02-12 23:10 UTC (permalink / raw)
  To: linux-kernel, linux; +Cc: Jason A. Donenfeld, Theodore Ts'o

crng_pre_init_inject() (and prior crng_fast_load()) uses a trylock when
in fast mode, so that it never contends. We should be doing the same
when grabbing a spinlock for mixing into the entropy pool. So switch to
doing that before calling the underscored _mix_pool_bytes().

Cc: Dominik Brodowski <linux@dominikbrodowski.net>
Cc: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
---
 drivers/char/random.c | 11 ++++-------
 1 file changed, 4 insertions(+), 7 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 9a8e1bb9845d..ca224c3f2561 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1286,13 +1286,10 @@ void add_interrupt_randomness(int irq)
 			atomic_set(&fast_pool->count, 0);
 			fast_pool->last = now;
 
-			/*
-			 * Technically this call means that we're using a spinlock_t
-			 * in the IRQ handler, which isn't terrific for PREEMPT_RT.
-			 * However, this only happens during boot, and then never
-			 * again, so we live with it.
-			 */
-			mix_pool_bytes(&fast_pool->pool, sizeof(fast_pool->pool));
+			if (spin_trylock(&input_pool.lock)) {
+				_mix_pool_bytes(&fast_pool->pool, sizeof(fast_pool->pool));
+				spin_unlock(&input_pool.lock);
+			}
 		}
 		return;
 	}
-- 
2.35.0



* Re: [PATCH 1/3] random: unify early init crng load accounting
  2022-02-12 23:10 ` [PATCH 1/3] random: unify early init crng load accounting Jason A. Donenfeld
@ 2022-02-13  6:55   ` Dominik Brodowski
  2022-02-13 10:56     ` Jason A. Donenfeld
  2022-02-21  5:50   ` Eric Biggers
  1 sibling, 1 reply; 12+ messages in thread
From: Dominik Brodowski @ 2022-02-13  6:55 UTC (permalink / raw)
  To: Jason A. Donenfeld; +Cc: linux-kernel, Theodore Ts'o

On Sun, Feb 13, 2022 at 12:10:20AM +0100, Jason A. Donenfeld wrote:
> crng_fast_load() and crng_slow_load() have different semantics:
> 
> - crng_fast_load() xors and accounts with crng_init_cnt.
> - crng_slow_load() hashes and doesn't account.
> 
> However add_hwgenerator_randomness() can afford to hash (it's called
> from a kthread), and it should account. Additionally, callers that can
> afford to hash don't need a trylock and can take a normal lock. So we
> combine these into one function, crng_pre_init_inject(), which lets us
> control hashing, locking, and accounting uniformly. This will also make
> it easier to simplify all of this down later, when the time comes.

I thought the call to crng_fast_load() from add_input_randomness() is on its
way out?

Otherwise, patch looks fine:

	Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>

Thanks,
	Dominik


* Re: [PATCH 2/3] random: check for crng_init == 0, not crng_ready() in add_device_randomness()
  2022-02-12 23:10 ` [PATCH 2/3] random: check for crng_init == 0, not crng_ready() in add_device_randomness() Jason A. Donenfeld
@ 2022-02-13  6:55   ` Dominik Brodowski
  2022-02-21  5:38   ` Eric Biggers
  1 sibling, 0 replies; 12+ messages in thread
From: Dominik Brodowski @ 2022-02-13  6:55 UTC (permalink / raw)
  To: Jason A. Donenfeld; +Cc: linux-kernel, Theodore Ts'o

On Sun, Feb 13, 2022 at 12:10:21AM +0100, Jason A. Donenfeld wrote:
> This has no real functional change, as crng_pre_init_inject() (and
> before that, crng_slow_load()) always checks for crng_init == 0,
> not >= 2. So correct the outer unlocked check to reflect that.
> 
> Cc: Dominik Brodowski <linux@dominikbrodowski.net>
> Cc: Theodore Ts'o <tytso@mit.edu>
> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

	Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>

Thanks,
	Dominik


* Re: [PATCH 3/3] random: use trylock in irq handler rather than spinning
  2022-02-12 23:10 ` [PATCH 3/3] random: use trylock in irq handler rather than spinning Jason A. Donenfeld
@ 2022-02-13  6:55   ` Dominik Brodowski
  2022-02-13 10:51     ` Jason A. Donenfeld
  0 siblings, 1 reply; 12+ messages in thread
From: Dominik Brodowski @ 2022-02-13  6:55 UTC (permalink / raw)
  To: Jason A. Donenfeld; +Cc: linux-kernel, Theodore Ts'o

On Sun, Feb 13, 2022 at 12:10:22AM +0100, Jason A. Donenfeld wrote:
> crng_pre_init_inject() (and prior crng_fast_load()) uses a trylock when
> in fast mode, so that it never contends. We should be doing the same
> when grabbing a spinlock for mixing into the entropy pool. So switch to
> doing that before calling the underscored _mix_pool_bytes().
> 
> Cc: Dominik Brodowski <linux@dominikbrodowski.net>
> Cc: Theodore Ts'o <tytso@mit.edu>
> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
> ---
>  drivers/char/random.c | 11 ++++-------
>  1 file changed, 4 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/char/random.c b/drivers/char/random.c
> index 9a8e1bb9845d..ca224c3f2561 100644
> --- a/drivers/char/random.c
> +++ b/drivers/char/random.c
> @@ -1286,13 +1286,10 @@ void add_interrupt_randomness(int irq)
>  			atomic_set(&fast_pool->count, 0);
>  			fast_pool->last = now;
>  
> -			/*
> -			 * Technically this call means that we're using a spinlock_t
> -			 * in the IRQ handler, which isn't terrific for PREEMPT_RT.
> -			 * However, this only happens during boot, and then never
> -			 * again, so we live with it.
> -			 */
> -			mix_pool_bytes(&fast_pool->pool, sizeof(fast_pool->pool));
> +			if (spin_trylock(&input_pool.lock)) {
> +				_mix_pool_bytes(&fast_pool->pool, sizeof(fast_pool->pool));
> +				spin_unlock(&input_pool.lock);
> +			}

You're still using a spinlock_t here, so I don't see a need to remove the
comment. Also, I'm not super happy that, when the trylock fails, the count
is reset to 0 but the input remains unused. Maybe the better approach here
is, as discussed in the other thread, to always use the workqueue mechanism,
which would allow us to streamline the code further; a rough sketch of the
sort of thing I mean is below.
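
Completely untested, and the function/field names here are just
placeholders, but roughly:

	struct fast_pool {
		struct work_struct mix;
		...	/* existing pool[], count, last, ... */
	};

	/*
	 * Runs in process context, so mix_pool_bytes() taking
	 * input_pool.lock is no longer a problem.
	 */
	static void mix_interrupt_randomness(struct work_struct *work)
	{
		struct fast_pool *fast_pool =
			container_of(work, struct fast_pool, mix);

		mix_pool_bytes(fast_pool->pool, sizeof(fast_pool->pool));
		atomic_set(&fast_pool->count, 0);
	}

	void add_interrupt_randomness(int irq)
	{
		...
		/* Defer the mixing instead of doing it in hard irq context. */
		if (new_count >= 64 && !work_pending(&fast_pool->mix))
			queue_work(system_highpri_wq, &fast_pool->mix);
	}

(Glossing over the INIT_WORK() setup and the races between the irq path
and the worker, of course.)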

Thanks,
	Dominik


* Re: [PATCH 3/3] random: use trylock in irq handler rather than spinning
  2022-02-13  6:55   ` Dominik Brodowski
@ 2022-02-13 10:51     ` Jason A. Donenfeld
  0 siblings, 0 replies; 12+ messages in thread
From: Jason A. Donenfeld @ 2022-02-13 10:51 UTC (permalink / raw)
  To: Dominik Brodowski; +Cc: linux-kernel, Theodore Ts'o

Hi Dominik,

Yes, the question of RT and trylocks is still undecided, and I'm
actually trying to get the existing design into as good a shape as
possible before considering deferring it, so we can really have a
complete comparison. It was an error (my error) to introduce the full
lock here in the original patch that added this. So I've actually made
this patch into a fixup for that original one, so we don't need this
one on top.

I'll be looking at the deferred work next week, but I'd like the
existing thing to be as solid as possible before. Otherwise it's too
hard to evaluate pros and cons.

Jason


* Re: [PATCH 1/3] random: unify early init crng load accounting
  2022-02-13  6:55   ` Dominik Brodowski
@ 2022-02-13 10:56     ` Jason A. Donenfeld
  0 siblings, 0 replies; 12+ messages in thread
From: Jason A. Donenfeld @ 2022-02-13 10:56 UTC (permalink / raw)
  To: Dominik Brodowski; +Cc: linux-kernel, Theodore Ts'o

On 2/13/22, Dominik Brodowski <linux@dominikbrodowski.net> wrote:
> On Sun, Feb 13, 2022 at 12:10:20AM +0100, Jason A. Donenfeld wrote:
>> crng_fast_load() and crng_slow_load() have different semantics:
>>
>> - crng_fast_load() xors and accounts with crng_init_cnt.
>> - crng_slow_load() hashes and doesn't account.
>>
>> However add_hwgenerator_randomness() can afford to hash (it's called
>> from a kthread), and it should account. Additionally, callers that can
>> afford to hash don't need a trylock and can take a normal lock. So we
>> combine these into one function, crng_pre_init_inject(), which lets us
>> control hashing, locking, and accounting uniformly. This will also make
>> it easier to simplify all of this down later, when the time comes.
>
> I thought the call to crng_fast_load() from add_input_randomness() is on
> its
> way out?

It might be. In that case, this commit is preparation work to make the
sunsetting there more clear, and so we can really see what that change
is like. Or maybe we won't do that, in which case this is an
improvement. I'll be looking at that next week.

Jason


* Re: [PATCH 2/3] random: check for crng_init == 0, not crng_ready() in add_device_randomness()
  2022-02-12 23:10 ` [PATCH 2/3] random: check for crng_init == 0, not crng_ready() in add_device_randomness() Jason A. Donenfeld
  2022-02-13  6:55   ` Dominik Brodowski
@ 2022-02-21  5:38   ` Eric Biggers
  1 sibling, 0 replies; 12+ messages in thread
From: Eric Biggers @ 2022-02-21  5:38 UTC (permalink / raw)
  To: Jason A. Donenfeld; +Cc: linux-kernel, linux, Theodore Ts'o

On Sun, Feb 13, 2022 at 12:10:21AM +0100, Jason A. Donenfeld wrote:
> This has no real functional change, as crng_pre_init_inject() (and
> before that, crng_slow_load()) always checks for crng_init == 0,
> not >= 2. So correct the outer unlocked check to reflect that.
> 
> Cc: Dominik Brodowski <linux@dominikbrodowski.net>
> Cc: Theodore Ts'o <tytso@mit.edu>
> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
> ---
>  drivers/char/random.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/char/random.c b/drivers/char/random.c
> index a128bb947bd4..9a8e1bb9845d 100644
> --- a/drivers/char/random.c
> +++ b/drivers/char/random.c
> @@ -1007,7 +1007,7 @@ void add_device_randomness(const void *buf, size_t size)
>  	unsigned long time = random_get_entropy() ^ jiffies;
>  	unsigned long flags;
>  
> -	if (!crng_ready() && size)
> +	if (crng_init == 0 && size)
>  		crng_pre_init_inject(buf, size, false, false);
>  
>  	spin_lock_irqsave(&input_pool.lock, flags);
> -- 
> 2.35.0
> 

Reviewed-by: Eric Biggers <ebiggers@google.com>

- Eric


* Re: [PATCH 1/3] random: unify early init crng load accounting
  2022-02-12 23:10 ` [PATCH 1/3] random: unify early init crng load accounting Jason A. Donenfeld
  2022-02-13  6:55   ` Dominik Brodowski
@ 2022-02-21  5:50   ` Eric Biggers
  2022-02-21 15:22     ` Jason A. Donenfeld
  1 sibling, 1 reply; 12+ messages in thread
From: Eric Biggers @ 2022-02-21  5:50 UTC (permalink / raw)
  To: Jason A. Donenfeld; +Cc: linux-kernel, linux, Theodore Ts'o

On Sun, Feb 13, 2022 at 12:10:20AM +0100, Jason A. Donenfeld wrote:
> -static size_t crng_fast_load(const void *cp, size_t len)
> +static size_t crng_pre_init_inject(const void *cp, size_t len,
> +				   bool fast, bool account)

This would be a good chance to rename 'cp' to something more usual, like 'in'.

Also, there's still a mention of "crng_{fast,slow}_load" in crng_make_state().

> +	const u8 *src = cp;
> +
> +	if (fast) {
> +		if (!spin_trylock_irqsave(&base_crng.lock, flags))
> +			return 0;
> +	} else
> +		spin_lock_irqsave(&base_crng.lock, flags);

Nit: the kernel coding style requires braces around the else clause here.
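
That is:

	if (fast) {
		if (!spin_trylock_irqsave(&base_crng.lock, flags))
			return 0;
	} else {
		spin_lock_irqsave(&base_crng.lock, flags);
	}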

- Eric


* Re: [PATCH 1/3] random: unify early init crng load accounting
  2022-02-21  5:50   ` Eric Biggers
@ 2022-02-21 15:22     ` Jason A. Donenfeld
  0 siblings, 0 replies; 12+ messages in thread
From: Jason A. Donenfeld @ 2022-02-21 15:22 UTC (permalink / raw)
  To: Eric Biggers; +Cc: LKML, Dominik Brodowski, Theodore Ts'o

On Mon, Feb 21, 2022 at 6:50 AM Eric Biggers <ebiggers@kernel.org> wrote:
>
> On Sun, Feb 13, 2022 at 12:10:20AM +0100, Jason A. Donenfeld wrote:
> > -static size_t crng_fast_load(const void *cp, size_t len)
> > +static size_t crng_pre_init_inject(const void *cp, size_t len,
> > +                                bool fast, bool account)
>
> This would be a good chance to rename 'cp' to something more usual, like 'in'.

I'll do that. By the way, if you feel inclined to clean up other
instances of 'cp' and other odd variable names, feel free to send a
patch devoted to that.

>
> Also, there's still a mention of "crng_{fast,slow}_load" in crng_make_state().

Nice catch, will fix.

>
> > +     const u8 *src = cp;
> > +
> > +     if (fast) {
> > +             if (!spin_trylock_irqsave(&base_crng.lock, flags))
> > +                     return 0;
> > +     } else
> > +             spin_lock_irqsave(&base_crng.lock, flags);
>
> Nit: the kernel coding style requires braces around the else clause here.

Not the biggest fan of this one, but will do.

Jason

