From: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	stable@vger.kernel.org, Theodore Ts'o <tytso@mit.edu>,
	Dominik Brodowski <linux@dominikbrodowski.net>,
	Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
	Jann Horn <jannh@google.com>, Eric Biggers <ebiggers@google.com>,
	"Jason A. Donenfeld" <Jason@zx2c4.com>
Subject: [PATCH 5.4 079/240] random: use simpler fast key erasure flow on per-cpu keys
Date: Mon, 20 Jun 2022 14:49:40 +0200	[thread overview]
Message-ID: <20220620124741.046796051@linuxfoundation.org> (raw)
In-Reply-To: <20220620124737.799371052@linuxfoundation.org>

From: "Jason A. Donenfeld" <Jason@zx2c4.com>

commit 186873c549df11b63e17062f863654e1501e1524 upstream.

Rather than the clunky per-NUMA-node full ChaCha state system we had
before, this commit moves closer to the original "fast key erasure RNG"
proposal from <https://blog.cr.yp.to/20170723-random.html>, by simply
treating ChaCha keys on a per-cpu basis.

All entropy is extracted to a base crng key of 32 bytes. This base crng
has a birthdate and a generation counter. When we go to take bytes from
the crng, we first check if the birthdate is too old; if it is, we
reseed per usual. Then we start working on a per-cpu crng.

This per-cpu crng makes sure that it has the same generation counter as
the base crng. If it doesn't, it does fast key erasure with the base
crng key and uses the output as its new per-cpu key, and then updates
its local generation counter. Then, using this per-cpu state, we do
ordinary fast key erasure. Half of this first block is used to overwrite
the per-cpu crng key for the next call -- this is the fast key erasure
RNG idea -- and the other half, along with the ChaCha state, is returned
to the caller. If the caller desires more than this remaining half, it
can generate more ChaCha blocks, unlocked, using the now detached ChaCha
state that was just returned. Crypto-wise, this is more or less what we
were doing before, but this simply makes it more explicit and ensures
that we always have backtrack protection by not playing games with a
shared block counter.

The flow looks like this:

──extract()──► base_crng.key ◄──memcpy()───┐
                   │                       │
                   └──chacha()──────┬─► new_base_key
                                    └─► crngs[n].key ◄──memcpy()───┐
                                              │                    │
                                              └──chacha()───┬─► new_key
                                                            └─► random_bytes
                                                                      │
                                                                      └────►
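
In code terms, the hot path condenses to roughly the following
simplified sketch of crng_make_state() from the diff below (the _sketch
name is only for this illustration; the early-init fallback and the
unlikely()/BUG_ON annotations are left to the patch itself):

    static void crng_make_state_sketch(u32 chacha_state[CHACHA_BLOCK_SIZE / sizeof(u32)],
                                       u8 *random_data, size_t random_data_len)
    {
            unsigned long flags;
            struct crng *crng;

            /* Reseed if the base key is older than CRNG_RESEED_INTERVAL. */
            if (time_after(jiffies, READ_ONCE(base_crng.birth) + CRNG_RESEED_INTERVAL))
                    crng_reseed();

            local_irq_save(flags);
            crng = raw_cpu_ptr(&crngs);

            /* Stale per-cpu key: do fast key erasure on the base key to
             * derive a fresh per-cpu key, and record the generation. */
            if (crng->generation != READ_ONCE(base_crng.generation)) {
                    spin_lock(&base_crng.lock);
                    crng_fast_key_erasure(base_crng.key, chacha_state,
                                          crng->key, sizeof(crng->key));
                    crng->generation = base_crng.generation;
                    spin_unlock(&base_crng.lock);
            }

            /* Ordinary fast key erasure with the per-cpu key: half a block
             * becomes the next key, half (plus the ChaCha state) goes to
             * the caller. */
            crng_fast_key_erasure(crng->key, chacha_state, random_data, random_data_len);
            local_irq_restore(flags);
    }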

There are a few hairy details around early init. Just as was done
before, prior to having gathered enough entropy, crng_fast_load() and
crng_slow_load() dump bytes directly into the base crng, and when we go
to take bytes from the crng, in that case, we're doing fast key erasure
with the base crng rather than the fast unlocked per-cpu crngs. This is
fine as that's only the state of affairs during very early boot; once
the crng initializes we never use these paths again.
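
For reference, that early-boot fallback is the first branch of
crng_make_state() in the diff below (abridged; flags, chacha_state and
random_data are the function's own locals and arguments):

    if (unlikely(!crng_ready())) {
            bool ready;

            spin_lock_irqsave(&base_crng.lock, flags);
            ready = crng_ready();
            /* Not yet ready: erase directly on base_crng.key, the same
             * key that crng_{fast,slow}_load() mutate during early boot. */
            if (!ready)
                    crng_fast_key_erasure(base_crng.key, chacha_state,
                                          random_data, random_data_len);
            spin_unlock_irqrestore(&base_crng.lock, flags);
            if (!ready)
                    return;
    }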

In the process of all this, the APIs into the crng become a bit simpler:
we have get_random_bytes(buf, len) and get_random_bytes_user(buf, len),
which both do what you'd expect. All of the details of fast key erasure
and per-cpu selection happen only in a very short critical section of
crng_make_state(), which selects the right per-cpu key, does the fast
key erasure, and returns a local state to the caller's stack. So, we no
longer have a need for a separate backtrack function, as this happens
all at once here. The API then allows us to extend backtrack protection
to batched entropy without really having to do much at all.
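
From a caller's perspective nothing more is needed. A minimal,
hypothetical in-kernel caller (not part of this patch; the variable
names are made up for illustration):

    u8 session_key[32];

    /* /dev/urandom-equivalent bytes; fast key erasure and per-cpu
     * selection all happen inside crng_make_state(). */
    get_random_bytes(session_key, sizeof(session_key));

    /* Batched integers now get the same backtrack protection, since the
     * batch is refilled via _get_random_bytes() and consumed words are
     * zeroed after use. */
    u64 cookie = get_random_u64();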

The result is a bit simpler than before and has fewer foot guns. The
init time state machine also gets a lot simpler as we don't need to wait
for workqueues to come online and do deferred work. And the multi-core
performance should be increased significantly, by virtue of having hardly
any locking on the fast path.

Cc: Theodore Ts'o <tytso@mit.edu>
Cc: Dominik Brodowski <linux@dominikbrodowski.net>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Reviewed-by: Jann Horn <jannh@google.com>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 drivers/char/random.c |  403 ++++++++++++++++++++++++++++----------------------
 1 file changed, 233 insertions(+), 170 deletions(-)

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -67,63 +67,19 @@
  * Exported interfaces ---- kernel output
  * --------------------------------------
  *
- * The primary kernel interface is
+ * The primary kernel interfaces are:
  *
  *	void get_random_bytes(void *buf, int nbytes);
- *
- * This interface will return the requested number of random bytes,
- * and place it in the requested buffer.  This is equivalent to a
- * read from /dev/urandom.
- *
- * For less critical applications, there are the functions:
- *
  *	u32 get_random_u32()
  *	u64 get_random_u64()
  *	unsigned int get_random_int()
  *	unsigned long get_random_long()
  *
- * These are produced by a cryptographic RNG seeded from get_random_bytes,
- * and so do not deplete the entropy pool as much.  These are recommended
- * for most in-kernel operations *if the result is going to be stored in
- * the kernel*.
- *
- * Specifically, the get_random_int() family do not attempt to do
- * "anti-backtracking".  If you capture the state of the kernel (e.g.
- * by snapshotting the VM), you can figure out previous get_random_int()
- * return values.  But if the value is stored in the kernel anyway,
- * this is not a problem.
- *
- * It *is* safe to expose get_random_int() output to attackers (e.g. as
- * network cookies); given outputs 1..n, it's not feasible to predict
- * outputs 0 or n+1.  The only concern is an attacker who breaks into
- * the kernel later; the get_random_int() engine is not reseeded as
- * often as the get_random_bytes() one.
- *
- * get_random_bytes() is needed for keys that need to stay secret after
- * they are erased from the kernel.  For example, any key that will
- * be wrapped and stored encrypted.  And session encryption keys: we'd
- * like to know that after the session is closed and the keys erased,
- * the plaintext is unrecoverable to someone who recorded the ciphertext.
- *
- * But for network ports/cookies, stack canaries, PRNG seeds, address
- * space layout randomization, session *authentication* keys, or other
- * applications where the sensitive data is stored in the kernel in
- * plaintext for as long as it's sensitive, the get_random_int() family
- * is just fine.
- *
- * Consider ASLR.  We want to keep the address space secret from an
- * outside attacker while the process is running, but once the address
- * space is torn down, it's of no use to an attacker any more.  And it's
- * stored in kernel data structures as long as it's alive, so worrying
- * about an attacker's ability to extrapolate it from the get_random_int()
- * CRNG is silly.
- *
- * Even some cryptographic keys are safe to generate with get_random_int().
- * In particular, keys for SipHash are generally fine.  Here, knowledge
- * of the key authorizes you to do something to a kernel object (inject
- * packets to a network connection, or flood a hash table), and the
- * key is stored with the object being protected.  Once it goes away,
- * we no longer care if anyone knows the key.
+ * These interfaces will return the requested number of random bytes
+ * into the given buffer or as a return value. This is equivalent to a
+ * read from /dev/urandom. The get_random_{u32,u64,int,long}() family
+ * of functions may be higher performance for one-off random integers,
+ * because they do a bit of buffering.
  *
  * prandom_u32()
  * -------------
@@ -300,20 +256,6 @@ static struct fasync_struct *fasync;
 static DEFINE_SPINLOCK(random_ready_list_lock);
 static LIST_HEAD(random_ready_list);
 
-struct crng_state {
-	u32 state[16];
-	unsigned long init_time;
-	spinlock_t lock;
-};
-
-static struct crng_state primary_crng = {
-	.lock = __SPIN_LOCK_UNLOCKED(primary_crng.lock),
-	.state[0] = CHACHA_CONSTANT_EXPA,
-	.state[1] = CHACHA_CONSTANT_ND_3,
-	.state[2] = CHACHA_CONSTANT_2_BY,
-	.state[3] = CHACHA_CONSTANT_TE_K,
-};
-
 /*
  * crng_init =  0 --> Uninitialized
  *		1 --> Initialized
@@ -325,9 +267,6 @@ static struct crng_state primary_crng =
 static int crng_init = 0;
 #define crng_ready() (likely(crng_init > 1))
 static int crng_init_cnt = 0;
-#define CRNG_INIT_CNT_THRESH (2 * CHACHA_KEY_SIZE)
-static void extract_crng(u8 out[CHACHA_BLOCK_SIZE]);
-static void crng_backtrack_protect(u8 tmp[CHACHA_BLOCK_SIZE], int used);
 static void process_random_ready_list(void);
 static void _get_random_bytes(void *buf, int nbytes);
 
@@ -470,7 +409,28 @@ static void credit_entropy_bits(int nbit
  *
  *********************************************************************/
 
-#define CRNG_RESEED_INTERVAL (300 * HZ)
+enum {
+	CRNG_RESEED_INTERVAL = 300 * HZ,
+	CRNG_INIT_CNT_THRESH = 2 * CHACHA_KEY_SIZE
+};
+
+static struct {
+	u8 key[CHACHA_KEY_SIZE] __aligned(__alignof__(long));
+	unsigned long birth;
+	unsigned long generation;
+	spinlock_t lock;
+} base_crng = {
+	.lock = __SPIN_LOCK_UNLOCKED(base_crng.lock)
+};
+
+struct crng {
+	u8 key[CHACHA_KEY_SIZE];
+	unsigned long generation;
+};
+
+static DEFINE_PER_CPU(struct crng, crngs) = {
+	.generation = ULONG_MAX
+};
 
 static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait);
 
@@ -487,22 +447,22 @@ static size_t crng_fast_load(const u8 *c
 	u8 *p;
 	size_t ret = 0;
 
-	if (!spin_trylock_irqsave(&primary_crng.lock, flags))
+	if (!spin_trylock_irqsave(&base_crng.lock, flags))
 		return 0;
 	if (crng_init != 0) {
-		spin_unlock_irqrestore(&primary_crng.lock, flags);
+		spin_unlock_irqrestore(&base_crng.lock, flags);
 		return 0;
 	}
-	p = (u8 *)&primary_crng.state[4];
+	p = base_crng.key;
 	while (len > 0 && crng_init_cnt < CRNG_INIT_CNT_THRESH) {
-		p[crng_init_cnt % CHACHA_KEY_SIZE] ^= *cp;
+		p[crng_init_cnt % sizeof(base_crng.key)] ^= *cp;
 		cp++; crng_init_cnt++; len--; ret++;
 	}
 	if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) {
 		invalidate_batched_entropy();
 		crng_init = 1;
 	}
-	spin_unlock_irqrestore(&primary_crng.lock, flags);
+	spin_unlock_irqrestore(&base_crng.lock, flags);
 	if (crng_init == 1)
 		pr_notice("fast init done\n");
 	return ret;
@@ -527,14 +487,14 @@ static int crng_slow_load(const u8 *cp,
 	unsigned long flags;
 	static u8 lfsr = 1;
 	u8 tmp;
-	unsigned int i, max = CHACHA_KEY_SIZE;
+	unsigned int i, max = sizeof(base_crng.key);
 	const u8 *src_buf = cp;
-	u8 *dest_buf = (u8 *)&primary_crng.state[4];
+	u8 *dest_buf = base_crng.key;
 
-	if (!spin_trylock_irqsave(&primary_crng.lock, flags))
+	if (!spin_trylock_irqsave(&base_crng.lock, flags))
 		return 0;
 	if (crng_init != 0) {
-		spin_unlock_irqrestore(&primary_crng.lock, flags);
+		spin_unlock_irqrestore(&base_crng.lock, flags);
 		return 0;
 	}
 	if (len > max)
@@ -545,38 +505,50 @@ static int crng_slow_load(const u8 *cp,
 		lfsr >>= 1;
 		if (tmp & 1)
 			lfsr ^= 0xE1;
-		tmp = dest_buf[i % CHACHA_KEY_SIZE];
-		dest_buf[i % CHACHA_KEY_SIZE] ^= src_buf[i % len] ^ lfsr;
+		tmp = dest_buf[i % sizeof(base_crng.key)];
+		dest_buf[i % sizeof(base_crng.key)] ^= src_buf[i % len] ^ lfsr;
 		lfsr += (tmp << 3) | (tmp >> 5);
 	}
-	spin_unlock_irqrestore(&primary_crng.lock, flags);
+	spin_unlock_irqrestore(&base_crng.lock, flags);
 	return 1;
 }
 
 static void crng_reseed(void)
 {
 	unsigned long flags;
-	int i, entropy_count;
-	union {
-		u8 block[CHACHA_BLOCK_SIZE];
-		u32 key[8];
-	} buf;
+	int entropy_count;
+	unsigned long next_gen;
+	u8 key[CHACHA_KEY_SIZE];
 
+	/*
+	 * First we make sure we have POOL_MIN_BITS of entropy in the pool,
+	 * and then we drain all of it. Only then can we extract a new key.
+	 */
 	do {
 		entropy_count = READ_ONCE(input_pool.entropy_count);
 		if (entropy_count < POOL_MIN_BITS)
 			return;
 	} while (cmpxchg(&input_pool.entropy_count, entropy_count, 0) != entropy_count);
-	extract_entropy(buf.key, sizeof(buf.key));
+	extract_entropy(key, sizeof(key));
 	wake_up_interruptible(&random_write_wait);
 	kill_fasync(&fasync, SIGIO, POLL_OUT);
 
-	spin_lock_irqsave(&primary_crng.lock, flags);
-	for (i = 0; i < 8; i++)
-		primary_crng.state[i + 4] ^= buf.key[i];
-	memzero_explicit(&buf, sizeof(buf));
-	WRITE_ONCE(primary_crng.init_time, jiffies);
-	spin_unlock_irqrestore(&primary_crng.lock, flags);
+	/*
+	 * We copy the new key into the base_crng, overwriting the old one,
+	 * and update the generation counter. We avoid hitting ULONG_MAX,
+	 * because the per-cpu crngs are initialized to ULONG_MAX, so this
+	 * forces new CPUs that come online to always initialize.
+	 */
+	spin_lock_irqsave(&base_crng.lock, flags);
+	memcpy(base_crng.key, key, sizeof(base_crng.key));
+	next_gen = base_crng.generation + 1;
+	if (next_gen == ULONG_MAX)
+		++next_gen;
+	WRITE_ONCE(base_crng.generation, next_gen);
+	WRITE_ONCE(base_crng.birth, jiffies);
+	spin_unlock_irqrestore(&base_crng.lock, flags);
+	memzero_explicit(key, sizeof(key));
+
 	if (crng_init < 2) {
 		invalidate_batched_entropy();
 		crng_init = 2;
@@ -597,77 +569,143 @@ static void crng_reseed(void)
 	}
 }
 
-static void extract_crng(u8 out[CHACHA_BLOCK_SIZE])
+/*
+ * The general form here is based on a "fast key erasure RNG" from
+ * <https://blog.cr.yp.to/20170723-random.html>. It generates a ChaCha
+ * block using the provided key, and then immediately overwrites that
+ * key with half the block. It returns the resultant ChaCha state to the
+ * user, along with the second half of the block containing 32 bytes of
+ * random data that may be used; random_data_len may not be greater than
+ * 32.
+ */
+static void crng_fast_key_erasure(u8 key[CHACHA_KEY_SIZE],
+				  u32 chacha_state[CHACHA_BLOCK_SIZE / sizeof(u32)],
+				  u8 *random_data, size_t random_data_len)
 {
-	unsigned long flags, init_time;
+	u8 first_block[CHACHA_BLOCK_SIZE];
 
-	if (crng_ready()) {
-		init_time = READ_ONCE(primary_crng.init_time);
-		if (time_after(jiffies, init_time + CRNG_RESEED_INTERVAL))
-			crng_reseed();
-	}
-	spin_lock_irqsave(&primary_crng.lock, flags);
-	chacha20_block(&primary_crng.state[0], out);
-	if (primary_crng.state[12] == 0)
-		primary_crng.state[13]++;
-	spin_unlock_irqrestore(&primary_crng.lock, flags);
+	BUG_ON(random_data_len > 32);
+
+	chacha_init_consts(chacha_state);
+	memcpy(&chacha_state[4], key, CHACHA_KEY_SIZE);
+	memset(&chacha_state[12], 0, sizeof(u32) * 4);
+	chacha20_block(chacha_state, first_block);
+
+	memcpy(key, first_block, CHACHA_KEY_SIZE);
+	memcpy(random_data, first_block + CHACHA_KEY_SIZE, random_data_len);
+	memzero_explicit(first_block, sizeof(first_block));
 }
 
 /*
- * Use the leftover bytes from the CRNG block output (if there is
- * enough) to mutate the CRNG key to provide backtracking protection.
+ * This function returns a ChaCha state that you may use for generating
+ * random data. It also returns up to 32 bytes on its own of random data
+ * that may be used; random_data_len may not be greater than 32.
  */
-static void crng_backtrack_protect(u8 tmp[CHACHA_BLOCK_SIZE], int used)
+static void crng_make_state(u32 chacha_state[CHACHA_BLOCK_SIZE / sizeof(u32)],
+			    u8 *random_data, size_t random_data_len)
 {
 	unsigned long flags;
-	u32 *s, *d;
-	int i;
+	struct crng *crng;
 
-	used = round_up(used, sizeof(u32));
-	if (used + CHACHA_KEY_SIZE > CHACHA_BLOCK_SIZE) {
-		extract_crng(tmp);
-		used = 0;
-	}
-	spin_lock_irqsave(&primary_crng.lock, flags);
-	s = (u32 *)&tmp[used];
-	d = &primary_crng.state[4];
-	for (i = 0; i < 8; i++)
-		*d++ ^= *s++;
-	spin_unlock_irqrestore(&primary_crng.lock, flags);
-}
-
-static ssize_t extract_crng_user(void __user *buf, size_t nbytes)
-{
-	ssize_t ret = 0, i = CHACHA_BLOCK_SIZE;
-	u8 tmp[CHACHA_BLOCK_SIZE] __aligned(4);
-	int large_request = (nbytes > 256);
+	BUG_ON(random_data_len > 32);
+
+	/*
+	 * For the fast path, we check whether we're ready, unlocked first, and
+	 * then re-check once locked later. In the case where we're really not
+	 * ready, we do fast key erasure with the base_crng directly, because
+	 * this is what crng_{fast,slow}_load mutate during early init.
+	 */
+	if (unlikely(!crng_ready())) {
+		bool ready;
+
+		spin_lock_irqsave(&base_crng.lock, flags);
+		ready = crng_ready();
+		if (!ready)
+			crng_fast_key_erasure(base_crng.key, chacha_state,
+					      random_data, random_data_len);
+		spin_unlock_irqrestore(&base_crng.lock, flags);
+		if (!ready)
+			return;
+	}
+
+	/*
+	 * If the base_crng is more than 5 minutes old, we reseed, which
+	 * in turn bumps the generation counter that we check below.
+	 */
+	if (unlikely(time_after(jiffies, READ_ONCE(base_crng.birth) + CRNG_RESEED_INTERVAL)))
+		crng_reseed();
+
+	local_irq_save(flags);
+	crng = raw_cpu_ptr(&crngs);
+
+	/*
+	 * If our per-cpu crng is older than the base_crng, then it means
+	 * somebody reseeded the base_crng. In that case, we do fast key
+	 * erasure on the base_crng, and use its output as the new key
+	 * for our per-cpu crng. This brings us up to date with base_crng.
+	 */
+	if (unlikely(crng->generation != READ_ONCE(base_crng.generation))) {
+		spin_lock(&base_crng.lock);
+		crng_fast_key_erasure(base_crng.key, chacha_state,
+				      crng->key, sizeof(crng->key));
+		crng->generation = base_crng.generation;
+		spin_unlock(&base_crng.lock);
+	}
+
+	/*
+	 * Finally, when we've made it this far, our per-cpu crng has an up
+	 * to date key, and we can do fast key erasure with it to produce
+	 * some random data and a ChaCha state for the caller. All other
+	 * branches of this function are "unlikely", so most of the time we
+	 * should wind up here immediately.
+	 */
+	crng_fast_key_erasure(crng->key, chacha_state, random_data, random_data_len);
+	local_irq_restore(flags);
+}
+
+static ssize_t get_random_bytes_user(void __user *buf, size_t nbytes)
+{
+	bool large_request = nbytes > 256;
+	ssize_t ret = 0, len;
+	u32 chacha_state[CHACHA_BLOCK_SIZE / sizeof(u32)];
+	u8 output[CHACHA_BLOCK_SIZE];
+
+	if (!nbytes)
+		return 0;
+
+	len = min_t(ssize_t, 32, nbytes);
+	crng_make_state(chacha_state, output, len);
+
+	if (copy_to_user(buf, output, len))
+		return -EFAULT;
+	nbytes -= len;
+	buf += len;
+	ret += len;
 
 	while (nbytes) {
 		if (large_request && need_resched()) {
-			if (signal_pending(current)) {
-				if (ret == 0)
-					ret = -ERESTARTSYS;
+			if (signal_pending(current))
 				break;
-			}
 			schedule();
 		}
 
-		extract_crng(tmp);
-		i = min_t(int, nbytes, CHACHA_BLOCK_SIZE);
-		if (copy_to_user(buf, tmp, i)) {
+		chacha20_block(chacha_state, output);
+		if (unlikely(chacha_state[12] == 0))
+			++chacha_state[13];
+
+		len = min_t(ssize_t, nbytes, CHACHA_BLOCK_SIZE);
+		if (copy_to_user(buf, output, len)) {
 			ret = -EFAULT;
 			break;
 		}
 
-		nbytes -= i;
-		buf += i;
-		ret += i;
+		nbytes -= len;
+		buf += len;
+		ret += len;
 	}
-	crng_backtrack_protect(tmp, i);
-
-	/* Wipe data just written to memory */
-	memzero_explicit(tmp, sizeof(tmp));
 
+	memzero_explicit(chacha_state, sizeof(chacha_state));
+	memzero_explicit(output, sizeof(output));
 	return ret;
 }
 
@@ -976,23 +1014,36 @@ static void _warn_unseeded_randomness(co
  */
 static void _get_random_bytes(void *buf, int nbytes)
 {
-	u8 tmp[CHACHA_BLOCK_SIZE] __aligned(4);
+	u32 chacha_state[CHACHA_BLOCK_SIZE / sizeof(u32)];
+	u8 tmp[CHACHA_BLOCK_SIZE];
+	ssize_t len;
 
 	trace_get_random_bytes(nbytes, _RET_IP_);
 
-	while (nbytes >= CHACHA_BLOCK_SIZE) {
-		extract_crng(buf);
-		buf += CHACHA_BLOCK_SIZE;
+	if (!nbytes)
+		return;
+
+	len = min_t(ssize_t, 32, nbytes);
+	crng_make_state(chacha_state, buf, len);
+	nbytes -= len;
+	buf += len;
+
+	while (nbytes) {
+		if (nbytes < CHACHA_BLOCK_SIZE) {
+			chacha20_block(chacha_state, tmp);
+			memcpy(buf, tmp, nbytes);
+			memzero_explicit(tmp, sizeof(tmp));
+			break;
+		}
+
+		chacha20_block(chacha_state, buf);
+		if (unlikely(chacha_state[12] == 0))
+			++chacha_state[13];
 		nbytes -= CHACHA_BLOCK_SIZE;
+		buf += CHACHA_BLOCK_SIZE;
 	}
 
-	if (nbytes > 0) {
-		extract_crng(tmp);
-		memcpy(buf, tmp, nbytes);
-		crng_backtrack_protect(tmp, nbytes);
-	} else
-		crng_backtrack_protect(tmp, CHACHA_BLOCK_SIZE);
-	memzero_explicit(tmp, sizeof(tmp));
+	memzero_explicit(chacha_state, sizeof(chacha_state));
 }
 
 void get_random_bytes(void *buf, int nbytes)
@@ -1223,13 +1274,12 @@ int __init rand_initialize(void)
 	mix_pool_bytes(&now, sizeof(now));
 	mix_pool_bytes(utsname(), sizeof(*(utsname())));
 
-	extract_entropy(&primary_crng.state[4], sizeof(u32) * 12);
+	extract_entropy(base_crng.key, sizeof(base_crng.key));
 	if (arch_init && trust_cpu && crng_init < 2) {
 		invalidate_batched_entropy();
 		crng_init = 2;
 		pr_notice("crng init done (trusting CPU's manufacturer)\n");
 	}
-	primary_crng.init_time = jiffies - CRNG_RESEED_INTERVAL - 1;
 
 	if (ratelimit_disable) {
 		urandom_warning.interval = 0;
@@ -1261,7 +1311,7 @@ static ssize_t urandom_read_nowarn(struc
 	int ret;
 
 	nbytes = min_t(size_t, nbytes, INT_MAX >> 6);
-	ret = extract_crng_user(buf, nbytes);
+	ret = get_random_bytes_user(buf, nbytes);
 	trace_urandom_read(8 * nbytes, 0, input_pool.entropy_count);
 	return ret;
 }
@@ -1567,8 +1617,15 @@ static atomic_t batch_generation = ATOMI
 
 struct batched_entropy {
 	union {
-		u64 entropy_u64[CHACHA_BLOCK_SIZE / sizeof(u64)];
-		u32 entropy_u32[CHACHA_BLOCK_SIZE / sizeof(u32)];
+		/*
+		 * We make this 1.5x a ChaCha block, so that we get the
+		 * remaining 32 bytes from fast key erasure, plus one full
+		 * block from the detached ChaCha state. We can increase
+		 * the size of this later if needed so long as we keep the
+		 * formula of (integer_blocks + 0.5) * CHACHA_BLOCK_SIZE.
+		 */
+		u64 entropy_u64[CHACHA_BLOCK_SIZE * 3 / (2 * sizeof(u64))];
+		u32 entropy_u32[CHACHA_BLOCK_SIZE * 3 / (2 * sizeof(u32))];
 	};
 	unsigned int position;
 	int generation;
@@ -1576,13 +1633,13 @@ struct batched_entropy {
 
 /*
  * Get a random word for internal kernel use only. The quality of the random
- * number is good as /dev/urandom, but there is no backtrack protection, with
- * the goal of being quite fast and not depleting entropy. In order to ensure
- * that the randomness provided by this function is okay, the function
- * wait_for_random_bytes() should be called and return 0 at least once at any
- * point prior.
+ * number is good as /dev/urandom. In order to ensure that the randomness
+ * provided by this function is okay, the function wait_for_random_bytes()
+ * should be called and return 0 at least once at any point prior.
  */
-static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u64);
+static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u64) = {
+	.position = UINT_MAX
+};
 
 u64 get_random_u64(void)
 {
@@ -1598,20 +1655,24 @@ u64 get_random_u64(void)
 	batch = raw_cpu_ptr(&batched_entropy_u64);
 
 	next_gen = atomic_read(&batch_generation);
-	if (batch->position % ARRAY_SIZE(batch->entropy_u64) == 0 ||
+	if (batch->position >= ARRAY_SIZE(batch->entropy_u64) ||
 	    next_gen != batch->generation) {
-		extract_crng((u8 *)batch->entropy_u64);
+		_get_random_bytes(batch->entropy_u64, sizeof(batch->entropy_u64));
 		batch->position = 0;
 		batch->generation = next_gen;
 	}
 
-	ret = batch->entropy_u64[batch->position++];
+	ret = batch->entropy_u64[batch->position];
+	batch->entropy_u64[batch->position] = 0;
+	++batch->position;
 	local_irq_restore(flags);
 	return ret;
 }
 EXPORT_SYMBOL(get_random_u64);
 
-static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u32);
+static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u32) = {
+	.position = UINT_MAX
+};
 
 u32 get_random_u32(void)
 {
@@ -1627,14 +1688,16 @@ u32 get_random_u32(void)
 	batch = raw_cpu_ptr(&batched_entropy_u32);
 
 	next_gen = atomic_read(&batch_generation);
-	if (batch->position % ARRAY_SIZE(batch->entropy_u32) == 0 ||
+	if (batch->position >= ARRAY_SIZE(batch->entropy_u32) ||
 	    next_gen != batch->generation) {
-		extract_crng((u8 *)batch->entropy_u32);
+		_get_random_bytes(batch->entropy_u32, sizeof(batch->entropy_u32));
 		batch->position = 0;
 		batch->generation = next_gen;
 	}
 
-	ret = batch->entropy_u32[batch->position++];
+	ret = batch->entropy_u32[batch->position];
+	batch->entropy_u32[batch->position] = 0;
+	++batch->position;
 	local_irq_restore(flags);
 	return ret;
 }


