linux-kernel.vger.kernel.org archive mirror
* [PATCH] random: Make /dev/random wait for crng_ready
@ 2019-02-15  1:47 Bernd Edlinger
  2019-02-15  5:40 ` Bernd Edlinger
  2019-02-15 13:58 ` [PATCHv2] " Bernd Edlinger
  0 siblings, 2 replies; 14+ messages in thread
From: Bernd Edlinger @ 2019-02-15  1:47 UTC (permalink / raw)
  To: Theodore Ts'o, Arnd Bergmann, Greg Kroah-Hartman, linux-kernel

Reading from /dev/random may return data while the getrandom
syscall is still blocking.

Those bytes are not yet cryptographically secure.

Make read and select for reading on /dev/random wait until
the crng is fully initialized.

Signed-off-by: Bernd Edlinger <bernd.edlinger@hotmail.de>
---
 drivers/char/random.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 38c6d1a..ddbc0e2 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -720,7 +720,7 @@ static void credit_entropy_bits(struct entropy_store *r, int nbits)
 		}
 
 		/* should we wake readers? */
-		if (entropy_bits >= random_read_wakeup_bits &&
+		if (crng_ready() && entropy_bits >= random_read_wakeup_bits &&
 		    wq_has_sleeper(&random_read_wait)) {
 			wake_up_interruptible(&random_read_wait);
 			kill_fasync(&fasync, SIGIO, POLL_IN);
@@ -1851,7 +1851,9 @@ void rand_initialize_disk(struct gendisk *disk)
 
 	nbytes = min_t(size_t, nbytes, SEC_XFER_SIZE);
 	while (1) {
-		n = extract_entropy_user(&blocking_pool, buf, nbytes);
+		n = crng_ready()
+			?  extract_entropy_user(&blocking_pool, buf, nbytes)
+			: 0;
 		if (n < 0)
 			return n;
 		trace_random_read(n*8, (nbytes-n)*8,
@@ -1865,6 +1867,7 @@ void rand_initialize_disk(struct gendisk *disk)
 			return -EAGAIN;
 
 		wait_event_interruptible(random_read_wait,
+			crng_ready() &&
 			ENTROPY_BITS(&input_pool) >=
 			random_read_wakeup_bits);
 		if (signal_pending(current))
@@ -1909,7 +1912,8 @@ void rand_initialize_disk(struct gendisk *disk)
 	poll_wait(file, &random_read_wait, wait);
 	poll_wait(file, &random_write_wait, wait);
 	mask = 0;
-	if (ENTROPY_BITS(&input_pool) >= random_read_wakeup_bits)
+	if (crng_ready() &&
+		ENTROPY_BITS(&input_pool) >= random_read_wakeup_bits)
 		mask |= EPOLLIN | EPOLLRDNORM;
 	if (ENTROPY_BITS(&input_pool) < random_write_wakeup_bits)
 		mask |= EPOLLOUT | EPOLLWRNORM;
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: [PATCH] random: Make /dev/random wait for crng_ready
  2019-02-15  1:47 [PATCH] random: Make /dev/random wait for crng_ready Bernd Edlinger
@ 2019-02-15  5:40 ` Bernd Edlinger
  2019-02-15 13:58 ` [PATCHv2] " Bernd Edlinger
  1 sibling, 0 replies; 14+ messages in thread
From: Bernd Edlinger @ 2019-02-15  5:40 UTC (permalink / raw)
  To: Theodore Ts'o, Arnd Bergmann, Greg Kroah-Hartman, linux-kernel

On 2/15/19 2:47 AM, Bernd Edlinger wrote:
>  	while (1) {
> -		n = extract_entropy_user(&blocking_pool, buf, nbytes);
> +		n = crng_ready()
> +			?  extract_entropy_user(&blocking_pool, buf, nbytes)
> +			: 0;

Aehm, the whitespace after the ? does not align with the :; I didn't see that before.

I wonder if it would be better style to use
 if (crng_ready())
      n = extract...;
 else
      n = 0;


Bernd.


^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCHv2] random: Make /dev/random wait for crng_ready
  2019-02-15  1:47 [PATCH] random: Make /dev/random wait for crng_ready Bernd Edlinger
  2019-02-15  5:40 ` Bernd Edlinger
@ 2019-02-15 13:58 ` Bernd Edlinger
  2019-02-16 18:23   ` Theodore Y. Ts'o
  1 sibling, 1 reply; 14+ messages in thread
From: Bernd Edlinger @ 2019-02-15 13:58 UTC (permalink / raw)
  To: Theodore Ts'o, Arnd Bergmann, Greg Kroah-Hartman, linux-kernel

Reading from /dev/random may return data while the getrandom
syscall is still blocking.

Those bytes are not yet cryptographically secure.

The first byte read from /dev/random can carry as little
as 8 bits of estimated entropy.  Once a read blocks, it will
block until /proc/sys/kernel/random/read_wakeup_threshold
bits are available, which is usually 64 bits, but can be
configured as low as 8 bits.  A select will wake up when
at least read_wakeup_threshold bits are available.
Also, constantly reading bytes out of /dev/random prevents
the crng init done event from ever occurring.

Fixed by making read and select on /dev/random wait until
the crng is fully initialized.

Signed-off-by: Bernd Edlinger <bernd.edlinger@hotmail.de>
---
v2: Fixed one whitespace issue in the code.
    Also added some more details about the problem
    to the commit message.
---
 drivers/char/random.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index bd449ad..c8f16f0 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -721,7 +721,7 @@ static void credit_entropy_bits(struct entropy_store *r, int nbits)
 		}
 
 		/* should we wake readers? */
-		if (entropy_bits >= random_read_wakeup_bits &&
+		if (crng_ready() && entropy_bits >= random_read_wakeup_bits &&
 		    wq_has_sleeper(&random_read_wait)) {
 			wake_up_interruptible(&random_read_wait);
 			kill_fasync(&fasync, SIGIO, POLL_IN);
@@ -1826,7 +1826,9 @@ _random_read(int nonblock, char __user *buf, size_t nbytes)
 
 	nbytes = min_t(size_t, nbytes, SEC_XFER_SIZE);
 	while (1) {
-		n = extract_entropy_user(&blocking_pool, buf, nbytes);
+		n = crng_ready()
+			? extract_entropy_user(&blocking_pool, buf, nbytes)
+			: 0;
 		if (n < 0)
 			return n;
 		trace_random_read(n*8, (nbytes-n)*8,
@@ -1840,6 +1842,7 @@ _random_read(int nonblock, char __user *buf, size_t nbytes)
 			return -EAGAIN;
 
 		wait_event_interruptible(random_read_wait,
+			crng_ready() &&
 			ENTROPY_BITS(&input_pool) >=
 			random_read_wakeup_bits);
 		if (signal_pending(current))
@@ -1884,7 +1887,8 @@ random_poll(struct file *file, poll_table * wait)
 	poll_wait(file, &random_read_wait, wait);
 	poll_wait(file, &random_write_wait, wait);
 	mask = 0;
-	if (ENTROPY_BITS(&input_pool) >= random_read_wakeup_bits)
+	if (crng_ready() &&
+		ENTROPY_BITS(&input_pool) >= random_read_wakeup_bits)
 		mask |= EPOLLIN | EPOLLRDNORM;
 	if (ENTROPY_BITS(&input_pool) < random_write_wakeup_bits)
 		mask |= EPOLLOUT | EPOLLWRNORM;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: [PATCHv2] random: Make /dev/random wait for crng_ready
  2019-02-15 13:58 ` [PATCHv2] " Bernd Edlinger
@ 2019-02-16 18:23   ` Theodore Y. Ts'o
  2019-02-16 20:12     ` Bernd Edlinger
  2019-02-17  8:44     ` [PATCHv3] " Bernd Edlinger
  0 siblings, 2 replies; 14+ messages in thread
From: Theodore Y. Ts'o @ 2019-02-16 18:23 UTC (permalink / raw)
  To: Bernd Edlinger; +Cc: Arnd Bergmann, Greg Kroah-Hartman, linux-kernel

On Fri, Feb 15, 2019 at 01:58:20PM +0000, Bernd Edlinger wrote:
> Reading from /dev/random may return data while the getrandom
> syscall is still blocking.
> 
> Those bytes are not yet cryptographically secure.
> 
> The first byte read from /dev/random can carry as little
> as 8 bits of estimated entropy.  Once a read blocks, it will
> block until /proc/sys/kernel/random/read_wakeup_threshold
> bits are available, which is usually 64 bits, but can be
> configured as low as 8 bits.  A select will wake up when
> at least read_wakeup_threshold bits are available.
> Also, constantly reading bytes out of /dev/random prevents
> the crng init done event from ever occurring.
> 
> Fixed by making read and select on /dev/random wait until
> the crng is fully initialized.
> 
> Signed-off-by: Bernd Edlinger <bernd.edlinger@hotmail.de>

This really isn't a correct way to fix things, since the blocking_pool
used for /dev/random and the CRNG state are different things, and are
fed by different sources of entropy.

What we should do is to have a separate flag which indicates that the
blocking_pool has been adequately initialized, and set it only when
the entropy count in the blocking pool is at least 128 bits.  When we
get woken up by the reader, we would transfer entropy from the input
pool to the blocking pool, and if the pool is not yet initialized
and the entropy count is less than 128 bits, we wait until it is.

    		      	      	       - Ted

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCHv2] random: Make /dev/random wait for crng_ready
  2019-02-16 18:23   ` Theodore Y. Ts'o
@ 2019-02-16 20:12     ` Bernd Edlinger
  2019-02-17  8:44     ` [PATCHv3] " Bernd Edlinger
  1 sibling, 0 replies; 14+ messages in thread
From: Bernd Edlinger @ 2019-02-16 20:12 UTC (permalink / raw)
  To: Theodore Y. Ts'o, Arnd Bergmann, Greg Kroah-Hartman, linux-kernel

On 2/16/19 7:23 PM, Theodore Y. Ts'o wrote:
> 
> This really isn't a correct way to fix things; since the blocking_pool
> used for /dev/random and the CRNG state are different things, and are
> fed by different sources of entropy.
> 

It is interesting that random_poll only looks at the input_pool.
Could it be that the existing entropy in the blocking_pool also matters?

My observation is that _random_read always extracts the requested
number of bytes from the input_pool by calling extract_entropy_user.
By not calling extract_entropy_user it is straightforward to ensure
that the entropy in the input_pool can accumulate until the primary_crng
is initialized.

> What we should do is to have a separate flag which indicates that the
> blocking_pool has been adequately initialized, and set it only when
> the entropy count in the blocking pool is at least 128 bits.  When we
> get woken up by the reader, we would transfer entropy from the input
> pool to the blocking pool, and if the pool is not yet initialized
> and the entropy count is less than 128 bits, we wait until it is.
> 

What happens for me is that single-byte reads out of /dev/random are able
to pull so much entropy out of the input_pool that the input_pool
never reaches 128 bits, and therefore crng_ready will never become true.

Therefore I want to delay the blocking_pool until the CRNG is fully
initialized.  I hope you agree with that.

When that is done, there are two ways how entropy is transferred
from the input_pool to the blocking_pool, just to confirm I have
not overlooked something:

                /* If the input pool is getting full, send some
                 * entropy to the blocking pool until it is 75% full.
                 */
                if (entropy_bits > random_write_wakeup_bits &&
                    r->initialized &&
                    r->entropy_total >= 2*random_read_wakeup_bits) {
                        struct entropy_store *other = &blocking_pool;

                        if (other->entropy_count <=
                            3 * other->poolinfo->poolfracbits / 4) {
                                schedule_work(&other->push_work);
                                r->entropy_total = 0;
                        }
                }

This rarely happens before crng_ready, because it requires more than
128 bits of entropy, at least in my setup.

The other path is:
_random_read->extract_entropy_user->xfer_secondary_pool

which pulls at least random_read_wakeup_bits / 8 bytes
from the input_pool to the blocking_pool,
and the same amount of entropy is extracted again
from the blocking_pool, so the blocking_pool's entropy_count
effectively stays at zero when this path is taken.

That is what you don't like, right?

You propose to disable the second path until the first path
has pulled 128 bits into the blocking_pool, right?


Thanks
Bernd.

^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCHv3] random: Make /dev/random wait for crng_ready
  2019-02-16 18:23   ` Theodore Y. Ts'o
  2019-02-16 20:12     ` Bernd Edlinger
@ 2019-02-17  8:44     ` Bernd Edlinger
  2019-02-17 13:48       ` Bernd Edlinger
  1 sibling, 1 reply; 14+ messages in thread
From: Bernd Edlinger @ 2019-02-17  8:44 UTC (permalink / raw)
  To: Theodore Y. Ts'o, Arnd Bergmann, Greg Kroah-Hartman, linux-kernel

Reading from /dev/random may return data while the getrandom
syscall is still blocking.

Those bytes are not yet cryptographically secure.

The first byte read from /dev/random can carry as little
as 8 bits of estimated entropy.  Once a read blocks, it will
block until /proc/sys/kernel/random/read_wakeup_threshold
bits are available, which is usually 64 bits, but can be
configured as low as 8 bits.  A select will wake up when
at least read_wakeup_threshold bits are available.
Also, constantly reading bytes out of /dev/random prevents
the crng init done event from ever occurring.

Fixed by making read and select on /dev/random wait until
the crng is fully initialized and the blocking_pool is
also initialized, which means that more than 128 bits of
entropy have been accumulated in the blocking_pool.

Signed-off-by: Bernd Edlinger <bernd.edlinger@hotmail.de>
---
The v3 version waits much longer than the v2 version,
since the first 128 bits from the input_pool go into the
CRNG, and the next 64 bits are only accounted as 3/4 = 48 bits
in the blocking_pool, so we need in total 192 bits from
the input_pool until the blocking_pool is initialized,
and another 64 bits until a select on /dev/random wakes up.

On a system with only interrupt_randomness, this
takes 128+192+64=384 random bits, about 6.5 minutes
from boot until /dev/random is readable.

Maybe this is taking too long after the CRNG is ready?

After the input_pool has already been initialized,
I wonder if feeding the next 64 bits from the input_pool
to the empty blocking_pool could already be considered
good enough to derive the first random byte from
the blocking_pool?

Thanks
Bernd.
---
 drivers/char/random.c | 22 +++++++++++++++++-----
 1 file changed, 17 insertions(+), 5 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 38c6d1a..666102d 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -476,6 +476,7 @@ struct entropy_store {
 	__u8 last_data[EXTRACT_SIZE];
 };
 
+static void _xfer_secondary_pool(struct entropy_store *r, size_t nbytes);
 static ssize_t extract_entropy(struct entropy_store *r, void *buf,
 			       size_t nbytes, int min, int rsvd);
 static ssize_t _extract_entropy(struct entropy_store *r, void *buf,
@@ -719,8 +720,16 @@ static void credit_entropy_bits(struct entropy_store *r, int nbits)
 			entropy_bits = r->entropy_count >> ENTROPY_SHIFT;
 		}
 
+		if (crng_ready() && !blocking_pool.initialized &&
+			entropy_bits >= random_read_wakeup_bits) {
+			_xfer_secondary_pool(&blocking_pool, entropy_bits / 8);
+			r->entropy_total = 0;
+			entropy_bits = r->entropy_count >> ENTROPY_SHIFT;
+		}
+
 		/* should we wake readers? */
-		if (entropy_bits >= random_read_wakeup_bits &&
+		if (blocking_pool.initialized &&
+		    entropy_bits >= random_read_wakeup_bits &&
 		    wq_has_sleeper(&random_read_wait)) {
 			wake_up_interruptible(&random_read_wait);
 			kill_fasync(&fasync, SIGIO, POLL_IN);
@@ -1317,7 +1326,6 @@ void add_disk_randomness(struct gendisk *disk)
  * from the primary pool to the secondary extraction pool. We make
  * sure we pull enough for a 'catastrophic reseed'.
  */
-static void _xfer_secondary_pool(struct entropy_store *r, size_t nbytes);
 static void xfer_secondary_pool(struct entropy_store *r, size_t nbytes)
 {
 	if (!r->pull ||
@@ -1661,7 +1669,7 @@ void get_random_bytes(void *buf, int nbytes)
  */
 int wait_for_random_bytes(void)
 {
-	if (likely(crng_ready()))
+	if (crng_ready())
 		return 0;
 	return wait_event_interruptible(crng_init_wait, crng_ready());
 }
@@ -1851,7 +1859,9 @@ void rand_initialize_disk(struct gendisk *disk)
 
 	nbytes = min_t(size_t, nbytes, SEC_XFER_SIZE);
 	while (1) {
-		n = extract_entropy_user(&blocking_pool, buf, nbytes);
+		n = blocking_pool.initialized
+			? extract_entropy_user(&blocking_pool, buf, nbytes)
+			: 0;
 		if (n < 0)
 			return n;
 		trace_random_read(n*8, (nbytes-n)*8,
@@ -1865,6 +1875,7 @@ void rand_initialize_disk(struct gendisk *disk)
 			return -EAGAIN;
 
 		wait_event_interruptible(random_read_wait,
+			blocking_pool.initialized &&
 			ENTROPY_BITS(&input_pool) >=
 			random_read_wakeup_bits);
 		if (signal_pending(current))
@@ -1909,7 +1920,8 @@ void rand_initialize_disk(struct gendisk *disk)
 	poll_wait(file, &random_read_wait, wait);
 	poll_wait(file, &random_write_wait, wait);
 	mask = 0;
-	if (ENTROPY_BITS(&input_pool) >= random_read_wakeup_bits)
+	if (blocking_pool.initialized &&
+		ENTROPY_BITS(&input_pool) >= random_read_wakeup_bits)
 		mask |= EPOLLIN | EPOLLRDNORM;
 	if (ENTROPY_BITS(&input_pool) < random_write_wakeup_bits)
 		mask |= EPOLLOUT | EPOLLWRNORM;
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: [PATCHv3] random: Make /dev/random wait for crng_ready
  2019-02-17  8:44     ` [PATCHv3] " Bernd Edlinger
@ 2019-02-17 13:48       ` Bernd Edlinger
  2019-02-17 20:55         ` [PATCHv4] random: Make /dev/random wait for input_pool initialized Bernd Edlinger
  0 siblings, 1 reply; 14+ messages in thread
From: Bernd Edlinger @ 2019-02-17 13:48 UTC (permalink / raw)
  To: Theodore Y. Ts'o, Arnd Bergmann, Greg Kroah-Hartman, linux-kernel

On 2/17/19 9:44 AM, Bernd Edlinger wrote:
>  
> +		if (crng_ready() && !blocking_pool.initialized &&

After some more debugging I realize that blocking_pool.initialized
becomes true after 128 bits of input entropy, but that is credited as
only 80 bits, due to the asymptotic 3/4 crediting formula.

I see that this will also enable the code path below:

                if (entropy_bits > random_write_wakeup_bits &&
                    r->initialized &&
                    r->entropy_total >= 2*random_read_wakeup_bits) {
                        struct entropy_store *other = &blocking_pool;

                        if (other->entropy_count <=
                            3 * other->poolinfo->poolfracbits / 4) {
                                schedule_work(&other->push_work);
                                r->entropy_total = 0;
                        }

when random_write_wakeup_bits is below 80 and random_read_wakeup_bits
is also smallish.  This depletes the input_pool in favor of the
blocking_pool, while we are actually waiting for the input_pool to
reach 128 bits of security strength in order to seed the CRNG.

I am testing a new version and will post it later today.

Sorry for all the back-and-forth.


Bernd.

^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCHv4] random: Make /dev/random wait for input_pool initialized
  2019-02-17 13:48       ` Bernd Edlinger
@ 2019-02-17 20:55         ` Bernd Edlinger
  2019-02-19  7:16           ` Bernd Edlinger
  0 siblings, 1 reply; 14+ messages in thread
From: Bernd Edlinger @ 2019-02-17 20:55 UTC (permalink / raw)
  To: Theodore Y. Ts'o, Arnd Bergmann, Greg Kroah-Hartman, linux-kernel

Reading from /dev/random may return data while the getrandom
syscall is still blocking.

Those bytes are not yet cryptographically secure.

The first byte read from /dev/random can carry as little
as 8 bits of estimated entropy.  Once a read blocks, it will
block until /proc/sys/kernel/random/read_wakeup_threshold
bits are available, which is usually 64 bits, but can be
configured as low as 8 bits.  A select will wake up when
at least read_wakeup_threshold bits are available.
Also, constantly reading bytes out of /dev/random prevents
the crng init done event from ever occurring.

Furthermore, previously the input_pool got initialized once
more than 128 bits of raw entropy were accumulated, but that
is credited as only 80 bits of pool entropy.  Therefore, if
random_write_wakeup_bits is configured lower than 80 bits,
the code path that sends entropy to the blocking_pool can
get activated before the CRNG is initialized, which delays
the initialization of the CRNG until the blocking_pool is
filled to 75%.

Fixed by making read and select on /dev/random wait until
the input_pool is initialized which means that more than
128 bits of entropy have been fed into the input_pool.
This is guaranteed to happen after the CRNG is ready.
So after the first byte is readable from /dev/random
also /dev/urandom is guaranteed to be fully initialized.

Another minor tweak this patch makes is that when the primary
CRNG is periodically reseeded, we reserve twice the amount
of read_wakeup_threshold, for fairness reasons, to keep
/dev/random readable when it is not accessed by user mode.

And finally, when the system runs for a long time the jiffies
may roll over, but crng_global_init_time is not updated
when the reseed every 5 minutes happens.  This could cause
CRNG reseeding all the time, making the input_pool run
low on entropy, which would make /dev/random block
unexpectedly.

Signed-off-by: Bernd Edlinger <bernd.edlinger@hotmail.de>
---
v4 makes /dev/random block until the input_pool has
reached 128 bits of entropy at least once.  Now make
everything depend on input_pool.initialized.
Additionally fixed a potential issue with jiffies rolling
over in the CRNG reseeding logic, which would cause
crng_global_init_time to be in the future, causing
reseeding to happen all the time.
Finally added a fairness reserve to keep /dev/random
in the readable state (select can check non-destructively)
when nothing but CRNG reseeding happens.
---
 drivers/char/random.c | 22 +++++++++++++++-------
 1 file changed, 15 insertions(+), 7 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index bd449ad..97cde01 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -645,6 +645,7 @@ static void process_random_ready_list(void)
 static void credit_entropy_bits(struct entropy_store *r, int nbits)
 {
 	int entropy_count, orig;
+	int entropy_bits;
 	const int pool_size = r->poolinfo->poolfracbits;
 	int nfrac = nbits << ENTROPY_SHIFT;
 
@@ -702,8 +703,9 @@ static void credit_entropy_bits(struct entropy_store *r, int nbits)
 	if (cmpxchg(&r->entropy_count, orig, entropy_count) != orig)
 		goto retry;
 
+	entropy_bits = entropy_count >> ENTROPY_SHIFT;
 	r->entropy_total += nbits;
-	if (!r->initialized && r->entropy_total > 128) {
+	if (!r->initialized && entropy_bits >= 128) {
 		r->initialized = 1;
 		r->entropy_total = 0;
 	}
@@ -713,8 +715,6 @@ static void credit_entropy_bits(struct entropy_store *r, int nbits)
 				  r->entropy_total, _RET_IP_);
 
 	if (r == &input_pool) {
-		int entropy_bits = entropy_count >> ENTROPY_SHIFT;
-
 		if (crng_init < 2 && entropy_bits >= 128) {
 			crng_reseed(&primary_crng, r);
 			entropy_bits = r->entropy_count >> ENTROPY_SHIFT;
@@ -722,10 +722,12 @@ static void credit_entropy_bits(struct entropy_store *r, int nbits)
 
 		/* should we wake readers? */
 		if (entropy_bits >= random_read_wakeup_bits &&
+		    r->initialized &&
 		    wq_has_sleeper(&random_read_wait)) {
 			wake_up_interruptible(&random_read_wait);
 			kill_fasync(&fasync, SIGIO, POLL_IN);
 		}
+
 		/* If the input pool is getting full, send some
 		 * entropy to the blocking pool until it is 75% full.
 		 */
@@ -917,9 +919,11 @@ static void crng_reseed(struct crng_state *crng, struct entropy_store *r)
 	} buf;
 
 	if (r) {
-		num = extract_entropy(r, &buf, 32, 16, 0);
+		num = extract_entropy(r, &buf, 32, 16, crng_ready()
+				      ? random_read_wakeup_bits / 4 : 0);
 		if (num == 0)
 			return;
+		crng_global_init_time = jiffies;
 	} else {
 		_extract_crng(&primary_crng, buf.block);
 		_crng_backtrack_protect(&primary_crng, buf.block,
@@ -1652,7 +1656,7 @@ EXPORT_SYMBOL(get_random_bytes);
  */
 int wait_for_random_bytes(void)
 {
-	if (likely(crng_ready()))
+	if (crng_ready())
 		return 0;
 	return wait_event_interruptible(crng_init_wait, crng_ready());
 }
@@ -1826,7 +1830,9 @@ _random_read(int nonblock, char __user *buf, size_t nbytes)
 
 	nbytes = min_t(size_t, nbytes, SEC_XFER_SIZE);
 	while (1) {
-		n = extract_entropy_user(&blocking_pool, buf, nbytes);
+		n = input_pool.initialized
+			? extract_entropy_user(&blocking_pool, buf, nbytes)
+			: 0;
 		if (n < 0)
 			return n;
 		trace_random_read(n*8, (nbytes-n)*8,
@@ -1840,6 +1846,7 @@ _random_read(int nonblock, char __user *buf, size_t nbytes)
 			return -EAGAIN;
 
 		wait_event_interruptible(random_read_wait,
+			input_pool.initialized &&
 			ENTROPY_BITS(&input_pool) >=
 			random_read_wakeup_bits);
 		if (signal_pending(current))
@@ -1884,7 +1891,8 @@ random_poll(struct file *file, poll_table * wait)
 	poll_wait(file, &random_read_wait, wait);
 	poll_wait(file, &random_write_wait, wait);
 	mask = 0;
-	if (ENTROPY_BITS(&input_pool) >= random_read_wakeup_bits)
+	if (input_pool.initialized &&
+		ENTROPY_BITS(&input_pool) >= random_read_wakeup_bits)
 		mask |= EPOLLIN | EPOLLRDNORM;
 	if (ENTROPY_BITS(&input_pool) < random_write_wakeup_bits)
 		mask |= EPOLLOUT | EPOLLWRNORM;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: [PATCHv4] random: Make /dev/random wait for input_pool initialized
  2019-02-17 20:55         ` [PATCHv4] random: Make /dev/random wait for input_pool initialized Bernd Edlinger
@ 2019-02-19  7:16           ` Bernd Edlinger
  2019-02-19 17:09             ` [PATCHv5] " Bernd Edlinger
  0 siblings, 1 reply; 14+ messages in thread
From: Bernd Edlinger @ 2019-02-19  7:16 UTC (permalink / raw)
  To: Theodore Y. Ts'o, Arnd Bergmann, Greg Kroah-Hartman, linux-kernel

> @@ -1826,7 +1830,9 @@ _random_read(int nonblock, char __user *buf, size_t nbytes)
> 
>         nbytes = min_t(size_t, nbytes, SEC_XFER_SIZE);
>         while (1) {
> -               n = extract_entropy_user(&blocking_pool, buf, nbytes);
> +               n = input_pool.initialized
> +                       ? extract_entropy_user(&blocking_pool, buf, nbytes)

Aehm, sorry, now I see this creates a race condition with the code here,
since crng_reseed also tries to read from the input_pool,
but input_pool.initialized is already true:

                if (crng_init < 2 && entropy_bits >= 128) {
                        crng_reseed(&primary_crng, r);
                        entropy_bits = r->entropy_count >> ENTROPY_SHIFT;


I was able to get a system into this behavior by running 3 instances of:
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

int main()
{
  /* non-blocking, read-only open of /dev/random */
  int f = open("/dev/random", O_NDELAY);
  if (f < 0) return 1;
  for (;;)
  {
    unsigned char buf[16];
    /* returns -1 with EAGAIN while no entropy is available */
    int x = read(f, buf, sizeof(buf));
    if (x >= 0)
    {
      int i;

      printf("read %d bytes: ", x);
      for (i = 0; i < x; i++) printf("%02x ", buf[i]);
      printf("\n");
    }
  }
}

and it managed to steal the entropy away before crng_reseed
was able to run.

So I think I will have to change this condition to:
> +               n = input_pool.initialized && crng_ready()
> +                       ? extract_entropy_user(&blocking_pool, buf, nbytes)


Thanks (for your patience :-)
Bernd.

^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCHv5] random: Make /dev/random wait for input_pool initialized
  2019-02-19  7:16           ` Bernd Edlinger
@ 2019-02-19 17:09             ` Bernd Edlinger
  2019-02-21  0:32               ` [PATCHv5] random: Make /dev/random wait for input_pool initialized Theodore Y. Ts'o
  0 siblings, 1 reply; 14+ messages in thread
From: Bernd Edlinger @ 2019-02-19 17:09 UTC (permalink / raw)
  To: Theodore Y. Ts'o, Arnd Bergmann, Greg Kroah-Hartman, linux-kernel

Reading from /dev/random may return data while the getrandom
syscall is still blocking.

Those bytes are not yet cryptographically secure.

The first byte read from /dev/random can carry as little
as 8 bits of estimated entropy.  Once a read blocks, it will
block until /proc/sys/kernel/random/read_wakeup_threshold
bits are available, which is usually 64 bits, but can be
configured as low as 8 bits.  A select will wake up when
at least read_wakeup_threshold bits are available.
Also, constantly reading bytes out of /dev/random prevents
the crng init done event from ever occurring.

Furthermore, previously the input_pool got initialized once
more than 128 bits of raw entropy were accumulated, but that
is credited as only 80 bits of pool entropy.  Therefore, if
random_write_wakeup_bits is configured lower than 80 bits,
the code path that sends entropy to the blocking_pool can
get activated before the CRNG is initialized, which delays
the initialization of the CRNG until the blocking_pool is
filled to 75%.

Fixed by making read and select on /dev/random wait until
the input_pool is initialized which means that more than
128 bits of entropy have been fed into the input_pool.
This is guaranteed to happen after the CRNG is ready.
So after the first byte is readable from /dev/random
also /dev/urandom is guaranteed to be fully initialized.

Another minor tweak this patch makes is that when the primary
CRNG is periodically reseeded, we reserve twice the amount
of read_wakeup_threshold, for fairness reasons, to keep
/dev/random readable when it is not accessed by user mode.

And finally, when the system runs for a long time the jiffies
may roll over, but crng_global_init_time is not updated
when the reseed every 5 minutes happens.  This could cause
CRNG reseeding all the time, making the input_pool run
low on entropy, which would make /dev/random block
unexpectedly.

Signed-off-by: Bernd Edlinger <bernd.edlinger@hotmail.de>
---
v4 makes /dev/random block until the input_pool has
reached 128 bits of entropy at least once.  Now make
everything depend on input_pool.initialized.
Additionally fixed a potential issue with jiffies rolling
over in the CRNG reseeding logic, which would cause
crng_global_init_time to be in the future, causing
reseeding to happen all the time.
Finally added a fairness reserve to keep /dev/random
in the readable state (select can check non-destructively)
when nothing but CRNG reseeding happens.

v5 adds one "&& crng_ready()" to fix a race condition.
---
 drivers/char/random.c | 22 +++++++++++++++-------
 1 file changed, 15 insertions(+), 7 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index bd449ad..652e024 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -645,6 +645,7 @@ static void process_random_ready_list(void)
 static void credit_entropy_bits(struct entropy_store *r, int nbits)
 {
 	int entropy_count, orig;
+	int entropy_bits;
 	const int pool_size = r->poolinfo->poolfracbits;
 	int nfrac = nbits << ENTROPY_SHIFT;
 
@@ -702,8 +703,9 @@ static void credit_entropy_bits(struct entropy_store *r, int nbits)
 	if (cmpxchg(&r->entropy_count, orig, entropy_count) != orig)
 		goto retry;
 
+	entropy_bits = entropy_count >> ENTROPY_SHIFT;
 	r->entropy_total += nbits;
-	if (!r->initialized && r->entropy_total > 128) {
+	if (!r->initialized && entropy_bits >= 128) {
 		r->initialized = 1;
 		r->entropy_total = 0;
 	}
@@ -713,8 +715,6 @@ static void credit_entropy_bits(struct entropy_store *r, int nbits)
 				  r->entropy_total, _RET_IP_);
 
 	if (r == &input_pool) {
-		int entropy_bits = entropy_count >> ENTROPY_SHIFT;
-
 		if (crng_init < 2 && entropy_bits >= 128) {
 			crng_reseed(&primary_crng, r);
 			entropy_bits = r->entropy_count >> ENTROPY_SHIFT;
@@ -722,10 +722,12 @@ static void credit_entropy_bits(struct entropy_store *r, int nbits)
 
 		/* should we wake readers? */
 		if (entropy_bits >= random_read_wakeup_bits &&
+		    r->initialized &&
 		    wq_has_sleeper(&random_read_wait)) {
 			wake_up_interruptible(&random_read_wait);
 			kill_fasync(&fasync, SIGIO, POLL_IN);
 		}
+
 		/* If the input pool is getting full, send some
 		 * entropy to the blocking pool until it is 75% full.
 		 */
@@ -917,9 +919,11 @@ static void crng_reseed(struct crng_state *crng, struct entropy_store *r)
 	} buf;
 
 	if (r) {
-		num = extract_entropy(r, &buf, 32, 16, 0);
+		num = extract_entropy(r, &buf, 32, 16, crng_ready()
+				      ? random_read_wakeup_bits / 4 : 0);
 		if (num == 0)
 			return;
+		crng_global_init_time = jiffies;
 	} else {
 		_extract_crng(&primary_crng, buf.block);
 		_crng_backtrack_protect(&primary_crng, buf.block,
@@ -1652,7 +1656,7 @@ EXPORT_SYMBOL(get_random_bytes);
  */
 int wait_for_random_bytes(void)
 {
-	if (likely(crng_ready()))
+	if (crng_ready())
 		return 0;
 	return wait_event_interruptible(crng_init_wait, crng_ready());
 }
@@ -1826,7 +1830,9 @@ _random_read(int nonblock, char __user *buf, size_t nbytes)
 
 	nbytes = min_t(size_t, nbytes, SEC_XFER_SIZE);
 	while (1) {
-		n = extract_entropy_user(&blocking_pool, buf, nbytes);
+		n = input_pool.initialized && crng_ready()
+			? extract_entropy_user(&blocking_pool, buf, nbytes)
+			: 0;
 		if (n < 0)
 			return n;
 		trace_random_read(n*8, (nbytes-n)*8,
@@ -1840,6 +1846,7 @@ _random_read(int nonblock, char __user *buf, size_t nbytes)
 			return -EAGAIN;
 
 		wait_event_interruptible(random_read_wait,
+			input_pool.initialized &&
 			ENTROPY_BITS(&input_pool) >=
 			random_read_wakeup_bits);
 		if (signal_pending(current))
@@ -1884,7 +1891,8 @@ random_poll(struct file *file, poll_table * wait)
 	poll_wait(file, &random_read_wait, wait);
 	poll_wait(file, &random_write_wait, wait);
 	mask = 0;
-	if (ENTROPY_BITS(&input_pool) >= random_read_wakeup_bits)
+	if (input_pool.initialized &&
+		ENTROPY_BITS(&input_pool) >= random_read_wakeup_bits)
 		mask |= EPOLLIN | EPOLLRDNORM;
 	if (ENTROPY_BITS(&input_pool) < random_write_wakeup_bits)
 		mask |= EPOLLOUT | EPOLLWRNORM;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: [PATCHv5] random: Make /dev/random wait for input_pool initialized
  2019-02-19 17:09             ` [PATCHv5] " Bernd Edlinger
@ 2019-02-21  0:32               ` Theodore Y. Ts'o
  2019-02-21 19:24                 ` Bernd Edlinger
  0 siblings, 1 reply; 14+ messages in thread
From: Theodore Y. Ts'o @ 2019-02-21  0:32 UTC (permalink / raw)
  To: Bernd Edlinger; +Cc: Arnd Bergmann, Greg Kroah-Hartman, linux-kernel

Hi Bernd,

I've been looking at your patch, and as far as the core part of what
you're patching, I think this patch below is a conceptually simpler
way of fixing the original core problem.  Please take a look and let
me know what you think.

There are some other auxiliary things that your patch is trying to do
where I'm not sure what you're trying to do, and I'm not at all sure
it's correct.  Those things should really get separated out as
separate patches.

> Another minor tweak this patch makes is that when the primary
> CRNG is periodically reseeded, we reserve twice the amount
> of read_wakeup_threshold for fairness reasons, to keep
> /dev/random readable when it is not accessed by user mode.

I'm not sure what you're trying to do/fix here.

     	  	 	   	  - Ted

From 9275727af22bc08958bad45d604038aa8e9b6766 Mon Sep 17 00:00:00 2001
From: Theodore Ts'o <tytso@mit.edu>
Date: Wed, 20 Feb 2019 16:06:38 -0500
Subject: [PATCH] random: only read from /dev/random after its pool has received 128 bits

Immediately after boot, we allow reads from /dev/random before its
entropy pool has been fully initialized.  Fix this so that we don't
allow this until the blocking pool has received 128 bits.

We do this by repurposing the initialized flag in the entropy pool
struct, and use the initialized flag in the blocking pool to indicate
whether it is safe to pull from the blocking pool.

To do this, we needed to rework when we decide to push entropy from the
input pool to the blocking pool, since the initialized flag for the
input pool was used for this purpose.  To simplify things, we no
longer use the initialized flag for that purpose, nor do we use the
entropy_total field any more.

Signed-off-by: Theodore Ts'o <tytso@mit.edu>
---
 drivers/char/random.c         | 44 +++++++++++++++++------------------
 include/trace/events/random.h | 13 ++++-------
 2 files changed, 27 insertions(+), 30 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 47ac7cd20fb1..e247c45b2772 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -470,7 +470,6 @@ struct entropy_store {
 	unsigned short add_ptr;
 	unsigned short input_rotate;
 	int entropy_count;
-	int entropy_total;
 	unsigned int initialized:1;
 	unsigned int last_data_init:1;
 	__u8 last_data[EXTRACT_SIZE];
@@ -643,7 +642,7 @@ static void process_random_ready_list(void)
  */
 static void credit_entropy_bits(struct entropy_store *r, int nbits)
 {
-	int entropy_count, orig;
+	int entropy_count, orig, has_initialized = 0;
 	const int pool_size = r->poolinfo->poolfracbits;
 	int nfrac = nbits << ENTROPY_SHIFT;
 
@@ -698,23 +697,25 @@ static void credit_entropy_bits(struct entropy_store *r, int nbits)
 		entropy_count = 0;
 	} else if (entropy_count > pool_size)
 		entropy_count = pool_size;
+	if ((r == &blocking_pool) && !r->initialized &&
+	    (entropy_count >> ENTROPY_SHIFT) > 128)
+		has_initialized = 1;
 	if (cmpxchg(&r->entropy_count, orig, entropy_count) != orig)
 		goto retry;
 
-	r->entropy_total += nbits;
-	if (!r->initialized && r->entropy_total > 128) {
+	if (has_initialized)
 		r->initialized = 1;
-		r->entropy_total = 0;
-	}
 
 	trace_credit_entropy_bits(r->name, nbits,
-				  entropy_count >> ENTROPY_SHIFT,
-				  r->entropy_total, _RET_IP_);
+				  entropy_count >> ENTROPY_SHIFT, _RET_IP_);
 
 	if (r == &input_pool) {
 		int entropy_bits = entropy_count >> ENTROPY_SHIFT;
+		struct entropy_store *other = &blocking_pool;
 
-		if (crng_init < 2 && entropy_bits >= 128) {
+		if (crng_init < 2) {
+			if (entropy_bits < 128)
+				return;
 			crng_reseed(&primary_crng, r);
 			entropy_bits = r->entropy_count >> ENTROPY_SHIFT;
 		}
@@ -725,20 +726,14 @@ static void credit_entropy_bits(struct entropy_store *r, int nbits)
 			wake_up_interruptible(&random_read_wait);
 			kill_fasync(&fasync, SIGIO, POLL_IN);
 		}
-		/* If the input pool is getting full, send some
-		 * entropy to the blocking pool until it is 75% full.
+		/* If the input pool is getting full, and the blocking
+		 * pool has room, send some entropy to the blocking
+		 * pool.
 		 */
-		if (entropy_bits > random_write_wakeup_bits &&
-		    r->initialized &&
-		    r->entropy_total >= 2*random_read_wakeup_bits) {
-			struct entropy_store *other = &blocking_pool;
-
-			if (other->entropy_count <=
-			    3 * other->poolinfo->poolfracbits / 4) {
-				schedule_work(&other->push_work);
-				r->entropy_total = 0;
-			}
-		}
+		if (!work_pending(&other->push_work) &&
+		    (ENTROPY_BITS(r) > 6 * r->poolinfo->poolbytes) &&
+		    (ENTROPY_BITS(other) <= 6 * other->poolinfo->poolbytes))
+			schedule_work(&other->push_work);
 	}
 }
 
@@ -1553,6 +1548,11 @@ static ssize_t extract_entropy_user(struct entropy_store *r, void __user *buf,
 	int large_request = (nbytes > 256);
 
 	trace_extract_entropy_user(r->name, nbytes, ENTROPY_BITS(r), _RET_IP_);
+	if (!r->initialized && r->pull) {
+		xfer_secondary_pool(r, ENTROPY_BITS(r->pull)/8);
+		if (!r->initialized)
+			return 0;
+	}
 	xfer_secondary_pool(r, nbytes);
 	nbytes = account(r, nbytes, 0, 0);
 
diff --git a/include/trace/events/random.h b/include/trace/events/random.h
index 0560dfc33f1c..32c10a515e2d 100644
--- a/include/trace/events/random.h
+++ b/include/trace/events/random.h
@@ -62,15 +62,14 @@ DEFINE_EVENT(random__mix_pool_bytes, mix_pool_bytes_nolock,
 
 TRACE_EVENT(credit_entropy_bits,
 	TP_PROTO(const char *pool_name, int bits, int entropy_count,
-		 int entropy_total, unsigned long IP),
+		 unsigned long IP),
 
-	TP_ARGS(pool_name, bits, entropy_count, entropy_total, IP),
+	TP_ARGS(pool_name, bits, entropy_count, IP),
 
 	TP_STRUCT__entry(
 		__field( const char *,	pool_name		)
 		__field(	  int,	bits			)
 		__field(	  int,	entropy_count		)
-		__field(	  int,	entropy_total		)
 		__field(unsigned long,	IP			)
 	),
 
@@ -78,14 +77,12 @@ TRACE_EVENT(credit_entropy_bits,
 		__entry->pool_name	= pool_name;
 		__entry->bits		= bits;
 		__entry->entropy_count	= entropy_count;
-		__entry->entropy_total	= entropy_total;
 		__entry->IP		= IP;
 	),
 
-	TP_printk("%s pool: bits %d entropy_count %d entropy_total %d "
-		  "caller %pS", __entry->pool_name, __entry->bits,
-		  __entry->entropy_count, __entry->entropy_total,
-		  (void *)__entry->IP)
+	TP_printk("%s pool: bits %d entropy_count %d caller %pS",
+		  __entry->pool_name, __entry->bits,
+		  __entry->entropy_count, (void *)__entry->IP)
 );
 
 TRACE_EVENT(push_to_pool,
-- 
2.19.1


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: [PATCHv5] random: Make /dev/random wait for input_pool initialized
  2019-02-21  0:32               ` [PATCHv5] random: Make /dev/random wait for input_pool initialized Theodore Y. Ts'o
@ 2019-02-21 19:24                 ` Bernd Edlinger
  2019-02-21 23:18                   ` Theodore Y. Ts'o
  0 siblings, 1 reply; 14+ messages in thread
From: Bernd Edlinger @ 2019-02-21 19:24 UTC (permalink / raw)
  To: Theodore Y. Ts'o, Arnd Bergmann, Greg Kroah-Hartman, linux-kernel

Hi Theodore,

On 2/21/19 1:32 AM, Theodore Y. Ts'o wrote:
> Hi Bernd,
> 
> I've been looking at your patch, and as far as the core part of what
> you're patching, I think this patch below is a conceptually simpler
> way of fixing the original core problem.  Please take a look and let
> me know what you think.
> 
> There are some other auxiliary things that your patch is trying to do
> where I'm not sure what you're trying to do, and I'm not at all sure
> it's correct.  Those things should really get separated out as
> separate patches.
> 

Yes, that is true.  It just grew each time I looked at the code.
The jiffies wrap-around issue in particular is an independent thing.

>> Another minor tweak this patch makes is that when the primary
>> CRNG is periodically reseeded, we reserve twice the amount
>> of read_wakeup_threshold for fairness reasons, to keep
>> /dev/random readable when it is not accessed by user mode.
> 
> I'm not sure what you're trying to do/fix here.
> 

It is the select function that is not working.

What I am trying to fix is that selecting /dev/random for
readability should guarantee that at least one process
will be able to read at least one byte out of the device,
and until that read happens, the device should stay readable.
Additionally, once the first byte is read out of /dev/random,
or /dev/random reports readable via select, that should
ensure that /dev/urandom is also fully initialized.

Additionally, I want the CRNG to be initialized as fast as
possible, and /dev/random to become usable as soon as possible
once that is done.  The re-seeding of the CRNG must also not
create race conditions which invalidate the readability
guarantee from the select.

Currently, on a low-entropy system, entropy_avail counts
from 0 to 128, at which point the CRNG initializes; then again
from 0 to 128..256, at which point the CRNG reseeds; then again
from 0 to 128..256, where the CRNG reseeds, and so on.
Each time entropy_avail is in the range 0..63,
select indicates !ready and then ready again, so it toggles
all the time.  That is mainly what I wanted to fix.

>      	  	 	   	  - Ted
> 
> From 9275727af22bc08958bad45d604038aa8e9b6766 Mon Sep 17 00:00:00 2001
> From: Theodore Ts'o <tytso@mit.edu>
> Date: Wed, 20 Feb 2019 16:06:38 -0500
> Subject: [PATCH] random: only read from /dev/random after its pool has received 128 bits
> 
> Immediately after boot, we allow reads from /dev/random before its
> entropy pool has been fully initialized.  Fix this so that we don't
> allow this until the blocking pool has received 128 bits.
> 

My observation with a previous attempt (v3) of my patch is that on a system
where only interrupt randomness is available, it typically takes 2 minutes
to accomplish the initial seeding of the CRNG.  If from there one has to
wait until there are 128 bits of entropy in the blocking pool, it will take
much too long.  The patch was actually buggy and only ensured 80 bits
of entropy in the blocking pool, but even that took 6.5 minutes, which
felt to me like an absolutely unacceptable waiting time.

Therefore I would like to keep this at the 2-3 minutes that the CRNG takes
for its initialization.  From there I thought, even if the entropy
in the input_pool drops to zero, the information content is still
there, and after adding the next 64 bits of raw entropy it would be fine
to extract 8 bytes from the input_pool and feed them to the
blocking pool; that would make 6 bytes of output, which should be
completely unpredictable.  That is the approach taken in my v5 patch.
If that is not good enough, maybe extract 128 bits from urandom
and inject that into the blocking pool without incrementing
the blocking pool's entropy count.

> We do this by repurposing the initialized flag in the entropy pool
> struct, and use the initialized flag in the blocking pool to indicate
> whether it is safe to pull from the blocking pool.
> 
> To do this, we needed to rework when we decide to push entropy from the
> input pool to the blocking pool, since the initialized flag for the
> input pool was used for this purpose.  To simplify things, we no
> longer use the initialized flag for that purpose, nor do we use the
> entropy_total field any more.
> 
> Signed-off-by: Theodore Ts'o <tytso@mit.edu>
> ---
>  drivers/char/random.c         | 44 +++++++++++++++++------------------
>  include/trace/events/random.h | 13 ++++-------
>  2 files changed, 27 insertions(+), 30 deletions(-)
> 
> diff --git a/drivers/char/random.c b/drivers/char/random.c
> index 47ac7cd20fb1..e247c45b2772 100644
> --- a/drivers/char/random.c
> +++ b/drivers/char/random.c
> @@ -470,7 +470,6 @@ struct entropy_store {
>  	unsigned short add_ptr;
>  	unsigned short input_rotate;
>  	int entropy_count;
> -	int entropy_total;
>  	unsigned int initialized:1;
>  	unsigned int last_data_init:1;
>  	__u8 last_data[EXTRACT_SIZE];
> @@ -643,7 +642,7 @@ static void process_random_ready_list(void)
>   */
>  static void credit_entropy_bits(struct entropy_store *r, int nbits)
>  {
> -	int entropy_count, orig;
> +	int entropy_count, orig, has_initialized = 0;
>  	const int pool_size = r->poolinfo->poolfracbits;
>  	int nfrac = nbits << ENTROPY_SHIFT;
>  
> @@ -698,23 +697,25 @@ static void credit_entropy_bits(struct entropy_store *r, int nbits)
>  		entropy_count = 0;
>  	} else if (entropy_count > pool_size)
>  		entropy_count = pool_size;
> +	if ((r == &blocking_pool) && !r->initialized &&
> +	    (entropy_count >> ENTROPY_SHIFT) > 128)
> +		has_initialized = 1;
>  	if (cmpxchg(&r->entropy_count, orig, entropy_count) != orig)
>  		goto retry;
>  
> -	r->entropy_total += nbits;
> -	if (!r->initialized && r->entropy_total > 128) {
> +	if (has_initialized)
>  		r->initialized = 1;
> -		r->entropy_total = 0;
> -	}
>  
>  	trace_credit_entropy_bits(r->name, nbits,
> -				  entropy_count >> ENTROPY_SHIFT,
> -				  r->entropy_total, _RET_IP_);
> +				  entropy_count >> ENTROPY_SHIFT, _RET_IP_);
>  
>  	if (r == &input_pool) {
>  		int entropy_bits = entropy_count >> ENTROPY_SHIFT;
> +		struct entropy_store *other = &blocking_pool;
>  
> -		if (crng_init < 2 && entropy_bits >= 128) {
> +		if (crng_init < 2) {
> +			if (entropy_bits < 128)
> +				return;
>  			crng_reseed(&primary_crng, r);
>  			entropy_bits = r->entropy_count >> ENTROPY_SHIFT;
>  		}
> @@ -725,20 +726,14 @@ static void credit_entropy_bits(struct entropy_store *r, int nbits)
>  			wake_up_interruptible(&random_read_wait);
>  			kill_fasync(&fasync, SIGIO, POLL_IN);
>  		}
> -		/* If the input pool is getting full, send some
> -		 * entropy to the blocking pool until it is 75% full.
> +		/* If the input pool is getting full, and the blocking
> +		 * pool has room, send some entropy to the blocking
> +		 * pool.
>  		 */
> -		if (entropy_bits > random_write_wakeup_bits &&
> -		    r->initialized &&
> -		    r->entropy_total >= 2*random_read_wakeup_bits) {

This condition (entropy_total >= 2*random_read_wakeup_bits) was not really bad,
since it prevented all arriving entropy from being transferred from the
input pool to the blocking pool.  It appeared to extract around 50%
of the entropy, leaving some for CRNG reseeding:

        if (crng_ready() &&
            (time_after(crng_global_init_time, crng->init_time) ||
             time_after(jiffies, crng->init_time + CRNG_RESEED_INTERVAL)))
                crng_reseed(crng, crng == &primary_crng ? &input_pool : NULL);

This can only be done if >= 128 bits of entropy are in the input_pool, so
extracting everything from the input pool should be avoided,
except when a user process is reading from /dev/random; that request should
of course be satisfied as fast as possible.


> -			struct entropy_store *other = &blocking_pool;
> -
> -			if (other->entropy_count <=
> -			    3 * other->poolinfo->poolfracbits / 4) {
> -				schedule_work(&other->push_work);
> -				r->entropy_total = 0;
> -			}
> -		}
> +		if (!work_pending(&other->push_work) &&
> +		    (ENTROPY_BITS(r) > 6 * r->poolinfo->poolbytes) &&
> +		    (ENTROPY_BITS(other) <= 6 * other->poolinfo->poolbytes))
> +			schedule_work(&other->push_work);

push_to_pool will transfer chunks of random_read_wakeup_bits.

I think push_to_pool should also match this change.

I like that this path is controllable via random_write_wakeup_bits;
that would be lost with this change.

>  	}
>  }
>  
> @@ -1553,6 +1548,11 @@ static ssize_t extract_entropy_user(struct entropy_store *r, void __user *buf,
>  	int large_request = (nbytes > 256);
>  
>  	trace_extract_entropy_user(r->name, nbytes, ENTROPY_BITS(r), _RET_IP_);
> +	if (!r->initialized && r->pull) {
> +		xfer_secondary_pool(r, ENTROPY_BITS(r->pull)/8);
> +		if (!r->initialized)
> +			return 0;

Did you want to do that in push_to_pool?

> +	}
>  	xfer_secondary_pool(r, nbytes);
>  	nbytes = account(r, nbytes, 0, 0);
>  


The second part of _random_read does not match this change:


                wait_event_interruptible(random_read_wait,
                        ENTROPY_BITS(&input_pool) >=
                        random_read_wakeup_bits);
                if (signal_pending(current))
                        return -ERESTARTSYS;


and will go into a busy loop when
ENTROPY_BITS(&input_pool) >= random_read_wakeup_bits, right?

The select is basically done here.  I think it should not indicate read
readiness before the pool is initialized; that needs to be changed, right?

random_poll(struct file *file, poll_table * wait)
{
        __poll_t mask;

        poll_wait(file, &random_read_wait, wait);
        poll_wait(file, &random_write_wait, wait);
        mask = 0;
        if (ENTROPY_BITS(&input_pool) >= random_read_wakeup_bits)
                mask |= EPOLLIN | EPOLLRDNORM;


That said, regarding my own patch, I think this part is most likely subject to change:

                num = extract_entropy(r, &buf, 32, 16, crng_ready()
                                      ? random_read_wakeup_bits / 4 : 0);

This makes the input_pool entropy_avail hover between 128..384 bits on a system
with very little entropy, in the normal case where random_read_wakeup_bits=64,
while the goal I want to achieve (keeping the readability condition in
random_poll true at all times) would only require

                num = extract_entropy(r, &buf, 32, 16, crng_ready()
                                      ? (random_read_wakeup_bits + 7) / 8 : 0);

The former version just leaves a few more bits in the input_pool; the
reason I did it that way is just to be simple (it saves one add), and to
ensure the ready condition in random_poll does not change due to the CRNG
reseeding.

What might also be changed is to inject random data from the CRNG, once it
is initialized, into the blocking_pool, to make that device start up faster, or
at least not much worse than before.  As I said, I did not regard
that as absolutely necessary, but I might change my mind about it.


I would be happy to hear your thoughts on how to proceed.


Bernd.

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCHv5] random: Make /dev/random wait for input_pool initialized
  2019-02-21 19:24                 ` Bernd Edlinger
@ 2019-02-21 23:18                   ` Theodore Y. Ts'o
  2019-02-22 13:45                     ` Bernd Edlinger
  0 siblings, 1 reply; 14+ messages in thread
From: Theodore Y. Ts'o @ 2019-02-21 23:18 UTC (permalink / raw)
  To: Bernd Edlinger; +Cc: Arnd Bergmann, Greg Kroah-Hartman, linux-kernel

On Thu, Feb 21, 2019 at 07:24:14PM +0000, Bernd Edlinger wrote:
> 
> My observation, with a previous attempt (v3) of my patch is that a system
> where only interrupt randomness is available it takes typically 2 minutes
> to accomplish the initial seeding of the CRNG, if from there one has to
> wait until there is 128 bit entropy in the blocking pool it will take
> much too long.  The patch was actually buggy and did only ensure 80 bits
> of entropy in the blocking pool but even that did take 6.5 minutes, which
> felt to me like an absolutely unacceptable waiting time.
> 
> Therefore I would like to keep that to 2-3 minutes that the CRNG takes
> for it's initialization.  From there I thought, even if the entroy
> in the input_pool drops to zero, the information content however is still
> there, and after adding the next 64 bits if raw entropy, it would be fine
> to extract 8 bytes form the input_pool and feed that to the
> blocking pool, that would make 6 bytes of output, which should be
> completely unpredictable, the approach taken in my v5 patch.
> If that is not good enough, maybe extract 128 bits from the urandom
> and inject that into the blocking pool without incrementing
> the blocking_pool's entropy count.

The whole premise of reading from /dev/random is that it should only
allow reads up to the available estimated entropy.  I'm assuming here
that sane users of /dev/random will be reading in chunks of at least
128 bits; reading smaller amounts (for, say, a 32-bit SPECK key) is
not something that someone paranoid enough to want to use /dev/random
is likely to do.  So what I was doing was to simply prevent any reads
from /dev/random until it had accumulated 128 bits worth of entropy.
If the user is reading 128 bits in order to generate a 128-bit session
key, this won't actually slow down /dev/random any more than it does
today.

It will slow down someone who just wants to read a byte from
/dev/random immediately after boot --- but as far as I'm concerned,
that's crazy, so I don't really care about optimizing it.  Your
suggestion of simply not allowing any reads until the CRNG is
initialized, and then injecting 128-bits into the blocking pool would
also work, but it wouldn't speed up the use case of "the user is
trying to read 128 bits from /dev/random".  It only speeds up "read 1
byte from /dev/random".

Personally, I would generally simply tell users, "use getrandom(2)
and be happy", and I consider /dev/random to be a legacy interface.
It's just that there are some crazy customers who seem to believe that
/dev/random is required for FIPS compliance.

So optimizing for users who want to read vast amount of data from
/dev/random is a non-goal as far as I am concerned.  In particular,
seeding the CRNG and keeping it properly reseeded is higher priority
as far as I'm concerned.  If that slows down /dev/random a bit,
/dev/random is *always* going to be slow.

> > -			struct entropy_store *other = &blocking_pool;
> > -
> > -			if (other->entropy_count <=
> > -			    3 * other->poolinfo->poolfracbits / 4) {
> > -				schedule_work(&other->push_work);
> > -				r->entropy_total = 0;
> > -			}
> > -		}
> > +		if (!work_pending(&other->push_work) &&
> > +		    (ENTROPY_BITS(r) > 6 * r->poolinfo->poolbytes) &&
> > +		    (ENTROPY_BITS(other) <= 6 * other->poolinfo->poolbytes))
> > +			schedule_work(&other->push_work);
> 
> push_to_pool will transfer chunks of random_read_wakeup_bits.
> 
> I think push_to_pool should also match this change.

I was trying to keep the size of this patch as small as practical,
since the primary goal was to improve the security of the bits
returned when reading 128 bits of randomness immediately after
boot.

> I like it that this path is controllable via random_write_wakeup_bits,
> that would be lost with this change.

Very few people actually use these knobs, and in fact I regret making
them available, since changing these to insane values can impact the
security properties of /dev/random.  I don't actually see a good
reason why a user *would* want to adjust the behavior of this code
path, and it makes it much simpler to reason about how this code path
works if we don't make it controllable by the user.

> > @@ -1553,6 +1548,11 @@ static ssize_t extract_entropy_user(struct entropy_store *r, void __user *buf,
> >  	int large_request = (nbytes > 256);
> >  
> >  	trace_extract_entropy_user(r->name, nbytes, ENTROPY_BITS(r), _RET_IP_);
> > +	if (!r->initialized && r->pull) {
> > +		xfer_secondary_pool(r, ENTROPY_BITS(r->pull)/8);
> > +		if (!r->initialized)
> > +			return 0;
> 
> Did you want to do that in push_to_pool?

No, this was deliberate.  The point here is that if the blocking pool
is not initialized (which is defined as having accumulated 128 bits of
entropy once), we refuse to return any entropy at all.

> The second part of the _random_read does not match this change:
> 
>                 wait_event_interruptible(random_read_wait,
>                         ENTROPY_BITS(&input_pool) >=
>                         random_read_wakeup_bits);
>                 if (signal_pending(current))
>                         return -ERESTARTSYS;
> 
> 
> and will go into a busy loop, when
> ENTROPY_BITS(&input_pool) >= random_read_wakeup_bits, right?

No, because what's being tested is the entropy estimate in the *input*
pool.  If there is entropy available, then xfer_secondary_pool() will
transfer entropy from the input pool to the blocking pool.  This will
decrease the entropy estimate for the input pool, so we won't busy
loop.  If the entropy estimate for the blocking pool increases to
above 128 bits, then the initialized flag will get set, and at that
point we will start returning random data to the user.

> The select is basically done here.  I think it should not indicate read
> readiness before the pool is initialized; that needs to be changed, right?

Yes, we should adjust random_poll so it won't report that the fd is
readable unless the blocking pool is initialized.

	 					- Ted

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCHv5] random: Make /dev/random wait for input_pool initialized
  2019-02-21 23:18                   ` Theodore Y. Ts'o
@ 2019-02-22 13:45                     ` Bernd Edlinger
  0 siblings, 0 replies; 14+ messages in thread
From: Bernd Edlinger @ 2019-02-22 13:45 UTC (permalink / raw)
  To: Theodore Y. Ts'o, Arnd Bergmann, Greg Kroah-Hartman, linux-kernel

On 2/22/19 12:18 AM, Theodore Y. Ts'o wrote:
> The whole premise of reading from /dev/random is that it should only
> allow reads up to the available estimated entropy.  I'm assuming here
> that sane users of /dev/random will be reading in chunks of at least
> 128 bits; reading smaller amounts (for, say, a 32-bit SPECK key) is
> not something that someone paranoid enough to want to use /dev/random
> is likely to do.  So what I was doing was to simply prevent any reads
> from /dev/random until it had accumulated 128 bits worth of entropy.
> If the user is reading 128 bits in order to generate a 128-bit session
> key, this won't actually slow down /dev/random any more than it does
> today.
> 
> It will slow down someone who just wants to read a byte from
> /dev/random immediately after boot --- but as far as I'm concerned,
> that's crazy, so I don't really care about optimizing it.  Your
> suggestion of simply not allowing any reads until the CRNG is
> initialized, and then injecting 128-bits into the blocking pool would
> also work, but it wouldn't speed up the use case of "the user is
> trying to read 128 bits from /dev/random".  It only speeds up "read 1
> byte from /dev/random".
> 
> Personally, I would generally simply tell users, "use getrandom(2)
> and be happy", and I consider /dev/random to be a legacy interface.
> It's just that there are some crazy customers who seem to believe that
> /dev/random is required for FIPS compliance.
> 

Sure.

> So optimizing for users who want to read vast amount of data from
> /dev/random is a non-goal as far as I am concerned.  In particular,
> seeding the CRNG and keeping it properly reseeded is higher priority
> as far as I'm concerned.  If that slows down /dev/random a bit,
> /dev/random is *always* going to be slow.
> 

There are rumors that reading one byte from /dev/random in the rcS
script makes /dev/urandom properly seeded.  That sounds logical,
but does not work as expected due to a bug that I am trying to fix.

>>> -			struct entropy_store *other = &blocking_pool;
>>> -
>>> -			if (other->entropy_count <=
>>> -			    3 * other->poolinfo->poolfracbits / 4) {
>>> -				schedule_work(&other->push_work);
>>> -				r->entropy_total = 0;
>>> -			}
>>> -		}
>>> +		if (!work_pending(&other->push_work) &&
>>> +		    (ENTROPY_BITS(r) > 6 * r->poolinfo->poolbytes) &&
>>> +		    (ENTROPY_BITS(other) <= 6 * other->poolinfo->poolbytes))
>>> +			schedule_work(&other->push_work);
>>
>> push_to_pool will transfer chunks of random_read_wakeup_bits.
>>
>> I think push_to_pool should also match this change.
> 
> I was trying to keep the size of this patch as small as practical,
> since the primary goal was to improve the security of the bits
> returned when reading 128 bits of randomness immediately after
> boot.
> 

I definitely share this desire with you.

But there are also issues with the CRNG initialization; it is very important
to me not to make those worse.

>> I like it that this path is controllable via random_write_wakeup_bits,
>> that would be lost with this change.
> 
> Very few people actually use these knobs, and in fact I regret making
> them available, since changing these to insane values can impact the
> security properties of /dev/random.  I don't actually see a good
> reason why a user *would* want to adjust the behavior of this code
> path, and it makes it much simpler to reason about how this code path
> works if we don't make it controllable by the user.
> 
>>> @@ -1553,6 +1548,11 @@ static ssize_t extract_entropy_user(struct entropy_store *r, void __user *buf,
>>>  	int large_request = (nbytes > 256);
>>>  
>>>  	trace_extract_entropy_user(r->name, nbytes, ENTROPY_BITS(r), _RET_IP_);
>>> +	if (!r->initialized && r->pull) {
>>> +		xfer_secondary_pool(r, ENTROPY_BITS(r->pull)/8);
>>> +		if (!r->initialized)
>>> +			return 0;
>>
>> Did you want to do that in push_to_pool?
> 
> No, this was deliberate.  The point here is that if the blocking pool
> is not initialized (which is defined as having accumulated 128 bits of
> entropy once), we refuse to return any entropy at all.
> 

Somehow the entropy is not returned, but it is still transferred from the
input_pool to the blocking_pool, which prevents the initialization of the CRNG.
So eventually the blocking_pool is initialized, but the CRNG is not.

See my tests below.


>> The second part of the _random_read does not match this change:
>>
>>                 wait_event_interruptible(random_read_wait,
>>                         ENTROPY_BITS(&input_pool) >=
>>                         random_read_wakeup_bits);
>>                 if (signal_pending(current))
>>                         return -ERESTARTSYS;
>>
>>
>> and will go into a busy loop, when
>> ENTROPY_BITS(&input_pool) >= random_read_wakeup_bits, right?
> 
> No, because what's being tested is the entropy estimate in the *input*
> pool.  If there is entropy available, then xfer_secondary_pool() will
> transfer entropy from the input pool to the blocking pool.  This will
> decrease the entropy estimate for the input pool, so we won't busy
> loop.  If the entropy estimate for the blocking pool increases to
> above 128 bits, then the initialized flag will get set, and at that
> point we will start returning random data to the user.
> 
>> The select is basically done here I think this should not indicate read readiness
>> before the pool is initialized that is needs to be changed, right?
> 
> Yes, we should adjust random_poll so it won't report that the fd is
> readable unless the blocking pool is initialized.
> 

Agreed, and FWIW the CRNG should never end up initializing later than
the blocking pool.

I use two small test programs to demonstrate what is currently broken.

I hope you are able to reproduce my tests.

I use only interrupt randomness, no dedicated hardware whatsoever,
and no other input events.

$ cat test1.c
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/select.h>

int main()
{
  /* non-blocking open; O_NDELAY alone implies O_RDONLY */
  int f = open("/dev/random", O_NDELAY);
  if (f<0) return 1;
  for(;;)
  {
    struct timeval tv = {1, 0};   /* poll with a 1 second timeout */
    fd_set fds;
    int x;
    FD_ZERO(&fds);
    FD_SET(f, &fds);
    /* does select() report /dev/random as readable? */
    x = select(f+1, &fds, NULL, NULL, &tv);
    if (x==1)
    {
      printf("ready\n");
      sleep(1);
    }
    else if (x==0)
    {
      printf("not ready\n");
    }
    else
    {
      printf("error\n");
    }
  }
}

$ cat test2.c
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

int main()
{
  /* non-blocking open, as in test1.c */
  int f = open("/dev/random", O_NDELAY);
  if (f<0) return 1;
  for(;;)
  {
    unsigned char buf[16];
    /* with O_NDELAY this returns -1/EAGAIN instead of blocking */
    int x = read(f, buf, sizeof(buf));
    if (x>=0)
    {
      int i;
      printf("read %d bytes: ", x);
      for (i=0; i<x; i++) printf("%02x ", buf[i]);
      printf("\n");
    }
  }
}

I build a kernel with your patch and install it.
After a fresh boot I start test1 and test2 quickly in two terminals,
and run the following command in a third shell:
while sleep 1; do cat /proc/sys/kernel/random/entropy_avail; done

At first test1 prints "not ready" while entropy_avail climbs
from 0 to 63 three times.
Then it rises from 64 to 95; during that time test1 (select) prints
"ready", but test2 (read) still prints nothing.
Then the entropy count drops to 0 again, and test2 prints:
read 16 bytes: 38 06 71 17 e6 ac 95 45 57 c4 ab 4a 16 6b 4d 7b
read 3 bytes: 8b d4 50

From then on the entropy count keeps cycling between 0 and 63, and
test2 prints for example:
read 6 bytes: e8 08 f7 80 c5 ce

The entropy count never reaches 128 bits, so the log keeps showing
random: dd: uninitialized urandom read (1 bytes read)
and "random: crng init done" never appears.


I forgot to mention one other problem in your patch:

> -		if (crng_init < 2 && entropy_bits >= 128) {
> +		if (crng_init < 2) {
> +			if (entropy_bits < 128)
> +				return;
>  			crng_reseed(&primary_crng, r);
>  			entropy_bits = r->entropy_count >> ENTROPY_SHIFT;
>  		}


This makes select even more inconsistent than before, because the test
in random_poll (which decides whether select blocks) no longer matches
the wakeup test in credit_entropy_bits (which decides when select
wakes up):

this one does not wake up while crng_init < 2 and entropy_bits < 128:
                /* should we wake readers? */
                if (entropy_bits >= random_read_wakeup_bits &&
                    wq_has_sleeper(&random_read_wait)) {
                        wake_up_interruptible(&random_read_wait);
                        kill_fasync(&fasync, SIGIO, POLL_IN);
                }

but a select will be immediately satisfied if

        if (ENTROPY_BITS(&input_pool) >= random_read_wakeup_bits)
                mask |= EPOLLIN | EPOLLRDNORM;

These two tests need to match, or select behaves erratically.

        if (crng_ready() && ENTROPY_BITS(&input_pool) >= random_read_wakeup_bits)
                mask |= EPOLLIN | EPOLLRDNORM;

would behave consistently.


Bernd.

Thread overview: 14+ messages
2019-02-15  1:47 [PATCH] random: Make /dev/random wait for crng_ready Bernd Edlinger
2019-02-15  5:40 ` Bernd Edlinger
2019-02-15 13:58 ` [PATCHv2] " Bernd Edlinger
2019-02-16 18:23   ` Theodore Y. Ts'o
2019-02-16 20:12     ` Bernd Edlinger
2019-02-17  8:44     ` [PATCHv3] " Bernd Edlinger
2019-02-17 13:48       ` Bernd Edlinger
2019-02-17 20:55         ` [PATCHv4] random: Make /dev/random wait for input_pool initialized Bernd Edlinger
2019-02-19  7:16           ` Bernd Edlinger
2019-02-19 17:09             ` [PATCHv5] " Bernd Edlinger
2019-02-21  0:32               ` [PATCHv5] random: Make /dev/random wait for input_pool initializedy Theodore Y. Ts'o
2019-02-21 19:24                 ` Bernd Edlinger
2019-02-21 23:18                   ` Theodore Y. Ts'o
2019-02-22 13:45                     ` Bernd Edlinger
