linux-kernel.vger.kernel.org archive mirror
* random: /dev/random often returns short reads
@ 2017-01-16 18:50 Denys Vlasenko
  2017-01-16 18:52 ` Denys Vlasenko
  2017-01-17  4:36 ` Theodore Ts'o
  0 siblings, 2 replies; 14+ messages in thread
From: Denys Vlasenko @ 2017-01-16 18:50 UTC (permalink / raw)
  To: Linux Kernel Mailing List, Theodore Ts'o, H. Peter Anvin,
	Denys Vlasenko

Hi,

/dev/random can legitimately return short reads
when there is not enough entropy for the full request.
However, now it does so far too often,
and it appears to be a bug:

#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
int main(int argc, char **argv)
{
        int fd, ret, len;
        char buf[16 * 1024];

        len = argv[1] ? atoi(argv[1]) : 32;
        fd = open("/dev/random", O_RDONLY);
        ret = read(fd, buf, len);
        printf("read of %d returns %d\n", len, ret);
        if (ret != len)
                return 1;
        return 0;
}

# gcc -Os -Wall eat_dev_random.c -o eat_dev_random

# while ./eat_dev_random; do ./eat_dev_random; done; ./eat_dev_random
read of 32 returns 32
read of 32 returns 32
read of 32 returns 28
read of 32 returns 24

Just the first two requests worked, and then ouch...

I think this is what happens here:
we transfer 4 bytes of entropy to the /dev/random pool:

_xfer_secondary_pool(struct entropy_store *r, size_t nbytes)
        int bytes = nbytes;
        /* pull at least as much as a wakeup */
        bytes = max_t(int, bytes, random_read_wakeup_bits / 8);
        /* but never more than the buffer size */
        bytes = min_t(int, bytes, sizeof(tmp));
        bytes = extract_entropy(r->pull, tmp, bytes,
                                random_read_wakeup_bits / 8, rsvd_bytes);
        mix_pool_bytes(r, tmp, bytes);
        credit_entropy_bits(r, bytes*8);


but when we enter credit_entropy_bits(), there is a defensive code
which slightly underestimates the amount of entropy!
It was added by this commit:

commit 30e37ec516ae5a6957596de7661673c615c82ea4
Author: H. Peter Anvin <hpa@linux.intel.com>
Date:   Tue Sep 10 23:16:17 2013 -0400

    random: account for entropy loss due to overwrites

    When we write entropy into a non-empty pool, we currently don't
    account at all for the fact that we will probabilistically overwrite
    some of the entropy in that pool.  This means that unless the pool is
    fully empty, we are currently *guaranteed* to overestimate the amount
    of entropy in the pool!


The code looks like it effectively credits the pool only for ~3/4
of the amount, i.e. 24 bytes, not 32.

If the /dev/random pool was empty or nearly so, this then
results in a short read.

This is wrong because _xfer_secondary_pool() could well have had
lots and lots of entropy to supply; it just did not give enough.

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: random: /dev/random often returns short reads
  2017-01-16 18:50 random: /dev/random often returns short reads Denys Vlasenko
@ 2017-01-16 18:52 ` Denys Vlasenko
  2017-01-17  4:36 ` Theodore Ts'o
  1 sibling, 0 replies; 14+ messages in thread
From: Denys Vlasenko @ 2017-01-16 18:52 UTC (permalink / raw)
  To: Linux Kernel Mailing List, Theodore Ts'o, H. Peter Anvin,
	Denys Vlasenko

> # while ./eat_dev_random; do ./eat_dev_random; done; ./eat_dev_random
> read of 32 returns 32
> read of 32 returns 32
> read of 32 returns 28
> read of 32 returns 24
>
> Just the first two requests worked, and then ouch...
>
> I think this is what happens here:
> we transfer 4 bytes of entropy to the /dev/random pool:


erm... 32 bytes, of course.


* Re: random: /dev/random often returns short reads
  2017-01-16 18:50 random: /dev/random often returns short reads Denys Vlasenko
  2017-01-16 18:52 ` Denys Vlasenko
@ 2017-01-17  4:36 ` Theodore Ts'o
  2017-01-17  8:21   ` Denys Vlasenko
  1 sibling, 1 reply; 14+ messages in thread
From: Theodore Ts'o @ 2017-01-17  4:36 UTC (permalink / raw)
  To: Denys Vlasenko; +Cc: Linux Kernel Mailing List, H. Peter Anvin, Denys Vlasenko

On Mon, Jan 16, 2017 at 07:50:55PM +0100, Denys Vlasenko wrote:
> 
> /dev/random can legitimately return short reads
> when there is not enough entropy for the full request.

Yes, but callers of /dev/random should be able to handle short reads.
So it's a bug in the application as well.

You have correctly identified the commit which introduced the
problem, but it's been in the tree for three years with very few
people complaining.

As near as I can tell, the (few) people who are complaining are those
who are using haveged to add pretend entropy, and who are then
reconfiguring OpenSSL to use /dev/random in a misguided attempt to
get FIPS certification; for some reason it's OK to use pretend
entropy from haveged, and to modify OpenSSL to use /dev/random,
but it's not OK to modify OpenSSL to retry short reads.

> The code looks like it effectively credits the pool only for ~3/4
> of the amount, i.e. 24 bytes, not 32.

How much it credits the pool varies depending on how many bits of
entropy are being transferred and how full the pool happens to be
beforehand.  Reversing the calculation so that we transfer exactly the
right number of bits is tricky, and if we transfer too many bits, we
risk "wasting" entropy bits.  Of course, it doesn't matter if we're
transferring pretend entropy only for the purposes of getting FIPS
certification, but getting it Right(tm) is non-trivial.

If someone wants to send me a patch, I'll happily take a look at it,
but given that fixing userspace is something you really should do
anyway, it's not high on my priority list to look at and fix
myself.

						- Ted


* Re: random: /dev/random often returns short reads
  2017-01-17  4:36 ` Theodore Ts'o
@ 2017-01-17  8:21   ` Denys Vlasenko
  2017-01-17 17:15     ` Theodore Ts'o
  0 siblings, 1 reply; 14+ messages in thread
From: Denys Vlasenko @ 2017-01-17  8:21 UTC (permalink / raw)
  To: Theodore Ts'o, Denys Vlasenko, Linux Kernel Mailing List,
	H. Peter Anvin, Denys Vlasenko

On Tue, Jan 17, 2017 at 5:36 AM, Theodore Ts'o <tytso@mit.edu> wrote:
> On Mon, Jan 16, 2017 at 07:50:55PM +0100, Denys Vlasenko wrote:
>>
>> /dev/random can legitimately return short reads
>> when there is not enough entropy for the full request.
>
> Yes, but callers of /dev/random should be able to handle short reads.
> So it's a bug in the application as well.

I absolutely agree, whoever stumbled over it has a bug
in their app.

>> The code looks like it effectively credits the pool only for ~3/4
>> of the amount, i.e. 24 bytes, not 32.
>
> How much it credits the pool varies depending on how many bits of
> entropy are being transferred and how full the pool happens to be
> beforehand.

I think the problem is that even if the target pool has no entropy
at all, current algorithm thinks that transferring N random bytes
to it gives it N*3/4 bytes of randomness.

> Reversing the calculation so that we transfer exactly the
> right number of bits is tricky, and if we transfer too many bits, we
> risk "wasting" entropy bits.  Of course, it doesn't matter if we're
> transferring pretend entropy only for the purposes of getting FIPS
> certification, but getting it Right(tm) is non-trivial.
>
> If someone wants to send me a patch, I'll happily take a look at it,

Will something along these lines be accepted?

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -653,6 +653,9 @@ static void credit_entropy_bits(struct entropy_store *r, int nbits)
        if (nfrac < 0) {
                /* Debit */
                entropy_count += nfrac;
+       } else if (entropy_count == 0) {
+               /* Credit, and the pool is empty */
+               entropy_count += nfrac;
        } else {
                /*
                 * Credit: we have to account for the possibility of
                 * overwriting already present entropy.  Even in the


> but given that fixing userspace is something you really should do
> anyway

I agree. It's just not in my (or my company's, IIUC) userspace code.
I wouldn't even know about this thing since *my* programs do handle
short reads correctly.


* Re: random: /dev/random often returns short reads
  2017-01-17  8:21   ` Denys Vlasenko
@ 2017-01-17 17:15     ` Theodore Ts'o
  2017-01-17 17:34       ` Denys Vlasenko
  0 siblings, 1 reply; 14+ messages in thread
From: Theodore Ts'o @ 2017-01-17 17:15 UTC (permalink / raw)
  To: Denys Vlasenko; +Cc: Linux Kernel Mailing List, H. Peter Anvin, Denys Vlasenko

On Tue, Jan 17, 2017 at 09:21:31AM +0100, Denys Vlasenko wrote:
> > If someone wants to send me a patch, I'll happily take a look at it,
> 
> Will something along these lines be accepted?

The problem is that this won't work.  In the cases that we're talking
about, where the entropy counter in the secondary pool is not zero
but close to zero, we'll still have short reads.  And that's going to
happen a fair amount of the time.

Perhaps the best *hacky* solution would be to say, ok if the entropy
count is less than some threshold, don't use the correct entropy
calculation, but rather assume that all of the new bits won't land on
top of existing entropy bits.  It undoes some of Peter's very careful
and accurate calculations (so I'd like Peter's thoughts), but in
reality, very few people use /dev/random these days except for GPG and
people hacking OpenSSL for FIPS certification so they can feed at
the US Federal Government's trough.

I'm not sure I care *that* much, but then I think FIPS certification
is a complete waste of Taxpayer dollars, so it's not something I care
a whole lot about fixing, either.  :-)

						- Ted


* Re: random: /dev/random often returns short reads
  2017-01-17 17:15     ` Theodore Ts'o
@ 2017-01-17 17:34       ` Denys Vlasenko
  2017-01-17 22:29         ` H. Peter Anvin
  0 siblings, 1 reply; 14+ messages in thread
From: Denys Vlasenko @ 2017-01-17 17:34 UTC (permalink / raw)
  To: Theodore Ts'o, Denys Vlasenko, Linux Kernel Mailing List,
	H. Peter Anvin



On 01/17/2017 06:15 PM, Theodore Ts'o wrote:
> On Tue, Jan 17, 2017 at 09:21:31AM +0100, Denys Vlasenko wrote:
>>> If someone wants to send me a patch, I'll happily take a look at it,
>>
>> Will something along these lines be accepted?
>
> The problem is that this won't work.  In the cases that we're talking
> about, where the entropy counter in the secondary pool is not zero
> but close to zero, we'll still have short reads.  And that's going to
> happen a fair amount of the time.
>
> Perhaps the best *hacky* solution would be to say, ok if the entropy
> count is less than some threshold, don't use the correct entropy
> calculation, but rather assume that all of the new bits won't land on
> top of existing entropy bits.

IOW, something like this:

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -653,6 +653,9 @@ static void credit_entropy_bits(struct entropy_store *r, int nbits)
         if (nfrac < 0) {
                 /* Debit */
                 entropy_count += nfrac;
+       } else if (entropy_count < ((8 * 8) << ENTROPY_SHIFT)) {
+               /* Credit, and the pool is almost empty */
+               entropy_count += nfrac;
         } else {
                 /*
                  * Credit: we have to account for the possibility of
                  * overwriting already present entropy.  Even in the

Want the patch? If yes, what name do you prefer for the constant? How about

/* Has less than 8 bytes */
#define ALMOST_EMPTY_POOL_frac   ((8 * 8) << ENTROPY_SHIFT)


* Re: random: /dev/random often returns short reads
  2017-01-17 17:34       ` Denys Vlasenko
@ 2017-01-17 22:29         ` H. Peter Anvin
  2017-01-17 23:41           ` Theodore Ts'o
  2017-01-18 15:44           ` Denys Vlasenko
  0 siblings, 2 replies; 14+ messages in thread
From: H. Peter Anvin @ 2017-01-17 22:29 UTC (permalink / raw)
  To: Denys Vlasenko, Theodore Ts'o, Denys Vlasenko,
	Linux Kernel Mailing List

[-- Attachment #1: Type: text/plain, Size: 2987 bytes --]

On 01/17/17 09:34, Denys Vlasenko wrote:
> 
> 
> On 01/17/2017 06:15 PM, Theodore Ts'o wrote:
>> On Tue, Jan 17, 2017 at 09:21:31AM +0100, Denys Vlasenko wrote:
>>>> If someone wants to send me a patch, I'll happily take a look at it,
>>>
>>> Will something along these lines be accepted?
>>
>> The problem is that this won't work.  In the cases that we're talking
>> about, where the entropy counter in the secondary pool is not zero
>> but close to zero, we'll still have short reads.  And that's going to
>> happen a fair amount of the time.
>>
>> Perhaps the best *hacky* solution would be to say, ok if the entropy
>> count is less than some threshold, don't use the correct entropy
>> calculation, but rather assume that all of the new bits won't land on
>> top of existing entropy bits.
> 
> IOW, something like this:
> 
> --- a/drivers/char/random.c
> +++ b/drivers/char/random.c
> @@ -653,6 +653,9 @@ static void credit_entropy_bits(struct entropy_store *r, int nbits)
>         if (nfrac < 0) {
>                 /* Debit */
>                 entropy_count += nfrac;
> +       } else if (entropy_count < ((8 * 8) << ENTROPY_SHIFT)) {
> +               /* Credit, and the pool is almost empty */
> +               entropy_count += nfrac;
>         } else {
>                 /*
>                  * Credit: we have to account for the possibility of
>                  * overwriting already present entropy.  Even in the
> 
> Want the patch? If yes, what name do you prefer for the constant? How about
> 

This seems very wrong.  The whole point is that we keep it conservative
-- always less than or equal to the correct number.  You could derate
the value based on the top part of the threshold using a more
conservative constant (using smaller fill steps) than the 3/4 used in
the current derating algorithm, but first of all, you would only recover
<= 1/4 of the credit in the first place, so it is questionable whether it
really buys you all that much.

I really, really would hate to see something that introduces an active
error to cope with a broken application somewhere.

> On Mon, Jan 16, 2017 at 07:50:55PM +0100, Denys Vlasenko wrote:
>>
>> /dev/random can legitimately return short reads
>> when there is not enough entropy for the full request.
> 
> Yes, but callers of /dev/random should be able to handle short reads.
> So it's a bug in the application as well.

It's not a bug in the application "as well", it is a bug in the
application, *period*.  There are a number of other conditions which
could cause this exact effect.

If there is a real need to hack around this, then I would instead
suggest modifying random_read() to block rather than return if the user
requests below a certain value, O_NONBLOCK is not set, and the whole
request cannot be fulfilled.  It probably needs to be a sysctl
configurable, though, and most likely defaulting to 1, as it could just
as easily break properly functioning applications.

A *completely* untested patch attached...

	-hpa


[-- Attachment #2: diff --]
[-- Type: text/plain, Size: 2513 bytes --]

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 1ef2640..618ca9b 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -320,6 +320,13 @@ static int random_write_wakeup_bits = 28 * OUTPUT_POOL_WORDS;
 static int random_min_urandom_seed = 60;
 
 /*
+ * If /dev/random can't fulfil a request, block unless we can return
+ * this many bytes.  If O_NONBLOCK is set, we always return,
+ * unconditionally.
+ */
+static int random_min_return = 1;
+
+/*
  * Originally, we used a primitive polynomial of degree .poolwords
  * over GF(2).  The taps for various sizes are defined below.  They
  * were chosen to be evenly spaced except for the last tap, which is 1
@@ -1702,24 +1709,26 @@ static ssize_t
 _random_read(int nonblock, char __user *buf, size_t nbytes)
 {
 	ssize_t n;
+	size_t done = 0;
 
 	if (nbytes == 0)
 		return 0;
 
 	nbytes = min_t(size_t, nbytes, SEC_XFER_SIZE);
-	while (1) {
-		n = extract_entropy_user(&blocking_pool, buf, nbytes);
+	while (done < nbytes) {
+		n = extract_entropy_user(&blocking_pool, buf, nbytes-done);
 		if (n < 0)
-			return n;
+			return done ? done : n;
 		trace_random_read(n*8, (nbytes-n)*8,
 				  ENTROPY_BITS(&blocking_pool),
 				  ENTROPY_BITS(&input_pool));
-		if (n > 0)
-			return n;
+		done += n;
+		if (done >= min_t(size_t, nbytes, random_min_return))
+			break;
 
 		/* Pool is (near) empty.  Maybe wait and retry. */
 		if (nonblock)
-			return -EAGAIN;
+			return done ? done : -EAGAIN;
 
 		wait_event_interruptible(random_read_wait,
 			ENTROPY_BITS(&input_pool) >=
@@ -1727,6 +1736,7 @@ _random_read(int nonblock, char __user *buf, size_t nbytes)
 		if (signal_pending(current))
 			return -ERESTARTSYS;
 	}
+	return done;
 }
 
 static ssize_t
@@ -1909,6 +1919,8 @@ SYSCALL_DEFINE3(getrandom, char __user *, buf, size_t, count,
 
 #include <linux/sysctl.h>
 
+static int min_random_min_read = 1;
+static int max_random_min_read = SEC_XFER_SIZE;
 static int min_read_thresh = 8, min_write_thresh;
 static int max_read_thresh = OUTPUT_POOL_WORDS * 32;
 static int max_write_thresh = INPUT_POOL_WORDS * 32;
@@ -2022,6 +2034,15 @@ struct ctl_table random_table[] = {
 		.mode		= 0444,
 		.proc_handler	= proc_do_uuid,
 	},
+	{
+		.procname	= "random_min_return",
+		.data		= &random_min_return,
+		.maxlen		= sizeof(int),
+		.mode		= 0644,
+		.proc_handler	= proc_dointvec_minmax,
+		.extra1		= &min_random_min_read,
+		.extra2		= &max_random_min_read,
+	},
 #ifdef ADD_INTERRUPT_BENCH
 	{
 		.procname	= "add_interrupt_avg_cycles",


* Re: random: /dev/random often returns short reads
  2017-01-17 22:29         ` H. Peter Anvin
@ 2017-01-17 23:41           ` Theodore Ts'o
  2017-01-18  1:54             ` H. Peter Anvin
  2017-01-18 15:44           ` Denys Vlasenko
  1 sibling, 1 reply; 14+ messages in thread
From: Theodore Ts'o @ 2017-01-17 23:41 UTC (permalink / raw)
  To: H. Peter Anvin; +Cc: Denys Vlasenko, Denys Vlasenko, Linux Kernel Mailing List

On Tue, Jan 17, 2017 at 02:29:30PM -0800, H. Peter Anvin wrote:
> If there is a real need to hack around this, then I would instead
> suggest modifying random_read() to block rather than return if the user
> requests below a certain value, O_NONBLOCK is not set, and the whole
> request cannot be fulfilled.  It probably needs to be a sysctl
> configurable, though, and most likely defaulting to 1, as it could just
> as easily break properly functioning applications.

Ugh.  This seems horribly complicated.  If we _really_ need to give
aid and comfort to people trying to do pointless FIPS certification
workarounds (as opposed to closing bugzilla complaints with "working
as intended"), how about this?

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 7f0622426b97..d35281492e04 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1460,7 +1460,13 @@ static ssize_t extract_entropy_user(struct entropy_store *r, void __user *buf,
 	int large_request = (nbytes > 256);
 
 	trace_extract_entropy_user(r->name, nbytes, ENTROPY_BITS(r), _RET_IP_);
-	xfer_secondary_pool(r, nbytes);
+	if (r->entropy_count < (nbytes << (ENTROPY_SHIFT + 3))) {
+		int hack_xfer_size = nbytes;
+
+		if (3 * r->entropy_count < r->poolinfo->poolfracbits)
+			hack_xfer_size *= 2;
+		_xfer_secondary_pool(r, hack_xfer_size);
+	}
 	nbytes = account(r, nbytes, 0, 0);
 
 	while (nbytes) {

					- Ted


* Re: random: /dev/random often returns short reads
  2017-01-17 23:41           ` Theodore Ts'o
@ 2017-01-18  1:54             ` H. Peter Anvin
  0 siblings, 0 replies; 14+ messages in thread
From: H. Peter Anvin @ 2017-01-18  1:54 UTC (permalink / raw)
  To: Theodore Ts'o, Denys Vlasenko, Denys Vlasenko,
	Linux Kernel Mailing List

On 01/17/17 15:41, Theodore Ts'o wrote:
> On Tue, Jan 17, 2017 at 02:29:30PM -0800, H. Peter Anvin wrote:
>> If there is a real need to hack around this, then I would instead
>> suggest modifying random_read() to block rather than return if the user
>> requests below a certain value, O_NONBLOCK is not set, and the whole
>> request cannot be fulfilled.  It probably needs to be a sysctl
>> configurable, though, and most likely defaulting to 1, as it could just
>> as easily break properly functioning applications.
> 
> Ugh.  This seems horribly complicated.  If we _really_ need to give
> aid and comfort to people trying to do pointless FIPS certification
> workarounds (as opposed to closing bugzilla complaints with "working
> as intended"), how about this?

Personally I'm fine with your parenthesized solution, and we can always
tell them that the workaround for their broken app is to mount
/dev/urandom over /dev/random until they have fixed their software.  ;)

	-hpa


* Re: random: /dev/random often returns short reads
  2017-01-17 22:29         ` H. Peter Anvin
  2017-01-17 23:41           ` Theodore Ts'o
@ 2017-01-18 15:44           ` Denys Vlasenko
  2017-01-18 18:07             ` Theodore Ts'o
  1 sibling, 1 reply; 14+ messages in thread
From: Denys Vlasenko @ 2017-01-18 15:44 UTC (permalink / raw)
  To: H. Peter Anvin, Theodore Ts'o, Denys Vlasenko,
	Linux Kernel Mailing List

On 01/17/2017 11:29 PM, H. Peter Anvin wrote:
> On 01/17/17 09:34, Denys Vlasenko wrote:
>>
>>
>> On 01/17/2017 06:15 PM, Theodore Ts'o wrote:
>>> On Tue, Jan 17, 2017 at 09:21:31AM +0100, Denys Vlasenko wrote:
>>>>> If someone wants to send me a patch, I'll happily take a look at it,
>>>>
>>>> Will something along these lines be accepted?
>>>
>>> The problem is that this won't work.  In the cases that we're talking
>>> about, where the entropy counter in the secondary pool is not zero
>>> but close to zero, we'll still have short reads.  And that's going to
>>> happen a fair amount of the time.
>>>
>>> Perhaps the best *hacky* solution would be to say, ok if the entropy
>>> count is less than some threshold, don't use the correct entropy
>>> calculation, but rather assume that all of the new bits won't land on
>>> top of existing entropy bits.
>>
>> IOW, something like this:
>>
>> --- a/drivers/char/random.c
>> +++ b/drivers/char/random.c
>> @@ -653,6 +653,9 @@ static void credit_entropy_bits(struct entropy_store *r, int nbits)
>>         if (nfrac < 0) {
>>                 /* Debit */
>>                 entropy_count += nfrac;
>> +       } else if (entropy_count < ((8 * 8) << ENTROPY_SHIFT)) {
>> +               /* Credit, and the pool is almost empty */
>> +               entropy_count += nfrac;
>>         } else {
>>                 /*
>>                  * Credit: we have to account for the possibility of
>>                  * overwriting already present entropy.  Even in the
>>
>> Want the patch? If yes, what name do you prefer for the constant? How about
>>
>
> This seems very wrong.  The whole point is that we keep it conservative
> -- always less than or equal to the correct number.

In this case, what the code does is return fewer bytes,
even though *it has enough random bytes to return the full request*.

This happens because the patch which added more conservative
accounting, while technically correct per se, forgot to take
into account another part of the code which was relying on the
previous, simplistic logic
"if we add N random bytes to a pool, it now has +N random bytes".

In my opinion, it should have changed that part of the code
simultaneously with introducing the more conservative accounting.


* Re: random: /dev/random often returns short reads
  2017-01-18 15:44           ` Denys Vlasenko
@ 2017-01-18 18:07             ` Theodore Ts'o
  2017-01-19 21:45               ` Denys Vlasenko
  2017-02-15 17:55               ` Denys Vlasenko
  0 siblings, 2 replies; 14+ messages in thread
From: Theodore Ts'o @ 2017-01-18 18:07 UTC (permalink / raw)
  To: Denys Vlasenko; +Cc: H. Peter Anvin, Denys Vlasenko, Linux Kernel Mailing List

On Wed, Jan 18, 2017 at 04:44:54PM +0100, Denys Vlasenko wrote:
> In this case, what the code does is return fewer bytes,
> even though *it has enough random bytes to return the full request*.
> 
> This happens because the patch which added more conservative
> accounting, while technically correct per se, forgot to take
> into account another part of the code which was relying on the
> previous, simplistic logic
> "if we add N random bytes to a pool, it now has +N random bytes".
> 
> In my opinion, it should have changed that part of the code
> simultaneously with introducing the more conservative accounting.

In the ideal world, yes.  I've acknowledged this is a bug, in the "be
conservative in what you send, liberal in what you receive" sense.
But no one complained for three years, and userspace needs to be able
to retry short reads instead of immediately erroring out.

The problem is changing that code to figure out exactly how many bytes
you need to get in order to have N random bytes is non-trivial.  So
our choices are:

1) Transfer more bytes than might be needed to the secondary pool,
which results in resource stranding --- since entropy in the secondary
pool isn't available for reseeding the CRNG.  OTOH, given that we're
now using the CRNG solution, and we're only reseeding every five
minutes, I'm not actually all that worried about stranding some extra
entropy bits in the blocking pool, since that's only going to happen
if we have people *using* the /dev/random pool, and so that entropy
will likely be used eventually anyway.

2) Transfer bytes without using the conservative entropy "overwrite"
calculation if the blocking pool is mostly empty.  This means we will
be over-estimating the entropy in that pool, which is unfortunate.
One could argue that in all of the cases where people have been
whining about this, they are using HAVEGED, which is providing
pretend entropy based on the presumed unpredictability of Intel
cache timing, so careful entropy calculations are kind of silly
anyway.  However, there might be some people who really are trying
to do careful entropy measurement, so compromising this isn't
really a great idea.

3) Make reads of /dev/random block, or have it try multiple times to
call extract_entropy() and transfer secondary entropy.  This will
work, but it is less efficient, and it makes the code a bit uglier.

4) Close the bug as WAI, and tell Red Hat's paying customers to go
pound sand, since they should be fixing their userspace; the notion
that changing OpenSSL to use /dev/random is fine, but fixing the
damned bug in OpenSSL is not, is crazy FIPS certification talk.

I'm leaning a bit towards 1 if we have to do something (which is my
proposed, untested patch).

						- Ted


* Re: random: /dev/random often returns short reads
  2017-01-18 18:07             ` Theodore Ts'o
@ 2017-01-19 21:45               ` Denys Vlasenko
  2017-01-20  3:17                 ` H. Peter Anvin
  2017-02-15 17:55               ` Denys Vlasenko
  1 sibling, 1 reply; 14+ messages in thread
From: Denys Vlasenko @ 2017-01-19 21:45 UTC (permalink / raw)
  To: Theodore Ts'o, Denys Vlasenko, H. Peter Anvin,
	Denys Vlasenko, Linux Kernel Mailing List

On Wed, Jan 18, 2017 at 7:07 PM, Theodore Ts'o <tytso@mit.edu> wrote:
> In the ideal world, yes.  I've acknowledged this is a bug, in the "be
> conservative in what you send, liberal in what you receive" sense.
> But no one complained for three years, and userspace needs to be able
> to retry short reads instead of immediately erroring out.
>
> The problem is changing that code to figure out exactly how many bytes
> you need to get in order to have N random bytes is non-trivial.  So
> our choices are:
>
> 1) Transfer more bytes than might be needed to the secondary pool,
> which results in resource stranding --- since entropy in the secondary
> pool isn't available for reseeding the CRNG.  OTOH, given that we're
> now using the CRNG solution, and we're only reseeding every five
> minutes, I'm not actually all that worried about stranding some extra
> entropy bits in the blocking pool, since that's only going to happen
> if we have people *using* the /dev/random pool, and so that entropy
> will likely be used eventually anyway
...
...
> I'm leaning a bit towards 1 if we have to do something (which is my
> proposed, untested patch).

Thanks, this solution is okay for me.


* Re: random: /dev/random often returns short reads
  2017-01-19 21:45               ` Denys Vlasenko
@ 2017-01-20  3:17                 ` H. Peter Anvin
  0 siblings, 0 replies; 14+ messages in thread
From: H. Peter Anvin @ 2017-01-20  3:17 UTC (permalink / raw)
  To: Denys Vlasenko, Theodore Ts'o, Denys Vlasenko,
	Linux Kernel Mailing List

On 01/19/17 13:45, Denys Vlasenko wrote:
> On Wed, Jan 18, 2017 at 7:07 PM, Theodore Ts'o <tytso@mit.edu> wrote:
>> In the ideal world, yes.  I've acknowledged this is a bug, in the "be
>> conservative in what you send, liberal in what you receive" sense.
>> But no one complained for three years, and userspace needs to be able
>> to retry short reads instead of immediately erroring out.
>>
>> The problem is changing that code to figure out exactly how many bytes
>> you need to get in order to have N random bytes is non-trivial.  So
>> our choices are:
>>
>> 1) Transfer more bytes than might be needed to the secondary pool,
>> which results in resource stranding --- since entropy in the secondary
>> pool isn't available for reseeding the CRNG.  OTOH, given that we're
>> now using the CRNG solution, and we're only reseeding every five
>> minutes, I'm not actually all that worried about stranding some extra
>> entropy bits in the blocking pool, since that's only going to happen
>> if we have people *using* the /dev/random pool, and so that entropy
>> will likely be used eventually anyway
> ...
> ...
>> I'm leaning a bit towards 1 if we have to do something (which is my
>> proposed, untested patch).
> 
> Thanks, this solution is okay for me.
> 

That seems to make sense to me as well.

	-hpa


* Re: random: /dev/random often returns short reads
  2017-01-18 18:07             ` Theodore Ts'o
  2017-01-19 21:45               ` Denys Vlasenko
@ 2017-02-15 17:55               ` Denys Vlasenko
  1 sibling, 0 replies; 14+ messages in thread
From: Denys Vlasenko @ 2017-02-15 17:55 UTC (permalink / raw)
  To: Theodore Ts'o, Denys Vlasenko, H. Peter Anvin,
	Denys Vlasenko, Linux Kernel Mailing List

On Wed, Jan 18, 2017 at 7:07 PM, Theodore Ts'o <tytso@mit.edu> wrote:
> On Wed, Jan 18, 2017 at 04:44:54PM +0100, Denys Vlasenko wrote:
>> In this case, what the code does is return fewer bytes,
>> even though *it has enough random bytes to return the full request*.
>>
>> This happens because the patch which added more conservative
>> accounting, while technically correct per se, forgot to take
>> into account another part of the code which was relying on the
>> previous, simplistic logic
>> "if we add N random bytes to a pool, it now has +N random bytes".
>>
>> In my opinion, it should have changed that part of the code
>> simultaneously with introducing the more conservative accounting.
>
> In the ideal world, yes.  I've acknowledged this is a bug, in the "be
> conservative in what you send, liberal in what you receive" sense.
> But no one complained for three years, and userspace needs to be able
> to retry short reads instead of immediately erroring out.
>
> The problem is changing that code to figure out exactly how many bytes
> you need to get in order to have N random bytes is non-trivial.  So
> our choices are:
>
> 1) Transfer more bytes than might be needed to the secondary pool
...
> 2) Transfer bytes without using the conservative entropy "overwrite"
> calculation if the blocking pool is mostly empty.  This means we will
> be over-estimating the entropy in that pool, which is unfortunate.
> One could argue that in all of the cases where people have been
> whining about this, they are using HAVEGED, which is providing
> pretend entropy based on the presumed unpredictability of Intel
> cache timing, so careful entropy calculations are kind of silly
> anyway.  However, there might be some people who really are trying
> to do careful entropy measurement, so compromising this isn't
> really a great idea.
>
> I'm leaning a bit towards 1 if we have to do something (which is my
> proposed, untested patch).

I spent quite some time looking both at your patch which implements #1
and at commit 30e37ec516ae5a6957596de7661673c615c82ea4, which
introduced the "conservative accounting" code, and the same thought
keeps returning to me: this complexity and these problems are
self-inflicted by commit 30e37ec516ae5a6957596de7661673c615c82ea4.

The code is already conservative elsewhere; adding more conservative
code (and more complex code, especially considering that it should
be even more complex than it is, since it should be further fixed up
at the "xfer_secondary_pool(r, nbytes)" call site) does not look
like a worthy improvement to me.

I propose to simply revert 30e37ec516ae5a6957596de7661673c615c82ea4.

If you worry about this accounting more than I do,
how about dealing with it in a simpler way, a-la

-       xfer_secondary_pool(r, nbytes);
+       xfer_secondary_pool(r, nbytes * 5/4); /* be a bit paranoid */

