* random: /dev/random often returns short reads
@ 2017-01-16 18:50 Denys Vlasenko
  2017-01-16 18:52 ` Denys Vlasenko
  2017-01-17  4:36 ` Theodore Ts'o
  0 siblings, 2 replies; 14+ messages in thread
From: Denys Vlasenko @ 2017-01-16 18:50 UTC (permalink / raw)
  To: Linux Kernel Mailing List, Theodore Ts'o, H. Peter Anvin,
	Denys Vlasenko

Hi,

/dev/random can legitimately return short reads
when there is not enough entropy for the full request.
However, it now does so far too often,
and this appears to be a bug:

#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>

int main(int argc, char **argv)
{
        int fd, ret, len;
        char buf[16 * 1024];

        /* request size in bytes, 32 by default */
        len = argv[1] ? atoi(argv[1]) : 32;
        fd = open("/dev/random", O_RDONLY);
        if (fd < 0)
                return 1;
        ret = read(fd, buf, len);
        printf("read of %d returns %d\n", len, ret);
        /* exit nonzero on a short read, so the shell loop below stops */
        if (ret != len)
                return 1;
        return 0;
}

# gcc -Os -Wall eat_dev_random.c -o eat_dev_random

# while ./eat_dev_random; do ./eat_dev_random; done; ./eat_dev_random
read of 32 returns 32
read of 32 returns 32
read of 32 returns 28
read of 32 returns 24

Only the first two requests worked, and then ouch...

I think this is what happens here:
we transfer 32 bytes of entropy to the /dev/random pool:

static void _xfer_secondary_pool(struct entropy_store *r, size_t nbytes)
{
        __u32 tmp[OUTPUT_POOL_WORDS];
        int bytes = nbytes;

        /* pull at least as much as a wakeup */
        bytes = max_t(int, bytes, random_read_wakeup_bits / 8);
        /* but never more than the buffer size */
        bytes = min_t(int, bytes, sizeof(tmp));
        bytes = extract_entropy(r->pull, tmp, bytes,
                                random_read_wakeup_bits / 8, rsvd_bytes);
        mix_pool_bytes(r, tmp, bytes);
        credit_entropy_bits(r, bytes*8);
}


but when we enter credit_entropy_bits(), there is defensive code
which slightly underestimates the amount of entropy!
It was added by this commit:

commit 30e37ec516ae5a6957596de7661673c615c82ea4
Author: H. Peter Anvin <hpa@linux.intel.com>
Date:   Tue Sep 10 23:16:17 2013 -0400

    random: account for entropy loss due to overwrites

    When we write entropy into a non-empty pool, we currently don't
    account at all for the fact that we will probabilistically overwrite
    some of the entropy in that pool.  This means that unless the pool is
    fully empty, we are currently *guaranteed* to overestimate the amount
    of entropy in the pool!


The code looks like it effectively credits the pool with only ~3/4
of the amount, i.e. 24 bytes instead of 32.

If the /dev/random pool was empty or nearly so,
this then results in a short read.
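
To illustrate, here is a minimal stand-alone sketch of that crediting
heuristic as I read it (not the kernel code itself): it assumes a
1024-bit blocking pool and uses plain integer math instead of the
kernel's 1/8-bit fixed-point units, and it shows why a 256-bit
(32-byte) transfer into an empty pool gets credited as only ~192 bits
(24 bytes):

#include <stdio.h>

/*
 * Simplified model of the 3/4 crediting heuristic added by commit
 * 30e37ec516ae ("random: account for entropy loss due to overwrites").
 * Assumptions: a 1024-bit blocking pool, plain integers instead of the
 * kernel's fixed-point bookkeeping.
 */
#define POOL_BITS 1024

static int credit(int entropy_bits, int add_bits)
{
        /* entropy <- entropy + 3/4 * add * (pool - entropy) / pool */
        return entropy_bits +
               add_bits * 3 / 4 * (POOL_BITS - entropy_bits) / POOL_BITS;
}

int main(void)
{
        /* transfer 32 bytes (256 bits) into an empty blocking pool */
        int after = credit(0, 256);

        printf("credited %d bits (%d bytes) for a 256-bit transfer\n",
               after, after / 8);
        return 0;
}

This prints "credited 192 bits (24 bytes) for a 256-bit transfer",
which matches the 24-byte read in the demo above.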

This is wrong because _xfer_secondary_pool() could well have had
lots and lots of entropy to supply; it just did not give enough.
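
As a stopgap, userspace can of course retry: a short read from
/dev/random is not an error, so a caller that needs the full amount can
loop. A sketch of such a retry loop (the read_full() helper name is
just for illustration):

#include <unistd.h>
#include <errno.h>

/*
 * Hypothetical helper: keep calling read() until 'len' bytes have been
 * collected or a real error occurs; short reads are simply retried.
 */
static ssize_t read_full(int fd, void *buf, size_t len)
{
        size_t done = 0;

        while (done < len) {
                ssize_t n = read(fd, (char *)buf + done, len - done);

                if (n < 0) {
                        if (errno == EINTR)
                                continue;       /* interrupted, retry */
                        return -1;              /* real error */
                }
                if (n == 0)
                        break;                  /* no more data */
                done += n;
        }
        return done;
}

The test program above could call read_full(fd, buf, len) instead of a
single read(). That only hides the symptom, though; the under-crediting
described above remains.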


Thread overview: 14+ messages
2017-01-16 18:50 random: /dev/random often returns short reads Denys Vlasenko
2017-01-16 18:52 ` Denys Vlasenko
2017-01-17  4:36 ` Theodore Ts'o
2017-01-17  8:21   ` Denys Vlasenko
2017-01-17 17:15     ` Theodore Ts'o
2017-01-17 17:34       ` Denys Vlasenko
2017-01-17 22:29         ` H. Peter Anvin
2017-01-17 23:41           ` Theodore Ts'o
2017-01-18  1:54             ` H. Peter Anvin
2017-01-18 15:44           ` Denys Vlasenko
2017-01-18 18:07             ` Theodore Ts'o
2017-01-19 21:45               ` Denys Vlasenko
2017-01-20  3:17                 ` H. Peter Anvin
2017-02-15 17:55               ` Denys Vlasenko
