linux-kernel.vger.kernel.org archive mirror
* Reading large amounts from /dev/urandom broken
@ 2014-07-23 13:52 Andrey Utkin
  2014-07-23 14:32 ` Hannes Frederic Sowa
  2014-07-23 15:14 ` Theodore Ts'o
  0 siblings, 2 replies; 8+ messages in thread
From: Andrey Utkin @ 2014-07-23 13:52 UTC (permalink / raw)
  To: tytso, hannes; +Cc: linux-kernel

Dear developers, please check bugzilla ticket
https://bugzilla.kernel.org/show_bug.cgi?id=80981 (not the initial
issue, but starting with comment #3).

Reading from /dev/urandom gives EOF after 33554431 bytes.  I believe
it was introduced by commit 79a8468747c5f95ed3d5ce8376a3e82e0c5857fc,
with the chunk

nbytes = min_t(size_t, nbytes, INT_MAX >> (ENTROPY_SHIFT + 3));

which is described in commit message as "additional paranoia check to
prevent overly large count values to be passed into urandom_read()".
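
For reference, assuming ENTROPY_SHIFT == 3 (its value in
drivers/char/random.c at the time), the clamp works out to:

	INT_MAX >> (ENTROPY_SHIFT + 3) == 2147483647 >> 6 == 33554431

i.e. one byte short of 32 MiB, which matches the observed cutoff.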

I don't know why people pull such large amounts of data from urandom,
but given that two bug reports about problems doing that have come in
today, I consider it something people actually do.
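
Here is a minimal reproducer sketch (an illustration, not the code
from the ticket): request more than 32 MB from /dev/urandom in a
single read() and watch it come back short.

	#include <fcntl.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <unistd.h>

	int main(void)
	{
		size_t want = 64 * 1024 * 1024;	/* well above the clamp */
		char *buf = malloc(want);
		int fd = open("/dev/urandom", O_RDONLY);

		if (!buf || fd < 0) {
			perror("setup");
			return 1;
		}
		/* On affected kernels this returns 33554431, not 67108864. */
		ssize_t got = read(fd, buf, want);
		printf("requested %zu, got %zd\n", want, got);
		close(fd);
		free(buf);
		return 0;
	}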

-- 
Andrey Utkin


* Re: Reading large amounts from /dev/urandom broken
  2014-07-23 13:52 Reading large amounts from /dev/urandom broken Andrey Utkin
@ 2014-07-23 14:32 ` Hannes Frederic Sowa
  2014-07-23 15:14 ` Theodore Ts'o
  1 sibling, 0 replies; 8+ messages in thread
From: Hannes Frederic Sowa @ 2014-07-23 14:32 UTC (permalink / raw)
  To: Andrey Utkin; +Cc: tytso, linux-kernel

Hi Andrey,

Thanks for the heads-up!

On Wed, 2014-07-23 at 16:52 +0300, Andrey Utkin wrote:
> Dear developers, please check bugzilla ticket
> https://bugzilla.kernel.org/show_bug.cgi?id=80981 (not the initial
> issue, but starting with comment #3).
> 
> Reading from /dev/urandom gives EOF after 33554431 bytes.  I believe
> it was introduced by commit 79a8468747c5f95ed3d5ce8376a3e82e0c5857fc,
> with the chunk
> 
> nbytes = min_t(size_t, nbytes, INT_MAX >> (ENTROPY_SHIFT + 3));
> 
> which is described in commit message as "additional paranoia check to
> prevent overly large count values to be passed into urandom_read()".
> 
> I don't know why people pull such large amounts of data from urandom,
> but given that two bug reports about problems doing that have come in
> today, I consider it something people actually do.

Ted, we should roll back the min_t change and just account for SIZE_MAX
bytes in case the ibytes << (ENTROPY_SHIFT + 3) calculation overflows.

Or something like this?

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 71529e1..f11a6cc 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1006,7 +1006,10 @@ retry:
 		WARN_ON(1);
 		entropy_count = 0;
 	}
-	nfrac = ibytes << (ENTROPY_SHIFT + 3);
+	if (ibytes > SIZE_MAX >> (ENTROPY_SHIFT + 3))
+		nfrac = SIZE_MAX;
+	else
+		nfrac = ibytes << (ENTROPY_SHIFT + 3);
 	if ((size_t) entropy_count > nfrac)
 		entropy_count -= nfrac;
 	else
@@ -1386,7 +1389,6 @@ urandom_read(struct file *file, char __user *buf, size_t nbytes, loff_t *ppos)
 			    "with %d bits of entropy available\n",
 			    current->comm, nonblocking_pool.entropy_total);
 
-	nbytes = min_t(size_t, nbytes, INT_MAX >> (ENTROPY_SHIFT + 3));
 	ret = extract_entropy_user(&nonblocking_pool, buf, nbytes);
 
 	trace_urandom_read(8 * nbytes, ENTROPY_BITS(&nonblocking_pool),
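
The guard in isolation (a standalone sketch of the pattern, not the
kernel code; the helper name is mine):

	#include <stddef.h>
	#include <stdint.h>

	static size_t frac_from_bytes(size_t ibytes, unsigned int shift)
	{
		/* ibytes << shift would wrap past SIZE_MAX: saturate instead. */
		if (ibytes > SIZE_MAX >> shift)
			return SIZE_MAX;
		return ibytes << shift;
	}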




* Re: Reading large amounts from /dev/urandom broken
  2014-07-23 13:52 Reading large amounts from /dev/urandom broken Andrey Utkin
  2014-07-23 14:32 ` Hannes Frederic Sowa
@ 2014-07-23 15:14 ` Theodore Ts'o
  2014-07-23 15:19   ` Hannes Frederic Sowa
  2014-08-09  7:45   ` Pavel Machek
  1 sibling, 2 replies; 8+ messages in thread
From: Theodore Ts'o @ 2014-07-23 15:14 UTC (permalink / raw)
  To: Andrey Utkin; +Cc: hannes, linux-kernel

On Wed, Jul 23, 2014 at 04:52:21PM +0300, Andrey Utkin wrote:
> Dear developers, please check bugzilla ticket
> https://bugzilla.kernel.org/show_bug.cgi?id=80981 (not the initial
> issue, but starting with comment #3).
> 
> Reading from /dev/urandom gives EOF after 33554431 bytes.  I believe
> it was introduced by commit 79a8468747c5f95ed3d5ce8376a3e82e0c5857fc,
> with the chunk
> 
> nbytes = min_t(size_t, nbytes, INT_MAX >> (ENTROPY_SHIFT + 3));
> 
> which is described in commit message as "additional paranoia check to
> prevent overly large count values to be passed into urandom_read()".
> 
> I don't know why people pull such large amounts of data from urandom,
> but given that two bug reports about problems doing that have come in
> today, I consider it something people actually do.

I've inquired on the bugzilla why the reporter is abusing urandom in
this way.  The other commenter on the bug replicated the problem, but
that's not a "second bug report" in my book.

At the very least, this will probably cause me to insert a warning
printk: "insane user of /dev/urandom: [current->comm] requested %d
bytes" whenever someone tries to request more than 4k.

						- Ted


* Re: Reading large amounts from /dev/urandom broken
  2014-07-23 15:14 ` Theodore Ts'o
@ 2014-07-23 15:19   ` Hannes Frederic Sowa
  2014-07-24 20:39     ` Alex Elsayed
  2014-08-09  7:45   ` Pavel Machek
  1 sibling, 1 reply; 8+ messages in thread
From: Hannes Frederic Sowa @ 2014-07-23 15:19 UTC (permalink / raw)
  To: Theodore Ts'o; +Cc: Andrey Utkin, linux-kernel

On Wed, 2014-07-23 at 11:14 -0400, Theodore Ts'o wrote:
> On Wed, Jul 23, 2014 at 04:52:21PM +0300, Andrey Utkin wrote:
> > Dear developers, please check bugzilla ticket
> > https://bugzilla.kernel.org/show_bug.cgi?id=80981 (not the initial
> > issue, but starting with comment #3).
> > 
> > Reading from /dev/urandom gives EOF after 33554431 bytes.  I believe
> > it was introduced by commit 79a8468747c5f95ed3d5ce8376a3e82e0c5857fc,
> > with the chunk
> > 
> > nbytes = min_t(size_t, nbytes, INT_MAX >> (ENTROPY_SHIFT + 3));
> > 
> > which is described in commit message as "additional paranoia check to
> > prevent overly large count values to be passed into urandom_read()".
> > 
> > I don't know why people pull such large amounts of data from urandom,
> > but given that two bug reports about problems doing that have come in
> > today, I consider it something people actually do.
> 
> I've inquired on the bugzilla why the reporter is abusing urandom in
> this way.  The other commenter on the bug replicated the problem, but
> that's not a "second bug report" in my book.
> 
> At the very least, this will probably cause me to insert a warning
> printk: "insane user of /dev/urandom: [current->comm] requested %d
> bytes" whenever someone tries to request more than 4k.

Ok, I would be fine with that.

The dd if=/dev/urandom of=random_file.dat use case seems reasonable to
me, so we should try not to break it. But, of course, there are other
possibilities.

Bye,
Hannes



* Re: Reading large amounts from /dev/urandom broken
  2014-07-23 15:19   ` Hannes Frederic Sowa
@ 2014-07-24 20:39     ` Alex Elsayed
  0 siblings, 0 replies; 8+ messages in thread
From: Alex Elsayed @ 2014-07-24 20:39 UTC (permalink / raw)
  To: linux-kernel

Hannes Frederic Sowa wrote:

> On Wed, 2014-07-23 at 11:14 -0400, Theodore Ts'o wrote:
>> On Wed, Jul 23, 2014 at 04:52:21PM +0300, Andrey Utkin wrote:
>> > Dear developers, please check bugzilla ticket
>> > https://bugzilla.kernel.org/show_bug.cgi?id=80981 (not the initial
>> > issue, but starting with comment #3).
>> > 
>> > Reading from /dev/urandom gives EOF after 33554431 bytes.  I believe
>> > it was introduced by commit 79a8468747c5f95ed3d5ce8376a3e82e0c5857fc,
>> > with the chunk
>> > 
>> > nbytes = min_t(size_t, nbytes, INT_MAX >> (ENTROPY_SHIFT + 3));
>> > 
>> > which is described in commit message as "additional paranoia check to
>> > prevent overly large count values to be passed into urandom_read()".
>> > 
>> > I don't know why people pull such large amounts of data from urandom,
>> > but given that two bug reports about problems doing that have come in
>> > today, I consider it something people actually do.
>> 
>> I've inquired on the bugzilla why the reporter is abusing urandom in
>> this way.  The other commenter on the bug replicated the problem, but
>> that's not a "second bug report" in my book.
>> 
>> At the very least, this will probably cause me to insert a warning
>> printk: "insane user of /dev/urandom: [current->comm] requested %d
>> bytes" whenever someone tries to request more than 4k.
> 
> Ok, I would be fine with that.
> 
> The dd if=/dev/urandom of=random_file.dat seems reasonable to me to try
> to not break it. But, of course, there are other possibilities.

Personally, I'd say that _is_ insane - reading from urandom still
consumes entropy (causing readers of /dev/random to block more often);
when alternatives (such as dd'ing to a dm-crypt device) both avoid the
issue _and_ are faster, it should very well be considered pathological.



* Re: Reading large amounts from /dev/urandom broken
  2014-07-23 15:14 ` Theodore Ts'o
  2014-07-23 15:19   ` Hannes Frederic Sowa
@ 2014-08-09  7:45   ` Pavel Machek
  2014-08-10 11:51     ` Andrey Utkin
  1 sibling, 1 reply; 8+ messages in thread
From: Pavel Machek @ 2014-08-09  7:45 UTC (permalink / raw)
  To: Theodore Ts'o, Andrey Utkin, hannes, linux-kernel

Hi!

> I've inquired on the bugzilla why the reporter is abusing urandom in
> this way.  The other commenter on the bug replicated the problem, but
> that's not a "second bug report" in my book.
> 
> At the very least, this will probably cause me to insert a warning
> printk: "insane user of /dev/urandom: [current->comm] requested %d
> bytes" whenever someone tries to request more than 4k.

Warn about my quick benchmark?

http://atrey.karlin.mff.cuni.cz/~pavel/quickbench.html

I don't see what is insane about it. Yes, there might be more
efficient generators of random bits, but that is not the point; this
is a quick & dirty attempt at a benchmark.

Also, people will use cat /dev/urandom > /dev/sdX (and similar) to
wipe hard drives. There should be no need to warn about that.
(Performance is going to be disk-limited anyway.)

									Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html


* Re: Reading large amounts from /dev/urandom broken
  2014-08-09  7:45   ` Pavel Machek
@ 2014-08-10 11:51     ` Andrey Utkin
  2014-08-12  9:14       ` Pavel Machek
  0 siblings, 1 reply; 8+ messages in thread
From: Andrey Utkin @ 2014-08-10 11:51 UTC (permalink / raw)
  To: Pavel Machek; +Cc: Theodore Ts'o, hannes, linux-kernel

2014-08-09 10:45 GMT+03:00 Pavel Machek <pavel@ucw.cz>:
> Warn about my quick benchmark?
>
> http://atrey.karlin.mff.cuni.cz/~pavel/quickbench.html
>
> I don't see what is insane about it. Yes, there might be more
> efficient generators of random bits, but that is not the point; this
> is a quick & dirty attempt at a benchmark.
>
> Also, people will use cat /dev/urandom > /dev/sdX (and similar) to
> wipe hard drives. There should be no need to warn about that.
> (Performance is going to be disk-limited anyway.)

I believe "cat /dev/urandom" doesn't result in reading with _block
size_ more than 32 MB. As it was already explained by Theodore,
currently single read() operation will return "short read" when
requested to give more than 32 MB at once. You still can read more in
consecuential reading acts.
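
For completeness, the usual userspace pattern is to loop until the
buffer is full, treating short reads as normal (a sketch; the helper
name is mine):

	#include <unistd.h>

	static ssize_t read_full(int fd, void *buf, size_t len)
	{
		size_t done = 0;

		while (done < len) {
			ssize_t n = read(fd, (char *)buf + done, len - done);

			if (n <= 0)
				return n;	/* error or true EOF */
			done += n;
		}
		return done;
	}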

-- 
Andrey Utkin


* Re: Reading large amounts from /dev/urandom broken
  2014-08-10 11:51     ` Andrey Utkin
@ 2014-08-12  9:14       ` Pavel Machek
  0 siblings, 0 replies; 8+ messages in thread
From: Pavel Machek @ 2014-08-12  9:14 UTC (permalink / raw)
  To: Andrey Utkin; +Cc: Theodore Ts'o, hannes, linux-kernel

On Sun 2014-08-10 14:51:08, Andrey Utkin wrote:
> 2014-08-09 10:45 GMT+03:00 Pavel Machek <pavel@ucw.cz>:
> > Warn about my quick benchmark?
> >
> > http://atrey.karlin.mff.cuni.cz/~pavel/quickbench.html
> >
> > I don't see what is insane about it. Yes, there might be more
> > efficient generators of random bits, but that is not the point; this
> > is a quick & dirty attempt at a benchmark.
> >
> > Also, people will use cat /dev/urandom > /dev/sdX (and similar) to
> > wipe hard drives. There should be no need to warn about that.
> > (Performance is going to be disk-limited anyway.)
> 
> I believe "cat /dev/urandom" doesn't result in reading with _block
> size_ more than 32 MB. As it was already explained by Theodore,
> currently single read() operation will return "short read" when
> requested to give more than 32 MB at once. You still can read more in
> consecuential reading acts.

Well, I still don't think this behaviour change is a good idea.

									Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

