* /dev/random in 2.4.6
@ 2001-08-15 15:07 Steve Hill
  2001-08-15 15:21 ` Richard B. Johnson
                   ` (2 more replies)
  0 siblings, 3 replies; 59+ messages in thread
From: Steve Hill @ 2001-08-15 15:07 UTC (permalink / raw)
  To: linux-kernel


Until recently I've been using the 2.2.16 kernel on Cobalt Qube 3's, but
I've just upgraded to 2.4.6.  Since there's no mouse, keyboard, etc, there
isn't much entropy data.  I had no problem getting plenty of data from
/dev/random under 2.2, but under 2.4.6 there seems to be a distinct lack
of data - it takes absolutely ages to extract about 256 bytes from it
(whereas under 2.2 it was relatively quick).  Has there been a major
change in the way the random number generator works under 2.4?

-- 

- Steve Hill
System Administrator         Email: steve@navaho.co.uk
Navaho Technologies Ltd.       Tel: +44-870-7034015

        ... Alcohol and calculus don't mix - Don't drink and derive! ...




* Re: /dev/random in 2.4.6
  2001-08-15 15:07 /dev/random in 2.4.6 Steve Hill
@ 2001-08-15 15:21 ` Richard B. Johnson
  2001-08-15 15:27   ` Steve Hill
  2001-08-15 19:25 ` Alex Bligh - linux-kernel
  2001-08-15 20:55 ` Robert Love
  2 siblings, 1 reply; 59+ messages in thread
From: Richard B. Johnson @ 2001-08-15 15:21 UTC (permalink / raw)
  To: Steve Hill; +Cc: linux-kernel

On Wed, 15 Aug 2001, Steve Hill wrote:

> 
> Until recently I've been using the 2.2.16 kernel on Cobalt Qube 3's, but
> I've just upgraded to 2.4.6.  Since there's no mouse, keyboard, etc, there
> isn't much entropy data.  I had no problem getting plenty of data from
> /dev/random under 2.2, but under 2.4.6 there seems to be a distinct lack
> of data - it takes absolutely ages to extract about 256 bytes from it
> (whereas under 2.2 it was relatively quick).  Has there been a major
> change in the way the random number generator works under 2.4?
> 
> -- 
> 

Same problem on 2.4.1. The first 512 bytes comes right away if
/dev/random hasn't been accessed since boot, then the rest trickles
a few words per second.

Cheers,
Dick Johnson

Penguin : Linux version 2.4.1 on an i686 machine (799.53 BogoMips).

    I was going to compile a list of innovations that could be
    attributed to Microsoft. Once I realized that Ctrl-Alt-Del
    was handled in the BIOS, I found that there aren't any.




* Re: /dev/random in 2.4.6
  2001-08-15 15:21 ` Richard B. Johnson
@ 2001-08-15 15:27   ` Steve Hill
  2001-08-15 15:42     ` Richard B. Johnson
                       ` (2 more replies)
  0 siblings, 3 replies; 59+ messages in thread
From: Steve Hill @ 2001-08-15 15:27 UTC (permalink / raw)
  To: Richard B. Johnson; +Cc: linux-kernel

On Wed, 15 Aug 2001, Richard B. Johnson wrote:

> Same problem on 2.4.1. The first 512 bytes comes right away if
> /dev/random hasn't been accessed since boot, then the rest trickles
> a few words per second.

Hmm...  Well, ATM I've kludged a fix by using /dev/urandom instead, but
it's not ideal because it's being used to generate cryptographic keys, and
urandom isn't cryptographically secure.

Are you seeing the problem on a normal machine? (I assumed I was seeing it
because I'm using Cobalt hardware that's not going to get much entropy
data due to the lack of keyboard, etc)...  although when I'm generating
this data I'm using a root NFS filesystem, so there should be plenty of
network interrupts happening, which should generate some entropy...

I might have a look into increasing the size of the entropy pool so
there's more data to access at once...

-- 

- Steve Hill
System Administrator         Email: steve@navaho.co.uk
Navaho Technologies Ltd.       Tel: +44-870-7034015

        ... Alcohol and calculus don't mix - Don't drink and derive! ...




* Re: /dev/random in 2.4.6
  2001-08-15 15:27   ` Steve Hill
@ 2001-08-15 15:42     ` Richard B. Johnson
  2001-08-15 16:29       ` Tim Walberg
  2001-08-15 17:13     ` Andreas Dilger
  2001-08-19 17:27     ` David Wagner
  2 siblings, 1 reply; 59+ messages in thread
From: Richard B. Johnson @ 2001-08-15 15:42 UTC (permalink / raw)
  To: Steve Hill; +Cc: linux-kernel

On Wed, 15 Aug 2001, Steve Hill wrote:

> On Wed, 15 Aug 2001, Richard B. Johnson wrote:
> 
> > Same problem on 2.4.1. The first 512 bytes comes right away if
> > /dev/random hasn't been accessed since boot, then the rest trickles
> > a few words per second.
> 
> Hmm...  Well, ATM I've kludged a fix by using /dev/urandom instead, but
> it's not ideal because it's being used to generate cryptographic keys, and
> urandom isn't cryptographically secure.
> 
> Are you seeing the problem on a normal machine? (I assumed I was seeing it
> because I'm using Cobalt hardware that's not going to get much entropy
> data due to the lack of keyboard, etc)...  although when I'm generating
> this data I'm using a root NFS filesystem, so there should be plenty of
> network interrupts happening, which should generate some entropy...
> 
> I might have a look into increasing the size of the entropy pool so
> there's more data to access at once...
> 
Well, here's what happens here:

Script started on Wed Aug 15 11:21:53 2001
# od /dev/random
0000000 130674 116756 132001 007612 006026 172713 104065 064656
0000020 116442 062343 121475 137430 055664 120262 026123 163564
0000040 033327 150202 024063 171506 057577 076666 032573 136163
0000060 065715 044033 106023 170742 113116 165162 174552 030442
0000100 120516 015667 067741 024013 074131 167165 046440 174337
0000120 024433 173272 174077 166252 061222 165164 101233 067027
0000140 105137 142465 053441 102473 105611 047232 012244 007461
0000160 022153 174254 147632 150762 065452 057635 007715 127427
0000200 030603 120467 036735 010661 177320 107643 101210 037421
0000220 032771 022570 110575 116457 016235 020242 063455 121261
0000240 011150 104011 167330 065547 176443 041525 132551 030143
0000260 111676 057565 057406 046766 076027 041337 032770 140766

# exit
exit

Script done on Wed Aug 15 11:23:51 2001

I finally ^C'd out of it. Look at the start and stop times of
the script!

Script started on Wed Aug 15 11:21:53 2001
Script done on Wed Aug 15 11:23:51 2001
That's almost two minutes I waited for the data displayed above.

In the meantime, I've gotten hundreds, perhaps thousands of
network interrupts because there is a lot of broadcast traffic
on the LAN. This should have added enough stuff to the pool.

Cheers,
Dick Johnson

Penguin : Linux version 2.4.1 on an i686 machine (799.53 BogoMips).

    I was going to compile a list of innovations that could be
    attributed to Microsoft. Once I realized that Ctrl-Alt-Del
    was handled in the BIOS, I found that there aren't any.




* Re: /dev/random in 2.4.6
  2001-08-15 15:42     ` Richard B. Johnson
@ 2001-08-15 16:29       ` Tim Walberg
  0 siblings, 0 replies; 59+ messages in thread
From: Tim Walberg @ 2001-08-15 16:29 UTC (permalink / raw)
  To: Richard B. Johnson; +Cc: Steve Hill, linux-kernel


I may be wrong here - haven't looked at the source lately -
and I'm sure someone will correct me if I am, but I don't
think that network interrupts in general contribute to
the random driver, the theory being that an attacker
could carefully time the packets sent and thus possibly
influence the entropy pool in some way that would gain
some advantage. I don't think this has been proven, just
that network interrupts are not used because of general
paranoia to that effect. The sources I know of that contribute
to the entropy pool are keyboard and mouse interrupts (and
scancodes and pointer positions), some block device timing
information and some other interrupts. Actually, a quick
perusal of 2.4.8-ac3 shows that sk_mca, 3c523, and ibmlana
seem to be the only network drivers that pass the
SA_SAMPLE_RANDOM flag when registering their interrupt handlers.

So, my guess is that on a system without mouse and keyboard,
you may need to do something (low priority-ish to minimize
performance impact) that generates a fair amount of disk activity
in order to keep the entropy pool full (unless you happen to have
one of the above network drivers).



			tw



-- 
twalberg@mindspring.com



* Re: /dev/random in 2.4.6
  2001-08-15 15:27   ` Steve Hill
  2001-08-15 15:42     ` Richard B. Johnson
@ 2001-08-15 17:13     ` Andreas Dilger
  2001-08-16  8:37       ` Steve Hill
  2001-08-17 21:18       ` Theodore Tso
  2001-08-19 17:27     ` David Wagner
  2 siblings, 2 replies; 59+ messages in thread
From: Andreas Dilger @ 2001-08-15 17:13 UTC (permalink / raw)
  To: Steve Hill; +Cc: Richard B. Johnson, linux-kernel

Steve Hill writes:
> On Wed, 15 Aug 2001, Richard B. Johnson wrote:
> > Same problem on 2.4.1. The first 512 bytes comes right away if
> > /dev/random hasn't been accessed since boot, then the rest trickles
> > a few words per second.
> 
> Hmm...  Well, ATM I've kludged a fix by using /dev/urandom instead, but
> it's not ideal because it's being used to generate cryptographic keys, and
> urandom isn't cryptographically secure.
> 
> I might have a look into increasing the size of the entropy pool so
> there's more data to access at once...

Yes, it is possible to increase the size of the in-kernel entropy pool
by changing the value in linux/drivers/char/random.c.  You will likely
also need to fix up the user-space scripts that save and restore the
entropy on shutdown/startup (they can check /proc/sys/kernel/random/poolsize,
if available, to see how many bytes to read/write).
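For concreteness, a minimal sketch of the save half of such a script,
written in C (real systems do this from a shell init script; the seed
file path and the 512-byte fallback are assumptions):

#include <stdio.h>

int main(void)
{
	FILE *p = fopen("/proc/sys/kernel/random/poolsize", "r");
	FILE *in, *out;
	int bytes = 512;	/* fall back to the stock pool size */
	int c;

	if (p) {
		fscanf(p, "%d", &bytes);
		fclose(p);
	}
	in = fopen("/dev/urandom", "r");
	out = fopen("/var/run/random-seed", "w");
	if (!in || !out)
		return 1;
	while (bytes-- > 0 && (c = getc(in)) != EOF)
		putc(c, out);
	return 0;
}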

> Are you seeing the problem on a normal machine? (I assumed I was seeing it
> because I'm using Cobalt hardware that's not going to get much entropy
> data due to the lack of keyboard, etc)...  although when I'm generating
> this data I'm using a root NFS filesystem, so there should be plenty of
> network interrupts happening, which should generate some entropy...

Note that network interrupts do NOT normally contribute to the entropy
pool.  This is because of the _very_theoretical_ possibility that an
attacker can "control" the network traffic to such a precise extent as
to flush or otherwise contaminate the entropy from the pool by sending
packets with very precise intervals and generating interrupts so exactly
as to fill the entropy pool with known data.  IMVHO, this is basically
impossible, as the attacker could not possibly control ALL of the network
traffic, and you could optionally define "safe" and "unsafe" interfaces
in terms of entropy.

It is basically inconceivable (IMHO) that anything but a machine connected
via a single crossover cable could "attack" a system precisely enough to
actually compromise the entropy pool because of network interrupts, but
then again people with more knowledge of this than I are in charge.

That said, if you are running on a machine with an Intel i81[05]
chipset, it has a hardware random number generator (I doubt Cobalt boxes
have these).  Alternatively, you need to add SA_SAMPLE_RANDOM to the
request_irq() call of the network card in your system.

You may also want to add a kernel/module parameter to make this flag
conditional on a per-controller basis, so for example it will only add
entropy from the internal interface of a firewall, and not the external
interface (if you have identical NICs on both).
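A sketch of how that per-controller gating might look in a 2.4 driver,
using the MODULE_PARM idiom of the day (all of the names here are
invented for illustration):

#include <linux/module.h>
#include <linux/netdevice.h>
#include <linux/sched.h>	/* request_irq() and SA_* flags in 2.4 */

#define MAX_UNITS 8

/* e.g. "modprobe mynic sample_random=1,0" feeds entropy from the first
 * NIC's IRQ (internal interface) but not the second (external) */
static int sample_random[MAX_UNITS];
MODULE_PARM(sample_random, "1-" __MODULE_STRING(MAX_UNITS) "i");

static int mynic_request_irq(struct net_device *dev, int unit,
			     void (*handler)(int, void *, struct pt_regs *))
{
	unsigned long flags = SA_SHIRQ;

	if (sample_random[unit])
		flags |= SA_SAMPLE_RANDOM;
	return request_irq(dev->irq, handler, flags, dev->name, dev);
}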

Cheers, Andreas
-- 
Andreas Dilger  \ "If a man ate a pound of pasta and a pound of antipasto,
                 \  would they cancel out, leaving him still hungry?"
http://www-mddsp.enel.ucalgary.ca/People/adilger/               -- Dogbert



* Re: /dev/random in 2.4.6
  2001-08-15 15:07 /dev/random in 2.4.6 Steve Hill
  2001-08-15 15:21 ` Richard B. Johnson
@ 2001-08-15 19:25 ` Alex Bligh - linux-kernel
  2001-08-16  8:55   ` Steve Hill
  2001-08-15 20:55 ` Robert Love
  2 siblings, 1 reply; 59+ messages in thread
From: Alex Bligh - linux-kernel @ 2001-08-15 19:25 UTC (permalink / raw)
  To: Steve Hill, linux-kernel; +Cc: Alex Bligh - linux-kernel

Steve,

> Until recently I've been using the 2.2.16 kernel on Cobalt Qube 3's, but
> I've just upgraded to 2.4.6.  Since there's no mouse, keyboard, etc, there
> isn't much entropy data.  I had no problem getting plenty of data from
> /dev/random under 2.2, but under 2.4.6 there seems to be a distinct lack
> of data - it takes absolutely ages to extract about 256 bytes from it
> (whereas under 2.2 it was relatively quick).  Has there been a major
> change in the way the random number generator works under 2.4?

Some network drivers generate entropy on network interrupts, some
don't. Apparently this inconsistent state is the way people want
to keep it.

If you want to add entropy on network interrupts, look for the line
in your driver which does a request_irq, and | in SA_SAMPLE_RANDOM
to the flags value.

I'd prefer a single /proc/ entry to turn entropy on from ALL network
devices for precisely the reason you state (SCSI means no IDE
entropy either), even if it's off by default for ALL network
devices for paranoia reasons, but there seems to be some religious
issue at play which means the state currently depends on which
brand of network card you have.

Example 'fix' (apply it in reverse to remove entropy from
network interrupts if you are paranoid) below.

(tabs probably broken in the text below but easier to do it manually
anyway)

--- eepro100.c~ Tue Feb 13 21:15:05 2001
+++ eepro100.c  Sun Apr  8 22:17:00 2001
@@ -923,7 +923,7 @@
        sp->in_interrupt = 0;

        /* .. we can safely take handler calls during init. */
-       retval = request_irq(dev->irq, &speedo_interrupt, SA_SHIRQ, dev->name, dev);
+       retval = request_irq(dev->irq, &speedo_interrupt, SA_SHIRQ | SA_SAMPLE_RANDOM, dev->name, dev);
        if (retval) {
                MOD_DEC_USE_COUNT;
                return retval;



--
Alex Bligh


* Re: /dev/random in 2.4.6
  2001-08-15 15:07 /dev/random in 2.4.6 Steve Hill
  2001-08-15 15:21 ` Richard B. Johnson
  2001-08-15 19:25 ` Alex Bligh - linux-kernel
@ 2001-08-15 20:55 ` Robert Love
  2001-08-15 21:27   ` Alex Bligh - linux-kernel
  2 siblings, 1 reply; 59+ messages in thread
From: Robert Love @ 2001-08-15 20:55 UTC (permalink / raw)
  To: Alex Bligh - linux-kernel; +Cc: Steve Hill, linux-kernel

On 15 Aug 2001 20:25:56 +0100, Alex Bligh - linux-kernel wrote:
> I'd prefer a single /proc/ entry to turn entropy on from ALL network
> devices for precisely the reason you state (SCSI means no IDE
> entropy either), even if it's off by default for ALL network
> devices for paranoia reasons, but there seems to be some religious
> issue at play which means the state currently depends on which
> brand of network card you have.

This is a _very_ good idea and one I suspect most people won't find
fault with.

Personally, I want entropy gathering enabled for my network devices.
While I don't believe there is any chance in hell that a remote intruder
can influence the entropy pool in a way that makes the returned hash
predictable, I understand some people don't want entropy gathering
enabled on their NICs.

There are two approaches to this.  Neither idea would be too hard.

Method one, your idea, would have us add SA_SAMPLE_NET_RANDOM to each
NIC's request_irq call.  The random gatherer would then need to be made
aware of the sysctl and check and add/remove interrupts derived from
NICs as needed.  This would require a bit of recoding (take a look at
request_irq and random.c).

Note we can't do the check once in request_irq because this is only
called once.  Anything loaded before the sysctl was set would be out of
luck (note this is anything not a module).  Additionally, we wouldn't be
able to change the sysctl on the fly and have the NICs start/stop adding
entropy.

An easier, though less robust, idea (and one I like) is a
configure option "Gather entropy using Network Devices".  Then we add
SA_SAMPLE_NET_RANDOM to each NIC's request_irq flags and define it like
this:

#ifdef CONFIG_USE_NET_ENTROPY
#define SA_SAMPLE_NET_RANDOM SA_SAMPLE_RANDOM
#else
#define SA_SAMPLE_NET_RANDOM 0
#endif

and voila.  No extra code after compile, everyone can choose, and who
would complain?  Those who want the entropy will get it.
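A driver would then pass the macro unconditionally when registering its
handler; a sketch mirroring the eepro100 hunk quoted earlier in the
thread (mynic_interrupt is an invented name):

/* compiles to SA_SHIRQ | SA_SAMPLE_RANDOM when the config option is
 * enabled, and to plain SA_SHIRQ (no runtime cost at all) otherwise */
retval = request_irq(dev->irq, &mynic_interrupt,
                     SA_SHIRQ | SA_SAMPLE_NET_RANDOM,
                     dev->name, dev);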

-- 
Robert M. Love
rml at ufl.edu
rml at tech9.net



* Re: /dev/random in 2.4.6
  2001-08-15 20:55 ` Robert Love
@ 2001-08-15 21:27   ` Alex Bligh - linux-kernel
  0 siblings, 0 replies; 59+ messages in thread
From: Alex Bligh - linux-kernel @ 2001-08-15 21:27 UTC (permalink / raw)
  To: Robert Love, Alex Bligh - linux-kernel
  Cc: Steve Hill, linux-kernel, Alex Bligh - linux-kernel

Robert,

> Method one, your idea, would have us add SA_SAMPLE_NET_RANDOM to each
> NIC's request_irq call.  The random gatherer would then need to be made
> aware of the sysctl and check and add/remove interrupts derived from
> NICs as needed.  This would require a bit of recoding (take a look at
> request_irq and random.c)

Hardly any - apart from adding a (new) SA_SAMPLE_NET_RANDOM to request_irq
in each drivers/net/*.c, you just need (manual diff) in handle_IRQ_event:

         } while (action);
-        if (status & SA_SAMPLE_RANDOM)
+        if ((status & SA_SAMPLE_RANDOM) ||
+            (entropy_from_net &&
+            (status & SA_SAMPLE_NET_RANDOM)))
                 add_interrupt_randomness(irq);
         __cli();

and then the completely trivial /proc (and/or sysctl if that's
really necessary) code for twiddling
/proc/driver/entropy_from_net (or whatever it's called).

+	int sysctl_entropy_from_net;
...
+        {DR_RANDOM_ENTROPYNET, "entropy_from_net",
+         &sysctl_entropy_from_net,
+         sizeof(sysctl_entropy_from_net), 0644, NULL, &proc_dointvec},

somewhere where it gets into dev_table in sysctl.c -
and that's about it.

Given that distributions normally have installers with 'hints' as
to whether they are running headless or not, an rc script could
write a '1' here if the machine was perceived as headless, or
leave the default (0) otherwise.

--
Alex Bligh


* Re: /dev/random in 2.4.6
  2001-08-15 17:13     ` Andreas Dilger
@ 2001-08-16  8:37       ` Steve Hill
  2001-08-16 19:11         ` Andreas Dilger
  2001-08-17  0:49         ` Robert Love
  2001-08-17 21:18       ` Theodore Tso
  1 sibling, 2 replies; 59+ messages in thread
From: Steve Hill @ 2001-08-16  8:37 UTC (permalink / raw)
  To: Andreas Dilger; +Cc: Richard B. Johnson, linux-kernel

On Wed, 15 Aug 2001, Andreas Dilger wrote:

> Yes, it is possible to increase the size of the in-kernel entropy pool
> by changing the value in linux/drivers/char/random.c.  You will likely
> also need to fix up the user-space scripts that save and restore the
> entropy on shutdown/startup (they can check /proc/sys/kernel/random/poolsize,
> if available, to see how many bytes to read/write).

It didn't help - there just isn't enough entropy data being generated
between boot time and when I extract the random numbers.  This is
basically a system to install a linux distribution, so it's booted off the
network with a readonly root NFS, so there is no saved entropy data to
load, so I'm starting off with an empty entropy pool and having to rely on
the kernel to generate the data from scratch.  The random numbers are used
to initialise the ssh and VPN keys.

-- 

- Steve Hill
System Administrator         Email: steve@navaho.co.uk
Navaho Technologies Ltd.       Tel: +44-870-7034015

        ... Alcohol and calculus don't mix - Don't drink and derive! ...




* Re: /dev/random in 2.4.6
  2001-08-15 19:25 ` Alex Bligh - linux-kernel
@ 2001-08-16  8:55   ` Steve Hill
  0 siblings, 0 replies; 59+ messages in thread
From: Steve Hill @ 2001-08-16  8:55 UTC (permalink / raw)
  To: Alex Bligh - linux-kernel; +Cc: linux-kernel

On Wed, 15 Aug 2001, Alex Bligh - linux-kernel wrote:

> Some network drivers generate entropy on network interrupts, some
> don't. Apparently this inconsistent state is the way people want
> to keep it.

This is very bad, I would think - if the main source of entropy data is
the keyboard & mouse, there are a lot of servers out there with no
keyboard or mouse plugged into them that must be really having problems
getting enough data.

I was originally using Cobalt's kernel, so I have my suspicions that they
may have kludged the random number generator into working better with
their hardware.

> If you want to add entropy on network interrupts, look for the line
> in your driver which does a request_irq, and | in SA_SAMPLE_RANDOM
> to the flags value.

I've just added that to the natsemi ethernet driver (used on the Cobalt
Qube 3's) and the eepro100 driver (used on the Cobalt Raq 3/4) and it
seems to have fixed the problem, thanks.  I can only assume that this is
what Cobalt did to their own kernels in the first place...

> I'd prefer a single /proc/ entry to turn entropy on from ALL network
> devices for precisely the reason you state (SCSI means no IDE
> entropy either), even if it's off by default for ALL network
> devices for paranoia reasons, but there seems to be some religious
> issue at play which means the state currently depends on which
> brand of network card you have.

Yes, this would be a very nice idea.

-- 

- Steve Hill
System Administrator         Email: steve@navaho.co.uk
Navaho Technologies Ltd.       Tel: +44-870-7034015

        ... Alcohol and calculus don't mix - Don't drink and derive! ...




* Re: /dev/random in 2.4.6
  2001-08-16  8:37       ` Steve Hill
@ 2001-08-16 19:11         ` Andreas Dilger
  2001-08-16 19:35           ` Alex Bligh - linux-kernel
  2001-08-17  1:05           ` Robert Love
  2001-08-17  0:49         ` Robert Love
  1 sibling, 2 replies; 59+ messages in thread
From: Andreas Dilger @ 2001-08-16 19:11 UTC (permalink / raw)
  To: Steve Hill; +Cc: Richard B. Johnson, linux-kernel

On Thu, Aug 16, 2001 at 09:37:58AM +0100, Steve Hill wrote:
> On Wed, 15 Aug 2001, Andreas Dilger wrote:
> > Yes, it is possible to increase the size of the in-kernel entropy pool
> > by changing the value in linux/drivers/char/random.c.  You will likely
> > also need to fix up the user-space scripts that save and restore the
> > entropy on shutdown/startup (check /proc/sys/kernel/random/poolsize,
> > if available, to see how many bytes to read/write).
> 
> It didn't help - there just isn't enough entropy data being generated
> between boot time and when I extract the random numbers.  This is
> basically a system to install a linux distribution, so it's booted off the
> network with a readonly root NFS, so there is no saved entropy data to
> load, so I'm starting off with an empty entropy pool and having to rely on
> the kernel to generate the data from scratch.  The random numbers are used
> to initialise the ssh and VPN keys.

Hmm, since it IS critical that the ssh and VPN keys of a new system be
very good, you could do something like run "bonnie++" on one of the new
partitions, until you get enough entropy from block I/O completions.

Alternately, you could generate "weak" keys on the client using urandom
just to get ssh working, and then send keys generated on the server (which
presumably has more real entropy) to replace the weak keys.

That said, there are still cases where network traffic _has_ to be enough
for /dev/random, given that some firewalls (e.g. LRP) can run from only
ramdisk, so have no other source of entropy than the network traffic.

Cheers, Andreas
-- 
Andreas Dilger  \ "If a man ate a pound of pasta and a pound of antipasto,
                 \  would they cancel out, leaving him still hungry?"
http://www-mddsp.enel.ucalgary.ca/People/adilger/               -- Dogbert



* Re: /dev/random in 2.4.6
  2001-08-16 19:11         ` Andreas Dilger
@ 2001-08-16 19:35           ` Alex Bligh - linux-kernel
  2001-08-16 20:30             ` Andreas Dilger
  2001-08-17  1:05           ` Robert Love
  1 sibling, 1 reply; 59+ messages in thread
From: Alex Bligh - linux-kernel @ 2001-08-16 19:35 UTC (permalink / raw)
  To: Andreas Dilger, Steve Hill
  Cc: Richard B. Johnson, linux-kernel, Alex Bligh - linux-kernel

> Hmm, since it IS critical that the ssh and VPN keys of a new system be
> very good, you could do something like run "bonnie++" on one of the new
> partitions, until you get enough entropy from block I/O completions.
...
> That said, there are still cases where network traffic _has_ to be enough
> for /dev/random, given that some firewalls (e.g. LRP) can run from only
> ramdisk, so have no other source of entropy than the network traffic.

It's a while since I looked, but I /thought/ entropy only came from
IDE (not for instance from SCSI, and certainly not when everything
is sitting in cache). I have a reasonably active mailserver (SCSI,
no k/b, no mouse, lots of RAM) which doesn't have enough entropy
to cope with SSL/TLS gracefully without relying on the network
traffic (i.e. behaves like it is ramdisk only).

--
Alex Bligh


* Re: /dev/random in 2.4.6
  2001-08-16 19:35           ` Alex Bligh - linux-kernel
@ 2001-08-16 20:30             ` Andreas Dilger
  0 siblings, 0 replies; 59+ messages in thread
From: Andreas Dilger @ 2001-08-16 20:30 UTC (permalink / raw)
  To: Alex Bligh - linux-kernel; +Cc: Steve Hill, Richard B. Johnson, linux-kernel

On Thu, Aug 16, 2001 at 08:35:12PM +0100, Alex Bligh - linux-kernel wrote:
> It's a while since I looked, but I /thought/ entropy only came from
> IDE (not for instance from SCSI, and certainly not when everything
> is sitting in cache). I have a reasonably active mailserver (SCSI,
> no k/b, no mouse, lots of RAM) which doesn't have enough entropy
> to cope with SSL/TLS gracefully without relying on the network
> traffic (i.e. behaves like it is ramdisk only).

Actually, AFAIK you _may_ get entropy from the IDE interrupts directly
(I didn't check), but you also appear to get entropy from block I/O
completions.  I _thought_ I had seen such lines in IDE, DAC960,
and also in the SCSI midlayer, but then I wasn't paying much attention
to those - I was trying to track down the interrupts for net devices.

If it is NOT in the SCSI block I/O layer, it should probably be added.
Whether it makes sense to have entropy from both the block layer and
the actual device IRQs is an issue left up to the security experts (the
two may be correlated, and thus not be good entropy).

Cheers, Andreas
-- 
Andreas Dilger  \ "If a man ate a pound of pasta and a pound of antipasto,
                 \  would they cancel out, leaving him still hungry?"
http://www-mddsp.enel.ucalgary.ca/People/adilger/               -- Dogbert



* Re: /dev/random in 2.4.6
  2001-08-16  8:37       ` Steve Hill
  2001-08-16 19:11         ` Andreas Dilger
@ 2001-08-17  0:49         ` Robert Love
  2001-08-19 17:29           ` David Wagner
  1 sibling, 1 reply; 59+ messages in thread
From: Robert Love @ 2001-08-17  0:49 UTC (permalink / raw)
  To: Andreas Dilger; +Cc: Steve Hill, Richard B. Johnson, linux-kernel

On 16 Aug 2001 13:11:12 -0600, Andreas Dilger wrote:
> That said, there are still cases where network traffic _has_ to be enough
> for /dev/random, given that some firewalls (e.g. LRP) can run from only
> ramdisk, so have no other source of entropy than the network traffic.

I put together a patch that addresses this; it allows the user to
configure whether or not network devices contribute to the entropy pool.
More information can be found
-- 
Robert M. Love
rml at ufl.edu
rml at tech9.net



* Re: /dev/random in 2.4.6
  2001-08-16 19:11         ` Andreas Dilger
  2001-08-16 19:35           ` Alex Bligh - linux-kernel
@ 2001-08-17  1:05           ` Robert Love
  1 sibling, 0 replies; 59+ messages in thread
From: Robert Love @ 2001-08-17  1:05 UTC (permalink / raw)
  To: Robert Love; +Cc: Andreas Dilger, Steve Hill, Richard B. Johnson, linux-kernel

On 16 Aug 2001 20:49:01 -0400, Robert Love wrote:
> I put together a patch that addresses this; it allows the user to
> configure whether or not network devices contribute to the entropy pool.
> More information can be found

i guess i should mention where:

see the thread "[PATCH] Optionally let Net Devices feed Entropy" or
http://tech9.net/rml/linux

-- 
Robert M. Love
rml at ufl.edu
rml at tech9.net



* Re: /dev/random in 2.4.6
  2001-08-15 17:13     ` Andreas Dilger
  2001-08-16  8:37       ` Steve Hill
@ 2001-08-17 21:18       ` Theodore Tso
  2001-08-17 22:05         ` David Schwartz
  2001-08-19 17:31         ` David Wagner
  1 sibling, 2 replies; 59+ messages in thread
From: Theodore Tso @ 2001-08-17 21:18 UTC (permalink / raw)
  To: Andreas Dilger; +Cc: Steve Hill, Richard B. Johnson, linux-kernel

On Wed, Aug 15, 2001 at 11:13:41AM -0600, Andreas Dilger wrote:
> Note that network interrupts do NOT normally contribute to the entropy
> pool.  This is because of the _very_theoretical_ possibility that an
> attacker can "control" the network traffic to such a precise extent as
> to flush or otherwise contaminate the entropy from the pool by sending
> packets with very precise intervals and generating interrupts so exactly
> as to fill the entropy pool with known data.  IMVHO, this is basically
> impossible, as the attacker could not possibly control ALL of the network
> traffic, and you could optionally define "safe" and "unsafe" interfaces
> for terms of entropy.

That's not the only attack, actually.  The much simpler attack path is
for an attacker to **observe** the network traffic to such a precise
extent as to be able to guess what the entropy numbers are that are
going into the pool.  (Think: FBI's Carnivore).

The one saving grace here is that in order to really do this well, the
attacker would need to be sitting on the local area network to get the
best and most precise timing numbers.  You can argue that this is
still a theoretical attack; but it's not quite so difficult as saying
that the attacker has to "control" the network traffic.

						- Ted



* RE: /dev/random in 2.4.6
  2001-08-17 21:18       ` Theodore Tso
@ 2001-08-17 22:05         ` David Schwartz
  2001-08-19 15:13           ` Theodore Tso
  2001-08-19 17:31         ` David Wagner
  1 sibling, 1 reply; 59+ messages in thread
From: David Schwartz @ 2001-08-17 22:05 UTC (permalink / raw)
  To: Theodore Tso, Andreas Dilger; +Cc: linux-kernel


> That's not the only attack, actually.  The much simpler attack path is
> for an attacker to **observe** the network traffic to such a precise
> extent as to be able to guess what the entropy numbers are that are
> going into the pool.  (Think: FBI's Carnivore).
>
> The one saving grace here is that in order to really do this well, the
> attacker would need to be sitting on the local area network to get the
> best and most precise timing numbers.  You can argue that this is
> still a theoretical attack; but it's not quite so difficult as saying
> that the attacker has to "control" the network traffic.
>
> 						- Ted

	This is a non-issue providing the entropy pool code correctly estimates the
amount of entropy. The Linux entropy code is written so that there is no
harm from putting fully known or partially known numbers into the pool
provided that the pool does not overestimate the amount of entropy in those
numbers.

	Even if you could perfectly time the packets on the LAN, you still could
not tell the clock skew between the clock on the LAN card and the TSC. There
would still be unknowns involving how long it would take for the interrupt
to be acknowledged and the entropy gathering code to get to the CPU. These
unknowns still contain real entropy that an attacker has no known way
of learning.

	DS



* Re: /dev/random in 2.4.6
  2001-08-17 22:05         ` David Schwartz
@ 2001-08-19 15:13           ` Theodore Tso
  2001-08-19 15:33             ` Rob Radez
                               ` (2 more replies)
  0 siblings, 3 replies; 59+ messages in thread
From: Theodore Tso @ 2001-08-19 15:13 UTC (permalink / raw)
  To: David Schwartz; +Cc: Theodore Tso, Andreas Dilger, linux-kernel

On Fri, Aug 17, 2001 at 03:05:39PM -0700, David Schwartz wrote:
> > That's not the only attack, actually.  The much simpler attack path is
> > for an attacker to **observe** the network traffic to such a precise
> > extent as to be able to guess what the entropy numbers are that are
> > going into the pool.  (Think: FBI's Carnivore).
>
> 
> 	This is a non-issue providing the entropy pool code correctly
> estimates the amount of entropy. The Linux entropy code is written
> so that there is no harm from putting fully known or partially known
> numbers into the pool provided that the pool does not overestimate
> the amount of entropy in those numbers.

Err, yes, I know that....

The problem is that by being able to perfectly observe packets on the
LAN, you know a lot more about the numbers that are going in, and it's
therefore extremely likely that the amount of entropy will be
overestimated.

> 	Even if you could perfectly time the packets on the LAN, you
> still could not tell the clock skew between the clock on the LAN
> card and the TSC. There would still be unknowns involving how long
> it would take for the interrupt to be acknowledged and the entropy
> gathering code to get to the CPU. These unknowns still contain real
> entropy that there is no known way an attacker could know.

Yes, but the problem is that the entropy estimation code needs to
differentiate between the differences caused by packet arrival (which is
observable by an outsider) and differences caused by interrupt
timings, etc.  At the moment, it doesn't do this at all.

Also, the clock skew between the TSC and the LAN card has only two
components.  One is the actual clock frequency on the CPU, and the
other is a fixed offset (roughly based on when the system was booted).
So to a determined adversary, the amount of randomness this adds is a
fixed constant, and not something which changes each time entropy is
sampled.  Also, it wouldn't surprise me if a determined adversary
(read: NSA) could come up with statistical models of
interrupt response times for Linux such that the amount of entropy
that you can dependably rely on is less than you might think.

The bottom line is it really depends on how paranoid you want to be,
and how much and how closely you want /dev/random to reliably replace
a true hardware random number generator which relies on some physical
process (by measuring quantum noise using a noise diode, or by
measuring radioactive decay).  For most purposes, and against most
adversaries, it's probably acceptable to depend on network interrupts,
even if the entropy estimator may be overestimating things.

							- Ted




* Re: /dev/random in 2.4.6
  2001-08-19 15:13           ` Theodore Tso
@ 2001-08-19 15:33             ` Rob Radez
  2001-08-19 17:32             ` David Wagner
  2001-08-19 23:32             ` Oliver Xymoron
  2 siblings, 0 replies; 59+ messages in thread
From: Rob Radez @ 2001-08-19 15:33 UTC (permalink / raw)
  To: linux-kernel


On Sun, 19 Aug 2001, Theodore Tso wrote:

> The bottom line is it really depends on how paranoid you want to be,
> and how much and how closely you want /dev/random to reliably replace
> a true hardware random number generator which relies on some physical
> process (by measuring quantum noise using a noise diode, or by
> measuring radioactive decay).  For most purposes, and against most
> adversaries, it's probably acceptable to depend on network interrupts,
> even if the entropy estimator may be overestimating things.

Not picking on you Ted, but in the end, people have to remember this
is a configurable option.  If you don't want it, don't enable it.  In
fact, I believe it's set to be off by default, so just have a
Configure.help entry that says "Don't enable unless you really know what
you're doing."

-Rob Radez



* Re: /dev/random in 2.4.6
  2001-08-15 15:27   ` Steve Hill
  2001-08-15 15:42     ` Richard B. Johnson
  2001-08-15 17:13     ` Andreas Dilger
@ 2001-08-19 17:27     ` David Wagner
  2 siblings, 0 replies; 59+ messages in thread
From: David Wagner @ 2001-08-19 17:27 UTC (permalink / raw)
  To: linux-kernel

Steve Hill  wrote:
>Hmm...  Well, ATM I've kludged a fix by using /dev/urandom instead, but
>it's not ideal because it's being used to generate cryptographic keys, and
>urandom isn't cryptographically secure.

I think you may want to check again.  /dev/urandom *is* cryptographically
secure, and should be fine to use for generating crypto keys [1].

This seems to be a common point of confusion.


[1] Well, if SHA isn't secure, then /dev/urandom might not be any good.
    But if SHA isn't secure, then the rest of your crypto might not be
    any good either, so you might as well trust /dev/urandom.

    There *is* a subtle difference between the two.  When you want
    forward secrecy, /dev/urandom might be insufficient: If your machine
    is broken into, an attacker can learn the state of the pool, and
    then if you kick off the attacker without rebooting or refreshing
    the /dev/urandom pool, the attacker might be able to predict your
    crypto keys for some time after he's lost access to your machine.
    However, I would imagine that in many settings this may not be a
    major concern, and it is easily remedied by rebooting or by
    otherwise re-seeding the /dev/urandom pool.
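(As an aside, the re-seeding step itself is simple: data written to
/dev/urandom is mixed into the pool without being credited as entropy.
A minimal sketch, with a placeholder for the fresh secret material:)

#include <stdio.h>
#include <string.h>

int main(void)
{
	/* placeholder: obtain real secret material from a trusted source */
	const char secret[] = "...fresh secret material...";
	FILE *f = fopen("/dev/urandom", "w");

	if (!f)
		return 1;
	/* writes stir the pool but do not bump the entropy count, so this
	 * cannot make /dev/random over-report what it holds */
	fwrite(secret, 1, strlen(secret), f);
	fclose(f);
	return 0;
}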


* Re: /dev/random in 2.4.6
  2001-08-17  0:49         ` Robert Love
@ 2001-08-19 17:29           ` David Wagner
  0 siblings, 0 replies; 59+ messages in thread
From: David Wagner @ 2001-08-19 17:29 UTC (permalink / raw)
  To: linux-kernel

Robert Love  wrote:
>I put together a patch that addresses this; it allows the user to
>configure whether or not network devices contribute to the entropy pool.

Be careful.  Probably the right thing to do is to have network
devices contribute to the entropy pool but not to the entropy
count.  If you implement this policy, and use /dev/urandom, you
essentially get the best of both worlds, as far as I can see.


* Re: /dev/random in 2.4.6
  2001-08-17 21:18       ` Theodore Tso
  2001-08-17 22:05         ` David Schwartz
@ 2001-08-19 17:31         ` David Wagner
  1 sibling, 0 replies; 59+ messages in thread
From: David Wagner @ 2001-08-19 17:31 UTC (permalink / raw)
  To: linux-kernel

Theodore Tso  wrote:
>On Wed, Aug 15, 2001 at 11:13:41AM -0600, Andreas Dilger wrote:
>> Note that network interrupts do NOT normally contribute to the entropy
>> pool.  This is because of the _very_theoretical_ possibility that an
>> attacker can "control" the network traffic to such a precise extent as
>> to flush or otherwise contaminate the entropy from the pool [...]
>
>That's not the only attack, actually.  The much simpler attack path is
>for an attacker to **observe** the network traffic to such a precise
>extent as to be able to guess what the entropy numbers are that are
>going into the pool.  (Think: FBI's Carnivore).

Right.  Ted's observation says that network traffic should not
contribute to the entropy *count*.  However, it is probably still
useful to add network traffic timings to the pool (without bumping
up the count).  Adding extra traffic to the pool should not hurt,
unless the cryptographic hash function is insecure (in which case
you've probably got worse problems than chosen-timing attacks).


* Re: /dev/random in 2.4.6
  2001-08-19 15:13           ` Theodore Tso
  2001-08-19 15:33             ` Rob Radez
@ 2001-08-19 17:32             ` David Wagner
  2001-08-19 23:32             ` Oliver Xymoron
  2 siblings, 0 replies; 59+ messages in thread
From: David Wagner @ 2001-08-19 17:32 UTC (permalink / raw)
  To: linux-kernel

Theodore Tso  wrote:
>On Fri, Aug 17, 2001 at 03:05:39PM -0700, David Schwartz wrote:
>> 	This is a non-issue providing the entropy pool code correctly
>> estimates the amount of entropy. The Linux entropy code is written
>> so that there is no harm from putting fully known or partially known
>> numbers into the pool provided that the pool does not overestimate
>> the amount of entropy in those numbers.
>
>The problem is that by being able to perfectly observe packets on the
>LAN, you know a lot more about the numbers that are going in, and it's
>therefore extremely likely that the amount of entropy will be
>overestimated.

Right.  Therefore, it seems to me that the correct thing to do is to
add network timings into the pool using an entropy estimate of zero.


* Re: /dev/random in 2.4.6
  2001-08-19 15:13           ` Theodore Tso
  2001-08-19 15:33             ` Rob Radez
  2001-08-19 17:32             ` David Wagner
@ 2001-08-19 23:32             ` Oliver Xymoron
  2001-08-20  7:40               ` Helge Hafting
  2001-08-20 13:37               ` Alex Bligh - linux-kernel
  2 siblings, 2 replies; 59+ messages in thread
From: Oliver Xymoron @ 2001-08-19 23:32 UTC (permalink / raw)
  To: Theodore Tso; +Cc: David Schwartz, Andreas Dilger, linux-kernel

On Sun, 19 Aug 2001, Theodore Tso wrote:

> The bottom line is it really depends on how paranoid you want to be,
> and how much and how closely you want /dev/random to reliably replace
> a true hardware random number generator which relies on some physical
> process (by measuring quantum noise using a noise diode, or by
> measuring radioactive decay).  For most purposes, and against most
> adversaries, it's probably acceptable to depend on network interrupts,
> even if the entropy estimator may be overestimating things.

Can I propose an add_untrusted_randomness()? This would work identically
to add_timer_randomness but would pass batch_entropy_store() 0 as the
entropy estimate. The store would then be made to drop 0-entropy elements
on the floor if the queue was more than, say, half full. This would let us
take advantage of 'potential' entropy sources like network interrupts and
strengthen /dev/urandom without weakening /dev/random.

(Yes, I see dont_count_entropy, but it doesn't appear to be used, and
doesn't address flooding the queue with 0-entropy entries. I'd take it
out.)
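A sketch of what that could look like against 2.4's
drivers/char/random.c, assuming its batch_entropy_store(u32 a, u32 b,
int num) interface (the drop-when-half-full policy would live in the
store itself):

#include <linux/types.h>
#include <linux/sched.h>	/* jiffies */

extern void batch_entropy_store(u32 a, u32 b, int num);	/* random.c */

/*
 * Proposed helper (not in the stock tree): queue an event sample the
 * way add_timer_randomness() does, but claim zero bits of entropy, so
 * /dev/random's accounting is untouched and only the extra pool
 * mixing -- and hence /dev/urandom -- benefits.
 */
void add_untrusted_randomness(int irq)
{
	batch_entropy_store(irq, jiffies, 0);	/* 0 == no entropy credited */
}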

--
 "Love the dolphins," she advised him. "Write by W.A.S.T.E.."



* Re: /dev/random in 2.4.6
  2001-08-19 23:32             ` Oliver Xymoron
@ 2001-08-20  7:40               ` Helge Hafting
  2001-08-20 14:01                 ` Oliver Xymoron
  2001-08-20 13:37               ` Alex Bligh - linux-kernel
  1 sibling, 1 reply; 59+ messages in thread
From: Helge Hafting @ 2001-08-20  7:40 UTC (permalink / raw)
  To: Oliver Xymoron, Theodore Tso, David Wagner, linux-kernel

Oliver Xymoron wrote:
> 
> On Sun, 19 Aug 2001, Theodore Tso wrote:
> 
> > The bottom line is it really depends on how paranoid you want to be,
> > and how much and how closely you want /dev/random to reliably replace
> > a true hardware random number generator which relies on some physical
> > process (by measuring quantum noise using a noise diode, or by
> > measuring radioactive decay).  For most purposes, and against most
> > adversaries, it's probably acceptable to depend on network interrupts,
> > even if the entropy estimator may be overestimating things.
> 
> Can I propose an add_untrusted_randomness()? This would work identically
> to add_timer_randomness but would pass batch_entropy_store() 0 as the
> entropy estimate. The store would then be made to drop 0-entropy elements
> on the floor if the queue was more than, say, half full. This would let us
> take advantage of 'potential' entropy sources like network interrupts and
> strengthen /dev/urandom without weakening /dev/random.

It seems to me that it'd be better with an
add_interrupt_timing_randomness() function.

This one should modify the entropy pool, and add no more to the
entropy count than the internal interrupt timings allow,
i.e. assume that "the outside" observed the event that
triggered the interrupt.   How much is architecture dependent:

A machine with a clock-counter, like a pentium, can add
a number of bits from the counter, as the timing is
documented to be variable.  (There could be several interrupts
queued up, and the interrupt stacks and routines
may or may not be in the level-1 cache.)  Even a conservative approach
assuming a lot of worst cases would end up adding _some_.

A 386 may have to add 0 to the count, as it doesn't have a high-speed
timer.  People who have a network-only machine can go for
something better than a 386, though.

Helge Hafting


* Re: /dev/random in 2.4.6
  2001-08-19 23:32             ` Oliver Xymoron
  2001-08-20  7:40               ` Helge Hafting
@ 2001-08-20 13:37               ` Alex Bligh - linux-kernel
  2001-08-20 14:12                 ` Oliver Xymoron
  1 sibling, 1 reply; 59+ messages in thread
From: Alex Bligh - linux-kernel @ 2001-08-20 13:37 UTC (permalink / raw)
  To: Oliver Xymoron, Theodore Tso
  Cc: David Schwartz, Andreas Dilger, linux-kernel, Alex Bligh - linux-kernel

> Can I propose an add_untrusted_randomness()? This would work identically
> to add_timer_randomness but would pass batch_entropy_store() 0 as the
> entropy estimate. The store would then be made to drop 0-entropy elements
> on the floor if the queue was more than, say, half full. This would let us
> take advantage of 'potential' entropy sources like network interrupts and
> strengthen /dev/urandom without weakening /dev/random.

Am I correct in assuming that in the absence of other entropy sources, it
would use these (potentially inferior) sources, and /dev/random would
then not block? In which case fine, it solves my problem.

--
Alex Bligh


* Re: /dev/random in 2.4.6
  2001-08-20  7:40               ` Helge Hafting
@ 2001-08-20 14:01                 ` Oliver Xymoron
  0 siblings, 0 replies; 59+ messages in thread
From: Oliver Xymoron @ 2001-08-20 14:01 UTC (permalink / raw)
  To: Helge Hafting; +Cc: Theodore Tso, David Wagner, linux-kernel

On Mon, 20 Aug 2001, Helge Hafting wrote:

> Oliver Xymoron wrote:
> >
> > Can I propose an add_untrusted_randomness()? This would work identically
> > to add_timer_randomness but would pass batch_entropy_store() 0 as the
> > entropy estimate. The store would then be made to drop 0-entropy elements
> > on the floor if the queue was more than, say, half full. This would let us
> > take advantage of 'potential' entropy sources like network interrupts and
> > strengthen /dev/urandom without weakening /dev/random.
>
> It seems to me that it'd be better with an
> add_interrupt_timing_randomness() function.
>
> This one should modify the entropy pool, and add no more to the
> entropy count than the internal interrupt timing allow,
> i.e. assume that "the ouside" observed the event that
> trigged the interrupt.   How much is architecture dependent:
>
> A machine with a clock-counter, like a pentium, can add
> a number of bits from the counter, as the timing is
> documented variable.  (There could be several interrupts
> queued up, the interrupt stacks and routines
> may or may not be in level-1 cache)  Even a conservative approach
> assuming a lot of worst cases would end up adding _some_.

Until you've spent a while trying to mount a serious timing attack, I
think any arguments as to how much entropy is there are just a bunch of
handwaving. Interrupt generation on an otherwise quiescent machine is
extremely deterministic.

--
 "Love the dolphins," she advised him. "Write by W.A.S.T.E.."



* Re: /dev/random in 2.4.6
  2001-08-20 13:37               ` Alex Bligh - linux-kernel
@ 2001-08-20 14:12                 ` Oliver Xymoron
  2001-08-20 14:40                   ` Alex Bligh - linux-kernel
  0 siblings, 1 reply; 59+ messages in thread
From: Oliver Xymoron @ 2001-08-20 14:12 UTC (permalink / raw)
  To: Alex Bligh - linux-kernel
  Cc: Theodore Tso, David Schwartz, Andreas Dilger, linux-kernel

On Mon, 20 Aug 2001, Alex Bligh - linux-kernel wrote:

> > Can I propose an add_untrusted_randomness()? This would work identically
> > to add_timer_randomness but would pass batch_entropy_store() 0 as the
> > entropy estimate. The store would then be made to drop 0-entropy elements
> > on the floor if the queue was more than, say, half full. This would let us
> > take advantage of 'potential' entropy sources like network interrupts and
> > strengthen /dev/urandom without weakening /dev/random.
>
> Am I correct in assuming that in the absence of other entropy sources, it
> would use these (potentially inferior) sources, and /dev/random would
> then not block? In which case fine, it solves my problem.

No, /dev/random would always keep a conservative estimate of entropy.
Assuming that network entropy > 0, this would add more real (but
unaccounted) entropy to the pool, and if you agree with this assumption,
you would be able to take advantage of it by reading /dev/urandom.

The only case where it would make things worse is if you're getting so
many entropy events batched that the queue fills up and high entropy
events get discarded in favor of earlier low entropy ones.

--
 "Love the dolphins," she advised him. "Write by W.A.S.T.E.."



* Re: /dev/random in 2.4.6
  2001-08-20 14:12                 ` Oliver Xymoron
@ 2001-08-20 14:40                   ` Alex Bligh - linux-kernel
  2001-08-20 14:55                     ` Chris Friesen
                                       ` (3 more replies)
  0 siblings, 4 replies; 59+ messages in thread
From: Alex Bligh - linux-kernel @ 2001-08-20 14:40 UTC (permalink / raw)
  To: Oliver Xymoron, Alex Bligh - linux-kernel
  Cc: Theodore Tso, David Schwartz, Andreas Dilger, linux-kernel,
	Alex Bligh - linux-kernel

>> Am I correct in assuming that in the absence of other entropy sources, it
>> would use these (potentially inferior) sources, and /dev/random would
>> then not block? In which case fine, it solves my problem.
>
> No, /dev/random would always keep a conservative estimate of entropy.
> Assuming that network entropy > 0, this would add more real (but
> unaccounted) entropy to the pool, and if you agree with this assumption,
> you would be able to take advantage of it by reading /dev/urandom.

OK; well in which case it doesn't solve the problem. I assert there are
configurations where using the network to generate accounted-for entropy
is no worse than the other available options. In that case, if my entropy
pool is low, I want to wait long enough for it to fill up (i.e. have the
/dev/random blocking behaviour) before reading my random number. If your
interpretation of Ted's suggestion is correct, this is no better than
switching to /dev/urandom, which is considerably worse than the effect
of using Robert's patch. I thought what Ted was suggesting was only
accounting for network-IRQ derived entropy when the entropy pool was
mostly empty. This would mean that if there were other sources of entropy
about, the network entropy would not be accounted for (which sounds
reasonable, on the presumption that they were better quality).

An alternative approach to all of this, perhaps, would be to use extremely
finely grained timers (if they exist), in which case more bits of entropy
could perhaps be derived per sample, and perhaps sample them on
more operations. I don't know what the finest resolution timer we have
is, but I'd have thought people would be happier using ANY existing
mechanism (including network IRQs) if the timer resolution was (say)
1 nanosecond.

--
Alex Bligh


* Re: /dev/random in 2.4.6
  2001-08-20 14:40                   ` Alex Bligh - linux-kernel
@ 2001-08-20 14:55                     ` Chris Friesen
  2001-08-20 15:22                       ` Oliver Xymoron
                                         ` (3 more replies)
  2001-08-20 15:07                     ` Oliver Xymoron
                                       ` (2 subsequent siblings)
  3 siblings, 4 replies; 59+ messages in thread
From: Chris Friesen @ 2001-08-20 14:55 UTC (permalink / raw)
  To: linux-kernel
  Cc: Alex Bligh - linux-kernel, Oliver Xymoron, Theodore Tso,
	David Schwartz, Andreas Dilger

Alex Bligh - linux-kernel wrote:

> An alternative approach to all of this, perhaps, would be to use extremely
> finely grained timers (if they exist), in which case more bits of entropy
> could perhaps be derived per sample, and perhaps sample them on
> more operations. I don't know what the finest resolution timer we have
> is, but I'd have thought people would be happier using ANY existing
> mechanism (including network IRQs) if the timer resolution was (say)
> 1 nanosecond.

Why don't we also switch to a cryptographically secure algorithm for
/dev/urandom?  Then we could seed it with a value from /dev/random and we would
have a known number of cryptographically secure pseudorandom values.  Once we
reach the end of the PRNG cycle, we could re-seed it with another value from
/dev/random.

Would this be a valid solution, or am I totally off my rocker?

Chris


-- 
Chris Friesen                    | MailStop: 043/33/F10  
Nortel Networks                  | work: (613) 765-0557
3500 Carling Avenue              | fax:  (613) 765-2986
Nepean, ON K2H 8E9 Canada        | email: cfriesen@nortelnetworks.com


* Re: /dev/random in 2.4.6
  2001-08-20 14:40                   ` Alex Bligh - linux-kernel
  2001-08-20 14:55                     ` Chris Friesen
@ 2001-08-20 15:07                     ` Oliver Xymoron
  2001-08-21  8:33                       ` Alex Bligh - linux-kernel
  2001-08-20 16:00                     ` David Wagner
  2001-08-20 22:55                     ` D. Stimits
  3 siblings, 1 reply; 59+ messages in thread
From: Oliver Xymoron @ 2001-08-20 15:07 UTC (permalink / raw)
  To: Alex Bligh - linux-kernel
  Cc: Theodore Tso, David Schwartz, Andreas Dilger, linux-kernel

On Mon, 20 Aug 2001, Alex Bligh - linux-kernel wrote:

> >> Am I correct in assuming that in the absence of other entropy sources, it
> >> would use these (potentially inferior) sources, and /dev/random would
> >> then not block? In which case fine, it solves my problem.
> >
> > No, /dev/random would always keep a conservative estimate of entropy.
> > Assuming that network entropy > 0, this would add more real (but
> > unaccounted) entropy to the pool, and if you agree with this assumption,
> > you would be able to take advantage of it by reading /dev/urandom.
>
> OK; well in which case it doesn't solve the problem. I assert there are
> configurations where using the network to generate accounted-for entropy
> is no worse than the other available options. In that case, if my entropy
> pool is low, I want to wait long enough for it to fill up (i.e. have the
> /dev/random blocking behaviour) before reading my random number.

No you don't, that's your whole complaint to start with. You're clearly
entropy-limited. If you were willing to block waiting for enough entropy,
you'd be fine with the current scheme. Now you've just pushed the problem
out a little further.  Let's just assume that your application is some
sorta secure web server, generating session keys for SSL. For short
transactions, you'll need possibly hundreds of bits of entropy for a small
handful of packets. With packet queueing on your NIC, under load you might
only see a couple interrupts for an entire transaction.

Look, /dev/urandom _is_ cryptographically strong, and there's no attack
against it that's even vaguely practical. It's good enough, and we can
make it better. Overestimating entropy makes /dev/random no better in
theory than /dev/urandom, blocking or no. What's the point?

> An alternative approach to all of this, perhaps, would be to use extremely
> finely grained timers (if they exist), in which case more bits of entropy
> could perhaps be derived per sample, and perhaps sample them on
> more operations. I don't know what the finest resolution timer we have
> is, but I'd have thought people would be happier using ANY existing
> mechanism (including network IRQs) if the timer resolution was (say)
> 1 nanosecond.

We can use cycle counters where they exist. They're already used on x86
where available. I suspect that particular code could be made more
generic.
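
For the curious, the x86 read is just the time-stamp counter - a minimal
sketch of the sort of helper involved (the kernel wraps this as
get_cycles(), if I remember right):

  /* Read the Pentium TSC: clock cycles since reset. */
  static inline unsigned long long rdtsc(void)
  {
          unsigned int lo, hi;

          __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
          return ((unsigned long long)hi << 32) | lo;
  }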

--
 "Love the dolphins," she advised him. "Write by W.A.S.T.E.."


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: /dev/random in 2.4.6
  2001-08-20 14:55                     ` Chris Friesen
@ 2001-08-20 15:22                       ` Oliver Xymoron
  2001-08-20 15:25                       ` Doug McNaught
                                         ` (2 subsequent siblings)
  3 siblings, 0 replies; 59+ messages in thread
From: Oliver Xymoron @ 2001-08-20 15:22 UTC (permalink / raw)
  To: Chris Friesen
  Cc: linux-kernel, Alex Bligh - linux-kernel, Theodore Tso,
	David Schwartz, Andreas Dilger

On Mon, 20 Aug 2001, Chris Friesen wrote:

> Alex Bligh - linux-kernel wrote:
>
> > An alternative approach to all of this, perhaps, would be to use extremely
> > finely grained timers (if they exist), in which case more bits of entropy
> > could perhaps be derived per sample, and perhaps sample them on
> > more operations. I don't know what the finest resolution timer we have
> > is, but I'd have thought people would be happier using ANY existing
> > mechanism (including network IRQs) if the timer resolution was (say)
> > 1 nanosecond.
>
> Why don't we also switch to a cryptographically secure algorithm for
> /dev/urandom?  Then we could seed it with a value from /dev/random and we would
> have a known number of cryptographically secure pseudorandom values.  Once we
> reach the end of the prng cycle, we could re-seed it with another value from
> /dev/random.

/dev/random and /dev/urandom use the same SHA1 hashing algorithm and there
are no known practical attacks against them. What makes /dev/random
different is that it keeps track of how much random information has been
put in vs how much is taken out and never lets you go below zero. This
means that the data coming out (already unguessable in practice) is now
unguessable even in theory[1].
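
You can watch that bookkeeping from userspace, incidentally - a minimal
sketch, assuming a 2.4-ish kernel whose /dev/random supports the
RNDGETENTCNT ioctl from <linux/random.h>:

  #include <stdio.h>
  #include <fcntl.h>
  #include <unistd.h>
  #include <sys/ioctl.h>
  #include <linux/random.h>

  int main(void)
  {
          int fd = open("/dev/random", O_RDONLY);
          int bits;

          if (fd < 0 || ioctl(fd, RNDGETENTCNT, &bits) < 0) {
                  perror("RNDGETENTCNT");
                  return 1;
          }
          printf("pool entropy estimate: %d bits\n", bits);
          close(fd);
          return 0;
  }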

This whole discussion is pretty silly. /dev/urandom is already at least as
strong as any encryption scheme you're likely to use with it. To break it,
you're going to have to break SHA over a 512-bit pool with a nasty round
function, never mind all the other complexities. It's almost certainly
more practical for someone to brute force your 128-bit SSL session keys or
factor your RSA keys. Or beat your root password out of you with a rubber
hose.

[1] Assuming its entropy estimates are conservative enough.
--
 "Love the dolphins," she advised him. "Write by W.A.S.T.E.."


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: /dev/random in 2.4.6
  2001-08-20 14:55                     ` Chris Friesen
  2001-08-20 15:22                       ` Oliver Xymoron
@ 2001-08-20 15:25                       ` Doug McNaught
  2001-08-20 15:42                         ` Chris Friesen
  2001-08-20 16:01                       ` David Wagner
  2001-08-20 19:30                       ` Gérard Roudier
  3 siblings, 1 reply; 59+ messages in thread
From: Doug McNaught @ 2001-08-20 15:25 UTC (permalink / raw)
  To: Chris Friesen; +Cc: linux-kernel

Chris Friesen <cfriesen@nortelnetworks.com> writes:

> Why don't we also switch to a cryptographically secure algorithm for
> /dev/urandom? 

It IS cryptographically secure.  Have you ever read the manpage?

-Doug
-- 
Free Dmitry Sklyarov! 
http://www.freesklyarov.org/ 

We will return to our regularly scheduled signature shortly.

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: /dev/random in 2.4.6
  2001-08-20 15:25                       ` Doug McNaught
@ 2001-08-20 15:42                         ` Chris Friesen
  2001-08-21 10:03                           ` Steve Hill
  0 siblings, 1 reply; 59+ messages in thread
From: Chris Friesen @ 2001-08-20 15:42 UTC (permalink / raw)
  To: Doug McNaught; +Cc: linux-kernel

Doug McNaught wrote:
> 
> Chris Friesen <cfriesen@nortelnetworks.com> writes:
> 
> > Why don't we also switch to a cryptographically secure algorithm for
> > /dev/urandom?
> 
> It IS cryptographically secure.  Have you ever read the manpage?

Oops, my bad.  I got errors doing man on urandom, but neglected to try a man on
random.

The main reason for my comment was the suggestion by Steve Hill that
/dev/urandom was NOT cryptographically secure.  Re-reading it, his comment was
in the context of generating cryptographic keys, so perhaps I misunderstood what
he meant.

Chris


-- 
Chris Friesen                    | MailStop: 043/33/F10  
Nortel Networks                  | work: (613) 765-0557
3500 Carling Avenue              | fax:  (613) 765-2986
Nepean, ON K2H 8E9 Canada        | email: cfriesen@nortelnetworks.com

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: /dev/random in 2.4.6
  2001-08-20 14:40                   ` Alex Bligh - linux-kernel
  2001-08-20 14:55                     ` Chris Friesen
  2001-08-20 15:07                     ` Oliver Xymoron
@ 2001-08-20 16:00                     ` David Wagner
  2001-08-21  1:20                       ` Theodore Tso
  2001-08-21  8:39                       ` Alex Bligh - linux-kernel
  2001-08-20 22:55                     ` D. Stimits
  3 siblings, 2 replies; 59+ messages in thread
From: David Wagner @ 2001-08-20 16:00 UTC (permalink / raw)
  To: linux-kernel

Alex Bligh - linux-kernel  wrote:
>OK; well in which case it doesn't solve the problem.

I don't see why not.  Apply this change, and use /dev/urandom.
You'll never block, and the outputs should be thoroughly unpredictable.
What's missing?

(I don't see why so many people use /dev/random rather than /dev/urandom.
I harbor suspicions that this is a misunderstanding about the properties
of pseudorandom number generation.)

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: /dev/random in 2.4.6
  2001-08-20 14:55                     ` Chris Friesen
  2001-08-20 15:22                       ` Oliver Xymoron
  2001-08-20 15:25                       ` Doug McNaught
@ 2001-08-20 16:01                       ` David Wagner
  2001-08-20 19:30                       ` Gérard Roudier
  3 siblings, 0 replies; 59+ messages in thread
From: David Wagner @ 2001-08-20 16:01 UTC (permalink / raw)
  To: linux-kernel

Chris Friesen  wrote:
>Why don't we also switch to a cryptographically secure algorithm for
>/dev/urandom?

/dev/urandom already is using a cryptographically secure algorithm.
Everything you want is already in place.

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: /dev/random in 2.4.6
  2001-08-20 14:55                     ` Chris Friesen
                                         ` (2 preceding siblings ...)
  2001-08-20 16:01                       ` David Wagner
@ 2001-08-20 19:30                       ` Gérard Roudier
  3 siblings, 0 replies; 59+ messages in thread
From: Gérard Roudier @ 2001-08-20 19:30 UTC (permalink / raw)
  To: Chris Friesen
  Cc: linux-kernel, Alex Bligh - linux-kernel, Oliver Xymoron,
	Theodore Tso, David Schwartz, Andreas Dilger



On Mon, 20 Aug 2001, Chris Friesen wrote:

> Alex Bligh - linux-kernel wrote:
>
> > An alternative approach to all of this, perhaps, would be to use extremely
> > finely grained timers (if they exist), in which case more bits of entropy
> > could perhaps be derived per sample, and perhaps sample them on
> > more operations. I don't know what the finest resolution timer we have
> > is, but I'd have thought people would be happier using ANY existing
> > mechanism (including network IRQs) if the timer resolution was (say)
> > 1 nanosecond.
>
> Why don't we also switch to a cryptographically secure algorithm for
> /dev/urandom?  Then we could seed it with a value from /dev/random and we would
> have a known number of cryptographically secure pseudorandom values.  Once we
> reach the end of the prng cycle, we could re-seed it with another value from
> /dev/random.
>
> Would this be a valid solution, or am I totally off my rocker?

The latter, unless you only want to protect against lame attackers :-)

Given the knowledge of the seed and the algorithm used, everything gets
fully deterministic for an attacker -> entropy _zero_.

For example, let an attacker observe enough of your magic random data in
order to guess the algorithm, and a whole prng cycle will only contain as
many random bits as the number of bits of the seed value for this
attacker.

> Chris

  Gérard.



^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: /dev/random in 2.4.6
  2001-08-20 14:40                   ` Alex Bligh - linux-kernel
                                       ` (2 preceding siblings ...)
  2001-08-20 16:00                     ` David Wagner
@ 2001-08-20 22:55                     ` D. Stimits
  2001-08-21  1:06                       ` David Schwartz
  3 siblings, 1 reply; 59+ messages in thread
From: D. Stimits @ 2001-08-20 22:55 UTC (permalink / raw)
  Cc: linux-kernel

Alex Bligh - linux-kernel wrote:
...
> An alternative approach to all of this, perhaps, would be to use extremely
> finely grained timers (if they exist), in which case more bits of entropy
> could perhaps be derived per sample, and perhaps sample them on
> more operations. I don't know what the finest resolution timer we have
> is, but I'd have thought people would be happier using ANY existing
> mechanism (including network IRQs) if the timer resolution was (say)
> 1 nanosecond.

I'm probably wording this poorly, but I'm going to call a device that
provides irqs, and which is probably not predictable except under a
severe attack, a pseudo-entropy source, and other devices which are
presumed to be non-observable or otherwise unpredictable an entropy
source. Examples, eth0 or a modem as pseudo-entropy source, and hard
drive, keyboard, or mouse activity as entropy source (although if you
use wireless keyboard or mouse, more likely they'd have to be
reclassified as pseudo-entropy sources).

Why is it required to use only the time between irq's of a single device
for start/stop times (time delta measurement)? If the time period were
instead calculated not on the time between irq of a single device, but
instead start and stop times were based on different device irq's, then
many pseudo entropy sources could become full entropy sources...to
accurately/precisely snoop eth0 traffic and determine its irq would no
longer be helpful if the time involved the starting of a timer from
eth0, but the stop (and thus delta) was derived from something else like
hard drive activity. Force the attacker to monitor more than one irq,
plus require the attacker to know which irq device corresponds to the
stop irq of a particular timer.

A variation on the theme would be to rotate which entropy source is used
as the stop event whenever a pseudo entropy source starts a timer. E.g.,
eth0 irq starts a timer, and the hard drive is set to stop the timer on
eth0 (the same hard drive timer could also start or stop a hard drive
time delta, it's a separate issue) the first time, but using this irq
causes eth0 stop events to now be bound to mouse events instead. In that
example, the attacker would have to monitor both eth0 irq, and hard
drive irq, as well as knowing that the hard drive is the stop event; the
first time it is used, the attacker would then have to be able to
monitor the eth0 irq, and the mouse irq, along with knowledge that the
mouse irq is now the stop event (and the first time a mouse event is
used as a stop event to the eth0 timer, a new stop event device would be
rotated in).

Given a mouseless/keyboardless/headless machine, but with
several pseudo entropy sources, one could still have some degree of
confidence even when the attacker controls and monitors all pseudo
entropy source irq's, based on the notion that the attacker won't know
*which* irq corresponds to a start or stop event (knowing irq no longer
provides a direct time measurement). If pseudo entropy sources used
trusted entropy sources for stop events, this would be preferable, but a
completely pseudo entropy set of sources could become much more trusted
if the exact rotation scheme in use at each irq was not known to the
attacker. Would it be practical to segregate begin/end irq timer events
to different devices, rather than having begin/end events based solely on
a single device?

D. Stimits, stimits@idcomm.com

> 
> --
> Alex Bligh
> -
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/

^ permalink raw reply	[flat|nested] 59+ messages in thread

* RE: /dev/random in 2.4.6
  2001-08-20 22:55                     ` D. Stimits
@ 2001-08-21  1:06                       ` David Schwartz
  0 siblings, 0 replies; 59+ messages in thread
From: David Schwartz @ 2001-08-21  1:06 UTC (permalink / raw)
  To: stimits; +Cc: linux-kernel


> Why is it required to use only the time between irq's of a single device
> for start/stop times (time delta measurement)? If the time period were
> instead calculated not on the time between irq of a single device, but
> instead start and stop times were based on different device irq's, then
> many pseudo entropy sources could become full entropy sources...to
> accurately/precisely snoop eth0 traffic and determine it's irq would no
> longer be helpful if the time involved the starting of a timer from
> eth0, but the stop (and thus delta) was derived from something else like
> hard drive activity. Force the attacker to monitor more than one irq,
> plus require the attacker to know which irq device corresponds to the
> stop irq of a particular timer.
>
> A variation on the theme would be to rotate which entropy source is used
> as the stop event whenever a pseudo entropy source starts a timer. E.G.,
> eth0 irq starts a timer, and the hard drive is set to stop the timer on

	This makes no difference at all. It comes down to, ultimately, the
difference between adding 'X' and 'Y' to an entropy pool or 'X' and 'X+Y'.
The set 'X','Y' has the same amount of entropy as the set 'X','X+Y'. Either
the time at which events occur contains entropy or it doesn't. No amount of
math can increase the amount of entropy present in those times. Sorry.
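
	(In symbols: the map (x, y) -> (x, x+y) is invertible, so
H(X, Y) = H(X, X+Y). Shuffling which device stops which timer only
relabels the samples; it cannot create new entropy.)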

	DS


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: /dev/random in 2.4.6
  2001-08-20 16:00                     ` David Wagner
@ 2001-08-21  1:20                       ` Theodore Tso
  2001-08-21  8:39                       ` Alex Bligh - linux-kernel
  1 sibling, 0 replies; 59+ messages in thread
From: Theodore Tso @ 2001-08-21  1:20 UTC (permalink / raw)
  To: David Wagner; +Cc: linux-kernel

On Mon, Aug 20, 2001 at 04:00:30PM +0000, David Wagner wrote:
> 
> I don't see why not.  Apply this change, and use /dev/urandom.
> You'll never block, and the outputs should be thoroughly unpredictable.
> What's missing?

Absolutely.  And if /dev/urandom is not unpredictable, that means
someone has broken SHA-1 in a pretty complete way, in which case it's
very likely that most of the users of the randomness are completely
screwed, since they probably depend on SHA-1 (or some other MAC which
is probably in pretty major danger if someone has indeed managed to
crack SHA-1).

> (I don't see why so many people use /dev/random rather than /dev/urandom.
> I harbor suspicions that this is a misunderstanding about the properties
> of pseudorandom number generation.)

Probably.  /dev/random is probably appropriate when you're trying to
get randomness for a long-term RSA/DSA key, but for session key
generation which is what most server boxes will be doing, /dev/urandom
will be just fine.

Of course, then you have the crazies who are doing Monte Carlo
simulations, and then send me mail asking why using /dev/urandom is so
slow, and how they can reseed /dev/urandom so they can get
repeatable, measurable results on their Monte Carlo simulations....

					- Ted

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: /dev/random in 2.4.6
  2001-08-20 15:07                     ` Oliver Xymoron
@ 2001-08-21  8:33                       ` Alex Bligh - linux-kernel
  2001-08-21 16:13                         ` Oliver Xymoron
  2001-08-21 18:19                         ` David Wagner
  0 siblings, 2 replies; 59+ messages in thread
From: Alex Bligh - linux-kernel @ 2001-08-21  8:33 UTC (permalink / raw)
  To: Oliver Xymoron, Alex Bligh - linux-kernel
  Cc: Theodore Tso, David Schwartz, Andreas Dilger, linux-kernel,
	Alex Bligh - linux-kernel

Oliver,

--On Monday, 20 August, 2001 10:07 AM -0500 Oliver Xymoron 
<oxymoron@waste.org> wrote:

>> OK; well in which case it doesn't solve the problem. I assert there are
>> configurations where using the network to generate accounted for entropy
>> is no worse than the other available options. In that case, if my entropy
>> pool is low, I want to wait long enough for it to fill up (i.e. have the
>> /dev/random blocking behaviour) before reading my random number.
>
> No you don't, that's your whole complaint to start with. You're clearly
> entropy-limited. If you were willing to block waiting for enough entropy,
> you'd be fine with the current scheme.

Yes I /do/. I want to wait for sufficient entropy. I count inter-IRQ
timing from network as a source of entropy. IE if the entropy pool
is exhausted, I'm prepared to, and desire to, block, until a few packets
have arrived. However, I do not wish to block indefinitely (actually
about 3 minutes as there's a little periodic block I/O) which is what
happens if I do not count network IRQ timing as an entropy source
(current /dev/random result, without Robert's patch, on most NICs).
Equally, I do not want to read /dev/urandom (and not block)
which, in an absence of entropy, is (arguably) cryptographically weaker
(see below).

> Now you've just pushed the problem
> out a little further.  Let's just assume that your application is some
> sorta secure web server, generating session keys for SSL. For short
> transactions, you'll need possibly hundreds of bits of entropy for a small
> handful of packets. With packet queueing on your NIC, under load you might
> only see a couple interrupts for an entire transaction.

Well it's mostly doing SMTP/IMAP over SSL, and on port 25, yes.

Measuring it, there are at least 16 network IRQs for the minimum
SSL transaction. That generates 16x12 = 192 bits of
entropy (each IRQ contributes 12 bits). However, that's aside from the
2 to 3 packets a second of other stuff arriving on the server (unencrypted
port 25, ARP broadcast, VRRP, all the rest of the crap on the LAN).

It's a point worth making that I'm not talking about theory here. It's
there, tested, both ways, in a real environment. Opening an IMAP folder
over SSL stalls 3 times, for between 30 seconds and 3 minutes *EACH*
from an idle machine without Robert's patch (well I just patched eepro).
With the patch, no stalls > about 3 secs, which are far rarer (quite
acceptable).

> Look, /dev/urandom _is_ cryptographically strong, and there's no attack
> against it that's even vaguely practical. It's good enough, and we can
> make it better. Overestimating entropy makes /dev/random no better in
> theory than /dev/urandom, blocking or no. What's the point?

You can't have your cake and eat it.

The point is simple: We say to authors of cryptographic applications
(ssl, ssh etc.) that they should use /dev/random, because /dev/urandom
is not cryptographically strong enough. Let's say I buy into this statement
(as if I don't, as one other poster has mentioned, we should just scrap
the blocking behaviour entirely). Well, I'd like /dev/random to be
functional in a headless environment. Perhaps not /quite/ as 'wonderful'
as it is in a non-headless environment, but better than /dev/urandom.

Saying 'well if you are writing a cryptographic application you should
use /dev/random, as this is the best source, but by the way, this means
your app will be dysfunctional on a headless machine, and we aren't
going to give you a config option to fix it' is the wrong approach
to OS design.

>> An alternative approach to all of this, perhaps, would be to use
>> extremely finely grained timers (if they exist), in which case more bits
>> of entropy could perhaps be derived per sample, and perhaps sample them
>> on more operations. I don't know what the finest resolution timer we have
>> is, but I'd have thought people would be happier using ANY existing
>> mechanism (including network IRQs) if the timer resolution was (say) 1
>> nanosecond.
>
> We can use cycle counters where they exist. They're already used on x86
> where available. I suspect that particular code could be made more
> generic.

Would you accept that, if you are actually counting CPU clock cycles,
then in practice timing between network IRQs is, in a substantial number
of configurations, not externally observable without access or equipment
that would let you observe the innards of the machine too?

--
Alex Bligh

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: /dev/random in 2.4.6
  2001-08-20 16:00                     ` David Wagner
  2001-08-21  1:20                       ` Theodore Tso
@ 2001-08-21  8:39                       ` Alex Bligh - linux-kernel
  2001-08-21 10:46                         ` Marco Colombo
  2001-08-21 18:25                         ` David Wagner
  1 sibling, 2 replies; 59+ messages in thread
From: Alex Bligh - linux-kernel @ 2001-08-21  8:39 UTC (permalink / raw)
  To: David Wagner, linux-kernel; +Cc: Alex Bligh - linux-kernel


> Alex Bligh - linux-kernel  wrote:
>> OK; well in which case it doesn't solve the problem.
>
> I don't see why not.  Apply this change, and use /dev/urandom.
> You'll never block, and the outputs should be thoroughly unpredictable.
> What's missing?

See message to Oliver - para on waiting for sufficient entropy;
/dev/urandom (before that arrives) is just as theoretically
vulnerable as before.

> (I don't see why so many people use /dev/random rather than /dev/urandom.
> I harbor suspicions that this is a misunderstanding about the properties
> of pseudorandom number generation.)

Things like (from the manpage):

       When read, the /dev/random device will only return  random
       bytes  within the estimated number of bits of noise in the
       entropy pool.  /dev/random should  be  suitable  for  uses
       that  need  very  high quality randomness such as one-time
       pad or key generation.  When the entropy  pool  is  empty,
       reads  to /dev/random will block until additional environ-
       mental noise is gathered.

       When read, /dev/urandom device will return as  many  bytes
       as are requested.  As a result, if there is not sufficient
       entropy in the entropy pool, the returned values are theo-
       retically  vulnerable  to  a  cryptographic  attack on the
       algorithms used by the driver.  Knowledge  of  how  to  do
       this is not available in the current non-classified liter-
       ature, but it  is  theoretically  possible  that  such  an
       attack  may  exist.  If this is a concern in your applica-
       tion, use /dev/random instead.

So writers of ssh, ssl etc. all go use /dev/random, which is not
'theoretically vulnerable to a cryptographic attack'. This means,
in practice, that they are dysfunctional on some headless systems
without Robert's patch. Robert's patch may make them slightly
less 'perfect', but not as imperfect as using /dev/urandom instead.
Using /dev/urandom has another problem: Do we expect all applications
now to have a compile option 'Are you using this on a headless
system in which case you might well want to use /dev/urandom
instead of /dev/random?'. By putting a config option in the kernel,
this can be set ONCE and only degrade behaviour to the minimal
amount possible.


--
Alex Bligh

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: /dev/random in 2.4.6
  2001-08-20 15:42                         ` Chris Friesen
@ 2001-08-21 10:03                           ` Steve Hill
  2001-08-21 18:14                             ` David Wagner
  0 siblings, 1 reply; 59+ messages in thread
From: Steve Hill @ 2001-08-21 10:03 UTC (permalink / raw)
  To: Chris Friesen; +Cc: Doug McNaught, linux-kernel

On Mon, 20 Aug 2001, Chris Friesen wrote:

> The main reason for my comment was the suggestion by Steve Hill that
> /dev/urandom was NOT cryptographically secure.  Re-reading it, his comment was
> in the context of generating cryptographic keys, so perhaps I misunderstood what
> he meant.

Sorry - I'm not a cryptography expert, just your average Linux hacker :)

I was under the impression that urandom was considered insecure (which is why
it is not used by ssh, FreeS/WAN, etc), and so was very dubious about just
linking /dev/random to /dev/urandom.  However I still had the problem that
being a headless system, there wasn't much entropy (something which had
never been a problem under Cobalt's kernels - presumably they had kludged
/dev/random on the kernel-side).

After various suggestions, and a correction (I now understand urandom to
be secure despite the very theoretical vulnerability), I opted to get
extra entropy from the eepro100 and natsemi network devices.

Anyway, I would consider the idea of generating entropy purely from
the local console (keyboard / mouse) to be rather flawed - think
how many headless linux servers there are that are running some kind of
cryptographic software.  Maybe a compile-time option in the kernel to
change the quality of /dev/random would be an idea so that the person
compiling the kernel can decide on the level of the tradeoff between the
quality and speed/amount of randomness...  Just a thought anyway.
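
Concretely, I imagine each driver doing something like this - the config
symbol and the surrounding fragment are invented for illustration, but
SA_SAMPLE_RANDOM is the real request_irq flag that (as I understand it)
Robert's patch turns on for the NICs:

  /* hypothetical fragment from a NIC driver's open routine */
  #ifdef CONFIG_NET_RANDOM        /* invented config symbol */
  #define NET_IRQ_FLAGS (SA_SHIRQ | SA_SAMPLE_RANDOM)
  #else
  #define NET_IRQ_FLAGS SA_SHIRQ
  #endif

          if (request_irq(dev->irq, &net_interrupt, NET_IRQ_FLAGS,
                          dev->name, dev))
                  return -EBUSY;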

-- 

- Steve Hill
System Administrator         Email: steve@navaho.co.uk
Navaho Technologies Ltd.       Tel: +44-870-7034015

        ... Alcohol and calculus don't mix - Don't drink and derive! ...



^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: /dev/random in 2.4.6
  2001-08-21  8:39                       ` Alex Bligh - linux-kernel
@ 2001-08-21 10:46                         ` Marco Colombo
  2001-08-21 12:40                           ` Alex Bligh - linux-kernel
                                             ` (2 more replies)
  2001-08-21 18:25                         ` David Wagner
  1 sibling, 3 replies; 59+ messages in thread
From: Marco Colombo @ 2001-08-21 10:46 UTC (permalink / raw)
  To: Alex Bligh - linux-kernel; +Cc: David Wagner, linux-kernel

On Tue, 21 Aug 2001, Alex Bligh - linux-kernel wrote:

> So writers of ssh, ssl etc. all go use /dev/random, which is not
> 'theoretically vulnerable to a cryptographic attack'. This means,
> in practice, that they are dysfunctional on some headless systems
> without Robert's patch. Robert's patch may make them slightly
> less 'perfect', but not as imperfect as using /dev/urandom instead.
> Using /dev/urandom has another problem: Do we expect all applications
> now to have a compile option 'Are you using this on a headless
> system in which case you might well want to use /dev/urandom
> instead of /dev/random?'. By putting a config option in the kernel,
> this can be set ONCE and only degrade behaviour to the minimal
> amount possible.

A little question: I used to believe that crypto software requires
a strong random source to generate key pairs, but this requirement is
not true for session keys.  You don't usually generate a key pair on
a remote system, of course, so that's not a big issue. On low-entropy
systems (headless servers) is /dev/urandom strong enough to generate
session keys? I guess the little entropy collected by the system is
enough to feed the crypto secure PRNG for /dev/urandom, is it correct?

.TM.
-- 
      ____/  ____/   /
     /      /       /			Marco Colombo
    ___/  ___  /   /		      Technical Manager
   /          /   /			 ESI s.r.l.
 _____/ _____/  _/		       Colombo@ESI.it


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: /dev/random in 2.4.6
  2001-08-21 10:46                         ` Marco Colombo
@ 2001-08-21 12:40                           ` Alex Bligh - linux-kernel
  2001-08-21 17:06                           ` cfs+linux-kernel
  2001-08-21 18:27                           ` David Wagner
  2 siblings, 0 replies; 59+ messages in thread
From: Alex Bligh - linux-kernel @ 2001-08-21 12:40 UTC (permalink / raw)
  To: Marco Colombo, Alex Bligh - linux-kernel
  Cc: David Wagner, linux-kernel, Alex Bligh - linux-kernel

> A little question: I used to believe that crypto software requires
> strong random source to generate key pairs, but this requirement in
> not true for session keys.  You don't usually generate a key pair on
> a remote system, of course, so that's not a big issue. On low-entropy
> systems (headless servers) is /dev/urandom strong enough to generate
> session keys? I guess the little entropy collected by the system is
> enough to feed the crypto secure PRNG for /dev/urandom, is it correct?

I /think/ the answer is 'it depends'.

a) If 'low entropy' meant 'no entropy', then the seed would be the
   same booting one system as on a black-hat identical system.

b) If you can obtain (one way or another) a session key, you can hijack
   that session. Whether or not you can then intercept other sessions
   depends in part on what that session is (if, for instance, it is a root
   ssh session...). If you reduce the search space for session keys, you
   make being able to hijack a session considerably easier.

--
Alex Bligh

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: /dev/random in 2.4.6
  2001-08-21  8:33                       ` Alex Bligh - linux-kernel
@ 2001-08-21 16:13                         ` Oliver Xymoron
  2001-08-21 17:44                           ` Alex Bligh - linux-kernel
  2001-08-21 18:19                         ` David Wagner
  1 sibling, 1 reply; 59+ messages in thread
From: Oliver Xymoron @ 2001-08-21 16:13 UTC (permalink / raw)
  To: Alex Bligh - linux-kernel
  Cc: Theodore Tso, David Schwartz, Andreas Dilger, linux-kernel

On Tue, 21 Aug 2001, Alex Bligh - linux-kernel wrote:

> > No you don't, that's your whole complaint to start with. You're clearly
> > entropy-limited. If you were willing to block waiting for enough entropy,
> > you'd be fine with the current scheme.
>
> Yes I /do/. I want to wait for sufficient entropy. I count inter-IRQ
> timing from network as a source of entropy. IE if the entropy pool
> is exhausted, I'm prepared to, and desire to, block, until a few packets
> have arrived. However, I do not wish to block indefinitely (actually
> about 3 minutes as there's a little periodic block I/O) which is what
> happens if I do not count network IRQ timing as an entropy source
> (current /dev/random result, without Robert's patch, on most NICs).
> Equally, I do not want want to read /dev/urandom (and not block)
> which, in an absence of entropy, is (arguably) cryptographically weaker
> (see below).

You're throwing the baby out with the bathwater. If you overestimate the
entropy added by even a small amount, /dev/random is no better than
/dev/urandom.

Imagine your attacker has broken into your firewall and can observe all
packets on your network at high accuracy. She's also got an exact
duplicate of your operational setup, software, hardware, switches,
routers, so she's got a pretty good idea of what your interrupt latency
looks like, etc., and she can even try to simulate the loads you're
experiencing by replaying packets. She's also a brilliant statistician. So
on each network interrupt, when you're adding, say, 8 bits of entropy to
the count, she's able to reliably guess 1 of them. Really she only needs to guess
one bit right more than half the time - so long as she can gather slightly
more information than we think she can. Since your app is occasionally
blocking waiting for entropy, you're giving it out faster than you're
taking it in. Assuming your attacker has broken the hash function (SHA and
then some), it's just a short matter of time before she has enough
information to correlate her guesses and what you've sent to figure out
exactly what's in the pool and start guessing your session keys. Assuming
she hasn't broken SHA, then /dev/urandom is just as good.

So the whole point of /dev/random is to be more secure than just the hash
function and mixing. Which do you think is easier, breaking the hash
function or breaking into your network operations center and walking off
with your server? If your NOC is so secure, then you can probably afford a
hardware entropy source..

Read Schneier's essay on attack trees if the above argument doesn't make
sense to you:

 http://www.ddj.com/articles/1999/9912/9912a/9912a.htm?topic=security

> Measuring it there at least 16 network IRQs for the minimum
> SSL transaction. That generates 16x12 = 192 bits of
> entropy (each IRQ contributes 12 bits).

12 bits is a maximum and it's based on the apparent randomness of the
interrupt timing deltas. If your attacker is impatient, she can just ping
you at pseudo-random intervals tuned to clean your pool more rapidly.
You're also forgetting that TCP initial sequence numbers come from the
pool to prevent connection spoofing - more entropy lost.

> The point is simple: We say to authors of cryptographic applications
> (ssl, ssh etc.) that they should use /dev/random, because /dev/urandom
> is not cryptographically strong enough.

Who ever said that? /dev/random is a cute exercise in paranoia, not
practicality. It's nice for seeding personal GPG keys and ssh identities,
but was never intended for bulk cryptography. It's also nice for keys
you're going to reuse because if your attacker monitors all your traffic
and holds onto it for 50 years, and SHA happens to get broken before El
Gamal, your GPG key is still safe.

--
 "Love the dolphins," she advised him. "Write by W.A.S.T.E.."


^ permalink raw reply	[flat|nested] 59+ messages in thread

* RE: /dev/random in 2.4.6
  2001-08-21 10:46                         ` Marco Colombo
  2001-08-21 12:40                           ` Alex Bligh - linux-kernel
@ 2001-08-21 17:06                           ` cfs+linux-kernel
  2001-08-21 17:48                             ` Alex Bligh - linux-kernel
  2001-08-21 18:27                           ` David Wagner
  2 siblings, 1 reply; 59+ messages in thread
From: cfs+linux-kernel @ 2001-08-21 17:06 UTC (permalink / raw)
  To: 'Marco Colombo', 'Alex Bligh - linux-kernel'
  Cc: 'David Wagner', linux-kernel

> -----Original Message-----
> From: linux-kernel-owner@vger.kernel.org 
> [mailto:linux-kernel-owner@vger.kernel.org] On Behalf Of Marco Colombo
> Sent: Tuesday, 21 August 2001 03:46
> To: Alex Bligh - linux-kernel
> Cc: David Wagner; linux-kernel@vger.kernel.org
> Subject: Re: /dev/random in 2.4.6
> 
> A little question: I used to believe that crypto software 
> requires a strong random source to generate key pairs, but this 
> requirement is not true for session keys.  You don't usually 
> generate a key pair on a remote system, of course, so that's 
> not a big issue. On low-entropy systems (headless servers) is 
> /dev/urandom strong enough to generate session keys? I guess 
> the little entropy collected by the system is enough to feed 
> the crypto secure PRNG for /dev/urandom, is it correct?

I dunno about you, but I want good random for session keys too!  You can
still capture network traffic and decrypt at your leisure if you can
determine what "random" number was used in making the session key.

cfs


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: /dev/random in 2.4.6
  2001-08-21 16:13                         ` Oliver Xymoron
@ 2001-08-21 17:44                           ` Alex Bligh - linux-kernel
  2001-08-21 18:24                             ` David Wagner
  2001-08-21 19:04                             ` Oliver Xymoron
  0 siblings, 2 replies; 59+ messages in thread
From: Alex Bligh - linux-kernel @ 2001-08-21 17:44 UTC (permalink / raw)
  To: Oliver Xymoron, Alex Bligh - linux-kernel
  Cc: Theodore Tso, David Schwartz, Andreas Dilger, linux-kernel,
	Alex Bligh - linux-kernel

> You're throwing the baby out with the bathwater. If you overestimate the
> entropy added by even a small amount, /dev/random is no better than
> /dev/urandom.

I guess the option I'm asking for is really 'Say [Y] if you think
network IRQ timing contributes more than 0 bits of entropy'.

> Imagine your attacker has broken into your firewall and can observe all
> packets on your network at high accuracy. She's also got an exact
> duplicate of your operational setup, software, hardware, switches,
> routers, so she's got a pretty good idea of what your interrupt latency
> looks like, etc., and she can even try to simulate the loads you're
> experiencing by replaying packets. She's also a brilliant statistician. So
> on each network interrupt, when you're adding, say, 8 bits of entropy to
> the count, she's able to reliably guess 1 of them. Really she only needs to guess
> one bit right more than half the time - so long as she can gather slightly
> more information than we think she can. Since your app is occasionally
> blocking waiting for entropy, you're giving it out faster than you're
> taking it in. Assuming your attacker has broken the hash function (SHA and
> then some), it's just a short matter of time before she has enough
> information to correlate her guesses and what you've sent to figure out
> exactly what's in the pool and start guessing your session keys. Assuming
> she hasn't broken SHA, then /dev/urandom is just as good.

Your logic so far is fine bar one minor nit: If we assume SHA-1 was
not breakable, then /dev/urandom in a ZERO ENTROPY environment would
give the same value on a reboot of your machine as a simultaneous
reboot of a hacker's machine. /dev/random would block (indefinitely)
under these conditions. So /dev/urandom and /dev/random are
both dysfunctional in this circumstance (one spits out a predictable
number, one blocks), but differently dysfunctional, and
/dev/random's behaviour is better.

Similarly, if entropy disappears later on, then using /dev/urandom
eventually  provides you with information about the state of the pool,
though as the pool is SHA-1 hashed, it's a difficult attack to exploit.

So let's use Occam's razor and assume the attacker could have an SHA-1
exploit, because if they could not, and if thus we don't need to
consider this situation, as a couple of other posters have pointed
out, you don't need to worry about this whole entropy thing at all,
and never need to block on /dev/random.

> So the whole point of /dev/random is to be more secure than just the hash
> function and mixing.

Yes.

> Which do you think is easier, breaking the hash
> function or breaking into your network operations center and walking off
> with your server? If your NOC is so secure, then you can probably afford a
> hardware entropy source..

Here's the leap of logic I don't understand.

Firstly, the cost of breaking SHA-1 to read the contents of my
server will not be worth it. The cost
of breaking into the data center may well not be worth it! However, if
someone has already broken it... I was talking to someone this afternoon
who had DES (56 bit) cracking in FPGA (read cheap board) in a couple of
hours. He has triple-DES (112 bit) cracking in twice the time WHERE THERE
ARE ALGORITHMIC OR IMPLEMENTATION WEAKNESSES. So far, of the 4 hardware
accelerators he's examined (things with glue and gunk on), in default
config, he's found these in two. The same thing that's said (now) about
SHA-1 was said about triple-DES years ago. So I am assuming the hacker /
intelligence agency already has the tool (as we said above), and it
was developed for other purposes, cost 0.

Secondly, to put the argument the other way around, if I have no other
entropy sources, and no other random number generator, then using
entropy from the network INCREASES the cost of an attack, IF the
alternative is to use /dev/urandom. This is because all that network
timing information is expensive to gather. Sure, if I am getting
entropy from elsewhere, then by potentially overcontributing
entropy, it may well DECREASE the cost of an attack, if the
alternative is to continue using /dev/random. Hence the config option.

>> Measuring it there at least 16 network IRQs for the minimum
>> SSL transaction. That generates 16x12 = 192 bits of
>> entropy (each IRQ contributes 12 bits).
>
> 12 bits is a maximum and it's based on the apparent randomness of the
> interrupt timing deltas. If your attacker is impatient, she can just ping
> you at pseudo-random intervals tuned to clean your pool more rapidly.

Correct, and it's quite possible it should be contributing fewer bits
than 12 if the option is turned on. However, a better response would
be to fix the timers to be more accurate :-)

> You're also forgetting that TCP initial sequence numbers come from the
> pool to prevent connection spoofing - more entropy lost.

I /think/ this is irrelevant. Let's assume that the TCP initial sequence
numbers are also observable by the attacker, and contribute to knowledge
about the pool (which is, I think, your point) - well, the relevant amount
of entropy is knocked off (actually, more is, as not all the bits are
used), which means you have to block for more if entropy gets short.
Provided that (and this is the key thing) the entropy contribution of
network IRQ timing is not overestimated (but I allege can be non-zero),
this shouldn't be a problem.

>> The point is simple: We say to authors of cryptographic applications
>> (ssl, ssh etc.) that they should use /dev/random, because /dev/urandom
>> is not cryptographically strong enough.
>
> Who ever said that? /dev/random is a cute exercise in paranoia, not
> practicality. It's nice for seeding personal GPG keys and ssh identities,
> but was never intended for bulk cryptography. It's also nice for keys
> you're going to reuse because if your attacker monitors all your traffic
> and holds onto it for 50 years, and SHA happens to gets broken before El
> Gamal, your GPG key is still safe.

People are using /dev/random for session keys, for various reasons
(possibly because of initial seeding worries, possibly because they
want the additional strength). It has been alleged by some posters
that this is incorrect behaviour. Others seem to think that having
some 'wait for entropy' property for such random-number consumers
is useful, even if that entropy MIGHT be tainted, because there is
a high probability that it's not (not least as other attacks would
be cheaper).

I agree with your point that Robert's patch /could/ taint /dev/random,
but only if you switch it on!

--
Alex Bligh

^ permalink raw reply	[flat|nested] 59+ messages in thread

* RE: /dev/random in 2.4.6
  2001-08-21 17:06                           ` cfs+linux-kernel
@ 2001-08-21 17:48                             ` Alex Bligh - linux-kernel
  0 siblings, 0 replies; 59+ messages in thread
From: Alex Bligh - linux-kernel @ 2001-08-21 17:48 UTC (permalink / raw)
  To: cfs+linux-kernel, 'Marco Colombo',
	'Alex Bligh - linux-kernel'
  Cc: 'David Wagner', linux-kernel, Alex Bligh - linux-kernel

> I dunno about you, but I want good random for session keys too!  You can
> still capture network traffic and decrypt at your leisure if you can
> determine what "random" number was used in making the session key.

That's why the pool is hashed before use. Modulo the seeding issue,
there is an implicit assumption in this argument that EITHER the
hash is breakable, OR we might as well scrap the entropy stuff entirely
and never block and speed up lots of applications that occasionally
block as a byproduct. The position that the hash is unbreakable
and never will be breakable, BUT we still need to block, is only
tenable in the context of initial seeding (AFAICS).

--
Alex Bligh

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: /dev/random in 2.4.6
  2001-08-21 10:03                           ` Steve Hill
@ 2001-08-21 18:14                             ` David Wagner
  0 siblings, 0 replies; 59+ messages in thread
From: David Wagner @ 2001-08-21 18:14 UTC (permalink / raw)
  To: linux-kernel

Steve Hill  wrote:
>I was under the impression that urandom was considered insecure [...]

I think there is a perception among certain quarters that it is insecure,
but I also think this perception is completely unjustified.  We don't know
of a single attack, not even an academic weakness.

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: /dev/random in 2.4.6
  2001-08-21  8:33                       ` Alex Bligh - linux-kernel
  2001-08-21 16:13                         ` Oliver Xymoron
@ 2001-08-21 18:19                         ` David Wagner
  1 sibling, 0 replies; 59+ messages in thread
From: David Wagner @ 2001-08-21 18:19 UTC (permalink / raw)
  To: linux-kernel

Alex Bligh - linux-kernel  wrote:
>Equally, I do not want want to read /dev/urandom (and not block) which, in
>an absence of entropy, is (arguably) cryptographically weaker (see below).

This doesn't make any sense to me.  You're using SSL, so you already
have to trust MD5 and SHA.  The only way that /dev/urandom might even
plausibly be in trouble is if those hash functions are broken, but in
this case SSL is in even worse trouble.  If you're using the random
numbers for cryptographic purposes, you might as well use /dev/urandom.
Your fears about weaknesses in /dev/urandom seem to be completely unfounded.

There is a perfectly good technical solution available, and it is called
/dev/urandom.  I hope this reassures you...

>The point is simple: We say to authors of cryptographic applications
>(ssl, ssh etc.) that they should use /dev/random, because /dev/urandom
>is not cryptographically strong enough.

Whoever says this is simply wrong.  There are a few isolated scenarios
where /dev/urandom is not sufficient and where /dev/random is needed,
but they are very rare, and they have nothing to do with weaknesses in
/dev/urandom (they have to do with recovering from host compromises,
and they're a second- or third-order concern).

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: /dev/random in 2.4.6
  2001-08-21 17:44                           ` Alex Bligh - linux-kernel
@ 2001-08-21 18:24                             ` David Wagner
  2001-08-21 18:49                               ` Alex Bligh - linux-kernel
  2001-08-21 19:04                             ` Oliver Xymoron
  1 sibling, 1 reply; 59+ messages in thread
From: David Wagner @ 2001-08-21 18:24 UTC (permalink / raw)
  To: linux-kernel

Alex Bligh - linux-kernel  wrote:
>If we assume SHA-1 was
>not breakable, then /dev/urandom in a ZERO ENTROPY environment would
>give the same value on a reboot of your machine as a simultaneous
>reboot of a hacker's machine. /dev/random would block (indefinitely)
>under these conditions. So /dev/urandom and /dev/random are
>both dysfunctional in this circumstance (one spits out a predictable
>number, one blocks), but differently dysfunctional, and
>/dev/random's behaviour is better.

Yup.  Fortunately, the countermeasure is simple: Your init script
should contain something like
  dd count=16 ibs=1 if=/dev/random of=/dev/urandom
This fixes the problem.  So this is arguably a user-level issue, not
a kernel issue.
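
One caveat: a plain write() like that mixes the bytes into the pool but
credits no entropy, so /dev/random's accounting is unchanged. To also
bump the entropy count (e.g. when restoring a seed saved at shutdown)
you'd need the RNDADDENTROPY ioctl and root - a rough sketch, assuming
struct rand_pool_info from <linux/random.h>:

  #include <fcntl.h>
  #include <string.h>
  #include <unistd.h>
  #include <sys/ioctl.h>
  #include <linux/random.h>

  /* Mix in len (<= 64) bytes and credit them as len*8 bits. */
  int credit_seed(const unsigned char *buf, int len)
  {
          struct {
                  struct rand_pool_info info;
                  unsigned char data[64];
          } p;
          int fd = open("/dev/random", O_WRONLY);

          if (fd < 0 || len > (int)sizeof(p.data)) {
                  if (fd >= 0)
                          close(fd);
                  return -1;
          }
          p.info.entropy_count = len * 8;
          p.info.buf_size = len;
          memcpy(p.data, buf, len);
          if (ioctl(fd, RNDADDENTROPY, &p) < 0) {
                  close(fd);
                  return -1;      /* needs root */
          }
          close(fd);
          return 0;
  }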

>Similarly, if entropy disappears later on, then using /dev/urandom
>eventually  provides you with information about the state of the pool,
>though as the pool is SHA-1 hashed, it's a difficult attack to exploit.

No, it's not just difficult, it is completely infeasible under
current knowledge.

>So let's use Occam's razor and assume the attacker could have an SHA-1
>exploit,

No, let's not.  If the attacker has a SHA-1 exploit, then all your
SSL and IPSEC and other implementations are insecure, and they are
probably the only reason you're using /dev/random anyway.

Instead, let's assume SHA-1 is good, since it probably is, and since
you have to assume this anyway for the rest of your system.

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: /dev/random in 2.4.6
  2001-08-21  8:39                       ` Alex Bligh - linux-kernel
  2001-08-21 10:46                         ` Marco Colombo
@ 2001-08-21 18:25                         ` David Wagner
  1 sibling, 0 replies; 59+ messages in thread
From: David Wagner @ 2001-08-21 18:25 UTC (permalink / raw)
  To: linux-kernel

Alex Bligh - linux-kernel  wrote:
>Things like (from the manpage):

Yes.  The manpage should be changed.  No argument there.

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: /dev/random in 2.4.6
  2001-08-21 10:46                         ` Marco Colombo
  2001-08-21 12:40                           ` Alex Bligh - linux-kernel
  2001-08-21 17:06                           ` cfs+linux-kernel
@ 2001-08-21 18:27                           ` David Wagner
  2 siblings, 0 replies; 59+ messages in thread
From: David Wagner @ 2001-08-21 18:27 UTC (permalink / raw)
  To: linux-kernel

Marco Colombo  wrote:
>A little question: I used to believe that crypto software requires
>strong random source to generate key pairs, but this requirement in
>not true for session keys.

It is true for session keys, too.  Session keys should not be guessable,
so you must use an unpredictable source for them.  Fortunately,
/dev/urandom is essentially just as good as /dev/random in this respect.
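
In code it's nothing exotic - a minimal sketch:

  #include <fcntl.h>
  #include <unistd.h>

  /* Pull a 128-bit session key from /dev/urandom; never blocks. */
  int get_session_key(unsigned char key[16])
  {
          int fd = open("/dev/urandom", O_RDONLY);
          int ok = (fd >= 0 && read(fd, key, 16) == 16);

          if (fd >= 0)
                  close(fd);
          return ok ? 0 : -1;
  }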

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: /dev/random in 2.4.6
  2001-08-21 18:24                             ` David Wagner
@ 2001-08-21 18:49                               ` Alex Bligh - linux-kernel
  0 siblings, 0 replies; 59+ messages in thread
From: Alex Bligh - linux-kernel @ 2001-08-21 18:49 UTC (permalink / raw)
  To: David Wagner, linux-kernel; +Cc: Alex Bligh - linux-kernel

> No, let's not.  If the attacker has a SHA-1 exploit, then all your
> SSL and IPSEC and other implementations are insecure, and they are
> probably the only reason you're using /dev/random anyway.

Fair point, though for some applications one could conceivably be
using a different hash, and there are applications where breaking
the hash gives you less than breaking the encryption.

> Instead, let's assume SHA-1 is good, since it probably is, and since
> you have to assume this anyway for the rest of your system.

But if we assume SHA-1 is good, then you might as well drop all the
entropy measurement and blocking logic, and /dev/urandom is
fine for /ANY/ application. Furthermore, if SHA-1 is good,
Robert's patch does no harm, but makes existing applications
work.

IE if we assume SHA-1 is unbreakable, Robert's patch is harmless.
If we assume SHA-1 /is/ breakable, Robert's patch is harmless
if, and only if, in situations where it is configured on,
it doesn't overestimate the entropy network events provide
(sometimes this may be 0, in which case don't switch it on).

--
Alex Bligh

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: /dev/random in 2.4.6
  2001-08-21 17:44                           ` Alex Bligh - linux-kernel
  2001-08-21 18:24                             ` David Wagner
@ 2001-08-21 19:04                             ` Oliver Xymoron
  2001-08-21 19:20                               ` Alex Bligh - linux-kernel
  2001-08-21 21:44                               ` Robert Love
  1 sibling, 2 replies; 59+ messages in thread
From: Oliver Xymoron @ 2001-08-21 19:04 UTC (permalink / raw)
  To: Alex Bligh - linux-kernel; +Cc: linux-kernel

On Tue, 21 Aug 2001, Alex Bligh - linux-kernel wrote:

> > You're throwing the baby out with the bathwater. If you overestimate the
> > entropy added by even a small amount, /dev/random is no better than
> > /dev/urandom.
>
> I guess the option I'm asking for is really 'Say [Y] if you think
> network IRQ timing contributes more than 0 bits of entropy'.

The trouble is no one has a good model of how much more than zero it is.

> Your logic so far is fine bar one minor nit: If we assume SHA-1 was
> not breakable, then /dev/urandom in a ZERO ENTROPY environment would
> give the the same value on a reboot of your machine as a simultaneous
> reboot of a hacker's machine.

That's why you seed your pool at boot. Zero entropy is a reductio ad
absurdum argument anyway.

> So let's use Occam's razor and assume the attacker could have an SHA-1
> exploit, because if they could not, and if thus we don't need to
> consider this situation, as a couple of other posters have pointed
> out, you don't need to worry about this whole entropy thing at all,
> and never need to block on /dev/random.

Again, /dev/random is an exercise in paranoia. If the blocking thing is a
hassle, then you probably shouldn't be using it.

> Here's the leap of logic I don't understand.
>
> Firstly, the cost of breaking SHA-1 to read the contents of my
> server will not be worth it. The cost
> of breaking into the data center may well not be worth it!

So then /dev/urandom is good enough.

> However, if someone has already broken it... I was talking to someone
> this afternoon who had DES (56 bit) cracking in FPGA (read cheap
> board) in a couple of hours. He has triple-DES (112 bit) cracking in
> twice the time WHERE THERE ARE ALGORITHMIC OR IMPLEMENTATION
> WEAKNESSES. So far, of the 4 hardware accelerators he's examined
> (things with glue and gunk on), in default config, he's found these in
> two. The same thing that's said (now) about SHA-1 was said about
> triple-DES years ago. So I am assuming the hacker / intelligence
> agency already has the tool (as we said above), and it was developed
> for other purposes, cost 0.

Ok, you're going to assume that the 160-bit SHA hash with lots and lots
and lots of mixing is more vulnerable than the IDEA or Blowfish or 3DES
that you're using for your actual encryption?

> Secondly, to put the argument the other way around, if I have no other
> entropy sources, and no other random number generator, then using
> entropy from the network INCREASES the cost of an attack, IF the
> alternative is to use /dev/urandom. This is because all that network
> timing information is expensive to gather. Sure, if I am getting
> entropy from elsewhere, then by potentially overcontributing
> entropy, it may well DECREASE the cost of an attack, if the
> alternative is to continue using /dev/random. Hence the config option.

How about simply adding possible entropy from the network but not
accounting for it? /dev/urandom then becomes as strong as the proposed
/dev/random (up to the load that /dev/random would allow), while
/dev/random isn't weakened.
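
Hypothetically, in 2.4 random.c terms (names from memory, not a real
patch - the point is only the zero credit in the last argument):

  /* Stir a network IRQ's timing into the pool but credit zero bits,
   * leaving /dev/random's accounting untouched while /dev/urandom
   * still benefits from the extra mixing. */
  static void add_net_randomness(int irq)
  {
          batch_entropy_store(get_cycles(), 0x100 + irq,
                              0 /* entropy bits credited */);
  }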

> >> Measuring it there at least 16 network IRQs for the minimum
> >> SSL transaction. That generates 16x12 = 192 bits of
> >> entropy (each IRQ contributes 12 bits).
> >
> > 12 bits is a maximum and it's based on the apparent randomness of the
> > interrupt timing deltas. If your attacker is impatient, she can just ping
> > you at pseudo-random intervals tuned to clean your pool more rapidly.
>
> Correct, and it's quite possible it should be contributing less bits
> than 12 if the option is turned on. However, a better response would
> be to fix the timers to be more accurate :-)

We're already using cycle counters - do you propose being more accurate
than that?

> > You're also forgetting that TCP initial sequence numbers come from the
> > pool to prevent connection spoofing - more entropy lost.
>
> I /think/ this is irrelevant. Let's assume that the TCP initial sequence
> numbers are also observable by the attacker, and contribute to knowledge
> about the pool (which is I think your point) - well, the relevant amount
> of entropy is knocked off (actually, more is lost, as not all the bits
> are used), which means you have to block for more if entropy gets short.
> Provided that (and this is the key thing) the entropy contribution of
> network IRQ timing is not overestimated (but I allege can be non-zero),
> this shouldn't be a problem.

ISNs are 32 bits and it takes one interrupt to trigger a SYN - so each
SYN credits the pool at most 12 bits while drawing 32 out, a net drain.
Happily, the network stack doesn't block when it runs out of entropy,
otherwise your headless box would never get anywhere.

> I agree with your point that Robert's patch /could/ taint /dev/random,
> but only if you switch it on!

As it stands, it does. Assuming a 1GHz processor and hitting the maximum
12 bits of entropy per interrupt, we only need to guess the interrupt
timing to within 4us - probably not hard. As I've pointed out, it's not
hard to send our own apparently random packets to open up that window.
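
(To spell out the arithmetic: crediting 12 bits means 2^12 = 4096
distinguishable timing values; at one cycle per nanosecond on a 1GHz
part that is 4096ns, i.e. ~4.1us of timing uncertainty an attacker has
to cover per interrupt.)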

--
 "Love the dolphins," she advised him. "Write by W.A.S.T.E.."


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: /dev/random in 2.4.6
  2001-08-21 19:04                             ` Oliver Xymoron
@ 2001-08-21 19:20                               ` Alex Bligh - linux-kernel
  2001-08-21 21:44                               ` Robert Love
  1 sibling, 0 replies; 59+ messages in thread
From: Alex Bligh - linux-kernel @ 2001-08-21 19:20 UTC (permalink / raw)
  To: Oliver Xymoron; +Cc: linux-kernel, Alex Bligh - linux-kernel

Oliver,

> Ok, you're going to assume that the 160-bit SHA hash with lots and lots
> and lots of mixing is more vulnerable than the IDEA or Blowfish or 3DES
> that you're using for your actual encryption?

Only because, if not, this argument is irrelevant: if SHA-1 is not
broken and never can be, then there is no point in the entropy
measurement and blocking behaviour at all, and in that case Robert's
patch does no harm whatsoever.

> How about simply adding possible entropy from the network but not
> accounting for it? /dev/urandom then becomes as strong as the proposed
> /dev/random (up to the load that /dev/random would allow), while
> /dev/random isn't weakened.

Perhaps I'm missing something, but /dev/urandom is only weaker than
/dev/random NOW if SHA-1 is cracked (if not, the two are identical
in 'strength'). And, in the extremely theoretical case that
SHA-1 is crackable (perhaps with an attack that takes many days),
having gathered entropy before reading is useful.

As you point out (above), and as did David, SHA-1 being cracked
is a remote possibility in the extreme. But this scenario is the
only one where Robert's patch could make a conceivable difference.
If SHA-1 is invulnerable, then why argue against Robert's patch,
which merely keeps some applications working on some machines?

But people obviously do have concerns about the reversibility of SHA-1,
even if only at a very theoretical level, or they wouldn't be
spending all this time arguing about the importance of metering entropy.

Adding entropy from the network and not accounting for it is probably
better than nothing (as it mitigates the seeding problem).

>> Correct, and it's quite possible it should be contributing fewer bits
>> than 12 if the option is turned on. However, a better response would
>> be to fix the timers to be more accurate :-)
>
> We're already using cycle counters - do you propose being more accurate
> than that?

I propose not using jiffies on machines other than some i386's, and
not exposing /proc/interrupts world-readable (mode 644), which reduces
the attack space hugely.
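
(The /proc/interrupts half is a one-line change where the entry gets
registered - fs/proc/proc_misc.c in 2.4; the exact call is from
memory, so treat this as a sketch:

/* register /proc/interrupts readable by root only (mode 0400)
 * instead of world-readable, so unprivileged local users can no
 * longer watch the IRQ counts tick over */
create_proc_read_entry("interrupts", 0400, NULL,
                       interrupts_read_proc, NULL);

and similarly for anything else that leaks interrupt activity.)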

>> I agree with your point that Robert's patch /could/ taint /dev/random,
>> but only if you switch it on!
>
> As it stands, it does. Assuming a 1GHz processor and hitting the maximum
> 12 bits of entropy per interrupt, we only need to guess the interrupt
> timing to within 4us - probably not hard. As I've pointed out, it's not
> hard to send our own apparently random packets to open up that window.

IF you can observe packets, and IF you turn the config option
on against advice [1] THEN the strength of /dev/random
will be tainted IF and ONLY IF SHA-1 is breakable (in which
case, as you point out, all sorts of other things break anyway).
I consider this an acceptable risk.

[1] against the advice of a config option which should read
'DO NOT SWITCH THIS ON IF YOU BELIEVE THERE IS ANY CHANCE OF
YOUR NETWORK PACKETS BEING OBSERVABLE AND YOU ARE WORRIED
ABOUT THE SECURITY OF SHA-1' (but the same applies, btw, to using
keyboard interrupts with wireless keyboards).

--
Alex Bligh

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: /dev/random in 2.4.6
  2001-08-21 19:04                             ` Oliver Xymoron
  2001-08-21 19:20                               ` Alex Bligh - linux-kernel
@ 2001-08-21 21:44                               ` Robert Love
  1 sibling, 0 replies; 59+ messages in thread
From: Robert Love @ 2001-08-21 21:44 UTC (permalink / raw)
  To: Alex Bligh - linux-kernel; +Cc: Oliver Xymoron, linux-kernel

On Tue, 2001-08-21 at 15:20, Alex Bligh - linux-kernel wrote:
> Only because, if not, this argument is irrelevant: if SHA-1 is not
> broken and never can be, then there is no point in the entropy
> measurement and blocking behaviour at all, and in that case Robert's
> patch does no harm whatsoever.
>
> Perhaps I'm missing something, but /dev/urandom is only weaker than
> /dev/random NOW if SHA-1 is cracked (if not, the two are identical
> in 'strength'). And, in the extremely theoretical case that
> SHA-1 is crackable (perhaps with an attack that takes many days),
> having gathered entropy before reading is useful.

I believe this is all correct.  If the one-way hash is uncrackable, then
we never give out any clue to the state of the pool, thus the entropy
estimate means nothing (entropy would never need to drop on a read from
/dev/[u]random).
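
For concreteness, a read from the pool is roughly the following,
heavily simplified from extract_entropy() in drivers/char/random.c
(sha1_over() and mix_back() are stand-in names, not the real helpers):

/*
 * Simplified sketch of a /dev/random read.  The reader only ever
 * sees a SHA-1 digest of the pool, never the pool contents; if
 * SHA-1 is one-way, the output reveals nothing about the pool
 * state, and the entropy bookkeeping buys nothing extra.
 */
static void extract_sketch(__u32 *pool, int poolwords, __u8 out[20])
{
        __u32 digest[5];

        sha1_over(pool, poolwords * sizeof(__u32), digest); /* hash pool */
        mix_back(pool, digest, 5);  /* stir the digest back into pool */
        memcpy(out, digest, 20);    /* caller sees only the hash */
}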

Thus, as you have pointed out, we have two situations:

One, SHA-1 is unbreakable: my patch does no harm, and (beneficently)
feeds the random pool with bits.

Two, SHA-1 is breakable: the patch, if _enabled_, and iff the entropy
estimate is too large (agreed, entirely possible), will weaken
/dev/random.  That is why you need to judge the situation.

> As you point out (above), and as did David, SHA-1 being cracked
> is a remote possibility in the extreme. But this scenario is the
> only one where Robert's patch could make a conceivable difference.
> If SHA-1 is invulnerable, then why argue against Robert's patch,
> which merely keeps some applications working on some machines?

True.  And, if we worry about SHA-1 being cracked, and network
interrupts having some air of predictability, then we might as well
worry about other interrupts causing an overestimate of entropy.

It's all a gamble.  It's all a theoretical estimate.

> But people obviously do have concerns about the reversibility of SHA-1,
> even if only at a very theoretical level, or they wouldn't be
> spending all this time arguing about the importance of metering entropy.
> 
> Adding entropy from the network and not accounting for it is probably
> better than nothing (as it mitigates the seeding problem).
> 
> I propose not using jiffies on machines other than some i386's, and
> not exposing /proc/interrupts world-readable (mode 644), which reduces
> the attack space hugely.

agreed.

> IF you can observe packets, and IF you turn the config option
> on against advice [1] THEN the strength of /dev/random
> will be tainted IF and ONLY IF SHA-1 is breakable (in which
> case, as you point out, all sorts of other things break anyway).
> I consider this an acceptable risk.
> 
> [1] against the advice of a config option which should read
> 'DO NOT SWITCH THIS ON IF YOU BELIEVE THERE IS ANY CHANCE OF
> YOUR NETWORK PACKETS BEING OBSERVABLE AND YOU ARE WORRIED
> ABOUT THE SECURITY OF SHA-1' (but the same applies, btw, to using
> keyboard interrupts with wireless keyboards).

I will add this wording to the Configure.help of the next patch release.
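
Something like this, presumably (the option name is illustrative, not
final):

  Gather entropy from network interrupts
  CONFIG_NET_RANDOM
    Network interrupt timings are normally not fed to the
    /dev/random entropy pool, because an attacker who can observe
    or inject your traffic may be able to guess them.  Enable this
    only on headless boxes with no other entropy sources.  DO NOT
    SWITCH THIS ON IF YOU BELIEVE THERE IS ANY CHANCE OF YOUR
    NETWORK PACKETS BEING OBSERVABLE AND YOU ARE WORRIED ABOUT
    THE SECURITY OF SHA-1.

    If unsure, say N.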

-- 
Robert M. Love
rml at ufl.edu
rml at tech9.net


^ permalink raw reply	[flat|nested] 59+ messages in thread

end of thread (newest: 2001-08-21 21:45 UTC)

Thread overview: 59+ messages
2001-08-15 15:07 /dev/random in 2.4.6 Steve Hill
2001-08-15 15:21 ` Richard B. Johnson
2001-08-15 15:27   ` Steve Hill
2001-08-15 15:42     ` Richard B. Johnson
2001-08-15 16:29       ` Tim Walberg
2001-08-15 17:13     ` Andreas Dilger
2001-08-16  8:37       ` Steve Hill
2001-08-16 19:11         ` Andreas Dilger
2001-08-16 19:35           ` Alex Bligh - linux-kernel
2001-08-16 20:30             ` Andreas Dilger
2001-08-17  1:05           ` Robert Love
2001-08-17  0:49         ` Robert Love
2001-08-19 17:29           ` David Wagner
2001-08-17 21:18       ` Theodore Tso
2001-08-17 22:05         ` David Schwartz
2001-08-19 15:13           ` Theodore Tso
2001-08-19 15:33             ` Rob Radez
2001-08-19 17:32             ` David Wagner
2001-08-19 23:32             ` Oliver Xymoron
2001-08-20  7:40               ` Helge Hafting
2001-08-20 14:01                 ` Oliver Xymoron
2001-08-20 13:37               ` Alex Bligh - linux-kernel
2001-08-20 14:12                 ` Oliver Xymoron
2001-08-20 14:40                   ` Alex Bligh - linux-kernel
2001-08-20 14:55                     ` Chris Friesen
2001-08-20 15:22                       ` Oliver Xymoron
2001-08-20 15:25                       ` Doug McNaught
2001-08-20 15:42                         ` Chris Friesen
2001-08-21 10:03                           ` Steve Hill
2001-08-21 18:14                             ` David Wagner
2001-08-20 16:01                       ` David Wagner
2001-08-20 19:30                       ` Gérard Roudier
2001-08-20 15:07                     ` Oliver Xymoron
2001-08-21  8:33                       ` Alex Bligh - linux-kernel
2001-08-21 16:13                         ` Oliver Xymoron
2001-08-21 17:44                           ` Alex Bligh - linux-kernel
2001-08-21 18:24                             ` David Wagner
2001-08-21 18:49                               ` Alex Bligh - linux-kernel
2001-08-21 19:04                             ` Oliver Xymoron
2001-08-21 19:20                               ` Alex Bligh - linux-kernel
2001-08-21 21:44                               ` Robert Love
2001-08-21 18:19                         ` David Wagner
2001-08-20 16:00                     ` David Wagner
2001-08-21  1:20                       ` Theodore Tso
2001-08-21  8:39                       ` Alex Bligh - linux-kernel
2001-08-21 10:46                         ` Marco Colombo
2001-08-21 12:40                           ` Alex Bligh - linux-kernel
2001-08-21 17:06                           ` cfs+linux-kernel
2001-08-21 17:48                             ` Alex Bligh - linux-kernel
2001-08-21 18:27                           ` David Wagner
2001-08-21 18:25                         ` David Wagner
2001-08-20 22:55                     ` D. Stimits
2001-08-21  1:06                       ` David Schwartz
2001-08-19 17:31         ` David Wagner
2001-08-19 17:27     ` David Wagner
2001-08-15 19:25 ` Alex Bligh - linux-kernel
2001-08-16  8:55   ` Steve Hill
2001-08-15 20:55 ` Robert Love
2001-08-15 21:27   ` Alex Bligh - linux-kernel
