* random(4) overheads question
@ 2011-09-26  6:41 Sandy Harris
  0 siblings, 0 replies; only message in thread
From: Sandy Harris @ 2011-09-26  6:41 UTC (permalink / raw)
  To: linux-crypto

I'm working on a daemon that collects timer randomness, distills it
somewhat, and pushes the results into /dev/random.

My code produces the random material in 32-bit chunks. The current
version sends it to /dev/random 32 bits at a time, doing a write() and
an entropy-update ioctl() for each chunk. Obviously I could add some
buffering and write fewer and larger chunks. My questions are whether
that is worth doing and, if so, what the optimum write() size is
likely to be.
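
For concreteness, here is a minimal sketch of the buffered version I
have in mind. It assumes the entropy-update ioctl is RNDADDENTROPY
from <linux/random.h>, which mixes in the supplied bytes and credits
the entropy count in a single call; get_random_word() is a
hypothetical stand-in for my distiller, and the batch size is
arbitrary.

#include <fcntl.h>
#include <stdint.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/random.h>

#define BATCH_WORDS 64                  /* 256 bytes per ioctl instead of 4 */

extern uint32_t get_random_word(void);  /* hypothetical: my 32-bit distiller */

int main(void)
{
    struct rand_pool_info *pool;
    int fd, i;

    /* rand_pool_info ends in a flexible __u32 buf[], so allocate the
       header plus room for one batch of words */
    pool = malloc(sizeof(*pool) + BATCH_WORDS * sizeof(uint32_t));
    if (!pool)
        return 1;

    fd = open("/dev/random", O_WRONLY);
    if (fd < 0)
        return 1;

    for (;;) {
        for (i = 0; i < BATCH_WORDS; i++)
            pool->buf[i] = get_random_word();
        pool->entropy_count = BATCH_WORDS * 32;          /* bits claimed */
        pool->buf_size = BATCH_WORDS * sizeof(uint32_t); /* bytes supplied */
        /* one ioctl both mixes the data into the pool and credits the
           entropy, replacing a write() plus ioctl() per 32-bit chunk
           (requires CAP_SYS_ADMIN) */
        if (ioctl(fd, RNDADDENTROPY, pool) < 0)
            break;
    }
    close(fd);
    free(pool);
    return 0;
}

Note that entropy_count is the number of bits being claimed, so it
should reflect the distiller's actual estimate rather than simply the
size of the buffer.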

I am not overly concerned about overheads on my side of the interface,
unless they are quite large. My concern is whether doing many small
writes wastes kernel resources.
