linux-kernel.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
* [PATCH v6 0/5] /dev/random - a new approach
@ 2016-08-11 12:24 Stephan Mueller
  2016-08-11 12:24 ` [PATCH v6 1/5] crypto: DRBG - externalize DRBG functions for LRNG Stephan Mueller
                   ` (6 more replies)
  0 siblings, 7 replies; 26+ messages in thread
From: Stephan Mueller @ 2016-08-11 12:24 UTC (permalink / raw)
  To: herbert, Ted Tso
  Cc: sandyinchina, Jason Cooper, John Denker, H. Peter Anvin,
	Joe Perches, Pavel Machek, George Spelvin, linux-crypto,
	linux-kernel

Hi Herbert, Ted,

The following patch set provides a different approach to /dev/random which
I call Linux Random Number Generator (LRNG) to collect entropy within the Linux
kernel. The main improvements compared to the legacy /dev/random is to provide
sufficient entropy during boot time as well as in virtual environments and when
using SSDs. A secondary design goal is to limit the impact of the entropy
collection on massive parallel systems and also allow the use accelerated
cryptographic primitives. Also, all steps of the entropic data processing are
testable. Finally massive performance improvements are visible at /dev/urandom
and get_random_bytes.

The design and implementation is driven by a set of goals described in [1]
that the LRNG completely implements. Furthermore, [1] includes a
comparison with RNG design suggestions such as SP800-90B, SP800-90C, and
AIS20/31.

Changes v6:
* port to 4.8-rc1
* add missing memzero_explicit to ChaCha20 DRNG
* use kernel-doc documentation style
* use of min3 in lrng_get_pool to beautify code
* prevent fast noise sources from dominating slow noise sources
  in case of /dev/random
* set read wakeup threshold to 64 bits to comply with legacy /dev/random
* simplify the interrupt to entropy amount conversion code
* move wakeup call of entropy-providers to a code location where /dev/urandom
  will benefit from the wake up as well (i.e. when the primary DRBG entropy
  runs low because of /dev/urandom reseeds, the entropy provider is woken up)
* inject current time into primary DRBG at the time of seeding from noise
  sources (suggested by Sandy Harris)

Changes v5:
* fix treating LRNG_POOL_SIZE_BITS as entropy value in lrng_get_pool
* use CTR DRBG with AES256 as default due to its superior speed -- on X86_64
  executing within a KVM I get read speeds of up to 850 MB/s now. When using a
  fake NUMA system with 4 nodes on 4 CPUs, I still get up to 430 MB/s read speed
  with four parallel reads. Note, this patch applies to the current
  cryptodev-2.6 tree.
* simplify lrng_get_arch
* use DRBG security strengths as defined in SP800-57 section 5.6.1
* add security strength to /proc/sys/kernel/random/lrng_type
* add ChaCha20 DRNG: in case the kernel crypto API is not compiled, the ChaCha20
  DRNG with the SHA-1 C implementations are used to drive the cryptographic part
  of the LRNG.The ChaCha20 RNG is described in [1]. I analyzed it with a user
  space version of it.
* Editorial changes requested by checkpatch.pl

Changes v4:
* port to 4.7-rc1
* Use classical twisted LFSR approach to collect entropic data as requested by
  George Spelvin. The LFSR is based on a primitive and irreducible polynomial
  whose taps are not too close to the location the current byte is mixed in.
  Primitive polynomials for other entropy pool sizes are offered in the code.
* The reading of the entropy pool is performed with a hash. The hash can be
  specified at compile time. The pre-defined hashes are the same as used for
  the DRBG type (e.g. a SHA256 Hash DRBG implies the use of SHA-256, an AES256
  CTR DRBG implies the use of CMAC-AES).
* Addition of the example defines for a CTR DRBG with AES128 which can be
  enabled during compile time.
* Entropy estimate: one bit of entropy per interrupt. In case a system does
  not have a high-resolution timer, apply 1/10th bit of entropy per interrupt.
  The interrupt estimates can be changed arbitrarily at compile time.
* Use kmalloc_node for the per-NUMA node secondary DRBGs.
* Add boot time entropy tests discussed in section 3.4.3 [1].
* Align all buffers that are processed by the kernel crypto API to an 8 byte
  boundary. This boundary covers all currently existing cipher implementations.

Changes v3:
* Convert debug printk to pr_debug as suggested by Joe Perches
* Add missing \n as suggested by Joe Perches
* Do not mix in struck IRQ measurements as requested by Pavel Machek
* Add handling logic for systems without high-res timer as suggested by Pavel
  Machek -- it uses ideas from the add_interrupt_randomness of the legacy
  /dev/random implementation
* add per NUMA node secondary DRBGs as suggested by Andi Kleen -- the
  explanation of how the logic works is given in section 2.1.1 of my
  documentation [1], especially how the initial seeding is performed.

Changes v2:
* Removal of the Jitter RNG fast noise source as requested by Ted
* Addition of processing of add_input_randomness as suggested by Ted
* Update documentation and testing in [1] to cover the updates
* Addition of a SystemTap script to test add_input_randomness
* To clarify the question whether sufficient entropy is present during boot
  I added one more test in 3.3.1 [1] which demonstrates the providing of
  sufficient entropy during initialization. In the worst case of no fast noise
  sources, in the worst case of a virtual machine with only very few hardware
  devices, the testing shows that the secondary DRBG is fully seeded with 256
  bits of entropy before user space injects the random data obtained
  during shutdown of the previous boot (i.e. the requirement phrased by the
  legacy /dev/random implementation). As the writing of the random data into
  /dev/random by user space will happen before any cryptographic service
  is initialized in user space, this test demonstrates that sufficient
  entropy is already present in the LRNG at the time user space requires it
  for seeding cryptographic daemons. Note, this test result was obtained
  for different architectures, such as x86 64 bit, x86 32 bit, ARM 32 bit and
  MIPS 32 bit.

[1] http://www.chronox.de/lrng/doc/lrng.pdf

[2] http://www.chronox.de/lrng.html

Stephan Mueller (5):
  crypto: DRBG - externalize DRBG functions for LRNG
  random: conditionally compile code depending on LRNG
  crypto: Linux Random Number Generator
  crypto: LRNG - enable compile
  crypto: LRNG - add ChaCha20 support

 crypto/Kconfig           |   10 +
 crypto/Makefile          |    7 +
 crypto/drbg.c            |   11 +-
 crypto/lrng_base.c       | 1960 ++++++++++++++++++++++++++++++++++++++++++++++
 crypto/lrng_kcapi.c      |  167 ++++
 crypto/lrng_standalone.c |  220 ++++++
 drivers/char/random.c    |    8 +
 include/crypto/drbg.h    |    7 +
 include/linux/genhd.h    |    5 +
 include/linux/random.h   |    7 +-
 10 files changed, 2395 insertions(+), 7 deletions(-)
 create mode 100644 crypto/lrng_base.c
 create mode 100644 crypto/lrng_kcapi.c
 create mode 100644 crypto/lrng_standalone.c

-- 
2.7.4

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH v6 1/5] crypto: DRBG - externalize DRBG functions for LRNG
  2016-08-11 12:24 [PATCH v6 0/5] /dev/random - a new approach Stephan Mueller
@ 2016-08-11 12:24 ` Stephan Mueller
  2016-08-11 12:25 ` [PATCH v6 2/5] random: conditionally compile code depending on LRNG Stephan Mueller
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 26+ messages in thread
From: Stephan Mueller @ 2016-08-11 12:24 UTC (permalink / raw)
  To: herbert
  Cc: Ted Tso, sandyinchina, Jason Cooper, John Denker, H. Peter Anvin,
	Joe Perches, Pavel Machek, George Spelvin, linux-crypto,
	linux-kernel

This patch allows several DRBG functions to be called by the LRNG kernel
code paths outside the drbg.c file.

Signed-off-by: Stephan Mueller <smueller@chronox.de>
---
 crypto/drbg.c         | 11 +++++------
 include/crypto/drbg.h |  7 +++++++
 2 files changed, 12 insertions(+), 6 deletions(-)

diff --git a/crypto/drbg.c b/crypto/drbg.c
index f752da3..42084c2 100644
--- a/crypto/drbg.c
+++ b/crypto/drbg.c
@@ -113,7 +113,7 @@
  * the SHA256 / AES 256 over other ciphers. Thus, the favored
  * DRBGs are the latest entries in this array.
  */
-static const struct drbg_core drbg_cores[] = {
+const struct drbg_core drbg_cores[] = {
 #ifdef CONFIG_CRYPTO_DRBG_CTR
 	{
 		.flags = DRBG_CTR | DRBG_STRENGTH128,
@@ -205,7 +205,7 @@ static int drbg_uninstantiate(struct drbg_state *drbg);
  * Return: normalized strength in *bytes* value or 32 as default
  *	   to counter programming errors
  */
-static inline unsigned short drbg_sec_strength(drbg_flag_t flags)
+unsigned short drbg_sec_strength(drbg_flag_t flags)
 {
 	switch (flags & DRBG_STRENGTH_MASK) {
 	case DRBG_STRENGTH128:
@@ -1128,7 +1128,7 @@ static int drbg_seed(struct drbg_state *drbg, struct drbg_string *pers,
 }
 
 /* Free all substructures in a DRBG state without the DRBG state structure */
-static inline void drbg_dealloc_state(struct drbg_state *drbg)
+void drbg_dealloc_state(struct drbg_state *drbg)
 {
 	if (!drbg)
 		return;
@@ -1147,7 +1147,7 @@ static inline void drbg_dealloc_state(struct drbg_state *drbg)
  * Allocate all sub-structures for a DRBG state.
  * The DRBG state structure must already be allocated.
  */
-static inline int drbg_alloc_state(struct drbg_state *drbg)
+int drbg_alloc_state(struct drbg_state *drbg)
 {
 	int ret = -ENOMEM;
 	unsigned int sb_size = 0;
@@ -1781,8 +1781,7 @@ static int drbg_kcapi_sym_ctr(struct drbg_state *drbg,
  *
  * return: flags
  */
-static inline void drbg_convert_tfm_core(const char *cra_driver_name,
-					 int *coreref, bool *pr)
+void drbg_convert_tfm_core(const char *cra_driver_name, int *coreref, bool *pr)
 {
 	int i = 0;
 	size_t start = 0;
diff --git a/include/crypto/drbg.h b/include/crypto/drbg.h
index 61580b1..1755d07 100644
--- a/include/crypto/drbg.h
+++ b/include/crypto/drbg.h
@@ -280,4 +280,11 @@ enum drbg_prefixes {
 	DRBG_PREFIX3
 };
 
+extern int drbg_alloc_state(struct drbg_state *drbg);
+extern void drbg_dealloc_state(struct drbg_state *drbg);
+extern void drbg_convert_tfm_core(const char *cra_driver_name, int *coreref,
+				  bool *pr);
+extern const struct drbg_core drbg_cores[];
+extern unsigned short drbg_sec_strength(drbg_flag_t flags);
+
 #endif /* _DRBG_H */
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v6 2/5] random: conditionally compile code depending on LRNG
  2016-08-11 12:24 [PATCH v6 0/5] /dev/random - a new approach Stephan Mueller
  2016-08-11 12:24 ` [PATCH v6 1/5] crypto: DRBG - externalize DRBG functions for LRNG Stephan Mueller
@ 2016-08-11 12:25 ` Stephan Mueller
  2016-08-11 12:25 ` [PATCH v6 3/5] crypto: Linux Random Number Generator Stephan Mueller
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 26+ messages in thread
From: Stephan Mueller @ 2016-08-11 12:25 UTC (permalink / raw)
  To: herbert
  Cc: Ted Tso, sandyinchina, Jason Cooper, John Denker, H. Peter Anvin,
	Joe Perches, Pavel Machek, George Spelvin, linux-crypto,
	linux-kernel

When selecting the LRNG for compilation, disable the legacy /dev/random
implementation.

The LRNG is a drop-in replacement for the legacy /dev/random which
implements the same in-kernel and user space API. Only the hooks of
/dev/random into other parts of the kernel need to be disabled.

Signed-off-by: Stephan Mueller <smueller@chronox.de>
---
 drivers/char/random.c  | 8 ++++++++
 include/linux/genhd.h  | 5 +++++
 include/linux/random.h | 7 ++++++-
 3 files changed, 19 insertions(+), 1 deletion(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 3efb3bf..730a12e 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -270,6 +270,8 @@
 #include <asm/irq_regs.h>
 #include <asm/io.h>
 
+#ifndef CONFIG_CRYPTO_LRNG
+
 #define CREATE_TRACE_POINTS
 #include <trace/events/random.h>
 
@@ -1898,6 +1900,7 @@ SYSCALL_DEFINE3(getrandom, char __user *, buf, size_t, count,
 	}
 	return urandom_read(NULL, buf, count, NULL);
 }
+#endif	/* CONFIG_CRYPTO_LRNG */
 
 /********************************************************************
  *
@@ -1905,6 +1908,7 @@ SYSCALL_DEFINE3(getrandom, char __user *, buf, size_t, count,
  *
  ********************************************************************/
 
+#ifndef CONFIG_CRYPTO_LRNG
 #ifdef CONFIG_SYSCTL
 
 #include <linux/sysctl.h>
@@ -2042,6 +2046,8 @@ struct ctl_table random_table[] = {
 };
 #endif 	/* CONFIG_SYSCTL */
 
+#endif	/* CONFIG_CRYPTO_LRNG */
+
 static u32 random_int_secret[MD5_MESSAGE_BYTES / 4] ____cacheline_aligned;
 
 int random_int_secret_init(void)
@@ -2119,6 +2125,7 @@ randomize_range(unsigned long start, unsigned long end, unsigned long len)
 	return PAGE_ALIGN(get_random_int() % range + start);
 }
 
+#ifndef CONFIG_CRYPTO_LRNG
 /* Interface for in-kernel drivers of true hardware RNGs.
  * Those devices may produce endless random bits and will be throttled
  * when our pool is full.
@@ -2143,3 +2150,4 @@ void add_hwgenerator_randomness(const char *buffer, size_t count,
 	credit_entropy_bits(poolp, entropy);
 }
 EXPORT_SYMBOL_GPL(add_hwgenerator_randomness);
+#endif	/* CONFIG_CRYPTO_LRNG */
diff --git a/include/linux/genhd.h b/include/linux/genhd.h
index 1dbf52f..387770d1 100644
--- a/include/linux/genhd.h
+++ b/include/linux/genhd.h
@@ -437,8 +437,13 @@ extern void disk_flush_events(struct gendisk *disk, unsigned int mask);
 extern unsigned int disk_clear_events(struct gendisk *disk, unsigned int mask);
 
 /* drivers/char/random.c */
+#ifdef CONFIG_CRYPTO_LRNG
+#define add_disk_randomness(disk) do {} while (0)
+#define rand_initialize_disk(disk) do {} while (0)
+#else
 extern void add_disk_randomness(struct gendisk *disk);
 extern void rand_initialize_disk(struct gendisk *disk);
+#endif
 
 static inline sector_t get_start_sect(struct block_device *bdev)
 {
diff --git a/include/linux/random.h b/include/linux/random.h
index 3d6e981..fd39c11 100644
--- a/include/linux/random.h
+++ b/include/linux/random.h
@@ -17,10 +17,15 @@ struct random_ready_callback {
 	struct module *owner;
 };
 
-extern void add_device_randomness(const void *, unsigned int);
 extern void add_input_randomness(unsigned int type, unsigned int code,
 				 unsigned int value);
 extern void add_interrupt_randomness(int irq, int irq_flags);
+#ifdef CONFIG_CRYPTO_LRNG
+#define add_device_randomness(buf, nbytes) do {} while (0)
+#else	/* CONFIG_CRYPTO_LRNG */
+extern void add_device_randomness(const void *, unsigned int);
+#define lrng_irq_process()
+#endif	/* CONFIG_CRYPTO_LRNG */
 
 extern void get_random_bytes(void *buf, int nbytes);
 extern int add_random_ready_callback(struct random_ready_callback *rdy);
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v6 3/5] crypto: Linux Random Number Generator
  2016-08-11 12:24 [PATCH v6 0/5] /dev/random - a new approach Stephan Mueller
  2016-08-11 12:24 ` [PATCH v6 1/5] crypto: DRBG - externalize DRBG functions for LRNG Stephan Mueller
  2016-08-11 12:25 ` [PATCH v6 2/5] random: conditionally compile code depending on LRNG Stephan Mueller
@ 2016-08-11 12:25 ` Stephan Mueller
  2016-08-11 12:26 ` [PATCH v6 4/5] crypto: LRNG - enable compile Stephan Mueller
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 26+ messages in thread
From: Stephan Mueller @ 2016-08-11 12:25 UTC (permalink / raw)
  To: herbert
  Cc: Ted Tso, sandyinchina, Jason Cooper, John Denker, H. Peter Anvin,
	Joe Perches, Pavel Machek, George Spelvin, linux-crypto,
	linux-kernel

The LRNG with the following properties:

* noise source: interrupts timing with fast boot time seeding

* lockless LFSR to collect raw entropy

* use of kernel crypto API DRBG

* in case kernel crypto API is not compiled, use standalone
  ChaCha20 based RNG

* used cipher types for hashes and DRBG is selectable at
  compile time

* "atomic" seeding of secondary DRBG to ensure full entropy
  transport

* instantiate one DRBG per NUMA node

Further details including the rationale for the design choices and
properties of the LRNG together with testing is provided at [1].
In addition, the documentation explains the conducted regression
tests to verify that the LRNG is API and ABI compatible with the
legacy /dev/random implementation.

Signed-off-by: Stephan Mueller <smueller@chronox.de>
---
 crypto/lrng_base.c  | 1960 +++++++++++++++++++++++++++++++++++++++++++++++++++
 crypto/lrng_kcapi.c |  167 +++++
 2 files changed, 2127 insertions(+)
 create mode 100644 crypto/lrng_base.c
 create mode 100644 crypto/lrng_kcapi.c

diff --git a/crypto/lrng_base.c b/crypto/lrng_base.c
new file mode 100644
index 0000000..ab92298
--- /dev/null
+++ b/crypto/lrng_base.c
@@ -0,0 +1,1960 @@
+/*
+ * Linux Random Number Generator (LRNG)
+ *
+ * Documentation and test code: http://www.chronox.de/lrng.html
+ *
+ * Copyright (C) 2016, Stephan Mueller <smueller@chronox.de>
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, and the entire permission notice in its entirety,
+ *    including the disclaimer of warranties.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ * 3. The name of the author may not be used to endorse or promote
+ *    products derived from this software without specific prior
+ *    written permission.
+ *
+ * ALTERNATIVELY, this product may be distributed under the terms of
+ * the GNU General Public License, in which case the provisions of the GPL2
+ * are required INSTEAD OF the above restrictions.  (This clause is
+ * necessary due to a potential bad interaction between the GPL and
+ * the restrictions contained in a BSD-style copyright.)
+ *
+ * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, ALL OF
+ * WHICH ARE HEREBY DISCLAIMED.  IN NO EVENT SHALL THE AUTHOR BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT
+ * OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
+ * USE OF THIS SOFTWARE, EVEN IF NOT ADVISED OF THE POSSIBILITY OF SUCH
+ * DAMAGE.
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/timex.h>
+#include <linux/percpu.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/fs.h>
+#include <linux/spinlock.h>
+#include <linux/kthread.h>
+#include <linux/random.h>
+#include <linux/workqueue.h>
+#include <linux/poll.h>
+#include <linux/cryptohash.h>
+#include <linux/syscalls.h>
+#include <linux/uuid.h>
+#include <linux/fips.h>
+#include <linux/slab.h>
+
+/*
+ * Define a DRBG plus a hash / MAC used to extract data from the entropy pool.
+ * For LRNG_HASH_NAME you can use a hash or a MAC (HMAC or CMAC) of your choice
+ * (Note, you should use the suggested selections below -- using SHA-1 or MD5
+ * is not wise). The idea is that the used cipher primitive can be selected to
+ * be the same as used for the DRBG. I.e. the LRNG only uses one cipher
+ * primitive using the same cipher implementation with the options offered in
+ * the following. This means, if the CTR DRBG is selected and AES-NI is present,
+ * both the CTR DRBG and the selected cmac(aes) use AES-NI.
+ *
+ * The security strengths of the DRBGs are taken from SP800-57 section 5.6.1.
+ *
+ * This definition is allowed to be changed.
+ */
+#ifdef CONFIG_CRYPTO_DRBG_CTR
+# define LRNG_HASH_NAME "cmac(aes)"
+# if 0
+#  define LRNG_DRBG_BLOCKLEN_BYTES 16
+#  define LRNG_DRBG_SECURITY_STRENGTH_BYTES 16
+#  define LRNG_DRBG_CORE "drbg_nopr_ctr_aes128"		/* CTR DRBG AES-128 */
+# else
+#  define LRNG_DRBG_BLOCKLEN_BYTES 16
+#  define LRNG_DRBG_SECURITY_STRENGTH_BYTES 32
+#  define LRNG_DRBG_CORE "drbg_nopr_ctr_aes256"		/* CTR DRBG AES-256 */
+# endif
+#elif defined CONFIG_CRYPTO_DRBG_HMAC
+# if 0
+#  define LRNG_DRBG_BLOCKLEN_BYTES 32
+#  define LRNG_DRBG_SECURITY_STRENGTH_BYTES 16
+#  define LRNG_DRBG_CORE "drbg_nopr_hmac_sha256"	/* HMAC DRBG SHA-256 */
+#  define LRNG_HASH_NAME "sha256"
+# else
+#  define LRNG_DRBG_BLOCKLEN_BYTES 64
+#  define LRNG_DRBG_SECURITY_STRENGTH_BYTES 32
+#  define LRNG_DRBG_CORE "drbg_nopr_hmac_sha512"	/* HMAC DRBG SHA-512 */
+#  define LRNG_HASH_NAME "sha512"
+# endif
+#elif defined CONFIG_CRYPTO_DRBG_HASH
+# if 0
+#  define LRNG_DRBG_BLOCKLEN_BYTES 32
+#  define LRNG_DRBG_SECURITY_STRENGTH_BYTES 16
+#  define LRNG_DRBG_CORE "drbg_nopr_sha256"		/* Hash DRBG SHA-256 */
+#  define LRNG_HASH_NAME "sha256"
+# else
+#  define LRNG_DRBG_BLOCKLEN_BYTES 64
+#  define LRNG_DRBG_SECURITY_STRENGTH_BYTES 32
+#  define LRNG_DRBG_CORE "drbg_nopr_sha512"		/* Hash DRBG SHA-512 */
+#  define LRNG_HASH_NAME "sha512"
+# endif
+#else
+# define LRNG_DRBG_BLOCKLEN_BYTES 64
+# define LRNG_DRBG_SECURITY_STRENGTH_BYTES 32
+# define LRNG_DRBG_CORE "ChaCha20"			/* ChaCha20 */
+# define LRNG_HASH_NAME "sha1"
+#endif
+
+#define LRNG_DRBG_SECURITY_STRENGTH_BITS (LRNG_DRBG_SECURITY_STRENGTH_BYTES * 8)
+
+/* Alignmask which should cover all cipher implementations */
+#define LRNG_KCAPI_ALIGN 8
+
+/* Primary DRBG state handle */
+struct lrng_pdrbg {
+	void *pdrbg;			/* DRNG handle */
+	bool pdrbg_fully_seeded;	/* Is DRBG fully seeded? */
+	bool pdrbg_min_seeded;		/* Is DRBG minimally seeded? */
+	u32 pdrbg_entropy_bits;		/* Is DRBG entropy level */
+	struct work_struct lrng_seed_work;	/* (re)seed work queue */
+	spinlock_t lock;
+};
+
+/* Secondary DRBG state handle */
+struct lrng_sdrbg {
+	void *sdrbg;			/* DRNG handle */
+	atomic_t requests;		/* Number of DRBG requests */
+	unsigned long last_seeded;	/* Last time it was seeded */
+	bool fully_seeded;		/* Is DRBG fully seeded? */
+	spinlock_t lock;
+};
+
+/*
+ * SP800-90A defines a maximum request size of 1<<16 bytes. The given value is
+ * considered a safer margin. This applies to secondary DRBG.
+ *
+ * This value is allowed to be changed.
+ */
+#define LRNG_DRBG_MAX_REQSIZE (1<<12)
+
+/*
+ * SP800-90A defines a maximum number of requests between reseeds of 1<<48.
+ * The given value is considered a much safer margin, balancing requests for
+ * frequent reseeds with the need to conserve entropy. This value MUST NOT be
+ * larger than INT_MAX because it is used in an atomic_t. This applies to
+ * secondary DRBG.
+ *
+ * This value is allowed to be changed.
+ */
+#define LRNG_DRBG_RESEED_THRESH (1<<12)
+
+/* Status information about IRQ noise source */
+struct lrng_irq_info {
+	atomic_t num_events;	/* Number of non-stuck IRQs since last read */
+	atomic_t num_events_thresh;	/* Reseed threshold */
+	atomic_t last_time;	/* Stuck test: time of previous IRQ */
+	atomic_t last_delta;	/* Stuck test: delta of previous IRQ */
+	atomic_t last_delta2;	/* Stuck test: 2. time derivation of prev IRQ */
+	atomic_t reseed_in_progress;	/* Flag for on executing reseed */
+	atomic_t crngt_ctr;	/* FIPS 140-2 CRNGT counter */
+	bool irq_highres_timer;	/* Is high-resolution timer available? */
+	u32 irq_entropy_bits;	/* LRNG_IRQ_ENTROPY_BITS? */
+};
+
+/*
+ * According to FIPS 140-2 IG 9.8, our C threshold is at 3 back to back stuck
+ * values. It should be highly unlikely that we see three consecutive
+ * identical time stamps.
+ *
+ * This value is allowed to be changed.
+ */
+#define LRNG_FIPS_CRNGT 3
+
+/*
+ * This is the entropy pool used by the slow noise source. Its size should
+ * be at least as large as the interrupt entropy estimate.
+ *
+ * The pool array is aligned to 8 bytes to comfort the kernel crypto API cipher
+ * implementations: for some accelerated implementations, we need an alignment
+ * to avoid a realignment which involes memcpy(). The alignment to 8 bytes
+ * should satisfy all crypto implementations.
+ *
+ * LRNG_POOL_SIZE is allowed to be changed only if the taps for the LFSR are
+ * changed as well. The size must be in powers of 2 due to the mask handling in
+ * lrng_pool_lfsr which uses AND instead of modulo.
+ *
+ * The polynomials for the LFSR are taken from the following URL
+ * which lists primitive polynomials
+ * http://courses.cse.tamu.edu/csce680/walker/lfsr_table.pdf. The first
+ * polynomial is from "Primitive Binary Polynomials" by Wayne Stahnke (1993)
+ * and is primitive as well as irreducible.
+ *
+ * Note, the tap values are smaller by one compared to the documentation because
+ * they are used as an index into an array where the index starts by zero.
+ *
+ * All polynomials were also checked to be primitive with magma.
+ */
+static u32 const lrng_lfsr_polynomial[] =
+	{ 127, 28, 26, 1 };			/* 128 words by Stahnke */
+	/* { 255, 253, 250, 245 }; */		/* 256 words */
+	/* { 511, 509, 506, 503 }; */		/* 512 words */
+	/* { 1023, 1014, 1001, 1000 }; */	/* 1024 words */
+	/* { 2047, 2034, 2033, 2028 }; */	/* 2048 words */
+	/* { 4095, 4094, 4080, 4068 }; */	/* 4096 words */
+struct lrng_pool {
+#define LRNG_POOL_SIZE 128
+#define LRNG_POOL_WORD_BYTES (sizeof(atomic_t))
+#define LRNG_POOL_SIZE_BYTES (LRNG_POOL_SIZE * LRNG_POOL_WORD_BYTES)
+#define LRNG_POOL_SIZE_BITS (LRNG_POOL_SIZE_BYTES * 8)
+#define LRNG_POOL_WORD_BITS (LRNG_POOL_WORD_BYTES * 8)
+	atomic_t pool[LRNG_POOL_SIZE] __aligned(LRNG_KCAPI_ALIGN); /* Pool */
+	atomic_t pool_ptr;	/* Ptr into pool for next IRQ word injection */
+	atomic_t input_rotate;	/* rotate for LFSR */
+	u32 last_numa_node;	/* Last NUMA node */
+	void *lrng_hash;
+	struct lrng_irq_info irq_info;	/* IRQ noise source status info */
+};
+
+/*
+ * Number of interrupts to be recorded to assume that DRBG security strength
+ * bits of entropy are received.
+ * Note: a value below the DRBG security strength should not be defined as this
+ *	 may imply the DRBG can never be fully seeded in case other noise
+ *	 sources are unavailable.
+ *
+ * This value is allowed to be changed.
+ */
+#define LRNG_IRQ_ENTROPY_BYTES (LRNG_DRBG_SECURITY_STRENGTH_BYTES)
+#define LRNG_IRQ_ENTROPY_BITS (LRNG_IRQ_ENTROPY_BYTES * 8)
+
+/*
+ * Leave given amount of entropy in bits entropy pool to serve /dev/random while
+ * /dev/urandom is stressed.
+ *
+ * This value is allowed to be changed.
+ */
+#define LRNG_EMERG_ENTROPY (LRNG_DRBG_SECURITY_STRENGTH_BITS * 2)
+
+/*
+ * Min required seed entropy is 112 bits as per FIPS 140-2 and AIS20/31.
+ *
+ * This value is allowed to be changed.
+ */
+#define LRNG_MIN_SEED_ENTROPY_BITS 112
+
+#define LRNG_INIT_ENTROPY_BITS 32
+/*
+ * Oversampling factor of IRQ events to obtain
+ * LRNG_DRBG_SECURITY_STRENGTH_BYTES. This factor is used when a
+ * high-resolution time stamp is not available. In this case, jiffies and
+ * register contents are used to fill the entropy pool. These noise sources
+ * are much less entropic than the high-resolution timer. The entropy content
+ * is the entropy content assumed with LRNG_IRQ_ENTROPY_BYTES divided by
+ * LRNG_IRQ_OVERSAMPLING_FACTOR.
+ *
+ * This value is allowed to be changed.
+ */
+#define LRNG_IRQ_OVERSAMPLING_FACTOR 10
+
+static struct lrng_pdrbg lrng_pdrbg = {
+	.lock = __SPIN_LOCK_UNLOCKED(lrng.pdrbg.lock)
+};
+
+static struct lrng_sdrbg **lrng_sdrbg __read_mostly;
+
+static struct lrng_pool lrng_pool = {
+	.irq_info = {
+		.crngt_ctr = ATOMIC_INIT(LRNG_FIPS_CRNGT),
+	},
+};
+
+static LIST_HEAD(lrng_ready_list);
+static DEFINE_SPINLOCK(lrng_ready_list_lock);
+
+static atomic_t lrng_pdrbg_avail = ATOMIC_INIT(0);
+static atomic_t lrng_initrng_bytes = ATOMIC_INIT(0);
+static DEFINE_SPINLOCK(lrng_init_rng_lock);	/* Lock the init RNG state */
+
+static DECLARE_WAIT_QUEUE_HEAD(lrng_read_wait);
+static DECLARE_WAIT_QUEUE_HEAD(lrng_write_wait);
+static DECLARE_WAIT_QUEUE_HEAD(lrng_pdrbg_init_wait);
+static struct fasync_struct *fasync;
+
+/*
+ * Estimated entropy of data is a 32th of LRNG_DRBG_SECURITY_STRENGTH_BITS.
+ * As we have no ability to review the implementation of those noise sources,
+ * it is prudent to have a conservative estimate here.
+ */
+static u32 archrandom = LRNG_DRBG_SECURITY_STRENGTH_BITS>>5;
+module_param(archrandom, uint, S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH);
+MODULE_PARM_DESC(archrandom, "Entropy in bits of 256 data bits from CPU noise source (e.g. RDRAND)");
+
+/*
+ * If the entropy count falls under this number of bits, then we
+ * should wake up processes which are selecting or polling on write
+ * access to /dev/random.
+ */
+static u32 lrng_write_wakeup_bits = LRNG_DRBG_SECURITY_STRENGTH_BITS;
+
+/*
+ * The minimum number of bits of entropy before we wake up a read on
+ * /dev/random.
+ */
+static u32 lrng_read_wakeup_bits = LRNG_POOL_WORD_BITS * 2;
+
+/*
+ * Maximum number of seconds between DRBG reseed intervals of the secondary
+ * DRBG. Note, this is enforced with the next request of random numbers from
+ * the secondary DRBG. Setting this value to zero implies a reseeding attempt
+ * before every generated random number.
+ */
+static int lrng_sdrbg_reseed_max_time = 600;
+
+/************************** Crypto Implementations ***************************/
+
+/**
+ * Allocate DRNG -- the provided integers should be used for sanity checks.
+ * @return: allocated data structure or NULL on error
+ */
+void *lrng_drng_alloc(u8 *drng_name, u32 blocklen_bytes, u32 sec_strength);
+
+/* Deallocate DRNG */
+void lrng_drng_dealloc(void *drng);
+
+/**
+ * Seed the DRNG with data of arbitrary length
+ * @drng: is pointer to data structure allocated with lrng_drng_alloc
+ * @return: >= 0 on success, < 0 on error
+ */
+int lrng_drng_seed_helper(void *drng, const u8 *inbuf, u32 inbuflen);
+
+/**
+ * Generate random numbers from the DRNG with arbitrary length
+ * @return: generated number of bytes, < 0 on error
+ */
+int lrng_drng_generate_helper(void *drng, u8 *outbuf, u32 outbuflen);
+
+/**
+ * Allocate the hash for reading the entropy pool
+ * @return: allocated data structure (NULL is success too) or ERR_PTR on error
+ */
+void *lrng_hash_alloc(u8 *hashname, u8 *key, u32 keylen);
+
+/**
+ * Return the digestsize for the used hash to read out entropy pool
+ * @hash: is pointer to data structure allocated with lrng_hash_alloc
+ * @return: size of digest of hash in bytes
+ */
+u32 lrng_hash_digestsize(void *hash);
+
+/**
+ * Generate hash
+ * @hash: is pointer to data structure allocated with lrng_hash_alloc
+ * @return: 0 on success, < 0 on error
+ */
+int lrng_hash_buffer(void *hash, u8 *inbuf, u32 inbuflen, u8 *digest);
+
+/********************************** Helper ***********************************/
+
+static inline u32 atomic_read_u32(atomic_t *v)
+{
+	return (u32)atomic_read(v);
+}
+
+static inline u32 atomic_xchg_u32(atomic_t *v, u32 x)
+{
+	return (u32)atomic_xchg(v, x);
+}
+
+static inline u32 lrng_entropy_to_data(u32 entropy_bits)
+{
+	return ((entropy_bits * lrng_pool.irq_info.irq_entropy_bits) /
+		LRNG_DRBG_SECURITY_STRENGTH_BITS);
+}
+
+static inline u32 lrng_data_to_entropy(u32 irqnum)
+{
+	return ((irqnum * LRNG_DRBG_SECURITY_STRENGTH_BITS) /
+		lrng_pool.irq_info.irq_entropy_bits);
+}
+
+static inline u32 lrng_avail_entropy(void)
+{
+	return min_t(u32, LRNG_POOL_SIZE_BITS,
+		     lrng_data_to_entropy(atomic_read_u32(
+					&lrng_pool.irq_info.num_events)));
+}
+
+static inline void lrng_set_entropy_thresh(u32 new)
+{
+	atomic_set(&lrng_pool.irq_info.num_events_thresh,
+		   lrng_entropy_to_data(new));
+}
+
+/* Is the primary DRBG seed level too low? */
+static inline bool lrng_need_entropy(void)
+{
+	return (lrng_pdrbg.pdrbg_entropy_bits < lrng_write_wakeup_bits);
+}
+
+/* Is the entropy pool filled for /dev/random pull or DRBG fully seeded? */
+static inline bool lrng_have_entropy_full(void)
+{
+	return ((lrng_avail_entropy() >= lrng_read_wakeup_bits) ||
+		lrng_pdrbg.pdrbg_entropy_bits >=
+					LRNG_DRBG_SECURITY_STRENGTH_BITS);
+}
+
+/*********************** Fast soise source processing ************************/
+
+/**
+ * Get CPU noise source entropy
+ *
+ * @outbuf: buffer to store entropy of size LRNG_DRBG_SECURITY_STRENGTH_BYTES
+ * @return: > 0 on success where value provides the added entropy in bits
+ *	    0 if no fast source was available
+ */
+static inline u32 lrng_get_arch(u8 *outbuf)
+{
+	u32 i;
+	u32 ent_bits = archrandom;
+
+	/* operate on full blocks */
+	BUILD_BUG_ON(LRNG_DRBG_SECURITY_STRENGTH_BYTES % sizeof(unsigned long));
+
+	if (!ent_bits)
+		return 0;
+
+	for (i = 0; i < LRNG_DRBG_SECURITY_STRENGTH_BYTES;
+	     i += sizeof(unsigned long)) {
+		if (!arch_get_random_long((unsigned long *)(outbuf + i))) {
+			archrandom = 0;
+			return 0;
+		}
+	}
+
+	/* Obtain entropy statement -- cap entropy to buffer size in bits */
+	ent_bits = min_t(u32, ent_bits, LRNG_DRBG_SECURITY_STRENGTH_BITS);
+	pr_debug("obtained %u bits of entropy from CPU RNG noise source\n",
+		 ent_bits);
+	return ent_bits;
+}
+
+/************************ Slow noise source processing ************************/
+
+/*
+ * Implement a (modified) twisted Generalized Feedback Shift Register. (See M.
+ * Matsumoto & Y. Kurita, 1992.  Twisted GFSR generators. ACM Transactions on
+ * Modeling and Computer Simulation 2(3):179-194.  Also see M. Matsumoto & Y.
+ * Kurita, 1994.  Twisted GFSR generators II.  ACM Transactions on Modeling and
+ * Computer Simulation 4:254-266).
+ */
+static u32 const lrng_twist_table[8] = {
+	0x00000000, 0x3b6e20c8, 0x76dc4190, 0x4db26158,
+	0xedb88320, 0xd6d6a3e8, 0x9b64c2b0, 0xa00ae278 };
+
+/**
+ * Hot code path - inject data into entropy pool using LFSR
+ */
+static void lrng_pool_lfsr(const u8 *buf, u32 buflen)
+{
+	u32 mask = (LRNG_POOL_SIZE - 1);
+
+	while (buflen--) {
+		u32 ptr = (u32)atomic_add_return(1, &lrng_pool.pool_ptr) & mask;
+		/*
+		 * Add 7 bits of rotation to the pool. At the beginning of the
+		 * pool, add an extra 7 bits rotation, so that successive passes
+		 * spread the input bits across the pool evenly.
+		 */
+		u32 input_rotate = (u32)atomic_add_return((ptr ? 7 : 14),
+						&lrng_pool.input_rotate) & 31;
+		u32 word = rol32(*buf++, input_rotate);
+
+		BUILD_BUG_ON(LRNG_POOL_SIZE - 1 != lrng_lfsr_polynomial[0]);
+		word ^= atomic_read_u32(&lrng_pool.pool[ptr]);
+		word ^= atomic_read_u32(&lrng_pool.pool[
+				(ptr + lrng_lfsr_polynomial[0]) & mask]);
+		word ^= atomic_read_u32(&lrng_pool.pool[
+				(ptr + lrng_lfsr_polynomial[1]) & mask]);
+		word ^= atomic_read_u32(&lrng_pool.pool[
+				(ptr + lrng_lfsr_polynomial[2]) & mask]);
+		word ^= atomic_read_u32(&lrng_pool.pool[
+				(ptr + lrng_lfsr_polynomial[3]) & mask]);
+
+		word = (word >> 3) ^ lrng_twist_table[word & 7];
+		atomic_set(&lrng_pool.pool[ptr], word);
+	}
+}
+
+/**
+ * Hot code path - Stuck test by checking the:
+ *      1st derivation of the event occurrence (time delta)
+ *      2nd derivation of the event occurrence (delta of time deltas)
+ *      3rd derivation of the event occurrence (delta of delta of time deltas)
+ *
+ * All values must always be non-zero. This is also the FIPS 140-2 CRNGT.
+ *
+ * @irq_info: Reference to IRQ information
+ * @now: Event time
+ * @return: 0 event occurrence not stuck (good bit)
+ *	    1 event occurrence stuck (reject bit)
+ */
+static int lrng_irq_stuck(struct lrng_irq_info *irq_info, u32 now_time)
+{
+	u32 delta = now_time - atomic_xchg_u32(&irq_info->last_time, now_time);
+	int delta2 = delta - atomic_xchg_u32(&irq_info->last_delta, delta);
+	int delta3 = delta2 - atomic_xchg(&irq_info->last_delta2, delta2);
+
+#ifdef CONFIG_CRYPTO_FIPS
+	if (fips_enabled) {
+		if (!delta) {
+			if (atomic_dec_and_test(&irq_info->crngt_ctr))
+				panic("FIPS 140-2 continuous random number generator test failed\n");
+		} else
+			atomic_set(&irq_info->crngt_ctr, LRNG_FIPS_CRNGT);
+	}
+#endif
+
+	if (!delta || !delta2 || !delta3)
+		return 1;
+
+	return 0;
+}
+
+/**
+ * Hot code path - mix data into entropy pool
+ */
+static inline void lrng_pool_mixin(const u8 *buf, u32 buflen, u32 irq_num)
+{
+	lrng_pool_lfsr(buf, buflen);
+
+	/* Should we wake readers? */
+	if (irq_num == lrng_entropy_to_data(lrng_read_wakeup_bits)) {
+		wake_up_interruptible(&lrng_read_wait);
+		kill_fasync(&fasync, SIGIO, POLL_IN);
+	}
+
+	/* Only try to reseed if the DRBG is alive. */
+	if (!atomic_read(&lrng_pdrbg_avail))
+		return;
+
+	/*
+	 * Once all secondary DRBGs are fully seeded, the interrupt noise
+	 * sources will not trigger any reseeding any more.
+	 */
+	if (lrng_sdrbg[lrng_pool.last_numa_node]->fully_seeded)
+		return;
+
+	/* Only trigger the DRBG reseed if we have collected enough IRQs. */
+	if (atomic_read_u32(&lrng_pool.irq_info.num_events) <
+	    atomic_read_u32(&lrng_pool.irq_info.num_events_thresh))
+		return;
+
+	/* Ensure that the seeding only occurs once at any given time. */
+	if (atomic_cmpxchg(&lrng_pool.irq_info.reseed_in_progress, 0, 1))
+		return;
+
+	/* Seed the DRBG with IRQ noise. */
+	schedule_work(&lrng_pdrbg.lrng_seed_work);
+}
+
+/**
+ * Hot code path - Callback for interrupt handler
+ */
+void add_interrupt_randomness(int irq, int irq_flags)
+{
+	u32 now_time = random_get_entropy();
+	struct lrng_irq_info *irq_info = &lrng_pool.irq_info;
+	u32 irq_num = (u32)atomic_add_return(1, &irq_info->num_events);
+
+	if (lrng_pool.irq_info.irq_highres_timer) {
+		if (lrng_irq_stuck(irq_info, now_time))
+			return;
+		lrng_pool_mixin((u8 *)&now_time, sizeof(now_time), irq_num);
+	} else {
+		struct pt_regs *regs = get_irq_regs();
+		static atomic_t reg_idx = ATOMIC_INIT(0);
+
+		struct {
+			long jiffies;
+			int irq;
+			int irq_flags;
+			u64 ip;
+			u32 curr_reg;
+		} data;
+
+		data.jiffies = jiffies;
+		data.irq = irq;
+		data.irq_flags = irq_flags;
+		if (regs) {
+			u32 *ptr = (u32 *)regs;
+
+			data.ip = instruction_pointer(regs);
+			if (atomic_read(&reg_idx) >=
+					sizeof(struct pt_regs) / sizeof(u32))
+				atomic_set(&reg_idx, 0);
+			data.curr_reg = *(ptr + atomic_add_return(1, &reg_idx));
+		}
+
+		lrng_pool_mixin((u8 *)&data, sizeof(data), irq_num);
+	}
+}
+EXPORT_SYMBOL(add_interrupt_randomness);
+
+/**
+ * Callback for HID layer
+ */
+void add_input_randomness(unsigned int type, unsigned int code,
+			  unsigned int value)
+{
+	static unsigned char last_value;
+	unsigned int val;
+
+	/* ignore autorepeat and the like */
+	if (value == last_value)
+		return;
+
+	last_value = value;
+
+	val = (type << 4) ^ code ^ (code >> 4) ^ value;
+	lrng_pool_mixin((u8 *)&val, sizeof(val), 0);
+}
+EXPORT_SYMBOL_GPL(add_input_randomness);
+
+/**
+ * Read the entropy pool out for use. The caller must ensure this function
+ * is only called once at a time.
+ *
+ * This function handles the translation from the number of received interrupts
+ * into an entropy statement. The conversion depends on LRNG_IRQ_ENTROPY_BYTES
+ * which defines how many interrupts must be received to obtain 256 bits of
+ * entropy. With this value, the function lrng_data_to_entropy converts a given
+ * data size (received interrupts, requested amount of data, etc.) into an
+ * entropy statement. lrng_entropy_to_data does the reverse.
+ *
+ * Both functions are agnostic about the type of data: when the number of
+ * interrupts is processed by these functions, the resulting entropy value is in
+ * bits as we assume the entropy of interrupts is measured in bits. When data is
+ * processed, the entropy value is in bytes as the data is measured in bytes.
+ *
+ * @outbuf: buffer to store data in with size LRNG_DRBG_SECURITY_STRENGTH_BYTES
+ * @requested_entropy_bits: requested bits of entropy -- the function will
+ *			    return at least this amount of entropy if available
+ * @drain: boolean indicating that that all entropy of pool can be used
+ *	   (otherwise some emergency amount of entropy is left)
+ * @return: estimated entropy from the IRQs that went into the pool since last
+ *	    readout.
+ */
+static u32 lrng_get_pool(u8 *outbuf, u32 requested_entropy_bits, bool drain)
+{
+	u32 i, avail_entropy_bytes, irq_num_events_used, irq_num_event_back;
+	/* How many unused interrupts are in entropy pool? */
+	u32 irq_num_events = atomic_xchg_u32(&lrng_pool.irq_info.num_events, 0);
+	/* Convert available interrupts into entropy statement */
+	u32 avail_entropy_bits = lrng_data_to_entropy(irq_num_events);
+	u32 digestsize = lrng_hash_digestsize(lrng_pool.lrng_hash);
+	u8 digest[digestsize] __aligned(LRNG_KCAPI_ALIGN);
+
+	/* Cap available entropy to pool size */
+	avail_entropy_bits =
+			min_t(u32, avail_entropy_bits, LRNG_POOL_SIZE_BITS);
+
+	/* How much entropy we need to and can we use? */
+	if (drain)
+		avail_entropy_bits = min_t(u32, avail_entropy_bits,
+					   requested_entropy_bits);
+	else
+		avail_entropy_bits = min_t(u32, (avail_entropy_bits -
+			 min_t(u32, LRNG_EMERG_ENTROPY, avail_entropy_bits)),
+						requested_entropy_bits);
+
+	/* Hash is a compression function: we generate entropy amount of data */
+	avail_entropy_bits = round_down(avail_entropy_bits, 8);
+	avail_entropy_bytes = avail_entropy_bits >> 3;
+	BUG_ON(avail_entropy_bytes > LRNG_DRBG_SECURITY_STRENGTH_BYTES);
+
+	/* Hash the entire entropy pool */
+	for (i = 0;
+	     i < LRNG_DRBG_SECURITY_STRENGTH_BYTES && avail_entropy_bytes > 0;
+	     i += digestsize) {
+		u32 tocopy = min3(avail_entropy_bytes, digestsize,
+				  (LRNG_DRBG_SECURITY_STRENGTH_BYTES - i));
+
+		if (lrng_hash_buffer(lrng_pool.lrng_hash, (u8 *)lrng_pool.pool,
+				     LRNG_POOL_SIZE_BYTES, digest))
+			return 0;
+
+		/* Mix read data back into pool for backtracking resistance */
+		lrng_pool_lfsr(digest, digestsize);
+		/* Copy the data out to the caller */
+		memcpy(outbuf + i, digest, tocopy);
+		avail_entropy_bytes -= tocopy;
+	}
+	memzero_explicit(digest, digestsize);
+
+	/* There may be new events that came in while we processed this logic */
+	irq_num_events += atomic_xchg_u32(&lrng_pool.irq_info.num_events, 0);
+	/* Convert used entropy into interrupt number for subtraction */
+	irq_num_events_used = lrng_entropy_to_data(avail_entropy_bits);
+	/* Cap the number of events we say we have left to not reuse events */
+	irq_num_event_back = min_t(u32, irq_num_events - irq_num_events_used,
+				   lrng_entropy_to_data(LRNG_POOL_SIZE_BITS) -
+				    irq_num_events_used);
+	/* Add the unused interrupt number back to the state variable */
+	atomic_add(irq_num_event_back, &lrng_pool.irq_info.num_events);
+
+	/* Obtain entropy statement in bits from the used entropy */
+	pr_debug("obtained %u bits of entropy from %u newly collected interrupts - not using %u interrupts\n",
+		 avail_entropy_bits, irq_num_events_used, irq_num_event_back);
+
+	return avail_entropy_bits;
+}
+
+/****************************** DRBG processing *******************************/
+
+/**
+ * Ping all kernel internal callers waiting until the DRBG is fully
+ * seeded that the DRBG is now fully seeded.
+ */
+static void lrng_process_ready_list(void)
+{
+	unsigned long flags;
+	struct random_ready_callback *rdy, *tmp;
+
+	spin_lock_irqsave(&lrng_ready_list_lock, flags);
+	list_for_each_entry_safe(rdy, tmp, &lrng_ready_list, list) {
+		struct module *owner = rdy->owner;
+
+		list_del_init(&rdy->list);
+		rdy->func(rdy);
+		module_put(owner);
+	}
+	spin_unlock_irqrestore(&lrng_ready_list_lock, flags);
+}
+
+/**
+ * Set the slow noise source reseed trigger threshold. The initial threshold
+ * is set to the minimum data size that can be read from the pool: a word. Upon
+ * reaching this value, the next seed threshold of 112 bits is set followed
+ * by 256 bits.
+ *
+ * @entropy_bits: size of entropy currently injected into DRBG
+ */
+static void lrng_pdrbg_init_ops(u32 entropy_bits)
+{
+	if (lrng_pdrbg.pdrbg_fully_seeded)
+		return;
+
+	/* DRBG is seeded with full security strength */
+	if (entropy_bits >= LRNG_DRBG_SECURITY_STRENGTH_BITS) {
+		lrng_pdrbg.pdrbg_fully_seeded = true;
+		lrng_pdrbg.pdrbg_min_seeded = true;
+		pr_info("primary DRBG fully seeded with %u bits of entropy\n",
+			entropy_bits);
+		lrng_process_ready_list();
+		wake_up_all(&lrng_pdrbg_init_wait);
+
+	} else if (!lrng_pdrbg.pdrbg_min_seeded) {
+
+		/* DRBG is seeded with at least 112 bits of entropy */
+		if (entropy_bits >= LRNG_MIN_SEED_ENTROPY_BITS) {
+			lrng_pdrbg.pdrbg_min_seeded = true;
+			pr_info("primary DRBG minimally seeded with %u bits of entropy\n",
+				entropy_bits);
+			lrng_set_entropy_thresh(
+					LRNG_DRBG_SECURITY_STRENGTH_BITS);
+
+		/* DRBG is seeded with at least LRNG_INIT_ENTROPY_BITS bits */
+		} else if (entropy_bits >= LRNG_INIT_ENTROPY_BITS) {
+			pr_info("primary DRBG initially seeded with %u bits of entropy\n",
+				entropy_bits);
+			lrng_set_entropy_thresh(LRNG_MIN_SEED_ENTROPY_BITS);
+		}
+	}
+}
+
+/* Caller must hold lrng_pdrbg.lock */
+static int lrng_pdrbg_generate(u8 *outbuf, u32 outbuflen, bool fullentropy)
+{
+	int ret;
+
+	/* /dev/random only works from a fully seeded DRBG */
+	if (fullentropy && !lrng_pdrbg.pdrbg_fully_seeded)
+		return 0;
+
+	/*
+	 * Only deliver as many bytes as the DRBG is seeded with except during
+	 * initialization to provide a first seed to the secondary DRBG.
+	 */
+	if (lrng_pdrbg.pdrbg_min_seeded)
+		outbuflen = min_t(u32, outbuflen,
+				  lrng_pdrbg.pdrbg_entropy_bits>>3);
+	else
+		outbuflen = min_t(u32, outbuflen,
+				  LRNG_MIN_SEED_ENTROPY_BITS>>3);
+
+	ret = lrng_drng_generate_helper(lrng_pdrbg.pdrbg, outbuf, outbuflen);
+	if (ret != outbuflen) {
+		pr_warn("getting random data from primary DRBG failed (%d)\n",
+			ret);
+		return ret;
+	}
+
+	if (lrng_pdrbg.pdrbg_entropy_bits > (u32)(ret<<3))
+		lrng_pdrbg.pdrbg_entropy_bits -= ret<<3;
+	else
+		lrng_pdrbg.pdrbg_entropy_bits = 0;
+	pr_debug("obtained %d bytes of random data from primary DRBG\n", ret);
+	pr_debug("primary DRBG entropy level at %u bits\n",
+		 lrng_pdrbg.pdrbg_entropy_bits);
+
+	return ret;
+}
+
+/**
+ * Inject data into the primary DRBG with a given entropy value. The function
+ * calls the DRBG's update function. This function also generates random data
+ * if requested by caller. The caller is only returned the amount of random
+ * data that is at most equal to the amount of entropy that just seeded the
+ * DRBG.
+ *
+ * Note, this function seeds the primary DRBG and generates data from it
+ * in an atomic operation.
+ *
+ * @inbuf: buffer to inject
+ * @inbuflen: length of inbuf
+ * @entropy_bits: entropy value of the data in inbuf in bits
+ * @outbuf: buffer to fill immediately after seeding to get full entropy
+ * @outbuflen: length of outbuf
+ * @fullentropy: start /dev/random output only after the DRBG was fully seeded
+ * @return: number of bytes written to outbuf, 0 if outbuf is not supplied,
+ *	    or < 0 in case of error
+ */
+static int lrng_pdrbg_inject(const u8 *inbuf, u32 inbuflen, u32 entropy_bits,
+			     u8 *outbuf, u32 outbuflen, bool fullentropy)
+{
+	int ret;
+	unsigned long flags;
+
+	/* cap the maximum entropy value to the provided data length */
+	entropy_bits = min_t(u32, entropy_bits, inbuflen<<3);
+
+	spin_lock_irqsave(&lrng_pdrbg.lock, flags);
+	ret = lrng_drng_seed_helper(lrng_pdrbg.pdrbg, inbuf, inbuflen);
+	if (ret < 0) {
+		pr_warn("(re)seeding of primary DRBG failed\n");
+		goto unlock;
+	}
+	pr_debug("inject %u bytes with %u bits of entropy into primary DRBG\n",
+		 inbuflen, entropy_bits);
+
+	/* Adjust the fill level indicator to at most the DRBG sec strength */
+	lrng_pdrbg.pdrbg_entropy_bits =
+		min_t(u32, lrng_pdrbg.pdrbg_entropy_bits + entropy_bits,
+		      LRNG_DRBG_SECURITY_STRENGTH_BITS);
+	lrng_pdrbg_init_ops(lrng_pdrbg.pdrbg_entropy_bits);
+
+	if (outbuf && outbuflen)
+		ret = lrng_pdrbg_generate(outbuf, outbuflen, fullentropy);
+
+unlock:
+	spin_unlock_irqrestore(&lrng_pdrbg.lock, flags);
+
+	if (lrng_have_entropy_full()) {
+		/* Wake readers */
+		wake_up_interruptible(&lrng_read_wait);
+		kill_fasync(&fasync, SIGIO, POLL_IN);
+	}
+
+	return ret;
+}
+
+/**
+ * Seed the DRBG from the internal noise sources.
+ */
+static int lrng_pdrbg_seed_internal(u8 *outbuf, u32 outbuflen, bool fullentropy,
+				    bool drain)
+{
+	u32 total_entropy_bits, now;
+	u8 entropy_buf[(LRNG_DRBG_SECURITY_STRENGTH_BYTES * 2) + sizeof(now)]
+						__aligned(LRNG_KCAPI_ALIGN);
+	int ret;
+
+	/* No reseeding if sufficient entropy in primary DRBG */
+	if (lrng_pdrbg.pdrbg_entropy_bits >= outbuflen<<3) {
+		unsigned long flags;
+
+		spin_lock_irqsave(&lrng_pdrbg.lock, flags);
+		ret = lrng_pdrbg_generate(outbuf, outbuflen, fullentropy);
+		spin_unlock_irqrestore(&lrng_pdrbg.lock, flags);
+		if (ret == outbuflen)
+			goto out;
+	}
+
+	/*
+	 * drain the pool completely during init and when /dev/random calls.
+	 *
+	 * lrng_get_pool must be guaranteed to be called with multiples of 8
+	 * (bits) of entropy as it can only operate byte-wise.
+	 */
+	total_entropy_bits = lrng_get_pool(entropy_buf,
+					   LRNG_DRBG_SECURITY_STRENGTH_BITS,
+					   drain);
+
+	/*
+	 * Prevent domination of fast noise sources over slow noise sources
+	 * in case /dev/random calls. Only when the slow noise sources have
+	 * some entropy, pull from the fast noise sources and inject all
+	 * into the DRBG.
+	 */
+	if (!total_entropy_bits && drain) {
+		ret = 0;
+		goto memzero;
+	}
+
+	/*
+	 * Concatenate the output of the noise sources. This would be the
+	 * spot to add an entropy extractor logic if desired. Note, this
+	 * entirety should have the ability to collect entropy equal or larger
+	 * than the DRBG strength to be able to feed /dev/random.
+	 */
+	total_entropy_bits += lrng_get_arch(entropy_buf +
+					    LRNG_DRBG_SECURITY_STRENGTH_BYTES);
+
+	pr_debug("reseed primary DRBG from internal noise sources with %u bits of entropy\n",
+		 total_entropy_bits);
+
+	/* also reseed the DRBG with the current time stamp */
+	now = random_get_entropy();
+	memcpy(entropy_buf + (LRNG_DRBG_SECURITY_STRENGTH_BYTES * 2), &now,
+	       sizeof(now));
+
+	ret = lrng_pdrbg_inject(entropy_buf, sizeof(entropy_buf),
+				total_entropy_bits,
+				outbuf, outbuflen, fullentropy);
+
+memzero:
+	memzero_explicit(entropy_buf, sizeof(entropy_buf));
+
+	/*
+	 * Shall we wake up user space writers? This location covers
+	 * /dev/urandom as well, but also ensures that the user space provider
+	 * does not dominate the internal noise sources since in case the
+	 * first call of this function finds sufficient entropy in the primary
+	 * DRBG, it will not trigger the wakeup. This implies that when the next
+	 * /dev/urandom read happens, the primary DRBG is drained and the
+	 * internal noise sources are asked to feed the primary DRBG.
+	 */
+	if (lrng_need_entropy()) {
+		wake_up_interruptible(&lrng_write_wait);
+		kill_fasync(&fasync, SIGIO, POLL_OUT);
+	}
+
+out:
+	/* Allow the seeding operation to be called again */
+	atomic_set(&lrng_pool.irq_info.reseed_in_progress, 0);
+
+	return ret;
+}
+
+/**
+ * Inject a data buffer into the secondary DRBG
+ *
+ * @sdrbg: reference to secondary DRBG
+ * @inbuf: buffer with data to inject
+ * @inbuflen: buffer length
+ * @internal: did random data originate from internal sources? Update the
+ *	      reseed threshold and the reseed timer when seeded with entropic
+ *	      data from noise sources to prevent unprivileged users from
+ *	      stopping reseeding the secondary DRBG with entropic data.
+ */
+static void lrng_sdrbg_inject(struct lrng_sdrbg *sdrbg,
+			      u8 *inbuf, u32 inbuflen, bool internal)
+{
+	unsigned long flags;
+
+	BUILD_BUG_ON(LRNG_DRBG_RESEED_THRESH > INT_MAX);
+	pr_debug("seeding secondary DRBG with %u bytes\n", inbuflen);
+	spin_lock_irqsave(&sdrbg->lock, flags);
+	if (lrng_drng_seed_helper(sdrbg->sdrbg, inbuf, inbuflen) < 0) {
+		pr_warn("seeding of secondary DRBG failed\n");
+		atomic_set(&sdrbg->requests, 1);
+	} else if (internal) {
+		pr_debug("secondary DRBG stats since last seeding: %lu secs; generate calls: %d\n",
+			 (jiffies - sdrbg->last_seeded) / HZ,
+			 (LRNG_DRBG_RESEED_THRESH -
+			  atomic_read(&sdrbg->requests)));
+		sdrbg->last_seeded = jiffies;
+		atomic_set(&sdrbg->requests, LRNG_DRBG_RESEED_THRESH);
+	}
+	spin_unlock_irqrestore(&sdrbg->lock, flags);
+}
+
+/**
+ * Try to seed the secondary DRBG
+ *
+ * @sdrbg: reference to secondary DRBG
+ * @seedfunc: function to use to seed and obtain random data from primary DRBG
+ */
+static void lrng_sdrbg_seed(struct lrng_sdrbg *sdrbg,
+	int (*seed_func)(u8 *outbuf, u32 outbuflen, bool fullentropy,
+			 bool drain))
+{
+	u8 seedbuf[LRNG_DRBG_SECURITY_STRENGTH_BYTES];
+	int ret;
+
+	BUILD_BUG_ON(LRNG_MIN_SEED_ENTROPY_BITS >
+		     LRNG_DRBG_SECURITY_STRENGTH_BITS);
+
+	pr_debug("reseed of secondary DRBG triggered\n");
+	ret = seed_func(seedbuf, LRNG_DRBG_SECURITY_STRENGTH_BYTES, false,
+			!sdrbg->fully_seeded);
+	/* Update the DRBG state even though we received zero random data */
+	if (ret < 0) {
+		/*
+		 * Try to reseed at next round - note if EINPROGRESS is returned
+		 * the request counter may fall below zero in case of parallel
+		 * operations. We accept such "underflow" temporarily as the
+		 * counter will be set back to a positive number in the course
+		 * of the reseed. For these few generate operations under
+		 * heavy parallel strain of /dev/urandom we therefore exceed
+		 * the LRNG_DRBG_RESEED_THRESH threshold.
+		 */
+		if (ret != -EINPROGRESS)
+			atomic_set(&sdrbg->requests, 1);
+		return;
+	}
+
+	lrng_sdrbg_inject(sdrbg, seedbuf, ret, true);
+	memzero_explicit(seedbuf, ret);
+
+	if (ret >= LRNG_DRBG_SECURITY_STRENGTH_BYTES)
+		sdrbg->fully_seeded = true;
+}
+
+/**
+ * DRBG reseed trigger: Kernel thread handler triggered by the schedule_work()
+ */
+static void lrng_pdrbg_seed_work(struct work_struct *dummy)
+{
+	u32 node;
+
+	for (node = 0; node <= lrng_pool.last_numa_node; node++) {
+		struct lrng_sdrbg *sdrbg = lrng_sdrbg[node];
+
+		if (!sdrbg->fully_seeded) {
+			pr_debug("reseed triggered by interrupt noise source for secondary DRBG on NUMA node %d\n", node);
+			lrng_sdrbg_seed(sdrbg, lrng_pdrbg_seed_internal);
+			if (node && sdrbg->fully_seeded) {
+				/* Prevent reseed storm */
+				sdrbg->last_seeded += node * 100 * HZ;
+				/* Prevent draining of pool on idle systems */
+				lrng_sdrbg_reseed_max_time += 100;
+			}
+			return;
+		}
+	}
+}
+
+/**
+ * DRBG reseed trigger: Synchronous reseed request
+ */
+static int lrng_pdrbg_seed(u8 *outbuf, u32 outbuflen, bool fullentropy,
+			   bool drain)
+{
+	/* Ensure that the seeding only occurs once at any given time */
+	if (atomic_cmpxchg(&lrng_pool.irq_info.reseed_in_progress, 0, 1))
+		return -EINPROGRESS;
+	return lrng_pdrbg_seed_internal(outbuf, outbuflen, fullentropy, drain);
+}
+
+/**
+ * Obtain random data from DRBG with information theoretical entropy by
+ * triggering a reseed. The primary DRBG will only return as many random
+ * bytes as it was seeded with.
+ *
+ * @outbuf: buffer to store the random data in
+ * @outbuflen: length of outbuf
+ * @return: < 0 on error
+ *	    >= 0 the number of bytes that were obtained
+ */
+static int lrng_pdrbg_get(u8 *outbuf, u32 outbuflen)
+{
+	int ret;
+
+	if (!outbuf || !outbuflen)
+		return 0;
+
+	/* DRBG is not yet available */
+	if (!atomic_read(&lrng_pdrbg_avail))
+		return 0;
+
+	ret = lrng_pdrbg_seed(outbuf, outbuflen, true, true);
+	pr_debug("read %u bytes of full entropy data from primary DRBG\n", ret);
+
+	return ret;
+}
+
+/**
+ * Initial RNG provides random data with as much entropy as we have
+ * at boot time until the DRBG becomes available during late_initcall() but
+ * before user space boots. When the DRBG is initialized, the initial RNG
+ * is retired.
+ *
+ * Note: until retirement of this RNG, the system did not generate too much
+ * entropy yet. Hence, a proven DRNG like a DRBG is not necessary here anyway.
+ *
+ * The RNG is using the following as noise source:
+ *	* high resolution time stamps
+ *	* the collected IRQ state
+ *	* CPU noise source if available
+ *
+ * Input/output: it is a drop-in replacement for lrng_sdrbg_get.
+ */
+static u32 lrng_init_state[SHA_WORKSPACE_WORDS];
+static int lrng_init_rng(u8 *outbuf, u32 outbuflen)
+{
+	u32 hash[SHA_DIGEST_WORDS];
+	u32 outbuflen_orig = outbuflen;
+	u32 workspace[SHA_WORKSPACE_WORDS];
+
+	BUILD_BUG_ON(sizeof(lrng_init_state[0]) != LRNG_POOL_WORD_BYTES);
+
+	sha_init(hash);
+	while (outbuflen) {
+		unsigned int arch;
+		u32 i;
+		u32 todo = min_t(u32, outbuflen,
+				 SHA_WORKSPACE_WORDS * sizeof(u32));
+
+		/* Update init RNG state with CPU RNG and timer data */
+		for (i = 0; i < SHA_WORKSPACE_WORDS; i++) {
+			if (arch_get_random_int(&arch))
+				lrng_init_state[i] ^= arch;
+			lrng_init_state[i] ^= random_get_entropy();
+		}
+		/* SHA-1 update using the init RNG state */
+		sha_transform(hash, (u8 *)&lrng_init_state, workspace);
+
+		/* SHA-1 update with all words of the entropy pool */
+		BUILD_BUG_ON(LRNG_POOL_SIZE % 16);
+		for (i = 0; i < LRNG_POOL_SIZE; i += 16)
+			sha_transform(hash, (u8 *)(lrng_pool.pool + i),
+				      workspace);
+
+		/* Mix generated data into state for backtracking resistance */
+		for (i = 0; i < SHA_DIGEST_WORDS; i++)
+			lrng_init_state[i] ^= hash[i];
+
+		memcpy(outbuf, hash, todo);
+		outbuf += todo;
+		outbuflen -= todo;
+		atomic_add(todo, &lrng_initrng_bytes);
+	}
+	memzero_explicit(hash, sizeof(hash));
+	memzero_explicit(workspace, sizeof(workspace));
+
+	return outbuflen_orig;
+}
+
+static inline struct lrng_sdrbg *lrng_get_current_sdrbg(void)
+{
+	struct lrng_sdrbg *sdrbg = lrng_sdrbg[numa_node_id()];
+
+	return (sdrbg->fully_seeded) ? sdrbg : lrng_sdrbg[0];
+}
+
+/**
+ * Get random data out of the secondary DRBG which is reseeded frequently. In
+ * the worst case, the DRBG may generate random numbers without being reseeded
+ * for LRNG_DRBG_RESEED_THRESH requests times LRNG_DRBG_MAX_REQSIZE bytes.
+ *
+ * If the DRBG is not yet initialized, use the initial RNG output.
+ *
+ * @outbuf: buffer for storing random data
+ * @outbuflen: length of outbuf
+ * @return: < 0 in error case (DRBG generation or update failed)
+ *	    >=0 returning the returned number of bytes
+ */
+static int lrng_sdrbg_get(u8 *outbuf, u32 outbuflen)
+{
+	u32 processed = 0;
+	struct lrng_sdrbg *sdrbg;
+	unsigned long flags;
+	int ret;
+
+	if (!outbuf || !outbuflen)
+		return 0;
+
+	outbuflen = min_t(size_t, outbuflen, INT_MAX);
+
+	/* DRBG is not yet available */
+	if (!atomic_read(&lrng_pdrbg_avail)) {
+		spin_lock_irqsave(&lrng_init_rng_lock, flags);
+		/* Prevent race with lrng_init */
+		if (!atomic_read(&lrng_pdrbg_avail)) {
+			ret = lrng_init_rng(outbuf, outbuflen);
+			spin_unlock_irqrestore(&lrng_init_rng_lock, flags);
+			return ret;
+		}
+		spin_unlock_irqrestore(&lrng_init_rng_lock, flags);
+	}
+
+	sdrbg = lrng_get_current_sdrbg();
+	while (outbuflen) {
+		unsigned long now = jiffies;
+		u32 todo = min_t(u32, outbuflen, LRNG_DRBG_MAX_REQSIZE);
+
+		if (atomic_dec_and_test(&sdrbg->requests) ||
+		    time_after(now, sdrbg->last_seeded +
+			       lrng_sdrbg_reseed_max_time * HZ))
+			lrng_sdrbg_seed(sdrbg, lrng_pdrbg_seed);
+
+		spin_lock_irqsave(&sdrbg->lock, flags);
+		ret = lrng_drng_generate_helper(sdrbg->sdrbg,
+						outbuf + processed, todo);
+		spin_unlock_irqrestore(&sdrbg->lock, flags);
+		if (ret <= 0) {
+			pr_warn("getting random data from secondary DRBG failed (%d)\n",
+				ret);
+			return -EFAULT;
+		}
+		processed += ret;
+		outbuflen -= ret;
+	}
+
+	return processed;
+}
+
+static int lrng_drngs_alloc(void)
+{
+	unsigned long flags;
+	struct drbg_state *pdrbg;
+	u32 node;
+	u32 num_nodes = num_possible_nodes();
+
+	pdrbg = lrng_drng_alloc(LRNG_DRBG_CORE, LRNG_DRBG_BLOCKLEN_BYTES,
+				LRNG_DRBG_SECURITY_STRENGTH_BYTES);
+	if (!pdrbg)
+		return -ENOMEM;
+
+	spin_lock_irqsave(&lrng_pdrbg.lock, flags);
+	if (lrng_pdrbg.pdrbg) {
+		lrng_drng_dealloc(pdrbg);
+		kfree(pdrbg);
+	} else {
+		lrng_pdrbg.pdrbg = pdrbg;
+		INIT_WORK(&lrng_pdrbg.lrng_seed_work, lrng_pdrbg_seed_work);
+		pr_info("primary DRBG allocated\n");
+	}
+
+	lrng_pool.last_numa_node = num_nodes - 1;
+
+	spin_unlock_irqrestore(&lrng_pdrbg.lock, flags);
+
+	lrng_sdrbg = kmalloc_array(num_nodes, sizeof(void *),
+				   GFP_KERNEL|__GFP_NOFAIL);
+	for (node = 0; node < num_nodes; node++) {
+		struct lrng_sdrbg *sdrbg;
+
+		sdrbg = kmalloc_node(sizeof(struct lrng_sdrbg),
+				     GFP_KERNEL|__GFP_NOFAIL, node);
+		if (!sdrbg)
+			goto err;
+		memset(sdrbg, 0, sizeof(lrng_sdrbg));
+		lrng_sdrbg[node] = sdrbg;
+
+		sdrbg->sdrbg = lrng_drng_alloc(LRNG_DRBG_CORE,
+					       LRNG_DRBG_BLOCKLEN_BYTES,
+					LRNG_DRBG_SECURITY_STRENGTH_BYTES);
+		if (!sdrbg->sdrbg)
+			goto err;
+
+		atomic_set(&sdrbg->requests, 1);
+		spin_lock_init(&sdrbg->lock);
+		sdrbg->last_seeded = jiffies;
+		sdrbg->fully_seeded = false;
+
+		pr_info("secondary DRBG for NUMA node %d allocated\n", node);
+	}
+
+	return 0;
+
+err:
+	for (node = 0; node < num_nodes; node++) {
+		struct lrng_sdrbg *sdrbg = lrng_sdrbg[node];
+
+		if (sdrbg) {
+			if (sdrbg->sdrbg)
+				lrng_drng_dealloc(sdrbg->sdrbg);
+			kfree(sdrbg);
+		}
+	}
+	kfree(lrng_sdrbg);
+
+	lrng_drng_dealloc(pdrbg);
+	kfree(pdrbg);
+
+	return -ENOMEM;
+}
+
+static int lrng_alloc(void)
+{
+	u8 key[LRNG_DRBG_SECURITY_STRENGTH_BYTES] __aligned(LRNG_KCAPI_ALIGN);
+	int ret = lrng_drngs_alloc();
+
+	if (ret)
+		return ret;
+
+	/* If the used hash is no MAC, ignore the ENOSYS return code */
+	lrng_init_rng(key, sizeof(key));
+	lrng_pool.lrng_hash = lrng_hash_alloc(LRNG_HASH_NAME, key, sizeof(key));
+	memzero_explicit(key, sizeof(key));
+	if (IS_ERR(lrng_pool.lrng_hash))
+		return -PTR_ERR(lrng_pool.lrng_hash);
+
+	return 0;
+}
+
+/************************** LRNG kernel interfaces ***************************/
+
+void get_random_bytes(void *buf, int nbytes)
+{
+	lrng_sdrbg_get((u8 *)buf, (u32)nbytes);
+}
+EXPORT_SYMBOL(get_random_bytes);
+
+/**
+ * This function will use the architecture-specific hardware random
+ * number generator if it is available.  The arch-specific hw RNG will
+ * almost certainly be faster than what we can do in software, but it
+ * is impossible to verify that it is implemented securely (as
+ * opposed, to, say, the AES encryption of a sequence number using a
+ * key known by the NSA).  So it's useful if we need the speed, but
+ * only if we're willing to trust the hardware manufacturer not to
+ * have put in a back door.
+ *
+ * @buf: buffer allocated by caller to store the random data in
+ * @nbytes: length of outbuf
+ */
+void get_random_bytes_arch(void *buf, int nbytes)
+{
+	u8 *p = buf;
+
+	while (nbytes) {
+		unsigned long v;
+		int chunk = min_t(int, nbytes, sizeof(unsigned long));
+
+		if (!arch_get_random_long(&v))
+			break;
+
+		memcpy(p, &v, chunk);
+		p += chunk;
+		nbytes -= chunk;
+	}
+
+	if (nbytes)
+		lrng_sdrbg_get((u8 *)p, (u32)nbytes);
+}
+EXPORT_SYMBOL(get_random_bytes_arch);
+
+/**
+ * Interface for in-kernel drivers of true hardware RNGs.
+ * Those devices may produce endless random bits and will be throttled
+ * when our pool is full.
+ *
+ * @buffer: buffer holding the entropic data from HW noise sources to be used to
+ *	    (re)seed the DRBG.
+ * @count: length of buffer
+ * @entropy_bits: amount of entropy in buffer (value is in bits)
+ */
+void add_hwgenerator_randomness(const char *buffer, size_t count,
+				size_t entropy_bits)
+{
+	/* DRBG is not yet online */
+	if (!atomic_read(&lrng_pdrbg_avail))
+		return;
+	/*
+	 * Suspend writing if we are fully loaded with entropy.
+	 * We'll be woken up again once below lrng_write_wakeup_thresh,
+	 * or when the calling thread is about to terminate.
+	 */
+	wait_event_interruptible(lrng_write_wait,
+				 kthread_should_stop() || lrng_need_entropy());
+	lrng_pdrbg_inject(buffer, count, entropy_bits, NULL, 0, false);
+}
+EXPORT_SYMBOL_GPL(add_hwgenerator_randomness);
+
+/**
+ * Delete a previously registered readiness callback function.
+ */
+void del_random_ready_callback(struct random_ready_callback *rdy)
+{
+	unsigned long flags;
+	struct module *owner = NULL;
+
+	spin_lock_irqsave(&lrng_ready_list_lock, flags);
+	if (!list_empty(&rdy->list)) {
+		list_del_init(&rdy->list);
+		owner = rdy->owner;
+	}
+	spin_unlock_irqrestore(&lrng_ready_list_lock, flags);
+
+	module_put(owner);
+}
+EXPORT_SYMBOL(del_random_ready_callback);
+
+/**
+ * Add a callback function that will be invoked when the DRBG is fully seeded.
+ *
+ * @return: 0 if callback is successfully added
+ *          -EALREADY if pool is already initialised (callback not called)
+ *	    -ENOENT if module for callback is not alive
+ */
+int add_random_ready_callback(struct random_ready_callback *rdy)
+{
+	struct module *owner;
+	unsigned long flags;
+	int err = -EALREADY;
+
+	if (likely(lrng_pdrbg.pdrbg_fully_seeded))
+		return err;
+
+	owner = rdy->owner;
+	if (!try_module_get(owner))
+		return -ENOENT;
+
+	spin_lock_irqsave(&lrng_ready_list_lock, flags);
+	if (lrng_pdrbg.pdrbg_fully_seeded)
+		goto out;
+
+	owner = NULL;
+
+	list_add(&rdy->list, &lrng_ready_list);
+	err = 0;
+
+out:
+	spin_unlock_irqrestore(&lrng_ready_list_lock, flags);
+
+	module_put(owner);
+
+	return err;
+}
+EXPORT_SYMBOL(add_random_ready_callback);
+
+/************************ LRNG user space interfaces *************************/
+
+static ssize_t lrng_read_common(char __user *buf, size_t nbytes,
+			int (*lrng_read_random)(u8 *outbuf, u32 outbuflen))
+{
+	ssize_t ret = 0;
+	u8 tmpbuf[64] __aligned(LRNG_KCAPI_ALIGN);
+	u8 *tmp_large = NULL;
+	u8 *tmp = tmpbuf;
+	u32 tmplen = sizeof(tmpbuf);
+
+	if (nbytes == 0)
+		return 0;
+
+	/*
+	 * Satisfy large read requests -- as the common case are smaller
+	 * request sizes, such as 16 or 32 bytes, avoid a kmalloc overhead for
+	 * those by using the stack variable of tmpbuf.
+	 */
+	if (nbytes > 64) {
+		tmplen = min_t(u32, nbytes, LRNG_DRBG_MAX_REQSIZE);
+		tmp_large = kmalloc(tmplen + LRNG_KCAPI_ALIGN, GFP_KERNEL);
+		if (!tmp_large)
+			tmplen = sizeof(tmpbuf);
+		else
+			tmp = PTR_ALIGN(tmp_large, LRNG_KCAPI_ALIGN);
+	}
+
+	while (nbytes) {
+		u32 todo = min_t(u32, nbytes, tmplen);
+		int rc = 0;
+
+		if (tmp_large && need_resched()) {
+			if (signal_pending(current)) {
+				if (ret == 0)
+					ret = -ERESTARTSYS;
+				break;
+			}
+			schedule();
+		}
+
+		rc = lrng_read_random(tmp, todo);
+		if (rc <= 0)
+			break;
+		if (copy_to_user(buf, tmp, rc)) {
+			ret = -EFAULT;
+			break;
+		}
+
+		nbytes -= rc;
+		buf += rc;
+		ret += rc;
+	}
+
+	/* Wipe data just returned from memory */
+	if (tmp_large)
+		kzfree(tmp_large);
+	else
+		memzero_explicit(tmpbuf, sizeof(tmpbuf));
+
+	return ret;
+}
+
+static ssize_t
+lrng_pdrbg_read_common(int nonblock, char __user *buf, size_t nbytes)
+{
+	if (nbytes == 0)
+		return 0;
+
+	nbytes = min_t(u32, nbytes, LRNG_DRBG_BLOCKLEN_BYTES);
+	while (1) {
+		ssize_t n;
+
+		n = lrng_read_common(buf, nbytes, lrng_pdrbg_get);
+		if (n < 0)
+			return n;
+		if (n > 0)
+			return n;
+
+		/* No entropy available.  Maybe wait and retry. */
+		if (nonblock)
+			return -EAGAIN;
+
+		wait_event_interruptible(lrng_read_wait,
+					 lrng_have_entropy_full());
+		if (signal_pending(current))
+			return -ERESTARTSYS;
+	}
+}
+
+static ssize_t lrng_pdrbg_read(struct file *file, char __user *buf,
+			       size_t nbytes, loff_t *ppos)
+{
+	return lrng_pdrbg_read_common(file->f_flags & O_NONBLOCK, buf, nbytes);
+}
+
+static unsigned int lrng_pdrbg_poll(struct file *file, poll_table *wait)
+{
+	unsigned int mask;
+
+	poll_wait(file, &lrng_read_wait, wait);
+	poll_wait(file, &lrng_write_wait, wait);
+	mask = 0;
+	if (lrng_have_entropy_full())
+		mask |= POLLIN | POLLRDNORM;
+	if (lrng_need_entropy())
+		mask |= POLLOUT | POLLWRNORM;
+	return mask;
+}
+
+static ssize_t lrng_drbg_write_common(const char __user *buffer, size_t count,
+				      u32 entropy_bits, bool sdrbg)
+{
+	ssize_t ret = 0;
+	u8 buf[64] __aligned(LRNG_KCAPI_ALIGN);
+	const char __user *p = buffer;
+
+	if (!atomic_read(&lrng_pdrbg_avail))
+		return -EAGAIN;
+
+	count = min_t(size_t, count, INT_MAX);
+	while (count > 0) {
+		size_t bytes = min_t(size_t, count, sizeof(buf));
+		u32 ent = min_t(u32, bytes<<3, entropy_bits);
+
+		if (copy_from_user(&buf, p, bytes))
+			return -EFAULT;
+		/* Inject data into primary DRBG */
+		lrng_pdrbg_inject(buf, bytes, ent, NULL, 0, false);
+		/* Data from /dev/[|u]random is injected into secondary DRBG */
+		if (sdrbg) {
+			u32 node;
+			int num_nodes = num_possible_nodes();
+
+			for (node = 0; node < num_nodes; node++)
+				lrng_sdrbg_inject(lrng_sdrbg[node], buf, bytes,
+						  false);
+		}
+
+		count -= bytes;
+		p += bytes;
+		ret += bytes;
+		entropy_bits -= ent;
+
+		cond_resched();
+	}
+
+	return ret;
+}
+
+static ssize_t lrng_sdrbg_read(struct file *file, char __user *buf,
+			       size_t nbytes, loff_t *ppos)
+{
+	return lrng_read_common(buf, nbytes, lrng_sdrbg_get);
+}
+
+static ssize_t lrng_drbg_write(struct file *file, const char __user *buffer,
+			       size_t count, loff_t *ppos)
+{
+	return lrng_drbg_write_common(buffer, count, 0, true);
+}
+
+static long lrng_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
+{
+	int size, ent_count_bits;
+	int __user *p = (int __user *)arg;
+
+	switch (cmd) {
+	case RNDGETENTCNT:
+		ent_count_bits = lrng_avail_entropy();
+		if (put_user(ent_count_bits, p))
+			return -EFAULT;
+		return 0;
+	case RNDADDTOENTCNT:
+		if (!capable(CAP_SYS_ADMIN))
+			return -EPERM;
+		if (get_user(ent_count_bits, p))
+			return -EFAULT;
+		ent_count_bits = (int)lrng_avail_entropy() + ent_count_bits;
+		if (ent_count_bits < 0)
+			ent_count_bits = 0;
+		if (ent_count_bits > LRNG_POOL_SIZE_BITS)
+			ent_count_bits = LRNG_POOL_SIZE_BITS;
+		atomic_set(&lrng_pool.irq_info.num_events,
+			   lrng_entropy_to_data(ent_count_bits));
+		return 0;
+	case RNDADDENTROPY:
+		if (!capable(CAP_SYS_ADMIN))
+			return -EPERM;
+		if (get_user(ent_count_bits, p++))
+			return -EFAULT;
+		if (ent_count_bits < 0)
+			return -EINVAL;
+		if (get_user(size, p++))
+			return -EFAULT;
+		if (size < 0)
+			return -EINVAL;
+		/* there cannot be more entropy than data */
+		ent_count_bits = min(ent_count_bits, size<<3);
+		return lrng_drbg_write_common((const char __user *)p, size,
+					      ent_count_bits, false);
+	case RNDZAPENTCNT:
+	case RNDCLEARPOOL:
+		/* Clear the entropy pool counter. */
+		if (!capable(CAP_SYS_ADMIN))
+			return -EPERM;
+		atomic_set(&lrng_pool.irq_info.num_events, 0);
+		return 0;
+	default:
+		return -EINVAL;
+	}
+}
+
+static int lrng_fasync(int fd, struct file *filp, int on)
+{
+	return fasync_helper(fd, filp, on, &fasync);
+}
+
+const struct file_operations random_fops = {
+	.read  = lrng_pdrbg_read,
+	.write = lrng_drbg_write,
+	.poll  = lrng_pdrbg_poll,
+	.unlocked_ioctl = lrng_ioctl,
+	.fasync = lrng_fasync,
+	.llseek = noop_llseek,
+};
+
+const struct file_operations urandom_fops = {
+	.read  = lrng_sdrbg_read,
+	.write = lrng_drbg_write,
+	.unlocked_ioctl = lrng_ioctl,
+	.fasync = lrng_fasync,
+	.llseek = noop_llseek,
+};
+
+SYSCALL_DEFINE3(getrandom, char __user *, buf, size_t, count,
+		unsigned int, flags)
+{
+	if (flags & ~(GRND_NONBLOCK|GRND_RANDOM))
+		return -EINVAL;
+
+	if (count > INT_MAX)
+		count = INT_MAX;
+
+	if (flags & GRND_RANDOM)
+		return lrng_pdrbg_read_common(flags & GRND_NONBLOCK, buf,
+					      count);
+
+	if (unlikely(!lrng_pdrbg.pdrbg_fully_seeded)) {
+		if (flags & GRND_NONBLOCK)
+			return -EAGAIN;
+		wait_event_interruptible(lrng_pdrbg_init_wait,
+					 lrng_pdrbg.pdrbg_fully_seeded);
+		if (signal_pending(current))
+			return -ERESTARTSYS;
+	}
+	return lrng_sdrbg_read(NULL, buf, count, NULL);
+}
+
+/*************************** LRNG proc interfaces ****************************/
+
+#ifdef CONFIG_SYSCTL
+
+#include <linux/sysctl.h>
+
+static int lrng_min_read_thresh = LRNG_POOL_WORD_BITS;
+static int lrng_min_write_thresh;
+static int lrng_max_read_thresh = LRNG_POOL_SIZE_BITS;
+static int lrng_max_write_thresh = LRNG_DRBG_SECURITY_STRENGTH_BITS;
+static char lrng_sysctl_bootid[16];
+static int lrng_sdrbg_reseed_max_min;
+
+/*
+ * This function is used to return both the bootid UUID, and random
+ * UUID.  The difference is in whether table->data is NULL; if it is,
+ * then a new UUID is generated and returned to the user.
+ *
+ * If the user accesses this via the proc interface, the UUID will be
+ * returned as an ASCII string in the standard UUID format; if via the
+ * sysctl system call, as 16 bytes of binary data.
+ */
+static int lrng_proc_do_uuid(struct ctl_table *table, int write,
+			     void __user *buffer, size_t *lenp, loff_t *ppos)
+{
+	struct ctl_table fake_table;
+	unsigned char buf[64], tmp_uuid[16], *uuid;
+
+	uuid = table->data;
+	if (!uuid) {
+		uuid = tmp_uuid;
+		generate_random_uuid(uuid);
+	} else {
+		static DEFINE_SPINLOCK(bootid_spinlock);
+
+		spin_lock(&bootid_spinlock);
+		if (!uuid[8])
+			generate_random_uuid(uuid);
+		spin_unlock(&bootid_spinlock);
+	}
+
+	sprintf(buf, "%pU", uuid);
+
+	fake_table.data = buf;
+	fake_table.maxlen = sizeof(buf);
+
+	return proc_dostring(&fake_table, write, buffer, lenp, ppos);
+}
+
+static int lrng_proc_do_type(struct ctl_table *table, int write,
+			     void __user *buffer, size_t *lenp, loff_t *ppos)
+{
+	struct ctl_table fake_table;
+	unsigned char buf[130];
+
+	snprintf(buf, sizeof(buf), "%s: %s\nDRNG security strength: %u bits\nentropy pool read hash: %s",
+#ifdef CONFIG_CRYPTO_DRBG_CTR
+		 "CTR DRBG",
+#elif defined CONFIG_CRYPTO_DRBG_HMAC
+		 "HMAC DRBG",
+#elif defined CONFIG_CRYPTO_DRBG_HASH
+		 "HASH DRBG",
+#else
+		 "DRNG",
+#endif
+		 LRNG_DRBG_CORE, LRNG_DRBG_SECURITY_STRENGTH_BITS,
+		 LRNG_HASH_NAME);
+
+	fake_table.data = buf;
+	fake_table.maxlen = sizeof(buf);
+
+	return proc_dostring(&fake_table, write, buffer, lenp, ppos);
+}
+
+/* Return entropy available scaled to integral bits */
+static int lrng_proc_do_entropy(struct ctl_table *table, int write,
+				void __user *buffer, size_t *lenp, loff_t *ppos)
+{
+	struct ctl_table fake_table;
+	int entropy_count;
+
+	entropy_count = lrng_avail_entropy();
+
+	fake_table.data = &entropy_count;
+	fake_table.maxlen = sizeof(entropy_count);
+
+	return proc_dointvec(&fake_table, write, buffer, lenp, ppos);
+}
+
+static int lrng_proc_bool(struct ctl_table *table, int write,
+			  void __user *buffer, size_t *lenp, loff_t *ppos)
+{
+	struct ctl_table fake_table;
+	int loc_boolean = 0;
+	bool *boolean = (bool *)table->data;
+
+	if (*boolean)
+		loc_boolean = 1;
+
+	fake_table.data = &loc_boolean;
+	fake_table.maxlen = sizeof(loc_boolean);
+
+	return proc_dointvec(&fake_table, write, buffer, lenp, ppos);
+}
+
+static int lrng_sysctl_poolsize = LRNG_POOL_SIZE_BITS;
+static int pdrbg_security_strength = LRNG_DRBG_SECURITY_STRENGTH_BYTES;
+extern struct ctl_table random_table[];
+struct ctl_table random_table[] = {
+	{
+		.procname	= "poolsize",
+		.data		= &lrng_sysctl_poolsize,
+		.maxlen		= sizeof(int),
+		.mode		= 0444,
+		.proc_handler	= proc_dointvec,
+	},
+	{
+		.procname	= "entropy_avail",
+		.maxlen		= sizeof(int),
+		.mode		= 0444,
+		.proc_handler	= lrng_proc_do_entropy,
+	},
+	{
+		.procname	= "read_wakeup_threshold",
+		.data		= &lrng_read_wakeup_bits,
+		.maxlen		= sizeof(int),
+		.mode		= 0644,
+		.proc_handler	= proc_dointvec_minmax,
+		.extra1		= &lrng_min_read_thresh,
+		.extra2		= &lrng_max_read_thresh,
+	},
+	{
+		.procname	= "write_wakeup_threshold",
+		.data		= &lrng_write_wakeup_bits,
+		.maxlen		= sizeof(int),
+		.mode		= 0644,
+		.proc_handler	= proc_dointvec_minmax,
+		.extra1		= &lrng_min_write_thresh,
+		.extra2		= &lrng_max_write_thresh,
+	},
+	{
+		.procname	= "boot_id",
+		.data		= &lrng_sysctl_bootid,
+		.maxlen		= 16,
+		.mode		= 0444,
+		.proc_handler	= lrng_proc_do_uuid,
+	},
+	{
+		.procname	= "uuid",
+		.maxlen		= 16,
+		.mode		= 0444,
+		.proc_handler	= lrng_proc_do_uuid,
+	},
+	{
+		.procname       = "urandom_min_reseed_secs",
+		.data           = &lrng_sdrbg_reseed_max_time,
+		.maxlen         = sizeof(int),
+		.mode           = 0644,
+		.proc_handler   = proc_dointvec,
+		.extra1		= &lrng_sdrbg_reseed_max_min,
+	},
+	{
+		.procname	= "drbg_fully_seeded",
+		.data		= &lrng_pdrbg.pdrbg_fully_seeded,
+		.maxlen		= sizeof(int),
+		.mode		= 0444,
+		.proc_handler	= lrng_proc_bool,
+	},
+	{
+		.procname	= "drbg_minimally_seeded",
+		.data		= &lrng_pdrbg.pdrbg_min_seeded,
+		.maxlen		= sizeof(int),
+		.mode		= 0444,
+		.proc_handler	= lrng_proc_bool,
+	},
+	{
+		.procname	= "lrng_type",
+		.maxlen		= 30,
+		.mode		= 0444,
+		.proc_handler	= lrng_proc_do_type,
+	},
+	{
+		.procname	= "drbg_security_strength",
+		.data		= &pdrbg_security_strength,
+		.maxlen		= sizeof(int),
+		.mode		= 0444,
+		.proc_handler	= proc_dointvec,
+	},
+	{
+		.procname	= "high_resolution_timer",
+		.data		= &lrng_pool.irq_info.irq_highres_timer,
+		.maxlen		= sizeof(int),
+		.mode		= 0444,
+		.proc_handler	= lrng_proc_bool,
+	},
+	{ }
+};
+#endif /* CONFIG_SYSCTL */
+
+/***************************** Initialize DRBG *******************************/
+
+static int __init lrng_init(void)
+{
+	unsigned long flags;
+
+	BUG_ON(lrng_alloc());
+
+	/*
+	 * As we use the IRQ entropic input data processed by the init RNG
+	 * again during lrng_pdrbg_seed_internal, we must not claim that
+	 * the init RNG state has any entropy when injecting its contents as
+	 * an initial seed into the DRBG.
+	 */
+	spin_lock_irqsave(&lrng_init_rng_lock, flags);
+
+	if (random_get_entropy() || random_get_entropy()) {
+		lrng_pool.irq_info.irq_highres_timer = true;
+		lrng_pool.irq_info.irq_entropy_bits = LRNG_IRQ_ENTROPY_BITS;
+	} else {
+		lrng_pool.irq_info.irq_entropy_bits =
+			LRNG_IRQ_ENTROPY_BITS * LRNG_IRQ_OVERSAMPLING_FACTOR;
+		if (fips_enabled) {
+			pr_warn("LRNG not suitable for FIPS 140-2 use cases\n");
+			WARN_ON(1);
+		}
+		pr_warn("operating without high-resolution timer and applying IRQ oversampling factor %u\n",
+			LRNG_IRQ_OVERSAMPLING_FACTOR);
+	}
+	lrng_set_entropy_thresh(LRNG_INIT_ENTROPY_BITS);
+
+	lrng_pdrbg_inject((u8 *)&lrng_init_state,
+			  SHA_WORKSPACE_WORDS * sizeof(lrng_init_state[0]),
+			  0, NULL, 0, false);
+	lrng_sdrbg_seed(lrng_sdrbg[0], lrng_pdrbg_seed);
+	atomic_inc(&lrng_pdrbg_avail);
+	memzero_explicit(&lrng_init_state,
+			 SHA_WORKSPACE_WORDS * sizeof(lrng_init_state[0]));
+	spin_unlock_irqrestore(&lrng_init_rng_lock, flags);
+	pr_info("deactivating initial RNG - %d bytes delivered\n",
+		atomic_read(&lrng_initrng_bytes));
+	return 0;
+}
+
+/* A late init implies that more interrupts are collected for initial seeding */
+late_initcall(lrng_init);
+
+MODULE_LICENSE("Dual BSD/GPL");
+MODULE_AUTHOR("Stephan Mueller <smueller@chronox.de>");
+MODULE_DESCRIPTION("Linux Random Number Generator");
diff --git a/crypto/lrng_kcapi.c b/crypto/lrng_kcapi.c
new file mode 100644
index 0000000..25c37c5
--- /dev/null
+++ b/crypto/lrng_kcapi.c
@@ -0,0 +1,167 @@
+/*
+ * Backend for the LRNG providing the cryptographic primitives using the
+ * kernel crypto API.
+ *
+ * Copyright (C) 2016, Stephan Mueller <smueller@chronox.de>
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, and the entire permission notice in its entirety,
+ *    including the disclaimer of warranties.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ * 3. The name of the author may not be used to endorse or promote
+ *    products derived from this software without specific prior
+ *    written permission.
+ *
+ * ALTERNATIVELY, this product may be distributed under the terms of
+ * the GNU General Public License, in which case the provisions of the GPL2
+ * are required INSTEAD OF the above restrictions.  (This clause is
+ * necessary due to a potential bad interaction between the GPL and
+ * the restrictions contained in a BSD-style copyright.)
+ *
+ * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, ALL OF
+ * WHICH ARE HEREBY DISCLAIMED.  IN NO EVENT SHALL THE AUTHOR BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT
+ * OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
+ * USE OF THIS SOFTWARE, EVEN IF NOT ADVISED OF THE POSSIBILITY OF SUCH
+ * DAMAGE.
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <crypto/drbg.h>
+
+struct lrng_hash_info {
+	struct shash_desc shash;
+	char ctx[];
+};
+
+int lrng_drng_seed_helper(void *drng, const u8 *inbuf, u32 inbuflen)
+{
+	struct drbg_state *drbg = (struct drbg_state *)drng;
+	LIST_HEAD(seedlist);
+	struct drbg_string data;
+	int ret;
+
+	drbg_string_fill(&data, inbuf, inbuflen);
+	list_add_tail(&data.list, &seedlist);
+	ret = drbg->d_ops->update(drbg, &seedlist, drbg->seeded);
+
+	if (ret >= 0)
+		drbg->seeded = true;
+
+	return ret;
+}
+
+int lrng_drng_generate_helper(void *drng, u8 *outbuf, u32 outbuflen)
+{
+	struct drbg_state *drbg = (struct drbg_state *)drng;
+
+	return drbg->d_ops->generate(drbg, outbuf, outbuflen, NULL);
+}
+
+void *lrng_drng_alloc(u8 *drng_name, u32 blocklen_bytes, u32 sec_strength)
+{
+	struct drbg_state *drbg = NULL;
+	int coreref = -1;
+	bool pr = false;
+	int ret;
+
+	drbg_convert_tfm_core(drng_name, &coreref, &pr);
+	if (coreref < 0)
+		return NULL;
+
+	drbg = kzalloc(sizeof(struct drbg_state), GFP_KERNEL);
+	if (!drbg)
+		return NULL;
+
+	drbg->core = &drbg_cores[coreref];
+	drbg->seeded = false;
+	ret = drbg_alloc_state(drbg);
+	if (ret)
+		goto err;
+
+	if (blocklen_bytes != drbg->core->blocklen_bytes)
+		goto dealloc;
+	if (sec_strength > drbg_sec_strength(drbg->core->flags))
+		goto dealloc;
+
+	pr_info("DRBG with %s core allocated\n", drbg->core->backend_cra_name);
+
+	return drbg;
+
+dealloc:
+	if (drbg->d_ops)
+		drbg->d_ops->crypto_fini(drbg);
+	drbg_dealloc_state(drbg);
+err:
+	kfree(drbg);
+	return NULL;
+}
+
+void lrng_drng_dealloc(void *drng)
+{
+	struct drbg_state *drbg = (struct drbg_state *)drng;
+
+	drbg_dealloc_state(drbg);
+	kzfree(drbg);
+}
+
+void *lrng_hash_alloc(u8 *hashname, u8 *key, u32 keylen)
+{
+	struct lrng_hash_info *lrng_hash;
+	struct crypto_shash *tfm;
+	int size, ret;
+
+	tfm = crypto_alloc_shash(hashname, 0, 0);
+	if (IS_ERR(tfm)) {
+		pr_err("could not allocate hash %s\n", hashname);
+		return ERR_PTR(PTR_ERR(tfm));
+	}
+
+	size = sizeof(struct lrng_hash_info) + crypto_shash_descsize(tfm);
+	lrng_hash = kmalloc(size, GFP_KERNEL);
+	if (!lrng_hash) {
+		crypto_free_shash(tfm);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	lrng_hash->shash.tfm = tfm;
+	lrng_hash->shash.flags = 0x0;
+
+	ret = crypto_shash_setkey(tfm, key, keylen);
+	if (ret && ret != -ENOSYS) {
+		pr_err("could not set the key for MAC\n");
+		crypto_free_shash(tfm);
+		kfree(lrng_hash);
+		return ERR_PTR(ret);
+	}
+
+	return lrng_hash;
+}
+
+u32 lrng_hash_digestsize(void *hash)
+{
+	struct lrng_hash_info *lrng_hash = (struct lrng_hash_info *)hash;
+	struct shash_desc *shash = &lrng_hash->shash;
+
+	return crypto_shash_digestsize(shash->tfm);
+}
+
+int lrng_hash_buffer(void *hash, u8 *inbuf, u32 inbuflen, u8 *digest)
+{
+	struct lrng_hash_info *lrng_hash = (struct lrng_hash_info *)hash;
+	struct shash_desc *shash = &lrng_hash->shash;
+
+	return crypto_shash_digest(shash, inbuf, inbuflen, digest);
+}
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v6 4/5] crypto: LRNG - enable compile
  2016-08-11 12:24 [PATCH v6 0/5] /dev/random - a new approach Stephan Mueller
                   ` (2 preceding siblings ...)
  2016-08-11 12:25 ` [PATCH v6 3/5] crypto: Linux Random Number Generator Stephan Mueller
@ 2016-08-11 12:26 ` Stephan Mueller
  2016-08-11 13:50   ` kbuild test robot
  2016-08-11 12:26 ` [PATCH v6 5/5] crypto: LRNG - add ChaCha20 support Stephan Mueller
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 26+ messages in thread
From: Stephan Mueller @ 2016-08-11 12:26 UTC (permalink / raw)
  To: herbert
  Cc: Ted Tso, sandyinchina, Jason Cooper, John Denker, H. Peter Anvin,
	Joe Perches, Pavel Machek, George Spelvin, linux-crypto,
	linux-kernel

Add LRNG compilation support.

Signed-off-by: Stephan Mueller <smueller@chronox.de>
---
 crypto/Kconfig  | 11 +++++++++++
 crypto/Makefile |  2 ++
 2 files changed, 13 insertions(+)

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 84d7148..71df7fc 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -1665,6 +1665,17 @@ config CRYPTO_JITTERENTROPY
 	  random numbers. This Jitterentropy RNG registers with
 	  the kernel crypto API and can be used by any caller.
 
+config CRYPTO_LRNG
+	bool "Linux Random Number Generator"
+	select CRYPTO_DRBG_MENU
+	select CRYPTO_CMAC if CRYPTO_DRBG_CTR
+	help
+	  The Linux Random Number Generator (LRNG) is the replacement
+	  of the legacy /dev/random provided with drivers/char/random.c.
+	  It generates entropy from different noise sources and
+	  delivers significant entropy during boot. The LRNG only
+	  works with the presence of a high-resolution timer.
+
 config CRYPTO_USER_API
 	tristate
 
diff --git a/crypto/Makefile b/crypto/Makefile
index 99cc64a..12d4249 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -122,6 +122,8 @@ obj-$(CONFIG_CRYPTO_DRBG) += drbg.o
 obj-$(CONFIG_CRYPTO_JITTERENTROPY) += jitterentropy_rng.o
 CFLAGS_jitterentropy.o = -O0
 jitterentropy_rng-y := jitterentropy.o jitterentropy-kcapi.o
+obj-$(CONFIG_CRYPTO_LRNG) += lrng.o
+lrng-y += lrng_base.o lrng_kcapi.o
 obj-$(CONFIG_CRYPTO_TEST) += tcrypt.o
 obj-$(CONFIG_CRYPTO_GHASH) += ghash-generic.o
 obj-$(CONFIG_CRYPTO_USER_API) += af_alg.o
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v6 5/5] crypto: LRNG - add ChaCha20 support
  2016-08-11 12:24 [PATCH v6 0/5] /dev/random - a new approach Stephan Mueller
                   ` (3 preceding siblings ...)
  2016-08-11 12:26 ` [PATCH v6 4/5] crypto: LRNG - enable compile Stephan Mueller
@ 2016-08-11 12:26 ` Stephan Mueller
  2016-08-11 21:36 ` [PATCH v6 0/5] /dev/random - a new approach Theodore Ts'o
  2016-08-15 20:42 ` H. Peter Anvin
  6 siblings, 0 replies; 26+ messages in thread
From: Stephan Mueller @ 2016-08-11 12:26 UTC (permalink / raw)
  To: herbert
  Cc: Ted Tso, sandyinchina, Jason Cooper, John Denker, H. Peter Anvin,
	Joe Perches, Pavel Machek, George Spelvin, linux-crypto,
	linux-kernel

In case the kernel crypto API is not compiled, use ChaCha20 stream
cipher as DRNG. The LRNG ChaCha20 support provides the DRNG
implementation with the generate and update functions.

Th DRNG implements enhanced backward secrecy by re-creating the
entire internal state after generating random numbers.

Signed-off-by: Stephan Mueller <smueller@chronox.de>
---
 crypto/Kconfig           |   1 -
 crypto/Makefile          |   7 +-
 crypto/lrng_standalone.c | 220 +++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 226 insertions(+), 2 deletions(-)
 create mode 100644 crypto/lrng_standalone.c

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 71df7fc..ee5aff4 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -1667,7 +1667,6 @@ config CRYPTO_JITTERENTROPY
 
 config CRYPTO_LRNG
 	bool "Linux Random Number Generator"
-	select CRYPTO_DRBG_MENU
 	select CRYPTO_CMAC if CRYPTO_DRBG_CTR
 	help
 	  The Linux Random Number Generator (LRNG) is the replacement
diff --git a/crypto/Makefile b/crypto/Makefile
index 12d4249..99fb0e1 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -123,7 +123,12 @@ obj-$(CONFIG_CRYPTO_JITTERENTROPY) += jitterentropy_rng.o
 CFLAGS_jitterentropy.o = -O0
 jitterentropy_rng-y := jitterentropy.o jitterentropy-kcapi.o
 obj-$(CONFIG_CRYPTO_LRNG) += lrng.o
-lrng-y += lrng_base.o lrng_kcapi.o
+lrng-y += lrng_base.o
+ifeq ($(CONFIG_CRYPTO_DRBG),y)
+lrng-y += lrng_kcapi.o
+else
+lrng-y += lrng_standalone.o
+endif
 obj-$(CONFIG_CRYPTO_TEST) += tcrypt.o
 obj-$(CONFIG_CRYPTO_GHASH) += ghash-generic.o
 obj-$(CONFIG_CRYPTO_USER_API) += af_alg.o
diff --git a/crypto/lrng_standalone.c b/crypto/lrng_standalone.c
new file mode 100644
index 0000000..7c5d456
--- /dev/null
+++ b/crypto/lrng_standalone.c
@@ -0,0 +1,220 @@
+/*
+ * Backend for the LRNG providing the cryptographic primitives using
+ * standalone cipher implementations.
+ *
+ * Copyright (C) 2016, Stephan Mueller <smueller@chronox.de>
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, and the entire permission notice in its entirety,
+ *    including the disclaimer of warranties.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ * 3. The name of the author may not be used to endorse or promote
+ *    products derived from this software without specific prior
+ *    written permission.
+ *
+ * ALTERNATIVELY, this product may be distributed under the terms of
+ * the GNU General Public License, in which case the provisions of the GPL2
+ * are required INSTEAD OF the above restrictions.  (This clause is
+ * necessary due to a potential bad interaction between the GPL and
+ * the restrictions contained in a BSD-style copyright.)
+ *
+ * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, ALL OF
+ * WHICH ARE HEREBY DISCLAIMED.  IN NO EVENT SHALL THE AUTHOR BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT
+ * OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
+ * USE OF THIS SOFTWARE, EVEN IF NOT ADVISED OF THE POSSIBILITY OF SUCH
+ * DAMAGE.
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/cryptohash.h>
+#include <crypto/chacha20.h>
+#include <linux/random.h>
+
+/******************************* ChaCha20 DRNG *******************************/
+
+/* State according to RFC 7539 section 2.3 */
+struct chacha20_state {
+	u32 constants[4];
+	union {
+		u32 u[(CHACHA20_KEY_SIZE / sizeof(u32))];
+		u8  b[CHACHA20_KEY_SIZE];
+	} key;
+	u32 counter;
+	u32 nonce[3];
+};
+
+/**
+ * Update of the ChaCha20 state by generating one ChaCha20 block which is
+ * equal to the state of the ChaCha20. The generated block is XORed into
+ * the key part of the state. This shall ensure backtracking resistance as well
+ * as a proper mix of the ChaCha20 state once the key is injected.
+ */
+static void lrng_chacha20_update(struct chacha20_state *chacha20)
+{
+	u32 tmp[(CHACHA20_BLOCK_SIZE / sizeof(u32))];
+	u32 i;
+
+	BUILD_BUG_ON(sizeof(struct chacha20_state) != CHACHA20_BLOCK_SIZE);
+	BUILD_BUG_ON(CHACHA20_BLOCK_SIZE != 2 * CHACHA20_KEY_SIZE);
+
+	chacha20_block(&chacha20->constants[0], tmp);
+	for (i = 0; i < (CHACHA20_KEY_SIZE / sizeof(uint32_t)); i++)
+		chacha20->key.u[i] ^= tmp[i];
+	for (i = 0; i < (CHACHA20_KEY_SIZE / sizeof(uint32_t)); i++)
+		chacha20->key.u[i] ^=
+			tmp[i + (CHACHA20_KEY_SIZE / sizeof(uint32_t))];
+
+	memzero_explicit(tmp, sizeof(tmp));
+
+	/* Deterministic increment of nonce as required in RFC 7539 chapter 4 */
+	chacha20->nonce[0]++;
+	if (chacha20->nonce[0] == 0)
+		chacha20->nonce[1]++;
+	if (chacha20->nonce[1] == 0)
+		chacha20->nonce[2]++;
+
+	/* Leave counter untouched as it is start value is undefined in RFC */
+}
+
+/**
+ * Seed the ChaCha20 DRNG by injecting the input data into the key part of
+ * the ChaCha20 state. If the input data is longer than the ChaCha20 key size,
+ * perform a ChaCha20 operation after processing of key size input data.
+ * This operation shall spread out the entropy into the ChaCha20 state before
+ * new entropy is injected into the key part.
+ */
+int lrng_drng_seed_helper(void *drng, const u8 *inbuf, u32 inbuflen)
+{
+	struct chacha20_state *chacha20 = (struct chacha20_state *)drng;
+
+	while (inbuflen) {
+		u32 i, todo = min_t(u32, inbuflen, CHACHA20_KEY_SIZE);
+
+		for (i = 0; i < todo; i++)
+			chacha20->key.b[i] ^= inbuf[i];
+
+		/* Break potential dependencies between the inbuf key blocks */
+		lrng_chacha20_update(chacha20);
+		inbuf += todo;
+		inbuflen -= todo;
+	}
+
+	return 0;
+}
+
+/**
+ * Chacha20 DRNG generation of random numbers: the stream output of ChaCha20
+ * is the random number. After the completion of the generation of the
+ * stream, the entire ChaCha20 state is updated.
+ *
+ * Note, as the ChaCha20 implements a 32 bit counter, we must ensure
+ * that this function is only invoked for at most 2^32 - 1 ChaCha20 blocks
+ * before a reseed or an update happens. This is ensured by the variable
+ * outbuflen which is a 32 bit integer defining the number of bytes to be
+ * generated by the ChaCha20 DRNG. At the end of this function, an update
+ * operation is invoked which implies that the 32 bit counter will never be
+ * overflown in this implementation.
+ */
+int lrng_drng_generate_helper(void *drng, u8 *outbuf, u32 outbuflen)
+{
+	struct chacha20_state *chacha20 = (struct chacha20_state *)drng;
+	u32 ret = outbuflen;
+
+	while (outbuflen >= CHACHA20_BLOCK_SIZE) {
+		chacha20_block(&chacha20->constants[0], outbuf);
+		outbuf += CHACHA20_BLOCK_SIZE;
+		outbuflen -= CHACHA20_BLOCK_SIZE;
+	}
+
+	if (outbuflen) {
+		u8 stream[CHACHA20_BLOCK_SIZE];
+
+		chacha20_block(&chacha20->constants[0], stream);
+		memcpy(outbuf, stream, outbuflen);
+		memzero_explicit(stream, sizeof(stream));
+	}
+
+	lrng_chacha20_update(chacha20);
+
+	return ret;
+}
+
+/**
+ * Allocation of the DRBG state
+ */
+void *lrng_drng_alloc(u8 *drng_name, u32 blocklen_bytes, u32 sec_strength)
+{
+	struct chacha20_state *chacha20;
+	unsigned long v;
+
+	chacha20 = kzalloc(sizeof(struct chacha20_state), GFP_KERNEL);
+	if (!chacha20)
+		return NULL;
+
+	memcpy(&chacha20->constants[0], "expand 32-byte k", 16);
+	if (arch_get_random_long(&v))
+		chacha20->nonce[0] ^= v;
+	if (arch_get_random_long(&v))
+		chacha20->nonce[1] ^= v;
+	if (arch_get_random_long(&v))
+		chacha20->nonce[2] ^= v;
+
+	if (sec_strength > CHACHA20_KEY_SIZE)
+		goto err;
+	if (blocklen_bytes != CHACHA20_BLOCK_SIZE)
+		goto err;
+
+	pr_info("ChaCha20 core allocated\n");
+
+	return chacha20;
+
+err:
+	kfree(chacha20);
+	return NULL;
+}
+
+void lrng_drng_dealloc(void *drng)
+{
+	struct chacha20_state *chacha20 = (struct chacha20_state *)drng;
+
+	kzfree(chacha20);
+}
+
+/******************************* Hash Operation *******************************/
+
+void *lrng_hash_alloc(u8 *hashname, u8 *key, u32 keylen)
+{
+	return NULL;
+}
+
+u32 lrng_hash_digestsize(void *hash)
+{
+	return (SHA_DIGEST_WORDS * sizeof(u32));
+}
+
+int lrng_hash_buffer(void *hash, u8 *inbuf, u32 inbuflen, u8 *digest)
+{
+	u32 i;
+	u32 workspace[SHA_WORKSPACE_WORDS];
+
+	WARN_ON(inbuflen % SHA_WORKSPACE_WORDS);
+
+	for (i = 0; i < inbuflen; i += (SHA_WORKSPACE_WORDS * sizeof(u32)))
+		sha_transform((u32 *)digest, (inbuf + i), workspace);
+	memzero_explicit(workspace, sizeof(workspace));
+
+	return 0;
+}
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* Re: [PATCH v6 4/5] crypto: LRNG - enable compile
  2016-08-11 12:26 ` [PATCH v6 4/5] crypto: LRNG - enable compile Stephan Mueller
@ 2016-08-11 13:50   ` kbuild test robot
  2016-08-11 14:03     ` Stephan Mueller
  0 siblings, 1 reply; 26+ messages in thread
From: kbuild test robot @ 2016-08-11 13:50 UTC (permalink / raw)
  To: Stephan Mueller
  Cc: kbuild-all, herbert, Ted Tso, sandyinchina, Jason Cooper,
	John Denker, H. Peter Anvin, Joe Perches, Pavel Machek,
	George Spelvin, linux-crypto, linux-kernel

[-- Attachment #1: Type: text/plain, Size: 3256 bytes --]

Hi Stephan,

[auto build test ERROR on cryptodev/master]
[also build test ERROR on v4.8-rc1]
[cannot apply to next-20160811]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Stephan-Mueller/crypto-DRBG-externalize-DRBG-functions-for-LRNG/20160811-203346
base:   https://git.kernel.org/pub/scm/linux/kernel/git/herbert/cryptodev-2.6.git master
config: tile-allyesconfig (attached as .config)
compiler: tilegx-linux-gcc (GCC) 4.6.2
reproduce:
        wget https://git.kernel.org/cgit/linux/kernel/git/wfg/lkp-tests.git/plain/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        make.cross ARCH=tile 

All error/warnings (new ones prefixed by >>):

   crypto/lrng_base.c: In function 'add_interrupt_randomness':
>> crypto/lrng_base.c:583:10: error: implicit declaration of function 'get_irq_regs'
>> crypto/lrng_base.c:583:26: warning: initialization makes pointer from integer without a cast [enabled by default]
   cc1: some warnings being treated as errors

vim +/get_irq_regs +583 crypto/lrng_base.c

22708393 Stephan Mueller 2016-08-11  567  }
22708393 Stephan Mueller 2016-08-11  568  
22708393 Stephan Mueller 2016-08-11  569  /**
22708393 Stephan Mueller 2016-08-11  570   * Hot code path - Callback for interrupt handler
22708393 Stephan Mueller 2016-08-11  571   */
22708393 Stephan Mueller 2016-08-11  572  void add_interrupt_randomness(int irq, int irq_flags)
22708393 Stephan Mueller 2016-08-11  573  {
22708393 Stephan Mueller 2016-08-11  574  	u32 now_time = random_get_entropy();
22708393 Stephan Mueller 2016-08-11  575  	struct lrng_irq_info *irq_info = &lrng_pool.irq_info;
22708393 Stephan Mueller 2016-08-11  576  	u32 irq_num = (u32)atomic_add_return(1, &irq_info->num_events);
22708393 Stephan Mueller 2016-08-11  577  
22708393 Stephan Mueller 2016-08-11  578  	if (lrng_pool.irq_info.irq_highres_timer) {
22708393 Stephan Mueller 2016-08-11  579  		if (lrng_irq_stuck(irq_info, now_time))
22708393 Stephan Mueller 2016-08-11  580  			return;
22708393 Stephan Mueller 2016-08-11  581  		lrng_pool_mixin((u8 *)&now_time, sizeof(now_time), irq_num);
22708393 Stephan Mueller 2016-08-11  582  	} else {
22708393 Stephan Mueller 2016-08-11 @583  		struct pt_regs *regs = get_irq_regs();
22708393 Stephan Mueller 2016-08-11  584  		static atomic_t reg_idx = ATOMIC_INIT(0);
22708393 Stephan Mueller 2016-08-11  585  
22708393 Stephan Mueller 2016-08-11  586  		struct {
22708393 Stephan Mueller 2016-08-11  587  			long jiffies;
22708393 Stephan Mueller 2016-08-11  588  			int irq;
22708393 Stephan Mueller 2016-08-11  589  			int irq_flags;
22708393 Stephan Mueller 2016-08-11  590  			u64 ip;
22708393 Stephan Mueller 2016-08-11  591  			u32 curr_reg;

:::::: The code at line 583 was first introduced by commit
:::::: 227083931f3541c5430b40241419b56057555033 crypto: Linux Random Number Generator

:::::: TO: Stephan Mueller <smueller@chronox.de>
:::::: CC: 0day robot <fengguang.wu@intel.com>

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/octet-stream, Size: 45495 bytes --]

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v6 4/5] crypto: LRNG - enable compile
  2016-08-11 13:50   ` kbuild test robot
@ 2016-08-11 14:03     ` Stephan Mueller
  0 siblings, 0 replies; 26+ messages in thread
From: Stephan Mueller @ 2016-08-11 14:03 UTC (permalink / raw)
  To: kbuild test robot
  Cc: kbuild-all, herbert, Ted Tso, sandyinchina, Jason Cooper,
	John Denker, H. Peter Anvin, Joe Perches, Pavel Machek,
	George Spelvin, linux-crypto, linux-kernel

Am Donnerstag, 11. August 2016, 21:50:15 CEST schrieb kbuild test robot:

Hi,

> Hi Stephan,
> 
> [auto build test ERROR on cryptodev/master]
> [also build test ERROR on v4.8-rc1]
> [cannot apply to next-20160811]
> [if your patch is applied to the wrong git tree, please drop us a note to
> help improve the system]

Thank you for the report. This is due to the missing include of asm/irq_regs.h 
which seem to be included my the tested arches through some other means.

I will add it in an update.

Ciao
Stephan

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v6 0/5] /dev/random - a new approach
  2016-08-11 12:24 [PATCH v6 0/5] /dev/random - a new approach Stephan Mueller
                   ` (4 preceding siblings ...)
  2016-08-11 12:26 ` [PATCH v6 5/5] crypto: LRNG - add ChaCha20 support Stephan Mueller
@ 2016-08-11 21:36 ` Theodore Ts'o
  2016-08-12  9:34   ` Stephan Mueller
  2016-08-17 21:42   ` Pavel Machek
  2016-08-15 20:42 ` H. Peter Anvin
  6 siblings, 2 replies; 26+ messages in thread
From: Theodore Ts'o @ 2016-08-11 21:36 UTC (permalink / raw)
  To: Stephan Mueller
  Cc: herbert, sandyinchina, Jason Cooper, John Denker, H. Peter Anvin,
	Joe Perches, Pavel Machek, George Spelvin, linux-crypto,
	linux-kernel

On Thu, Aug 11, 2016 at 02:24:21PM +0200, Stephan Mueller wrote:
> 
> The following patch set provides a different approach to /dev/random which
> I call Linux Random Number Generator (LRNG) to collect entropy within the Linux
> kernel. The main improvements compared to the legacy /dev/random is to provide
> sufficient entropy during boot time as well as in virtual environments and when
> using SSDs. A secondary design goal is to limit the impact of the entropy
> collection on massive parallel systems and also allow the use accelerated
> cryptographic primitives. Also, all steps of the entropic data processing are
> testable. Finally massive performance improvements are visible at /dev/urandom
> and get_random_bytes.
> 
> The design and implementation is driven by a set of goals described in [1]
> that the LRNG completely implements. Furthermore, [1] includes a
> comparison with RNG design suggestions such as SP800-90B, SP800-90C, and
> AIS20/31.

Given the changes that have landed in Linus's tree for 4.8, how many
of the design goals for your LRNG are still left not yet achieved?

Reading the paper, you are still claiming huge performance
improvements over getrandomm and /dev/urandom.  With the use of the
ChaCha20 (and given that you added a ChaCha20 DRBG as well), it's not
clear this is still an advantage over what we currently have.

As far as whether or not you can gather enough entropy at boot time,
what we're really talking about how how much entropy we want to assume
can be gathered from interrupt timings, since what you do in your code
is not all that different from what the current random driver is
doing.  So it's pretty easy to turn a knob and say, "hey presto, we
can get all of the entropy we need before userspace starts!"  But
justifying this is much harder, and using statistical tests isn't
really sufficient as far as I'm concerned.

Cheers,

						- Ted

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v6 0/5] /dev/random - a new approach
  2016-08-11 21:36 ` [PATCH v6 0/5] /dev/random - a new approach Theodore Ts'o
@ 2016-08-12  9:34   ` Stephan Mueller
  2016-08-12 19:22     ` Theodore Ts'o
  2016-08-17 21:42   ` Pavel Machek
  1 sibling, 1 reply; 26+ messages in thread
From: Stephan Mueller @ 2016-08-12  9:34 UTC (permalink / raw)
  To: Theodore Ts'o
  Cc: herbert, sandyinchina, Jason Cooper, John Denker, H. Peter Anvin,
	Joe Perches, Pavel Machek, George Spelvin, linux-crypto,
	linux-kernel

Am Donnerstag, 11. August 2016, 17:36:32 CEST schrieb Theodore Ts'o:

Hi Theodore,

> On Thu, Aug 11, 2016 at 02:24:21PM +0200, Stephan Mueller wrote:
> > The following patch set provides a different approach to /dev/random 
which
> > I call Linux Random Number Generator (LRNG) to collect entropy within the
> > Linux kernel. The main improvements compared to the legacy /dev/random is
> > to provide sufficient entropy during boot time as well as in virtual
> > environments and when using SSDs. A secondary design goal is to limit the
> > impact of the entropy collection on massive parallel systems and also
> > allow the use accelerated cryptographic primitives. Also, all steps of
> > the entropic data processing are testable. Finally massive performance
> > improvements are visible at /dev/urandom and get_random_bytes.
> > 
> > The design and implementation is driven by a set of goals described in 
[1]
> > that the LRNG completely implements. Furthermore, [1] includes a
> > comparison with RNG design suggestions such as SP800-90B, SP800-90C, and
> > AIS20/31.
> 
> Given the changes that have landed in Linus's tree for 4.8, how many
> of the design goals for your LRNG are still left not yet achieved?

The core concerns I have at this point are the following:

- correlation: the interrupt noise source is closely correlated to the HID/
block noise sources. I see that the fast_pool somehow "smears" that 
correlation. However, I have not seen a full assessment that the correlation 
is gone away. Given that I do not believe that the HID event values (key 
codes, mouse coordinates) have any entropy -- the user sitting at the console 
exactly knows what he pressed and which mouse coordinates are created, and 
given that for block devices, only the high-resolution time stamp gives any 
entropy, I am suggesting to remove the HID/block device noise sources and 
leave the IRQ noise source. Maybe we could record the HID event values to 
further stir the pool but do not credit it any entropy. Of course, that would 
imply that the assumed entropy in an IRQ event is revalued. I am currently 
finishing up an assessment of how entropy behaves in a VM (where I hope that 
the report is released). Please note that contrary to my initial 
expectations, the IRQ events are the only noise sources which are almost 
unaffected by a VMM operation. Hence, IRQs are much better in a VM 
environment than block or HID noise sources.

- entropy estimate: the current entropy heuristics IMHO have nothing to do 
with the entropy of the data coming in. Currently, the min of first/second/
third derivative of the Jiffies time stamp is used and capped at 11. That 
value is the entropy value credited to the event. Given that the entropy 
rests with the high-res time stamp and not with jiffies or the event value, I 
think that the heuristic is not helpful. I understand that it underestimates 
on average the available entropy, but that is the only relationship I see. In 
my mentioned entropy in VM assessment (plus the BSI report on /dev/random 
which is unfortunately written in German, but available in the Internet) I 
did a min entropy calculation based on different min entropy formulas 
(SP800-90B). That calculation shows that we get from the noise sources is 
about 5 to 6 bits. On average the entropy heuristic credits between 0.5 and 1 
bit for events, so it underestimates the entropy. Yet, the entropy heuristic 
can credit up to 11 bits. Here I think it becomes clear that the current 
entropy heuristic is not helpful. In addition, on systems where no high-res 
timer is available, I assume (I have not measured it yet), the entropy 
heuristic even overestimates the entropy.

- albeit I like the current injection of twice the fast_pool into the 
ChaCha20 (which means that the pathological case where the collection of 128 
bits of entropy would result in an attack resistance of 2 * 128 bits and 
*not* 2^128 bits is now increased to an attack strength of 2^64 * 2 bits), /
dev/urandom has *no* entropy until that injection happens. The injection 
happens early in the boot cycle, but in my test system still after user space 
starts. I tried to inject "atomically" (to not fall into the aforementioned 
pathological case trap) of 32 / 112 / 256 bits of entropy into the /dev/
urandom RNG to have /dev/urandom at least seeded with a few bits before user 
space starts followed by the atomic injection of the subsequent bits.


A minor issue that may not be of too much importance: if there is a user 
space entropy provider waiting with select(2) or poll(2) on /dev/random (like 
rngd or my jitterentropy-rngd), this provider is only woken up when somebody 
pulls on /dev/random. If /dev/urandom is pulled (and the system does not 
receive entropy from the add*randomness noise sources), the user space 
provider is *not* woken up. So, /dev/urandom spins as a DRNG even though it 
could use a topping off of its entropy once in a while. In my jitterentropy-
rngd I have handled the situation that in addition to a select(2), the daemon 
is woken up every 5 seconds to read the entropy_avail file and starts 
injecting data into the kernel if it falls below a threshold. Yet, this is a 
hack. The wakeup function in the kernel should be placed at a different 
location to also have /dev/urandom benefit from the wakeup.
> 
> Reading the paper, you are still claiming huge performance
> improvements over getrandomm and /dev/urandom.  With the use of the
> ChaCha20 (and given that you added a ChaCha20 DRBG as well), it's not
> clear this is still an advantage over what we currently have.

I agree that with your latest changes, the performance of /dev/urandom is 
comparatively to my implementation, considering the tables 6 and 7 in my 
report. Although the speed of my ChaCha20 DRNG is faster for large block 
sizes (470 vs 210 MB/s for 4096 byte blocks), you rightfully state that the 
large block sizes do not really matter and hence I am not really using it for 
comparison. 

The table 6 and 7 reference the old /dev/urandom using still the SHA-1.
> 
> As far as whether or not you can gather enough entropy at boot time,
> what we're really talking about how how much entropy we want to assume
> can be gathered from interrupt timings, since what you do in your code
> is not all that different from what the current random driver is

Correct. I think I am doing exactly what you do regarding the entropy 
collection minus the caveats mentioned above.

> doing.  So it's pretty easy to turn a knob and say, "hey presto, we
> can get all of the entropy we need before userspace starts!"  But
> justifying this is much harder, and using statistical tests isn't
> really sufficient as far as I'm concerned.

I agree that statistics is one hint only. But as of now I have not seen any 
real explanation why an IRQ event measured with a high-res timer should not 
have 1 bit or 0.5 bits of entropy on average. All my statistical measurements 
(see my LRNG paper, see with my hopefully released VM assessment paper) show 
that the statistical measurement indicates that each high-res time stamp of 
an IRQ has more 4 bits of entropy at least when the system is under attack. 
Both one bit or 0.5 bits is more than enough to have a properly working /dev/
random even in virtual environments, embedded systems, headless systems, 
systems with SSDs, systems using a device mapper, etc. All those type of 
systems are currently subject to heavy penalties because of the collision 
problem I mentioned in the first bullet above.

Finally, one remark which I know you could not care less: :-) 

I try to use a known DRNG design that a lot of folks have already assessed -- 
SP800-90A (and please, do not hint to the Dual EC DRBG as this issue was 
pointed out already by researcher shortly after the first SP800-90A came out 
in 2007). This way I do not need to re-invent the wheel and potentially 
forget about things that may be helpful in a DRNG. To allow researchers to 
assess my ChaCha20 DRNG. that used when no kernel crypto API is compiled. 
independently from the kernel, I extracted the ChaCha20 DRNG code into a 
standalone DRNG accessible at [1]. This standalone implementation can be 
debugged and studied in user space. Moreover it is a simple copy of the 
kernel code to allow researchers an easy comparison.

[1] http://www.chronox.de/chacha20_drng.html

Ciao
Stephan

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v6 0/5] /dev/random - a new approach
  2016-08-12  9:34   ` Stephan Mueller
@ 2016-08-12 19:22     ` Theodore Ts'o
  2016-08-15  6:13       ` Stephan Mueller
  0 siblings, 1 reply; 26+ messages in thread
From: Theodore Ts'o @ 2016-08-12 19:22 UTC (permalink / raw)
  To: Stephan Mueller
  Cc: herbert, sandyinchina, Jason Cooper, John Denker, H. Peter Anvin,
	Joe Perches, Pavel Machek, George Spelvin, linux-crypto,
	linux-kernel

On Fri, Aug 12, 2016 at 11:34:55AM +0200, Stephan Mueller wrote:
> 
> - correlation: the interrupt noise source is closely correlated to the HID/
> block noise sources. I see that the fast_pool somehow "smears" that 
> correlation. However, I have not seen a full assessment that the correlation 
> is gone away. Given that I do not believe that the HID event values (key 
> codes, mouse coordinates) have any entropy -- the user sitting at the console 
> exactly knows what he pressed and which mouse coordinates are created, and 
> given that for block devices, only the high-resolution time stamp gives any 
> entropy, I am suggesting to remove the HID/block device noise sources and 
> leave the IRQ noise source. Maybe we could record the HID event values to 
> further stir the pool but do not credit it any entropy. Of course, that would 
> imply that the assumed entropy in an IRQ event is revalued. I am currently 
> finishing up an assessment of how entropy behaves in a VM (where I hope that 
> the report is released). Please note that contrary to my initial 
> expectations, the IRQ events are the only noise sources which are almost 
> unaffected by a VMM operation. Hence, IRQs are much better in a VM 
> environment than block or HID noise sources.

The reason why I'm untroubled with leaving them in is because I beieve
the quality of the timing information from the HID and block devices
is better than most of the other interrupt sources.  For example, most
network interfaces these days use NAPI, which means interrupts get
coalesced and sent in batch, which means the time of the interrupt is
latched off of some kind of timer --- and on many embeded devices
there is a single oscillator for the entire mainboard.  We only call
add_disk_randomness for rotational devices (e.g., only HDD's, not
SSD's), after the interrupt has been recorded.  Yes, most of the
entropy is probably going to be found in the high entropy time stamp
rather than the jiffies-based timestamp, especially for the hard drive
completion time.

I also tend to take a much more pragmatic viewpoint towards
measurability.  Sure, the human may know what she is typing, and
something about when she typed it (although probably not accurately
enough on a millisecond basis, so even the jiffies number is going to
be not easily predicted), but the analyst sitting behind the desk at
the NSA or the BND or the MSS is probably not going to have access to
that information.

(Whereas the NSA or the BND probably *can* get low-level information
about the Intel x86 CPU's internal implementation, which is why I'm
extremely amused by the arugment --- "the internals of the Intel CPU
are **so** complex we can't reverse engineer what's going on inside,
so the jitter RNG *must* be good!"  Note BTW that the NSA has only
said they won't do industrial espionage for economic for economic
gain, not that they won't engage in espionage against industrial
entities at all.  This is why the NSA spying on Petrobras is
considered completely fair game, even if it does enrage the
Brazillians.  :-)

> - entropy estimate: the current entropy heuristics IMHO have nothing to do 
> with the entropy of the data coming in. Currently, the min of first/second/
> third derivative of the Jiffies time stamp is used and capped at 11. That 
> value is the entropy value credited to the event. Given that the entropy 
> rests with the high-res time stamp and not with jiffies or the event value, I 
> think that the heuristic is not helpful. I understand that it underestimates 
> on average the available entropy, but that is the only relationship I see. In 
> my mentioned entropy in VM assessment (plus the BSI report on /dev/random 
> which is unfortunately written in German, but available in the Internet) I 
> did a min entropy calculation based on different min entropy formulas 
> (SP800-90B). That calculation shows that we get from the noise sources is 
> about 5 to 6 bits. On average the entropy heuristic credits between 0.5 and 1 
> bit for events, so it underestimates the entropy. Yet, the entropy heuristic 
> can credit up to 11 bits. Here I think it becomes clear that the current 
> entropy heuristic is not helpful. In addition, on systems where no high-res 
> timer is available, I assume (I have not measured it yet), the entropy 
> heuristic even overestimates the entropy.

The disks on a VM are not rotational disks, so we wouldn't be using
the add-disk-randomness entropy calculation.  And you generally don't
have a keyboard on a mouse attached to the VM, so we would be using
the entropy estimate from the interrupt timing.

As far as whether you can get 5-6 bits of entropy from interrupt
timings --- that just doesn't pass the laugh test.  The min-entropy
formulas are estimates assuming IID data sources, and it's not at all
clear (in fact, i'd argue pretty clearly _not_) that they are IID.  As
I said, take for example the network interfaces, and how NAPI gets
implemented.  And in a VM environment, where everything is synthetic,
the interrupt timings are definitely not IID, and there may be
patterns that will not detectable by statistical mechanisms.

> - albeit I like the current injection of twice the fast_pool into the 
> ChaCha20 (which means that the pathological case where the collection of 128 
> bits of entropy would result in an attack resistance of 2 * 128 bits and 
> *not* 2^128 bits is now increased to an attack strength of 2^64 * 2 bits), /
> dev/urandom has *no* entropy until that injection happens. The injection 
> happens early in the boot cycle, but in my test system still after user space 
> starts. I tried to inject "atomically" (to not fall into the aforementioned 
> pathological case trap) of 32 / 112 / 256 bits of entropy into the /dev/
> urandom RNG to have /dev/urandom at least seeded with a few bits before user 
> space starts followed by the atomic injection of the subsequent bits.

The early boot problem is a hard one.  We can inject some noise in,
but I don't think a few bits actually does much good.  So the question
is whether it's faster to get to fully seeded, or to inject in 32 bits
of entropy in the hopes that this will do some good.  Personally, I'm
not convinced.  So the tack I've taken is to have warning messages
printed when someone *does* draw from /dev/urandom before it's fully
seeded.  In many cases, it's for entirely bogus, non-cryptographic
reasons.  (For example, Python wanting to use a random salt to protect
against certain DOS attacks when Python is being used in a web server
--- a use case which is completely irrelevant when it's being used by
systemd generator scripts at boot time.)

Ultimately, I think the right answer here is we need help from the
bootloader, and ultimately some hardware help or some initialization
at factory time which isn't too easily hacked by a Tailored Access
Organization team who can intercept hardware shipments.  :-)

> A minor issue that may not be of too much importance: if there is a user 
> space entropy provider waiting with select(2) or poll(2) on /dev/random (like 
> rngd or my jitterentropy-rngd), this provider is only woken up when somebody 
> pulls on /dev/random. If /dev/urandom is pulled (and the system does not 
> receive entropy from the add*randomness noise sources), the user space 
> provider is *not* woken up. So, /dev/urandom spins as a DRNG even though it 
> could use a topping off of its entropy once in a while. In my jitterentropy-
> rngd I have handled the situation that in addition to a select(2), the daemon 
> is woken up every 5 seconds to read the entropy_avail file and starts 
> injecting data into the kernel if it falls below a threshold. Yet, this is a 
> hack. The wakeup function in the kernel should be placed at a different 
> location to also have /dev/urandom benefit from the wakeup.

Either /dev/urandom is a DRBG or is it isn't.  If it's a DRBG then you
don't need to track the entropy of the DRBG at all.  In fact, the
concept doesn't even really make sense for DRBG's.  Since we will be
reseeding the DRBG every five minutes if it is in constant use, there
will be plenty of opportunity to pull from a rngd or some other
hw_random device.

> Finally, one remark which I know you could not care less: :-) 
> 
> I try to use a known DRNG design that a lot of folks have already assessed -- 
> SP800-90A (and please, do not hint to the Dual EC DRBG as this issue was 
> pointed out already by researcher shortly after the first SP800-90A came out 
> in 2007). This way I do not need to re-invent the wheel and potentially 
> forget about things that may be helpful in a DRNG. To allow researchers to 
> assess my ChaCha20 DRNG. that used when no kernel crypto API is compiled. 
> independently from the kernel, I extracted the ChaCha20 DRNG code into a 
> standalone DRNG accessible at [1]. This standalone implementation can be 
> debugged and studied in user space. Moreover it is a simple copy of the 
> kernel code to allow researchers an easy comparison.

SP800-90A consists of a high level architecture of a DRBG, plus some
lower-level examples of how to use that high level architecture
assuming you have a hash function, or a block cipher, etc.  But it
doesn't have an example on using a stream cipher like ChaCha20.  So
all you can really do is follow the high-level architecture.  Mapping
the high-level architecture to the current /dev/random generator isn't
hard.  And no, I don't see the point of renaming things or moving
things around just to make the mapping to the SP800-90A easier.

       	      	      	       	       	  - Ted

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v6 0/5] /dev/random - a new approach
  2016-08-12 19:22     ` Theodore Ts'o
@ 2016-08-15  6:13       ` Stephan Mueller
  2016-08-15 15:00         ` Theodore Ts'o
  0 siblings, 1 reply; 26+ messages in thread
From: Stephan Mueller @ 2016-08-15  6:13 UTC (permalink / raw)
  To: Theodore Ts'o
  Cc: herbert, sandyinchina, Jason Cooper, John Denker, H. Peter Anvin,
	Joe Perches, Pavel Machek, George Spelvin, linux-crypto,
	linux-kernel

Am Freitag, 12. August 2016, 15:22:08 CEST schrieb Theodore Ts'o:

Hi Theodore,

> On Fri, Aug 12, 2016 at 11:34:55AM +0200, Stephan Mueller wrote:
> > - correlation: the interrupt noise source is closely correlated to the
> > HID/
> > block noise sources. I see that the fast_pool somehow "smears" that
> > correlation. However, I have not seen a full assessment that the
> > correlation is gone away. Given that I do not believe that the HID event
> > values (key codes, mouse coordinates) have any entropy -- the user
> > sitting at the console exactly knows what he pressed and which mouse
> > coordinates are created, and given that for block devices, only the
> > high-resolution time stamp gives any entropy, I am suggesting to remove
> > the HID/block device noise sources and leave the IRQ noise source. Maybe
> > we could record the HID event values to further stir the pool but do not
> > credit it any entropy. Of course, that would imply that the assumed
> > entropy in an IRQ event is revalued. I am currently finishing up an
> > assessment of how entropy behaves in a VM (where I hope that the report
> > is released). Please note that contrary to my initial
> > expectations, the IRQ events are the only noise sources which are almost
> > unaffected by a VMM operation. Hence, IRQs are much better in a VM
> > environment than block or HID noise sources.
> 
> The reason why I'm untroubled with leaving them in is because I beieve
> the quality of the timing information from the HID and block devices
> is better than most of the other interrupt sources.  For example, most
> network interfaces these days use NAPI, which means interrupts get
> coalesced and sent in batch, which means the time of the interrupt is
> latched off of some kind of timer --- and on many embeded devices

According to my understanding of NAPI, the network card sends one interrupt 
when receiving the first packet of a packet stream and then the driver goes 
into polling mode, disabling the interrupt. So, I cannot see any batching 
based on some on-board timer where add_interrupt_randomness is affected.
 
Can you please elaborate?

> there is a single oscillator for the entire mainboard.  We only call
> add_disk_randomness for rotational devices (e.g., only HDD's, not
> SSD's), after the interrupt has been recorded.  Yes, most of the
> entropy is probably going to be found in the high entropy time stamp
> rather than the jiffies-based timestamp, especially for the hard drive
> completion time.
> 
> I also tend to take a much more pragmatic viewpoint towards
> measurability.  Sure, the human may know what she is typing, and
> something about when she typed it (although probably not accurately
> enough on a millisecond basis, so even the jiffies number is going to
> be not easily predicted), but the analyst sitting behind the desk at
> the NSA or the BND or the MSS is probably not going to have access to
> that information.

Well, injecting a trojan to a system in user space as unprivileged user that 
starts some X11 session and that can perform the following command is all you 
need to get to the key commands of the console.

xinput list |   grep -Po 'id=\K\d+(?=.*slave\s*keyboard)' |   xargs -P0 -n1 
xinput test

That is fully within reach of not only some agencies but also other folks. It 
is similar for mice.

> 
> (Whereas the NSA or the BND probably *can* get low-level information
> about the Intel x86 CPU's internal implementation, which is why I'm
> extremely amused by the arugment --- "the internals of the Intel CPU
> are **so** complex we can't reverse engineer what's going on inside,
> so the jitter RNG *must* be good!"  Note BTW that the NSA has only

Sure, agencies may know the full internals of a CPU like they know the full 
internals of the Linux kernel with the /dev/random implementation or just like 
they know the full internals of AES. But they do not know the current state of 
the system. And the cryptographic strength comes from that state.

When you refer to my Jitter RNG, I think I have shown that its strength comes 
from the internal state of the CPU (states of the internal building blocks 
relative to each other which may cause internal wait states, state of branch 
prediction or pipelines, etc.) and not of the layout of the CPU.

> said they won't do industrial espionage for economic for economic
> gain, not that they won't engage in espionage against industrial
> entities at all.  This is why the NSA spying on Petrobras is
> considered completely fair game, even if it does enrage the
> Brazillians.  :-)
> 
> > - entropy estimate: the current entropy heuristics IMHO have nothing to do
> > with the entropy of the data coming in. Currently, the min of
> > first/second/
> > third derivative of the Jiffies time stamp is used and capped at 11. That
> > value is the entropy value credited to the event. Given that the entropy
> > rests with the high-res time stamp and not with jiffies or the event
> > value, I think that the heuristic is not helpful. I understand that it
> > underestimates on average the available entropy, but that is the only
> > relationship I see. In my mentioned entropy in VM assessment (plus the
> > BSI report on /dev/random which is unfortunately written in German, but
> > available in the Internet) I did a min entropy calculation based on
> > different min entropy formulas (SP800-90B). That calculation shows that
> > we get from the noise sources is about 5 to 6 bits. On average the
> > entropy heuristic credits between 0.5 and 1 bit for events, so it
> > underestimates the entropy. Yet, the entropy heuristic can credit up to
> > 11 bits. Here I think it becomes clear that the current entropy heuristic
> > is not helpful. In addition, on systems where no high-res timer is
> > available, I assume (I have not measured it yet), the entropy heuristic
> > even overestimates the entropy.
> 
> The disks on a VM are not rotational disks, so we wouldn't be using
> the add-disk-randomness entropy calculation.  And you generally don't
> have a keyboard on a mouse attached to the VM, so we would be using
> the entropy estimate from the interrupt timing.

On VMs, the add_disk_randomness is always used with the exception of KVM when 
using a virtio disk. All other VMs do not use virtio and offer the disk as a 
SCSI or IDE device. In fact, add_disk_randomness is only disabled when the 
kernel detects:

- SDDs

- virtio

- use of device mapper

(Btw we should be thankful that this is done on Hyper-V as we would have a 
fatal state in a very common use case where /dev/random would have collected 
no entropy and /dev/urandom would have provided bogus data before the patch 
for using the VMBus interrupts was added.)
> 
> As far as whether you can get 5-6 bits of entropy from interrupt
> timings --- that just doesn't pass the laugh test.  The min-entropy

May I ask what you find amusing? When you have a noise source for which you 
have no theoretical model, all you can do is to revert to statistical 
measurements.

> formulas are estimates assuming IID data sources, and it's not at all
> clear (in fact, i'd argue pretty clearly _not_) that they are IID.  As

Sure, they are not IID based on the SP800-90B IID verification tests. For 
that, SP800-90B has non-IID versions of the min entropy calculations. See 
section 9.1 together with 9.3 of SP800-90B where I used those non-IID 
formulas.

Sure, it is "just" some statistical test. But it is better IMHO than to brush 
away available entropy entirely just because "my stomach tells me it is not 
good".

Just see the guy that sent an email to linux-crypto today. His MIPS /dev/
random cannot produce 16 bytes of data within 4 hours (which is similar to 
what I see on POWER systems). This raises a very interesting security issue: /
dev/urandom is not seeded properly. And we all know what folks do in the wild: 
when /dev/random does not produce data, /dev/urandom is used -- all general 
user space libs (OpenSSL, libgcrypt, nettle, ...) seed from /dev/urandom per 
default.

And I call that a very insecure state of affairs.

> I said, take for example the network interfaces, and how NAPI gets

As mentioned above, I do not see NAPI as an issue for interrupt entropy.

> implemented.  And in a VM environment, where everything is synthetic,
> the interrupt timings are definitely not IID, and there may be
> patterns that will not detectable by statistical mechanisms.

As mentioned, to my very surprise, I found that interrupts are the only thing 
in a VM that works extremely well even under attack scenarios. VMMs that I 
quantiatively tested include QEMU/KVM, VirtualBox, VMWare ESXi and Hyper-V. 
After more research, I came to the conclusion that even on the theoretical 
side, it must be one of the better noise sources in a VM.

Note, this was the key motivation for me to start my own implementation of /
dev/random.
> 
> > - albeit I like the current injection of twice the fast_pool into the
> > ChaCha20 (which means that the pathological case where the collection of
> > 128 bits of entropy would result in an attack resistance of 2 * 128 bits
> > and *not* 2^128 bits is now increased to an attack strength of 2^64 * 2
> > bits), / dev/urandom has *no* entropy until that injection happens. The
> > injection happens early in the boot cycle, but in my test system still
> > after user space starts. I tried to inject "atomically" (to not fall into
> > the aforementioned pathological case trap) of 32 / 112 / 256 bits of
> > entropy into the /dev/ urandom RNG to have /dev/urandom at least seeded
> > with a few bits before user space starts followed by the atomic injection
> > of the subsequent bits.
> The early boot problem is a hard one.  We can inject some noise in,
> but I don't think a few bits actually does much good.  So the question
> is whether it's faster to get to fully seeded, or to inject in 32 bits

I am not talking about the 32 bits. We can leave the current 64 bits for the 
first seed.

I am concerned about the *two* separate injections of 64 bits. It should 
rather be *one* injection of at least 112 bit (or 128 bits). This is what I 
mean with "atomic" operation here.

> of entropy in the hopes that this will do some good.  Personally, I'm
> not convinced.  So the tack I've taken is to have warning messages
> printed when someone *does* draw from /dev/urandom before it's fully
> seeded.  In many cases, it's for entirely bogus, non-cryptographic
> reasons.  (For example, Python wanting to use a random salt to protect
> against certain DOS attacks when Python is being used in a web server
> --- a use case which is completely irrelevant when it's being used by
> systemd generator scripts at boot time.)
> 
> Ultimately, I think the right answer here is we need help from the
> bootloader, and ultimately some hardware help or some initialization
> at factory time which isn't too easily hacked by a Tailored Access
> Organization team who can intercept hardware shipments.  :-)

I agree. But we can still try to make the Linux side good as possible to cover 
people who do not have the luxury to control the hardware.
> 
[...]
> 
> > Finally, one remark which I know you could not care less: :-)
> > 
> > I try to use a known DRNG design that a lot of folks have already assessed
> > -- SP800-90A (and please, do not hint to the Dual EC DRBG as this issue
> > was pointed out already by researcher shortly after the first SP800-90A
> > came out in 2007). This way I do not need to re-invent the wheel and
> > potentially forget about things that may be helpful in a DRNG. To allow
> > researchers to assess my ChaCha20 DRNG. that used when no kernel crypto
> > API is compiled. independently from the kernel, I extracted the ChaCha20
> > DRNG code into a standalone DRNG accessible at [1]. This standalone
> > implementation can be debugged and studied in user space. Moreover it is
> > a simple copy of the kernel code to allow researchers an easy comparison.
> 
> SP800-90A consists of a high level architecture of a DRBG, plus some
> lower-level examples of how to use that high level architecture
> assuming you have a hash function, or a block cipher, etc.  But it
> doesn't have an example on using a stream cipher like ChaCha20.  So
> all you can really do is follow the high-level architecture.  Mapping
> the high-level architecture to the current /dev/random generator isn't
> hard.  And no, I don't see the point of renaming things or moving
> things around just to make the mapping to the SP800-90A easier.

Unfortunately I have seen subtle problems with DRNG implementations -- and a 
new one will emerge in the not too far future... There are examples of that 
and I like tests against reference implementations.

For example, the one key problem I have with the ChaCha20 DRNG is the 
following: when final update of the internal state is made for enhanced 
prediction resistance, ChaCha20 is used to generate one more block. That new 
block has 512 bits in size. In your implementation, you use the first 256 bits 
to inject it back into ChaCha20 as key. I use the entire 512 bits. I do not 
know whether one is better than the other (in the sense that it does not loose 
entropy). But barring any real research from other cryptographers, I guess we 
both do not know. And I have seen that such subtle issues may lead to 
catastrophic problems.

Thus, knowing valid DRNG designs may cover 99% of a new DRNG design. But the 
remaining 1% usually gives you the creeps.

Ciao
Stephan

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v6 0/5] /dev/random - a new approach
  2016-08-15  6:13       ` Stephan Mueller
@ 2016-08-15 15:00         ` Theodore Ts'o
  0 siblings, 0 replies; 26+ messages in thread
From: Theodore Ts'o @ 2016-08-15 15:00 UTC (permalink / raw)
  To: Stephan Mueller
  Cc: herbert, sandyinchina, Jason Cooper, John Denker, H. Peter Anvin,
	Joe Perches, Pavel Machek, George Spelvin, linux-crypto,
	linux-kernel

On Mon, Aug 15, 2016 at 08:13:06AM +0200, Stephan Mueller wrote:
> 
> According to my understanding of NAPI, the network card sends one interrupt 
> when receiving the first packet of a packet stream and then the driver goes 
> into polling mode, disabling the interrupt. So, I cannot see any batching 
> based on some on-board timer where add_interrupt_randomness is affected.
>  
> Can you please elaborate?

>From https://wiki.linuxfoundation.org/networking/napi:

    NAPI (“New API”) is an extension to the device driver packet
    processing framework, which is designed to improve the performance
    of high-speed networking. NAPI works through:

    * Interrupt mitigation
    * High-speed networking can create thousands of interrupts per
      second, all of which tell the system something it already knew:
      it has lots of packets to process. NAPI allows drivers to run
      with (some) interrupts disabled during times of high traffic,
      with a corresponding decrease in system load.
      ...

The idea is to mitigate the CPU load from having a large number of
interrupts.  Spinning in a tight loop, wihch is what polling odoes,
doesn't help reduce the CPU load.  So it's *not* what you would want
to do on a small-count core CPU, or on a bettery operated device.

What you're thinking about is a technique to reduce interrupt latency,
which might be useful on a 32-core server CPU where trading off power
consumption for interrupt latency makes sense.  But NAPI is the exact
reverese thing --- it trades interrupt latency for CPU and power
efficiency.

NAPI in its traditional works by having the interrupt card *not* send
an interrupt after the packet comes in, and instead accumulate packets
in a buffer.  The interupt only gets sent after an short timeout, or
when the on-NIC buffer is in danger of filling.  As a result, when
interrupts get sent might be granularized based on some clock --- and
on small systems, there may only be a single CPU on that clock.

> Well, injecting a trojan to a system in user space as unprivileged user that 
> starts some X11 session and that can perform the following command is all you 
> need to get to the key commands of the console.
> 
> xinput list |   grep -Po 'id=\K\d+(?=.*slave\s*keyboard)' |   xargs -P0 -n1 
> xinput test
> 
> That is fully within reach of not only some agencies but also other folks. It 
> is similar for mice.

This doesn't result in keyboard and mice interrupts, which is how
add_input_randomness() works.  So it's mostly irrelevant.

> When you refer to my Jitter RNG, I think I have shown that its strength comes 
> from the internal state of the CPU (states of the internal building blocks 
> relative to each other which may cause internal wait states, state of branch 
> prediction or pipelines, etc.) and not of the layout of the CPU.

All of this is deterministic.  Just as AES_ENCRPT(NSA_KEY, COUNTER++)
is completely deterministic and dependant on the internal state of the
PRNG.  But it's not random, and if you don't know NSA_KEY you can't
prove that it's not really random.

> On VMs, the add_disk_randomness is always used with the exception of KVM when 
> using a virtio disk. All other VMs do not use virtio and offer the disk as a 
> SCSI or IDE device. In fact, add_disk_randomness is only disabled when the 
> kernel detects:
> 
> - SDDs
> 
> - virtio
> 
> - use of device mapper

AWS uses para-virtualized SCSI; Google Compute Engine uses
virtio-SCSI.  So the kernel should know that these are virtual
devices, and I'd argue that if we're setting the add_random flag, we
shouldn't be.

> > As far as whether you can get 5-6 bits of entropy from interrupt
> > timings --- that just doesn't pass the laugh test.  The min-entropy
> 
> May I ask what you find amusing? When you have a noise source for which you 
> have no theoretical model, all you can do is to revert to statistical 
> measurements.

So tell me, how much "minimum", "conservative" entropy do the non-IID
tests report when fed as input AES_ENCRYPT*NSA_KEY, COUNTER++)?

Sometimes, laughing is better than crying.  :-)

> Just see the guy that sent an email to linux-crypto today. His MIPS /dev/
> random cannot produce 16 bytes of data within 4 hours (which is similar to 
> what I see on POWER systems). This raises a very interesting security issue: /
> dev/urandom is not seeded properly. And we all know what folks do in the wild: 
> when /dev/random does not produce data, /dev/urandom is used -- all general 
> user space libs (OpenSSL, libgcrypt, nettle, ...) seed from /dev/urandom per 
> default.
>
> And I call that a very insecure state of affairs.

Overestimating entropy that isn't there doesn't actually make things
more secure.  It just makes people feel better.  This is especially
true if the goal is declare the /dev/urandom to be fully initialized
before userspace is started.  So if the claim is that your "LRNG" can
fully initialize the /dev/urandom pool, and it's using fundamentally
using the same interrupt sampling techniques as what is currently in
the kernel, then there is no substantive difference in security
between using /dev/urandom and using /dev/urandom with your patches
applied and enabled.

In the case of MIPS it doesn't have a high resolution timer, so it
*will* have less entropy that it can gather using interrupts compared
to an x86 system.  So i'd much rather be very conservative and
encourage people to use a CPU that *does* have a high resolution timer
or a hardware random number generator, or use some other very
carefully seeding via the bootloader or some such, rather than lull
them into a potentially false sense of security.

> As mentioned, to my very surprise, I found that interrupts are the only thing 
> in a VM that works extremely well even under attack scenarios. VMMs that I 
> quantiatively tested include QEMU/KVM, VirtualBox, VMWare ESXi and Hyper-V. 
> After more research, I came to the conclusion that even on the theoretical 
> side, it must be one of the better noise sources in a VM.

Again, how does your quantitive tests work on AES_ENCRYPT(NSA_KEY, COUNTER++)?

> I am concerned about the *two* separate injections of 64 bits. It should 
> rather be *one* injection of at least 112 bit (or 128 bits). This is what I 
> mean with "atomic" operation here.

We only consider urandom/getrandom CRNG to be initialized when 2
injections happen without an intervening extract operation.  If there
is an extract operation, we print a warning and then reset the
counter.  So by the time /dev/urandom is initialized, it has had two
"atomic" injections of entropy.  It's the same kind of atomicity which
is provided by the seqlock_t abstraction in the linux kernel.


> For example, the one key problem I have with the ChaCha20 DRNG is the 
> following: when final update of the internal state is made for enhanced 
> prediction resistance, ChaCha20 is used to generate one more block. That new 
> block has 512 bits in size. In your implementation, you use the first 256 bits 
> to inject it back into ChaCha20 as key. I use the entire 512 bits. I do not 
> know whether one is better than the other (in the sense that it does not loose 
> entropy). But barring any real research from other cryptographers, I guess we 
> both do not know. And I have seen that such subtle issues may lead to 
> catastrophic problems.

Chacha20 uses a 256-bit key, and what I'm doing is folding in 256 bits
into the ChaCha20 key.  The security strength that we're claiming fom
in the SP800-90A DRBG model is 256 bits (the maximum from the
SP800-90A set of 112, 128, 192, or 256), and so I'd argue that what
I'm doing is sufficient.

Entropy doesn't really have a meaning in a DRBG, so SP800-90A wouldn't
have anything to say anything about either alternative.

Cheers,

							- Ted

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v6 0/5] /dev/random - a new approach
  2016-08-11 12:24 [PATCH v6 0/5] /dev/random - a new approach Stephan Mueller
                   ` (5 preceding siblings ...)
  2016-08-11 21:36 ` [PATCH v6 0/5] /dev/random - a new approach Theodore Ts'o
@ 2016-08-15 20:42 ` H. Peter Anvin
  2016-08-16  5:45   ` Stephan Mueller
  6 siblings, 1 reply; 26+ messages in thread
From: H. Peter Anvin @ 2016-08-15 20:42 UTC (permalink / raw)
  To: Stephan Mueller, herbert, Ted Tso
  Cc: sandyinchina, Jason Cooper, John Denker, Joe Perches,
	Pavel Machek, George Spelvin, linux-crypto, linux-kernel

On 08/11/16 05:24, Stephan Mueller wrote:
> * prevent fast noise sources from dominating slow noise sources
>   in case of /dev/random

Can someone please explain if and why this is actually desirable, and if
this assessment has been passed to someone who has actual experience
with cryptography at the professional level?

	-hpa

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v6 0/5] /dev/random - a new approach
  2016-08-15 20:42 ` H. Peter Anvin
@ 2016-08-16  5:45   ` Stephan Mueller
  2016-08-16 22:28     ` H. Peter Anvin
  0 siblings, 1 reply; 26+ messages in thread
From: Stephan Mueller @ 2016-08-16  5:45 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: herbert, Ted Tso, sandyinchina, Jason Cooper, John Denker,
	Joe Perches, Pavel Machek, George Spelvin, linux-crypto,
	linux-kernel

Am Montag, 15. August 2016, 13:42:54 CEST schrieb H. Peter Anvin:

Hi H,

> On 08/11/16 05:24, Stephan Mueller wrote:
> > * prevent fast noise sources from dominating slow noise sources
> > 
> >   in case of /dev/random
> 
> Can someone please explain if and why this is actually desirable, and if
> this assessment has been passed to someone who has actual experience
> with cryptography at the professional level?

There are two motivations for that:

- the current /dev/random is compliant to NTG.1 from AIS 20/31 which requires 
(in brief words) that entropy comes from auditible noise sources. Currently in 
my LRNG only RDRAND is a fast noise source which is not auditible (and it is 
designed to cause a VM exit making it even harder to assess it). To make the 
LRNG to comply with NTG.1, RDRAND can provide entropy but must not become the 
sole entropy provider which is the case now with that change.

- the current /dev/random implementation follows the same concept with the 
exception of 3.15 and 3.16 where RDRAND was not rate-limited. In later 
versions, this was changed.

Ciao
Stephan

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v6 0/5] /dev/random - a new approach
  2016-08-16  5:45   ` Stephan Mueller
@ 2016-08-16 22:28     ` H. Peter Anvin
  2016-08-16 22:49       ` H. Peter Anvin
  2016-08-17  5:21       ` Stephan Mueller
  0 siblings, 2 replies; 26+ messages in thread
From: H. Peter Anvin @ 2016-08-16 22:28 UTC (permalink / raw)
  To: Stephan Mueller
  Cc: herbert, Ted Tso, sandyinchina, Jason Cooper, John Denker,
	Joe Perches, Pavel Machek, George Spelvin, linux-crypto,
	linux-kernel

On 08/15/16 22:45, Stephan Mueller wrote:
> Am Montag, 15. August 2016, 13:42:54 CEST schrieb H. Peter Anvin:
> 
> Hi H,
> 
>> On 08/11/16 05:24, Stephan Mueller wrote:
>>> * prevent fast noise sources from dominating slow noise sources
>>>
>>>   in case of /dev/random
>>
>> Can someone please explain if and why this is actually desirable, and if
>> this assessment has been passed to someone who has actual experience
>> with cryptography at the professional level?
> 
> There are two motivations for that:
> 
> - the current /dev/random is compliant to NTG.1 from AIS 20/31 which requires 
> (in brief words) that entropy comes from auditible noise sources. Currently in 
> my LRNG only RDRAND is a fast noise source which is not auditible (and it is 
> designed to cause a VM exit making it even harder to assess it). To make the 
> LRNG to comply with NTG.1, RDRAND can provide entropy but must not become the 
> sole entropy provider which is the case now with that change.
> 
> - the current /dev/random implementation follows the same concept with the 
> exception of 3.15 and 3.16 where RDRAND was not rate-limited. In later 
> versions, this was changed.
> 

I'm not saying it should be *sole*.  I am questioning the value in
limiting it, as it seems to me that it could only ever produce a worse
result.

	-hpa

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v6 0/5] /dev/random - a new approach
  2016-08-16 22:28     ` H. Peter Anvin
@ 2016-08-16 22:49       ` H. Peter Anvin
  2016-08-17  5:21       ` Stephan Mueller
  1 sibling, 0 replies; 26+ messages in thread
From: H. Peter Anvin @ 2016-08-16 22:49 UTC (permalink / raw)
  To: Stephan Mueller
  Cc: herbert, Ted Tso, sandyinchina, Jason Cooper, John Denker,
	Joe Perches, Pavel Machek, George Spelvin, linux-crypto,
	linux-kernel

On 08/16/16 15:28, H. Peter Anvin wrote:
> On 08/15/16 22:45, Stephan Mueller wrote:
>> Am Montag, 15. August 2016, 13:42:54 CEST schrieb H. Peter Anvin:
>>
>> Hi H,
>>
>>> On 08/11/16 05:24, Stephan Mueller wrote:
>>>> * prevent fast noise sources from dominating slow noise sources
>>>>
>>>>   in case of /dev/random
>>>
>>> Can someone please explain if and why this is actually desirable, and if
>>> this assessment has been passed to someone who has actual experience
>>> with cryptography at the professional level?
>>
>> There are two motivations for that:
>>
>> - the current /dev/random is compliant to NTG.1 from AIS 20/31 which requires 
>> (in brief words) that entropy comes from auditible noise sources. Currently in 
>> my LRNG only RDRAND is a fast noise source which is not auditible (and it is 
>> designed to cause a VM exit making it even harder to assess it). To make the 
>> LRNG to comply with NTG.1, RDRAND can provide entropy but must not become the 
>> sole entropy provider which is the case now with that change.
>>
>> - the current /dev/random implementation follows the same concept with the 
>> exception of 3.15 and 3.16 where RDRAND was not rate-limited. In later 
>> versions, this was changed.
>>
> 
> I'm not saying it should be *sole*.  I am questioning the value in
> limiting it, as it seems to me that it could only ever produce a worse
> result.
> 

Also, it would be great to actually get a definition for "auditable".  A
quantum white noise source which exceeds the sampling bandwidth is an
ideal RNG; how do you "audit" that?  If what you are doing is looking
for imperfections, those imperfections can be trivially emulated.  If
what you mean is an audit on the chip or circuit level, that would
require some mechanism to know that all items were built identically
without deviation, which may be possible for intelligence agencies or
the military who have full control of their supply chain, but for anyone
else that is most likely an impossible task.  How many people are going
to crack the case and look at even a discrete transistor circuit, and
how many of *those* are going to be able to discern if that circuit is
subject to RF capture, or its output even used?

I have been trying to figure out how to reasonably solve this problem
for a long time now, and it is not just a problem for RDSEED (RDRAND is
a slightly different beast.)  The only reason RDSEED exposes the problem
particularly harshly is because it is extremely high bandwidth compared
to other noise sources and it is architecturally integrated into the
CPU, but the same would apply to an external noise generator connected
via PCIe, for example.

Incidentally, I am hoping -- and this is a personal statement and
nothing official from Intel -- that at some future date RDRAND (not
RDSEED) will be fast enough that it can completely replace even
prandom_u32(), which I really hope can be non-controversial as
prandom_u32() isn't cryptographically strong in the first place.

	-hpa

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v6 0/5] /dev/random - a new approach
  2016-08-16 22:28     ` H. Peter Anvin
  2016-08-16 22:49       ` H. Peter Anvin
@ 2016-08-17  5:21       ` Stephan Mueller
  1 sibling, 0 replies; 26+ messages in thread
From: Stephan Mueller @ 2016-08-17  5:21 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: herbert, Ted Tso, sandyinchina, Jason Cooper, John Denker,
	Joe Perches, Pavel Machek, George Spelvin, linux-crypto,
	linux-kernel

Am Dienstag, 16. August 2016, 15:28:45 CEST schrieb H. Peter Anvin:

Hi Peter,

> > 
> > There are two motivations for that:
> > 
> > - the current /dev/random is compliant to NTG.1 from AIS 20/31 which
> > requires (in brief words) that entropy comes from auditible noise
> > sources. Currently in my LRNG only RDRAND is a fast noise source which is
> > not auditible (and it is designed to cause a VM exit making it even
> > harder to assess it). To make the LRNG to comply with NTG.1, RDRAND can
> > provide entropy but must not become the sole entropy provider which is
> > the case now with that change.
> > 
> > - the current /dev/random implementation follows the same concept with the
> > exception of 3.15 and 3.16 where RDRAND was not rate-limited. In later
> > versions, this was changed.
> 
> I'm not saying it should be *sole*.  I am questioning the value in
> limiting it, as it seems to me that it could only ever produce a worse
> result.

It is not about the limiting of the data. It is all about the entropy estimate 
for those noise sources and how they affect the entropy estimator behind /dev/
random. If that fast noise source injects large amount of data but does not 
increase the entropy estimator, it is of no concern.

Ciao
Stephan

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v6 0/5] /dev/random - a new approach
  2016-08-11 21:36 ` [PATCH v6 0/5] /dev/random - a new approach Theodore Ts'o
  2016-08-12  9:34   ` Stephan Mueller
@ 2016-08-17 21:42   ` Pavel Machek
  2016-08-18 17:27     ` Theodore Ts'o
  1 sibling, 1 reply; 26+ messages in thread
From: Pavel Machek @ 2016-08-17 21:42 UTC (permalink / raw)
  To: Theodore Ts'o, Stephan Mueller, herbert, sandyinchina,
	Jason Cooper, John Denker, H. Peter Anvin, Joe Perches,
	George Spelvin, linux-crypto, linux-kernel

Hi!

> As far as whether or not you can gather enough entropy at boot time,
> what we're really talking about how how much entropy we want to assume
> can be gathered from interrupt timings, since what you do in your code
> is not all that different from what the current random driver is
> doing.  So it's pretty easy to turn a knob and say, "hey presto, we
> can get all of the entropy we need before userspace starts!"  But
> justifying this is much harder, and using statistical tests isn't
> really sufficient as far as I'm concerned.

Actually.. I'm starting to believe that getting enough entropy before
userspace starts is more important than pretty much anything else.

We only "need" 64-bits of entropy, AFAICT. If it passes statistical
tests, I'd use it... for initial bringup.

We can switch to more conservative estimates when system is fully
running. But IMO it is very important to get _some_ randomness at the
begining...

Best regards,
									Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v6 0/5] /dev/random - a new approach
  2016-08-17 21:42   ` Pavel Machek
@ 2016-08-18 17:27     ` Theodore Ts'o
  2016-08-18 18:39       ` Pavel Machek
  0 siblings, 1 reply; 26+ messages in thread
From: Theodore Ts'o @ 2016-08-18 17:27 UTC (permalink / raw)
  To: Pavel Machek
  Cc: Stephan Mueller, herbert, sandyinchina, Jason Cooper,
	John Denker, H. Peter Anvin, Joe Perches, George Spelvin,
	linux-crypto, linux-kernel

On Wed, Aug 17, 2016 at 11:42:55PM +0200, Pavel Machek wrote:
> 
> Actually.. I'm starting to believe that getting enough entropy before
> userspace starts is more important than pretty much anything else.
> 
> We only "need" 64-bits of entropy, AFAICT. If it passes statistical
> tests, I'd use it... for initial bringup.

Definitely not 64 bits.  Back in *1996* the estimate was that we
needed at least 75-bits in order to be protected against brute force
attacks.  It's been two *deacdes* years later, and granted Moore's law
has ceased to apply in the last couple of years, but I'm sure 64 bits
is not enough.

What is your specific concern vis-a-vis when userspace starts?  We now
print a warning if someone tries to draw from /dev/urandom, and so it
should be easy to see if someone is doing something dangerous.  The
have only been known cases (at last as far asI know where) where some
software was doing something as *insane* as to create keys right out
of the box was.  One was ssh, and at least on a modern Debian system,
that doesn't happen until fairly late in the process:

% systemd-analyze critical-chain ssh.service
The time after the unit is active or started is printed after the "@" character.
The time the unit takes to start is printed after the "+" character.

ssh.service +888ms
└─network.target @31.473s
  └─wpa_supplicant.service @32.958s +770ms
    └─basic.target @19.479s
      └─sockets.target @19.479s
        └─acpid.socket @19.479s
          └─sysinit.target @19.414s
            └─systemd-timesyncd.service @18.079s +1.330s
              └─systemd-tmpfiles-setup.service @17.512s +78ms
                └─local-fs.target @17.501s
                  └─run-user-15806.mount @43.047s
                    └─local-fs-pre.target @16.616s
                      └─systemd-tmpfiles-setup-dev.service @755ms +930ms
                        └─kmod-static-nodes.service @729ms +17ms
                          └─system.slice @653ms
                            └─-.slice @608ms

The other was HP, which was generating an RSA key very shortly after
the first time the printer was powered on.

> We can switch to more conservative estimates when system is fully
> running. But IMO it is very important to get _some_ randomness at the
> begining...

We're doing this already in the latest getrandom(2) implementation.
For the purposes of initializing the crng, we assume that each
interrupt has a single bit of entropy.  So it requires 128 initerrupts
for getrandom(2) to be fully initialized.  I'm actually worried that
this is too high as it is for architectures that don't have a
fine-grained clock.  Given that on many of these embedded platforms
there is a oscillator which drives all of the clocks and subsystems,
it just doesn't make *sense* that than each interrupt could result in
5-6 bits of entropy, no matter what a magical statistical formula
might say.

(Creation of some completely determinsitic sequences that cause the
magical statistcal formulas to claim a vast number of entropy bits is
left as an exercise to the reader.)

Cheers, 

							- Ted

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v6 0/5] /dev/random - a new approach
  2016-08-18 17:27     ` Theodore Ts'o
@ 2016-08-18 18:39       ` Pavel Machek
  2016-08-19  2:49         ` Theodore Ts'o
  0 siblings, 1 reply; 26+ messages in thread
From: Pavel Machek @ 2016-08-18 18:39 UTC (permalink / raw)
  To: Theodore Ts'o, Stephan Mueller, herbert, sandyinchina,
	Jason Cooper, John Denker, H. Peter Anvin, Joe Perches,
	George Spelvin, linux-crypto, linux-kernel

On Thu 2016-08-18 13:27:12, Theodore Ts'o wrote:
> On Wed, Aug 17, 2016 at 11:42:55PM +0200, Pavel Machek wrote:
> > 
> > Actually.. I'm starting to believe that getting enough entropy before
> > userspace starts is more important than pretty much anything else.
> > 
> > We only "need" 64-bits of entropy, AFAICT. If it passes statistical
> > tests, I'd use it... for initial bringup.
> 
> Definitely not 64 bits.  Back in *1996* the estimate was that we
> needed at least 75-bits in order to be protected against brute force
> attacks.  It's been two *deacdes* years later, and granted Moore's law
> has ceased to apply in the last couple of years, but I'm sure 64 bits
> is not enough.
> 
> What is your specific concern vis-a-vis when userspace starts?  We now
> print a warning if someone tries to draw from /dev/urandom, and so it
> should be easy to see if someone is doing something dangerous.  The

Well, warning is nice, but I'm afraid it is not going to stop everyone.

> have only been known cases (at last as far asI know where) where some
> software was doing something as *insane* as to create keys right out
> of the box was.  One was ssh, and at least on a modern Debian system,
> that doesn't happen until fairly late in the process:

It is more widespread than that:

rapsberry pi:
https://www.raspberrypi.org/forums/viewtopic.php?t=126892

But this is the scary part. Not limited to ssh. "We perform the
largest ever network survey of TLS and SSH servers and present
evidence that vulnerable keys are surprisingly widespread. We find
that 0.75% of TLS certificates share keys due to insufficient entropy
during key generation, and we suspect that another 1.70% come from the
same faulty implementations and may be susceptible to compromise.
Even more alarmingly, we are able to obtain RSA private keys for 0.50%
of TLS hosts and 0.03% of SSH hosts, because their public keys shared
nontrivial common factors due to entropy problems, and DSA private
keys for 1.03% of SSH hosts, because of insufficient signature
randomness"

https://factorable.net/weakkeys12.conference.pdf

Responsible devices were Gigaset SX762, ADTran Total Access
businessgrade phone/network routers, IBM RSA II remote administration
cards, BladeCenter devices, Juniper Networks Branch SRX devices,
... "We used the techniques described in Section 3.2 to identify
apparently vulnerable devices from 27 manufacturers.  These include
enterprise-grade routers from Cisco; server management cards from
Dell, Hewlett-Packard, and IBM; virtual-private-network (VPN) devices;
building security systems; network attached storage devices; and
several kinds of consumer routers and VoIP products."

> The other was HP, which was generating an RSA key very shortly after
> the first time the printer was powered on.

Its definitely more than two incidents.

> > We can switch to more conservative estimates when system is fully
> > running. But IMO it is very important to get _some_ randomness at the
> > begining...
> 
> We're doing this already in the latest getrandom(2) implementation.
> For the purposes of initializing the crng, we assume that each
> interrupt has a single bit of entropy.  So it requires 128 initerrupts
> for getrandom(2) to be fully initialized.  I'm actually worried that
> this is too high as it is for architectures that don't have a
> fine-grained clock.  Given that on many of these embedded platforms
> there is a oscillator which drives all of the clocks and subsystems,
> it just doesn't make *sense* that than each interrupt could result in
> 5-6 bits of entropy, no matter what a magical statistical formula
> might say.

>From my point of view, it would make sense to factor time from RTC and
mac addresses into the initial hash. Situation in the paper was so bad
some devices had _completely identical_ keys. We should be able to do
better than that.

BTW... 128 interrupts... that's 1.3 seconds, right? Would it make
sense to wait two seconds if urandom use is attempted before it is
ready?

Best regards,
									Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v6 0/5] /dev/random - a new approach
  2016-08-18 18:39       ` Pavel Machek
@ 2016-08-19  2:49         ` Theodore Ts'o
  2016-08-19  5:56           ` Herbert Xu
  2016-08-19  7:48           ` Pavel Machek
  0 siblings, 2 replies; 26+ messages in thread
From: Theodore Ts'o @ 2016-08-19  2:49 UTC (permalink / raw)
  To: Pavel Machek
  Cc: Stephan Mueller, herbert, sandyinchina, Jason Cooper,
	John Denker, H. Peter Anvin, Joe Perches, George Spelvin,
	linux-crypto, linux-kernel

On Thu, Aug 18, 2016 at 08:39:23PM +0200, Pavel Machek wrote:
> 
> But this is the scary part. Not limited to ssh. "We perform the
> largest ever network survey of TLS and SSH servers and present
> evidence that vulnerable keys are surprisingly widespread. We find
> that 0.75% of TLS certificates share keys due to insufficient entropy
> during key generation, and we suspect that another 1.70% come from the
> same faulty implementations and may be susceptible to compromise.
> Even more alarmingly, we are able to obtain RSA private keys for 0.50%
> of TLS hosts and 0.03% of SSH hosts, because their public keys shared
> nontrivial common factors due to entropy problems, and DSA private
> keys for 1.03% of SSH hosts, because of insufficient signature
> randomness"
> 
> https://factorable.net/weakkeys12.conference.pdf

That's a very old paper, and we've made a lot of changes since then.
Before that we weren't accumulating entropy from the interrupt
handler, but only from spinning disk drives, some network interrupts
(but not from all NIC's; it was quite arbitrary), and keyboard and
mouse interrupts.  So hours and hours could go by and you still
wouldn't have accumulated much entropy.

> From my point of view, it would make sense to factor time from RTC and
> mac addresses into the initial hash. Situation in the paper was so bad
> some devices had _completely identical_ keys. We should be able to do
> better than that.

We fixed that **years** ago.  In fact, the authors shared with me an
early look at that paper and I implemented add_device_entropy() over
the July 4th weekend back in 2012.  So we are indeed mixing in MAC
addresses and the hardware clock (if it is initialized that early).
In fact that was one of the first things that I did.  Note that this
doesn't really add much entropy, but it does prevent the GCD attack
from demonstrating completely identical keys.  Hence, we had
remediations in the mainline kernel before the factorable.net paper
was published (not that really helped with devices with embedded
Linux, especially since device manufactures don't see anything wrong
with shipping machines with kernels that are years and years out of
date --- OTOH, these systems were probably also shipping with dozens
of known exploitable holes in userspace, if that's any comfort.
Probably not much if you were planning on deploying lots of IOT
devices in your home network.  :-)

> BTW... 128 interrupts... that's 1.3 seconds, right? Would it make
> sense to wait two seconds if urandom use is attempted before it is
> ready?

That really depends on the system.  We can't assume that people are
using systems with a 100Hz clock interrupt.  More often than not
people are using tickless kernels these days.  That's actually the
problem with changing /dev/urandom to block until things are
initialized.

If you do that, then on some system Python will use /dev/urandom to
initialize a salt used by the Python dictionaries, to protect against
DOS attacks when Python is used to run web scripts.  This is a
completely irrelevant reason when Python is being used for systemd
generator scripts in early boot, and if /dev/urandom were to block,
then the system ends up doing nothing, and on a tickless kernels hours
and hours can go by on a VM and Python would still be blocked on
/dev/urandom.  And since none of the system scripts are running, there
are no interrupts, and so Python ends up blocking on /dev/urandom for
a very long time.  (Eventually someone will start trying to brute
force passwords on the VM's ssh port, assuming that the VM's firewall
rules allow this, and that will cause interrupts that will eventually
initialize /dev/urandom.  But that could take hours.)

And this, boys and girls, is why we can't make /dev/urandom block
until its pool is initialized.  There's too great of a chance that we
will break userspace, and then Linus will yell at us and revert the
commit.

						- Ted

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v6 0/5] /dev/random - a new approach
  2016-08-19  2:49         ` Theodore Ts'o
@ 2016-08-19  5:56           ` Herbert Xu
  2016-08-19 17:20             ` H. Peter Anvin
  2016-08-19  7:48           ` Pavel Machek
  1 sibling, 1 reply; 26+ messages in thread
From: Herbert Xu @ 2016-08-19  5:56 UTC (permalink / raw)
  To: Theodore Ts'o, Pavel Machek, Stephan Mueller, sandyinchina,
	Jason Cooper, John Denker, H. Peter Anvin, Joe Perches,
	George Spelvin, linux-crypto, linux-kernel

On Thu, Aug 18, 2016 at 10:49:47PM -0400, Theodore Ts'o wrote:
>
> That really depends on the system.  We can't assume that people are
> using systems with a 100Hz clock interrupt.  More often than not
> people are using tickless kernels these days.  That's actually the
> problem with changing /dev/urandom to block until things are
> initialized.

Couldn't we disable tickless until urandom has been seeded? In fact
perhaps we should accelerate the timer interrupt rate until it has
been seeded?

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v6 0/5] /dev/random - a new approach
  2016-08-19  2:49         ` Theodore Ts'o
  2016-08-19  5:56           ` Herbert Xu
@ 2016-08-19  7:48           ` Pavel Machek
  1 sibling, 0 replies; 26+ messages in thread
From: Pavel Machek @ 2016-08-19  7:48 UTC (permalink / raw)
  To: Theodore Ts'o, Stephan Mueller, herbert, sandyinchina,
	Jason Cooper, John Denker, H. Peter Anvin, Joe Perches,
	George Spelvin, linux-crypto, linux-kernel

Hi!

> > From my point of view, it would make sense to factor time from RTC and
> > mac addresses into the initial hash. Situation in the paper was so bad
> > some devices had _completely identical_ keys. We should be able to do
> > better than that.
> 
> We fixed that **years** ago.  In fact, the authors shared with me an
> early look at that paper and I implemented add_device_entropy() over
> the July 4th weekend back in 2012.  So we are indeed mixing in MAC
> addresses and the hardware clock (if it is initialized that early).
> In fact that was one of the first things that I did.  Note that this

Ok, thanks.

> > BTW... 128 interrupts... that's 1.3 seconds, right? Would it make
> > sense to wait two seconds if urandom use is attempted before it is
> > ready?
> 
> That really depends on the system.  We can't assume that people are
> using systems with a 100Hz clock interrupt.  More often than not
> people are using tickless kernels these days.  That's actually the
> problem with changing /dev/urandom to block until things are
> initialized.

Ok, let me check:

config HZ_PERIODIC
config NO_HZ_IDLE
config NO_HZ_FULL

in HZ_PERIODIC, there should be no problem.

NO_HZ_IDLE... should not be a problem either. We can easily make sure
that cpu's are not idle, something like 

     while (not_enough_entropy())
     	   schedule()

NO_HZ_FULL.... first, help text seems to imply that timer ticks still
happen when cpu is in kernel, and second, there is always one CPU that
handles timer ticks. So we are still ok.

So I believe we should add the wait to urandom. One second delay in
rare cases sounds better than alternatives.

Best regards,
									Pavel
PS: Are there systems where the timer interrupt is the only source of time?
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v6 0/5] /dev/random - a new approach
  2016-08-19  5:56           ` Herbert Xu
@ 2016-08-19 17:20             ` H. Peter Anvin
  2016-08-21  3:14               ` Herbert Xu
  0 siblings, 1 reply; 26+ messages in thread
From: H. Peter Anvin @ 2016-08-19 17:20 UTC (permalink / raw)
  To: Herbert Xu, Theodore Ts'o, Pavel Machek, Stephan Mueller,
	sandyinchina, Jason Cooper, John Denker, Joe Perches,
	George Spelvin, linux-crypto, linux-kernel

On 08/18/16 22:56, Herbert Xu wrote:
> On Thu, Aug 18, 2016 at 10:49:47PM -0400, Theodore Ts'o wrote:
>>
>> That really depends on the system.  We can't assume that people are
>> using systems with a 100Hz clock interrupt.  More often than not
>> people are using tickless kernels these days.  That's actually the
>> problem with changing /dev/urandom to block until things are
>> initialized.
> 
> Couldn't we disable tickless until urandom has been seeded? In fact
> perhaps we should accelerate the timer interrupt rate until it has
> been seeded?
> 

The biggest problem there is that the timer interrupt adds *no* entropy
unless there is a source of asynchronicity in the system.  On PCs,
traditionally the timer has been run from a completely different crystal
(14.31818 MHz) than the CPU, which is the ideal situation, but if they
are run off the same crystal and run in lockstep, there is very little
if anything there.  On some systems, the timer may even *be* the only
source of time, and the entropy truly is zero.

	-hpa

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v6 0/5] /dev/random - a new approach
  2016-08-19 17:20             ` H. Peter Anvin
@ 2016-08-21  3:14               ` Herbert Xu
  0 siblings, 0 replies; 26+ messages in thread
From: Herbert Xu @ 2016-08-21  3:14 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Theodore Ts'o, Pavel Machek, Stephan Mueller, sandyinchina,
	Jason Cooper, John Denker, Joe Perches, George Spelvin,
	linux-crypto, linux-kernel

On Fri, Aug 19, 2016 at 10:20:18AM -0700, H. Peter Anvin wrote:
> On 08/18/16 22:56, Herbert Xu wrote:
> > On Thu, Aug 18, 2016 at 10:49:47PM -0400, Theodore Ts'o wrote:
> >>
> >> That really depends on the system.  We can't assume that people are
> >> using systems with a 100Hz clock interrupt.  More often than not
> >> people are using tickless kernels these days.  That's actually the
> >> problem with changing /dev/urandom to block until things are
> >> initialized.
> > 
> > Couldn't we disable tickless until urandom has been seeded? In fact
> > perhaps we should accelerate the timer interrupt rate until it has
> > been seeded?
> > 
> 
> The biggest problem there is that the timer interrupt adds *no* entropy
> unless there is a source of asynchronicity in the system.  On PCs,
> traditionally the timer has been run from a completely different crystal
> (14.31818 MHz) than the CPU, which is the ideal situation, but if they
> are run off the same crystal and run in lockstep, there is very little
> if anything there.  On some systems, the timer may even *be* the only
> source of time, and the entropy truly is zero.

Sure, but that's orthorgonal to what Ted was talking about above.

Thanks,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 26+ messages in thread

end of thread, other threads:[~2016-08-21  3:15 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-08-11 12:24 [PATCH v6 0/5] /dev/random - a new approach Stephan Mueller
2016-08-11 12:24 ` [PATCH v6 1/5] crypto: DRBG - externalize DRBG functions for LRNG Stephan Mueller
2016-08-11 12:25 ` [PATCH v6 2/5] random: conditionally compile code depending on LRNG Stephan Mueller
2016-08-11 12:25 ` [PATCH v6 3/5] crypto: Linux Random Number Generator Stephan Mueller
2016-08-11 12:26 ` [PATCH v6 4/5] crypto: LRNG - enable compile Stephan Mueller
2016-08-11 13:50   ` kbuild test robot
2016-08-11 14:03     ` Stephan Mueller
2016-08-11 12:26 ` [PATCH v6 5/5] crypto: LRNG - add ChaCha20 support Stephan Mueller
2016-08-11 21:36 ` [PATCH v6 0/5] /dev/random - a new approach Theodore Ts'o
2016-08-12  9:34   ` Stephan Mueller
2016-08-12 19:22     ` Theodore Ts'o
2016-08-15  6:13       ` Stephan Mueller
2016-08-15 15:00         ` Theodore Ts'o
2016-08-17 21:42   ` Pavel Machek
2016-08-18 17:27     ` Theodore Ts'o
2016-08-18 18:39       ` Pavel Machek
2016-08-19  2:49         ` Theodore Ts'o
2016-08-19  5:56           ` Herbert Xu
2016-08-19 17:20             ` H. Peter Anvin
2016-08-21  3:14               ` Herbert Xu
2016-08-19  7:48           ` Pavel Machek
2016-08-15 20:42 ` H. Peter Anvin
2016-08-16  5:45   ` Stephan Mueller
2016-08-16 22:28     ` H. Peter Anvin
2016-08-16 22:49       ` H. Peter Anvin
2016-08-17  5:21       ` Stephan Mueller

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).