linux-kernel.vger.kernel.org archive mirror
* [PATCH 1/2] random: use chacha20 for get_random_int/long
@ 2017-01-06 18:32 Jason A. Donenfeld
  2017-01-06 18:32 ` [PATCH 2/2] random: convert get_random_int/long into get_random_u32/u64 Jason A. Donenfeld
  2017-01-18  3:49 ` [PATCH 1/2] random: use chacha20 for get_random_int/long Theodore Ts'o
  0 siblings, 2 replies; 23+ messages in thread
From: Jason A. Donenfeld @ 2017-01-06 18:32 UTC (permalink / raw)
  To: tytso, linux-kernel
  Cc: Jason A. Donenfeld, Hannes Frederic Sowa, Andy Lutomirski

Now that our crng uses chacha20, we can rely on its speedy
characteristics for replacing MD5, while simultaneously achieving a
higher security guarantee. Before, the idea was to use these functions if
you wanted random integers that aren't stupidly insecure but aren't
necessarily secure either: a vague gray zone that hopefully was "good
enough" for its users. With chacha20, we can strengthen this claim,
since either we're using an rdrand-like instruction, or we're using the
same crng as /dev/urandom. And it's faster than what was there before.

We could have chosen to replace this with a SipHash-derived function,
which might be slightly faster, but at the cost of having yet another
RNG construction in the kernel. By moving to chacha20, we have a single
RNG to analyze and verify, and we also already get good performance
improvements on all platforms.

Implementation-wise, rather than use a generic buffer for both
get_random_int/long and memcpy based on the size needs, we use a
specific buffer for 32-bit reads and for 64-bit reads. This way, we're
guaranteed to always have aligned accesses on all platforms. While
slightly more verbose in C, the assembly this generates is a lot
simpler than otherwise.

Finally, on 32-bit platforms where longs and ints are the same size,
we simply alias get_random_int to get_random_long.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Suggested-by: Theodore Ts'o <tytso@mit.edu>
Cc: Theodore Ts'o <tytso@mit.edu>
Cc: Hannes Frederic Sowa <hannes@stressinduktion.org>
Cc: Andy Lutomirski <luto@amacapital.net>
---
 drivers/char/random.c  | 84 ++++++++++++++++++++++++++------------------------
 include/linux/random.h |  1 -
 init/main.c            |  1 -
 3 files changed, 43 insertions(+), 43 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 1ef26403bcc8..433facfd6cb8 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -2042,63 +2042,65 @@ struct ctl_table random_table[] = {
 };
 #endif 	/* CONFIG_SYSCTL */
 
-static u32 random_int_secret[MD5_MESSAGE_BYTES / 4] ____cacheline_aligned;
-
-int random_int_secret_init(void)
-{
-	get_random_bytes(random_int_secret, sizeof(random_int_secret));
-	return 0;
-}
-
-static DEFINE_PER_CPU(__u32 [MD5_DIGEST_WORDS], get_random_int_hash)
-		__aligned(sizeof(unsigned long));
+struct batched_entropy {
+	union {
+		unsigned long entropy_long[CHACHA20_BLOCK_SIZE / sizeof(unsigned long)];
+		unsigned int entropy_int[CHACHA20_BLOCK_SIZE / sizeof(unsigned int)];
+	};
+	unsigned int position;
+};
 
 /*
- * Get a random word for internal kernel use only. Similar to urandom but
- * with the goal of minimal entropy pool depletion. As a result, the random
- * value is not cryptographically secure but for several uses the cost of
- * depleting entropy is too high
+ * Get a random word for internal kernel use only. The quality of the random
+ * number is either as good as RDRAND or as good as /dev/urandom, with the
+ * goal of being quite fast and not depleting entropy.
  */
-unsigned int get_random_int(void)
+static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_long);
+unsigned long get_random_long(void)
 {
-	__u32 *hash;
-	unsigned int ret;
+	unsigned long ret;
+	struct batched_entropy *batch;
 
-	if (arch_get_random_int(&ret))
+	if (arch_get_random_long(&ret))
 		return ret;
 
-	hash = get_cpu_var(get_random_int_hash);
-
-	hash[0] += current->pid + jiffies + random_get_entropy();
-	md5_transform(hash, random_int_secret);
-	ret = hash[0];
-	put_cpu_var(get_random_int_hash);
-
+	batch = &get_cpu_var(batched_entropy_long);
+	if (batch->position % ARRAY_SIZE(batch->entropy_long) == 0) {
+		extract_crng((u8 *)batch->entropy_long);
+		batch->position = 0;
+	}
+	ret = batch->entropy_long[batch->position++];
+	put_cpu_var(batched_entropy_long);
 	return ret;
 }
-EXPORT_SYMBOL(get_random_int);
+EXPORT_SYMBOL(get_random_long);
 
-/*
- * Same as get_random_int(), but returns unsigned long.
- */
-unsigned long get_random_long(void)
+#if BITS_PER_LONG == 32
+unsigned int get_random_int(void)
 {
-	__u32 *hash;
-	unsigned long ret;
+	return get_random_long();
+}
+#else
+static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_int);
+unsigned int get_random_int(void)
+{
+	unsigned int ret;
+	struct batched_entropy *batch;
 
-	if (arch_get_random_long(&ret))
+	if (arch_get_random_int(&ret))
 		return ret;
 
-	hash = get_cpu_var(get_random_int_hash);
-
-	hash[0] += current->pid + jiffies + random_get_entropy();
-	md5_transform(hash, random_int_secret);
-	ret = *(unsigned long *)hash;
-	put_cpu_var(get_random_int_hash);
-
+	batch = &get_cpu_var(batched_entropy_int);
+	if (batch->position % ARRAY_SIZE(batch->entropy_int) == 0) {
+		extract_crng((u8 *)batch->entropy_int);
+		batch->position = 0;
+	}
+	ret = batch->entropy_int[batch->position++];
+	put_cpu_var(batched_entropy_int);
 	return ret;
 }
-EXPORT_SYMBOL(get_random_long);
+#endif
+EXPORT_SYMBOL(get_random_int);
 
 /**
  * randomize_page - Generate a random, page aligned address
diff --git a/include/linux/random.h b/include/linux/random.h
index 7bd2403e4fef..16ab429735a7 100644
--- a/include/linux/random.h
+++ b/include/linux/random.h
@@ -37,7 +37,6 @@ extern void get_random_bytes(void *buf, int nbytes);
 extern int add_random_ready_callback(struct random_ready_callback *rdy);
 extern void del_random_ready_callback(struct random_ready_callback *rdy);
 extern void get_random_bytes_arch(void *buf, int nbytes);
-extern int random_int_secret_init(void);
 
 #ifndef MODULE
 extern const struct file_operations random_fops, urandom_fops;
diff --git a/init/main.c b/init/main.c
index b0c9d6facef9..09beb7fc6e8c 100644
--- a/init/main.c
+++ b/init/main.c
@@ -879,7 +879,6 @@ static void __init do_basic_setup(void)
 	do_ctors();
 	usermodehelper_enable();
 	do_initcalls();
-	random_int_secret_init();
 }
 
 static void __init do_pre_smp_initcalls(void)
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 23+ messages in thread
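
To make the batching scheme in the patch above easier to follow in isolation,
here is a minimal standalone sketch of the same idea in plain C. The names are
illustrative rather than taken from the patch, fill_random_block() stands in
for extract_crng(), the per-CPU handling and the 32-bit aliasing of
get_random_int are omitted, and fixed-width types are used for clarity (the
patch itself uses unsigned long/int, which patch 2/2 then converts to u64/u32):

    #include <stdint.h>
    #include <stdlib.h>

    #define BLOCK_SIZE 64	/* CHACHA20_BLOCK_SIZE: one chacha20 output block */

    /* Stand-in for extract_crng(); the kernel fills this from chacha20. */
    static void fill_random_block(uint8_t *buf)
    {
        for (size_t i = 0; i < BLOCK_SIZE; i++)
            buf[i] = (uint8_t)rand();
    }

    struct batched_entropy {
        union {
            uint64_t entropy_u64[BLOCK_SIZE / sizeof(uint64_t)];
            uint32_t entropy_u32[BLOCK_SIZE / sizeof(uint32_t)];
        };
        unsigned int position;
    };

    /* One instance per word size; the patch makes these per-CPU variables. */
    static struct batched_entropy batch64;

    uint64_t get_random_u64_sketch(void)
    {
        /* Refill the whole block once every 8 calls (64 bytes / 8 bytes). */
        if (batch64.position % (BLOCK_SIZE / sizeof(uint64_t)) == 0) {
            fill_random_block((uint8_t *)batch64.entropy_u64);
            batch64.position = 0;
        }
        return batch64.entropy_u64[batch64.position++];
    }

The point of keeping separate 32-bit and 64-bit buffers is that each array is
indexed in its own element size, so every read stays naturally aligned.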

* [PATCH 2/2] random: convert get_random_int/long into get_random_u32/u64
  2017-01-06 18:32 [PATCH 1/2] random: use chacha20 for get_random_int/long Jason A. Donenfeld
@ 2017-01-06 18:32 ` Jason A. Donenfeld
  2017-01-18  3:49 ` [PATCH 1/2] random: use chacha20 for get_random_int/long Theodore Ts'o
  1 sibling, 0 replies; 23+ messages in thread
From: Jason A. Donenfeld @ 2017-01-06 18:32 UTC (permalink / raw)
  To: tytso, linux-kernel; +Cc: Jason A. Donenfeld

Many times, when a user wants a random number, he wants a random number
of a guaranteed size. So, thinking of get_random_int and get_random_long
in terms of get_random_u32 and get_random_u64 makes it much easier to
achieve this. It also makes the code simpler.

On 32-bit platforms, get_random_int and get_random_long are both aliased
to get_random_u32. On 64-bit platforms, int->u32 and long->u64.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Cc: Theodore Ts'o <tytso@mit.edu>
---
 drivers/char/random.c  | 55 +++++++++++++++++++++++++-------------------------
 include/linux/random.h | 17 ++++++++++++++--
 2 files changed, 42 insertions(+), 30 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 433facfd6cb8..ee68d5bffdc0 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -2044,8 +2044,8 @@ struct ctl_table random_table[] = {
 
 struct batched_entropy {
 	union {
-		unsigned long entropy_long[CHACHA20_BLOCK_SIZE / sizeof(unsigned long)];
-		unsigned int entropy_int[CHACHA20_BLOCK_SIZE / sizeof(unsigned int)];
+		u64 entropy_u64[CHACHA20_BLOCK_SIZE / sizeof(u64)];
+		u32 entropy_u32[CHACHA20_BLOCK_SIZE / sizeof(u32)];
 	};
 	unsigned int position;
 };
@@ -2055,52 +2055,51 @@ struct batched_entropy {
  * number is either as good as RDRAND or as good as /dev/urandom, with the
  * goal of being quite fast and not depleting entropy.
  */
-static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_long);
-unsigned long get_random_long(void)
+static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u64);
+u64 get_random_u64(void)
 {
-	unsigned long ret;
+	u64 ret;
 	struct batched_entropy *batch;
 
-	if (arch_get_random_long(&ret))
+#if BITS_PER_LONG == 64
+	if (arch_get_random_long((unsigned long *)&ret))
 		return ret;
+#else
+	if (arch_get_random_long((unsigned long *)&ret) &&
+	    arch_get_random_long((unsigned long *)&ret + 1))
+	    return ret;
+#endif
 
-	batch = &get_cpu_var(batched_entropy_long);
-	if (batch->position % ARRAY_SIZE(batch->entropy_long) == 0) {
-		extract_crng((u8 *)batch->entropy_long);
+	batch = &get_cpu_var(batched_entropy_u64);
+	if (batch->position % ARRAY_SIZE(batch->entropy_u64) == 0) {
+		extract_crng((u8 *)batch->entropy_u64);
 		batch->position = 0;
 	}
-	ret = batch->entropy_long[batch->position++];
-	put_cpu_var(batched_entropy_long);
+	ret = batch->entropy_u64[batch->position++];
+	put_cpu_var(batched_entropy_u64);
 	return ret;
 }
-EXPORT_SYMBOL(get_random_long);
+EXPORT_SYMBOL(get_random_u64);
 
-#if BITS_PER_LONG == 32
-unsigned int get_random_int(void)
-{
-	return get_random_long();
-}
-#else
-static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_int);
-unsigned int get_random_int(void)
+static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u32);
+u32 get_random_u32(void)
 {
-	unsigned int ret;
+	u32 ret;
 	struct batched_entropy *batch;
 
 	if (arch_get_random_int(&ret))
 		return ret;
 
-	batch = &get_cpu_var(batched_entropy_int);
-	if (batch->position % ARRAY_SIZE(batch->entropy_int) == 0) {
-		extract_crng((u8 *)batch->entropy_int);
+	batch = &get_cpu_var(batched_entropy_u32);
+	if (batch->position % ARRAY_SIZE(batch->entropy_u32) == 0) {
+		extract_crng((u8 *)batch->entropy_u32);
 		batch->position = 0;
 	}
-	ret = batch->entropy_int[batch->position++];
-	put_cpu_var(batched_entropy_int);
+	ret = batch->entropy_u32[batch->position++];
+	put_cpu_var(batched_entropy_u32);
 	return ret;
 }
-#endif
-EXPORT_SYMBOL(get_random_int);
+EXPORT_SYMBOL(get_random_u32);
 
 /**
  * randomize_page - Generate a random, page aligned address
diff --git a/include/linux/random.h b/include/linux/random.h
index 16ab429735a7..ed5c3838780d 100644
--- a/include/linux/random.h
+++ b/include/linux/random.h
@@ -42,8 +42,21 @@ extern void get_random_bytes_arch(void *buf, int nbytes);
 extern const struct file_operations random_fops, urandom_fops;
 #endif
 
-unsigned int get_random_int(void);
-unsigned long get_random_long(void);
+u32 get_random_u32(void);
+u64 get_random_u64(void);
+static inline unsigned int get_random_int(void)
+{
+	return get_random_u32();
+}
+static inline unsigned long get_random_long(void)
+{
+#if BITS_PER_LONG == 64
+	return get_random_u64();
+#else
+	return get_random_u32();
+#endif
+}
+
 unsigned long randomize_page(unsigned long start, unsigned long range);
 
 u32 prandom_u32(void);
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 23+ messages in thread
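
As a hypothetical illustration of the kind of caller this conversion is aimed
at (the function and its purpose below are invented for the example; only
get_random_u64() comes from the patch):

    #include <linux/random.h>

    /* Hypothetical caller that needs a fixed-width 64-bit identifier. */
    static u64 new_session_cookie(void)
    {
        /*
         * Without get_random_u64(), code wanting exactly 64 bits on a
         * 32-bit kernel has to compose two calls by hand, e.g.
         * ((u64)get_random_int() << 32) | get_random_int().
         */
        return get_random_u64();
    }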

* Re: [PATCH 1/2] random: use chacha20 for get_random_int/long
  2017-01-06 18:32 [PATCH 1/2] random: use chacha20 for get_random_int/long Jason A. Donenfeld
  2017-01-06 18:32 ` [PATCH 2/2] random: convert get_random_int/long into get_random_u32/u64 Jason A. Donenfeld
@ 2017-01-18  3:49 ` Theodore Ts'o
  2017-01-20  5:27   ` Jason A. Donenfeld
  1 sibling, 1 reply; 23+ messages in thread
From: Theodore Ts'o @ 2017-01-18  3:49 UTC (permalink / raw)
  To: Jason A. Donenfeld; +Cc: linux-kernel, Hannes Frederic Sowa, Andy Lutomirski

Hi Jason,

I've been taking a look at your patch, and I think it's... problematic.

You're using a union for the entropy_u64 and entropy_u32 arrays, and
the position field is used to index into those two arrays.

So if the first caller calls get_random_u64 with position=0, it will
get the first 64 bits out of the batched entropy, and increment the
position to 1.  If the second caller calls get_random_u32 with
position=1... it will get the second of the 32-bit chunks in the
batched entropy array --- which will include the 64 bits returned to
the first caller.

That's not very random.   :-(

						- Ted

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 1/2] random: use chacha20 for get_random_int/long
  2017-01-18  3:49 ` [PATCH 1/2] random: use chacha20 for get_random_int/long Theodore Ts'o
@ 2017-01-20  5:27   ` Jason A. Donenfeld
  2017-01-20 14:28     ` Theodore Ts'o
  0 siblings, 1 reply; 23+ messages in thread
From: Jason A. Donenfeld @ 2017-01-20  5:27 UTC (permalink / raw)
  To: Theodore Ts'o, Jason A. Donenfeld, LKML,
	Hannes Frederic Sowa, Andy Lutomirski

Hi Ted,

On Wed, Jan 18, 2017 at 4:49 AM, Theodore Ts'o <tytso@mit.edu> wrote:
> You're using a union for the entropy_u64 and entropy_u32 arrays, and
> the position field is used to index into those two arrays.
>
> So if the first caller calls get_random_u64 with position=0, it will
> get the first 64 bits out of the batched entropy, and increment the
> position to 1.  If the second caller calls get_random_u32 with
> position=1... it will get the second of the 32-bit chunks in the
> batched entropy array --- which will include the 64 bits returned to
> the first caller.

This is most certainly not the case. Read closely.

I use a different variable for each function. get_random_u32 only
touches batched_entropy_u32, and get_random_u64 only touches
bached_entropy_u64. Under no circumstances will one function use the
other's batched entropy.

I use a union so that only one struct must be declared. If you think
that's unclear, I can declare two separate structs instead. Let me
know which strategy you prefer -- what I have now, or splitting it
into two structs.

Jason

^ permalink raw reply	[flat|nested] 23+ messages in thread
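
For reference, the two-separate-structs alternative offered above would look
roughly like the following; this is only a sketch of the option under
discussion, not code from any posted patch:

    struct batched_entropy_u64 {
        u64 entropy[CHACHA20_BLOCK_SIZE / sizeof(u64)];
        unsigned int position;
    };

    struct batched_entropy_u32 {
        u32 entropy[CHACHA20_BLOCK_SIZE / sizeof(u32)];
        unsigned int position;
    };

Either way each generator keeps its own per-CPU instance; the union only
saves declaring the type twice.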

* Re: [PATCH 1/2] random: use chacha20 for get_random_int/long
  2017-01-20  5:27   ` Jason A. Donenfeld
@ 2017-01-20 14:28     ` Theodore Ts'o
  2017-01-20 15:38       ` Jason A. Donenfeld
  2017-01-20 15:47       ` Jason A. Donenfeld
  0 siblings, 2 replies; 23+ messages in thread
From: Theodore Ts'o @ 2017-01-20 14:28 UTC (permalink / raw)
  To: Jason A. Donenfeld; +Cc: LKML, Hannes Frederic Sowa, Andy Lutomirski

On Fri, Jan 20, 2017 at 06:27:19AM +0100, Jason A. Donenfeld wrote:
> This is most certainly not the case. Read closely.
> 
> I use a different variable for each function. get_random_u32 only
> touches batched_entropy_u32, and get_random_u64 only touches
> batched_entropy_u64. Under no circumstances will one function use the
> other's batched entropy.
> 
> I use a union so that only one struct must be declared. If you think
> that's unclear, I can declare two separate structs instead. Let me
> know which strategy you prefer -- what I have now, or splitting it
> into two structs.

What I would probably do is just use one array and one array index,
denominated in 32-bit words, and just grab two 32-bit words for the
64-bit case.  Optimizing away the overhead of assembling two 32-bit
words for a 64-bit word, and the possibility that we might have to
touch two cache lines instead of one --- is it really worth it?

Compare that to the fact that you're wasting up to 66% of the
generated words in the batched entropy array, and certainly on average
you're wasting CPU cycles, even if you are reducing the cost of
calling get_random_u{32,64} by a handful of cycles....

      	  	      	      	     - Ted
				     

^ permalink raw reply	[flat|nested] 23+ messages in thread
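
A rough sketch of the single-array scheme described above, assuming the same
extract_crng() refill as in the patch and leaving out the per-CPU handling
(function names are illustrative):

    static u32 batch[CHACHA20_BLOCK_SIZE / sizeof(u32)];
    static unsigned int position;

    static u32 next_batched_u32(void)
    {
        if (position % ARRAY_SIZE(batch) == 0) {
            extract_crng((u8 *)batch);
            position = 0;
        }
        return batch[position++];
    }

    u64 get_random_u64_shared(void)
    {
        /* Assemble 64 bits from two 32-bit reads of the shared buffer. */
        u64 hi = next_batched_u32();

        return (hi << 32) | next_batched_u32();
    }

This is the trade-off being weighed: a single buffer and index, at the cost
of a shift and OR (and possibly a second cache line) on the 64-bit path.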

* Re: [PATCH 1/2] random: use chacha20 for get_random_int/long
  2017-01-20 14:28     ` Theodore Ts'o
@ 2017-01-20 15:38       ` Jason A. Donenfeld
  2017-01-21  0:10         ` Theodore Ts'o
  2017-01-20 15:47       ` Jason A. Donenfeld
  1 sibling, 1 reply; 23+ messages in thread
From: Jason A. Donenfeld @ 2017-01-20 15:38 UTC (permalink / raw)
  To: Theodore Ts'o, Jason A. Donenfeld, LKML,
	Hannes Frederic Sowa, Andy Lutomirski

Hi Ted,

On Fri, Jan 20, 2017 at 3:28 PM, Theodore Ts'o <tytso@mit.edu> wrote:
> What I would probably do is just use one array and one array index,
> denominated in 32-bit words, and just grab two 32-bit words for the
> 64-bit case.  Optimizing away the overhead of assembling two 32-bit
> words for a 64-bit word, and the possibility that we might have to
> touch two cache lines instead of one --- is it really worth it?

I was thinking that the issue isn't merely cache lines and a slowdown,
but that on some platforms, this could be an _illegal unaligned
access_. That means we'd need to rewrite the code to use the unaligned
access helpers or memcpy, and then it's really suboptimal, not to
mention ugly, since just indexing into an array like we do now is so
clean.

Jason

^ permalink raw reply	[flat|nested] 23+ messages in thread
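
For context on the unaligned-access worry: if 64-bit values were pulled out
of a u32-indexed buffer at arbitrary offsets, a portable implementation would
have to avoid casting the pointer, roughly as below (a minimal illustration,
not from any posted patch; in the kernel the get_unaligned() helpers serve
the same purpose as the memcpy):

    #include <stdint.h>
    #include <string.h>

    /*
     * Read 64 bits starting at an arbitrary 32-bit word offset without
     * dereferencing a (uint64_t *) cast, which could fault on platforms
     * that disallow unaligned 64-bit loads.
     */
    static uint64_t read_u64_at(const uint32_t *buf, unsigned int pos)
    {
        uint64_t val;

        memcpy(&val, &buf[pos], sizeof(val));	/* caller keeps pos + 1 in bounds */
        return val;
    }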

* Re: [PATCH 1/2] random: use chacha20 for get_random_int/long
  2017-01-20 14:28     ` Theodore Ts'o
  2017-01-20 15:38       ` Jason A. Donenfeld
@ 2017-01-20 15:47       ` Jason A. Donenfeld
  2017-01-21  0:15         ` Theodore Ts'o
  1 sibling, 1 reply; 23+ messages in thread
From: Jason A. Donenfeld @ 2017-01-20 15:47 UTC (permalink / raw)
  To: Theodore Ts'o, Jason A. Donenfeld, LKML,
	Hannes Frederic Sowa, Andy Lutomirski

On Fri, Jan 20, 2017 at 3:28 PM, Theodore Ts'o <tytso@mit.edu> wrote:
> Compare that to the fact that you're wasting up to 66% of the
> generated words in the batched entropy array, and certainly on average
> you're wasting CPU cycles, even if you are reducing the cost of
> calling get_random_u{32,64} by a handful of cycles....

What do you mean? Nothing is wasted right now. The u64 function only
gets u64s from a dedicated u64 array. The u32 function only gets u32s
from a dedicated u32 array. There are separate batched entropy arrays
for each function.

Perfect respect of alignment rules. Perfect cache line
characteristics. Most optimal instructions used. Zero bytes of entropy
wasted. Clean C code.

Unless I'm misunderstanding your emails, can you please give the
original patch another read?

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 1/2] random: use chacha20 for get_random_int/long
  2017-01-20 15:38       ` Jason A. Donenfeld
@ 2017-01-21  0:10         ` Theodore Ts'o
  2017-01-21  0:13           ` Jason A. Donenfeld
  0 siblings, 1 reply; 23+ messages in thread
From: Theodore Ts'o @ 2017-01-21  0:10 UTC (permalink / raw)
  To: Jason A. Donenfeld; +Cc: LKML, Hannes Frederic Sowa, Andy Lutomirski

On Fri, Jan 20, 2017 at 04:38:59PM +0100, Jason A. Donenfeld wrote:
> I was thinking that the issue isn't merely cache lines and a slowdown,
> but that on some platforms, this could be an _illegal unaligned
> access_. That means we'd need to rewrite the code to use the unaligned
> access helpers or memcpy, and then it's really suboptimal, not to
> mention ugly, since just indexing into an array like we do now is so
> clean.

Why would there be an unaligned access?  What I was suggesting was an
array of u32, and we just do two separate u32 accesses with a shift in
the case of get_random_u64.  There's nothing illegal about that.

    u64 retval;

    retval = ((u64) array[pointer] << 32) + array[pointer+1];
    pointer += 2;

This is not terribly suboptimal nor terribly ugly.

						- Ted

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 1/2] random: use chacha20 for get_random_int/long
  2017-01-21  0:10         ` Theodore Ts'o
@ 2017-01-21  0:13           ` Jason A. Donenfeld
  0 siblings, 0 replies; 23+ messages in thread
From: Jason A. Donenfeld @ 2017-01-21  0:13 UTC (permalink / raw)
  To: Theodore Ts'o, Jason A. Donenfeld, LKML,
	Hannes Frederic Sowa, Andy Lutomirski

Hi Ted,

On Sat, Jan 21, 2017 at 1:10 AM, Theodore Ts'o <tytso@mit.edu> wrote:
> Why would there be an unaligned access?  What I was suggesting was an
> array of u32, and we just do two separate u32 accesses with a shift in
> the case of get_random_u64.  There's nothing illegal about that.
>
>     u64 retval;
>
>     retval = ((u64) array[pointer] << 32) + array[pointer+1];
>     pointer += 2;

Oh, I see now. I thought you were suggesting casting u32* to u64*.

Yea, this approach works. It's slower than my code, but it gets the job done.

Unless I hear back from you saying "keep with your approach instead of
this one", I'll submit a v2 to you containing this approach instead.

Jason

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 1/2] random: use chacha20 for get_random_int/long
  2017-01-20 15:47       ` Jason A. Donenfeld
@ 2017-01-21  0:15         ` Theodore Ts'o
  2017-01-21  0:16           ` Jason A. Donenfeld
  0 siblings, 1 reply; 23+ messages in thread
From: Theodore Ts'o @ 2017-01-21  0:15 UTC (permalink / raw)
  To: Jason A. Donenfeld; +Cc: LKML, Hannes Frederic Sowa, Andy Lutomirski

On Fri, Jan 20, 2017 at 04:47:42PM +0100, Jason A. Donenfeld wrote:
> 
> What do you mean? Nothing is wasted right now. The u64 function only
> gets u64s from a dedicated u64 array. The u32 function only gets u32s
> from a dedicated u32 array. There are separate batched entropy arrays
> for each function.

But there is a shared pointer, which is used both for the dedicated
u32 array and the dedicated u64 array.  So when you increment the
pointer for the get_random_u32, the corresponding entry in the u64
array is wasted, no?

						- Ted

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 1/2] random: use chacha20 for get_random_int/long
  2017-01-21  0:15         ` Theodore Ts'o
@ 2017-01-21  0:16           ` Jason A. Donenfeld
  2017-01-21  6:24             ` Theodore Ts'o
  0 siblings, 1 reply; 23+ messages in thread
From: Jason A. Donenfeld @ 2017-01-21  0:16 UTC (permalink / raw)
  To: Theodore Ts'o, Jason A. Donenfeld, LKML,
	Hannes Frederic Sowa, Andy Lutomirski

On Sat, Jan 21, 2017 at 1:15 AM, Theodore Ts'o <tytso@mit.edu> wrote:
> But there is a shared pointer, which is used both for the dedicated
> u32 array and the dedicated u64 array.  So when you increment the
> pointer for the get_random_u32, the corresponding entry in the u64
> array is wasted, no?

No, it is not a shared pointer. It is a different pointer with a
different batch. The idea is that each function gets its own batch.
That way there's always perfect alignment. This is why I'm suggesting
that my approach is faster.

Would you like me to roll your (slower) bitshifting idea as v2, or can
we stick with my v1?

Jason

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 1/2] random: use chacha20 for get_random_int/long
  2017-01-21  0:16           ` Jason A. Donenfeld
@ 2017-01-21  6:24             ` Theodore Ts'o
  2017-01-21 14:08               ` Jason A. Donenfeld
  0 siblings, 1 reply; 23+ messages in thread
From: Theodore Ts'o @ 2017-01-21  6:24 UTC (permalink / raw)
  To: Jason A. Donenfeld; +Cc: LKML, Hannes Frederic Sowa, Andy Lutomirski

On Sat, Jan 21, 2017 at 01:16:56AM +0100, Jason A. Donenfeld wrote:
> On Sat, Jan 21, 2017 at 1:15 AM, Theodore Ts'o <tytso@mit.edu> wrote:
> > But there is a shared pointer, which is used both for the dedicated
> > u32 array and the dedicated u64 array.  So when you increment the
> > pointer for the get_random_u32, the corresponding entry in the u64
> > array is wasted, no?
> 
> No, it is not a shared pointer. It is a different pointer with a
> different batch. The idea is that each function gets its own batch.
> That way there's always perfect alignment. This is why I'm suggesting
> that my approach is faster.

Oh, I see.  What was confusing me was that you used the same data
structure for both, but you were using different instances of the
structure for get_random_u32 and get_random_u64.  I thought you were
using the same batched_entropy structure for both.  My bad.

I probably would have used different structure definitions for both,
but that's probably because I really am not fond of unions at all if
they can be avoided.  I thought you were using a union because you
were deliberately trying to use one instance of the structure as a per
cpu variable for u32 and u64.

So that's not how I would do things, but it's fine.

						- Ted
					

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 1/2] random: use chacha20 for get_random_int/long
  2017-01-21  6:24             ` Theodore Ts'o
@ 2017-01-21 14:08               ` Jason A. Donenfeld
  2017-01-22 11:24                 ` Greg Kroah-Hartman
  0 siblings, 1 reply; 23+ messages in thread
From: Jason A. Donenfeld @ 2017-01-21 14:08 UTC (permalink / raw)
  To: Theodore Ts'o, Jason A. Donenfeld, LKML,
	Hannes Frederic Sowa, Andy Lutomirski
  Cc: Greg Kroah-Hartman

Hi Ted,

On Sat, Jan 21, 2017 at 7:24 AM, Theodore Ts'o <tytso@mit.edu> wrote:
> Oh, I see.
>
> So that's not how I would do things, but it's fine.


Great, alright. In what tree should I look for these two commits? Or
should I send it to the drivers/char maintainer (now CCd)?

Jason

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 1/2] random: use chacha20 for get_random_int/long
  2017-01-21 14:08               ` Jason A. Donenfeld
@ 2017-01-22 11:24                 ` Greg Kroah-Hartman
  2017-01-22 12:21                   ` Jason A. Donenfeld
  0 siblings, 1 reply; 23+ messages in thread
From: Greg Kroah-Hartman @ 2017-01-22 11:24 UTC (permalink / raw)
  To: Jason A. Donenfeld
  Cc: Theodore Ts'o, LKML, Hannes Frederic Sowa, Andy Lutomirski

On Sat, Jan 21, 2017 at 03:08:12PM +0100, Jason A. Donenfeld wrote:
> Hi Ted,
> 
> On Sat, Jan 21, 2017 at 7:24 AM, Theodore Ts'o <tytso@mit.edu> wrote:
> > Oh, I see.
> >
> > So that's not how I would do things, but it's fine.
> 
> 
> Great, alright. In what tree should I look for these two commits? Or
> should I send it to the drivers/char maintainer (now CCd)?

No objection from me, but as the core stuff is in the networking tree,
maybe it makes more sense for them to flow through there?

Ted, any objection to that?

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 1/2] random: use chacha20 for get_random_int/long
  2017-01-22 11:24                 ` Greg Kroah-Hartman
@ 2017-01-22 12:21                   ` Jason A. Donenfeld
  2017-01-22 15:20                     ` Greg Kroah-Hartman
  0 siblings, 1 reply; 23+ messages in thread
From: Jason A. Donenfeld @ 2017-01-22 12:21 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Theodore Ts'o, LKML, Hannes Frederic Sowa, Andy Lutomirski

Hey Greg,

On Sun, Jan 22, 2017 at 12:24 PM, Greg Kroah-Hartman
<gregkh@linuxfoundation.org> wrote:
> On Sat, Jan 21, 2017 at 03:08:12PM +0100, Jason A. Donenfeld wrote:
>> Hi Ted,
>>
>> On Sat, Jan 21, 2017 at 7:24 AM, Theodore Ts'o <tytso@mit.edu> wrote:
>> > Oh, I see.
>> >
>> > So that's not how I would do things, but it's fine.
>>
>>
>> Great, alright. In what tree should I look for these two commits? Or
>> should I send it to the drivers/char maintainer (now CCd)?
>
> No objection from me, but as the core stuff is in the networking tree,
> maybe it makes more sense for them to flow through there?

The core stuff is not in the networking tree. This has nothing to do
with the networking tree in any way at all. There might be some
confusion because the initial discussions came from the siphash stuff,
which is in the networking tree, but Ted and I chose a different
route, going with chacha instead of siphash. So this is 0%
network-related.

> Ted, any objection to that?

Seems like either you pull or Ted pulls it.

Jason

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 1/2] random: use chacha20 for get_random_int/long
  2017-01-22 12:21                   ` Jason A. Donenfeld
@ 2017-01-22 15:20                     ` Greg Kroah-Hartman
  2017-01-22 15:34                       ` Jason A. Donenfeld
  2017-01-22 22:28                       ` [PATCH 1/2] random: use chacha20 for get_random_int/long Theodore Ts'o
  0 siblings, 2 replies; 23+ messages in thread
From: Greg Kroah-Hartman @ 2017-01-22 15:20 UTC (permalink / raw)
  To: Jason A. Donenfeld
  Cc: Theodore Ts'o, LKML, Hannes Frederic Sowa, Andy Lutomirski

On Sun, Jan 22, 2017 at 01:21:39PM +0100, Jason A. Donenfeld wrote:
> Hey Greg,
> 
> On Sun, Jan 22, 2017 at 12:24 PM, Greg Kroah-Hartman
> <gregkh@linuxfoundation.org> wrote:
> > On Sat, Jan 21, 2017 at 03:08:12PM +0100, Jason A. Donenfeld wrote:
> >> Hi Ted,
> >>
> >> On Sat, Jan 21, 2017 at 7:24 AM, Theodore Ts'o <tytso@mit.edu> wrote:
> >> > Oh, I see.
> >> >
> >> > So that's not how I would do things, but it's fine.
> >>
> >>
> >> Great, alright. In what tree should I look for these two commits? Or
> >> should I send it to the drivers/char maintainer (now CCd)?
> >
> > No objection from me, but as the core stuff is in the networking tree,
> > maybe it makes more sense for them to flow through there?
> 
> The core stuff is not in the networking tree. This has nothing to do
> with the networking tree in any way at all. There might be some
> confusion because the initial discussions came from the siphash stuff,
> which is in the networking tree, but Ted and I chose a different
> route, going with chacha instead of siphash. So this is 0%
> network-related.

Sorry, you are correct, I am confused here.

> > Ted, any objection to that?
> 
> Seems like either you pull or Ted pulls it.

Can you repost these?  They are gone from my patch queue.

Ted, any objection for me to take these?

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 23+ messages in thread

* [PATCH 1/2] random: use chacha20 for get_random_int/long
  2017-01-22 15:20                     ` Greg Kroah-Hartman
@ 2017-01-22 15:34                       ` Jason A. Donenfeld
  2017-01-22 15:34                         ` [PATCH 2/2] random: convert get_random_int/long into get_random_u32/u64 Jason A. Donenfeld
  2017-01-22 22:28                       ` [PATCH 1/2] random: use chacha20 for get_random_int/long Theodore Ts'o
  1 sibling, 1 reply; 23+ messages in thread
From: Jason A. Donenfeld @ 2017-01-22 15:34 UTC (permalink / raw)
  To: LKML, Greg Kroah-Hartman
  Cc: Jason A. Donenfeld, Theodore Ts'o, Hannes Frederic Sowa,
	Andy Lutomirski

Now that our crng uses chacha20, we can rely on its speedy
characteristics for replacing MD5, while simultaneously achieving a
higher security guarantee. Before, the idea was to use these functions if
you wanted random integers that aren't stupidly insecure but aren't
necessarily secure either: a vague gray zone that hopefully was "good
enough" for its users. With chacha20, we can strengthen this claim,
since either we're using an rdrand-like instruction, or we're using the
same crng as /dev/urandom. And it's faster than what was there before.

We could have chosen to replace this with a SipHash-derived function,
which might be slightly faster, but at the cost of having yet another
RNG construction in the kernel. By moving to chacha20, we have a single
RNG to analyze and verify, and we also already get good performance
improvements on all platforms.

Implementation-wise, rather than use a generic buffer for both
get_random_int/long and memcpy based on the size needs, we use a
specific buffer for 32-bit reads and for 64-bit reads. This way, we're
guaranteed to always have aligned accesses on all platforms. While
slightly more verbose in C, the assembly this generates is a lot
simpler than otherwise.

Finally, on 32-bit platforms where longs and ints are the same size,
we simply alias get_random_int to get_random_long.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Suggested-by: Theodore Ts'o <tytso@mit.edu>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Theodore Ts'o <tytso@mit.edu>
Cc: Hannes Frederic Sowa <hannes@stressinduktion.org>
Cc: Andy Lutomirski <luto@amacapital.net>
---
 drivers/char/random.c  | 84 ++++++++++++++++++++++++++------------------------
 include/linux/random.h |  1 -
 init/main.c            |  1 -
 3 files changed, 43 insertions(+), 43 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 1ef26403bcc8..433facfd6cb8 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -2042,63 +2042,65 @@ struct ctl_table random_table[] = {
 };
 #endif 	/* CONFIG_SYSCTL */
 
-static u32 random_int_secret[MD5_MESSAGE_BYTES / 4] ____cacheline_aligned;
-
-int random_int_secret_init(void)
-{
-	get_random_bytes(random_int_secret, sizeof(random_int_secret));
-	return 0;
-}
-
-static DEFINE_PER_CPU(__u32 [MD5_DIGEST_WORDS], get_random_int_hash)
-		__aligned(sizeof(unsigned long));
+struct batched_entropy {
+	union {
+		unsigned long entropy_long[CHACHA20_BLOCK_SIZE / sizeof(unsigned long)];
+		unsigned int entropy_int[CHACHA20_BLOCK_SIZE / sizeof(unsigned int)];
+	};
+	unsigned int position;
+};
 
 /*
- * Get a random word for internal kernel use only. Similar to urandom but
- * with the goal of minimal entropy pool depletion. As a result, the random
- * value is not cryptographically secure but for several uses the cost of
- * depleting entropy is too high
+ * Get a random word for internal kernel use only. The quality of the random
+ * number is either as good as RDRAND or as good as /dev/urandom, with the
+ * goal of being quite fast and not depleting entropy.
  */
-unsigned int get_random_int(void)
+static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_long);
+unsigned long get_random_long(void)
 {
-	__u32 *hash;
-	unsigned int ret;
+	unsigned long ret;
+	struct batched_entropy *batch;
 
-	if (arch_get_random_int(&ret))
+	if (arch_get_random_long(&ret))
 		return ret;
 
-	hash = get_cpu_var(get_random_int_hash);
-
-	hash[0] += current->pid + jiffies + random_get_entropy();
-	md5_transform(hash, random_int_secret);
-	ret = hash[0];
-	put_cpu_var(get_random_int_hash);
-
+	batch = &get_cpu_var(batched_entropy_long);
+	if (batch->position % ARRAY_SIZE(batch->entropy_long) == 0) {
+		extract_crng((u8 *)batch->entropy_long);
+		batch->position = 0;
+	}
+	ret = batch->entropy_long[batch->position++];
+	put_cpu_var(batched_entropy_long);
 	return ret;
 }
-EXPORT_SYMBOL(get_random_int);
+EXPORT_SYMBOL(get_random_long);
 
-/*
- * Same as get_random_int(), but returns unsigned long.
- */
-unsigned long get_random_long(void)
+#if BITS_PER_LONG == 32
+unsigned int get_random_int(void)
 {
-	__u32 *hash;
-	unsigned long ret;
+	return get_random_long();
+}
+#else
+static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_int);
+unsigned int get_random_int(void)
+{
+	unsigned int ret;
+	struct batched_entropy *batch;
 
-	if (arch_get_random_long(&ret))
+	if (arch_get_random_int(&ret))
 		return ret;
 
-	hash = get_cpu_var(get_random_int_hash);
-
-	hash[0] += current->pid + jiffies + random_get_entropy();
-	md5_transform(hash, random_int_secret);
-	ret = *(unsigned long *)hash;
-	put_cpu_var(get_random_int_hash);
-
+	batch = &get_cpu_var(batched_entropy_int);
+	if (batch->position % ARRAY_SIZE(batch->entropy_int) == 0) {
+		extract_crng((u8 *)batch->entropy_int);
+		batch->position = 0;
+	}
+	ret = batch->entropy_int[batch->position++];
+	put_cpu_var(batched_entropy_int);
 	return ret;
 }
-EXPORT_SYMBOL(get_random_long);
+#endif
+EXPORT_SYMBOL(get_random_int);
 
 /**
  * randomize_page - Generate a random, page aligned address
diff --git a/include/linux/random.h b/include/linux/random.h
index 7bd2403e4fef..16ab429735a7 100644
--- a/include/linux/random.h
+++ b/include/linux/random.h
@@ -37,7 +37,6 @@ extern void get_random_bytes(void *buf, int nbytes);
 extern int add_random_ready_callback(struct random_ready_callback *rdy);
 extern void del_random_ready_callback(struct random_ready_callback *rdy);
 extern void get_random_bytes_arch(void *buf, int nbytes);
-extern int random_int_secret_init(void);
 
 #ifndef MODULE
 extern const struct file_operations random_fops, urandom_fops;
diff --git a/init/main.c b/init/main.c
index b0c9d6facef9..09beb7fc6e8c 100644
--- a/init/main.c
+++ b/init/main.c
@@ -879,7 +879,6 @@ static void __init do_basic_setup(void)
 	do_ctors();
 	usermodehelper_enable();
 	do_initcalls();
-	random_int_secret_init();
 }
 
 static void __init do_pre_smp_initcalls(void)
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH 2/2] random: convert get_random_int/long into get_random_u32/u64
  2017-01-22 15:34                       ` Jason A. Donenfeld
@ 2017-01-22 15:34                         ` Jason A. Donenfeld
  0 siblings, 0 replies; 23+ messages in thread
From: Jason A. Donenfeld @ 2017-01-22 15:34 UTC (permalink / raw)
  To: LKML, Greg Kroah-Hartman; +Cc: Jason A. Donenfeld, Theodore Ts'o

Many times, when a user wants a random number, he wants a random number
of a guaranteed size. So, thinking of get_random_int and get_random_long
in terms of get_random_u32 and get_random_u64 makes it much easier to
achieve this. It also makes the code simpler.

On 32-bit platforms, get_random_int and get_random_long are both aliased
to get_random_u32. On 64-bit platforms, int->u32 and long->u64.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Theodore Ts'o <tytso@mit.edu>
---
 drivers/char/random.c  | 55 +++++++++++++++++++++++++-------------------------
 include/linux/random.h | 17 ++++++++++++++--
 2 files changed, 42 insertions(+), 30 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 433facfd6cb8..ee68d5bffdc0 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -2044,8 +2044,8 @@ struct ctl_table random_table[] = {
 
 struct batched_entropy {
 	union {
-		unsigned long entropy_long[CHACHA20_BLOCK_SIZE / sizeof(unsigned long)];
-		unsigned int entropy_int[CHACHA20_BLOCK_SIZE / sizeof(unsigned int)];
+		u64 entropy_u64[CHACHA20_BLOCK_SIZE / sizeof(u64)];
+		u32 entropy_u32[CHACHA20_BLOCK_SIZE / sizeof(u32)];
 	};
 	unsigned int position;
 };
@@ -2055,52 +2055,51 @@ struct batched_entropy {
  * number is either as good as RDRAND or as good as /dev/urandom, with the
  * goal of being quite fast and not depleting entropy.
  */
-static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_long);
-unsigned long get_random_long(void)
+static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u64);
+u64 get_random_u64(void)
 {
-	unsigned long ret;
+	u64 ret;
 	struct batched_entropy *batch;
 
-	if (arch_get_random_long(&ret))
+#if BITS_PER_LONG == 64
+	if (arch_get_random_long((unsigned long *)&ret))
 		return ret;
+#else
+	if (arch_get_random_long((unsigned long *)&ret) &&
+	    arch_get_random_long((unsigned long *)&ret + 1))
+	    return ret;
+#endif
 
-	batch = &get_cpu_var(batched_entropy_long);
-	if (batch->position % ARRAY_SIZE(batch->entropy_long) == 0) {
-		extract_crng((u8 *)batch->entropy_long);
+	batch = &get_cpu_var(batched_entropy_u64);
+	if (batch->position % ARRAY_SIZE(batch->entropy_u64) == 0) {
+		extract_crng((u8 *)batch->entropy_u64);
 		batch->position = 0;
 	}
-	ret = batch->entropy_long[batch->position++];
-	put_cpu_var(batched_entropy_long);
+	ret = batch->entropy_u64[batch->position++];
+	put_cpu_var(batched_entropy_u64);
 	return ret;
 }
-EXPORT_SYMBOL(get_random_long);
+EXPORT_SYMBOL(get_random_u64);
 
-#if BITS_PER_LONG == 32
-unsigned int get_random_int(void)
-{
-	return get_random_long();
-}
-#else
-static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_int);
-unsigned int get_random_int(void)
+static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u32);
+u32 get_random_u32(void)
 {
-	unsigned int ret;
+	u32 ret;
 	struct batched_entropy *batch;
 
 	if (arch_get_random_int(&ret))
 		return ret;
 
-	batch = &get_cpu_var(batched_entropy_int);
-	if (batch->position % ARRAY_SIZE(batch->entropy_int) == 0) {
-		extract_crng((u8 *)batch->entropy_int);
+	batch = &get_cpu_var(batched_entropy_u32);
+	if (batch->position % ARRAY_SIZE(batch->entropy_u32) == 0) {
+		extract_crng((u8 *)batch->entropy_u32);
 		batch->position = 0;
 	}
-	ret = batch->entropy_int[batch->position++];
-	put_cpu_var(batched_entropy_int);
+	ret = batch->entropy_u32[batch->position++];
+	put_cpu_var(batched_entropy_u32);
 	return ret;
 }
-#endif
-EXPORT_SYMBOL(get_random_int);
+EXPORT_SYMBOL(get_random_u32);
 
 /**
  * randomize_page - Generate a random, page aligned address
diff --git a/include/linux/random.h b/include/linux/random.h
index 16ab429735a7..ed5c3838780d 100644
--- a/include/linux/random.h
+++ b/include/linux/random.h
@@ -42,8 +42,21 @@ extern void get_random_bytes_arch(void *buf, int nbytes);
 extern const struct file_operations random_fops, urandom_fops;
 #endif
 
-unsigned int get_random_int(void);
-unsigned long get_random_long(void);
+u32 get_random_u32(void);
+u64 get_random_u64(void);
+static inline unsigned int get_random_int(void)
+{
+	return get_random_u32();
+}
+static inline unsigned long get_random_long(void)
+{
+#if BITS_PER_LONG == 64
+	return get_random_u64();
+#else
+	return get_random_u32();
+#endif
+}
+
 unsigned long randomize_page(unsigned long start, unsigned long range);
 
 u32 prandom_u32(void);
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* Re: [PATCH 1/2] random: use chacha20 for get_random_int/long
  2017-01-22 15:20                     ` Greg Kroah-Hartman
  2017-01-22 15:34                       ` Jason A. Donenfeld
@ 2017-01-22 22:28                       ` Theodore Ts'o
  2017-01-22 22:49                         ` Jason A. Donenfeld
  2017-01-23  8:12                         ` Greg Kroah-Hartman
  1 sibling, 2 replies; 23+ messages in thread
From: Theodore Ts'o @ 2017-01-22 22:28 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Jason A. Donenfeld, LKML, Hannes Frederic Sowa, Andy Lutomirski

On Sun, Jan 22, 2017 at 04:20:04PM +0100, Greg Kroah-Hartman wrote:
> > The core stuff is not in the networking tree. This is nothing to do
> > with the networking tree in any way at all. There might be some
> > confusion because the initial discussions came from the siphash stuff,
> > which is in the networking tree, but Ted and I choose a different
> > route, going with chacha instead of siphash. So this is 0%
> > network-related.
> 
> Sorry, you are correct, I am confused here.
> 
> > > Ted, any objection to that?
> > 
> > Seems like either you pull or Ted pulls it.
> 
> Can you repost these?  They are gone from my patch queue.
> 
> Ted, any objection for me to take these?

If there are other changes to the relevant lines from the networking
tree, sure.  Otherwise, I was planning on taking them, since I've got
some other changes to drivers/char/random.c I was planning on sending
through the next merge window anyway.

I don't think there will be any merge difficulties, but it's simpler
if all of the changes to a particular file are going through one tree.
But if you really want to take it, I'm not going to object any more
than expressing a preference to do it the other way.

Cheers,

						- Ted

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 1/2] random: use chacha20 for get_random_int/long
  2017-01-22 22:28                       ` [PATCH 1/2] random: use chacha20 for get_random_int/long Theodore Ts'o
@ 2017-01-22 22:49                         ` Jason A. Donenfeld
  2017-01-23  0:33                           ` Theodore Ts'o
  2017-01-23  8:12                         ` Greg Kroah-Hartman
  1 sibling, 1 reply; 23+ messages in thread
From: Jason A. Donenfeld @ 2017-01-22 22:49 UTC (permalink / raw)
  To: Theodore Ts'o, Greg Kroah-Hartman, Jason A. Donenfeld, LKML,
	Hannes Frederic Sowa, Andy Lutomirski

On Sun, Jan 22, 2017 at 11:28 PM, Theodore Ts'o <tytso@mit.edu> wrote:
> If there are other changes to the relevant lines from the networking
> tree, sure.

This patch set has *nothing* to do with the networking tree. That's just
a topic confusion. There shouldn't be more discussion about
networking, because it doesn't make sense in any way at all.

> Otherwise, I was planning on taking them, since I've got
> some other changes to drivers/char/random.c I was planning on sending
> through the next merge window anyway.

Works for me. Which tree should I look for this to be queued up in?

Jason

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 1/2] random: use chacha20 for get_random_int/long
  2017-01-22 22:49                         ` Jason A. Donenfeld
@ 2017-01-23  0:33                           ` Theodore Ts'o
  0 siblings, 0 replies; 23+ messages in thread
From: Theodore Ts'o @ 2017-01-23  0:33 UTC (permalink / raw)
  To: Jason A. Donenfeld
  Cc: Greg Kroah-Hartman, LKML, Hannes Frederic Sowa, Andy Lutomirski

On Sun, Jan 22, 2017 at 11:49:00PM +0100, Jason A. Donenfeld wrote:
> On Sun, Jan 22, 2017 at 11:28 PM, Theodore Ts'o <tytso@mit.edu> wrote:
> > If there are other changes to the relevant lines from the networking
> > tree, sure.
> 
> This patch set has *nothing* to do with the networking tree. That just
> a topic confusion. There shouldn't be more discussion about
> networking, because it doesn't make sense in any way at all.

Well, the reason people do is for hysterical... er, historical
reasons.  *Originally* get_random_int() was introduced for use by the
networking folks, and some people think that get_random_int()
therefore "belongs" to the networking tree, even though there are
other users of get_random_int() and get_random_long() besides the
networking code.

I'd much rather break this obviously historical connection, but
patching people's wetware is a lot harder than getting a patch into
the kernel tree....

						- Ted

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 1/2] random: use chacha20 for get_random_int/long
  2017-01-22 22:28                       ` [PATCH 1/2] random: use chacha20 for get_random_int/long Theodore Ts'o
  2017-01-22 22:49                         ` Jason A. Donenfeld
@ 2017-01-23  8:12                         ` Greg Kroah-Hartman
  1 sibling, 0 replies; 23+ messages in thread
From: Greg Kroah-Hartman @ 2017-01-23  8:12 UTC (permalink / raw)
  To: Theodore Ts'o, Jason A. Donenfeld, LKML,
	Hannes Frederic Sowa, Andy Lutomirski

On Sun, Jan 22, 2017 at 05:28:42PM -0500, Theodore Ts'o wrote:
> On Sun, Jan 22, 2017 at 04:20:04PM +0100, Greg Kroah-Hartman wrote:
> > > The core stuff is not in the networking tree. This is nothing to do
> > > with the networking tree in any way at all. There might be some
> > > confusion because the initial discussions came from the siphash stuff,
> > > which is in the networking tree, but Ted and I choose a different
> > > route, going with chacha instead of siphash. So this is 0%
> > > network-related.
> > 
> > Sorry, you are correct, I am confused here.
> > 
> > > > Ted, any objection to that?
> > > 
> > > Seems like either you pull or Ted pulls it.
> > 
> > Can you repost these?  They are gone from my patch queue.
> > 
> > Ted, any objection for me to take these?
> 
> If there are other changes to the relevant lines from the networking
> tree, sure.  Otherwise, I was planning on taking them, since I've got
> some other changes to drivers/char/random.c I was planning on sending
> through the next merge window anyway.
> 
> I don't think there will be any merge difficulties, but it's simpler
> if all of the changes to a particular file are going through one tree.
> But if you really want to take it, I'm not going to object any more
> than expressing a preference to do it the other way.

No problem with me for you to take it, just don't want it to slip
between the cracks and have to wait for the next merge window.

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 23+ messages in thread

* [PATCH 2/2] random: convert get_random_int/long into get_random_u32/u64
  2016-12-22 18:07 Jason A. Donenfeld
@ 2016-12-22 18:07 ` Jason A. Donenfeld
  0 siblings, 0 replies; 23+ messages in thread
From: Jason A. Donenfeld @ 2016-12-22 18:07 UTC (permalink / raw)
  To: tytso, linux-kernel, kernel-hardening; +Cc: Jason A. Donenfeld

Many times, when a user wants a random number, he wants a random number
of a guaranteed size. So, thinking of get_random_int and get_random_long
in terms of get_random_u32 and get_random_u64 makes it much easier to
achieve this. It also makes the code simpler.

On 32-bit platforms, get_random_int and get_random_long are both aliased
to get_random_u32. On 64-bit platforms, int->u32 and long->u64.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Cc: Theodore Ts'o <tytso@mit.edu>
---
I personally would find this extremely useful in my own code, where I
currently wind up doing something ugly like:

    (u64)get_random_int() << 32 | get_random_int()

But if you have some aversion, don't hesitate to just take 1/2 without
this 2/2. But hopefully you'll like this 2/2.

 drivers/char/random.c  | 55 +++++++++++++++++++++++++-------------------------
 include/linux/random.h | 17 ++++++++++++++--
 2 files changed, 42 insertions(+), 30 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 08d1dd58c0d2..ee737ef41ce9 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -2044,8 +2044,8 @@ struct ctl_table random_table[] = {
 
 struct batched_entropy {
 	union {
-		unsigned long entropy_long[CHACHA20_BLOCK_SIZE / sizeof(unsigned long)];
-		unsigned int entropy_int[CHACHA20_BLOCK_SIZE / sizeof(unsigned int)];
+		u64 entropy_u64[CHACHA20_BLOCK_SIZE / sizeof(u64)];
+		u32 entropy_u32[CHACHA20_BLOCK_SIZE / sizeof(u32)];
 	};
 	unsigned int position;
 };
@@ -2055,52 +2055,51 @@ struct batched_entropy {
  * number is either as good as RDRAND or as good as /dev/urandom, with the
  * goal of being quite fast and not depleting entropy.
  */
-static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_long);
-unsigned long get_random_long(void)
+static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u64);
+u64 get_random_u64(void)
 {
-	unsigned long ret;
+	u64 ret;
 	struct batched_entropy *batch;
 
-	if (arch_get_random_long(&ret))
+#if BITS_PER_LONG == 64
+	if (arch_get_random_long((unsigned long *)&ret))
 		return ret;
+#else
+	if (arch_get_random_long((unsigned long *)&ret) &&
+	    arch_get_random_long((unsigned long *)&ret + 1))
+	    return ret;
+#endif
 
-	batch = &get_cpu_var(batched_entropy_long);
-	if (batch->position % ARRAY_SIZE(batch->entropy_long) == 0) {
-		extract_crng((u8 *)batch->entropy_long);
+	batch = &get_cpu_var(batched_entropy_u64);
+	if (batch->position % ARRAY_SIZE(batch->entropy_u64) == 0) {
+		extract_crng((u8 *)batch->entropy_u64);
 		batch->position = 0;
 	}
-	ret = batch->entropy_long[batch->position++];
-	put_cpu_var(batched_entropy_long);
+	ret = batch->entropy_u64[batch->position++];
+	put_cpu_var(batched_entropy_u64);
 	return ret;
 }
-EXPORT_SYMBOL(get_random_long);
+EXPORT_SYMBOL(get_random_u64);
 
-#if BITS_PER_LONG == 32
-unsigned int get_random_int(void)
-{
-	return get_random_long();
-}
-#else
-static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_int);
-unsigned int get_random_int(void)
+static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u32);
+u32 get_random_u32(void)
 {
-	unsigned int ret;
+	u32 ret;
 	struct batched_entropy *batch;
 
 	if (arch_get_random_int(&ret))
 		return ret;
 
-	batch = &get_cpu_var(batched_entropy_int);
-	if (batch->position % ARRAY_SIZE(batch->entropy_int) == 0) {
-		extract_crng((u8 *)batch->entropy_int);
+	batch = &get_cpu_var(batched_entropy_u32);
+	if (batch->position % ARRAY_SIZE(batch->entropy_u32) == 0) {
+		extract_crng((u8 *)batch->entropy_u32);
 		batch->position = 0;
 	}
-	ret = batch->entropy_int[batch->position++];
-	put_cpu_var(batched_entropy_int);
+	ret = batch->entropy_u32[batch->position++];
+	put_cpu_var(batched_entropy_u32);
 	return ret;
 }
-#endif
-EXPORT_SYMBOL(get_random_int);
+EXPORT_SYMBOL(get_random_u32);
 
 /**
  * randomize_page - Generate a random, page aligned address
diff --git a/include/linux/random.h b/include/linux/random.h
index 16ab429735a7..ed5c3838780d 100644
--- a/include/linux/random.h
+++ b/include/linux/random.h
@@ -42,8 +42,21 @@ extern void get_random_bytes_arch(void *buf, int nbytes);
 extern const struct file_operations random_fops, urandom_fops;
 #endif
 
-unsigned int get_random_int(void);
-unsigned long get_random_long(void);
+u32 get_random_u32(void);
+u64 get_random_u64(void);
+static inline unsigned int get_random_int(void)
+{
+	return get_random_u32();
+}
+static inline unsigned long get_random_long(void)
+{
+#if BITS_PER_LONG == 64
+	return get_random_u64();
+#else
+	return get_random_u32();
+#endif
+}
+
 unsigned long randomize_page(unsigned long start, unsigned long range);
 
 u32 prandom_u32(void);
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 23+ messages in thread

end of thread, other threads:[~2017-01-23  8:12 UTC | newest]

Thread overview: 23+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-01-06 18:32 [PATCH 1/2] random: use chacha20 for get_random_int/long Jason A. Donenfeld
2017-01-06 18:32 ` [PATCH 2/2] random: convert get_random_int/long into get_random_u32/u64 Jason A. Donenfeld
2017-01-18  3:49 ` [PATCH 1/2] random: use chacha20 for get_random_int/long Theodore Ts'o
2017-01-20  5:27   ` Jason A. Donenfeld
2017-01-20 14:28     ` Theodore Ts'o
2017-01-20 15:38       ` Jason A. Donenfeld
2017-01-21  0:10         ` Theodore Ts'o
2017-01-21  0:13           ` Jason A. Donenfeld
2017-01-20 15:47       ` Jason A. Donenfeld
2017-01-21  0:15         ` Theodore Ts'o
2017-01-21  0:16           ` Jason A. Donenfeld
2017-01-21  6:24             ` Theodore Ts'o
2017-01-21 14:08               ` Jason A. Donenfeld
2017-01-22 11:24                 ` Greg Kroah-Hartman
2017-01-22 12:21                   ` Jason A. Donenfeld
2017-01-22 15:20                     ` Greg Kroah-Hartman
2017-01-22 15:34                       ` Jason A. Donenfeld
2017-01-22 15:34                         ` [PATCH 2/2] random: convert get_random_int/long into get_random_u32/u64 Jason A. Donenfeld
2017-01-22 22:28                       ` [PATCH 1/2] random: use chacha20 for get_random_int/long Theodore Ts'o
2017-01-22 22:49                         ` Jason A. Donenfeld
2017-01-23  0:33                           ` Theodore Ts'o
2017-01-23  8:12                         ` Greg Kroah-Hartman
  -- strict thread matches above, loose matches on Subject: below --
2016-12-22 18:07 Jason A. Donenfeld
2016-12-22 18:07 ` [PATCH 2/2] random: convert get_random_int/long into get_random_u32/u64 Jason A. Donenfeld
