From: "Jason A. Donenfeld" <Jason@zx2c4.com>
To: Theodore Ts'o <tytso@mit.edu>,
	Linux Crypto Mailing List <linux-crypto@vger.kernel.org>,
	LKML <linux-kernel@vger.kernel.org>,
	kernel-hardening@lists.openwall.com,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	David Miller <davem@davemloft.net>,
	Eric Biggers <ebiggers3@gmail.com>
Cc: "Jason A. Donenfeld" <Jason@zx2c4.com>
Subject: [PATCH v4 01/13] random: invalidate batched entropy after crng init
Date: Tue, 6 Jun 2017 19:47:52 +0200
Message-ID: <20170606174804.31124-2-Jason@zx2c4.com>
In-Reply-To: <20170606174804.31124-1-Jason@zx2c4.com>

It's possible that get_random_{u32,u64} is used before the crng has
initialized, in which case, its output might not be cryptographically
secure. To address that problem directly, this patch set introduces the
*_wait variety of functions. But even with those, there's a subtle
issue: what happens to batched entropy that was generated before
initialization? Prior to this commit, it would stick around, supplying
bad numbers. After this commit, we force the entropy to be re-extracted
after each phase of the crng has initialized.

In order to avoid a race condition with the position counter, we
introduce a simple rwlock for this invalidation. Since the lock is only
needed during this awkward transition period, we stop using it once
things are all set up, so that it doesn't have an impact on performance.

This should probably be backported to 4.11.

(Also: adding my copyright to the top. With the patch series from
January, this patch, and then the ones that come after, I think there's
a relevant amount of code in here to add my name to the top.)

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 drivers/char/random.c | 37 +++++++++++++++++++++++++++++++++++++
 1 file changed, 37 insertions(+)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 0ab024918907..2291e6224ed3 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1,6 +1,9 @@
 /*
  * random.c -- A strong random number generator
  *
+ * Copyright (C) 2017 Jason A. Donenfeld <Jason@zx2c4.com>. All
+ * Rights Reserved.
+ *
  * Copyright Matt Mackall <mpm@selenic.com>, 2003, 2004, 2005
  *
  * Copyright Theodore Ts'o, 1994, 1995, 1996, 1997, 1998, 1999. All
@@ -762,6 +765,8 @@ static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait);
 static struct crng_state **crng_node_pool __read_mostly;
 #endif
 
+static void invalidate_batched_entropy(void);
+
 static void crng_initialize(struct crng_state *crng)
 {
 	int		i;
@@ -799,6 +804,7 @@ static int crng_fast_load(const char *cp, size_t len)
 		cp++; crng_init_cnt++; len--;
 	}
 	if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) {
+		invalidate_batched_entropy();
 		crng_init = 1;
 		wake_up_interruptible(&crng_init_wait);
 		pr_notice("random: fast init done\n");
@@ -836,6 +842,7 @@ static void crng_reseed(struct crng_state *crng, struct entropy_store *r)
 	memzero_explicit(&buf, sizeof(buf));
 	crng->init_time = jiffies;
 	if (crng == &primary_crng && crng_init < 2) {
+		invalidate_batched_entropy();
 		crng_init = 2;
 		process_random_ready_list();
 		wake_up_interruptible(&crng_init_wait);
@@ -2019,6 +2026,7 @@ struct batched_entropy {
 	};
 	unsigned int position;
 };
+static rwlock_t batched_entropy_reset_lock = __RW_LOCK_UNLOCKED(batched_entropy_reset_lock);
 
 /*
  * Get a random word for internal kernel use only. The quality of the random
@@ -2029,6 +2037,8 @@ static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u64);
 u64 get_random_u64(void)
 {
 	u64 ret;
+	bool use_lock = crng_init < 2;
+	unsigned long flags;
 	struct batched_entropy *batch;
 
 #if BITS_PER_LONG == 64
@@ -2041,11 +2051,15 @@ u64 get_random_u64(void)
 #endif
 
 	batch = &get_cpu_var(batched_entropy_u64);
+	if (use_lock)
+		read_lock_irqsave(&batched_entropy_reset_lock, flags);
 	if (batch->position % ARRAY_SIZE(batch->entropy_u64) == 0) {
 		extract_crng((u8 *)batch->entropy_u64);
 		batch->position = 0;
 	}
 	ret = batch->entropy_u64[batch->position++];
+	if (use_lock)
+		read_unlock_irqrestore(&batched_entropy_reset_lock, flags);
 	put_cpu_var(batched_entropy_u64);
 	return ret;
 }
@@ -2055,22 +2069,45 @@ static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u32);
 u32 get_random_u32(void)
 {
 	u32 ret;
+	bool use_lock = crng_init < 2;
+	unsigned long flags;
 	struct batched_entropy *batch;
 
 	if (arch_get_random_int(&ret))
 		return ret;
 
 	batch = &get_cpu_var(batched_entropy_u32);
+	if (use_lock)
+		read_lock_irqsave(&batched_entropy_reset_lock, flags);
 	if (batch->position % ARRAY_SIZE(batch->entropy_u32) == 0) {
 		extract_crng((u8 *)batch->entropy_u32);
 		batch->position = 0;
 	}
 	ret = batch->entropy_u32[batch->position++];
+	if (use_lock)
+		read_unlock_irqrestore(&batched_entropy_reset_lock, flags);
 	put_cpu_var(batched_entropy_u32);
 	return ret;
 }
 EXPORT_SYMBOL(get_random_u32);
 
+/* It's important to invalidate all potential batched entropy that might
+ * be stored before the crng is initialized, which we can do lazily by
+ * simply resetting the counter to zero so that it's re-extracted on the
+ * next usage. */
+static void invalidate_batched_entropy(void)
+{
+	int cpu;
+	unsigned long flags;
+
+	write_lock_irqsave(&batched_entropy_reset_lock, flags);
+	for_each_possible_cpu (cpu) {
+		per_cpu_ptr(&batched_entropy_u32, cpu)->position = 0;
+		per_cpu_ptr(&batched_entropy_u64, cpu)->position = 0;
+	}
+	write_unlock_irqrestore(&batched_entropy_reset_lock, flags);
+}
+
 /**
  * randomize_page - Generate a random, page aligned address
  * @start: The smallest acceptable address the caller will take.
-- 
2.13.0