From: Nicolai Stange <nstange@suse.de>
To: "Jason A. Donenfeld" <Jason@zx2c4.com>
Cc: tytso@mit.edu, linux-kernel@vger.kernel.org, gregkh@linuxfoundation.org
Subject: Re: [PATCH v2] random: always use batched entropy for get_random_u{32,64}
Date: Wed, 01 Apr 2020 15:08:22 +0200
Message-ID: <87d08rbbg9.fsf@suse.de>
In-Reply-To: <20200221201037.30231-1-Jason@zx2c4.com> (Jason A. Donenfeld's message of "Fri, 21 Feb 2020 21:10:37 +0100")

Hi,

"Jason A. Donenfeld" <Jason@zx2c4.com> writes:

> diff --git a/drivers/char/random.c b/drivers/char/random.c
> index c7f9584de2c8..a6b77a850ddd 100644
> --- a/drivers/char/random.c
> +++ b/drivers/char/random.c
> @@ -2149,11 +2149,11 @@ struct batched_entropy {
>  
>  /*
>   * Get a random word for internal kernel use only. The quality of the random
> - * number is either as good as RDRAND or as good as /dev/urandom, with the
> - * goal of being quite fast and not depleting entropy. In order to ensure
> + * number is good as /dev/urandom, but there is no backtrack protection, with
> + * the goal of being quite fast and not depleting entropy. In order to ensure
>   * that the randomness provided by this function is okay, the function
> - * wait_for_random_bytes() should be called and return 0 at least once
> - * at any point prior.
> + * wait_for_random_bytes() should be called and return 0 at least once at any
> + * point prior.
>   */
>  static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u64) = {
>  	.batch_lock	= __SPIN_LOCK_UNLOCKED(batched_entropy_u64.lock),
> @@ -2166,15 +2166,6 @@ u64 get_random_u64(void)
>  	struct batched_entropy *batch;
>  	static void *previous;
>  
> -#if BITS_PER_LONG == 64
> -	if (arch_get_random_long((unsigned long *)&ret))
> -		return ret;
> -#else
> -	if (arch_get_random_long((unsigned long *)&ret) &&
> -	    arch_get_random_long((unsigned long *)&ret + 1))
> -	    return ret;
> -#endif
> -
>  	warn_unseeded_randomness(&previous);

I don't know if this has been reported elsewhere already, but this
warning triggers on x86_64 with CONFIG_SLAB_FREELIST_HARDENED and
CONFIG_SLAB_FREELIST_RANDOM during early boot now:

  [    0.079775] random: random: get_random_u64 called from __kmem_cache_create+0x40/0x550 with crng_init=0
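
The reason these two options exercise the RNG this early: with
CONFIG_SLAB_FREELIST_HARDENED, slub draws a per-cache random canary at
cache creation time, and kmem_cache_init() runs long before
rand_initialize(). Quoting mm/slub.c from memory (the exact lines may
differ):

  #ifdef CONFIG_SLAB_FREELIST_HARDENED
  	/* Runs for every cache created during kmem_cache_init(),
  	 * i.e. long before rand_initialize() has seeded the CRNG.
  	 * get_random_long() is get_random_u64() on 64-bit, hence
  	 * the function named in the warning above. */
  	s->random = get_random_long();
  #endif

CONFIG_SLAB_FREELIST_RANDOM likewise draws random numbers to shuffle
the freelist order of newly created caches and slabs, which is where
the allocation-path callers listed below come from.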

Strictly speaking, this isn't really a problem introduced by this
patch -- other archs without arch_get_random_long() support
(e.g. ppc64le) have shown this behaviour before.

FWIW, I added a dump_stack() to warn_unseeded_randomness() in order to
capture the respective call paths of the offending get_random_u64/u32()
callers, i.e. those invoked before rand_initialize(); see the sketch
after the list below. Those are

  - __kmem_cache_create() and
    init_cache_random_seq()->cache_random_seq_create(), called
    indirectly quite a number of times from
    - kmem_cache_init()
    - kmem_cache_init()->create_kmalloc_caches()
    - vmalloc_init()->kmem_cache_create()
    - sched_init()->kmem_cache_create()
    - radix_tree_init()->kmem_cache_create()
    - workqueue_init_early()->kmem_cache_create()
    - trace_event_init()->kmem_cache_create()

  - __kmalloc()/kmem_cache_alloc()/kmem_cache_alloc_node()/kmem_cache_alloc_trace()
      ->__slab_alloc()->___slab_alloc()->new_slab(),
    called indirectly through
    - vmalloc_init()
    - workqueue_init_early()
    - trace_event_init()
    - early_irq_init()->alloc_desc()
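
The debugging change itself was nothing fancy, roughly the following
(with the context in warn_unseeded_randomness() quoted from memory):

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ static void warn_unseeded_randomness(void **previous)
 	if (__ratelimit(&unseeded_warning))
 		printk_deferred(KERN_NOTICE "random: %s called from %pS "
 				"with crng_init=%d\n", func_name, caller,
 				crng_init);
+	/* debugging only: print each offending call chain in full */
+	dump_stack();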

Two possible ways to work around this came to mind:
1.) Either reintroduce the arch_get_random_long() invocations in
    get_random_u64/u32(), but only for the case that crng_init <= 1
    (sketched below).
2.) Or introduce something like
      arch_seed_primary_crng_early(),
    to be called early from start_kernel(). For x86_64 this could be
    implemented by filling the crng state with RDRAND data, whereas
    other archs would fall back to get_cycles(), something
    jitterentropy-like, or whatever (also sketched below).
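
To make 1.) concrete, a rough sketch (untested) on top of your v2,
simply re-adding the removed arch RNG invocations behind a crng_init
check; get_random_u32() would get the analogous treatment:

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ u64 get_random_u64(void)
 	struct batched_entropy *batch;
 	static void *previous;
 
+	/* Use the arch RNG only until the CRNG has been seeded. */
+	if (crng_init <= 1) {
+#if BITS_PER_LONG == 64
+		if (arch_get_random_long((unsigned long *)&ret))
+			return ret;
+#else
+		if (arch_get_random_long((unsigned long *)&ret) &&
+		    arch_get_random_long((unsigned long *)&ret + 1))
+			return ret;
+#endif
+	}
+
 	warn_unseeded_randomness(&previous);

And for 2.), a hypothetical sketch -- the function name is the one
invented above, crng_slow_load() is currently static to random.c and
would need to be made reachable from here, and the x86 variant would
essentially do what crng_init_try_arch() already does, only much
earlier than rand_initialize():

/* Weak default: no arch RNG, so mix in at least the cycle counter. */
void __init __weak arch_seed_primary_crng_early(void)
{
	unsigned long seed = get_cycles();

	crng_slow_load((const char *)&seed, sizeof(seed));
}

/* x86 override: XOR RDRAND output into the primary_crng key/nonce
 * words (state[4..15]; state[0..3] hold the ChaCha constants). */
void __init arch_seed_primary_crng_early(void)
{
	unsigned long v;
	int i;

	for (i = 4; i < 16; i++) {
		if (!arch_get_random_long(&v))
			return;	/* no RDRAND available */
		primary_crng.state[i] ^= v;
	}
}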

What do you think?

Thanks,

Nicolai


>  
>  	batch = raw_cpu_ptr(&batched_entropy_u64);
> @@ -2199,9 +2190,6 @@ u32 get_random_u32(void)
>  	struct batched_entropy *batch;
>  	static void *previous;
>  
> -	if (arch_get_random_int(&ret))
> -		return ret;
> -
>  	warn_unseeded_randomness(&previous);
>  
>  	batch = raw_cpu_ptr(&batched_entropy_u32);

-- 
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg), GF: Felix Imendörffer
