From: Marco Elver <elver@google.com>
To: "Jason A. Donenfeld" <Jason@zx2c4.com>,
	Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	kasan-dev@googlegroups.com,
	John Ogness <john.ogness@linutronix.de>,
	Alexander Potapenko <glider@google.com>,
	Dmitry Vyukov <dvyukov@google.com>,
	Geert Uytterhoeven <geert@linux-m68k.org>,
	Geert Uytterhoeven <geert+renesas@glider.be>
Subject: Re: [PATCH v2] mm/kfence: select random number before taking raw lock
Date: Thu, 9 Jun 2022 14:41:56 +0200
Message-ID: <YqHqlC9gvYl2vAiE@elver.google.com>
In-Reply-To: <20220609123319.17576-1-Jason@zx2c4.com>

On Thu, Jun 09, 2022 at 02:33PM +0200, Jason A. Donenfeld wrote:
> The RNG uses vanilla spinlocks, not raw spinlocks, so kfence should pick
> its random numbers before taking its raw spinlocks. This also has the
> nice effect of doing less work inside the lock. It should fix a splat
> that Geert saw with CONFIG_PROVE_RAW_LOCK_NESTING:
> 
>      dump_backtrace.part.0+0x98/0xc0
>      show_stack+0x14/0x28
>      dump_stack_lvl+0xac/0xec
>      dump_stack+0x14/0x2c
>      __lock_acquire+0x388/0x10a0
>      lock_acquire+0x190/0x2c0
>      _raw_spin_lock_irqsave+0x6c/0x94
>      crng_make_state+0x148/0x1e4
>      _get_random_bytes.part.0+0x4c/0xe8
>      get_random_u32+0x4c/0x140
>      __kfence_alloc+0x460/0x5c4
>      kmem_cache_alloc_trace+0x194/0x1dc
>      __kthread_create_on_node+0x5c/0x1a8
>      kthread_create_on_node+0x58/0x7c
>      printk_start_kthread.part.0+0x34/0xa8
>      printk_activate_kthreads+0x4c/0x54
>      do_one_initcall+0xec/0x278
>      kernel_init_freeable+0x11c/0x214
>      kernel_init+0x24/0x124
>      ret_from_fork+0x10/0x20
> 
> Cc: John Ogness <john.ogness@linutronix.de>
> Cc: Alexander Potapenko <glider@google.com>
> Cc: Marco Elver <elver@google.com>
> Cc: Dmitry Vyukov <dvyukov@google.com>
> Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
> Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

Reviewed-by: Marco Elver <elver@google.com>

Thank you.
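
For anyone following the thread, here is a minimal sketch of the
pattern the patch applies (illustrative only, not the actual kfence
code; example_lock and example_alloc_path are made-up names, and the
snippet assumes <linux/spinlock.h> and <linux/prandom.h>). The point is
that get_random_u32() may take a plain spinlock via crng_make_state(),
and plain spinlocks become sleeping locks on PREEMPT_RT, so under
CONFIG_PROVE_RAW_LOCK_NESTING they must not be acquired while a raw
spinlock is held. Drawing the randomness first keeps the raw-locked
region free of such calls:

	/* Stand-in for kfence_freelist_lock. */
	static DEFINE_RAW_SPINLOCK(example_lock);

	static void example_alloc_path(void)
	{
		unsigned long flags;
		/* Draw randomness before taking the raw lock. */
		const bool coin = prandom_u32_max(2);

		raw_spin_lock_irqsave(&example_lock, flags);
		/* ... only raw-spinlock-safe work in here ... */
		if (coin) {
			/* e.g. place the allocation on the "right" side */
		}
		raw_spin_unlock_irqrestore(&example_lock, flags);
	}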

> ---
> Changes v1->v2:
> - Make the bools const to help compiler elide branch when possible,
>   suggested by Marco.
> 
>  mm/kfence/core.c | 7 +++++--
>  1 file changed, 5 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/kfence/core.c b/mm/kfence/core.c
> index 4e7cd4c8e687..4b5e5a3d3a63 100644
> --- a/mm/kfence/core.c
> +++ b/mm/kfence/core.c
> @@ -360,6 +360,9 @@ static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t g
>  	unsigned long flags;
>  	struct slab *slab;
>  	void *addr;
> +	const bool random_right_allocate = prandom_u32_max(2);
> +	const bool random_fault = CONFIG_KFENCE_STRESS_TEST_FAULTS &&
> +				  !prandom_u32_max(CONFIG_KFENCE_STRESS_TEST_FAULTS);
>  
>  	/* Try to obtain a free object. */
>  	raw_spin_lock_irqsave(&kfence_freelist_lock, flags);
> @@ -404,7 +407,7 @@ static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t g
>  	 * is that the out-of-bounds accesses detected are deterministic for
>  	 * such allocations.
>  	 */
> -	if (prandom_u32_max(2)) {
> +	if (random_right_allocate) {
>  		/* Allocate on the "right" side, re-calculate address. */
>  		meta->addr += PAGE_SIZE - size;
>  		meta->addr = ALIGN_DOWN(meta->addr, cache->align);
> @@ -444,7 +447,7 @@ static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t g
>  	if (cache->ctor)
>  		cache->ctor(addr);
>  
> -	if (CONFIG_KFENCE_STRESS_TEST_FAULTS && !prandom_u32_max(CONFIG_KFENCE_STRESS_TEST_FAULTS))
> +	if (random_fault)
>  		kfence_protect(meta->addr); /* Random "faults" by protecting the object. */
>  
>  	atomic_long_inc(&counters[KFENCE_COUNTER_ALLOCATED]);
> -- 
> 2.35.1
> 
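
A note on the v1->v2 change above, as I read it: CONFIG_KFENCE_STRESS_TEST_FAULTS
expands to a plain integer, so when it is 0 the && in the initializer
short-circuits on a constant false and prandom_u32_max() is never
evaluated; with random_fault declared const and initialized to a
constant, the compiler is then free to drop the if (random_fault)
branch entirely, so the no-stress-test configuration should generate
the same code as the old inline check.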

Thread overview: 7+ messages
2022-06-09 12:17 [PATCH] mm/kfence: select random number before taking raw lock Jason A. Donenfeld
2022-06-09 12:27 ` Marco Elver
2022-06-09 12:29   ` Jason A. Donenfeld
2022-06-09 12:33     ` [PATCH v2] " Jason A. Donenfeld
2022-06-09 12:41       ` Marco Elver [this message]
2022-06-09 12:52       ` Petr Mladek
2022-06-09 12:31 ` [PATCH] " Geert Uytterhoeven
