* [PATCH 0/2 v2] Init the hashed pointer from a worker.
@ 2022-07-29 15:47 Sebastian Andrzej Siewior
  2022-07-29 15:47 ` [PATCH 1/2 v2] lib/vsprintf: Remove static_branch_likely() from __ptr_to_hashval() Sebastian Andrzej Siewior
  2022-07-29 15:47 ` [PATCH 2/2 v2] lib/vsprintf: Initialize vsprintf's pointer hash once the random core is ready Sebastian Andrzej Siewior
  0 siblings, 2 replies; 25+ messages in thread
From: Sebastian Andrzej Siewior @ 2022-07-29 15:47 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andy Shevchenko, Jason A . Donenfeld, John Ogness,
	Mike Galbraith, Petr Mladek, Rasmus Villemoes,
	Sergey Senozhatsky, Steven Rostedt, Theodore Ts'o,
	Thomas Gleixner

Hi,

this mini-series is a follow-up to
	https://lore.kernel.org/all/YuOf6qu453dOkR+S@linutronix.de/

v1…v2:
   - Remove the static_branch_likely() as suggested by Petr Mladek.
   - Jason wasn't on board with fiddling in the random core to get the
     job done. Instead a worker is scheduled from an initcall and
     get_random_bytes_wait() is used to get the data once it is
     available.

Sebastian



^ permalink raw reply	[flat|nested] 25+ messages in thread

* [PATCH 1/2 v2] lib/vsprintf: Remove static_branch_likely() from __ptr_to_hashval().
  2022-07-29 15:47 [PATCH 0/2 v2] Init the hashed pointer from a worker Sebastian Andrzej Siewior
@ 2022-07-29 15:47 ` Sebastian Andrzej Siewior
  2022-08-01 12:11   ` Jason A. Donenfeld
  2022-08-01 12:13   ` Jason A. Donenfeld
  2022-07-29 15:47 ` [PATCH 2/2 v2] lib/vsprintf: Initialize vsprintf's pointer hash once the random core is ready Sebastian Andrzej Siewior
  1 sibling, 2 replies; 25+ messages in thread
From: Sebastian Andrzej Siewior @ 2022-07-29 15:47 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andy Shevchenko, Jason A . Donenfeld, John Ogness,
	Mike Galbraith, Petr Mladek, Rasmus Villemoes,
	Sergey Senozhatsky, Steven Rostedt, Theodore Ts'o,
	Thomas Gleixner, Sebastian Andrzej Siewior

Using static_branch_likely() to signal that ptr_key has been filled is a
bit much given that it is not a fast path.

Replace the static_branch_likely() with a bool for the condition and a
memory barrier for ptr_key.

Suggested-by: Petr Mladek <pmladek@suse.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 lib/vsprintf.c |   19 ++++++++-----------
 1 file changed, 8 insertions(+), 11 deletions(-)

--- a/lib/vsprintf.c
+++ b/lib/vsprintf.c
@@ -750,12 +750,7 @@ static int __init debug_boot_weak_hash_e
 }
 early_param("debug_boot_weak_hash", debug_boot_weak_hash_enable);
 
-static DEFINE_STATIC_KEY_FALSE(filled_random_ptr_key);
-
-static void enable_ptr_key_workfn(struct work_struct *work)
-{
-	static_branch_enable(&filled_random_ptr_key);
-}
+static bool filled_random_ptr_key;
 
 /* Maps a pointer to a 32 bit unique identifier. */
 static inline int __ptr_to_hashval(const void *ptr, unsigned long *hashval_out)
@@ -763,24 +758,26 @@ static inline int __ptr_to_hashval(const
 	static siphash_key_t ptr_key __read_mostly;
 	unsigned long hashval;
 
-	if (!static_branch_likely(&filled_random_ptr_key)) {
+	if (!READ_ONCE(filled_random_ptr_key)) {
 		static bool filled = false;
 		static DEFINE_SPINLOCK(filling);
-		static DECLARE_WORK(enable_ptr_key_work, enable_ptr_key_workfn);
 		unsigned long flags;
 
-		if (!system_unbound_wq || !rng_is_initialized() ||
+		if (!rng_is_initialized() ||
 		    !spin_trylock_irqsave(&filling, flags))
 			return -EAGAIN;
 
 		if (!filled) {
 			get_random_bytes(&ptr_key, sizeof(ptr_key));
-			queue_work(system_unbound_wq, &enable_ptr_key_work);
+			/* Pairs with smp_rmb() before reading ptr_key. */
+			smp_wmb();
+			WRITE_ONCE(filled_random_ptr_key, true);
 			filled = true;
 		}
 		spin_unlock_irqrestore(&filling, flags);
 	}
-
+	/* Pairs with smp_wmb() after writing ptr_key. */
+	smp_rmb();
 
 #ifdef CONFIG_64BIT
 	hashval = (unsigned long)siphash_1u64((u64)ptr, &ptr_key);


* [PATCH 2/2 v2] lib/vsprintf: Initialize vsprintf's pointer hash once the random core is ready.
  2022-07-29 15:47 [PATCH 0/2 v2] Init the hashed pointer from a worker Sebastian Andrzej Siewior
  2022-07-29 15:47 ` [PATCH 1/2 v2] lib/vsprintf: Remove static_branch_likely() from __ptr_to_hashval() Sebastian Andrzej Siewior
@ 2022-07-29 15:47 ` Sebastian Andrzej Siewior
  2022-07-29 23:29   ` Jason A. Donenfeld
  1 sibling, 1 reply; 25+ messages in thread
From: Sebastian Andrzej Siewior @ 2022-07-29 15:47 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andy Shevchenko, Jason A . Donenfeld, John Ogness,
	Mike Galbraith, Petr Mladek, Rasmus Villemoes,
	Sergey Senozhatsky, Steven Rostedt, Theodore Ts'o,
	Thomas Gleixner, Sebastian Andrzej Siewior

The printk code invokes vsnprintf() in order to compute the complete
string before adding it into its buffer. This happens in an IRQ-off
region, which leads to a warning on PREEMPT_RT in the random code if the
format string contains a %p for pointer printing. This happens because
the random core acquires locks which become sleeping locks on PREEMPT_RT
and which must not be acquired with interrupts or preemption disabled.
By default the pointers are hashed, which requires a random value on the
first invocation (either by printk or whichever user comes first).

One could argue that there is no need for printk to disable interrupts
during the vsprintf() invocation, which would fix the just mentioned
problem. However, printk itself can be invoked in a context with
disabled interrupts, which would lead to the very same problem.

Move the initialization of ptr_key into a worker and schedule it from
subsys_initcall(). This happens early, but after the workqueue subsystem
is ready. Use get_random_bytes_wait() to retrieve the random value; it
will block until random data is available.

Reported-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 lib/vsprintf.c |   44 +++++++++++++++++++++++++-------------------
 1 file changed, 25 insertions(+), 19 deletions(-)

--- a/lib/vsprintf.c
+++ b/lib/vsprintf.c
@@ -751,31 +751,37 @@ static int __init debug_boot_weak_hash_e
 early_param("debug_boot_weak_hash", debug_boot_weak_hash_enable);
 
 static bool filled_random_ptr_key;
+static siphash_key_t ptr_key __read_mostly;
+
+static void fill_ptr_key_workfn(struct work_struct *work)
+{
+	int ret;
+
+	ret = get_random_bytes_wait(&ptr_key, sizeof(ptr_key));
+	if (WARN_ON(ret < 0))
+		return;
+	/* Pairs with smp_rmb() before reading ptr_key. */
+	smp_wmb();
+	WRITE_ONCE(filled_random_ptr_key, true);
+}
+
+static int vsprintf_init_hashval(void)
+{
+	static DECLARE_WORK(fill_ptr_key_work, fill_ptr_key_workfn);
+
+	queue_work(system_unbound_wq, &fill_ptr_key_work);
+	return 0;
+}
+subsys_initcall(vsprintf_init_hashval)
 
 /* Maps a pointer to a 32 bit unique identifier. */
 static inline int __ptr_to_hashval(const void *ptr, unsigned long *hashval_out)
 {
-	static siphash_key_t ptr_key __read_mostly;
 	unsigned long hashval;
 
-	if (!READ_ONCE(filled_random_ptr_key)) {
-		static bool filled = false;
-		static DEFINE_SPINLOCK(filling);
-		unsigned long flags;
-
-		if (!rng_is_initialized() ||
-		    !spin_trylock_irqsave(&filling, flags))
-			return -EAGAIN;
-
-		if (!filled) {
-			get_random_bytes(&ptr_key, sizeof(ptr_key));
-			/* Pairs with smp_rmb() before reading ptr_key. */
-			smp_wmb();
-			WRITE_ONCE(filled_random_ptr_key, true);
-			filled = true;
-		}
-		spin_unlock_irqrestore(&filling, flags);
-	}
+	if (!READ_ONCE(filled_random_ptr_key))
+		return -EBUSY;
+
 	/* Pairs with smp_wmb() after writing ptr_key. */
 	smp_rmb();
 


* Re: [PATCH 2/2 v2] lib/vsprintf: Initialize vsprintf's pointer hash once the random core is ready.
  2022-07-29 15:47 ` [PATCH 2/2 v2] lib/vsprintf: Initialize vsprintf's pointer hash once the random core is ready Sebastian Andrzej Siewior
@ 2022-07-29 23:29   ` Jason A. Donenfeld
  2022-08-01  7:32     ` Sebastian Andrzej Siewior
  0 siblings, 1 reply; 25+ messages in thread
From: Jason A. Donenfeld @ 2022-07-29 23:29 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: linux-kernel, Andy Shevchenko, John Ogness, Mike Galbraith,
	Petr Mladek, Rasmus Villemoes, Sergey Senozhatsky,
	Steven Rostedt, Theodore Ts'o, Thomas Gleixner

On Fri, Jul 29, 2022 at 05:47:16PM +0200, Sebastian Andrzej Siewior wrote:
> +static void fill_ptr_key_workfn(struct work_struct *work)
> +{
> +	int ret;
> +
> +	ret = get_random_bytes_wait(&ptr_key, sizeof(ptr_key));

> +static int vsprintf_init_hashval(void)
> +{
> +	static DECLARE_WORK(fill_ptr_key_work, fill_ptr_key_workfn);
> +
> +	queue_work(system_unbound_wq, &fill_ptr_key_work);
> +	return 0;
> +}
> +subsys_initcall(vsprintf_init_hashval)

I'm unsure how good of an idea this is; it'll wind up setting off the
jitter entropy thing very early in init. It's probably a better idea to
just schedule the worker the first time that the RNG is already
initialized by some other means. Check `in_hardirq()` or something if
you're worried about missing the first message.

Jason


* Re: [PATCH 2/2 v2] lib/vsprintf: Initialize vsprintf's pointer hash once the random core is ready.
  2022-07-29 23:29   ` Jason A. Donenfeld
@ 2022-08-01  7:32     ` Sebastian Andrzej Siewior
  2022-08-01  9:34       ` [PATCH 2/2 v3] " Sebastian Andrzej Siewior
  0 siblings, 1 reply; 25+ messages in thread
From: Sebastian Andrzej Siewior @ 2022-08-01  7:32 UTC (permalink / raw)
  To: Jason A. Donenfeld
  Cc: linux-kernel, Andy Shevchenko, John Ogness, Mike Galbraith,
	Petr Mladek, Rasmus Villemoes, Sergey Senozhatsky,
	Steven Rostedt, Theodore Ts'o, Thomas Gleixner

On 2022-07-30 01:29:12 [+0200], Jason A. Donenfeld wrote:
> On Fri, Jul 29, 2022 at 05:47:16PM +0200, Sebastian Andrzej Siewior wrote:
> > +static void fill_ptr_key_workfn(struct work_struct *work)
> > +{
> > +	int ret;
> > +
> > +	ret = get_random_bytes_wait(&ptr_key, sizeof(ptr_key));
> 
> > +static int vsprintf_init_hashval(void)
> > +{
> > +	static DECLARE_WORK(fill_ptr_key_work, fill_ptr_key_workfn);
> > +
> > +	queue_work(system_unbound_wq, &fill_ptr_key_work);
> > +	return 0;
> > +}
> > +subsys_initcall(vsprintf_init_hashval)
> 
> I'm unsure how good of an idea this is; it'll wind up setting off the
> jitter entropy thing very early in init. It's probably a better idea to
> just schedule the worker the first time that the RNG is already
> initialized by some other means. Check `in_hardirq()` or something if
> you're worried about missing the first message.

I'm aware of in_hardirq(), and it is not the only context I have to
worry about. The same is true for interrupts-off, preempt-off,
BH-disabled and RCU read-side sections.
If you don't want me to use get_random_bytes_wait(), would you prefer to
delay it to late_initcall(), or to schedule a worker every other second
until the RNG is ready?

> Jason

Sebastian


* [PATCH 2/2 v3] lib/vsprintf: Initialize vsprintf's pointer hash once the random core is ready.
  2022-08-01  7:32     ` Sebastian Andrzej Siewior
@ 2022-08-01  9:34       ` Sebastian Andrzej Siewior
  2022-08-01 12:36         ` Jason A. Donenfeld
  2022-09-20 15:01         ` Jason A. Donenfeld
  0 siblings, 2 replies; 25+ messages in thread
From: Sebastian Andrzej Siewior @ 2022-08-01  9:34 UTC (permalink / raw)
  To: Jason A. Donenfeld
  Cc: linux-kernel, Andy Shevchenko, John Ogness, Mike Galbraith,
	Petr Mladek, Rasmus Villemoes, Sergey Senozhatsky,
	Steven Rostedt, Theodore Ts'o, Thomas Gleixner

The printk code invokes vsnprintf() in order to compute the complete
string before adding it into its buffer. This happens in an IRQ-off
region, which leads to a warning on PREEMPT_RT in the random code if the
format string contains a %p for pointer printing. This happens because
the random core acquires locks which become sleeping locks on PREEMPT_RT
and which must not be acquired with interrupts or preemption disabled.
By default the pointers are hashed, which requires a random value on the
first invocation (either by printk or whichever user comes first).

One could argue that there is no need for printk to disable interrupts
during the vsprintf() invocation, which would fix the just mentioned
problem. However, printk itself can be invoked in a context with
disabled interrupts, which would lead to the very same problem.

Move the initialization of ptr_key into a worker and schedule it from
subsys_initcall(). This happens early, but after the workqueue subsystem
is ready. Use get_random_bytes() to retrieve the random value if the RNG
core is ready; otherwise re-schedule the worker in two seconds and try
again.

Reported-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
v2…v3:
   - schedule a worker every two seconds if the RNG core is not ready.

 lib/vsprintf.c |   46 +++++++++++++++++++++++++++-------------------
 1 file changed, 27 insertions(+), 19 deletions(-)

--- a/lib/vsprintf.c
+++ b/lib/vsprintf.c
@@ -751,31 +751,39 @@ static int __init debug_boot_weak_hash_e
 early_param("debug_boot_weak_hash", debug_boot_weak_hash_enable);
 
 static bool filled_random_ptr_key;
+static siphash_key_t ptr_key __read_mostly;
+static void fill_ptr_key_workfn(struct work_struct *work);
+static DECLARE_DELAYED_WORK(fill_ptr_key_work, fill_ptr_key_workfn);
+
+static void fill_ptr_key_workfn(struct work_struct *work)
+{
+	if (!rng_is_initialized()) {
+		queue_delayed_work(system_unbound_wq, &fill_ptr_key_work, HZ * 2);
+		return;
+	}
+
+	get_random_bytes(&ptr_key, sizeof(ptr_key));
+
+	/* Pairs with smp_rmb() before reading ptr_key. */
+	smp_wmb();
+	WRITE_ONCE(filled_random_ptr_key, true);
+}
+
+static int __init vsprintf_init_hashval(void)
+{
+	fill_ptr_key_workfn(NULL);
+	return 0;
+}
+subsys_initcall(vsprintf_init_hashval)
 
 /* Maps a pointer to a 32 bit unique identifier. */
 static inline int __ptr_to_hashval(const void *ptr, unsigned long *hashval_out)
 {
-	static siphash_key_t ptr_key __read_mostly;
 	unsigned long hashval;
 
-	if (!READ_ONCE(filled_random_ptr_key)) {
-		static bool filled = false;
-		static DEFINE_SPINLOCK(filling);
-		unsigned long flags;
-
-		if (!rng_is_initialized() ||
-		    !spin_trylock_irqsave(&filling, flags))
-			return -EAGAIN;
-
-		if (!filled) {
-			get_random_bytes(&ptr_key, sizeof(ptr_key));
-			/* Pairs with smp_rmb() before reading ptr_key. */
-			smp_wmb();
-			WRITE_ONCE(filled_random_ptr_key, true);
-			filled = true;
-		}
-		spin_unlock_irqrestore(&filling, flags);
-	}
+	if (!READ_ONCE(filled_random_ptr_key))
+		return -EBUSY;
+
 	/* Pairs with smp_wmb() after writing ptr_key. */
 	smp_rmb();
 


* Re: [PATCH 1/2 v2] lib/vsprintf: Remove static_branch_likely() from __ptr_to_hashval().
  2022-07-29 15:47 ` [PATCH 1/2 v2] lib/vsprintf: Remove static_branch_likely() from __ptr_to_hashval() Sebastian Andrzej Siewior
@ 2022-08-01 12:11   ` Jason A. Donenfeld
  2022-08-01 12:41     ` Sebastian Andrzej Siewior
  2022-08-01 12:13   ` Jason A. Donenfeld
  1 sibling, 1 reply; 25+ messages in thread
From: Jason A. Donenfeld @ 2022-08-01 12:11 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: linux-kernel, Andy Shevchenko, John Ogness, Mike Galbraith,
	Petr Mladek, Rasmus Villemoes, Sergey Senozhatsky,
	Steven Rostedt, Theodore Ts'o, Thomas Gleixner

On Fri, Jul 29, 2022 at 05:47:15PM +0200, Sebastian Andrzej Siewior wrote:
> Using static_branch_likely() to signal that ptr_key has been filled is a
> bit much given that it is not a fast path.
> 
> Replace the static_branch_likely() with a bool for the condition and a
> memory barrier for ptr_key.
> 
> Suggested-by: Petr Mladek <pmladek@suse.com>
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> ---
>  lib/vsprintf.c |   19 ++++++++-----------
>  1 file changed, 8 insertions(+), 11 deletions(-)
> 
> --- a/lib/vsprintf.c
> +++ b/lib/vsprintf.c
> @@ -750,12 +750,7 @@ static int __init debug_boot_weak_hash_e
>  }
>  early_param("debug_boot_weak_hash", debug_boot_weak_hash_enable);
>  
> -static DEFINE_STATIC_KEY_FALSE(filled_random_ptr_key);
> -
> -static void enable_ptr_key_workfn(struct work_struct *work)
> -{
> -	static_branch_enable(&filled_random_ptr_key);
> -}
> +static bool filled_random_ptr_key;

This should be __read_mostly, right? Just like ptr_key.

Jason


* Re: [PATCH 1/2 v2] lib/vsprintf: Remove static_branch_likely() from __ptr_to_hashval().
  2022-07-29 15:47 ` [PATCH 1/2 v2] lib/vsprintf: Remove static_branch_likely() from __ptr_to_hashval() Sebastian Andrzej Siewior
  2022-08-01 12:11   ` Jason A. Donenfeld
@ 2022-08-01 12:13   ` Jason A. Donenfeld
  2022-08-01 12:42     ` Sebastian Andrzej Siewior
  1 sibling, 1 reply; 25+ messages in thread
From: Jason A. Donenfeld @ 2022-08-01 12:13 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: linux-kernel, Andy Shevchenko, John Ogness, Mike Galbraith,
	Petr Mladek, Rasmus Villemoes, Sergey Senozhatsky,
	Steven Rostedt, Theodore Ts'o, Thomas Gleixner

Also,

On Fri, Jul 29, 2022 at 05:47:15PM +0200, Sebastian Andrzej Siewior wrote:
>  		if (!filled) {
>  			get_random_bytes(&ptr_key, sizeof(ptr_key));
> -			queue_work(system_unbound_wq, &enable_ptr_key_work);
> +			/* Pairs with smp_rmb() before reading ptr_key. */
> +			smp_wmb();
> +			WRITE_ONCE(filled_random_ptr_key, true);
>  			filled = true;

Also, should `filled` be changed there?

Jason


* Re: [PATCH 2/2 v3] lib/vsprintf: Initialize vsprintf's pointer hash once the random core is ready.
  2022-08-01  9:34       ` [PATCH 2/2 v3] " Sebastian Andrzej Siewior
@ 2022-08-01 12:36         ` Jason A. Donenfeld
  2022-08-01 12:39           ` [PATCH v4] lib/vsprintf: defer filling siphash key on RT Jason A. Donenfeld
  2022-08-01 12:41           ` [PATCH 2/2 v3] lib/vsprintf: Initialize vsprintf's pointer hash once the random core is ready Sebastian Andrzej Siewior
  2022-09-20 15:01         ` Jason A. Donenfeld
  1 sibling, 2 replies; 25+ messages in thread
From: Jason A. Donenfeld @ 2022-08-01 12:36 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: linux-kernel, Andy Shevchenko, John Ogness, Mike Galbraith,
	Petr Mladek, Rasmus Villemoes, Sergey Senozhatsky,
	Steven Rostedt, Theodore Ts'o, Thomas Gleixner

On Mon, Aug 01, 2022 at 11:34:26AM +0200, Sebastian Andrzej Siewior wrote:
> The printk code invokes vsnprintf() in order to compute the complete
> string before adding it into its buffer. This happens in an IRQ-off
> region, which leads to a warning on PREEMPT_RT in the random code if the
> format string contains a %p for pointer printing. This happens because
> the random core acquires locks which become sleeping locks on PREEMPT_RT
> and which must not be acquired with interrupts or preemption disabled.
> By default the pointers are hashed, which requires a random value on the
> first invocation (either by printk or whichever user comes first).
> 
> One could argue that there is no need for printk to disable interrupts
> during the vsprintf() invocation which would fix the just mentioned
> problem. However printk itself can be invoked in a context with
> disabled interrupts which would lead to the very same problem.
> 
> Move the initialization of ptr_key into a worker and schedule it from
> subsys_initcall(). This happens early but after the workqueue subsystem
> is ready. Use get_random_bytes() to retrieve the random value if the RNG
> core is ready, otherwise schedule a worker in two seconds and try again.
> 
> Reported-by: Mike Galbraith <efault@gmx.de>
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> ---
> v2…v3:
>    - schedule a worker every two seconds if the RNG core is not ready.
> 
>  lib/vsprintf.c |   46 +++++++++++++++++++++++++++-------------------
>  1 file changed, 27 insertions(+), 19 deletions(-)
> 
> --- a/lib/vsprintf.c
> +++ b/lib/vsprintf.c
> @@ -751,31 +751,39 @@ static int __init debug_boot_weak_hash_e
>  early_param("debug_boot_weak_hash", debug_boot_weak_hash_enable);
>  
>  static bool filled_random_ptr_key;
> +static siphash_key_t ptr_key __read_mostly;
> +static void fill_ptr_key_workfn(struct work_struct *work);
> +static DECLARE_DELAYED_WORK(fill_ptr_key_work, fill_ptr_key_workfn);
> +
> +static void fill_ptr_key_workfn(struct work_struct *work)
> +{
> +	if (!rng_is_initialized()) {
> +		queue_delayed_work(system_unbound_wq, &fill_ptr_key_work, HZ  * 2);
> +		return;
> +	}

This seems kind of crazy and over-complex. You only need this on RT,
right? Because of the raw lock issue? So just schedule it for the right
time on RT and not elsewhere.

I'll send a more basic patch.

Jason


* [PATCH v4] lib/vsprintf: defer filling siphash key on RT
  2022-08-01 12:36         ` Jason A. Donenfeld
@ 2022-08-01 12:39           ` Jason A. Donenfeld
  2022-08-01 12:46             ` Sebastian Andrzej Siewior
  2022-08-01 12:41           ` [PATCH 2/2 v3] lib/vsprintf: Initialize vsprintf's pointer hash once the random core is ready Sebastian Andrzej Siewior
  1 sibling, 1 reply; 25+ messages in thread
From: Jason A. Donenfeld @ 2022-08-01 12:39 UTC (permalink / raw)
  To: bigeasy, linux-kernel, Andy Shevchenko, John Ogness,
	Mike Galbraith, Petr Mladek, Rasmus Villemoes,
	Sergey Senozhatsky, Steven Rostedt, Theodore Ts'o,
	Thomas Gleixner
  Cc: Jason A. Donenfeld

On RT, we can't call get_random_bytes() from inside of the raw locks
that callers of vsprintf might take, because get_random_bytes() takes
normal spinlocks. So on those RT systems, defer the siphash key
generation to a worker.

Also, avoid using a static_branch: using static_branch_likely() to
signal that ptr_key has been filled is a bit much given that this is
not a fast path.

Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Reported-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
---
Sebastian - feel free to take this and tweak it as needed. Sending this
mostly as something illustrative of what the "simpler" thing would be
that I had in mind. -Jason

 lib/vsprintf.c | 35 ++++++++++++++++++++---------------
 1 file changed, 20 insertions(+), 15 deletions(-)

diff --git a/lib/vsprintf.c b/lib/vsprintf.c
index 3c1853a9d1c0..5a67f6f65ddc 100644
--- a/lib/vsprintf.c
+++ b/lib/vsprintf.c
@@ -750,37 +750,42 @@ static int __init debug_boot_weak_hash_enable(char *str)
 }
 early_param("debug_boot_weak_hash", debug_boot_weak_hash_enable);
 
-static DEFINE_STATIC_KEY_FALSE(filled_random_ptr_key);
+static bool filled_ptr_key __read_mostly;
+static siphash_key_t ptr_key __read_mostly;
 
-static void enable_ptr_key_workfn(struct work_struct *work)
+static void fill_ptr_key_workfn(struct work_struct *work)
 {
-	static_branch_enable(&filled_random_ptr_key);
+	if (READ_ONCE(filled_ptr_key))
+		return;
+	get_random_bytes(&ptr_key, sizeof(ptr_key));
+	/* Pairs with smp_rmb() before reading ptr_key. */
+	smp_wmb();
+	WRITE_ONCE(filled_ptr_key, true);
 }
 
 /* Maps a pointer to a 32 bit unique identifier. */
 static inline int __ptr_to_hashval(const void *ptr, unsigned long *hashval_out)
 {
-	static siphash_key_t ptr_key __read_mostly;
 	unsigned long hashval;
 
-	if (!static_branch_likely(&filled_random_ptr_key)) {
-		static bool filled = false;
+	if (!READ_ONCE(filled_ptr_key)) {
 		static DEFINE_SPINLOCK(filling);
-		static DECLARE_WORK(enable_ptr_key_work, enable_ptr_key_workfn);
 		unsigned long flags;
 
-		if (!system_unbound_wq || !rng_is_initialized() ||
-		    !spin_trylock_irqsave(&filling, flags))
+		if (IS_ENABLED(CONFIG_PREEMPT_RT) && rng_is_initialized()) {
+			static DECLARE_WORK(fill_ptr_key_work, fill_ptr_key_workfn);
+			queue_work(system_unbound_wq, &fill_ptr_key_work);
 			return -EAGAIN;
-
-		if (!filled) {
-			get_random_bytes(&ptr_key, sizeof(ptr_key));
-			queue_work(system_unbound_wq, &enable_ptr_key_work);
-			filled = true;
 		}
+
+		if (!rng_is_initialized() || !spin_trylock_irqsave(&filling, flags))
+			return -EAGAIN;
+
+		fill_ptr_key_workfn(NULL);
 		spin_unlock_irqrestore(&filling, flags);
 	}
-
+	/* Pairs with smp_wmb() after writing ptr_key. */
+	smp_rmb();
 
 #ifdef CONFIG_64BIT
 	hashval = (unsigned long)siphash_1u64((u64)ptr, &ptr_key);
-- 
2.35.1



* Re: [PATCH 2/2 v3] lib/vsprintf: Initialize vsprintf's pointer hash once the random core is ready.
  2022-08-01 12:36         ` Jason A. Donenfeld
  2022-08-01 12:39           ` [PATCH v4] lib/vsprintf: defer filling siphash key on RT Jason A. Donenfeld
@ 2022-08-01 12:41           ` Sebastian Andrzej Siewior
  1 sibling, 0 replies; 25+ messages in thread
From: Sebastian Andrzej Siewior @ 2022-08-01 12:41 UTC (permalink / raw)
  To: Jason A. Donenfeld
  Cc: linux-kernel, Andy Shevchenko, John Ogness, Mike Galbraith,
	Petr Mladek, Rasmus Villemoes, Sergey Senozhatsky,
	Steven Rostedt, Theodore Ts'o, Thomas Gleixner

On 2022-08-01 14:36:48 [+0200], Jason A. Donenfeld wrote:
> > --- a/lib/vsprintf.c
> > +++ b/lib/vsprintf.c
> > @@ -751,31 +751,39 @@ static int __init debug_boot_weak_hash_e
> >  early_param("debug_boot_weak_hash", debug_boot_weak_hash_enable);
> >  
> >  static bool filled_random_ptr_key;
> > +static siphash_key_t ptr_key __read_mostly;
> > +static void fill_ptr_key_workfn(struct work_struct *work);
> > +static DECLARE_DELAYED_WORK(fill_ptr_key_work, fill_ptr_key_workfn);
> > +
> > +static void fill_ptr_key_workfn(struct work_struct *work)
> > +{
> > +	if (!rng_is_initialized()) {
> > +		queue_delayed_work(system_unbound_wq, &fill_ptr_key_work, HZ  * 2);
> > +		return;
> > +	}
> 
> This seems kind of crazy and over-complex. You only need this on RT,
> right? Because of the raw lock issue? So just schedule it for the right
> time on RT and not elsewhere.

Then we have two code paths, one for RT and one for !RT, and I would
like to avoid that, especially if it ends up as a user-visible change.
And then there is lockdep, which might yell on !RT if you acquire a
spinlock_t from a context which won't work on RT.

> I'll send a more basic patch.

oki.

> Jason

Sebastian


* Re: [PATCH 1/2 v2] lib/vsprintf: Remove static_branch_likely() from __ptr_to_hashval().
  2022-08-01 12:11   ` Jason A. Donenfeld
@ 2022-08-01 12:41     ` Sebastian Andrzej Siewior
  0 siblings, 0 replies; 25+ messages in thread
From: Sebastian Andrzej Siewior @ 2022-08-01 12:41 UTC (permalink / raw)
  To: Jason A. Donenfeld
  Cc: linux-kernel, Andy Shevchenko, John Ogness, Mike Galbraith,
	Petr Mladek, Rasmus Villemoes, Sergey Senozhatsky,
	Steven Rostedt, Theodore Ts'o, Thomas Gleixner

On 2022-08-01 14:11:33 [+0200], Jason A. Donenfeld wrote:
> > --- a/lib/vsprintf.c
> > +++ b/lib/vsprintf.c
> > @@ -750,12 +750,7 @@ static int __init debug_boot_weak_hash_e
> >  }
> >  early_param("debug_boot_weak_hash", debug_boot_weak_hash_enable);
> >  
> > -static DEFINE_STATIC_KEY_FALSE(filled_random_ptr_key);
> > -
> > -static void enable_ptr_key_workfn(struct work_struct *work)
> > -{
> > -	static_branch_enable(&filled_random_ptr_key);
> > -}
> > +static bool filled_random_ptr_key;
> 
> This should be __read_mostly, right? Just like ptr_key.

could be, yes.

> Jason

Sebastian


* Re: [PATCH 1/2 v2] lib/vsprintf: Remove static_branch_likely() from __ptr_to_hashval().
  2022-08-01 12:13   ` Jason A. Donenfeld
@ 2022-08-01 12:42     ` Sebastian Andrzej Siewior
  0 siblings, 0 replies; 25+ messages in thread
From: Sebastian Andrzej Siewior @ 2022-08-01 12:42 UTC (permalink / raw)
  To: Jason A. Donenfeld
  Cc: linux-kernel, Andy Shevchenko, John Ogness, Mike Galbraith,
	Petr Mladek, Rasmus Villemoes, Sergey Senozhatsky,
	Steven Rostedt, Theodore Ts'o, Thomas Gleixner

On 2022-08-01 14:13:36 [+0200], Jason A. Donenfeld wrote:
> Also,
> 
> On Fri, Jul 29, 2022 at 05:47:15PM +0200, Sebastian Andrzej Siewior wrote:
> >  		if (!filled) {
> >  			get_random_bytes(&ptr_key, sizeof(ptr_key));
> > -			queue_work(system_unbound_wq, &enable_ptr_key_work);
> > +			/* Pairs with smp_rmb() before reading ptr_key. */
> > +			smp_wmb();
> > +			WRITE_ONCE(filled_random_ptr_key, true);
> >  			filled = true;
> 
> Also, should `filled` be changed there?

You mean change `filled`, as in mark it __read_mostly?

> Jason

Sebastian


* Re: [PATCH v4] lib/vsprintf: defer filling siphash key on RT
  2022-08-01 12:39           ` [PATCH v4] lib/vsprintf: defer filling siphash key on RT Jason A. Donenfeld
@ 2022-08-01 12:46             ` Sebastian Andrzej Siewior
  2022-08-01 13:36               ` Jason A. Donenfeld
  0 siblings, 1 reply; 25+ messages in thread
From: Sebastian Andrzej Siewior @ 2022-08-01 12:46 UTC (permalink / raw)
  To: Jason A. Donenfeld
  Cc: linux-kernel, Andy Shevchenko, John Ogness, Mike Galbraith,
	Petr Mladek, Rasmus Villemoes, Sergey Senozhatsky,
	Steven Rostedt, Theodore Ts'o, Thomas Gleixner

On 2022-08-01 14:39:46 [+0200], Jason A. Donenfeld wrote:
> On RT, we can't call get_random_bytes() from inside of the raw locks
> that callers of vsprintf might take, because get_random_bytes() takes
> normal spinlocks. So on those RT systems, defer the siphash key
> generation to a worker.
> 
> Also, avoid using a static_branch, as this isn't the fast path.
> Using static_branch_likely() to signal that ptr_key has been filled is a
> bit much given that it is not a fast path.
> 
> Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> Reported-by: Mike Galbraith <efault@gmx.de>
> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
> ---
> Sebastian - feel free to take this and tweak it as needed. Sending this
> mostly as something illustrative of what the "simpler" thing would be
> that I had in mind. -Jason

Can we have the same behaviour regardless of CONFIG_PREEMPT_RT? Here
lockdep _may_ yell on !RT because it is broken for RT.
If we agree that we drop the first %p print here, can we do this in
both cases (regardless of CONFIG_PREEMPT_RT)?

Sebastian


* Re: [PATCH v4] lib/vsprintf: defer filling siphash key on RT
  2022-08-01 12:46             ` Sebastian Andrzej Siewior
@ 2022-08-01 13:36               ` Jason A. Donenfeld
  2022-08-01 13:44                 ` Jason A. Donenfeld
  2022-08-01 13:47                 ` Sebastian Andrzej Siewior
  0 siblings, 2 replies; 25+ messages in thread
From: Jason A. Donenfeld @ 2022-08-01 13:36 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: linux-kernel, Andy Shevchenko, John Ogness, Mike Galbraith,
	Petr Mladek, Rasmus Villemoes, Sergey Senozhatsky,
	Steven Rostedt, Theodore Ts'o, Thomas Gleixner

Hi Sebastian,

On Mon, Aug 01, 2022 at 02:46:35PM +0200, Sebastian Andrzej Siewior wrote:
> On 2022-08-01 14:39:46 [+0200], Jason A. Donenfeld wrote:
> > On RT, we can't call get_random_bytes() from inside of the raw locks
> > that callers of vsprintf might take, because get_random_bytes() takes
> > normal spinlocks. So on those RT systems, defer the siphash key
> > generation to a worker.
> > 
> > Also, avoid using a static_branch, as this isn't the fast path.
> > Using static_branch_likely() to signal that ptr_key has been filled is a
> > bit much given that it is not a fast path.
> > 
> > Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> > Reported-by: Mike Galbraith <efault@gmx.de>
> > Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
> > ---
> > Sebastian - feel free to take this and tweak it as needed. Sending this
> > mostly as something illustrative of what the "simpler" thing would be
> > that I had in mind. -Jason
> 
> Can we have the same behaviour regardless of CONFIG_PREEMPT_RT? Here
> lockdep _may_ yell with !RT because it is broken for RT.
> If we agree that we drop the first %p print here, can we do this on
> both (regardless of CONFIG_PREEMPT_RT)?

"Lockdep may yell" -- but this would be when lockdep is turned on to
catch RT bugs, not to catch non-RT bugs. The actual bug only exists on
RT. This is an RT problem. Stop pretending that this is a real issue
outside of RT. It isn't. This is *only* an RT issue. So why would we
make things worse for an issue that doesn't actually exist on non-RT?

I too generally prefer having only one code path and not two. But the
way this patch is written, the worker function just gets reused with a
straight call on the non-RT case, so it doesn't actually require
duplicating code.

Jason


* Re: [PATCH v4] lib/vsprintf: defer filling siphash key on RT
  2022-08-01 13:36               ` Jason A. Donenfeld
@ 2022-08-01 13:44                 ` Jason A. Donenfeld
  2022-08-01 14:25                   ` Sebastian Andrzej Siewior
  2022-08-01 13:47                 ` Sebastian Andrzej Siewior
  1 sibling, 1 reply; 25+ messages in thread
From: Jason A. Donenfeld @ 2022-08-01 13:44 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: linux-kernel, Andy Shevchenko, John Ogness, Mike Galbraith,
	Petr Mladek, Rasmus Villemoes, Sergey Senozhatsky,
	Steven Rostedt, Theodore Ts'o, Thomas Gleixner

Hey again,

On Mon, Aug 01, 2022 at 03:36:32PM +0200, Jason A. Donenfeld wrote:
> Hi Sebastian,
> 
> On Mon, Aug 01, 2022 at 02:46:35PM +0200, Sebastian Andrzej Siewior wrote:
> > On 2022-08-01 14:39:46 [+0200], Jason A. Donenfeld wrote:
> > > On RT, we can't call get_random_bytes() from inside of the raw locks
> > > that callers of vsprintf might take, because get_random_bytes() takes
> > > normal spinlocks. So on those RT systems, defer the siphash key
> > > generation to a worker.
> > > 
> > > Also, avoid using a static_branch, as this isn't the fast path.
> > > Using static_branch_likely() to signal that ptr_key has been filled is a
> > > bit much given that it is not a fast path.
> > > 
> > > Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> > > Reported-by: Mike Galbraith <efault@gmx.de>
> > > Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
> > > ---
> > > Sebastian - feel free to take this and tweak it as needed. Sending this
> > > mostly as something illustrative of what the "simpler" thing would be
> > > that I had in mind. -Jason
> > 
> > Can we have the same behaviour regardless of CONFIG_PREEMPT_RT? Here
> > lockdep _may_ yell with !RT because it is broken for RT.
> > If we agree that we drop the first %p print here, can we do this on
> > both (regardless of CONFIG_PREEMPT_RT)?
> 
> "Lockdep may yell" -- but this would be when lockdep is turned on to
> catch RT bugs, not to catch non-RT bugs. The actual bug only exists on
> RT. This is an RT problem. Stop pretending that this is a real issue
> outside of RT. It isn't. This is *only* an RT issue. So why would we
> make things worse for an issue that doesn't actually exist on non-RT?
> 
> I too generally prefer having only one code path and not two. But the
> way this patch is written, the worker function just gets reused with a
> straight call on the non-RT case, so it doesn't actually require
> duplicating code.
> 
> Jason

By the way, another option that would be fine with me would be to make
random.c use all raw spinlocks. From a non-RT perspective, that wouldn't
change the codegen at all, so it doesn't make a huge difference to me.
From an RT perspective, it would presumably fix a lot of these issues,
and enable randomness to be available in any context, which is maybe
what we want anyway. From an RT-safety point of view, I suspect doing
this might actually be okay, because the locks are only ever protecting
operations that are fixed duration CPU-bound, like generating a chacha
block or something, not waiting for some I/O.

Thoughts on that?

Jason


* Re: [PATCH v4] lib/vsprintf: defer filling siphash key on RT
  2022-08-01 13:36               ` Jason A. Donenfeld
  2022-08-01 13:44                 ` Jason A. Donenfeld
@ 2022-08-01 13:47                 ` Sebastian Andrzej Siewior
  2022-08-01 13:55                   ` Jason A. Donenfeld
  1 sibling, 1 reply; 25+ messages in thread
From: Sebastian Andrzej Siewior @ 2022-08-01 13:47 UTC (permalink / raw)
  To: Jason A. Donenfeld
  Cc: linux-kernel, Andy Shevchenko, John Ogness, Mike Galbraith,
	Petr Mladek, Rasmus Villemoes, Sergey Senozhatsky,
	Steven Rostedt, Theodore Ts'o, Thomas Gleixner

On 2022-08-01 15:36:32 [+0200], Jason A. Donenfeld wrote:
> Hi Sebastian,
Hi Jason,

> > Can we have the same behaviour regardless of CONFIG_PREEMPT_RT? Here
> > lockdep _may_ yell with !RT because it is broken for RT.
> > If we agree that we drop the first %p print here, can we do this on
> > both (regardless of CONFIG_PREEMPT_RT)?
> 
> "Lockdep may yell" -- but this would be when lockdep is turned on to
> catch RT bugs, not to catch non-RT bugs. The actual bug only exists on
> RT. This is an RT problem. Stop pretending that this is a real issue
> outside of RT. It isn't. This is *only* an RT issue. So why would we
> make things worse for an issue that doesn't actually exist on non-RT?

You do remember the warning that popped up in the random core with
CONFIG_PROVE_RAW_LOCK_NESTING enabled? Where I said it does not affect
!RT, it just points out an RT problem in a !RT config?
If you fix this with one code path for RT and another one for !RT then
you will have this warning _if_ the caller has a raw_spinlock_t
acquired.

> I too generally prefer having only one code path and not two. But the
> way this patch is written, the worker function just gets reused with a
> straight call on the non-RT case, so it doesn't actually require
> duplicating code.

> Jason

Sebastian


* Re: [PATCH v4] lib/vsprintf: defer filling siphash key on RT
  2022-08-01 13:47                 ` Sebastian Andrzej Siewior
@ 2022-08-01 13:55                   ` Jason A. Donenfeld
  2022-08-01 14:12                     ` [PATCH v5] " Jason A. Donenfeld
  0 siblings, 1 reply; 25+ messages in thread
From: Jason A. Donenfeld @ 2022-08-01 13:55 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: linux-kernel, Andy Shevchenko, John Ogness, Mike Galbraith,
	Petr Mladek, Rasmus Villemoes, Sergey Senozhatsky,
	Steven Rostedt, Theodore Ts'o, Thomas Gleixner

Hi Sebastian,

On 8/1/22, Sebastian Andrzej Siewior <bigeasy@linutronix.de> wrote:
> You do remember the warning that popped up in the random core with
> CONFIG_PROVE_RAW_LOCK_NESTING enabled? Where I said it does not affect
> !RT, it just points out an RT problem in a !RT config?
> If you fix this with one code path for RT and another one for !RT then
> you will have this warning _if_ the caller has a raw_spinlock_t
> acquired.

CONFIG_PROVE_RAW_LOCK_NESTING catches RT issues on non-RT. It doesn't
catch non-RT issues. So lockdep isn't actually warning about something
real as far as non-RT is concerned when CONFIG_PROVE_RAW_LOCK_NESTING
is enabled.

So probably this v4 patch should expand that condition to be:

IS_ENABLED(CONFIG_PROVE_RAW_LOCK_NESTING) || IS_ENABLED(CONFIG_PREEMPT_RT)

That's somewhat distasteful, I realize, but it *does* make the code
actually match reality, which is maybe better.

Jason


* [PATCH v5] lib/vsprintf: defer filling siphash key on RT
  2022-08-01 13:55                   ` Jason A. Donenfeld
@ 2022-08-01 14:12                     ` Jason A. Donenfeld
  2022-08-01 14:26                       ` Sebastian Andrzej Siewior
  0 siblings, 1 reply; 25+ messages in thread
From: Jason A. Donenfeld @ 2022-08-01 14:12 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior, linux-kernel, Andy Shevchenko,
	John Ogness, Mike Galbraith, Petr Mladek, Rasmus Villemoes,
	Sergey Senozhatsky, Steven Rostedt, Theodore Ts'o,
	Thomas Gleixner
  Cc: Jason A. Donenfeld

On RT, we can't call get_random_bytes() from inside of the raw locks
that callers of vsprintf might take, because get_random_bytes() takes
normal spinlocks. So on those RT systems, defer the siphash key
generation to a worker.

We also do the deferral for CONFIG_PROVE_RAW_LOCK_NESTING systems,
since that option catches RT issues on non-RT. Branching on
CONFIG_PROVE_RAW_LOCK_NESTING is partly awful, as it basically defeats
the purpose of lockdep. But in this case, lockdep really does generate
incorrect splats.

Also, avoid using a static_branch, as this isn't the fast path.

Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Reported-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
---
Sebastian - again, feel free to take this and modify it as needed. Just
posting ideas... -Jason

 lib/vsprintf.c | 36 +++++++++++++++++++++---------------
 1 file changed, 21 insertions(+), 15 deletions(-)

diff --git a/lib/vsprintf.c b/lib/vsprintf.c
index 3c1853a9d1c0..a2a61915eb6b 100644
--- a/lib/vsprintf.c
+++ b/lib/vsprintf.c
@@ -750,37 +750,43 @@ static int __init debug_boot_weak_hash_enable(char *str)
 }
 early_param("debug_boot_weak_hash", debug_boot_weak_hash_enable);
 
-static DEFINE_STATIC_KEY_FALSE(filled_random_ptr_key);
+static bool filled_ptr_key __read_mostly;
+static siphash_key_t ptr_key __read_mostly;
 
-static void enable_ptr_key_workfn(struct work_struct *work)
+static void fill_ptr_key_workfn(struct work_struct *work)
 {
-	static_branch_enable(&filled_random_ptr_key);
+	if (READ_ONCE(filled_ptr_key))
+		return;
+	get_random_bytes(&ptr_key, sizeof(ptr_key));
+	/* Pairs with smp_rmb() before reading ptr_key. */
+	smp_wmb();
+	WRITE_ONCE(filled_ptr_key, true);
 }
 
 /* Maps a pointer to a 32 bit unique identifier. */
 static inline int __ptr_to_hashval(const void *ptr, unsigned long *hashval_out)
 {
-	static siphash_key_t ptr_key __read_mostly;
 	unsigned long hashval;
 
-	if (!static_branch_likely(&filled_random_ptr_key)) {
-		static bool filled = false;
+	if (!READ_ONCE(filled_ptr_key)) {
 		static DEFINE_SPINLOCK(filling);
-		static DECLARE_WORK(enable_ptr_key_work, enable_ptr_key_workfn);
 		unsigned long flags;
 
-		if (!system_unbound_wq || !rng_is_initialized() ||
-		    !spin_trylock_irqsave(&filling, flags))
+		if ((IS_ENABLED(CONFIG_PREEMPT_RT) || IS_ENABLED(CONFIG_PROVE_RAW_LOCK_NESTING)) &&
+		    rng_is_initialized()) {
+			static DECLARE_WORK(fill_ptr_key_work, fill_ptr_key_workfn);
+			queue_work(system_unbound_wq, &fill_ptr_key_work);
 			return -EAGAIN;
-
-		if (!filled) {
-			get_random_bytes(&ptr_key, sizeof(ptr_key));
-			queue_work(system_unbound_wq, &enable_ptr_key_work);
-			filled = true;
 		}
+
+		if (!rng_is_initialized() || !spin_trylock_irqsave(&filling, flags))
+			return -EAGAIN;
+
+		fill_ptr_key_workfn(NULL);
 		spin_unlock_irqrestore(&filling, flags);
 	}
-
+	/* Pairs with smp_wmb() after writing ptr_key. */
+	smp_rmb();
 
 #ifdef CONFIG_64BIT
 	hashval = (unsigned long)siphash_1u64((u64)ptr, &ptr_key);
-- 
2.35.1



* Re: [PATCH v4] lib/vsprintf: defer filling siphash key on RT
  2022-08-01 13:44                 ` Jason A. Donenfeld
@ 2022-08-01 14:25                   ` Sebastian Andrzej Siewior
  2022-08-01 14:30                     ` Jason A. Donenfeld
  0 siblings, 1 reply; 25+ messages in thread
From: Sebastian Andrzej Siewior @ 2022-08-01 14:25 UTC (permalink / raw)
  To: Jason A. Donenfeld
  Cc: linux-kernel, Andy Shevchenko, John Ogness, Mike Galbraith,
	Petr Mladek, Rasmus Villemoes, Sergey Senozhatsky,
	Steven Rostedt, Theodore Ts'o, Thomas Gleixner

On 2022-08-01 15:44:12 [+0200], Jason A. Donenfeld wrote:
> Hey again,
Hi Jason,

> By the way, another option that would be fine with me would be to make
> random.c use all raw spinlocks. From a non-RT perspective, that wouldn't
> change the codegen at all, so it doesn't make a huge difference to me.
> From an RT perspective, it would presumably fix a lot of these issues,
> and enable randomness to be available in any context, which is maybe
> what we want anyway. From an RT-safety point of view, I suspect doing
> this might actually be okay, because the locks are only ever protecting
> operations that are fixed duration CPU-bound, like generating a chacha
> block or something, not waiting for some I/O.
> 
> Thoughts on that?

That random-core change regarding random numbers broke lockdep, kasan (I
think) and now printk's %p. Each one of them appears to be an exception,
since we don't have _that_ many users asking for random numbers in
atomic context.
Making the locks raw would indeed solve all the issues at once. Last
time I looked into this, it would have involved three locks, and when I
tried to trigger the worst case via "re-seed" the latency was visible
back then. After the rework you did, things looked good.

> Jason

Sebastian


* Re: [PATCH v5] lib/vsprintf: defer filling siphash key on RT
  2022-08-01 14:12                     ` [PATCH v5] " Jason A. Donenfeld
@ 2022-08-01 14:26                       ` Sebastian Andrzej Siewior
  0 siblings, 0 replies; 25+ messages in thread
From: Sebastian Andrzej Siewior @ 2022-08-01 14:26 UTC (permalink / raw)
  To: Jason A. Donenfeld
  Cc: linux-kernel, Andy Shevchenko, John Ogness, Mike Galbraith,
	Petr Mladek, Rasmus Villemoes, Sergey Senozhatsky,
	Steven Rostedt, Theodore Ts'o, Thomas Gleixner

On 2022-08-01 16:12:45 [+0200], Jason A. Donenfeld wrote:
> Sebastian - again, feel free to take this and modify it as needed. Just
> posting ideas... -Jason

The plan is to make CONFIG_PROVE_RAW_LOCK_NESTING part of LOCKDEP once
the issues have been sorted out.

Sebastian


* Re: [PATCH v4] lib/vsprintf: defer filling siphash key on RT
  2022-08-01 14:25                   ` Sebastian Andrzej Siewior
@ 2022-08-01 14:30                     ` Jason A. Donenfeld
  0 siblings, 0 replies; 25+ messages in thread
From: Jason A. Donenfeld @ 2022-08-01 14:30 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: LKML, Andy Shevchenko, John Ogness, Mike Galbraith, Petr Mladek,
	Rasmus Villemoes, Sergey Senozhatsky, Steven Rostedt,
	Theodore Ts'o, Thomas Gleixner

Hi Sebastian,

On Mon, Aug 1, 2022 at 4:25 PM Sebastian Andrzej Siewior
<bigeasy@linutronix.de> wrote:
>
> On 2022-08-01 15:44:12 [+0200], Jason A. Donenfeld wrote:
> > Hey again,
> Hi Jason,
>
> > By the way, another option that would be fine with me would be to make
> > random.c use all raw spinlocks. From a non-RT perspective, that wouldn't
> > change the codegen at all, so it doesn't make a huge difference to me.
> > From an RT perspective, it would presumably fix a lot of these issues,
> > and enable randomness to be available in any context, which is maybe
> > what we want anyway. From an RT-safety point of view, I suspect doing
> > this might actually be okay, because the locks are only ever protecting
> > operations that are fixed duration CPU-bound, like generating a chacha
> > block or something, not waiting for some I/O.
> >
> > Thoughts on that?
>
> That random-core change regarding random numbers broke lockdep, kasan (I
> think) and now printk's %p. Each one of them appears to be an exception,
> since we don't have _that_ many users asking for random numbers in
> atomic context.

Actually, the printk %p case was caused by something different from the
others. This used to be initialized with a clunky notifier callback
mechanism, which I got rid of, replacing it with this direct approach.
It's this direct approach that's now causing problems on RT.

> Making the locks raw would indeed solve all the issues at once. Last
> time I looked into this, it would have involved three locks, and when I
> tried to trigger the worst case via "re-seed" the latency was visible
> back then. After the rework you did, things looked good.

I actually just sent a patch for that to you a second ago:
https://lore.kernel.org/lkml/20220801142530.133007-1-Jason@zx2c4.com/

It's only two locks, and usage seems pretty constrained in a good way.
So okay, if you're on board, let's just do that, and then printk can
stay the same.

Jason


* Re: [PATCH 2/2 v3] lib/vsprintf: Initialize vsprintf's pointer hash once the random core is ready.
  2022-08-01  9:34       ` [PATCH 2/2 v3] " Sebastian Andrzej Siewior
  2022-08-01 12:36         ` Jason A. Donenfeld
@ 2022-09-20 15:01         ` Jason A. Donenfeld
  2022-09-23 10:36           ` Petr Mladek
  1 sibling, 1 reply; 25+ messages in thread
From: Jason A. Donenfeld @ 2022-09-20 15:01 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: linux-kernel, Andy Shevchenko, John Ogness, Mike Galbraith,
	Petr Mladek, Rasmus Villemoes, Sergey Senozhatsky,
	Steven Rostedt, Theodore Ts'o, Thomas Gleixner

Hi,

On Mon, Aug 1, 2022 at 11:34 AM Sebastian Andrzej Siewior
<bigeasy@linutronix.de> wrote:
>
> The printk code invokes vsnprintf() in order to compute the complete
> string before adding it into its buffer. This happens in an IRQ-off
> region, which leads to a warning on PREEMPT_RT in the random code if the
> format string contains a %p for pointer printing. This happens because
> the random core acquires locks which become sleeping locks on PREEMPT_RT
> and must not be acquired with interrupts and/or preemption disabled.
> By default the pointers are hashed, which requires a random value on the
> first invocation (either by printk or another user, whichever comes
> first).
>
> One could argue that there is no need for printk to disable interrupts
> during the vsprintf() invocation, which would fix the just-mentioned
> problem. However, printk itself can be invoked in a context with
> interrupts already disabled, which would lead to the very same problem.
>
> Move the initialization of ptr_key into a worker and schedule it from
> subsys_initcall(). This happens early but after the workqueue subsystem
> is ready. Use get_random_bytes() to retrieve the random value if the RNG
> core is ready, otherwise schedule a worker in two seconds and try again.
>
> Reported-by: Mike Galbraith <efault@gmx.de>
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> ---
> v2…v3:
>    - schedule a worker every two seconds if the RNG core is not ready.
>
>  lib/vsprintf.c |   46 +++++++++++++++++++++++++++-------------------
>  1 file changed, 27 insertions(+), 19 deletions(-)
>
> --- a/lib/vsprintf.c
> +++ b/lib/vsprintf.c
> @@ -751,31 +751,39 @@ static int __init debug_boot_weak_hash_e
>  early_param("debug_boot_weak_hash", debug_boot_weak_hash_enable);
>
>  static bool filled_random_ptr_key;
> +static siphash_key_t ptr_key __read_mostly;
> +static void fill_ptr_key_workfn(struct work_struct *work);
> +static DECLARE_DELAYED_WORK(fill_ptr_key_work, fill_ptr_key_workfn);
> +
> +static void fill_ptr_key_workfn(struct work_struct *work)
> +{
> +       if (!rng_is_initialized()) {
> +               queue_delayed_work(system_unbound_wq, &fill_ptr_key_work, HZ  * 2);
> +               return;
> +       }
> +
> +       get_random_bytes(&ptr_key, sizeof(ptr_key));
> +
> +       /* Pairs with smp_rmb() before reading ptr_key. */
> +       smp_wmb();
> +       WRITE_ONCE(filled_random_ptr_key, true);
> +}
> +
> +static int __init vsprintf_init_hashval(void)
> +{
> +       fill_ptr_key_workfn(NULL);
> +       return 0;
> +}
> +subsys_initcall(vsprintf_init_hashval)
>
>  /* Maps a pointer to a 32 bit unique identifier. */
>  static inline int __ptr_to_hashval(const void *ptr, unsigned long *hashval_out)
>  {
> -       static siphash_key_t ptr_key __read_mostly;
>         unsigned long hashval;
>
> -       if (!READ_ONCE(filled_random_ptr_key)) {
> -               static bool filled = false;
> -               static DEFINE_SPINLOCK(filling);
> -               unsigned long flags;
> -
> -               if (!rng_is_initialized() ||
> -                   !spin_trylock_irqsave(&filling, flags))
> -                       return -EAGAIN;
> -
> -               if (!filled) {
> -                       get_random_bytes(&ptr_key, sizeof(ptr_key));
> -                       /* Pairs with smp_rmb() before reading ptr_key. */
> -                       smp_wmb();
> -                       WRITE_ONCE(filled_random_ptr_key, true);
> -                       filled = true;
> -               }
> -               spin_unlock_irqrestore(&filling, flags);
> -       }
> +       if (!READ_ONCE(filled_random_ptr_key))
> +               return -EBUSY;
> +
>         /* Pairs with smp_wmb() after writing ptr_key. */
>         smp_rmb();
>

As we discussed at Plumbers, it seems like this is the least-awful way
forward. If we wind up with another case sufficiently similar to this,
I'll add back a notifier to random.c. But while there's only this one
special case, the ugly timer thing will have to do.

So Petr - feel free to queue this up this v3, with my objection now removed.

Jason


* Re: [PATCH 2/2 v3] lib/vsprintf: Initialize vsprintf's pointer hash once the random core is ready.
  2022-09-20 15:01         ` Jason A. Donenfeld
@ 2022-09-23 10:36           ` Petr Mladek
  2022-09-23 15:28             ` Sebastian Andrzej Siewior
  0 siblings, 1 reply; 25+ messages in thread
From: Petr Mladek @ 2022-09-23 10:36 UTC (permalink / raw)
  To: Jason A. Donenfeld, Sebastian Andrzej Siewior
  Cc: linux-kernel, Andy Shevchenko, John Ogness, Mike Galbraith,
	Rasmus Villemoes, Sergey Senozhatsky, Steven Rostedt,
	Theodore Ts'o, Thomas Gleixner

On Tue 2022-09-20 17:01:33, Jason A. Donenfeld wrote:
> Hi,
> 
> On Mon, Aug 1, 2022 at 11:34 AM Sebastian Andrzej Siewior
> <bigeasy@linutronix.de> wrote:
> >
> > The printk code invokes vsnprintf() in order to compute the complete
> > string before adding it into its buffer. This happens in an IRQ-off
> > region, which leads to a warning on PREEMPT_RT in the random code if the
> > format string contains a %p for pointer printing. This happens because
> > the random core acquires locks which become sleeping locks on PREEMPT_RT
> > and must not be acquired with interrupts and/or preemption disabled.
> > By default the pointers are hashed, which requires a random value on the
> > first invocation (either by printk or another user, whichever comes
> > first).
> >
> > One could argue that there is no need for printk to disable interrupts
> > during the vsprintf() invocation, which would fix the just-mentioned
> > problem. However, printk itself can be invoked in a context with
> > interrupts already disabled, which would lead to the very same problem.
> >
> > Move the initialization of ptr_key into a worker and schedule it from
> > subsys_initcall(). This happens early but after the workqueue subsystem
> > is ready. Use get_random_bytes() to retrieve the random value if the RNG
> > core is ready, otherwise schedule a worker in two seconds and try again.
> >
> > Reported-by: Mike Galbraith <efault@gmx.de>
> > Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> > ---
> > v2…v3:
> >    - schedule a worker every two seconds if the RNG core is not ready.
> >
> As we discussed at Plumbers, it seems like this is the least-awful way
> forward. If we wind up with another case sufficiently similar to this,
> I'll add back a notifier to random.c. But while there's only this one
> special case, the ugly timer thing will have to do.
> 
> So Petr - feel free to queue this up this v3, with my objection now removed.

v3 is still using two patches and there was some discussion about
adding __read_mostly.

Sebastian, could you please re-send a cleaned-up patch(set)? It would
also be good to get an Acked-by from Jason added there.

Thanks in advance.

Best Regards,
Petr


* Re: [PATCH 2/2 v3] lib/vsprintf: Initialize vsprintf's pointer hash once the random core is ready.
  2022-09-23 10:36           ` Petr Mladek
@ 2022-09-23 15:28             ` Sebastian Andrzej Siewior
  0 siblings, 0 replies; 25+ messages in thread
From: Sebastian Andrzej Siewior @ 2022-09-23 15:28 UTC (permalink / raw)
  To: Petr Mladek
  Cc: Jason A. Donenfeld, linux-kernel, Andy Shevchenko, John Ogness,
	Mike Galbraith, Rasmus Villemoes, Sergey Senozhatsky,
	Steven Rostedt, Theodore Ts'o, Thomas Gleixner

On 2022-09-23 12:36:58 [+0200], Petr Mladek wrote:
> v3 is still using two patches and there was some discussion about
> adding __read_mostly.
> 
> Sebastian, could you please re-send a cleaned-up patch(set)? It would
> also be good to get an Acked-by from Jason added there.
> 
> Thanks in advance.

I wanted to have done that already but I'm backlogged. I hope to get it
done early next week.

> Best Regards,
> Petr

Sebastian


end of thread, other threads:[~2022-09-23 15:29 UTC | newest]

Thread overview: 25+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-07-29 15:47 [PATCH 0/2 v2] Init the hashed pointer from a worker Sebastian Andrzej Siewior
2022-07-29 15:47 ` [PATCH 1/2 v2] lib/vsprintf: Remove static_branch_likely() from __ptr_to_hashval() Sebastian Andrzej Siewior
2022-08-01 12:11   ` Jason A. Donenfeld
2022-08-01 12:41     ` Sebastian Andrzej Siewior
2022-08-01 12:13   ` Jason A. Donenfeld
2022-08-01 12:42     ` Sebastian Andrzej Siewior
2022-07-29 15:47 ` [PATCH 2/2 v2] lib/vsprintf: Initialize vsprintf's pointer hash once the random core is ready Sebastian Andrzej Siewior
2022-07-29 23:29   ` Jason A. Donenfeld
2022-08-01  7:32     ` Sebastian Andrzej Siewior
2022-08-01  9:34       ` [PATCH 2/2 v3] " Sebastian Andrzej Siewior
2022-08-01 12:36         ` Jason A. Donenfeld
2022-08-01 12:39           ` [PATCH v4] lib/vsprintf: defer filling siphash key on RT Jason A. Donenfeld
2022-08-01 12:46             ` Sebastian Andrzej Siewior
2022-08-01 13:36               ` Jason A. Donenfeld
2022-08-01 13:44                 ` Jason A. Donenfeld
2022-08-01 14:25                   ` Sebastian Andrzej Siewior
2022-08-01 14:30                     ` Jason A. Donenfeld
2022-08-01 13:47                 ` Sebastian Andrzej Siewior
2022-08-01 13:55                   ` Jason A. Donenfeld
2022-08-01 14:12                     ` [PATCH v5] " Jason A. Donenfeld
2022-08-01 14:26                       ` Sebastian Andrzej Siewior
2022-08-01 12:41           ` [PATCH 2/2 v3] lib/vsprintf: Initialize vsprintf's pointer hash once the random core is ready Sebastian Andrzej Siewior
2022-09-20 15:01         ` Jason A. Donenfeld
2022-09-23 10:36           ` Petr Mladek
2022-09-23 15:28             ` Sebastian Andrzej Siewior
