* [PATCH 1/2] random: schedule jitter credit for next jiffy, not in two jiffies
@ 2022-09-30 23:10 Jason A. Donenfeld
  2022-09-30 23:10 ` [PATCH 2/2] random: spread out jitter callback to different CPUs Jason A. Donenfeld
  0 siblings, 1 reply; 17+ messages in thread
From: Jason A. Donenfeld @ 2022-09-30 23:10 UTC (permalink / raw)
  To: linux-kernel, linux-crypto
  Cc: Jason A. Donenfeld, Dominik Brodowski, Sebastian Andrzej Siewior,
	Sultan Alsawaf

Counterintuitively, mod_timer(..., jiffies + 1) will cause the timer to
fire not in the next jiffy, but in two jiffies. The way to cause
the timer to fire in the next jiffy is with mod_timer(..., jiffies).
Doing so then lets us bump the MAX_SAMPLES_PER_BIT upper bound back up again.
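
For illustration, a rough sketch of the semantics at play (the timer
wheel rounds the expiry up by one granule so that a timer never fires
early; with the current jiffy J already partially elapsed):

	mod_timer(&timer, jiffies);     /* bucketed for J+1: fires on the next tick */
	mod_timer(&timer, jiffies + 1); /* bucketed for J+2: fires in two ticks */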

Fixes: 50ee7529ec45 ("random: try to actively add entropy rather than passively wait for it")
Cc: Dominik Brodowski <linux@dominikbrodowski.net>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Sultan Alsawaf <sultan@kerneltoast.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
---
 drivers/char/random.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 64ee16ffb8b7..fdf15f5c87dd 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1205,7 +1205,7 @@ static void __cold entropy_timer(struct timer_list *timer)
  */
 static void __cold try_to_generate_entropy(void)
 {
-	enum { NUM_TRIAL_SAMPLES = 8192, MAX_SAMPLES_PER_BIT = HZ / 30 };
+	enum { NUM_TRIAL_SAMPLES = 8192, MAX_SAMPLES_PER_BIT = HZ / 15 };
 	struct entropy_timer_state stack;
 	unsigned int i, num_different = 0;
 	unsigned long last = random_get_entropy();
@@ -1224,7 +1224,7 @@ static void __cold try_to_generate_entropy(void)
 	timer_setup_on_stack(&stack.timer, entropy_timer, 0);
 	while (!crng_ready() && !signal_pending(current)) {
 		if (!timer_pending(&stack.timer))
-			mod_timer(&stack.timer, jiffies + 1);
+			mod_timer(&stack.timer, jiffies);
 		mix_pool_bytes(&stack.entropy, sizeof(stack.entropy));
 		schedule();
 		stack.entropy = random_get_entropy();
-- 
2.37.3



* [PATCH 2/2] random: spread out jitter callback to different CPUs
  2022-09-30 23:10 [PATCH 1/2] random: schedule jitter credit for next jiffy, not in two jiffies Jason A. Donenfeld
@ 2022-09-30 23:10 ` Jason A. Donenfeld
  2022-10-01  9:21   ` Jason A. Donenfeld
  0 siblings, 1 reply; 17+ messages in thread
From: Jason A. Donenfeld @ 2022-09-30 23:10 UTC (permalink / raw)
  To: linux-kernel, linux-crypto
  Cc: Jason A. Donenfeld, Dominik Brodowski, Sebastian Andrzej Siewior,
	Sultan Alsawaf

Rather than merely hoping that the callback gets called on another CPU,
arrange for that to actually happen, by round robining which CPU the
timer fires on. This way, on multiprocessor machines, we exacerbate
jitter by touching the same memory from multiple different cores.

Cc: Dominik Brodowski <linux@dominikbrodowski.net>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Sultan Alsawaf <sultan@kerneltoast.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
---
 drivers/char/random.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index fdf15f5c87dd..74627b53179a 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1209,6 +1209,7 @@ static void __cold try_to_generate_entropy(void)
 	struct entropy_timer_state stack;
 	unsigned int i, num_different = 0;
 	unsigned long last = random_get_entropy();
+	int cpu = -1;
 
 	for (i = 0; i < NUM_TRIAL_SAMPLES - 1; ++i) {
 		stack.entropy = random_get_entropy();
@@ -1223,8 +1224,17 @@ static void __cold try_to_generate_entropy(void)
 	stack.samples = 0;
 	timer_setup_on_stack(&stack.timer, entropy_timer, 0);
 	while (!crng_ready() && !signal_pending(current)) {
-		if (!timer_pending(&stack.timer))
-			mod_timer(&stack.timer, jiffies);
+		if (!timer_pending(&stack.timer)) {
+			preempt_disable();
+			do {
+				cpu = cpumask_next(cpu, cpu_online_mask);
+				if (cpu == nr_cpumask_bits)
+					cpu = cpumask_first(cpu_online_mask);
+			} while (cpu == smp_processor_id() && cpumask_weight(cpu_online_mask) > 1);
+			stack.timer.expires = jiffies;
+			add_timer_on(&stack.timer, cpu);
+			preempt_enable();
+		}
 		mix_pool_bytes(&stack.entropy, sizeof(stack.entropy));
 		schedule();
 		stack.entropy = random_get_entropy();
-- 
2.37.3



* Re: [PATCH 2/2] random: spread out jitter callback to different CPUs
  2022-09-30 23:10 ` [PATCH 2/2] random: spread out jitter callback to different CPUs Jason A. Donenfeld
@ 2022-10-01  9:21   ` Jason A. Donenfeld
  2022-10-05 17:26     ` Sebastian Andrzej Siewior
  0 siblings, 1 reply; 17+ messages in thread
From: Jason A. Donenfeld @ 2022-10-01  9:21 UTC (permalink / raw)
  To: linux-kernel, linux-crypto
  Cc: Dominik Brodowski, Sebastian Andrzej Siewior, Sultan Alsawaf

On Sat, Oct 01, 2022 at 01:10:50AM +0200, Jason A. Donenfeld wrote:
> Rather than merely hoping that the callback gets called on another CPU,
> arrange for that to actually happen, by round robining which CPU the
> timer fires on. This way, on multiprocessor machines, we exacerbate
> jitter by touching the same memory from multiple different cores.
> 
> Cc: Dominik Brodowski <linux@dominikbrodowski.net>
> Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> Cc: Sultan Alsawaf <sultan@kerneltoast.com>
> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
> ---
>  drivers/char/random.c | 14 ++++++++++++--
>  1 file changed, 12 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/char/random.c b/drivers/char/random.c
> index fdf15f5c87dd..74627b53179a 100644
> --- a/drivers/char/random.c
> +++ b/drivers/char/random.c
> @@ -1209,6 +1209,7 @@ static void __cold try_to_generate_entropy(void)
>  	struct entropy_timer_state stack;
>  	unsigned int i, num_different = 0;
>  	unsigned long last = random_get_entropy();
> +	int cpu = -1;
>  
>  	for (i = 0; i < NUM_TRIAL_SAMPLES - 1; ++i) {
>  		stack.entropy = random_get_entropy();
> @@ -1223,8 +1224,17 @@ static void __cold try_to_generate_entropy(void)
>  	stack.samples = 0;
>  	timer_setup_on_stack(&stack.timer, entropy_timer, 0);
>  	while (!crng_ready() && !signal_pending(current)) {
> -		if (!timer_pending(&stack.timer))
> -			mod_timer(&stack.timer, jiffies);
> +		if (!timer_pending(&stack.timer)) {
> +			preempt_disable();
> +			do {
> +				cpu = cpumask_next(cpu, cpu_online_mask);
> +				if (cpu == nr_cpumask_bits)
> +					cpu = cpumask_first(cpu_online_mask);
> +			} while (cpu == smp_processor_id() && cpumask_weight(cpu_online_mask) > 1);
> +			stack.timer.expires = jiffies;
> +			add_timer_on(&stack.timer, cpu);

Sultan points out that timer_pending() returns false before the callback
has actually run, while add_timer_on() adds directly to the timer base,
which means del_timer_sync() might fail to notice a still-running
callback, which means a use-after-free. This seems like a somewhat hard
problem to solve. So I think I'll just drop this patch 2/2 here until a
better idea comes around.
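
Roughly, the problematic interleaving looks like this (hypothetical
timeline; CPU numbers made up for illustration):

    CPU 1 (entropy loop)                CPU 2 (timer base)
    add_timer_on(&stack.timer, 2)
                                        tick: timer dequeued, pending = false
    timer_pending() == false
    add_timer_on(&stack.timer, 3)       (callback not yet entered)
    del_timer_sync(&stack.timer)        (only syncs against CPU 3's base)
    stack goes out of scope
                                        callback runs on freed stack -> UaF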

Jason


* Re: [PATCH 2/2] random: spread out jitter callback to different CPUs
  2022-10-01  9:21   ` Jason A. Donenfeld
@ 2022-10-05 17:26     ` Sebastian Andrzej Siewior
  2022-10-05 21:08       ` Jason A. Donenfeld
  0 siblings, 1 reply; 17+ messages in thread
From: Sebastian Andrzej Siewior @ 2022-10-05 17:26 UTC (permalink / raw)
  To: Jason A. Donenfeld
  Cc: linux-kernel, linux-crypto, Dominik Brodowski, Sultan Alsawaf

On 2022-10-01 11:21:30 [+0200], Jason A. Donenfeld wrote:
> Sultan points out that timer_pending() returns false before the callback
> has actually run, while add_timer_on() adds directly to the timer base,
> which means del_timer_sync() might fail to notice a still-running
> callback, which means a use-after-free. This seems like a somewhat hard
> problem to solve. So I think I'll just drop this patch 2/2 here until a
> better idea comes around.

I don't know what you exactly intend but this:

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 79d7d4e4e5828..18d785f5969e5 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1195,6 +1195,7 @@ static void __cold try_to_generate_entropy(void)
 	struct entropy_timer_state stack;
 	unsigned int i, num_different = 0;
 	unsigned long last = random_get_entropy();
+	unsigned int cpu = raw_smp_processor_id();
 
 	for (i = 0; i < NUM_TRIAL_SAMPLES - 1; ++i) {
 		stack.entropy = random_get_entropy();
@@ -1207,10 +1208,17 @@ static void __cold try_to_generate_entropy(void)
 		return;
 
 	stack.samples = 0;
-	timer_setup_on_stack(&stack.timer, entropy_timer, 0);
+	timer_setup_on_stack(&stack.timer, entropy_timer, TIMER_PINNED);
 	while (!crng_ready() && !signal_pending(current)) {
-		if (!timer_pending(&stack.timer))
-			mod_timer(&stack.timer, jiffies + 1);
+
+		if (!timer_pending(&stack.timer)) {
+			cpu = cpumask_next(cpu, cpu_online_mask);
+			if (cpu == nr_cpumask_bits)
+				cpu = cpumask_first(cpu_online_mask);
+
+			stack.timer.expires = jiffies;
+			add_timer_on(&stack.timer, cpu);
+		}
 		mix_pool_bytes(&stack.entropy, sizeof(stack.entropy));
 		schedule();
 		stack.entropy = random_get_entropy();

will enqueue the timer whenever none is pending. That is on the first
invocation _or_ as soon as the callback is about to be invoked. So
basically, right as the timer is about to fire, you enqueue it again.
With "expires = jiffies" the timer will be invoked on every tick, while
"jiffies + 1" will invoke it on every other tick.

You will start the timer on "this CPU + 1" and iterate through all CPUs
in a round-robin fashion. It seems this is important. I don't think you
need to ensure that the CPU running try_to_generate_entropy() never
fires the timer, since that will rarely happen anyway (due to the
round-robin). This is (of course) different between a busy system and an
idle one.

That del_timer_sync() at the end is what you want. If the timer is
pending (as in enqueued in the timer wheel) then it will be removed
before it is invoked. If the timer's callback is invoked then it will
spin until the callback is done.
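
A simplified sketch of that loop (paraphrased from kernel/time/timer.c,
not verbatim):

	int del_timer_sync(struct timer_list *timer)
	{
		int ret;

		do {
			/* returns -1 while the callback is still running */
			ret = try_to_del_timer_sync(timer);
			if (ret < 0)
				cpu_relax();
		} while (ret < 0);

		return ret;
	}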

I *think* you are aware that schedule() here is kind of pointless:
if there is not much going on (this is the only task in the system),
then you leave schedule() right away and continue. Assuming
random_get_entropy() returns the current clock (which is rdtsc on x86,
or random_get_entropy_fallback() elsewhere), you then get little noise.
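
For reference, the x86 implementation is essentially (paraphrased):

	static inline unsigned long random_get_entropy(void)
	{
		if (!IS_ENABLED(CONFIG_X86_TSC) &&
		    !cpu_feature_enabled(X86_FEATURE_TSC))
			return random_get_entropy_fallback();
		return rdtsc();
	}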

With some additional trace prints:

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 79d7d4e4e5828..802e0d9254611 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1195,6 +1195,8 @@ static void __cold try_to_generate_entropy(void)
 	struct entropy_timer_state stack;
 	unsigned int i, num_different = 0;
 	unsigned long last = random_get_entropy();
+	unsigned int cpu = raw_smp_processor_id();
+	unsigned long v1, v2;
 
 	for (i = 0; i < NUM_TRIAL_SAMPLES - 1; ++i) {
 		stack.entropy = random_get_entropy();
@@ -1207,15 +1209,26 @@ static void __cold try_to_generate_entropy(void)
 		return;
 
 	stack.samples = 0;
-	timer_setup_on_stack(&stack.timer, entropy_timer, 0);
+	timer_setup_on_stack(&stack.timer, entropy_timer, TIMER_PINNED);
+	v1 = v2 = 0;
 	while (!crng_ready() && !signal_pending(current)) {
-		if (!timer_pending(&stack.timer))
-			mod_timer(&stack.timer, jiffies + 1);
+
+		if (!timer_pending(&stack.timer)) {
+			cpu = cpumask_next(cpu, cpu_online_mask);
+			if (cpu == nr_cpumask_bits)
+				cpu = cpumask_first(cpu_online_mask);
+
+			stack.timer.expires = jiffies;
+			add_timer_on(&stack.timer, cpu);
+		}
 		mix_pool_bytes(&stack.entropy, sizeof(stack.entropy));
 		schedule();
-		stack.entropy = random_get_entropy();
+		v1 = random_get_entropy();
+		stack.entropy = v1;
+		trace_printk("%lx | %lx\n", v1, v1 - v2);
+		v2 = v1;
 	}
-
+	tracing_off();
 	del_timer_sync(&stack.timer);
 	destroy_timer_on_stack(&stack.timer);
 	mix_pool_bytes(&stack.entropy, sizeof(stack.entropy));

I get:

|       swapper/0-1       [002] .....     2.570083: try_to_generate_entropy: 275e8a56d | 2e4
|       swapper/0-1       [002] .....     2.570084: try_to_generate_entropy: 275e8a82c | 2bf
|       swapper/0-1       [002] .....     2.570084: try_to_generate_entropy: 275e8ab10 | 2e4
|       swapper/0-1       [002] .....     2.570084: try_to_generate_entropy: 275e8adcf | 2bf
|       swapper/0-1       [002] .....     2.570084: try_to_generate_entropy: 275e8b0b3 | 2e4
|       swapper/0-1       [002] .....     2.570084: try_to_generate_entropy: 275e8b372 | 2bf
|       swapper/0-1       [002] .....     2.570085: try_to_generate_entropy: 275e8b85c | 4ea
|       swapper/0-1       [002] .....     2.570085: try_to_generate_entropy: 275e8bb1b | 2bf
|       swapper/0-1       [002] .....     2.570085: try_to_generate_entropy: 275e8be49 | 32e
|       swapper/0-1       [002] .....     2.570085: try_to_generate_entropy: 275e8c12d | 2e4
|       swapper/0-1       [002] .....     2.570087: try_to_generate_entropy: 275e8de15 | 1ce8
|       swapper/0-1       [002] .....     2.570088: try_to_generate_entropy: 275e8e168 | 353
|       swapper/0-1       [002] .....     2.570088: try_to_generate_entropy: 275e8e471 | 309
|       swapper/0-1       [002] .....     2.570088: try_to_generate_entropy: 275e8e833 | 3c2
|       swapper/0-1       [002] .....     2.570088: try_to_generate_entropy: 275e8edd6 | 5a3

So with sizeof(entropy) = 8 you mix in 8 bytes at a time, but only the
lower bits actually change.
That is maybe where you would say that I don't need to worry, because it
is a very good hash function and the timer accounts for only one bit of
entropy every jiffy.

> Jason

Sebastian


* Re: [PATCH 2/2] random: spread out jitter callback to different CPUs
  2022-10-05 17:26     ` Sebastian Andrzej Siewior
@ 2022-10-05 21:08       ` Jason A. Donenfeld
  2022-10-06  6:46         ` Sebastian Andrzej Siewior
  0 siblings, 1 reply; 17+ messages in thread
From: Jason A. Donenfeld @ 2022-10-05 21:08 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: linux-kernel, linux-crypto, Dominik Brodowski, Sultan Alsawaf

Hi Sebastian,

On Wed, Oct 05, 2022 at 07:26:42PM +0200, Sebastian Andrzej Siewior wrote:
> That del_timer_sync() at the end is what you want. If the timer is
> pending (as in enqueued in the timer wheel) then it will be removed
> before it is invoked. If the timer's callback is invoked then it will
> spin until the callback is done.

del_timer_sync() is not guaranteed to succeed with add_timer_on() being
used in conjunction with timer_pending() though. That's why I've
abandoned this.

Jason


* Re: [PATCH 2/2] random: spread out jitter callback to different CPUs
  2022-10-05 21:08       ` Jason A. Donenfeld
@ 2022-10-06  6:46         ` Sebastian Andrzej Siewior
  2022-10-06 12:26           ` Jason A. Donenfeld
  0 siblings, 1 reply; 17+ messages in thread
From: Sebastian Andrzej Siewior @ 2022-10-06  6:46 UTC (permalink / raw)
  To: Jason A. Donenfeld
  Cc: linux-kernel, linux-crypto, Dominik Brodowski, Sultan Alsawaf

On 2022-10-05 23:08:19 [+0200], Jason A. Donenfeld wrote:
> Hi Sebastian,
Hi Jason,

> On Wed, Oct 05, 2022 at 07:26:42PM +0200, Sebastian Andrzej Siewior wrote:
> > That del_timer_sync() at the end is what you want. If the timer is
> > pending (as in enqueued in the timer wheel) then it will be removed
> > before it is invoked. If the timer's callback is invoked then it will
> > spin until the callback is done.
> 
> del_timer_sync() is not guaranteed to succeed with add_timer_on() being
> used in conjunction with timer_pending() though. That's why I've
> abandoned this.

But why? The timer is added to a timer-base on a different CPU. Should
work.

> Jason

Sebastian


* Re: [PATCH 2/2] random: spread out jitter callback to different CPUs
  2022-10-06  6:46         ` Sebastian Andrzej Siewior
@ 2022-10-06 12:26           ` Jason A. Donenfeld
  2022-10-06 12:41             ` Sebastian Andrzej Siewior
  2022-10-07 14:01             ` Jason A. Donenfeld
  0 siblings, 2 replies; 17+ messages in thread
From: Jason A. Donenfeld @ 2022-10-06 12:26 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: linux-kernel, linux-crypto, Dominik Brodowski, Sultan Alsawaf

On Thu, Oct 06, 2022 at 08:46:27AM +0200, Sebastian Andrzej Siewior wrote:
> On 2022-10-05 23:08:19 [+0200], Jason A. Donenfeld wrote:
> > Hi Sebastian,
> Hi Jason,
> 
> > On Wed, Oct 05, 2022 at 07:26:42PM +0200, Sebastian Andrzej Siewior wrote:
> > > That del_timer_sync() at the end is what you want. If the timer is
> > > pending (as in enqueued in the timer wheel) then it will be removed
> > > before it is invoked. If the timer's callback is invoked then it will
> > > spin until the callback is done.
> > 
> > del_timer_sync() is not guaranteed to succeed with add_timer_on() being
> > used in conjunction with timer_pending() though. That's why I've
> > abandoned this.
> 
> But why? The timer is added to a timer-base on a different CPU. Should
> work.

So it's easier to talk about, I'll number a few lines:

 1 while (conditions) {
 2     if (!timer_pending(&stack.timer))
 3         add_timer_on(&stack.timer, some_next_cpu);
 4 }
 5 del_timer_sync(&stack.timer);


Then, steps to cause UaF:

a) add_timer_on() on line 3 is called from CPU 1 and pends the timer on
   CPU 2.

b) Just before the timer callback runs, not after, timer_pending() is
   made false, so the condition on line 2 holds true again.

c) add_timer_on() on line 3 is called from CPU 1 and pends the timer on
   CPU 3.

d) The conditions on line 1 are made false, and the loop breaks.

e) del_timer_sync() on line 5 is called, and its `base->running_timer !=
   timer` check is false, because of step (c).

f) `stack.timer` gets freed / goes out of scope.

g) The callback scheduled from step (b) runs, and we have a UaF.

That's, anyway, what I understand Sultan to have pointed out to me. In
looking at this closely, though, to write this email, I noticed that
add_timer_on() does set TIMER_MIGRATING, which lock_timer_base() spins
on. So actually, maybe this scenario should be accounted for? Sultan, do
you care to comment here?

Jason


* Re: [PATCH 2/2] random: spread out jitter callback to different CPUs
  2022-10-06 12:26           ` Jason A. Donenfeld
@ 2022-10-06 12:41             ` Sebastian Andrzej Siewior
  2022-10-06 16:39               ` Sultan Alsawaf
  2022-10-07 14:01             ` Jason A. Donenfeld
  1 sibling, 1 reply; 17+ messages in thread
From: Sebastian Andrzej Siewior @ 2022-10-06 12:41 UTC (permalink / raw)
  To: Jason A. Donenfeld
  Cc: linux-kernel, linux-crypto, Dominik Brodowski, Sultan Alsawaf

On 2022-10-06 06:26:04 [-0600], Jason A. Donenfeld wrote:
> e) del_timer_sync() on line 5 is called, and its `base->running_timer !=
>    timer` check is false, because of step (c).

If `base->running_timer != timer` then the timer's callback is not
currently running, so the timer can be removed from its bucket (if it
is pending, i.e., queued to fire in the future).
If `base->running_timer == timer` then the timer's callback is
currently running: del_timer_sync() will loop in cpu_relax() until the
callback has finished, and then try again.
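
For reference, try_to_del_timer_sync() is essentially (simplified from
kernel/time/timer.c):

	int try_to_del_timer_sync(struct timer_list *timer)
	{
		struct timer_base *base;
		unsigned long flags;
		int ret = -1;

		base = lock_timer_base(timer, &flags);
		if (base->running_timer != timer)
			ret = detach_if_pending(timer, base, true);
		raw_spin_unlock_irqrestore(&base->lock, flags);

		/* -1: callback running, 0: timer was idle, 1: timer dequeued */
		return ret;
	}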

> f) `stack.timer` gets freed / goes out of scope.
> 
> g) The callback scheduled from step (b) runs, and we have a UaF.
> 
> That's, anyway, what I understand Sultan to have pointed out to me. In
> looking at this closely, though, to write this email, I noticed that
> add_timer_on() does set TIMER_MIGRATING, which lock_timer_base() spins
> on. So actually, maybe this scenario should be accounted for? Sultan, do
> you care to comment here?

While TIMER_MIGRATING is set, the del_timer_sync() caller will spin
until the migration is over. So it can remove the timer from the right
bucket, and check whether it is active against the right base.
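
A simplified sketch of that spin (paraphrased from lock_timer_base()):

	for (;;) {
		u32 tf = READ_ONCE(timer->flags);

		if (!(tf & TIMER_MIGRATING)) {
			base = get_timer_base(tf);
			raw_spin_lock_irqsave(&base->lock, *flags);
			if (timer->flags == tf)
				return base; /* base did not change under us */
			raw_spin_unlock_irqrestore(&base->lock, *flags);
		}
		cpu_relax();
	}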

> Jason

Sebastian


* Re: [PATCH 2/2] random: spread out jitter callback to different CPUs
  2022-10-06 12:41             ` Sebastian Andrzej Siewior
@ 2022-10-06 16:39               ` Sultan Alsawaf
  2022-10-07  7:29                 ` Sebastian Andrzej Siewior
  0 siblings, 1 reply; 17+ messages in thread
From: Sultan Alsawaf @ 2022-10-06 16:39 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: Jason A. Donenfeld, linux-kernel, linux-crypto, Dominik Brodowski

Hi Sebastian,

On Thu, Oct 06, 2022 at 02:41:11PM +0200, Sebastian Andrzej Siewior wrote:
> On 2022-10-06 06:26:04 [-0600], Jason A. Donenfeld wrote:
> > e) del_timer_sync() on line 5 is called, and its `base->running_timer !=
> >    timer` check is false, because of step (c).
> 
> If `base->running_timer != timer` then the timer ('s callback) is not
> currently active/ running. Therefore it can be removed from the timer
> bucket (in case it is pending in the future).
> If `base->running_timer == timer` then the timer ('s callback) is
> currently active. del_timer_sync() will loop in cpu_relax() until the
> callback finished. And then try again.
> > f) `stack.timer` gets freed / goes out of scope.
> > 
> > g) The callback scheduled from step (b) runs, and we have a UaF.
> > 
> > That's, anyway, what I understand Sultan to have pointed out to me. In
> > looking at this closely, though, to write this email, I noticed that
> > add_timer_on() does set TIMER_MIGRATING, which lock_timer_base() spins
> > on. So actually, maybe this scenario should be accounted for? Sultan, do
> > you care to comment here?
> 
> During TIMER_MIGRATING the del_timer_sync() caller will spin until the
> condition is over. So it can remove the timer from the right bucket and
> check if it is active vs the right bucket.

My concern stems from the design of add_timer_on(). Specifically, add_timer_on()
expects the timer to not already be pending or running.

Because of this, add_timer_on() doesn't check `base->running_timer` and doesn't
wait for the timer to finish running, because it expects the timer to be
completely idle. Giving add_timer_on() a timer which is already running is a
bug, as made clear by the `BUG_ON(timer_pending(timer) || !timer->function);`.

But since a timer is marked as not-pending prior to when it runs, add_timer_on()
can't detect if the timer is actively running; the above BUG_ON() won't be
tripped. So the UaF scenario I foresee is that doing this:
    add_timer_on(timer, 0);
    // timer is actively running on CPU0, timer is no longer pending
    add_timer_on(timer, 1); // changes timer base, won't wait for timer to stop
    del_timer_sync(timer); // only checks CPU1 timer base for the running timer

may result in del_timer_sync() not waiting for the callback started by
the `add_timer_on(timer, 0);` to finish running on CPU0, since
add_timer_on() won't wait for a running callback before changing the
timer base. And since Jason's timer is declared on the stack, his timer
callback function would dereference `stack.timer` after it's been freed.

> Sebastian

Sultan


* Re: [PATCH 2/2] random: spread out jitter callback to different CPUs
  2022-10-06 16:39               ` Sultan Alsawaf
@ 2022-10-07  7:29                 ` Sebastian Andrzej Siewior
  0 siblings, 0 replies; 17+ messages in thread
From: Sebastian Andrzej Siewior @ 2022-10-07  7:29 UTC (permalink / raw)
  To: Sultan Alsawaf
  Cc: Jason A. Donenfeld, linux-kernel, linux-crypto, Dominik Brodowski

On 2022-10-06 09:39:46 [-0700], Sultan Alsawaf wrote:
> Hi Sebastian,
Hi Sultan,

> But since a timer is marked as not-pending prior to when it runs, add_timer_on()
> can't detect if the timer is actively running; the above BUG_ON() won't be
> tripped. So the UaF scenario I forsee is that doing this:
>     add_timer_on(timer, 0);
>     // timer is actively running on CPU0, timer is no longer pending
>     add_timer_on(timer, 1); // changes timer base, won't wait for timer to stop
>     del_timer_sync(timer); // only checks CPU1 timer base for the running timer

/me taking notes.

> Sultan

Sebastian


* Re: [PATCH 2/2] random: spread out jitter callback to different CPUs
  2022-10-06 12:26           ` Jason A. Donenfeld
  2022-10-06 12:41             ` Sebastian Andrzej Siewior
@ 2022-10-07 14:01             ` Jason A. Donenfeld
  2022-10-07 14:55               ` Sebastian Andrzej Siewior
  1 sibling, 1 reply; 17+ messages in thread
From: Jason A. Donenfeld @ 2022-10-07 14:01 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior, tglx
  Cc: linux-kernel, linux-crypto, Dominik Brodowski, Sultan Alsawaf

On Thu, Oct 06, 2022 at 06:26:04AM -0600, Jason A. Donenfeld wrote:
> On Thu, Oct 06, 2022 at 08:46:27AM +0200, Sebastian Andrzej Siewior wrote:
> > On 2022-10-05 23:08:19 [+0200], Jason A. Donenfeld wrote:
> > > Hi Sebastian,
> > Hi Jason,
> > 
> > > On Wed, Oct 05, 2022 at 07:26:42PM +0200, Sebastian Andrzej Siewior wrote:
> > > > That del_timer_sync() at the end is what you want. If the timer is
> > > > pending (as in enqueued in the timer wheel) then it will be removed
> > > > before it is invoked. If the timer's callback is invoked then it will
> > > > spin until the callback is done.
> > > 
> > > del_timer_sync() is not guaranteed to succeed with add_timer_on() being
> > > used in conjunction with timer_pending() though. That's why I've
> > > abandoned this.
> > 
> > But why? The timer is added to a timer-base on a different CPU. Should
> > work.
> 
> So it's easier to talk about, I'll number a few lines:
> 
>  1 while (conditions) {
>  2     if (!timer_pending(&stack.timer))
>  3         add_timer_on(&stack.timer, some_next_cpu);
>  4 }
>  5 del_timer_sync(&stack.timer);
> 
> 
> Then, steps to cause UaF:
> 
> a) add_timer_on() on line 3 is called from CPU 1 and pends the timer on
>    CPU 2.
> 
> b) Just before the timer callback runs, not after, timer_pending() is
>    made false, so the condition on line 2 holds true again.
> 
> c) add_timer_on() on line 3 is called from CPU 1 and pends the timer on
>    CPU 3.
> 
> d) The conditions on line 1 are made false, and the loop breaks.
> 
> e) del_timer_sync() on line 5 is called, and its `base->running_timer !=
>    timer` check is false, because of step (c).
> 
> f) `stack.timer` gets freed / goes out of scope.
> 
> g) The callback scheduled from step (b) runs, and we have a UaF.

Here's a reproducer of this flow, which prints out:

    [    4.157610] wireguard: Stack on cpu 1 is corrupt

diff --git a/drivers/net/wireguard/main.c b/drivers/net/wireguard/main.c
index ee4da9ab8013..5c61f49918f2 100644
--- a/drivers/net/wireguard/main.c
+++ b/drivers/net/wireguard/main.c
@@ -17,10 +17,40 @@
 #include <linux/genetlink.h>
 #include <net/rtnetlink.h>

+struct state {
+	struct timer_list timer;
+	char valid[8];
+};
+
+static void fire(struct timer_list *timer)
+{
+	struct state *stack = container_of(timer, struct state, timer);
+	mdelay(1000);
+	pr_err("Stack on cpu %d is %s\n", raw_smp_processor_id(), stack->valid);
+}
+
+static void do_the_thing(struct work_struct *work)
+{
+	struct state stack = { .valid = "valid" };
+	timer_setup_on_stack(&stack.timer, fire, 0);
+	stack.timer.expires = jiffies;
+	add_timer_on(&stack.timer, 1);
+	while (timer_pending(&stack.timer))
+		cpu_relax();
+	stack.timer.expires = jiffies;
+	add_timer_on(&stack.timer, 2);
+	del_timer_sync(&stack.timer);
+	memcpy(&stack.valid, "corrupt", 8);
+}
+
+static DECLARE_DELAYED_WORK(reproducer, do_the_thing);
+
 static int __init wg_mod_init(void)
 {
 	int ret;

+	schedule_delayed_work_on(0, &reproducer, HZ * 3);
+
 	ret = wg_allowedips_slab_init();
 	if (ret < 0)
 		goto err_allowedips;



* Re: [PATCH 2/2] random: spread out jitter callback to different CPUs
  2022-10-07 14:01             ` Jason A. Donenfeld
@ 2022-10-07 14:55               ` Sebastian Andrzej Siewior
  2022-10-07 15:32                 ` Jason A. Donenfeld
  0 siblings, 1 reply; 17+ messages in thread
From: Sebastian Andrzej Siewior @ 2022-10-07 14:55 UTC (permalink / raw)
  To: Jason A. Donenfeld
  Cc: tglx, linux-kernel, linux-crypto, Dominik Brodowski, Sultan Alsawaf

On 2022-10-07 08:01:20 [-0600], Jason A. Donenfeld wrote:
> Here's a reproducer of this flow, which prints out:
> 
>     [    4.157610] wireguard: Stack on cpu 1 is corrupt

I understood Sultan's description. The end of story (after discussing
this with tglx) is that this will be documented, since it can't be fixed
for add_timer_on().

Sebastian


* Re: [PATCH 2/2] random: spread out jitter callback to different CPUs
  2022-10-07 14:55               ` Sebastian Andrzej Siewior
@ 2022-10-07 15:32                 ` Jason A. Donenfeld
  2022-11-29 16:08                   ` [PATCH v2] " Jason A. Donenfeld
  0 siblings, 1 reply; 17+ messages in thread
From: Jason A. Donenfeld @ 2022-10-07 15:32 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: tglx, linux-kernel, linux-crypto, Dominik Brodowski, Sultan Alsawaf

On Fri, Oct 7, 2022 at 8:55 AM Sebastian Andrzej Siewior
<bigeasy@linutronix.de> wrote:
>
> On 2022-10-07 08:01:20 [-0600], Jason A. Donenfeld wrote:
> > Here's a reproducer of this flow, which prints out:
> >
> >     [    4.157610] wireguard: Stack on cpu 1 is corrupt
>
> I understood Sultan's description. The end of story (after discussing
> this tglx) is that this will be documented since it can't be fixed for
> add_timer_on().

Right, that's about where I wound up too, which is why I just
abandoned the approach of this patchset. Calling del_timer_sync()
before each new add_timer_on() (but after !timer_pending()) seems
kinda ugly.
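
For what it's worth, that pattern would look roughly like this sketch
(`next_cpu` standing in for whatever round-robin pick is in use):

	if (!timer_pending(&stack.timer) &&
	    try_to_del_timer_sync(&stack.timer) >= 0) {
		stack.timer.expires = jiffies;
		add_timer_on(&stack.timer, next_cpu);
	}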

Jason


* [PATCH v2] random: spread out jitter callback to different CPUs
  2022-10-07 15:32                 ` Jason A. Donenfeld
@ 2022-11-29 16:08                   ` Jason A. Donenfeld
  2022-11-29 18:27                     ` [PATCH v3] " Jason A. Donenfeld
  0 siblings, 1 reply; 17+ messages in thread
From: Jason A. Donenfeld @ 2022-11-29 16:08 UTC (permalink / raw)
  To: linux-kernel, linux-crypto
  Cc: Jason A. Donenfeld, Sultan Alsawaf, Dominik Brodowski,
	Sebastian Andrzej Siewior, Thomas Gleixner

Rather than merely hoping that the callback gets called on another CPU,
arrange for that to actually happen, by round robining which CPU the
timer fires on. This way, on multiprocessor machines, we exacerbate
jitter by touching the same memory from multiple different cores.

There's a little bit of tricky bookkeeping involved here, because using
timer_setup_on_stack() + add_timer_on() + del_timer_sync() is a recipe
for disaster. See this sample code: <https://xn--4db.cc/xBdEiIKO/c>.

Mitigating this by calling del_timer_sync() (or try_to_del_timer_sync())
before each add_timer_on() also isn't satisfactory, because that halts
the main entropy collecting loop while it waits, trying to acquire the
timer base spinlock.

So instead, poll a boolean inside the main loop to indicate when the
timer callback has finished executing. This serializes the callbacks
while still ensuring the main loop continues.

Cc: Sultan Alsawaf <sultan@kerneltoast.com>
Cc: Dominik Brodowski <linux@dominikbrodowski.net>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
---
 drivers/char/random.c | 39 ++++++++++++++++++++++++++++++---------
 1 file changed, 30 insertions(+), 9 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 7b71cea6a6ab..40d0395ea566 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1232,7 +1232,9 @@ void __cold rand_initialize_disk(struct gendisk *disk)
 struct entropy_timer_state {
 	unsigned long entropy;
 	struct timer_list timer;
-	unsigned int samples, samples_per_bit;
+	atomic_t samples;
+	unsigned int samples_per_bit;
+	bool is_running;
 };
 
 /*
@@ -1250,10 +1252,9 @@ static void __cold entropy_timer(struct timer_list *timer)
 {
 	struct entropy_timer_state *state = container_of(timer, struct entropy_timer_state, timer);
 
-	if (++state->samples == state->samples_per_bit) {
+	if (atomic_inc_return(&state->samples) % state->samples_per_bit == 0)
 		credit_init_bits(1);
-		state->samples = 0;
-	}
+	smp_store_release(&state->is_running, false);
 }
 
 /*
@@ -1263,9 +1264,10 @@ static void __cold entropy_timer(struct timer_list *timer)
 static void __cold try_to_generate_entropy(void)
 {
 	enum { NUM_TRIAL_SAMPLES = 8192, MAX_SAMPLES_PER_BIT = HZ / 15 };
-	struct entropy_timer_state stack;
+	struct entropy_timer_state stack = { 0 };
 	unsigned int i, num_different = 0;
 	unsigned long last = random_get_entropy();
+	int cpu = -1;
 
 	for (i = 0; i < NUM_TRIAL_SAMPLES - 1; ++i) {
 		stack.entropy = random_get_entropy();
@@ -1277,19 +1279,38 @@ static void __cold try_to_generate_entropy(void)
 	if (stack.samples_per_bit > MAX_SAMPLES_PER_BIT)
 		return;
 
-	stack.samples = 0;
 	timer_setup_on_stack(&stack.timer, entropy_timer, 0);
 	while (!crng_ready() && !signal_pending(current)) {
-		if (!timer_pending(&stack.timer))
-			mod_timer(&stack.timer, jiffies);
+		/*
+		 * Check both !timer_pending() and state->is_running to ensure that any previous
+		 * callback has finished executing, before queueing the next one.
+		 */
+		if (!timer_pending(&stack.timer) && !smp_load_acquire(&stack.is_running)) {
+			preempt_disable();
+
+			/* Basic CPU round-robin, which avoids the current CPU. */
+			do {
+				cpu = cpumask_next(cpu, cpu_online_mask);
+				if (cpu == nr_cpumask_bits)
+					cpu = cpumask_first(cpu_online_mask);
+			} while (cpu == smp_processor_id() && cpumask_weight(cpu_online_mask) > 1);
+
+			/* Expiring the timer at `jiffies` means it's the next tick. */
+			stack.timer.expires = jiffies;
+
+			smp_store_release(&stack.is_running, true);
+			add_timer_on(&stack.timer, cpu);
+
+			preempt_enable();
+		}
 		mix_pool_bytes(&stack.entropy, sizeof(stack.entropy));
 		schedule();
 		stack.entropy = random_get_entropy();
 	}
+	mix_pool_bytes(&stack.entropy, sizeof(stack.entropy));
 
 	del_timer_sync(&stack.timer);
 	destroy_timer_on_stack(&stack.timer);
-	mix_pool_bytes(&stack.entropy, sizeof(stack.entropy));
 }
 
 
-- 
2.38.1



* [PATCH v3] random: spread out jitter callback to different CPUs
  2022-11-29 16:08                   ` [PATCH v2] " Jason A. Donenfeld
@ 2022-11-29 18:27                     ` Jason A. Donenfeld
       [not found]                       ` <20221130014514.6494-1-hdanton@sina.com>
  2022-11-30 18:48                       ` [PATCH v4] " Jason A. Donenfeld
  0 siblings, 2 replies; 17+ messages in thread
From: Jason A. Donenfeld @ 2022-11-29 18:27 UTC (permalink / raw)
  To: linux-kernel, linux-crypto
  Cc: Jason A. Donenfeld, Sultan Alsawaf, Dominik Brodowski,
	Sebastian Andrzej Siewior, Thomas Gleixner

Rather than merely hoping that the callback gets called on another CPU,
arrange for that to actually happen, by round robining which CPU the
timer fires on. This way, on multiprocessor machines, we exacerbate
jitter by touching the same memory from multiple different cores.

It's necessary to call [try_to_]del_timer_sync() before calling
add_timer_on(), so that the final call to del_timer_sync() at the end of
the function actually succeeds at making sure no handlers are running.

Cc: Sultan Alsawaf <sultan@kerneltoast.com>
Cc: Dominik Brodowski <linux@dominikbrodowski.net>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
---
Changes v2->v3:
- Thomas convinced me try_to_del_timer_sync() was fine.

 drivers/char/random.c | 36 +++++++++++++++++++++++++++---------
 1 file changed, 27 insertions(+), 9 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 7b71cea6a6ab..4cb1d606a492 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1232,7 +1232,8 @@ void __cold rand_initialize_disk(struct gendisk *disk)
 struct entropy_timer_state {
 	unsigned long entropy;
 	struct timer_list timer;
-	unsigned int samples, samples_per_bit;
+	atomic_t samples;
+	unsigned int samples_per_bit;
 };
 
 /*
@@ -1250,10 +1251,8 @@ static void __cold entropy_timer(struct timer_list *timer)
 {
 	struct entropy_timer_state *state = container_of(timer, struct entropy_timer_state, timer);
 
-	if (++state->samples == state->samples_per_bit) {
+	if (atomic_inc_return(&state->samples) % state->samples_per_bit == 0)
 		credit_init_bits(1);
-		state->samples = 0;
-	}
 }
 
 /*
@@ -1263,9 +1262,10 @@ static void __cold entropy_timer(struct timer_list *timer)
 static void __cold try_to_generate_entropy(void)
 {
 	enum { NUM_TRIAL_SAMPLES = 8192, MAX_SAMPLES_PER_BIT = HZ / 15 };
-	struct entropy_timer_state stack;
+	struct entropy_timer_state stack = { 0 };
 	unsigned int i, num_different = 0;
 	unsigned long last = random_get_entropy();
+	int cpu = -1;
 
 	for (i = 0; i < NUM_TRIAL_SAMPLES - 1; ++i) {
 		stack.entropy = random_get_entropy();
@@ -1277,19 +1277,37 @@ static void __cold try_to_generate_entropy(void)
 	if (stack.samples_per_bit > MAX_SAMPLES_PER_BIT)
 		return;
 
-	stack.samples = 0;
 	timer_setup_on_stack(&stack.timer, entropy_timer, 0);
 	while (!crng_ready() && !signal_pending(current)) {
-		if (!timer_pending(&stack.timer))
-			mod_timer(&stack.timer, jiffies);
+		/*
+		 * Check !timer_pending() and then ensure that any previous callback has finished
+		 * executing by checking try_to_del_timer_sync(), before queueing the next one.
+		 */
+		if (!timer_pending(&stack.timer) && try_to_del_timer_sync(&stack.timer) >= 0) {
+			preempt_disable();
+
+			/* Basic CPU round-robin, which avoids the current CPU. */
+			do {
+				cpu = cpumask_next(cpu, cpu_online_mask);
+				if (cpu == nr_cpumask_bits)
+					cpu = cpumask_first(cpu_online_mask);
+			} while (cpu == smp_processor_id() && cpumask_weight(cpu_online_mask) > 1);
+
+			/* Expiring the timer at `jiffies` means it's the next tick. */
+			stack.timer.expires = jiffies;
+
+			add_timer_on(&stack.timer, cpu);
+
+			preempt_enable();
+		}
 		mix_pool_bytes(&stack.entropy, sizeof(stack.entropy));
 		schedule();
 		stack.entropy = random_get_entropy();
 	}
+	mix_pool_bytes(&stack.entropy, sizeof(stack.entropy));
 
 	del_timer_sync(&stack.timer);
 	destroy_timer_on_stack(&stack.timer);
-	mix_pool_bytes(&stack.entropy, sizeof(stack.entropy));
 }
 
 
-- 
2.38.1



* Re: [PATCH v3] random: spread out jitter callback to different CPUs
       [not found]                       ` <20221130014514.6494-1-hdanton@sina.com>
@ 2022-11-30  1:49                         ` Jason A. Donenfeld
  0 siblings, 0 replies; 17+ messages in thread
From: Jason A. Donenfeld @ 2022-11-30  1:49 UTC (permalink / raw)
  To: Hillf Danton
  Cc: linux-kernel, linux-crypto, Sultan Alsawaf, Dominik Brodowski,
	Sebastian Andrzej Siewior, Thomas Gleixner

Hi Hillf,

On Wed, Nov 30, 2022 at 2:45 AM Hillf Danton <hdanton@sina.com> wrote:
>
> On 29 Nov 2022 19:27:52 +0100 Jason A. Donenfeld <Jason@zx2c4.com>
> > Rather than merely hoping that the callback gets called on another CPU,
> > arrange for that to actually happen, by round robining which CPU the
> > timer fires on. This way, on multiprocessor machines, we exacerbate
> > jitter by touching the same memory from multiple different cores.
> >
> > It's necessary to call [try_to_]del_timer_sync() before calling
> > add_timer_on(), so that the final call to del_timer_sync() at the end of
> > the function actually succeeds at making sure no handlers are running.
> >
> > Cc: Sultan Alsawaf <sultan@kerneltoast.com>
> > Cc: Dominik Brodowski <linux@dominikbrodowski.net>
> > Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> > Cc: Thomas Gleixner <tglx@linutronix.de>
> > Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
> > ---
> > Changes v2->v3:
> > - Thomas convinced me try_to_del_timer_sync() was fine.
> >
> >  drivers/char/random.c | 36 +++++++++++++++++++++++++++---------
> >  1 file changed, 27 insertions(+), 9 deletions(-)
> >
> > diff --git a/drivers/char/random.c b/drivers/char/random.c
> > index 7b71cea6a6ab..4cb1d606a492 100644
> > --- a/drivers/char/random.c
> > +++ b/drivers/char/random.c
> > @@ -1232,7 +1232,8 @@ void __cold rand_initialize_disk(struct gendisk *disk)
> >  struct entropy_timer_state {
> >       unsigned long entropy;
> >       struct timer_list timer;
> > -     unsigned int samples, samples_per_bit;
> > +     atomic_t samples;
> > +     unsigned int samples_per_bit;
> >  };
> >
> >  /*
> > @@ -1250,10 +1251,8 @@ static void __cold entropy_timer(struct timer_list *timer)
> >  {
> >       struct entropy_timer_state *state = container_of(timer, struct entropy_timer_state, timer);
> >
> > -     if (++state->samples == state->samples_per_bit) {
> > +     if (atomic_inc_return(&state->samples) % state->samples_per_bit == 0)
> >               credit_init_bits(1);
> > -             state->samples = 0;
> > -     }
> >  }
> >
> >  /*
> > @@ -1263,9 +1262,10 @@ static void __cold entropy_timer(struct timer_list *timer)
> >  static void __cold try_to_generate_entropy(void)
> >  {
> >       enum { NUM_TRIAL_SAMPLES = 8192, MAX_SAMPLES_PER_BIT = HZ / 15 };
> > -     struct entropy_timer_state stack;
> > +     struct entropy_timer_state stack = { 0 };
> >       unsigned int i, num_different = 0;
> >       unsigned long last = random_get_entropy();
> > +     int cpu = -1;
> >
> >       for (i = 0; i < NUM_TRIAL_SAMPLES - 1; ++i) {
> >               stack.entropy = random_get_entropy();
> > @@ -1277,19 +1277,37 @@ static void __cold try_to_generate_entropy(void)
> >       if (stack.samples_per_bit > MAX_SAMPLES_PER_BIT)
> >               return;
> >
> > -     stack.samples = 0;
> >       timer_setup_on_stack(&stack.timer, entropy_timer, 0);
> >       while (!crng_ready() && !signal_pending(current)) {
> > -             if (!timer_pending(&stack.timer))
> > -                     mod_timer(&stack.timer, jiffies);
> > +             /*
> > +              * Check !timer_pending() and then ensure that any previous callback has finished
> > +              * executing by checking try_to_del_timer_sync(), before queueing the next one.
> > +              */
> > +             if (!timer_pending(&stack.timer) && try_to_del_timer_sync(&stack.timer) >= 0) {
>
> If the CPU round-robin is moved into the timer callback itself, this
> kind of brain-hurting timer game can be avoided.

There's a comment in the code from Linus about this:

* Note that we don't re-arm the timer in the timer itself - we are happy to be
* scheduled away, since that just makes the load more complex, but we do not
* want the timer to keep ticking unless the entropy loop is running.

> What sense does it make to try to delete a non-pending timer?

If the timer is no longer pending, but has not yet finished executing
its callback, and add_timer_on() is called again, a subsequent
del_timer_sync() will stop the timer queued by the second
add_timer_on(), but the first instance, whose callback has not finished
executing, will not be waited on.
Take a look at this example: https://א.cc/xBdEiIKO/c

Jason


* [PATCH v4] random: spread out jitter callback to different CPUs
  2022-11-29 18:27                     ` [PATCH v3] " Jason A. Donenfeld
       [not found]                       ` <20221130014514.6494-1-hdanton@sina.com>
@ 2022-11-30 18:48                       ` Jason A. Donenfeld
  1 sibling, 0 replies; 17+ messages in thread
From: Jason A. Donenfeld @ 2022-11-30 18:48 UTC (permalink / raw)
  To: linux-kernel, linux-crypto
  Cc: Jason A. Donenfeld, Sultan Alsawaf, Dominik Brodowski,
	Sebastian Andrzej Siewior, Thomas Gleixner

Rather than merely hoping that the callback gets called on another CPU,
arrange for that to actually happen, by round robining which CPU the
timer fires on. This way, on multiprocessor machines, we exacerbate
jitter by touching the same memory from multiple different cores.

There's a little bit of tricky bookkeeping involved here, because using
timer_setup_on_stack() + add_timer_on() + del_timer_sync() will result
in a use after free. See this sample code: <https://xn--4db.cc/xBdEiIKO/c>.

Instead, it's necessary to call [try_to_]del_timer_sync() before calling
add_timer_on(), so that the final call to del_timer_sync() at the end of
the function actually succeeds at making sure no handlers are running.

Cc: Sultan Alsawaf <sultan@kerneltoast.com>
Cc: Dominik Brodowski <linux@dominikbrodowski.net>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
---
Changes v3->v4:
- Limit used CPUs to HK_TYPE_TIMER ones.

 drivers/char/random.c | 50 ++++++++++++++++++++++++++++++++++++-------
 1 file changed, 42 insertions(+), 8 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 7b71cea6a6ab..232bf487851d 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -53,6 +53,7 @@
 #include <linux/uaccess.h>
 #include <linux/suspend.h>
 #include <linux/siphash.h>
+#include <linux/sched/isolation.h>
 #include <crypto/chacha.h>
 #include <crypto/blake2s.h>
 #include <asm/processor.h>
@@ -1232,7 +1233,8 @@ void __cold rand_initialize_disk(struct gendisk *disk)
 struct entropy_timer_state {
 	unsigned long entropy;
 	struct timer_list timer;
-	unsigned int samples, samples_per_bit;
+	atomic_t samples;
+	unsigned int samples_per_bit;
 };
 
 /*
@@ -1250,10 +1252,8 @@ static void __cold entropy_timer(struct timer_list *timer)
 {
 	struct entropy_timer_state *state = container_of(timer, struct entropy_timer_state, timer);
 
-	if (++state->samples == state->samples_per_bit) {
+	if (atomic_inc_return(&state->samples) % state->samples_per_bit == 0)
 		credit_init_bits(1);
-		state->samples = 0;
-	}
 }
 
 /*
@@ -1266,6 +1266,7 @@ static void __cold try_to_generate_entropy(void)
 	struct entropy_timer_state stack;
 	unsigned int i, num_different = 0;
 	unsigned long last = random_get_entropy();
+	int cpu = -1;
 
 	for (i = 0; i < NUM_TRIAL_SAMPLES - 1; ++i) {
 		stack.entropy = random_get_entropy();
@@ -1277,19 +1278,52 @@ static void __cold try_to_generate_entropy(void)
 	if (stack.samples_per_bit > MAX_SAMPLES_PER_BIT)
 		return;
 
-	stack.samples = 0;
+	atomic_set(&stack.samples, 0);
 	timer_setup_on_stack(&stack.timer, entropy_timer, 0);
 	while (!crng_ready() && !signal_pending(current)) {
-		if (!timer_pending(&stack.timer))
-			mod_timer(&stack.timer, jiffies);
+		/*
+		 * Check !timer_pending() and then ensure that any previous callback has finished
+		 * executing by checking try_to_del_timer_sync(), before queueing the next one.
+		 */
+		if (!timer_pending(&stack.timer) && try_to_del_timer_sync(&stack.timer) >= 0) {
+			struct cpumask timer_cpus;
+			unsigned int num_cpus;
+
+			/*
+			 * Preemption must be disabled here, both to read the current CPU number
+			 * and to avoid scheduling a timer on a dead CPU.
+			 */
+			preempt_disable();
+
+			/* Only schedule callbacks on timer CPUs that are online. */
+			cpumask_and(&timer_cpus, housekeeping_cpumask(HK_TYPE_TIMER), cpu_online_mask);
+			num_cpus = cpumask_weight(&timer_cpus);
+			/* In very bizarre case of misconfiguration, fallback to all online. */
+			if (unlikely(num_cpus == 0))
+				timer_cpus = *cpu_online_mask;
+
+			/* Basic CPU round-robin, which avoids the current CPU. */
+			do {
+				cpu = cpumask_next(cpu, &timer_cpus);
+				if (cpu == nr_cpumask_bits)
+					cpu = cpumask_first(&timer_cpus);
+			} while (cpu == smp_processor_id() && num_cpus > 1);
+
+			/* Expiring the timer at `jiffies` means it's the next tick. */
+			stack.timer.expires = jiffies;
+
+			add_timer_on(&stack.timer, cpu);
+
+			preempt_enable();
+		}
 		mix_pool_bytes(&stack.entropy, sizeof(stack.entropy));
 		schedule();
 		stack.entropy = random_get_entropy();
 	}
+	mix_pool_bytes(&stack.entropy, sizeof(stack.entropy));
 
 	del_timer_sync(&stack.timer);
 	destroy_timer_on_stack(&stack.timer);
-	mix_pool_bytes(&stack.entropy, sizeof(stack.entropy));
 }
 
 
-- 
2.38.1


