linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/3] sched: migrate_disable() vs per-CPU access safety checks
@ 2021-07-21 11:51 Valentin Schneider
  2021-07-21 11:51 ` [PATCH 1/3] sched: Introduce is_pcpu_safe() Valentin Schneider
                   ` (3 more replies)
  0 siblings, 4 replies; 12+ messages in thread
From: Valentin Schneider @ 2021-07-21 11:51 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel, linux-rt-users
  Cc: Catalin Marinas, Will Deacon, Ingo Molnar, Peter Zijlstra,
	Thomas Gleixner, Steven Rostedt, Daniel Bristot de Oliveira,
	Paul E. McKenney, Josh Triplett, Mathieu Desnoyers,
	Lai Jiangshan, Joel Fernandes, Anshuman Khandual,
	Vincenzo Frascino, Steven Price, Ard Biesheuvel

Hi folks,

I've hit a few warnings when taking v5.13-rt1 out for a spin on my arm64
Juno. Those are due to regions that become preemptible under PREEMPT_RT, but
remain safe wrt per-CPU accesses due to migrate_disable() + a sleepable lock.

This adds a helper that looks not just at preemptability but also at
affinity and migrate disable, and plasters the warning sites with it.

Cheers,
Valentin

Valentin Schneider (3):
  sched: Introduce is_pcpu_safe()
  rcu/nocb: Check for migratability rather than pure preemptability
  arm64: mm: Make arch_faults_on_old_pte() check for migratability

 arch/arm64/include/asm/pgtable.h |  2 +-
 include/linux/sched.h            | 10 ++++++++++
 kernel/rcu/tree_plugin.h         |  3 +--
 3 files changed, 12 insertions(+), 3 deletions(-)

--
2.25.1



* [PATCH 1/3] sched: Introduce is_pcpu_safe()
  2021-07-21 11:51 [PATCH 0/3] sched: migrate_disable() vs per-CPU access safety checks Valentin Schneider
@ 2021-07-21 11:51 ` Valentin Schneider
  2021-07-27 16:23   ` Paul E. McKenney
  2021-07-21 11:51 ` [PATCH 2/3] rcu/nocb: Check for migratability rather than pure preemptability Valentin Schneider
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 12+ messages in thread
From: Valentin Schneider @ 2021-07-21 11:51 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel, linux-rt-users
  Cc: Catalin Marinas, Will Deacon, Ingo Molnar, Peter Zijlstra,
	Thomas Gleixner, Steven Rostedt, Daniel Bristot de Oliveira,
	Paul E. McKenney, Josh Triplett, Mathieu Desnoyers,
	Lai Jiangshan, Joel Fernandes, Anshuman Khandual,
	Vincenzo Frascino, Steven Price, Ard Biesheuvel

Some areas use preempt_disable() + preempt_enable() to safely access
per-CPU data. The PREEMPT_RT folks have shown this can also be done by
keeping preemption enabled and instead disabling migration (and acquiring a
sleepable lock, if relevant).

Introduce a helper which checks whether the current task can safely access
per-CPU data, IOW if the task's context guarantees the accesses will target
a single CPU. This accounts for preemption, CPU affinity, and migrate
disable - note that the CPU affinity check also mandates the presence of
PF_NO_SETAFFINITY, as otherwise userspace could concurrently render the
upcoming per-CPU access(es) unsafe.

Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
---
 include/linux/sched.h | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index efdbdf654876..7ce2d5c1ad55 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1707,6 +1707,16 @@ static inline bool is_percpu_thread(void)
 #endif
 }
 
+/* Is the current task guaranteed not to be migrated elsewhere? */
+static inline bool is_pcpu_safe(void)
+{
+#ifdef CONFIG_SMP
+	return !preemptible() || is_percpu_thread() || current->migration_disabled;
+#else
+	return true;
+#endif
+}
+
 /* Per-process atomic flags. */
 #define PFA_NO_NEW_PRIVS		0	/* May not gain new privileges. */
 #define PFA_SPREAD_PAGE			1	/* Spread page cache over cpuset */
-- 
2.25.1



* [PATCH 2/3] rcu/nocb: Check for migratability rather than pure preemptability
  2021-07-21 11:51 [PATCH 0/3] sched: migrate_disable() vs per-CPU access safety checks Valentin Schneider
  2021-07-21 11:51 ` [PATCH 1/3] sched: Introduce is_pcpu_safe() Valentin Schneider
@ 2021-07-21 11:51 ` Valentin Schneider
  2021-07-27 16:24   ` Paul E. McKenney
  2021-07-27 23:08   ` Frederic Weisbecker
  2021-07-21 11:51 ` [PATCH 3/3] arm64: mm: Make arch_faults_on_old_pte() check for migratability Valentin Schneider
  2021-07-27 19:45 ` [PATCH 0/3] sched: migrate_disable() vs per-CPU access safety checks Thomas Gleixner
  3 siblings, 2 replies; 12+ messages in thread
From: Valentin Schneider @ 2021-07-21 11:51 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel, linux-rt-users
  Cc: Catalin Marinas, Will Deacon, Ingo Molnar, Peter Zijlstra,
	Thomas Gleixner, Steven Rostedt, Daniel Bristot de Oliveira,
	Paul E. McKenney, Josh Triplett, Mathieu Desnoyers,
	Lai Jiangshan, Joel Fernandes, Anshuman Khandual,
	Vincenzo Frascino, Steven Price, Ard Biesheuvel

Running v5.13-rt1 on my arm64 Juno board triggers:

[    0.156302] =============================
[    0.160416] WARNING: suspicious RCU usage
[    0.164529] 5.13.0-rt1 #20 Not tainted
[    0.168300] -----------------------------
[    0.172409] kernel/rcu/tree_plugin.h:69 Unsafe read of RCU_NOCB offloaded state!
[    0.179920]
[    0.179920] other info that might help us debug this:
[    0.179920]
[    0.188037]
[    0.188037] rcu_scheduler_active = 1, debug_locks = 1
[    0.194677] 3 locks held by rcuc/0/11:
[    0.198448] #0: ffff00097ef10cf8 ((softirq_ctrl.lock).lock){+.+.}-{2:2}, at: __local_bh_disable_ip (./include/linux/rcupdate.h:662 kernel/softirq.c:171)
[    0.208709] #1: ffff80001205e5f0 (rcu_read_lock){....}-{1:2}, at: rt_spin_lock (kernel/locking/spinlock_rt.c:43 (discriminator 4))
[    0.217134] #2: ffff80001205e5f0 (rcu_read_lock){....}-{1:2}, at: __local_bh_disable_ip (kernel/softirq.c:169)
[    0.226428]
[    0.226428] stack backtrace:
[    0.230889] CPU: 0 PID: 11 Comm: rcuc/0 Not tainted 5.13.0-rt1 #20
[    0.237100] Hardware name: ARM Juno development board (r0) (DT)
[    0.243041] Call trace:
[    0.245497] dump_backtrace (arch/arm64/kernel/stacktrace.c:163)
[    0.249185] show_stack (arch/arm64/kernel/stacktrace.c:219)
[    0.252522] dump_stack (lib/dump_stack.c:122)
[    0.255947] lockdep_rcu_suspicious (kernel/locking/lockdep.c:6439)
[    0.260328] rcu_rdp_is_offloaded (kernel/rcu/tree_plugin.h:69 kernel/rcu/tree_plugin.h:58)
[    0.264537] rcu_core (kernel/rcu/tree.c:2332 kernel/rcu/tree.c:2398 kernel/rcu/tree.c:2777)
[    0.267786] rcu_cpu_kthread (./include/linux/bottom_half.h:32 kernel/rcu/tree.c:2876)
[    0.271644] smpboot_thread_fn (kernel/smpboot.c:165 (discriminator 3))
[    0.275767] kthread (kernel/kthread.c:321)
[    0.279013] ret_from_fork (arch/arm64/kernel/entry.S:1005)

In this case, this is the RCU core kthread accessing the local CPU's
rdp. Before that, rcu_cpu_kthread() invokes local_bh_disable().

Under !CONFIG_PREEMPT_RT (and rcutree.use_softirq=0), this ends up
incrementing the preempt_count, which satisfies the "local non-preemptible
read" of rcu_rdp_is_offloaded().

Under CONFIG_PREEMPT_RT however, this becomes

  local_lock(&softirq_ctrl.lock)

which, under the same config, is migrate_disable() + rt_spin_lock().
This *does* prevent the task from migrating away, but not in a way
rcu_rdp_is_offloaded() can notice. Note that the invoking task is an
smpboot thread, and thus cannot be migrated away in the first place.

Check is_pcpu_safe() here rather than preemptible().

Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
---
 kernel/rcu/tree_plugin.h | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index ad0156b86937..6c3c4100da83 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -70,8 +70,7 @@ static bool rcu_rdp_is_offloaded(struct rcu_data *rdp)
 		!(lockdep_is_held(&rcu_state.barrier_mutex) ||
 		  (IS_ENABLED(CONFIG_HOTPLUG_CPU) && lockdep_is_cpus_held()) ||
 		  rcu_lockdep_is_held_nocb(rdp) ||
-		  (rdp == this_cpu_ptr(&rcu_data) &&
-		   !(IS_ENABLED(CONFIG_PREEMPT_COUNT) && preemptible())) ||
+		  (rdp == this_cpu_ptr(&rcu_data) && is_pcpu_safe()) ||
 		  rcu_current_is_nocb_kthread(rdp) ||
 		  rcu_running_nocb_timer(rdp)),
 		"Unsafe read of RCU_NOCB offloaded state"
-- 
2.25.1



* [PATCH 3/3] arm64: mm: Make arch_faults_on_old_pte() check for migratability
  2021-07-21 11:51 [PATCH 0/3] sched: migrate_disable() vs per-CPU access safety checks Valentin Schneider
  2021-07-21 11:51 ` [PATCH 1/3] sched: Introduce is_pcpu_safe() Valentin Schneider
  2021-07-21 11:51 ` [PATCH 2/3] rcu/nocb: Check for migratability rather than pure preemptability Valentin Schneider
@ 2021-07-21 11:51 ` Valentin Schneider
  2021-07-27 19:45 ` [PATCH 0/3] sched: migrate_disable() vs per-CPU access safety checks Thomas Gleixner
  3 siblings, 0 replies; 12+ messages in thread
From: Valentin Schneider @ 2021-07-21 11:51 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel, linux-rt-users
  Cc: Catalin Marinas, Will Deacon, Ingo Molnar, Peter Zijlstra,
	Thomas Gleixner, Steven Rostedt, Daniel Bristot de Oliveira,
	Paul E. McKenney, Josh Triplett, Mathieu Desnoyers,
	Lai Jiangshan, Joel Fernandes, Anshuman Khandual,
	Vincenzo Frascino, Steven Price, Ard Biesheuvel

Running v5.13-rt1 on my arm64 Juno board triggers:

[   30.430643] WARNING: CPU: 4 PID: 1 at arch/arm64/include/asm/pgtable.h:985 do_set_pte (./arch/arm64/include/asm/pgtable.h:985 ./arch/arm64/include/asm/pgtable.h:997 mm/memory.c:3830)
[   30.430669] Modules linked in:
[   30.430679] CPU: 4 PID: 1 Comm: init Tainted: G        W         5.13.0-rt1-00002-gcb994ad7c570 #35
[   30.430690] Hardware name: ARM Juno development board (r0) (DT)
[   30.430695] pstate: 80000005 (Nzcv daif -PAN -UAO -TCO BTYPE=--)
[   30.430705] pc : do_set_pte (./arch/arm64/include/asm/pgtable.h:985 ./arch/arm64/include/asm/pgtable.h:997 mm/memory.c:3830)
[   30.430713] lr : filemap_map_pages (mm/filemap.c:3222)
[   30.430725] sp : ffff800012f4bb90
[   30.430729] x29: ffff800012f4bb90 x28: fffffc0025d81900 x27: 0000000000000100
[   30.430745] x26: fffffc0025d81900 x25: ffff000803460000 x24: ffff000801bbf428
[   30.430760] x23: ffff00080317d900 x22: 0000ffffb4c3e000 x21: fffffc0025d81900
[   30.430775] x20: ffff800012f4bd10 x19: 00200009f6064fc3 x18: 000000000000ca01
[   30.430790] x17: 0000000000000000 x16: 000000000000ca06 x15: ffff80001240e128
[   30.430804] x14: ffff8000124b0128 x13: 000000000000000a x12: ffff80001205e5f0
[   30.430819] x11: 0000000000000000 x10: ffff800011a37d28 x9 : 00000000000000c8
[   30.430833] x8 : ffff000800160000 x7 : 0000000000000002 x6 : 0000000000000000
[   30.430847] x5 : 0000000000000000 x4 : 0000ffffb4c2f000 x3 : 0020000000000fc3
[   30.430861] x2 : 0000000000000000 x1 : 0000000000000000 x0 : 0000000000000000
[   30.430874] Call trace:
[   30.430878] do_set_pte (./arch/arm64/include/asm/pgtable.h:985 ./arch/arm64/include/asm/pgtable.h:997 mm/memory.c:3830)
[   30.430886] filemap_map_pages (mm/filemap.c:3222)
[   30.430895] __handle_mm_fault (mm/memory.c:4006 mm/memory.c:4020 mm/memory.c:4153 mm/memory.c:4412 mm/memory.c:4547)
[   30.430904] handle_mm_fault (mm/memory.c:4645)
[   30.430912] do_page_fault (arch/arm64/mm/fault.c:507 arch/arm64/mm/fault.c:607)
[   30.430925] do_translation_fault (arch/arm64/mm/fault.c:692)
[   30.430936] do_mem_abort (arch/arm64/mm/fault.c:821)
[   30.430946] el0_ia (arch/arm64/kernel/entry-common.c:324)
[   30.430959] el0_sync_handler (arch/arm64/kernel/entry-common.c:431)
[   30.430967] el0_sync (arch/arm64/kernel/entry.S:744)
[   30.430977] irq event stamp: 1228384
[   30.430981] hardirqs last enabled at (1228383): lock_page_memcg (mm/memcontrol.c:2005 (discriminator 1))
[   30.430993] hardirqs last disabled at (1228384): el1_dbg (arch/arm64/kernel/entry-common.c:144 arch/arm64/kernel/entry-common.c:234)
[   30.431007] softirqs last enabled at (1228260): __local_bh_enable_ip (./arch/arm64/include/asm/irqflags.h:85 kernel/softirq.c:262)
[   30.431022] softirqs last disabled at (1228232): fpsimd_restore_current_state (./include/linux/bottom_half.h:19 arch/arm64/kernel/fpsimd.c:183 arch/arm64/kernel/fpsimd.c:1182)

CONFIG_PREEMPT_RT turns the PTE lock into a sleepable spinlock. Since
acquiring such a lock also disables migration, any per-CPU access done
under the lock remains safe even if preemptible.

This affects:

  filemap_map_pages()
  `\
    do_set_pte()
    `\
      arch_wants_old_prefaulted_pte()

which checks preemptible() to figure out if the output of
cpu_has_hw_af() (IOW the underlying CPU) will remain stable for the
subsequent operations. Make it use is_pcpu_safe() instead.

Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
---
 arch/arm64/include/asm/pgtable.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 0b10204e72fc..3c2b63306237 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -982,7 +982,7 @@ static inline void update_mmu_cache(struct vm_area_struct *vma,
  */
 static inline bool arch_faults_on_old_pte(void)
 {
-	WARN_ON(preemptible());
+	WARN_ON(!is_pcpu_safe());
 
 	return !cpu_has_hw_af();
 }
-- 
2.25.1



* Re: [PATCH 1/3] sched: Introduce is_pcpu_safe()
  2021-07-21 11:51 ` [PATCH 1/3] sched: Introduce is_pcpu_safe() Valentin Schneider
@ 2021-07-27 16:23   ` Paul E. McKenney
  0 siblings, 0 replies; 12+ messages in thread
From: Paul E. McKenney @ 2021-07-27 16:23 UTC (permalink / raw)
  To: Valentin Schneider
  Cc: linux-kernel, linux-arm-kernel, linux-rt-users, Catalin Marinas,
	Will Deacon, Ingo Molnar, Peter Zijlstra, Thomas Gleixner,
	Steven Rostedt, Daniel Bristot de Oliveira, Josh Triplett,
	Mathieu Desnoyers, Lai Jiangshan, Joel Fernandes,
	Anshuman Khandual, Vincenzo Frascino, Steven Price,
	Ard Biesheuvel

On Wed, Jul 21, 2021 at 12:51:16PM +0100, Valentin Schneider wrote:
> Some areas use preempt_disable() + preempt_enable() to safely access
> per-CPU data. The PREEMPT_RT folks have shown this can also be done by
> keeping preemption enabled and instead disabling migration (and acquiring a
> sleepable lock, if relevant).
> 
> Introduce a helper which checks whether the current task can safely access
> per-CPU data, IOW if the task's context guarantees the accesses will target
> a single CPU. This accounts for preemption, CPU affinity, and migrate
> disable - note that the CPU affinity check also mandates the presence of
> PF_NO_SETAFFINITY, as otherwise userspace could concurrently render the
> upcoming per-CPU access(es) unsafe.
> 
> Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>

Acked-by: Paul E. McKenney <paulmck@kernel.org>

> ---
>  include/linux/sched.h | 10 ++++++++++
>  1 file changed, 10 insertions(+)
> 
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index efdbdf654876..7ce2d5c1ad55 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -1707,6 +1707,16 @@ static inline bool is_percpu_thread(void)
>  #endif
>  }
>  
> +/* Is the current task guaranteed not to be migrated elsewhere? */
> +static inline bool is_pcpu_safe(void)
> +{
> +#ifdef CONFIG_SMP
> +	return !preemptible() || is_percpu_thread() || current->migration_disabled;
> +#else
> +	return true;
> +#endif
> +}
> +
>  /* Per-process atomic flags. */
>  #define PFA_NO_NEW_PRIVS		0	/* May not gain new privileges. */
>  #define PFA_SPREAD_PAGE			1	/* Spread page cache over cpuset */
> -- 
> 2.25.1
> 


* Re: [PATCH 2/3] rcu/nocb: Check for migratability rather than pure preemptability
  2021-07-21 11:51 ` [PATCH 2/3] rcu/nocb: Check for migratability rather than pure preemptability Valentin Schneider
@ 2021-07-27 16:24   ` Paul E. McKenney
  2021-07-27 23:08   ` Frederic Weisbecker
  1 sibling, 0 replies; 12+ messages in thread
From: Paul E. McKenney @ 2021-07-27 16:24 UTC (permalink / raw)
  To: Valentin Schneider
  Cc: linux-kernel, linux-arm-kernel, linux-rt-users, Catalin Marinas,
	Will Deacon, Ingo Molnar, Peter Zijlstra, Thomas Gleixner,
	Steven Rostedt, Daniel Bristot de Oliveira, Josh Triplett,
	Mathieu Desnoyers, Lai Jiangshan, Joel Fernandes,
	Anshuman Khandual, Vincenzo Frascino, Steven Price,
	Ard Biesheuvel

On Wed, Jul 21, 2021 at 12:51:17PM +0100, Valentin Schneider wrote:
> Running v5.13-rt1 on my arm64 Juno board triggers:
> 
> [    0.156302] =============================
> [    0.160416] WARNING: suspicious RCU usage
> [    0.164529] 5.13.0-rt1 #20 Not tainted
> [    0.168300] -----------------------------
> [    0.172409] kernel/rcu/tree_plugin.h:69 Unsafe read of RCU_NOCB offloaded state!
> [    0.179920]
> [    0.179920] other info that might help us debug this:
> [    0.179920]
> [    0.188037]
> [    0.188037] rcu_scheduler_active = 1, debug_locks = 1
> [    0.194677] 3 locks held by rcuc/0/11:
> [    0.198448] #0: ffff00097ef10cf8 ((softirq_ctrl.lock).lock){+.+.}-{2:2}, at: __local_bh_disable_ip (./include/linux/rcupdate.h:662 kernel/softirq.c:171)
> [    0.208709] #1: ffff80001205e5f0 (rcu_read_lock){....}-{1:2}, at: rt_spin_lock (kernel/locking/spinlock_rt.c:43 (discriminator 4))
> [    0.217134] #2: ffff80001205e5f0 (rcu_read_lock){....}-{1:2}, at: __local_bh_disable_ip (kernel/softirq.c:169)
> [    0.226428]
> [    0.226428] stack backtrace:
> [    0.230889] CPU: 0 PID: 11 Comm: rcuc/0 Not tainted 5.13.0-rt1 #20
> [    0.237100] Hardware name: ARM Juno development board (r0) (DT)
> [    0.243041] Call trace:
> [    0.245497] dump_backtrace (arch/arm64/kernel/stacktrace.c:163)
> [    0.249185] show_stack (arch/arm64/kernel/stacktrace.c:219)
> [    0.252522] dump_stack (lib/dump_stack.c:122)
> [    0.255947] lockdep_rcu_suspicious (kernel/locking/lockdep.c:6439)
> [    0.260328] rcu_rdp_is_offloaded (kernel/rcu/tree_plugin.h:69 kernel/rcu/tree_plugin.h:58)
> [    0.264537] rcu_core (kernel/rcu/tree.c:2332 kernel/rcu/tree.c:2398 kernel/rcu/tree.c:2777)
> [    0.267786] rcu_cpu_kthread (./include/linux/bottom_half.h:32 kernel/rcu/tree.c:2876)
> [    0.271644] smpboot_thread_fn (kernel/smpboot.c:165 (discriminator 3))
> [    0.275767] kthread (kernel/kthread.c:321)
> [    0.279013] ret_from_fork (arch/arm64/kernel/entry.S:1005)
> 
> In this case, this is the RCU core kthread accessing the local CPU's
> rdp. Before that, rcu_cpu_kthread() invokes local_bh_disable().
> 
> Under !CONFIG_PREEMPT_RT (and rcutree.use_softirq=0), this ends up
> incrementing the preempt_count, which satisfies the "local non-preemptible
> read" of rcu_rdp_is_offloaded().
> 
> Under CONFIG_PREEMPT_RT however, this becomes
> 
>   local_lock(&softirq_ctrl.lock)
> 
> which, under the same config, is migrate_disable() + rt_spin_lock().
> This *does* prevent the task from migrating away, but not in a way
> rcu_rdp_is_offloaded() can notice. Note that the invoking task is an
> smpboot thread, and thus cannot be migrated away in the first place.
> 
> Check is_pcpu_safe() here rather than preemptible().
> 
> Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>

Acked-by: Paul E. McKenney <paulmck@kernel.org>

> ---
>  kernel/rcu/tree_plugin.h | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
> index ad0156b86937..6c3c4100da83 100644
> --- a/kernel/rcu/tree_plugin.h
> +++ b/kernel/rcu/tree_plugin.h
> @@ -70,8 +70,7 @@ static bool rcu_rdp_is_offloaded(struct rcu_data *rdp)
>  		!(lockdep_is_held(&rcu_state.barrier_mutex) ||
>  		  (IS_ENABLED(CONFIG_HOTPLUG_CPU) && lockdep_is_cpus_held()) ||
>  		  rcu_lockdep_is_held_nocb(rdp) ||
> -		  (rdp == this_cpu_ptr(&rcu_data) &&
> -		   !(IS_ENABLED(CONFIG_PREEMPT_COUNT) && preemptible())) ||
> +		  (rdp == this_cpu_ptr(&rcu_data) && is_pcpu_safe()) ||
>  		  rcu_current_is_nocb_kthread(rdp) ||
>  		  rcu_running_nocb_timer(rdp)),
>  		"Unsafe read of RCU_NOCB offloaded state"
> -- 
> 2.25.1
> 


* Re: [PATCH 0/3] sched: migrate_disable() vs per-CPU access safety checks
  2021-07-21 11:51 [PATCH 0/3] sched: migrate_disable() vs per-CPU access safety checks Valentin Schneider
                   ` (2 preceding siblings ...)
  2021-07-21 11:51 ` [PATCH 3/3] arm64: mm: Make arch_faults_on_old_pte() check for migratability Valentin Schneider
@ 2021-07-27 19:45 ` Thomas Gleixner
  3 siblings, 0 replies; 12+ messages in thread
From: Thomas Gleixner @ 2021-07-27 19:45 UTC (permalink / raw)
  To: Valentin Schneider, linux-kernel, linux-arm-kernel, linux-rt-users
  Cc: Catalin Marinas, Will Deacon, Ingo Molnar, Peter Zijlstra,
	Steven Rostedt, Daniel Bristot de Oliveira, Paul E. McKenney,
	Josh Triplett, Mathieu Desnoyers, Lai Jiangshan, Joel Fernandes,
	Anshuman Khandual, Vincenzo Frascino, Steven Price,
	Ard Biesheuvel

On Wed, Jul 21 2021 at 12:51, Valentin Schneider wrote:
> Hi folks,
>
> I've hit a few warnings when taking v5.13-rt1 out for a spin on my arm64
> Juno. Those are due to regions that become preemptible under PREEMPT_RT, but
> remain safe wrt per-CPU accesses due to migrate_disable() + a sleepable lock.
>
> This adds a helper that looks at not just preemptability but also affinity and
> migrate disable, and plasters the warning sites.

Nice!

I just pulled that into the RT queue and it will show up with the next
release.

Thanks,

        tglx


* Re: [PATCH 2/3] rcu/nocb: Check for migratability rather than pure preemptability
  2021-07-21 11:51 ` [PATCH 2/3] rcu/nocb: Check for migratability rather than pure preemptability Valentin Schneider
  2021-07-27 16:24   ` Paul E. McKenney
@ 2021-07-27 23:08   ` Frederic Weisbecker
  2021-07-28 19:34     ` Valentin Schneider
  1 sibling, 1 reply; 12+ messages in thread
From: Frederic Weisbecker @ 2021-07-27 23:08 UTC (permalink / raw)
  To: Valentin Schneider
  Cc: linux-kernel, linux-arm-kernel, linux-rt-users, Catalin Marinas,
	Will Deacon, Ingo Molnar, Peter Zijlstra, Thomas Gleixner,
	Steven Rostedt, Daniel Bristot de Oliveira, Paul E. McKenney,
	Josh Triplett, Mathieu Desnoyers, Lai Jiangshan, Joel Fernandes,
	Anshuman Khandual, Vincenzo Frascino, Steven Price,
	Ard Biesheuvel

On Wed, Jul 21, 2021 at 12:51:17PM +0100, Valentin Schneider wrote:
> Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
> ---
>  kernel/rcu/tree_plugin.h | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
> index ad0156b86937..6c3c4100da83 100644
> --- a/kernel/rcu/tree_plugin.h
> +++ b/kernel/rcu/tree_plugin.h
> @@ -70,8 +70,7 @@ static bool rcu_rdp_is_offloaded(struct rcu_data *rdp)
>  		!(lockdep_is_held(&rcu_state.barrier_mutex) ||
>  		  (IS_ENABLED(CONFIG_HOTPLUG_CPU) && lockdep_is_cpus_held()) ||
>  		  rcu_lockdep_is_held_nocb(rdp) ||
> -		  (rdp == this_cpu_ptr(&rcu_data) &&
> -		   !(IS_ENABLED(CONFIG_PREEMPT_COUNT) && preemptible())) ||
> +		  (rdp == this_cpu_ptr(&rcu_data) && is_pcpu_safe()) ||

I fear that won't work. We really need any caller of rcu_rdp_is_offloaded()
on the local rdp to have preemption disabled and not just migration disabled,
because we must protect against concurrent offloaded state changes.

The offloaded state is changed by a workqueue that executes on the target rdp.

Here is a practical example where it matters:

           CPU 0
           -----
           // =======> task rcuc running
           rcu_core {
	       rcu_nocb_lock_irqsave(rdp, flags) {
                   if (!rcu_segcblist_is_offloaded(rdp->cblist)) {
		       // is not offloaded right now, so it's going
                       // to just disable IRQs. Oh no wait:
           // preemption
           // ========> workqueue running
           rcu_nocb_rdp_offload();
           // ========> task rcuc resume
	               local_irq_disable();
                   }
               }
	       ....
       	       rcu_nocb_unlock_irqrestore(rdp, flags) {
                   if (rcu_segcblist_is_offloaded(rdp->cblist)) {
                       // is offloaded right now so:
                       raw_spin_unlock_irqrestore(rdp, flags);

And that will explode because that's an impaired unlock on nocb_lock.


* Re: [PATCH 2/3] rcu/nocb: Check for migratability rather than pure preemptability
  2021-07-27 23:08   ` Frederic Weisbecker
@ 2021-07-28 19:34     ` Valentin Schneider
  2021-07-28 22:01       ` Frederic Weisbecker
  0 siblings, 1 reply; 12+ messages in thread
From: Valentin Schneider @ 2021-07-28 19:34 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: linux-kernel, linux-arm-kernel, linux-rt-users, Catalin Marinas,
	Will Deacon, Ingo Molnar, Peter Zijlstra, Thomas Gleixner,
	Steven Rostedt, Daniel Bristot de Oliveira, Paul E. McKenney,
	Josh Triplett, Mathieu Desnoyers, Lai Jiangshan, Joel Fernandes,
	Anshuman Khandual, Vincenzo Frascino, Steven Price,
	Ard Biesheuvel

On 28/07/21 01:08, Frederic Weisbecker wrote:
> On Wed, Jul 21, 2021 at 12:51:17PM +0100, Valentin Schneider wrote:
>> Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
>> ---
>>  kernel/rcu/tree_plugin.h | 3 +--
>>  1 file changed, 1 insertion(+), 2 deletions(-)
>>
>> diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
>> index ad0156b86937..6c3c4100da83 100644
>> --- a/kernel/rcu/tree_plugin.h
>> +++ b/kernel/rcu/tree_plugin.h
>> @@ -70,8 +70,7 @@ static bool rcu_rdp_is_offloaded(struct rcu_data *rdp)
>>              !(lockdep_is_held(&rcu_state.barrier_mutex) ||
>>                (IS_ENABLED(CONFIG_HOTPLUG_CPU) && lockdep_is_cpus_held()) ||
>>                rcu_lockdep_is_held_nocb(rdp) ||
>> -		  (rdp == this_cpu_ptr(&rcu_data) &&
>> -		   !(IS_ENABLED(CONFIG_PREEMPT_COUNT) && preemptible())) ||
>> +		  (rdp == this_cpu_ptr(&rcu_data) && is_pcpu_safe()) ||
>
> I fear that won't work. We really need any caller of rcu_rdp_is_offloaded()
> on the local rdp to have preemption disabled and not just migration disabled,
> because we must protect against concurrent offloaded state changes.
>
> The offloaded state is changed by a workqueue that executes on the target rdp.
>
> Here is a practical example where it matters:
>
>            CPU 0
>            -----
>            // =======> task rcuc running
>            rcu_core {
>              rcu_nocb_lock_irqsave(rdp, flags) {
>                    if (!rcu_segcblist_is_offloaded(rdp->cblist)) {
>                      // is not offloaded right now, so it's going
>                        // to just disable IRQs. Oh no wait:
>            // preemption
>            // ========> workqueue running
>            rcu_nocb_rdp_offload();
>            // ========> task rcuc resume
>                      local_irq_disable();
>                    }
>                }
>              ....
>                      rcu_nocb_unlock_irqrestore(rdp, flags) {
>                    if (rcu_segcblist_is_offloaded(rdp->cblist)) {
>                        // is offloaded right now so:
>                        raw_spin_unlock_irqrestore(rdp, flags);
>
> And that will explode because that's an impaired unlock on nocb_lock.

Harumph, that doesn't look good, thanks for pointing this out.

AFAICT PREEMPT_RT doesn't actually require disabling softirqs here (since
it forces RCU callbacks onto the RCU kthreads), but disabled softirqs seem to
be a requirement for much of the underlying functions and even some of the
callbacks (delayed_put_task_struct() ~> vfree() pays close attention to
in_interrupt() for instance).

Now, if the offloaded state was (properly) protected by a local_lock, do
you reckon we could then keep preemption enabled?

From a naive outsider PoV, rdp->nocb_lock looks like a decent candidate,
but it's a *raw* spinlock (I can't tell right now whether changing this is
a horrible idea or not), and then there's

81c0b3d724f4 ("rcu/nocb: Avoid ->nocb_lock capture by corresponding CPU")

on top...


* Re: [PATCH 2/3] rcu/nocb: Check for migratability rather than pure preemptability
  2021-07-28 19:34     ` Valentin Schneider
@ 2021-07-28 22:01       ` Frederic Weisbecker
  2021-07-29  1:04         ` Paul E. McKenney
  0 siblings, 1 reply; 12+ messages in thread
From: Frederic Weisbecker @ 2021-07-28 22:01 UTC (permalink / raw)
  To: Valentin Schneider
  Cc: linux-kernel, linux-arm-kernel, linux-rt-users, Catalin Marinas,
	Will Deacon, Ingo Molnar, Peter Zijlstra, Thomas Gleixner,
	Steven Rostedt, Daniel Bristot de Oliveira, Paul E. McKenney,
	Josh Triplett, Mathieu Desnoyers, Lai Jiangshan, Joel Fernandes,
	Anshuman Khandual, Vincenzo Frascino, Steven Price,
	Ard Biesheuvel, Sebastian Andrzej Siewior

On Wed, Jul 28, 2021 at 08:34:14PM +0100, Valentin Schneider wrote:
> On 28/07/21 01:08, Frederic Weisbecker wrote:
> > On Wed, Jul 21, 2021 at 12:51:17PM +0100, Valentin Schneider wrote:
> >> Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
> >> ---
> >>  kernel/rcu/tree_plugin.h | 3 +--
> >>  1 file changed, 1 insertion(+), 2 deletions(-)
> >>
> >> diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
> >> index ad0156b86937..6c3c4100da83 100644
> >> --- a/kernel/rcu/tree_plugin.h
> >> +++ b/kernel/rcu/tree_plugin.h
> >> @@ -70,8 +70,7 @@ static bool rcu_rdp_is_offloaded(struct rcu_data *rdp)
> >>              !(lockdep_is_held(&rcu_state.barrier_mutex) ||
> >>                (IS_ENABLED(CONFIG_HOTPLUG_CPU) && lockdep_is_cpus_held()) ||
> >>                rcu_lockdep_is_held_nocb(rdp) ||
> >> -		  (rdp == this_cpu_ptr(&rcu_data) &&
> >> -		   !(IS_ENABLED(CONFIG_PREEMPT_COUNT) && preemptible())) ||
> >> +		  (rdp == this_cpu_ptr(&rcu_data) && is_pcpu_safe()) ||
> >
> > I fear that won't work. We really need any caller of rcu_rdp_is_offloaded()
> > on the local rdp to have preemption disabled and not just migration disabled,
> > because we must protect against concurrent offloaded state changes.
> >
> > The offloaded state is changed by a workqueue that executes on the target rdp.
> >
> > Here is a practical example where it matters:
> >
> >            CPU 0
> >            -----
> >            // =======> task rcuc running
> >            rcu_core {
> >              rcu_nocb_lock_irqsave(rdp, flags) {
> >                    if (!rcu_segcblist_is_offloaded(rdp->cblist)) {
> >                      // is not offloaded right now, so it's going
> >                        // to just disable IRQs. Oh no wait:
> >            // preemption
> >            // ========> workqueue running
> >            rcu_nocb_rdp_offload();
> >            // ========> task rcuc resume
> >                      local_irq_disable();
> >                    }
> >                }
> >              ....
> >                      rcu_nocb_unlock_irqrestore(rdp, flags) {
> >                    if (rcu_segcblist_is_offloaded(rdp->cblist)) {
> >                        // is offloaded right now so:
> >                        raw_spin_unlock_irqrestore(rdp, flags);
> >
> > And that will explode because that's an impaired unlock on nocb_lock.
> 
> Harumph, that doesn't look good, thanks for pointing this out.
> 
> AFAICT PREEMPT_RT doesn't actually require to disable softirqs here (since
> it forces RCU callbacks on the RCU kthreads), but disabled softirqs seem to
> be a requirement for much of the underlying functions and even some of the
> callbacks (delayed_put_task_struct() ~> vfree() pays close attention to
> in_interrupt() for instance).
> 
> Now, if the offloaded state was (properly) protected by a local_lock, do
> you reckon we could then keep preemption enabled?

I guess we could take such a local lock on the update side
(rcu_nocb_rdp_offload) and then take it on rcuc kthread/softirqs
and maybe other places.

But we must make sure that rcu_core() is preempt-safe from a general perspective
in the first place. From a quick glance I can't find obvious issues...yet.

Paul maybe you can see something?

> 
> From a naive outsider PoV, rdp->nocb_lock looks like a decent candidate,
> but it's a *raw* spinlock (I can't tell right now whether changing this is
> a horrible idea or not), and then there's

Yeah that's not possible, nocb_lock is too low level and has to be taken with
IRQs disabled. So if we take that local_lock solution, we need a new lock.

Thanks.

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 2/3] rcu/nocb: Check for migratability rather than pure preemptability
  2021-07-28 22:01       ` Frederic Weisbecker
@ 2021-07-29  1:04         ` Paul E. McKenney
  2021-07-29 10:51           ` Valentin Schneider
  0 siblings, 1 reply; 12+ messages in thread
From: Paul E. McKenney @ 2021-07-29  1:04 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: Valentin Schneider, linux-kernel, linux-arm-kernel,
	linux-rt-users, Catalin Marinas, Will Deacon, Ingo Molnar,
	Peter Zijlstra, Thomas Gleixner, Steven Rostedt,
	Daniel Bristot de Oliveira, Josh Triplett, Mathieu Desnoyers,
	Lai Jiangshan, Joel Fernandes, Anshuman Khandual,
	Vincenzo Frascino, Steven Price, Ard Biesheuvel,
	Sebastian Andrzej Siewior

On Thu, Jul 29, 2021 at 12:01:37AM +0200, Frederic Weisbecker wrote:
> On Wed, Jul 28, 2021 at 08:34:14PM +0100, Valentin Schneider wrote:
> > On 28/07/21 01:08, Frederic Weisbecker wrote:
> > > On Wed, Jul 21, 2021 at 12:51:17PM +0100, Valentin Schneider wrote:
> > >> Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
> > >> ---
> > >>  kernel/rcu/tree_plugin.h | 3 +--
> > >>  1 file changed, 1 insertion(+), 2 deletions(-)
> > >>
> > >> diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
> > >> index ad0156b86937..6c3c4100da83 100644
> > >> --- a/kernel/rcu/tree_plugin.h
> > >> +++ b/kernel/rcu/tree_plugin.h
> > >> @@ -70,8 +70,7 @@ static bool rcu_rdp_is_offloaded(struct rcu_data *rdp)
> > >>              !(lockdep_is_held(&rcu_state.barrier_mutex) ||
> > >>                (IS_ENABLED(CONFIG_HOTPLUG_CPU) && lockdep_is_cpus_held()) ||
> > >>                rcu_lockdep_is_held_nocb(rdp) ||
> > >> -		  (rdp == this_cpu_ptr(&rcu_data) &&
> > >> -		   !(IS_ENABLED(CONFIG_PREEMPT_COUNT) && preemptible())) ||
> > >> +		  (rdp == this_cpu_ptr(&rcu_data) && is_pcpu_safe()) ||
> > >
> > > I fear that won't work. We really need any caller of rcu_rdp_is_offloaded()
> > > on the local rdp to have preemption disabled and not just migration disabled,
> > > because we must protect against concurrent offloaded state changes.
> > >
> > > The offloaded state is changed by a workqueue that executes on the target rdp.
> > >
> > > Here is a practical example where it matters:
> > >
> > >            CPU 0
> > >            -----
> > >            // =======> task rcuc running
> > >            rcu_core {
> > >              rcu_nocb_lock_irqsave(rdp, flags) {
> > >                    if (!rcu_segcblist_is_offloaded(rdp->cblist)) {
> > >                      // is not offloaded right now, so it's going
> > >                        // to just disable IRQs. Oh no wait:
> > >            // preemption
> > >            // ========> workqueue running
> > >            rcu_nocb_rdp_offload();
> > >            // ========> task rcuc resume
> > >                      local_irq_disable();
> > >                    }
> > >                }
> > >              ....
> > >                      rcu_nocb_unlock_irqrestore(rdp, flags) {
> > >                    if (rcu_segcblist_is_offloaded(rdp->cblist)) {
> > >                        // is offloaded right now so:
> > >                        raw_spin_unlock_irqrestore(rdp, flags);
> > >
> > > And that will explode because that's an unbalanced unlock on nocb_lock.
> > 
> > Harumph, that doesn't look good, thanks for pointing this out.
> > 
> > AFAICT PREEMPT_RT doesn't actually require disabling softirqs here (since
> > it forces RCU callbacks on the RCU kthreads), but disabled softirqs seem to
> > be a requirement for many of the underlying functions and even some of the
> > callbacks (delayed_put_task_struct() ~> vfree() pays close attention to
> > in_interrupt() for instance).
> > 
> > Now, if the offloaded state was (properly) protected by a local_lock, do
> > you reckon we could then keep preemption enabled?
> 
> I guess we could take such a local lock on the update side
> (rcu_nocb_rdp_offload) and then take it on rcuc kthread/softirqs
> and maybe other places.
> 
> But we must make sure that rcu_core() is preempt-safe from a general perspective
> in the first place. From a quick glance I can't find obvious issues...yet.
> 
> Paul maybe you can see something?

Let's see...

o	Extra context switches in rcu_core() mean extra quiescent
	states.  It therefore might be necessary to wrap rcu_core()
	in an rcu_read_lock() / rcu_read_unlock() pair, because
	otherwise an RCU grace period won't wait for rcu_core().

	Actually, better have local_bh_disable() imply
	rcu_read_lock() and local_bh_enable() imply rcu_read_unlock().
	But I would hope that this already happened.

o	The rcu_preempt_deferred_qs() check should still be fine,
	unless there is a raw_bh_disable() in -rt. 

o	The set_tsk_need_resched() and set_preempt_need_resched()
	might preempt immediately.  I cannot think of a problem
	with that, but careful testing is clearly in order.

o	The values checked by rcu_check_quiescent_state() could now
	change while this function is running.	I don't immediately
	see a problematic sequence of events, but here be dragons.
	I therefore suggest disabling preemption across this function.
	Or if that is impossible, taking a very careful look at the
	proposed expansion of the state space of this function.

o	I don't see any new races in the grace-period/callback check.
	New callbacks can appear in interrupt handlers, after all.

o	The rcu_check_gp_start_stall() function looks similarly
	unproblematic.

o	Callback invocation can now be preempted, but then again it
	recently started being concurrent, so this should be no
	added risk over offloading/de-offloading.

o	I don't see any problem with do_nocb_deferred_wakeup().

o	The CONFIG_RCU_STRICT_GRACE_PERIOD check should not be
	impacted.

So some adjustments might be needed, but I don't see a need for
major surgery.

This of course might be a failure of imagination on my part, so it
wouldn't hurt to double-check my observations.

> > From a naive outsider PoV, rdp->nocb_lock looks like a decent candidate,
> > but it's a *raw* spinlock (I can't tell right now whether changing this is
> > a horrible idea or not), and then there's
> 
> Yeah that's not possible, nocb_lock is too low level and has to be taken with
> IRQs disabled. So if we take that local_lock solution, we need a new lock.

No argument here!

							Thanx, Paul
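[Editor's note: the local_lock idea Frederic and Paul agree is workable could be
sketched roughly as below. All names here are hypothetical and this fragment is
not buildable against the real RCU internals; it only illustrates the shape. On
PREEMPT_RT a local_lock is a per-CPU sleepable lock that disables migration
rather than preemption, which is what makes it a candidate here.]

```
/* Hypothetical sketch, names invented; not part of the posted series. */
static DEFINE_PER_CPU(local_lock_t, nocb_state_lock) =
	INIT_LOCAL_LOCK(nocb_state_lock);

/* Reader side, e.g. in rcu_core(): pin the offloaded state while acting
 * on it, so an offload/de-offload worker cannot flip it mid-sequence. */
local_lock(&nocb_state_lock);
if (rcu_segcblist_is_offloaded(&rdp->cblist)) {
	/* ... offloaded path ... */
} else {
	/* ... non-offloaded path ... */
}
local_unlock(&nocb_state_lock);

/* Update side: rcu_nocb_rdp_offload() would take the same per-CPU lock
 * on the target CPU before changing the offloaded state. */
```

Whether everything under rcu_core() then tolerates running preemptibly is the
open question that Paul's checklist above walks through.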

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 2/3] rcu/nocb: Check for migratability rather than pure preemptability
  2021-07-29  1:04         ` Paul E. McKenney
@ 2021-07-29 10:51           ` Valentin Schneider
  0 siblings, 0 replies; 12+ messages in thread
From: Valentin Schneider @ 2021-07-29 10:51 UTC (permalink / raw)
  To: paulmck, Frederic Weisbecker
  Cc: linux-kernel, linux-arm-kernel, linux-rt-users, Catalin Marinas,
	Will Deacon, Ingo Molnar, Peter Zijlstra, Thomas Gleixner,
	Steven Rostedt, Daniel Bristot de Oliveira, Josh Triplett,
	Mathieu Desnoyers, Lai Jiangshan, Joel Fernandes,
	Anshuman Khandual, Vincenzo Frascino, Steven Price,
	Ard Biesheuvel, Sebastian Andrzej Siewior

On 28/07/21 18:04, Paul E. McKenney wrote:
> On Thu, Jul 29, 2021 at 12:01:37AM +0200, Frederic Weisbecker wrote:
>> On Wed, Jul 28, 2021 at 08:34:14PM +0100, Valentin Schneider wrote:
>> > Now, if the offloaded state was (properly) protected by a local_lock, do
>> > you reckon we could then keep preemption enabled?
>>
>> I guess we could take such a local lock on the update side
>> (rcu_nocb_rdp_offload) and then take it on rcuc kthread/softirqs
>> and maybe other places.
>>
>> But we must make sure that rcu_core() is preempt-safe from a general perspective
>> in the first place. From a quick glance I can't find obvious issues...yet.
>>
>> Paul maybe you can see something?
>
> Let's see...
>
> o	Extra context switches in rcu_core() mean extra quiescent
>       states.  It therefore might be necessary to wrap rcu_core()
>       in an rcu_read_lock() / rcu_read_unlock() pair, because
>       otherwise an RCU grace period won't wait for rcu_core().
>
>       Actually, better have local_bh_disable() imply
>       rcu_read_lock() and local_bh_enable() imply rcu_read_unlock().
>       But I would hope that this already happened.

It does look like it.

>
> o	The rcu_preempt_deferred_qs() check should still be fine,
>       unless there is a raw_bh_disable() in -rt.
>
> o	The set_tsk_need_resched() and set_preempt_need_resched()
>       might preempt immediately.  I cannot think of a problem
>       with that, but careful testing is clearly in order.
>
> o	The values checked by rcu_check_quiescent_state() could now
>       change while this function is running.	I don't immediately
>       see a problematic sequence of events, but here be dragons.
>       I therefore suggest disabling preemption across this function.
>       Or if that is impossible, taking a very careful look at the
>       proposed expansion of the state space of this function.
>
> o	I don't see any new races in the grace-period/callback check.
>       New callbacks can appear in interrupt handlers, after all.
>
> o	The rcu_check_gp_start_stall() function looks similarly
>       unproblematic.
>
> o	Callback invocation can now be preempted, but then again it
>       recently started being concurrent, so this should be no
>       added risk over offloading/de-offloading.
>
> o	I don't see any problem with do_nocb_deferred_wakeup().
>
> o	The CONFIG_RCU_STRICT_GRACE_PERIOD check should not be
>       impacted.
>
> So some adjustments might be needed, but I don't see a need for
> major surgery.
>
> This of course might be a failure of imagination on my part, so it
> wouldn't hurt to double-check my observations.
>

I'll go poke around, thank you both!
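[Editor's note: the cover letter describes the is_pcpu_safe() helper from patch 1
as checking affinity and migrate-disable in addition to preemptability, and the
diff hunk quoted earlier shows the old check it replaces. A possible
reconstruction, for illustration only; this is inferred from that description,
not copied from the patch:]

```
static inline bool is_pcpu_safe(void)
{
#ifdef CONFIG_SMP
	/* Affined to a single CPU: cannot be migrated. */
	if (current->nr_cpus_allowed == 1)
		return true;
	/* Inside a migrate_disable() region: preemptible, but not movable. */
	if (is_migration_disabled(current))
		return true;
	/* Otherwise only non-preemptible contexts are per-CPU safe. */
	return !(IS_ENABLED(CONFIG_PREEMPT_COUNT) && preemptible());
#else
	return true;
#endif
}
```

As Frederic's example shows, such a check only guarantees the task stays on its
CPU; it does not serialize against other writers of per-CPU state, such as the
offload workqueue, which is the distinction this subthread settles on.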

^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2021-07-29 10:51 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-07-21 11:51 [PATCH 0/3] sched: migrate_disable() vs per-CPU access safety checks Valentin Schneider
2021-07-21 11:51 ` [PATCH 1/3] sched: Introduce is_pcpu_safe() Valentin Schneider
2021-07-27 16:23   ` Paul E. McKenney
2021-07-21 11:51 ` [PATCH 2/3] rcu/nocb: Check for migratability rather than pure preemptability Valentin Schneider
2021-07-27 16:24   ` Paul E. McKenney
2021-07-27 23:08   ` Frederic Weisbecker
2021-07-28 19:34     ` Valentin Schneider
2021-07-28 22:01       ` Frederic Weisbecker
2021-07-29  1:04         ` Paul E. McKenney
2021-07-29 10:51           ` Valentin Schneider
2021-07-21 11:51 ` [PATCH 3/3] arm64: mm: Make arch_faults_on_old_pte() check for migratability Valentin Schneider
2021-07-27 19:45 ` [PATCH 0/3] sched: migrate_disable() vs per-CPU access safety checks Thomas Gleixner
