* [PATCH rcu 1/8] rcu: Remove duplicate RCU exp QS report from rcu_report_dead()
2022-10-19 22:46 [PATCH rcu 0/8] Miscellaneous fixes for v6.2 Paul E. McKenney
@ 2022-10-19 22:46 ` Paul E. McKenney
2022-10-19 22:46 ` [PATCH rcu 2/8] rcu: Synchronize ->qsmaskinitnext in rcu_boost_kthread_setaffinity() Paul E. McKenney
` (6 subsequent siblings)
7 siblings, 0 replies; 17+ messages in thread
From: Paul E. McKenney @ 2022-10-19 22:46 UTC (permalink / raw)
To: rcu; +Cc: linux-kernel, kernel-team, rostedt, Zqiang, Paul E . McKenney
From: Zqiang <qiang1.zhang@intel.com>
The rcu_report_dead() function invokes rcu_report_exp_rdp() in order
to force an immediate expedited quiescent state on the outgoing
CPU, and then it invokes rcu_preempt_deferred_qs() to provide any
required deferred quiescent state of either sort. Because the call to
rcu_preempt_deferred_qs() provides the expedited RCU quiescent state if
requested, the call to rcu_report_exp_rdp() is potentially redundant.
One possible issue is a concurrent start of a new expedited RCU
grace period, but this situation is already handled correctly
by __sync_rcu_exp_select_node_cpus(). This function will detect
that the CPU is going offline via the error return from its call
to smp_call_function_single(). In that case, it will retry, and
eventually stop retrying due to rcu_report_exp_rdp() clearing the
->qsmaskinitnext bit corresponding to the target CPU. As a result,
__sync_rcu_exp_select_node_cpus() will report the necessary quiescent
state after dealing with any remaining CPU.
This change assumes that control does not enter rcu_report_dead() within
an RCU read-side critical section, but then again, the surviving call
to rcu_preempt_deferred_qs() has always made this assumption.
This commit therefore removes the call to rcu_report_exp_rdp(), thus
relying on rcu_preempt_deferred_qs() to handle both normal and expedited
quiescent states.
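The retry logic described above can be modeled in userspace C. Everything below is an illustrative sketch, not kernel API: cpu_callable stands in for whether smp_call_function_single() would succeed, and initnext_bit stands in for the target CPU's ->qsmaskinitnext bit that the offline path clears.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Userspace sketch of __sync_rcu_exp_select_node_cpus()'s retry path.
 * All names here are illustrative stand-ins, not kernel API.
 */
static bool cpu_callable;        /* would the cross-call succeed? */
static bool initnext_bit = true; /* target still appears online */

static int try_cross_call(void)
{
	return cpu_callable ? 0 : -1; /* models smp_call_function_single() */
}

/* Models the offline path clearing the target's ->qsmaskinitnext bit. */
static void offline_clears_bit(void)
{
	initnext_bit = false;
}

/* Returns true when the caller must report the QS on the target's behalf. */
static bool select_cpu(void)
{
	for (;;) {
		if (try_cross_call() == 0)
			return false;      /* target reports its own QS */
		if (!initnext_bit)
			return true;       /* target offline: report for it */
		/* Otherwise retry; meanwhile the offline path runs: */
		offline_clears_bit();
	}
}
```

The loop terminates because a failing cross-call implies the target is going offline, and the offline path eventually clears the bit, at which point the caller reports the quiescent state itself.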
Signed-off-by: Zqiang <qiang1.zhang@intel.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
kernel/rcu/tree.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 6bb8e72bc8151..0ca21ac0f0648 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -4276,8 +4276,6 @@ void rcu_report_dead(unsigned int cpu)
// Do any dangling deferred wakeups.
do_nocb_deferred_wakeup(rdp);
- /* QS for any half-done expedited grace period. */
- rcu_report_exp_rdp(rdp);
rcu_preempt_deferred_qs(current);
/* Remove outgoing CPU from mask in the leaf rcu_node structure. */
--
2.31.1.189.g2e36527f23
^ permalink raw reply related [flat|nested] 17+ messages in thread
* [PATCH rcu 2/8] rcu: Synchronize ->qsmaskinitnext in rcu_boost_kthread_setaffinity()
2022-10-19 22:46 [PATCH rcu 0/8] Miscellaneous fixes for v6.2 Paul E. McKenney
2022-10-19 22:46 ` [PATCH rcu 1/8] rcu: Remove duplicate RCU exp QS report from rcu_report_dead() Paul E. McKenney
@ 2022-10-19 22:46 ` Paul E. McKenney
2022-10-19 22:46 ` [PATCH rcu 3/8] rcu: Remove unused 'cpu' in rcu_virt_note_context_switch() Paul E. McKenney
` (5 subsequent siblings)
7 siblings, 0 replies; 17+ messages in thread
From: Paul E. McKenney @ 2022-10-19 22:46 UTC (permalink / raw)
To: rcu
Cc: linux-kernel, kernel-team, rostedt, Pingfan Liu, David Woodhouse,
Frederic Weisbecker, Neeraj Upadhyay, Josh Triplett,
Mathieu Desnoyers, Lai Jiangshan, Joel Fernandes,
Jason A. Donenfeld, Paul E . McKenney
From: Pingfan Liu <kernelfans@gmail.com>
If rcutree_online_cpu() and rcutree_dead_cpu() are invoked concurrently,
the following rcu_boost_kthread_setaffinity() race can occur:
        CPU 1                                   CPU 2
mask = rcu_rnp_online_cpus(rnp);
...
                                        mask = rcu_rnp_online_cpus(rnp);
                                        ...
                                        set_cpus_allowed_ptr(t, cm);
set_cpus_allowed_ptr(t, cm);
This results in CPU2's update being overwritten by that of CPU1, and
thus the possibility of ->boost_kthread_task continuing to run on a
to-be-offlined CPU.
This commit therefore eliminates this race by relying on the pre-existing
acquisition of ->boost_kthread_mutex to serialize the full process of
changing the affinity of ->boost_kthread_task.
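The shape of the fix can be sketched in userspace C with a pthread mutex. All names below are illustrative stand-ins for the kernel's types and helpers, not actual kernel API; the point is only that the mask read moves inside the mutex-protected region.

```c
#include <assert.h>
#include <pthread.h>

/*
 * Userspace sketch of the fix: read the online mask only after taking
 * the mutex, so a racing caller cannot apply a stale mask afterward.
 */
static pthread_mutex_t boost_kthread_mutex = PTHREAD_MUTEX_INITIALIZER;
static unsigned long online_mask = 0xfUL; /* models rcu_rnp_online_cpus() */
static unsigned long applied_mask;        /* models the kthread's cpumask */

static void setaffinity(int outgoingcpu)
{
	unsigned long mask, cm;

	pthread_mutex_lock(&boost_kthread_mutex);
	mask = online_mask; /* the fix: read under the mutex, not before */
	cm = mask;
	if (outgoingcpu >= 0)
		cm &= ~(1UL << outgoingcpu);
	applied_mask = cm;  /* models set_cpus_allowed_ptr() */
	pthread_mutex_unlock(&boost_kthread_mutex);
}
```

With the read inside the critical section, whichever caller runs last applies a mask at least as fresh as its lock acquisition, so an update can no longer be overwritten by a stale one.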
Signed-off-by: Pingfan Liu <kernelfans@gmail.com>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Cc: Frederic Weisbecker <frederic@kernel.org>
Cc: Neeraj Upadhyay <quic_neeraju@quicinc.com>
Cc: Josh Triplett <josh@joshtriplett.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Lai Jiangshan <jiangshanlai@gmail.com>
Cc: Joel Fernandes <joel@joelfernandes.org>
Cc: "Jason A. Donenfeld" <Jason@zx2c4.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
kernel/rcu/tree_plugin.h | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index e3142ee35fc6a..7b0fe741a0886 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -1221,11 +1221,13 @@ static void rcu_spawn_one_boost_kthread(struct rcu_node *rnp)
* We don't include outgoingcpu in the affinity set, use -1 if there is
* no outgoing CPU. If there are no CPUs left in the affinity set,
* this function allows the kthread to execute on any CPU.
+ *
+ * Any future concurrent calls are serialized via ->boost_kthread_mutex.
*/
static void rcu_boost_kthread_setaffinity(struct rcu_node *rnp, int outgoingcpu)
{
struct task_struct *t = rnp->boost_kthread_task;
- unsigned long mask = rcu_rnp_online_cpus(rnp);
+ unsigned long mask;
cpumask_var_t cm;
int cpu;
@@ -1234,6 +1236,7 @@ static void rcu_boost_kthread_setaffinity(struct rcu_node *rnp, int outgoingcpu)
if (!zalloc_cpumask_var(&cm, GFP_KERNEL))
return;
mutex_lock(&rnp->boost_kthread_mutex);
+ mask = rcu_rnp_online_cpus(rnp);
for_each_leaf_node_possible_cpu(rnp, cpu)
if ((mask & leaf_node_cpu_bit(rnp, cpu)) &&
cpu != outgoingcpu)
--
2.31.1.189.g2e36527f23
* [PATCH rcu 3/8] rcu: Remove unused 'cpu' in rcu_virt_note_context_switch()
2022-10-19 22:46 [PATCH rcu 0/8] Miscellaneous fixes for v6.2 Paul E. McKenney
2022-10-19 22:46 ` [PATCH rcu 1/8] rcu: Remove duplicate RCU exp QS report from rcu_report_dead() Paul E. McKenney
2022-10-19 22:46 ` [PATCH rcu 2/8] rcu: Synchronize ->qsmaskinitnext in rcu_boost_kthread_setaffinity() Paul E. McKenney
@ 2022-10-19 22:46 ` Paul E. McKenney
2022-10-19 22:46 ` [PATCH rcu 4/8] rcu: Use READ_ONCE() for lockless read of rnp->qsmask Paul E. McKenney
` (4 subsequent siblings)
7 siblings, 0 replies; 17+ messages in thread
From: Paul E. McKenney @ 2022-10-19 22:46 UTC (permalink / raw)
To: rcu
Cc: linux-kernel, kernel-team, rostedt, Zeng Heng, Mukesh Ojha,
Paul E . McKenney
From: Zeng Heng <zengheng4@huawei.com>
This commit removes the unused function argument 'cpu'. This does not
change functionality, but might save a cycle or two.
Signed-off-by: Zeng Heng <zengheng4@huawei.com>
Acked-by: Mukesh Ojha <quic_mojha@quicinc.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
include/linux/kvm_host.h | 2 +-
include/linux/rcutiny.h | 2 +-
include/linux/rcutree.h | 2 +-
3 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 32f259fa58013..381b92d146c71 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -416,7 +416,7 @@ static __always_inline void guest_context_enter_irqoff(void)
*/
if (!context_tracking_guest_enter()) {
instrumentation_begin();
- rcu_virt_note_context_switch(smp_processor_id());
+ rcu_virt_note_context_switch();
instrumentation_end();
}
}
diff --git a/include/linux/rcutiny.h b/include/linux/rcutiny.h
index 768196a5f39d6..9bc025aa79a30 100644
--- a/include/linux/rcutiny.h
+++ b/include/linux/rcutiny.h
@@ -142,7 +142,7 @@ static inline int rcu_needs_cpu(void)
* Take advantage of the fact that there is only one CPU, which
* allows us to ignore virtualization-based context switches.
*/
-static inline void rcu_virt_note_context_switch(int cpu) { }
+static inline void rcu_virt_note_context_switch(void) { }
static inline void rcu_cpu_stall_reset(void) { }
static inline int rcu_jiffies_till_stall_check(void) { return 21 * HZ; }
static inline void rcu_irq_exit_check_preempt(void) { }
diff --git a/include/linux/rcutree.h b/include/linux/rcutree.h
index 5efb51486e8af..70795386b9ffa 100644
--- a/include/linux/rcutree.h
+++ b/include/linux/rcutree.h
@@ -27,7 +27,7 @@ void rcu_cpu_stall_reset(void);
* wrapper around rcu_note_context_switch(), which allows TINY_RCU
* to save a few bytes. The caller must have disabled interrupts.
*/
-static inline void rcu_virt_note_context_switch(int cpu)
+static inline void rcu_virt_note_context_switch(void)
{
rcu_note_context_switch(false);
}
--
2.31.1.189.g2e36527f23
* [PATCH rcu 4/8] rcu: Use READ_ONCE() for lockless read of rnp->qsmask
2022-10-19 22:46 [PATCH rcu 0/8] Miscellaneous fixes for v6.2 Paul E. McKenney
` (2 preceding siblings ...)
2022-10-19 22:46 ` [PATCH rcu 3/8] rcu: Remove unused 'cpu' in rcu_virt_note_context_switch() Paul E. McKenney
@ 2022-10-19 22:46 ` Paul E. McKenney
2022-10-19 22:46 ` [PATCH rcu 5/8] slab: Explain why SLAB_DESTROY_BY_RCU reference before locking Paul E. McKenney
` (3 subsequent siblings)
7 siblings, 0 replies; 17+ messages in thread
From: Paul E. McKenney @ 2022-10-19 22:46 UTC (permalink / raw)
To: rcu
Cc: linux-kernel, kernel-team, rostedt, Joel Fernandes (Google),
Frederic Weisbecker, Paul E . McKenney
From: "Joel Fernandes (Google)" <joel@joelfernandes.org>
The rnp->qsmask field is accessed locklessly from rcutree_dying_cpu().
This commit therefore wraps that access in READ_ONCE(), which avoids
load tearing due to concurrent updates, silences KCSAN, and preserves
the sanity of people reading the mask in tracing output.
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
kernel/rcu/tree.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 0ca21ac0f0648..5ec97e3f7468f 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2106,7 +2106,7 @@ int rcutree_dying_cpu(unsigned int cpu)
if (!IS_ENABLED(CONFIG_HOTPLUG_CPU))
return 0;
- blkd = !!(rnp->qsmask & rdp->grpmask);
+ blkd = !!(READ_ONCE(rnp->qsmask) & rdp->grpmask);
trace_rcu_grace_period(rcu_state.name, READ_ONCE(rnp->gp_seq),
blkd ? TPS("cpuofl-bgp") : TPS("cpuofl"));
return 0;
--
2.31.1.189.g2e36527f23
* [PATCH rcu 5/8] slab: Explain why SLAB_DESTROY_BY_RCU reference before locking
2022-10-19 22:46 [PATCH rcu 0/8] Miscellaneous fixes for v6.2 Paul E. McKenney
` (3 preceding siblings ...)
2022-10-19 22:46 ` [PATCH rcu 4/8] rcu: Use READ_ONCE() for lockless read of rnp->qsmask Paul E. McKenney
@ 2022-10-19 22:46 ` Paul E. McKenney
2022-10-20 7:10 ` Vlastimil Babka
2022-10-21 7:44 ` Christoph Lameter
2022-10-19 22:46 ` [PATCH rcu 6/8] rcu: Remove rcu_is_idle_cpu() Paul E. McKenney
` (2 subsequent siblings)
7 siblings, 2 replies; 17+ messages in thread
From: Paul E. McKenney @ 2022-10-19 22:46 UTC (permalink / raw)
To: rcu
Cc: linux-kernel, kernel-team, rostedt, Paul E. McKenney,
Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
Andrew Morton, Vlastimil Babka, Roman Gushchin, Hyeonggon Yoo,
linux-mm
It is not obvious to the casual user why it is absolutely necessary to
acquire a reference to a SLAB_DESTROY_BY_RCU structure before acquiring
a lock in that structure. Therefore, add a comment explaining this point.
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: <linux-mm@kvack.org>
---
include/linux/slab.h | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 90877fcde70bd..446303e385265 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -76,6 +76,12 @@
* rcu_read_lock before reading the address, then rcu_read_unlock after
* taking the spinlock within the structure expected at that address.
*
+ * Note that it is not possible to acquire a lock within a structure
+ * allocated with SLAB_DESTROY_BY_RCU without first acquiring a reference
+ * as described above. The reason is that SLAB_DESTROY_BY_RCU pages are
+ * not zeroed before being given to the slab, which means that any locks
+ * must be initialized after each and every kmem_cache_alloc().
+ *
* Note that SLAB_TYPESAFE_BY_RCU was originally named SLAB_DESTROY_BY_RCU.
*/
/* Defer freeing slabs to RCU */
--
2.31.1.189.g2e36527f23
* Re: [PATCH rcu 5/8] slab: Explain why SLAB_DESTROY_BY_RCU reference before locking
2022-10-19 22:46 ` [PATCH rcu 5/8] slab: Explain why SLAB_DESTROY_BY_RCU reference before locking Paul E. McKenney
@ 2022-10-20 7:10 ` Vlastimil Babka
2022-10-20 16:31 ` Paul E. McKenney
2022-10-21 7:44 ` Christoph Lameter
1 sibling, 1 reply; 17+ messages in thread
From: Vlastimil Babka @ 2022-10-20 7:10 UTC (permalink / raw)
To: Paul E. McKenney, rcu
Cc: linux-kernel, kernel-team, rostedt, Christoph Lameter,
Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
Roman Gushchin, Hyeonggon Yoo, linux-mm
On 10/20/22 00:46, Paul E. McKenney wrote:
> It is not obvious to the casual user why it is absolutely necessary to
> acquire a reference to a SLAB_DESTROY_BY_RCU structure before acquiring
> a lock in that structure. Therefore, add a comment explaining this point.
s/SLAB_DESTROY_BY_RCU/SLAB_TYPESAFE_BY_RCU/ in subject, commit log and the
added comment? :)
> Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
> Cc: Christoph Lameter <cl@linux.com>
> Cc: Pekka Enberg <penberg@kernel.org>
> Cc: David Rientjes <rientjes@google.com>
> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Roman Gushchin <roman.gushchin@linux.dev>
> Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> Cc: <linux-mm@kvack.org>
> ---
> include/linux/slab.h | 6 ++++++
> 1 file changed, 6 insertions(+)
>
> diff --git a/include/linux/slab.h b/include/linux/slab.h
> index 90877fcde70bd..446303e385265 100644
> --- a/include/linux/slab.h
> +++ b/include/linux/slab.h
> @@ -76,6 +76,12 @@
> * rcu_read_lock before reading the address, then rcu_read_unlock after
> * taking the spinlock within the structure expected at that address.
> *
> + * Note that it is not possible to acquire a lock within a structure
> + * allocated with SLAB_DESTROY_BY_RCU without first acquiring a reference
> + * as described above. The reason is that SLAB_DESTROY_BY_RCU pages are
> + * not zeroed before being given to the slab, which means that any locks
> > + * must be initialized after each and every kmem_cache_alloc().
> + *
Wonder if slab caches with a constructor should be OK here as AFAIK it
should mean the object has to be in the initialized state both when
allocated and freed?
> * Note that SLAB_TYPESAFE_BY_RCU was originally named SLAB_DESTROY_BY_RCU.
> */
> /* Defer freeing slabs to RCU */
* Re: [PATCH rcu 5/8] slab: Explain why SLAB_DESTROY_BY_RCU reference before locking
2022-10-20 7:10 ` Vlastimil Babka
@ 2022-10-20 16:31 ` Paul E. McKenney
0 siblings, 0 replies; 17+ messages in thread
From: Paul E. McKenney @ 2022-10-20 16:31 UTC (permalink / raw)
To: Vlastimil Babka
Cc: rcu, linux-kernel, kernel-team, rostedt, Christoph Lameter,
Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
Roman Gushchin, Hyeonggon Yoo, linux-mm
On Thu, Oct 20, 2022 at 09:10:49AM +0200, Vlastimil Babka wrote:
> On 10/20/22 00:46, Paul E. McKenney wrote:
> > It is not obvious to the casual user why it is absolutely necessary to
> > acquire a reference to a SLAB_DESTROY_BY_RCU structure before acquiring
> > a lock in that structure. Therefore, add a comment explaining this point.
>
> s/SLAB_DESTROY_BY_RCU/SLAB_TYPESAFE_BY_RCU/ in subject, commit log and the
> added comment? :)
Boy, I was certainly living in the past when I did this patch, wasn't I?
Thank you, will fix on next rebase.
> > Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
> > Cc: Christoph Lameter <cl@linux.com>
> > Cc: Pekka Enberg <penberg@kernel.org>
> > Cc: David Rientjes <rientjes@google.com>
> > Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> > Cc: Andrew Morton <akpm@linux-foundation.org>
> > Cc: Vlastimil Babka <vbabka@suse.cz>
> > Cc: Roman Gushchin <roman.gushchin@linux.dev>
> > Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> > Cc: <linux-mm@kvack.org>
> > ---
> > include/linux/slab.h | 6 ++++++
> > 1 file changed, 6 insertions(+)
> >
> > diff --git a/include/linux/slab.h b/include/linux/slab.h
> > index 90877fcde70bd..446303e385265 100644
> > --- a/include/linux/slab.h
> > +++ b/include/linux/slab.h
> > @@ -76,6 +76,12 @@
> > * rcu_read_lock before reading the address, then rcu_read_unlock after
> > * taking the spinlock within the structure expected at that address.
> > *
> > + * Note that it is not possible to acquire a lock within a structure
> > + * allocated with SLAB_DESTROY_BY_RCU without first acquiring a reference
> > + * as described above. The reason is that SLAB_DESTROY_BY_RCU pages are
> > + * not zeroed before being given to the slab, which means that any locks
> > + * must be initialized after each and every kmem_cache_alloc().
> > + *
>
> Wonder if slab caches with a constructor should be OK here as AFAIK it
> should mean the object has to be in the initialized state both when
> allocated and freed?
It does look that way, thank you!
And __i915_request_ctor(), sighand_ctor(), and anon_vma_ctor() actually
do this, initializing a lock in the process.
The ctor function could just initialize the locks, and all would be well.
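Such a ctor-initialized lock can be modeled in userspace C with a fixed object pool. Everything below (the pool, its helper names) is an illustrative analogy assumed for this sketch, not slab API: the lock is set up once when the backing storage is created, and alloc/free deliberately leave it alone, so a reader reaching the object through a stale pointer still finds a valid lock.

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

/* Userspace analogy of a ctor-initialized lock in a type-safe cache. */
#define POOL_SIZE 4

struct obj {
	pthread_mutex_t lock; /* initialized once, never re-initialized */
	bool in_use;
};

static struct obj pool[POOL_SIZE]; /* models a slab's backing storage */

static void pool_ctor(void) /* models the kmem_cache_create() ctor */
{
	for (int i = 0; i < POOL_SIZE; i++)
		pthread_mutex_init(&pool[i].lock, NULL);
}

static struct obj *pool_alloc(void) /* note: does not touch ->lock */
{
	for (int i = 0; i < POOL_SIZE; i++)
		if (!pool[i].in_use) {
			pool[i].in_use = true;
			return &pool[i];
		}
	return NULL;
}

static void pool_free(struct obj *o) /* ditto: ->lock stays valid */
{
	o->in_use = false;
}
```

Because free and re-allocation never reinitialize the mutex, acquiring it across an object's free/alloc lifetime is always safe, which is the property the type-safe-by-RCU readers rely on.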
In addition, this makes sequence-lock-like approaches a bit easier, as in
"just use a sequence lock".
I will update with attribution.
Thanx, Paul
> > * Note that SLAB_TYPESAFE_BY_RCU was originally named SLAB_DESTROY_BY_RCU.
> > */
> > /* Defer freeing slabs to RCU */
>
* Re: [PATCH rcu 5/8] slab: Explain why SLAB_DESTROY_BY_RCU reference before locking
2022-10-19 22:46 ` [PATCH rcu 5/8] slab: Explain why SLAB_DESTROY_BY_RCU reference before locking Paul E. McKenney
2022-10-20 7:10 ` Vlastimil Babka
@ 2022-10-21 7:44 ` Christoph Lameter
2022-10-21 13:43 ` Paul E. McKenney
1 sibling, 1 reply; 17+ messages in thread
From: Christoph Lameter @ 2022-10-21 7:44 UTC (permalink / raw)
To: Paul E. McKenney
Cc: rcu, linux-kernel, kernel-team, rostedt, Pekka Enberg,
David Rientjes, Joonsoo Kim, Andrew Morton, Vlastimil Babka,
Roman Gushchin, Hyeonggon Yoo, linux-mm
On Wed, 19 Oct 2022, Paul E. McKenney wrote:
> It is not obvious to the casual user why it is absolutely necessary to
> acquire a reference to a SLAB_DESTROY_BY_RCU structure before acquiring
> a lock in that structure. Therefore, add a comment explaining this point.
Sorry but this is not correct and difficult to comprehend.
1. You do not need a reference to a slab object after it was allocated.
Objects must be properly protected by rcu_locks.
2. Locks are initialized once on slab allocation via a constructor (*not* on object allocation via kmem_cache_alloc)
3. Modifying locks at allocation/free is not possible since references to
these objects may still persist after free and before alloc.
4. The old term SLAB_DESTROY_BY_RCU is used here.
* Re: [PATCH rcu 5/8] slab: Explain why SLAB_DESTROY_BY_RCU reference before locking
2022-10-21 7:44 ` Christoph Lameter
@ 2022-10-21 13:43 ` Paul E. McKenney
2022-10-21 13:50 ` Vlastimil Babka
0 siblings, 1 reply; 17+ messages in thread
From: Paul E. McKenney @ 2022-10-21 13:43 UTC (permalink / raw)
To: Christoph Lameter
Cc: rcu, linux-kernel, kernel-team, rostedt, Pekka Enberg,
David Rientjes, Joonsoo Kim, Andrew Morton, Vlastimil Babka,
Roman Gushchin, Hyeonggon Yoo, linux-mm
On Fri, Oct 21, 2022 at 09:44:23AM +0200, Christoph Lameter wrote:
> On Wed, 19 Oct 2022, Paul E. McKenney wrote:
>
> > It is not obvious to the casual user why it is absolutely necessary to
> > acquire a reference to a SLAB_DESTROY_BY_RCU structure before acquiring
> > a lock in that structure. Therefore, add a comment explaining this point.
>
> Sorry but this is not correct and difficult to comprehend.
>
> 1. You do not need a reference to a slab object after it was allocated.
> Objects must be properly protected by rcu_locks.
>
> 2. Locks are initialized once on slab allocation via a constructor (*not* on object allocation via kmem_cache_alloc)
>
> 3. Modifying locks at allocation/free is not possible since references to
> these objects may still persist after free and before alloc.
>
> 4. The old term SLAB_DESTROY_BY_RCU is used here.
Thank you for looking this over, but Vlastimil beat you to it. How does
the update below look?
Thanx, Paul
------------------------------------------------------------------------
commit ff4c536e6b44e2e185e38c3653851f92e07139da
Author: Paul E. McKenney <paulmck@kernel.org>
Date: Mon Sep 26 08:57:56 2022 -0700
slab: Explain why SLAB_TYPESAFE_BY_RCU reference before locking
It is not obvious to the casual user why it is absolutely necessary to
acquire a reference to a SLAB_TYPESAFE_BY_RCU structure before acquiring
a lock in that structure. Therefore, add a comment explaining this point.
[ paulmck: Apply Vlastimil Babka feedback. ]
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: <linux-mm@kvack.org>
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 90877fcde70bd..487418c7ea8cd 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -76,6 +76,17 @@
* rcu_read_lock before reading the address, then rcu_read_unlock after
* taking the spinlock within the structure expected at that address.
*
+ * Note that it is not possible to acquire a lock within a structure
+ * allocated with SLAB_TYPESAFE_BY_RCU without first acquiring a reference
+ * as described above. The reason is that SLAB_TYPESAFE_BY_RCU pages
+ * are not zeroed before being given to the slab, which means that any
+ * locks must be initialized after each and every kmem_cache_alloc().
+ * Alternatively, make the ctor passed to kmem_cache_create() initialize
+ * the locks at page-allocation time, as is done in __i915_request_ctor(),
+ * sighand_ctor(), and anon_vma_ctor(). Such a ctor permits readers
+ * to safely acquire those ctor-initialized locks under rcu_read_lock()
+ * protection.
+ *
* Note that SLAB_TYPESAFE_BY_RCU was originally named SLAB_DESTROY_BY_RCU.
*/
/* Defer freeing slabs to RCU */
* Re: [PATCH rcu 5/8] slab: Explain why SLAB_DESTROY_BY_RCU reference before locking
2022-10-21 13:43 ` Paul E. McKenney
@ 2022-10-21 13:50 ` Vlastimil Babka
2022-10-21 15:42 ` Paul E. McKenney
0 siblings, 1 reply; 17+ messages in thread
From: Vlastimil Babka @ 2022-10-21 13:50 UTC (permalink / raw)
To: paulmck, Christoph Lameter
Cc: rcu, linux-kernel, kernel-team, rostedt, Pekka Enberg,
David Rientjes, Joonsoo Kim, Andrew Morton, Roman Gushchin,
Hyeonggon Yoo, linux-mm
On 10/21/22 15:43, Paul E. McKenney wrote:
> On Fri, Oct 21, 2022 at 09:44:23AM +0200, Christoph Lameter wrote:
>> On Wed, 19 Oct 2022, Paul E. McKenney wrote:
>>
>> > It is not obvious to the casual user why it is absolutely necessary to
>> > acquire a reference to a SLAB_DESTROY_BY_RCU structure before acquiring
>> > a lock in that structure. Therefore, add a comment explaining this point.
>>
>> Sorry but this is not correct and difficult to comprehend.
>>
>> 1. You do not need a reference to a slab object after it was allocated.
>> Objects must be properly protected by rcu_locks.
>>
>> 2. Locks are initialized once on slab allocation via a constructor (*not* on object allocation via kmem_cache_alloc)
>>
>> 3. Modifying locks at allocation/free is not possible since references to
>> these objects may still persist after free and before alloc.
>>
>> 4. The old term SLAB_DESTROY_BY_RCU is used here.
>
> Thank you for looking this over, but Vlastimil beat you to it. How does
> the update below look?
LGTM.
> Thanx, Paul
>
> ------------------------------------------------------------------------
>
> commit ff4c536e6b44e2e185e38c3653851f92e07139da
> Author: Paul E. McKenney <paulmck@kernel.org>
> Date: Mon Sep 26 08:57:56 2022 -0700
>
> slab: Explain why SLAB_TYPESAFE_BY_RCU reference before locking
>
> It is not obvious to the casual user why it is absolutely necessary to
> acquire a reference to a SLAB_TYPESAFE_BY_RCU structure before acquiring
> a lock in that structure. Therefore, add a comment explaining this point.
>
> [ paulmck: Apply Vlastimil Babka feedback. ]
>
> Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
> Cc: Christoph Lameter <cl@linux.com>
> Cc: Pekka Enberg <penberg@kernel.org>
> Cc: David Rientjes <rientjes@google.com>
> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Roman Gushchin <roman.gushchin@linux.dev>
> Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> Cc: <linux-mm@kvack.org>
>
> diff --git a/include/linux/slab.h b/include/linux/slab.h
> index 90877fcde70bd..487418c7ea8cd 100644
> --- a/include/linux/slab.h
> +++ b/include/linux/slab.h
> @@ -76,6 +76,17 @@
> * rcu_read_lock before reading the address, then rcu_read_unlock after
> * taking the spinlock within the structure expected at that address.
> *
> + * Note that it is not possible to acquire a lock within a structure
> + * allocated with SLAB_TYPESAFE_BY_RCU without first acquiring a reference
> + * as described above. The reason is that SLAB_TYPESAFE_BY_RCU pages
> + * are not zeroed before being given to the slab, which means that any
> + * locks must be initialized after each and every kmem_cache_alloc().
> + * Alternatively, make the ctor passed to kmem_cache_create() initialize
> + * the locks at page-allocation time, as is done in __i915_request_ctor(),
> + * sighand_ctor(), and anon_vma_ctor(). Such a ctor permits readers
> + * to safely acquire those ctor-initialized locks under rcu_read_lock()
> + * protection.
> + *
> * Note that SLAB_TYPESAFE_BY_RCU was originally named SLAB_DESTROY_BY_RCU.
> */
> /* Defer freeing slabs to RCU */
* Re: [PATCH rcu 5/8] slab: Explain why SLAB_DESTROY_BY_RCU reference before locking
2022-10-21 13:50 ` Vlastimil Babka
@ 2022-10-21 15:42 ` Paul E. McKenney
2022-10-21 15:50 ` Vlastimil Babka
0 siblings, 1 reply; 17+ messages in thread
From: Paul E. McKenney @ 2022-10-21 15:42 UTC (permalink / raw)
To: Vlastimil Babka
Cc: Christoph Lameter, rcu, linux-kernel, kernel-team, rostedt,
Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
Roman Gushchin, Hyeonggon Yoo, linux-mm
On Fri, Oct 21, 2022 at 03:50:17PM +0200, Vlastimil Babka wrote:
> On 10/21/22 15:43, Paul E. McKenney wrote:
> > On Fri, Oct 21, 2022 at 09:44:23AM +0200, Christoph Lameter wrote:
> >> On Wed, 19 Oct 2022, Paul E. McKenney wrote:
> >>
> >> > It is not obvious to the casual user why it is absolutely necessary to
> >> > acquire a reference to a SLAB_DESTROY_BY_RCU structure before acquiring
> >> > a lock in that structure. Therefore, add a comment explaining this point.
> >>
> >> Sorry but this is not correct and difficult to comprehend.
> >>
> >> 1. You do not need a reference to a slab object after it was allocated.
> >> Objects must be properly protected by rcu_locks.
> >>
> >> 2. Locks are initialized once on slab allocation via a constructor (*not* on object allocation via kmem_cache_alloc)
> >>
> >> 3. Modifying locks at allocation/free is not possible since references to
> >> these objects may still persist after free and before alloc.
> >>
> >> 4. The old term SLAB_DESTROY_BY_RCU is used here.
> >
> > Thank you for looking this over, but Vlastimil beat you to it. How does
> > the update below look?
>
> LGTM.
May I please have your ack?
Thanx, Paul
> > ------------------------------------------------------------------------
> >
> > commit ff4c536e6b44e2e185e38c3653851f92e07139da
> > Author: Paul E. McKenney <paulmck@kernel.org>
> > Date: Mon Sep 26 08:57:56 2022 -0700
> >
> > slab: Explain why SLAB_TYPESAFE_BY_RCU reference before locking
> >
> > It is not obvious to the casual user why it is absolutely necessary to
> > acquire a reference to a SLAB_TYPESAFE_BY_RCU structure before acquiring
> > a lock in that structure. Therefore, add a comment explaining this point.
> >
> > [ paulmck: Apply Vlastimil Babka feedback. ]
> >
> > Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
>
> Acked-by: Vlastimil Babka <vbabka@suse.cz>
>
> > Cc: Christoph Lameter <cl@linux.com>
> > Cc: Pekka Enberg <penberg@kernel.org>
> > Cc: David Rientjes <rientjes@google.com>
> > Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> > Cc: Andrew Morton <akpm@linux-foundation.org>
> > Cc: Vlastimil Babka <vbabka@suse.cz>
> > Cc: Roman Gushchin <roman.gushchin@linux.dev>
> > Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> > Cc: <linux-mm@kvack.org>
> >
> > diff --git a/include/linux/slab.h b/include/linux/slab.h
> > index 90877fcde70bd..487418c7ea8cd 100644
> > --- a/include/linux/slab.h
> > +++ b/include/linux/slab.h
> > @@ -76,6 +76,17 @@
> > * rcu_read_lock before reading the address, then rcu_read_unlock after
> > * taking the spinlock within the structure expected at that address.
> > *
> > + * Note that it is not possible to acquire a lock within a structure
> > + * allocated with SLAB_TYPESAFE_BY_RCU without first acquiring a reference
> > + * as described above. The reason is that SLAB_TYPESAFE_BY_RCU pages
> > + * are not zeroed before being given to the slab, which means that any
> > + * locks must be initialized after each and every kmem_cache_alloc().
> > + * Alternatively, make the ctor passed to kmem_cache_create() initialize
> > + * the locks at page-allocation time, as is done in __i915_request_ctor(),
> > + * sighand_ctor(), and anon_vma_ctor(). Such a ctor permits readers
> > + * to safely acquire those ctor-initialized locks under rcu_read_lock()
> > + * protection.
> > + *
> > * Note that SLAB_TYPESAFE_BY_RCU was originally named SLAB_DESTROY_BY_RCU.
> > */
> > /* Defer freeing slabs to RCU */
>
* Re: [PATCH rcu 5/8] slab: Explain why SLAB_DESTROY_BY_RCU reference before locking
2022-10-21 15:42 ` Paul E. McKenney
@ 2022-10-21 15:50 ` Vlastimil Babka
2022-10-21 16:10 ` Paul E. McKenney
0 siblings, 1 reply; 17+ messages in thread
From: Vlastimil Babka @ 2022-10-21 15:50 UTC (permalink / raw)
To: paulmck
Cc: Christoph Lameter, rcu, linux-kernel, kernel-team, rostedt,
Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
Roman Gushchin, Hyeonggon Yoo, linux-mm
On 10/21/22 17:42, Paul E. McKenney wrote:
> On Fri, Oct 21, 2022 at 03:50:17PM +0200, Vlastimil Babka wrote:
>> On 10/21/22 15:43, Paul E. McKenney wrote:
>> > On Fri, Oct 21, 2022 at 09:44:23AM +0200, Christoph Lameter wrote:
>> >> On Wed, 19 Oct 2022, Paul E. McKenney wrote:
>> >>
>> >> > It is not obvious to the casual user why it is absolutely necessary to
>> >> > acquire a reference to a SLAB_DESTROY_BY_RCU structure before acquiring
>> >> > a lock in that structure. Therefore, add a comment explaining this point.
>> >>
>> >> Sorry but this is not correct and difficult to comprehend.
>> >>
>> >> 1. You do not need a reference to a slab object after it was allocated.
>> >> Objects must be properly protected by rcu_locks.
>> >>
>> >> 2. Locks are initialized once on slab allocation via a constructor (*not* on object allocation via kmem_cache_alloc)
>> >>
>> >> 3. Modifying locks at allocation/free is not possible since references to
>> >> these objects may still persist after free and before alloc.
>> >>
>> >> 4. The old term SLAB_DESTROY_BY_RCU is used here.
>> >
>> > Thank you for looking this over, but Vlastimil beat you to it. How does
>> > the update below look?
>>
>> LGTM.
>
> May I please have your ack?
>
> Thanx, Paul
>
>> > ------------------------------------------------------------------------
>> >
>> > commit ff4c536e6b44e2e185e38c3653851f92e07139da
>> > Author: Paul E. McKenney <paulmck@kernel.org>
>> > Date: Mon Sep 26 08:57:56 2022 -0700
>> >
>> > slab: Explain why SLAB_TYPESAFE_BY_RCU reference before locking
>> >
>> > It is not obvious to the casual user why it is absolutely necessary to
>> > acquire a reference to a SLAB_TYPESAFE_BY_RCU structure before acquiring
>> > a lock in that structure. Therefore, add a comment explaining this point.
>> >
>> > [ paulmck: Apply Vlastimil Babka feedback. ]
>> >
>> > Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
>>
>> Acked-by: Vlastimil Babka <vbabka@suse.cz>
It was there :)
>> > Cc: Christoph Lameter <cl@linux.com>
>> > Cc: Pekka Enberg <penberg@kernel.org>
>> > Cc: David Rientjes <rientjes@google.com>
>> > Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
>> > Cc: Andrew Morton <akpm@linux-foundation.org>
>> > Cc: Vlastimil Babka <vbabka@suse.cz>
>> > Cc: Roman Gushchin <roman.gushchin@linux.dev>
>> > Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>
>> > Cc: <linux-mm@kvack.org>
>> >
>> > diff --git a/include/linux/slab.h b/include/linux/slab.h
>> > index 90877fcde70bd..487418c7ea8cd 100644
>> > --- a/include/linux/slab.h
>> > +++ b/include/linux/slab.h
>> > @@ -76,6 +76,17 @@
>> > * rcu_read_lock before reading the address, then rcu_read_unlock after
>> > * taking the spinlock within the structure expected at that address.
>> > *
>> > + * Note that it is not possible to acquire a lock within a structure
>> > + * allocated with SLAB_TYPESAFE_BY_RCU without first acquiring a reference
>> > + * as described above. The reason is that SLAB_TYPESAFE_BY_RCU pages
>> > + * are not zeroed before being given to the slab, which means that any
>> > + * locks must be initialized after each and every kmem_cache_alloc().
>> > + * Alternatively, make the ctor passed to kmem_cache_create() initialize
>> > + * the locks at page-allocation time, as is done in __i915_request_ctor(),
>> > + * sighand_ctor(), and anon_vma_ctor(). Such a ctor permits readers
>> > + * to safely acquire those ctor-initialized locks under rcu_read_lock()
>> > + * protection.
>> > + *
>> > * Note that SLAB_TYPESAFE_BY_RCU was originally named SLAB_DESTROY_BY_RCU.
>> > */
>> > /* Defer freeing slabs to RCU */
>>
* Re: [PATCH rcu 5/8] slab: Explain why SLAB_DESTROY_BY_RCU reference before locking
2022-10-21 15:50 ` Vlastimil Babka
@ 2022-10-21 16:10 ` Paul E. McKenney
0 siblings, 0 replies; 17+ messages in thread
From: Paul E. McKenney @ 2022-10-21 16:10 UTC (permalink / raw)
To: Vlastimil Babka
Cc: Christoph Lameter, rcu, linux-kernel, kernel-team, rostedt,
Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
Roman Gushchin, Hyeonggon Yoo, linux-mm
On Fri, Oct 21, 2022 at 05:50:39PM +0200, Vlastimil Babka wrote:
> On 10/21/22 17:42, Paul E. McKenney wrote:
> > On Fri, Oct 21, 2022 at 03:50:17PM +0200, Vlastimil Babka wrote:
> >> On 10/21/22 15:43, Paul E. McKenney wrote:
> >> > On Fri, Oct 21, 2022 at 09:44:23AM +0200, Christoph Lameter wrote:
> >> >> On Wed, 19 Oct 2022, Paul E. McKenney wrote:
> >> >>
> >> >> > It is not obvious to the casual user why it is absolutely necessary to
> >> >> > acquire a reference to a SLAB_DESTROY_BY_RCU structure before acquiring
> >> >> > a lock in that structure. Therefore, add a comment explaining this point.
> >> >>
> >> >> Sorry but this is not correct and difficult to comprehend.
> >> >>
> >> >> 1. You do not need a reference to a slab object after it was allocated.
> >> >> Objects must be properly protected by rcu_locks.
> >> >>
> >> >> 2. Locks are initialized once on slab allocation via a constructor (*not* on object allocation via kmem_cache_alloc)
> >> >>
> >> >> 3. Modifying locks at allocation/free is not possible since references to
> >> >> these objects may still persist after free and before alloc.
> >> >>
> >> >> 4. The old term SLAB_DESTROY_BY_RCU is used here.
> >> >
> >> > Thank you for looking this over, but Vlastimil beat you to it. How does
> >> > the update below look?
> >>
> >> LGTM.
> >
> > May I please have your ack?
> >
> > Thanx, Paul
> >
> >> > ------------------------------------------------------------------------
> >> >
> >> > commit ff4c536e6b44e2e185e38c3653851f92e07139da
> >> > Author: Paul E. McKenney <paulmck@kernel.org>
> >> > Date: Mon Sep 26 08:57:56 2022 -0700
> >> >
> >> > slab: Explain why SLAB_TYPESAFE_BY_RCU reference before locking
> >> >
> >> > It is not obvious to the casual user why it is absolutely necessary to
> >> > acquire a reference to a SLAB_TYPESAFE_BY_RCU structure before acquiring
> >> > a lock in that structure. Therefore, add a comment explaining this point.
> >> >
> >> > [ paulmck: Apply Vlastimil Babka feedback. ]
> >> >
> >> > Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
> >>
> >> Acked-by: Vlastimil Babka <vbabka@suse.cz>
>
> It was there :)
One of those mornings, I guess...
Thank you very much!!!
Thanx, Paul
> >> > Cc: Christoph Lameter <cl@linux.com>
> >> > Cc: Pekka Enberg <penberg@kernel.org>
> >> > Cc: David Rientjes <rientjes@google.com>
> >> > Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> >> > Cc: Andrew Morton <akpm@linux-foundation.org>
> >> > Cc: Vlastimil Babka <vbabka@suse.cz>
> >> > Cc: Roman Gushchin <roman.gushchin@linux.dev>
> >> > Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> >> > Cc: <linux-mm@kvack.org>
> >> >
> >> > diff --git a/include/linux/slab.h b/include/linux/slab.h
> >> > index 90877fcde70bd..487418c7ea8cd 100644
> >> > --- a/include/linux/slab.h
> >> > +++ b/include/linux/slab.h
> >> > @@ -76,6 +76,17 @@
> >> > * rcu_read_lock before reading the address, then rcu_read_unlock after
> >> > * taking the spinlock within the structure expected at that address.
> >> > *
> >> > + * Note that it is not possible to acquire a lock within a structure
> >> > + * allocated with SLAB_TYPESAFE_BY_RCU without first acquiring a reference
> >> > + * as described above. The reason is that SLAB_TYPESAFE_BY_RCU pages
> >> > + * are not zeroed before being given to the slab, which means that any
> >> > + * locks must be initialized after each and every kmem_cache_alloc().
> >> > + * Alternatively, make the ctor passed to kmem_cache_create() initialize
> >> > + * the locks at page-allocation time, as is done in __i915_request_ctor(),
> >> > + * sighand_ctor(), and anon_vma_ctor(). Such a ctor permits readers
> >> > + * to safely acquire those ctor-initialized locks under rcu_read_lock()
> >> > + * protection.
> >> > + *
> >> > * Note that SLAB_TYPESAFE_BY_RCU was originally named SLAB_DESTROY_BY_RCU.
> >> > */
> >> > /* Defer freeing slabs to RCU */
> >>
>
* [PATCH rcu 6/8] rcu: Remove rcu_is_idle_cpu()
2022-10-19 22:46 [PATCH rcu 0/8] Miscellaneous fixes for v6.2 Paul E. McKenney
` (4 preceding siblings ...)
2022-10-19 22:46 ` [PATCH rcu 5/8] slab: Explain why SLAB_DESTROY_BY_RCU reference before locking Paul E. McKenney
@ 2022-10-19 22:46 ` Paul E. McKenney
2022-10-19 22:46 ` [PATCH rcu 7/8] rcu-tasks: Make grace-period-age message human-readable Paul E. McKenney
2022-10-19 22:46 ` [PATCH rcu 8/8] rcu: Fix __this_cpu_read() lockdep warning in rcu_force_quiescent_state() Paul E. McKenney
7 siblings, 0 replies; 17+ messages in thread
From: Paul E. McKenney @ 2022-10-19 22:46 UTC (permalink / raw)
To: rcu
Cc: linux-kernel, kernel-team, rostedt, Yipeng Zou,
Frederic Weisbecker, Paul E . McKenney
From: Yipeng Zou <zouyipeng@huawei.com>
Commit 3fcd6a230fa7 ("x86/cpu: Avoid cpuinfo-induced IPIing of
idle CPUs") introduced rcu_is_idle_cpu() in order to identify the
idle state of a given CPU. But commit f3eca381bd49 ("x86/aperfmperf:
Replace arch_freq_get_on_cpu()") switched its sole caller to instead
rely on MAX_SAMPLE_AGE, so rcu_is_idle_cpu() is no longer used.
This commit therefore removes it.
Fixes: f3eca381bd49 ("x86/aperfmperf: Replace arch_freq_get_on_cpu()")
Signed-off-by: Yipeng Zou <zouyipeng@huawei.com>
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
include/linux/rcutiny.h | 2 --
include/linux/rcutree.h | 2 --
kernel/rcu/tree.c | 6 ------
3 files changed, 10 deletions(-)
diff --git a/include/linux/rcutiny.h b/include/linux/rcutiny.h
index 9bc025aa79a30..5c271bf3a1e7e 100644
--- a/include/linux/rcutiny.h
+++ b/include/linux/rcutiny.h
@@ -146,8 +146,6 @@ static inline void rcu_virt_note_context_switch(void) { }
static inline void rcu_cpu_stall_reset(void) { }
static inline int rcu_jiffies_till_stall_check(void) { return 21 * HZ; }
static inline void rcu_irq_exit_check_preempt(void) { }
-#define rcu_is_idle_cpu(cpu) \
- (is_idle_task(current) && !in_nmi() && !in_hardirq() && !in_serving_softirq())
static inline void exit_rcu(void) { }
static inline bool rcu_preempt_need_deferred_qs(struct task_struct *t)
{
diff --git a/include/linux/rcutree.h b/include/linux/rcutree.h
index 70795386b9ffa..4003bf6cfa1c2 100644
--- a/include/linux/rcutree.h
+++ b/include/linux/rcutree.h
@@ -87,8 +87,6 @@ bool poll_state_synchronize_rcu_full(struct rcu_gp_oldstate *rgosp);
void cond_synchronize_rcu(unsigned long oldstate);
void cond_synchronize_rcu_full(struct rcu_gp_oldstate *rgosp);
-bool rcu_is_idle_cpu(int cpu);
-
#ifdef CONFIG_PROVE_RCU
void rcu_irq_exit_check_preempt(void);
#else
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 5ec97e3f7468f..f6561aa401c04 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -301,12 +301,6 @@ static bool rcu_dynticks_in_eqs(int snap)
return !(snap & RCU_DYNTICKS_IDX);
}
-/* Return true if the specified CPU is currently idle from an RCU viewpoint. */
-bool rcu_is_idle_cpu(int cpu)
-{
- return rcu_dynticks_in_eqs(rcu_dynticks_snap(cpu));
-}
-
/*
* Return true if the CPU corresponding to the specified rcu_data
* structure has spent some time in an extended quiescent state since
--
2.31.1.189.g2e36527f23
* [PATCH rcu 7/8] rcu-tasks: Make grace-period-age message human-readable
2022-10-19 22:46 [PATCH rcu 0/8] Miscellaneous fixes for v6.2 Paul E. McKenney
` (5 preceding siblings ...)
2022-10-19 22:46 ` [PATCH rcu 6/8] rcu: Remove rcu_is_idle_cpu() Paul E. McKenney
@ 2022-10-19 22:46 ` Paul E. McKenney
2022-10-19 22:46 ` [PATCH rcu 8/8] rcu: Fix __this_cpu_read() lockdep warning in rcu_force_quiescent_state() Paul E. McKenney
7 siblings, 0 replies; 17+ messages in thread
From: Paul E. McKenney @ 2022-10-19 22:46 UTC (permalink / raw)
To: rcu
Cc: linux-kernel, kernel-team, rostedt, Paul E. McKenney, Alexei Starovoitov
This commit adds a few words to the informative message that appears
every ten seconds in RCU Tasks and RCU Tasks Trace grace periods.
This message currently reads as follows:
rcu_tasks_wait_gp: rcu_tasks grace period 1046 is 10088 jiffies old.
After this change, it provides additional context, instead reading
as follows:
rcu_tasks_wait_gp: rcu_tasks grace period number 1046 (since boot) is 10088 jiffies old.
Reported-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
kernel/rcu/tasks.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
index f5bf6fb430dab..b0b885e071fa8 100644
--- a/kernel/rcu/tasks.h
+++ b/kernel/rcu/tasks.h
@@ -728,7 +728,7 @@ static void rcu_tasks_wait_gp(struct rcu_tasks *rtp)
if (rtsi > 0 && !reported && time_after(j, lastinfo + rtsi)) {
lastinfo = j;
rtsi = rtsi * rcu_task_stall_info_mult;
- pr_info("%s: %s grace period %lu is %lu jiffies old.\n",
+ pr_info("%s: %s grace period number %lu (since boot) is %lu jiffies old.\n",
__func__, rtp->kname, rtp->tasks_gp_seq, j - rtp->gp_start);
}
}
--
2.31.1.189.g2e36527f23
* [PATCH rcu 8/8] rcu: Fix __this_cpu_read() lockdep warning in rcu_force_quiescent_state()
2022-10-19 22:46 [PATCH rcu 0/8] Miscellaneous fixes for v6.2 Paul E. McKenney
` (6 preceding siblings ...)
2022-10-19 22:46 ` [PATCH rcu 7/8] rcu-tasks: Make grace-period-age message human-readable Paul E. McKenney
@ 2022-10-19 22:46 ` Paul E. McKenney
7 siblings, 0 replies; 17+ messages in thread
From: Paul E. McKenney @ 2022-10-19 22:46 UTC (permalink / raw)
To: rcu
Cc: linux-kernel, kernel-team, rostedt, Zqiang, Joel Fernandes,
Paul E . McKenney
From: Zqiang <qiang1.zhang@intel.com>
Running rcutorture with a non-zero fqs_duration module parameter in a
kernel built with CONFIG_PREEMPTION=y results in the following splat:
BUG: using __this_cpu_read() in preemptible [00000000]
code: rcu_torture_fqs/398
caller is __this_cpu_preempt_check+0x13/0x20
CPU: 3 PID: 398 Comm: rcu_torture_fqs Not tainted 6.0.0-rc1-yoctodev-standard+
Call Trace:
<TASK>
dump_stack_lvl+0x5b/0x86
dump_stack+0x10/0x16
check_preemption_disabled+0xe5/0xf0
__this_cpu_preempt_check+0x13/0x20
rcu_force_quiescent_state.part.0+0x1c/0x170
rcu_force_quiescent_state+0x1e/0x30
rcu_torture_fqs+0xca/0x160
? rcu_torture_boost+0x430/0x430
kthread+0x192/0x1d0
? kthread_complete_and_exit+0x30/0x30
ret_from_fork+0x22/0x30
</TASK>
The problem is that rcu_force_quiescent_state() uses __this_cpu_read()
in preemptible code instead of the proper raw_cpu_read(). This commit
therefore changes __this_cpu_read() to raw_cpu_read().
Signed-off-by: Zqiang <qiang1.zhang@intel.com>
Reviewed-by: Joel Fernandes (Google) <joel@joelfernandes.org>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
kernel/rcu/tree.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index f6561aa401c04..1e1d333d07ffe 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2410,7 +2410,7 @@ void rcu_force_quiescent_state(void)
struct rcu_node *rnp_old = NULL;
/* Funnel through hierarchy to reduce memory contention. */
- rnp = __this_cpu_read(rcu_data.mynode);
+ rnp = raw_cpu_read(rcu_data.mynode);
for (; rnp != NULL; rnp = rnp->parent) {
ret = (READ_ONCE(rcu_state.gp_flags) & RCU_GP_FLAG_FQS) ||
!raw_spin_trylock(&rnp->fqslock);
--
2.31.1.189.g2e36527f23