linux-kernel.vger.kernel.org archive mirror
* [PATCH v2 tip/core/rcu 1/2] srcu: Allow use of Tiny/Tree SRCU from both process and interrupt context
From: Paul E. McKenney @ 2017-06-06 17:07 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, jiangshanlai, dipankar, akpm, mathieu.desnoyers, josh,
	tglx, peterz, rostedt, dhowells, edumazet, fweisbec, oleg,
	Paolo Bonzini, kvm, Linus Torvalds, Paul E. McKenney

From: Paolo Bonzini <pbonzini@redhat.com>

Linu Cherian reported a WARN in cleanup_srcu_struct() when shutting
down a guest running iperf on a VFIO assigned device.  This happens
because irqfd_wakeup() calls srcu_read_lock(&kvm->irq_srcu) in interrupt
context, while a worker thread does the same inside kvm_set_irq().  If the
interrupt happens while the worker thread is executing __srcu_read_lock(),
updates to the Classic SRCU ->lock_count[] field or the Tree SRCU
->srcu_lock_count[] field can be lost.

The docs say you are not supposed to call srcu_read_lock() and
srcu_read_unlock() from irq context, but KVM interrupt injection happens
from (host) interrupt context and it would be nice if SRCU supported the
use case.  KVM is using SRCU here not really for the "sleepable" part,
but rather due to its IPI-free fast detection of grace periods.  It is
therefore not desirable to switch back to RCU, which would effectively
revert commit 719d93cd5f5c ("kvm/irqchip: Speed up KVM_SET_GSI_ROUTING",
2014-01-16).

However, the docs are overly conservative.  You can have an SRCU instance
that has users only in irq context, and you can mix process and irq context
as long as process-context users disable interrupts.  In addition,
__srcu_read_unlock() actually uses this_cpu_dec() on both Tree SRCU and
Classic SRCU.  For those two implementations, only srcu_read_lock()
is unsafe.

When Classic SRCU's __srcu_read_unlock() was changed to use this_cpu_dec(),
in commit 5a41344a3d83 ("srcu: Simplify __srcu_read_unlock() via
this_cpu_dec()", 2012-11-29), __srcu_read_lock() did two increments.
Therefore it kept __this_cpu_inc(), with preempt_disable/enable in
the caller.  Tree SRCU however only does one increment, so on most
architectures it is more efficient for __srcu_read_lock() to use
this_cpu_inc(), and any performance differences appear to be down in
the noise.
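
One way to picture the problem (an illustrative sketch, not the exact
code generated for __this_cpu_inc()): a non-irq-safe per-CPU increment
can expand into a separate load and store, so an interrupt arriving
between the two loses its own update of the same counter:

	tmp = sp->sda->srcu_lock_count[idx];     /* Process context loads N. */
		/* irq: srcu_read_lock() bumps the same counter to N + 1. */
	sp->sda->srcu_lock_count[idx] = tmp + 1; /* Stores N + 1: irq's update is lost. */

By contrast, this_cpu_inc() is guaranteed to be atomic with respect to
interrupts on the local CPU, which closes this window.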

Unlike Classic and Tree SRCU, Tiny SRCU does increments and decrements on
a single variable.  Therefore, as Peter Zijlstra pointed out, Tiny SRCU's
implementation already supports mixed-context use of srcu_read_lock()
and srcu_read_unlock(), at least as long as uses of srcu_read_lock()
and srcu_read_unlock() in each handler are nested and paired properly.
In other words, it is still illegal to (say) invoke srcu_read_lock()
in an interrupt handler and to invoke the matching srcu_read_unlock()
in a softirq handler.  Therefore, the only change required for Tiny SRCU
is to its comments.
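
For instance, the supported pattern keeps the lock/unlock pair within a
single handler invocation (a hypothetical handler and srcu_struct named
my_srcu, shown only for illustration; not taken from this patch):

	static irqreturn_t my_irq_handler(int irq, void *arg)
	{
		int idx;

		idx = srcu_read_lock(&my_srcu);
		/* Read-side accesses protected by my_srcu. */
		srcu_read_unlock(&my_srcu, idx);
		return IRQ_HANDLED;
	}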

Fixes: 719d93cd5f5c ("kvm/irqchip: Speed up KVM_SET_GSI_ROUTING")
Reported-by: Linu Cherian <linuc.decode@gmail.com>
Suggested-by: Linu Cherian <linuc.decode@gmail.com>
Cc: kvm@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 kernel/rcu/srcutiny.c | 7 ++++---
 kernel/rcu/srcutree.c | 5 ++---
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/kernel/rcu/srcutiny.c b/kernel/rcu/srcutiny.c
index 36e1f82faed1..32798eb14853 100644
--- a/kernel/rcu/srcutiny.c
+++ b/kernel/rcu/srcutiny.c
@@ -97,8 +97,9 @@ EXPORT_SYMBOL_GPL(cleanup_srcu_struct);
 
 /*
  * Counts the new reader in the appropriate per-CPU element of the
- * srcu_struct.  Must be called from process context.
- * Returns an index that must be passed to the matching srcu_read_unlock().
+ * srcu_struct.  Can be invoked from irq/bh handlers, but the matching
+ * __srcu_read_unlock() must be in the same handler instance.  Returns an
+ * index that must be passed to the matching srcu_read_unlock().
  */
 int __srcu_read_lock(struct srcu_struct *sp)
 {
@@ -112,7 +113,7 @@ EXPORT_SYMBOL_GPL(__srcu_read_lock);
 
 /*
  * Removes the count for the old reader from the appropriate element of
- * the srcu_struct.  Must be called from process context.
+ * the srcu_struct.
  */
 void __srcu_read_unlock(struct srcu_struct *sp, int idx)
 {
diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index 3ae8474557df..157654fa436a 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -357,7 +357,7 @@ EXPORT_SYMBOL_GPL(cleanup_srcu_struct);
 
 /*
  * Counts the new reader in the appropriate per-CPU element of the
- * srcu_struct.  Must be called from process context.
+ * srcu_struct.
  * Returns an index that must be passed to the matching srcu_read_unlock().
  */
 int __srcu_read_lock(struct srcu_struct *sp)
@@ -365,7 +365,7 @@ int __srcu_read_lock(struct srcu_struct *sp)
 	int idx;
 
 	idx = READ_ONCE(sp->srcu_idx) & 0x1;
-	__this_cpu_inc(sp->sda->srcu_lock_count[idx]);
+	this_cpu_inc(sp->sda->srcu_lock_count[idx]);
 	smp_mb(); /* B */  /* Avoid leaking the critical section. */
 	return idx;
 }
@@ -375,7 +375,6 @@ EXPORT_SYMBOL_GPL(__srcu_read_lock);
  * Removes the count for the old reader from the appropriate per-CPU
  * element of the srcu_struct.  Note that this may well be a different
  * CPU than that which was incremented by the corresponding srcu_read_lock().
- * Must be called from process context.
  */
 void __srcu_read_unlock(struct srcu_struct *sp, int idx)
 {
-- 
2.5.2


* [PATCH v2 tip/core/rcu 2/2] srcu: Allow use of Classic SRCU from both process and interrupt context
From: Paul E. McKenney @ 2017-06-06 17:07 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, jiangshanlai, dipankar, akpm, mathieu.desnoyers, josh,
	tglx, peterz, rostedt, dhowells, edumazet, fweisbec, oleg,
	Paolo Bonzini, stable, kvm, Linus Torvalds, Paul E. McKenney

From: Paolo Bonzini <pbonzini@redhat.com>

Linu Cherian reported a WARN in cleanup_srcu_struct() when shutting
down a guest running iperf on a VFIO assigned device.  This happens
because irqfd_wakeup() calls srcu_read_lock(&kvm->irq_srcu) in interrupt
context, while a worker thread does the same inside kvm_set_irq().  If the
interrupt happens while the worker thread is executing __srcu_read_lock(),
updates to the Classic SRCU ->lock_count[] field or the Tree SRCU
->srcu_lock_count[] field can be lost.

The docs say you are not supposed to call srcu_read_lock() and
srcu_read_unlock() from irq context, but KVM interrupt injection happens
from (host) interrupt context and it would be nice if SRCU supported the
use case.  KVM is using SRCU here not really for the "sleepable" part,
but rather due to its IPI-free fast detection of grace periods.  It is
therefore not desirable to switch back to RCU, which would effectively
revert commit 719d93cd5f5c ("kvm/irqchip: Speed up KVM_SET_GSI_ROUTING",
2014-01-16).

However, the docs are overly conservative.  You can have an SRCU instance
that has users only in irq context, and you can mix process and irq context
as long as process-context users disable interrupts.  In addition,
__srcu_read_unlock() actually uses this_cpu_dec() on both Tree SRCU and
Classic SRCU.  For those two implementations, only srcu_read_lock()
is unsafe.

When Classic SRCU's __srcu_read_unlock() was changed to use this_cpu_dec(),
in commit 5a41344a3d83 ("srcu: Simplify __srcu_read_unlock() via
this_cpu_dec()", 2012-11-29), __srcu_read_lock() did two increments.
Therefore it kept __this_cpu_inc(), with preempt_disable/enable in
the caller.  Tree SRCU however only does one increment, so on most
architectures it is more efficient for __srcu_read_lock() to use
this_cpu_inc(), and any performance differences appear to be down in
the noise.

Cc: stable@vger.kernel.org
Fixes: 719d93cd5f5c ("kvm/irqchip: Speed up KVM_SET_GSI_ROUTING")
Reported-by: Linu Cherian <linuc.decode@gmail.com>
Suggested-by: Linu Cherian <linuc.decode@gmail.com>
Cc: kvm@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 include/linux/srcu.h | 2 --
 kernel/rcu/srcu.c    | 5 ++---
 2 files changed, 2 insertions(+), 5 deletions(-)

diff --git a/include/linux/srcu.h b/include/linux/srcu.h
index 167ad8831aaf..4c1d5f7e62c4 100644
--- a/include/linux/srcu.h
+++ b/include/linux/srcu.h
@@ -172,9 +172,7 @@ static inline int srcu_read_lock(struct srcu_struct *sp) __acquires(sp)
 {
 	int retval;
 
-	preempt_disable();
 	retval = __srcu_read_lock(sp);
-	preempt_enable();
 	rcu_lock_acquire(&(sp)->dep_map);
 	return retval;
 }
diff --git a/kernel/rcu/srcu.c b/kernel/rcu/srcu.c
index 584d8a983883..dea03614263f 100644
--- a/kernel/rcu/srcu.c
+++ b/kernel/rcu/srcu.c
@@ -263,7 +263,7 @@ EXPORT_SYMBOL_GPL(cleanup_srcu_struct);
 
 /*
  * Counts the new reader in the appropriate per-CPU element of the
- * srcu_struct.  Must be called from process context.
+ * srcu_struct.
  * Returns an index that must be passed to the matching srcu_read_unlock().
  */
 int __srcu_read_lock(struct srcu_struct *sp)
@@ -271,7 +271,7 @@ int __srcu_read_lock(struct srcu_struct *sp)
 	int idx;
 
 	idx = READ_ONCE(sp->completed) & 0x1;
-	__this_cpu_inc(sp->per_cpu_ref->lock_count[idx]);
+	this_cpu_inc(sp->per_cpu_ref->lock_count[idx]);
 	smp_mb(); /* B */  /* Avoid leaking the critical section. */
 	return idx;
 }
@@ -281,7 +281,6 @@ EXPORT_SYMBOL_GPL(__srcu_read_lock);
  * Removes the count for the old reader from the appropriate per-CPU
  * element of the srcu_struct.  Note that this may well be a different
  * CPU than that which was incremented by the corresponding srcu_read_lock().
- * Must be called from process context.
  */
 void __srcu_read_unlock(struct srcu_struct *sp, int idx)
 {
-- 
2.5.2


* [PATCH tip/core/rcu 1/9] doc: Add some more RCU list patterns in the kernel
From: paulmck @ 2020-02-14 23:38 UTC (permalink / raw)
  To: rcu
  Cc: linux-kernel, kernel-team, mingo, jiangshanlai, dipankar, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel, Amol Grover, Paul E . McKenney

From: "Joel Fernandes (Google)" <joel@joelfernandes.org>

- Add more information about RCU list patterns, taking examples
from the audit subsystem in the Linux kernel.

- Keep the current audit examples, even though the kernel has changed.

- Modify inline text for better passage quality.

- Fix typo in code-blocks and improve code comments.

- Add text formatting (italics, bold and code) for better emphasis.

Patch originally submitted at
https://lore.kernel.org/patchwork/patch/1082804/

Co-developed-by: Joel Fernandes (Google) <joel@joelfernandes.org>
Signed-off-by: Amol Grover <frextrite@gmail.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
 Documentation/RCU/listRCU.rst | 275 ++++++++++++++++++++++++++++++++----------
 1 file changed, 211 insertions(+), 64 deletions(-)

diff --git a/Documentation/RCU/listRCU.rst b/Documentation/RCU/listRCU.rst
index 7956ff3..55d2b30 100644
--- a/Documentation/RCU/listRCU.rst
+++ b/Documentation/RCU/listRCU.rst
@@ -4,12 +4,61 @@ Using RCU to Protect Read-Mostly Linked Lists
 =============================================
 
 One of the best applications of RCU is to protect read-mostly linked lists
-("struct list_head" in list.h).  One big advantage of this approach
+(``struct list_head`` in list.h).  One big advantage of this approach
 is that all of the required memory barriers are included for you in
 the list macros.  This document describes several applications of RCU,
 with the best fits first.
 
-Example 1: Read-Side Action Taken Outside of Lock, No In-Place Updates
+
+Example 1: Read-mostly list: Deferred Destruction
+-------------------------------------------------
+
+A widely used use case for RCU lists in the kernel is lockless iteration over
+all processes in the system. ``task_struct::tasks`` represents the list node that
+links all the processes. The list can be traversed in parallel to any list
+additions or removals.
+
+The traversal of the list is done using ``for_each_process()`` which is defined
+by the 2 macros::
+
+	#define next_task(p) \
+		list_entry_rcu((p)->tasks.next, struct task_struct, tasks)
+
+	#define for_each_process(p) \
+		for (p = &init_task ; (p = next_task(p)) != &init_task ; )
+
+The code traversing the list of all processes typically looks like::
+
+	rcu_read_lock();
+	for_each_process(p) {
+		/* Do something with p */
+	}
+	rcu_read_unlock();
+
+The simplified code for removing a process from a task list is::
+
+	void release_task(struct task_struct *p)
+	{
+		write_lock(&tasklist_lock);
+		list_del_rcu(&p->tasks);
+		write_unlock(&tasklist_lock);
+		call_rcu(&p->rcu, delayed_put_task_struct);
+	}
+
+When a process exits, ``release_task()`` calls ``list_del_rcu(&p->tasks)`` under
+``tasklist_lock`` writer lock protection, to remove the task from the list of
+all tasks. The ``tasklist_lock`` prevents concurrent list additions/removals
+from corrupting the list. Readers using ``for_each_process()`` are not protected
+with the ``tasklist_lock``. To prevent readers from noticing changes in the list
+pointers, the ``task_struct`` object is freed only after one or more grace
+periods elapse (with the help of call_rcu()). This deferring of destruction
+ensures that any readers traversing the list will see valid ``p->tasks.next``
+pointers and deletion/freeing can happen in parallel with traversal of the list.
+This pattern is also called an **existence lock**, since RCU pins the object in
+memory until all existing readers finish.
+
+
+Example 2: Read-Side Action Taken Outside of Lock: No In-Place Updates
 ----------------------------------------------------------------------
 
 The best applications are cases where, if reader-writer locking were
@@ -26,7 +75,7 @@ added or deleted, rather than being modified in place.
 
 A straightforward example of this use of RCU may be found in the
 system-call auditing support.  For example, a reader-writer locked
-implementation of audit_filter_task() might be as follows::
+implementation of ``audit_filter_task()`` might be as follows::
 
 	static enum audit_state audit_filter_task(struct task_struct *tsk)
 	{
@@ -34,7 +83,7 @@ implementation of audit_filter_task() might be as follows::
 		enum audit_state   state;
 
 		read_lock(&auditsc_lock);
-		/* Note: audit_netlink_sem held by caller. */
+		/* Note: audit_filter_mutex held by caller. */
 		list_for_each_entry(e, &audit_tsklist, list) {
 			if (audit_filter_rules(tsk, &e->rule, NULL, &state)) {
 				read_unlock(&auditsc_lock);
@@ -58,7 +107,7 @@ This means that RCU can be easily applied to the read side, as follows::
 		enum audit_state   state;
 
 		rcu_read_lock();
-		/* Note: audit_netlink_sem held by caller. */
+		/* Note: audit_filter_mutex held by caller. */
 		list_for_each_entry_rcu(e, &audit_tsklist, list) {
 			if (audit_filter_rules(tsk, &e->rule, NULL, &state)) {
 				rcu_read_unlock();
@@ -69,18 +118,18 @@ This means that RCU can be easily applied to the read side, as follows::
 		return AUDIT_BUILD_CONTEXT;
 	}
 
-The read_lock() and read_unlock() calls have become rcu_read_lock()
+The ``read_lock()`` and ``read_unlock()`` calls have become rcu_read_lock()
 and rcu_read_unlock(), respectively, and the list_for_each_entry() has
-become list_for_each_entry_rcu().  The _rcu() list-traversal primitives
+become list_for_each_entry_rcu().  The **_rcu()** list-traversal primitives
 insert the read-side memory barriers that are required on DEC Alpha CPUs.
 
-The changes to the update side are also straightforward.  A reader-writer
-lock might be used as follows for deletion and insertion::
+The changes to the update side are also straightforward. A reader-writer lock
+might be used as follows for deletion and insertion::
 
 	static inline int audit_del_rule(struct audit_rule *rule,
 					 struct list_head *list)
 	{
-		struct audit_entry  *e;
+		struct audit_entry *e;
 
 		write_lock(&auditsc_lock);
 		list_for_each_entry(e, list, list) {
@@ -113,9 +162,9 @@ Following are the RCU equivalents for these two functions::
 	static inline int audit_del_rule(struct audit_rule *rule,
 					 struct list_head *list)
 	{
-		struct audit_entry  *e;
+		struct audit_entry *e;
 
-		/* Do not use the _rcu iterator here, since this is the only
+		/* No need to use the _rcu iterator here, since this is the only
 		 * deletion routine. */
 		list_for_each_entry(e, list, list) {
 			if (!audit_compare_rule(rule, &e->rule)) {
@@ -139,41 +188,41 @@ Following are the RCU equivalents for these two functions::
 		return 0;
 	}
 
-Normally, the write_lock() and write_unlock() would be replaced by
-a spin_lock() and a spin_unlock(), but in this case, all callers hold
-audit_netlink_sem, so no additional locking is required.  The auditsc_lock
-can therefore be eliminated, since use of RCU eliminates the need for
-writers to exclude readers.  Normally, the write_lock() calls would
-be converted into spin_lock() calls.
+Normally, the ``write_lock()`` and ``write_unlock()`` would be replaced by a
+spin_lock() and a spin_unlock(). But in this case, all callers hold
+``audit_filter_mutex``, so no additional locking is required. The
+``auditsc_lock`` can therefore be eliminated, since use of RCU eliminates the
+need for writers to exclude readers.
 
 The list_del(), list_add(), and list_add_tail() primitives have been
 replaced by list_del_rcu(), list_add_rcu(), and list_add_tail_rcu().
-The _rcu() list-manipulation primitives add memory barriers that are
-needed on weakly ordered CPUs (most of them!).  The list_del_rcu()
-primitive omits the pointer poisoning debug-assist code that would
-otherwise cause concurrent readers to fail spectacularly.
+The **_rcu()** list-manipulation primitives add memory barriers that are needed on
+weakly ordered CPUs (most of them!).  The list_del_rcu() primitive omits the
+pointer poisoning debug-assist code that would otherwise cause concurrent
+readers to fail spectacularly.
 
-So, when readers can tolerate stale data and when entries are either added
-or deleted, without in-place modification, it is very easy to use RCU!
+So, when readers can tolerate stale data and when entries are either added or
+deleted, without in-place modification, it is very easy to use RCU!
 
-Example 2: Handling In-Place Updates
+
+Example 3: Handling In-Place Updates
 ------------------------------------
 
-The system-call auditing code does not update auditing rules in place.
-However, if it did, reader-writer-locked code to do so might look as
-follows (presumably, the field_count is only permitted to decrease,
-otherwise, the added fields would need to be filled in)::
+The system-call auditing code does not update auditing rules in place.  However,
+if it did, the reader-writer-locked code to do so might look as follows
+(assuming only ``field_count`` is updated, otherwise, the added fields would
+need to be filled in)::
 
 	static inline int audit_upd_rule(struct audit_rule *rule,
 					 struct list_head *list,
 					 __u32 newaction,
 					 __u32 newfield_count)
 	{
-		struct audit_entry  *e;
-		struct audit_newentry *ne;
+		struct audit_entry *e;
+		struct audit_entry *ne;
 
 		write_lock(&auditsc_lock);
-		/* Note: audit_netlink_sem held by caller. */
+		/* Note: audit_filter_mutex held by caller. */
 		list_for_each_entry(e, list, list) {
 			if (!audit_compare_rule(rule, &e->rule)) {
 				e->rule.action = newaction;
@@ -188,16 +237,16 @@ otherwise, the added fields would need to be filled in)::
 
 The RCU version creates a copy, updates the copy, then replaces the old
 entry with the newly updated entry.  This sequence of actions, allowing
-concurrent reads while doing a copy to perform an update, is what gives
-RCU ("read-copy update") its name.  The RCU code is as follows::
+concurrent reads while making a copy to perform an update, is what gives
+RCU (*read-copy update*) its name.  The RCU code is as follows::
 
 	static inline int audit_upd_rule(struct audit_rule *rule,
 					 struct list_head *list,
 					 __u32 newaction,
 					 __u32 newfield_count)
 	{
-		struct audit_entry  *e;
-		struct audit_newentry *ne;
+		struct audit_entry *e;
+		struct audit_entry *ne;
 
 		list_for_each_entry(e, list, list) {
 			if (!audit_compare_rule(rule, &e->rule)) {
@@ -215,34 +264,45 @@ RCU ("read-copy update") its name.  The RCU code is as follows::
 		return -EFAULT;		/* No matching rule */
 	}
 
-Again, this assumes that the caller holds audit_netlink_sem.  Normally,
-the reader-writer lock would become a spinlock in this sort of code.
+Again, this assumes that the caller holds ``audit_filter_mutex``.  Normally, the
+writer lock would become a spinlock in this sort of code.
 
-Example 3: Eliminating Stale Data
+Another use of this pattern can be found in the openvswitch driver's *connection
+tracking table* code in ``ct_limit_set()``.  The table holds connection tracking
+entries and has a limit on the maximum entries.  There is one such table
+per-zone and hence one *limit* per zone.  The zones are mapped to their limits
+through a hashtable using an RCU-managed hlist for the hash chains. When a new
+limit is set, a new limit object is allocated and ``ct_limit_set()`` is called
+to replace the old limit object with the new one using list_replace_rcu().
+The old limit object is then freed after a grace period using kfree_rcu().
+
+
+Example 4: Eliminating Stale Data
 ---------------------------------
 
-The auditing examples above tolerate stale data, as do most algorithms
+The auditing example above tolerates stale data, as do most algorithms
 that are tracking external state.  Because there is a delay from the
 time the external state changes before Linux becomes aware of the change,
-additional RCU-induced staleness is normally not a problem.
+additional RCU-induced staleness is generally not a problem.
 
 However, there are many examples where stale data cannot be tolerated.
 One example in the Linux kernel is the System V IPC (see the ipc_lock()
-function in ipc/util.c).  This code checks a "deleted" flag under a
-per-entry spinlock, and, if the "deleted" flag is set, pretends that the
+function in ipc/util.c).  This code checks a *deleted* flag under a
+per-entry spinlock, and, if the *deleted* flag is set, pretends that the
 entry does not exist.  For this to be helpful, the search function must
-return holding the per-entry spinlock, as ipc_lock() does in fact do.
+return holding the per-entry lock, as ipc_lock() does in fact do.
+
+.. _quick_quiz:
 
 Quick Quiz:
-	Why does the search function need to return holding the per-entry lock for
-	this deleted-flag technique to be helpful?
+	For the deleted-flag technique to be helpful, why is it necessary
+	to hold the per-entry lock while returning from the search function?
 
-:ref:`Answer to Quick Quiz <answer_quick_quiz_list>`
+:ref:`Answer to Quick Quiz <quick_quiz_answer>`
 
-If the system-call audit module were to ever need to reject stale data,
-one way to accomplish this would be to add a "deleted" flag and a "lock"
-spinlock to the audit_entry structure, and modify audit_filter_task()
-as follows::
+If the system-call audit module were to ever need to reject stale data, one way
+to accomplish this would be to add a ``deleted`` flag and a ``lock`` spinlock to the
+audit_entry structure, and modify ``audit_filter_task()`` as follows::
 
 	static enum audit_state audit_filter_task(struct task_struct *tsk)
 	{
@@ -267,20 +327,20 @@ as follows::
 	}
 
 Note that this example assumes that entries are only added and deleted.
-Additional mechanism is required to deal correctly with the
-update-in-place performed by audit_upd_rule().  For one thing,
-audit_upd_rule() would need additional memory barriers to ensure
-that the list_add_rcu() was really executed before the list_del_rcu().
+Additional mechanism is required to deal correctly with the update-in-place
+performed by ``audit_upd_rule()``.  For one thing, ``audit_upd_rule()`` would
+need additional memory barriers to ensure that the list_add_rcu() was really
+executed before the list_del_rcu().
 
-The audit_del_rule() function would need to set the "deleted"
-flag under the spinlock as follows::
+The ``audit_del_rule()`` function would need to set the ``deleted`` flag under the
+spinlock as follows::
 
 	static inline int audit_del_rule(struct audit_rule *rule,
 					 struct list_head *list)
 	{
-		struct audit_entry  *e;
+		struct audit_entry *e;
 
-		/* Do not need to use the _rcu iterator here, since this
+		/* No need to use the _rcu iterator here, since this
 		 * is the only deletion routine. */
 		list_for_each_entry(e, list, list) {
 			if (!audit_compare_rule(rule, &e->rule)) {
@@ -295,6 +355,91 @@ flag under the spinlock as follows::
 		return -EFAULT;		/* No matching rule */
 	}
 
+This too assumes that the caller holds ``audit_filter_mutex``.
+
+
+Example 5: Skipping Stale Objects
+---------------------------------
+
+For some use cases, reader performance can be improved by skipping stale objects
+during read-side list traversal if the object in concern is pending destruction
+after one or more grace periods. One such example can be found in the timerfd
+subsystem. When a ``CLOCK_REALTIME`` clock is reprogrammed - for example due to
+setting of the system time - all programmed timerfds that depend on this
+clock get triggered and processes waiting on them to expire are woken up in
+advance of their scheduled expiry. To facilitate this, all such timers are added
+to an RCU-managed ``cancel_list`` when they are setup in
+``timerfd_setup_cancel()``::
+
+	static void timerfd_setup_cancel(struct timerfd_ctx *ctx, int flags)
+	{
+		spin_lock(&ctx->cancel_lock);
+		if (ctx->clockid == CLOCK_REALTIME &&
+		    (flags & TFD_TIMER_ABSTIME) && (flags & TFD_TIMER_CANCEL_ON_SET)) {
+			if (!ctx->might_cancel) {
+				ctx->might_cancel = true;
+				spin_lock(&cancel_lock);
+				list_add_rcu(&ctx->clist, &cancel_list);
+				spin_unlock(&cancel_lock);
+			}
+		}
+		spin_unlock(&ctx->cancel_lock);
+	}
+
+When a timerfd is freed (fd is closed), then the ``might_cancel`` flag of the
+timerfd object is cleared, the object removed from the ``cancel_list`` and
+destroyed::
+
+	int timerfd_release(struct inode *inode, struct file *file)
+	{
+		struct timerfd_ctx *ctx = file->private_data;
+
+		spin_lock(&ctx->cancel_lock);
+		if (ctx->might_cancel) {
+			ctx->might_cancel = false;
+			spin_lock(&cancel_lock);
+			list_del_rcu(&ctx->clist);
+			spin_unlock(&cancel_lock);
+		}
+		spin_unlock(&ctx->cancel_lock);
+
+		hrtimer_cancel(&ctx->t.tmr);
+		kfree_rcu(ctx, rcu);
+		return 0;
+	}
+
+If the ``CLOCK_REALTIME`` clock is set, for example by a time server, the
+hrtimer framework calls ``timerfd_clock_was_set()`` which walks the
+``cancel_list`` and wakes up processes waiting on the timerfd. While iterating
+the ``cancel_list``, the ``might_cancel`` flag is consulted to skip stale
+objects::
+
+	void timerfd_clock_was_set(void)
+	{
+		struct timerfd_ctx *ctx;
+		unsigned long flags;
+
+		rcu_read_lock();
+		list_for_each_entry_rcu(ctx, &cancel_list, clist) {
+			if (!ctx->might_cancel)
+				continue;
+			spin_lock_irqsave(&ctx->wqh.lock, flags);
+			if (ctx->moffs != ktime_mono_to_real(0)) {
+				ctx->moffs = KTIME_MAX;
+				ctx->ticks++;
+				wake_up_locked_poll(&ctx->wqh, EPOLLIN);
+			}
+			spin_unlock_irqrestore(&ctx->wqh.lock, flags);
+		}
+		rcu_read_unlock();
+	}
+
+The key point here is that, because RCU traversal of the ``cancel_list`` happens
+while objects are being added to and removed from the list, the traversal can
+sometimes step on an object that has already been removed from the list. In this
+example, such stale objects are simply skipped using the ``might_cancel`` flag.
+
+
 Summary
 -------
 
@@ -303,19 +448,21 @@ the most amenable to use of RCU.  The simplest case is where entries are
 either added or deleted from the data structure (or atomically modified
 in place), but non-atomic in-place modifications can be handled by making
 a copy, updating the copy, then replacing the original with the copy.
-If stale data cannot be tolerated, then a "deleted" flag may be used
+If stale data cannot be tolerated, then a *deleted* flag may be used
 in conjunction with a per-entry spinlock in order to allow the search
 function to reject newly deleted data.
 
-.. _answer_quick_quiz_list:
+.. _quick_quiz_answer:
 
 Answer to Quick Quiz:
-	Why does the search function need to return holding the per-entry
-	lock for this deleted-flag technique to be helpful?
+	For the deleted-flag technique to be helpful, why is it necessary
+	to hold the per-entry lock while returning from the search function?
 
 	If the search function drops the per-entry lock before returning,
 	then the caller will be processing stale data in any case.  If it
 	is really OK to be processing stale data, then you don't need a
-	"deleted" flag.  If processing stale data really is a problem,
+	*deleted* flag.  If processing stale data really is a problem,
 	then you need to hold the per-entry lock across all of the code
 	that uses the value that was returned.
+
+:ref:`Back to Quick Quiz <quick_quiz>`
-- 
2.9.5



* [PATCH tip/core/rcu 2/9] doc/RCU/Design: Remove remaining HTML tags in ReST files
From: paulmck @ 2020-02-14 23:38 UTC (permalink / raw)
  To: rcu
  Cc: linux-kernel, kernel-team, mingo, jiangshanlai, dipankar, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel, SeongJae Park, Paul E . McKenney

From: SeongJae Park <sjpark@amazon.de>

Commit ccc9971e2147 ("docs: rcu: convert some articles from html to
ReST") converted a few of the HTML RCU docs into ReST files, but a few
HTML tags that are not supported by ReST remain.  This commit
converts those remaining tags to appropriate ReST alternatives.

Reviewed-by: Madhuparna Bhowmik <madhuparnabhowmik04@gmail.com>
Signed-off-by: SeongJae Park <sjpark@amazon.de>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
 .../RCU/Design/Memory-Ordering/Tree-RCU-Memory-Ordering.rst       | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/Documentation/RCU/Design/Memory-Ordering/Tree-RCU-Memory-Ordering.rst b/Documentation/RCU/Design/Memory-Ordering/Tree-RCU-Memory-Ordering.rst
index 1a8b129..83ae3b7 100644
--- a/Documentation/RCU/Design/Memory-Ordering/Tree-RCU-Memory-Ordering.rst
+++ b/Documentation/RCU/Design/Memory-Ordering/Tree-RCU-Memory-Ordering.rst
@@ -4,7 +4,7 @@ A Tour Through TREE_RCU's Grace-Period Memory Ordering
 
 August 8, 2017
 
-This article was contributed by Paul E.&nbsp;McKenney
+This article was contributed by Paul E. McKenney
 
 Introduction
 ============
@@ -48,7 +48,7 @@ Tree RCU Grace Period Memory Ordering Building Blocks
 
 The workhorse for RCU's grace-period memory ordering is the
 critical section for the ``rcu_node`` structure's
-``-&gt;lock``. These critical sections use helper functions for lock
+``->lock``. These critical sections use helper functions for lock
 acquisition, including ``raw_spin_lock_rcu_node()``,
 ``raw_spin_lock_irq_rcu_node()``, and ``raw_spin_lock_irqsave_rcu_node()``.
 Their lock-release counterparts are ``raw_spin_unlock_rcu_node()``,
@@ -102,9 +102,9 @@ lock-acquisition and lock-release functions::
    23   r3 = READ_ONCE(x);
    24 }
    25
-   26 WARN_ON(r1 == 0 &amp;&amp; r2 == 0 &amp;&amp; r3 == 0);
+   26 WARN_ON(r1 == 0 && r2 == 0 && r3 == 0);
 
-The ``WARN_ON()`` is evaluated at &ldquo;the end of time&rdquo;,
+The ``WARN_ON()`` is evaluated at "the end of time",
 after all changes have propagated throughout the system.
 Without the ``smp_mb__after_unlock_lock()`` provided by the
 acquisition functions, this ``WARN_ON()`` could trigger, for example
-- 
2.9.5



* [PATCH tip/core/rcu 3/9] doc/RCU/listRCU: Fix typos in example code snippets
From: paulmck @ 2020-02-14 23:38 UTC (permalink / raw)
  To: rcu
  Cc: linux-kernel, kernel-team, mingo, jiangshanlai, dipankar, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel, SeongJae Park, Paul E . McKenney

From: SeongJae Park <sjpark@amazon.de>

Signed-off-by: SeongJae Park <sjpark@amazon.de>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
 Documentation/RCU/listRCU.rst | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/Documentation/RCU/listRCU.rst b/Documentation/RCU/listRCU.rst
index 55d2b30..e768f56 100644
--- a/Documentation/RCU/listRCU.rst
+++ b/Documentation/RCU/listRCU.rst
@@ -226,7 +226,7 @@ need to be filled in)::
 		list_for_each_entry(e, list, list) {
 			if (!audit_compare_rule(rule, &e->rule)) {
 				e->rule.action = newaction;
-				e->rule.file_count = newfield_count;
+				e->rule.field_count = newfield_count;
 				write_unlock(&auditsc_lock);
 				return 0;
 			}
@@ -255,7 +255,7 @@ RCU (*read-copy update*) its name.  The RCU code is as follows::
 					return -ENOMEM;
 				audit_copy_rule(&ne->rule, &e->rule);
 				ne->rule.action = newaction;
-				ne->rule.file_count = newfield_count;
+				ne->rule.field_count = newfield_count;
 				list_replace_rcu(&e->list, &ne->list);
 				call_rcu(&e->rcu, audit_free_rule);
 				return 0;
-- 
2.9.5



* [PATCH tip/core/rcu 4/9] doc/RCU/listRCU: Update example function name
From: paulmck @ 2020-02-14 23:38 UTC (permalink / raw)
  To: rcu
  Cc: linux-kernel, kernel-team, mingo, jiangshanlai, dipankar, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel, SeongJae Park, Paul E . McKenney

From: SeongJae Park <sjpark@amazon.de>

The listRCU.rst document gives an example using 'ipc_lock()', but that
function was dropped by commit 82061c57ce93 ("ipc: drop
ipc_lock()").  Because that commit melded the main logic of 'ipc_lock()'
into 'shm_lock()', this commit updates the document to use
'shm_lock()' instead.

Reviewed-by: Madhuparna Bhowmik <madhuparnabhowmik04@gmail.com>
Signed-off-by: SeongJae Park <sjpark@amazon.de>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
 Documentation/RCU/listRCU.rst | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/Documentation/RCU/listRCU.rst b/Documentation/RCU/listRCU.rst
index e768f56..2a643e2 100644
--- a/Documentation/RCU/listRCU.rst
+++ b/Documentation/RCU/listRCU.rst
@@ -286,11 +286,11 @@ time the external state changes before Linux becomes aware of the change,
 additional RCU-induced staleness is generally not a problem.
 
 However, there are many examples where stale data cannot be tolerated.
-One example in the Linux kernel is the System V IPC (see the ipc_lock()
-function in ipc/util.c).  This code checks a *deleted* flag under a
+One example in the Linux kernel is the System V IPC (see the shm_lock()
+function in ipc/shm.c).  This code checks a *deleted* flag under a
 per-entry spinlock, and, if the *deleted* flag is set, pretends that the
 entry does not exist.  For this to be helpful, the search function must
-return holding the per-entry lock, as ipc_lock() does in fact do.
+return holding the per-entry spinlock, as shm_lock() does in fact do.
 
 .. _quick_quiz:
 
-- 
2.9.5



* [PATCH tip/core/rcu 5/9] doc/RCU/rcu: Use ':ref:' for links to other docs
From: paulmck @ 2020-02-14 23:38 UTC (permalink / raw)
  To: rcu
  Cc: linux-kernel, kernel-team, mingo, jiangshanlai, dipankar, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel, SeongJae Park, Paul E . McKenney

From: SeongJae Park <sjpark@amazon.de>

Signed-off-by: SeongJae Park <sjpark@amazon.de>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
 Documentation/RCU/rcu.rst | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/Documentation/RCU/rcu.rst b/Documentation/RCU/rcu.rst
index 8dfb437..a1dd71d 100644
--- a/Documentation/RCU/rcu.rst
+++ b/Documentation/RCU/rcu.rst
@@ -11,8 +11,8 @@ must be long enough that any readers accessing the item being deleted have
 since dropped their references.  For example, an RCU-protected deletion
 from a linked list would first remove the item from the list, wait for
 a grace period to elapse, then free the element.  See the
-Documentation/RCU/listRCU.rst file for more information on using RCU with
-linked lists.
+:ref:`Documentation/RCU/listRCU.rst <list_rcu_doc>` for more information on
+using RCU with linked lists.
 
 Frequently Asked Questions
 --------------------------
@@ -50,7 +50,7 @@ Frequently Asked Questions
 - If I am running on a uniprocessor kernel, which can only do one
   thing at a time, why should I wait for a grace period?
 
-  See the Documentation/RCU/UP.rst file for more information.
+  See :ref:`Documentation/RCU/UP.rst <up_doc>` for more information.
 
 - How can I see where RCU is currently used in the Linux kernel?
 
@@ -68,9 +68,9 @@ Frequently Asked Questions
 
 - Why the name "RCU"?
 
-  "RCU" stands for "read-copy update".  The file Documentation/RCU/listRCU.rst
-  has more information on where this name came from, search for
-  "read-copy update" to find it.
+  "RCU" stands for "read-copy update".
+  :ref:`Documentation/RCU/listRCU.rst <list_rcu_doc>` has more information on where
+  this name came from, search for "read-copy update" to find it.
 
 - I hear that RCU is patented?  What is with that?
 
-- 
2.9.5



* [PATCH tip/core/rcu 6/9] doc/RCU/rcu: Use absolute paths for non-rst files
From: paulmck @ 2020-02-14 23:39 UTC (permalink / raw)
  To: rcu
  Cc: linux-kernel, kernel-team, mingo, jiangshanlai, dipankar, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel, SeongJae Park, Paul E . McKenney

From: SeongJae Park <sjpark@amazon.de>

Signed-off-by: SeongJae Park <sjpark@amazon.de>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
 Documentation/RCU/rcu.rst | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/Documentation/RCU/rcu.rst b/Documentation/RCU/rcu.rst
index a1dd71d..2a830c5 100644
--- a/Documentation/RCU/rcu.rst
+++ b/Documentation/RCU/rcu.rst
@@ -75,7 +75,7 @@ Frequently Asked Questions
 - I hear that RCU is patented?  What is with that?
 
   Yes, it is.  There are several known patents related to RCU,
-  search for the string "Patent" in RTFP.txt to find them.
+  search for the string "Patent" in Documentation/RCU/RTFP.txt to find them.
   Of these, one was allowed to lapse by the assignee, and the
   others have been contributed to the Linux kernel under GPL.
   There are now also LGPL implementations of user-level RCU
@@ -88,5 +88,5 @@ Frequently Asked Questions
 
 - Where can I find more information on RCU?
 
-  See the RTFP.txt file in this directory.
+  See the Documentation/RCU/RTFP.txt file.
   Or point your browser at (http://www.rdrop.com/users/paulmck/RCU/).
-- 
2.9.5



* [PATCH tip/core/rcu 7/9] doc/RCU/rcu: Use https instead of http if possible
From: paulmck @ 2020-02-14 23:39 UTC (permalink / raw)
  To: rcu
  Cc: linux-kernel, kernel-team, mingo, jiangshanlai, dipankar, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel, SeongJae Park, Paul E . McKenney

From: SeongJae Park <sjpark@amazon.de>

Signed-off-by: SeongJae Park <sjpark@amazon.de>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
 Documentation/RCU/rcu.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Documentation/RCU/rcu.rst b/Documentation/RCU/rcu.rst
index 2a830c5..0e03c6e 100644
--- a/Documentation/RCU/rcu.rst
+++ b/Documentation/RCU/rcu.rst
@@ -79,7 +79,7 @@ Frequently Asked Questions
   Of these, one was allowed to lapse by the assignee, and the
   others have been contributed to the Linux kernel under GPL.
   There are now also LGPL implementations of user-level RCU
-  available (http://liburcu.org/).
+  available (https://liburcu.org/).
 
 - I hear that RCU needs work in order to support realtime kernels?
 
-- 
2.9.5



* [PATCH tip/core/rcu 8/9] doc: Add rcutorture scripting to torture.txt
From: paulmck @ 2020-02-14 23:39 UTC (permalink / raw)
  To: rcu
  Cc: linux-kernel, kernel-team, mingo, jiangshanlai, dipankar, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel, Paul E. McKenney

From: "Paul E. McKenney" <paulmck@kernel.org>

When testing mainline kernels, the kvm.sh rcutorture script is the
preferred approach.  This commit therefore adds a description of it to
the torture.txt documentation.

Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
 Documentation/RCU/torture.txt | 147 ++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 140 insertions(+), 7 deletions(-)

diff --git a/Documentation/RCU/torture.txt b/Documentation/RCU/torture.txt
index a41a038..af712a3 100644
--- a/Documentation/RCU/torture.txt
+++ b/Documentation/RCU/torture.txt
@@ -124,9 +124,14 @@ using a dynamically allocated srcu_struct (hence "srcud-" rather than
 debugging.  The final "T" entry contains the totals of the counters.
 
 
-USAGE
+USAGE ON SPECIFIC KERNEL BUILDS
 
-The following script may be used to torture RCU:
+It is sometimes desirable to torture RCU on a specific kernel build,
+for example, when preparing to put that kernel build into production.
+In that case, the kernel should be built with CONFIG_RCU_TORTURE_TEST=m
+so that the test can be started using modprobe and terminated using rmmod.
+
+For example, the following script may be used to torture RCU:
 
 	#!/bin/sh
 
@@ -142,8 +147,136 @@ checked for such errors.  The "rmmod" command forces a "SUCCESS",
 two are self-explanatory, while the last indicates that while there
 were no RCU failures, CPU-hotplug problems were detected.
 
-However, the tools/testing/selftests/rcutorture/bin/kvm.sh script
-provides better automation, including automatic failure analysis.
-It assumes a qemu/kvm-enabled platform, and runs guest OSes out of initrd.
-See tools/testing/selftests/rcutorture/doc/initrd.txt for instructions
-on setting up such an initrd.
+
+USAGE ON MAINLINE KERNELS
+
+When using rcutorture to test changes to RCU itself, it is often
+necessary to build a number of kernels in order to test that change
+across a broad range of combinations of the relevant Kconfig options
+and of the relevant kernel boot parameters.  In this situation, use
+of modprobe and rmmod can be quite time-consuming and error-prone.
+
+Therefore, the tools/testing/selftests/rcutorture/bin/kvm.sh
+script is available for mainline testing for x86, arm64, and
+powerpc.  By default, it will run the series of tests specified by
+tools/testing/selftests/rcutorture/configs/rcu/CFLIST, with each test
+running for 30 minutes within a guest OS using a minimal userspace
+supplied by an automatically generated initrd.  After the tests are
+complete, the resulting build products and console output are analyzed
+for errors and the results of the runs are summarized.
+
+On larger systems, rcutorture testing can be accelerated by passing the
+--cpus argument to kvm.sh.  For example, on a 64-CPU system, "--cpus 43"
+would use up to 43 CPUs to run tests concurrently, which as of v5.4 would
+complete all the scenarios in two batches, reducing the time to complete
+from about eight hours to about one hour (not counting the time to build
+the sixteen kernels).  The "--dryrun sched" argument will not run tests,
+but rather tell you how the tests would be scheduled into batches.  This
+can be useful when working out how many CPUs to specify in the --cpus
+argument.
+
+Not all changes require that all scenarios be run.  For example, a change
+to Tree SRCU might run only the SRCU-N and SRCU-P scenarios using the
+--configs argument to kvm.sh as follows:  "--configs 'SRCU-N SRCU-P'".
+Large systems can run multiple copies of the full set of scenarios,
+for example, a system with 448 hardware threads can run five instances
+of the full set concurrently.  To make this happen:
+
+	kvm.sh --cpus 448 --configs '5*CFLIST'
+
+Alternatively, such a system can run 56 concurrent instances of a single
+eight-CPU scenario:
+
+	kvm.sh --cpus 448 --configs '56*TREE04'
+
+Or 28 concurrent instances of each of two eight-CPU scenarios:
+
+	kvm.sh --cpus 448 --configs '28*TREE03 28*TREE04'
+
+Of course, each concurrent instance will use memory, which can be
+limited using the --memory argument, which defaults to 512M.  Small
+values for memory may require disabling the callback-flooding tests
+using the --bootargs parameter discussed below.
+
+Sometimes additional debugging is useful, and in such cases the --kconfig
+parameter to kvm.sh may be used, for example, "--kconfig 'CONFIG_KASAN=y'".
+
+Kernel boot arguments can also be supplied, for example, to control
+rcutorture's module parameters.  For example, to test a change to RCU's
+CPU stall-warning code, use "--bootargs 'rcutorture.stall_cpu=30'".
+This will of course result in the scripting reporting a failure, namely
+the resulting RCU CPU stall warning.  As noted above, reducing memory may
+require disabling rcutorture's callback-flooding tests:
+
+	kvm.sh --cpus 448 --configs '56*TREE04' --memory 128M \
+		--bootargs 'rcutorture.fwd_progress=0'
+
+Sometimes all that is needed is a full set of kernel builds.  This is
+what the --buildonly argument does.
+
+Finally, the --trust-make argument allows each kernel build to reuse what
+it can from the previous kernel build.
+
+There are additional more arcane arguments that are documented in the
+source code of the kvm.sh script.
+
+If a run contains failures, the number of buildtime and runtime failures
+is listed at the end of the kvm.sh output, which you really should redirect
+to a file.  The build products and console output of each run is kept in
+tools/testing/selftests/rcutorture/res in timestamped directories.  A
+given directory can be supplied to kvm-find-errors.sh in order to have
+it cycle you through summaries of errors and full error logs.  For example:
+
+	tools/testing/selftests/rcutorture/bin/kvm-find-errors.sh \
+		tools/testing/selftests/rcutorture/res/2020.01.20-15.54.23
+
+However, it is often more convenient to access the files directly.
+Files pertaining to all scenarios in a run reside in the top-level
+directory (2020.01.20-15.54.23 in the example above), while per-scenario
+files reside in a subdirectory named after the scenario (for example,
+"TREE04").  If a given scenario ran more than once (as in "--configs
+'56*TREE04'" above), the directories corresponding to the second and
+subsequent runs of that scenario include a sequence number, for example,
+"TREE04.2", "TREE04.3", and so on.
+
+The most frequently used file in the top-level directory is testid.txt.
+If the test ran in a git repository, then this file contains the commit
+that was tested and any uncommitted changes in diff format.
+
+The most frequently used files in each per-scenario-run directory are:
+
+.config: This file contains the Kconfig options.
+
+Make.out: This contains build output for a specific scenario.
+
+console.log: This contains the console output for a specific scenario.
+	This file may be examined once the kernel has booted, but
+	it might not exist if the build failed.
+
+vmlinux: This contains the kernel, which can be useful with tools like
+	objdump and gdb.
+
+A number of additional files are available, but are less frequently used.
+Many are intended for debugging of rcutorture itself or of its scripting.
+
+As of v5.4, a successful run with the default set of scenarios produces
+the following summary at the end of the run on a 12-CPU system:
+
+SRCU-N ------- 804233 GPs (148.932/s) [srcu: g10008272 f0x0 ]
+SRCU-P ------- 202320 GPs (37.4667/s) [srcud: g1809476 f0x0 ]
+SRCU-t ------- 1122086 GPs (207.794/s) [srcu: g0 f0x0 ]
+SRCU-u ------- 1111285 GPs (205.794/s) [srcud: g1 f0x0 ]
+TASKS01 ------- 19666 GPs (3.64185/s) [tasks: g0 f0x0 ]
+TASKS02 ------- 20541 GPs (3.80389/s) [tasks: g0 f0x0 ]
+TASKS03 ------- 19416 GPs (3.59556/s) [tasks: g0 f0x0 ]
+TINY01 ------- 836134 GPs (154.84/s) [rcu: g0 f0x0 ] n_max_cbs: 34198
+TINY02 ------- 850371 GPs (157.476/s) [rcu: g0 f0x0 ] n_max_cbs: 2631
+TREE01 ------- 162625 GPs (30.1157/s) [rcu: g1124169 f0x0 ]
+TREE02 ------- 333003 GPs (61.6672/s) [rcu: g2647753 f0x0 ] n_max_cbs: 35844
+TREE03 ------- 306623 GPs (56.782/s) [rcu: g2975325 f0x0 ] n_max_cbs: 1496497
+CPU count limited from 16 to 12
+TREE04 ------- 246149 GPs (45.5831/s) [rcu: g1695737 f0x0 ] n_max_cbs: 434961
+TREE05 ------- 314603 GPs (58.2598/s) [rcu: g2257741 f0x2 ] n_max_cbs: 193997
+TREE07 ------- 167347 GPs (30.9902/s) [rcu: g1079021 f0x0 ] n_max_cbs: 478732
+CPU count limited from 16 to 12
+TREE09 ------- 752238 GPs (139.303/s) [rcu: g13075057 f0x0 ] n_max_cbs: 99011
-- 
2.9.5



* [PATCH tip/core/rcu 9/9] Documentation/memory-barriers: Fix typos
From: paulmck @ 2020-02-14 23:39 UTC (permalink / raw)
  To: rcu
  Cc: linux-kernel, kernel-team, mingo, jiangshanlai, dipankar, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel, SeongJae Park, Paul E . McKenney

From: SeongJae Park <sjpark@amazon.de>

Signed-off-by: SeongJae Park <sjpark@amazon.de>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
 Documentation/memory-barriers.txt | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
index 7146da0..e1c355e 100644
--- a/Documentation/memory-barriers.txt
+++ b/Documentation/memory-barriers.txt
@@ -185,7 +185,7 @@ As a further example, consider this sequence of events:
 	===============	===============
 	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
 	B = 4;		Q = P;
-	P = &B		D = *Q;
+	P = &B;		D = *Q;
 
 There is an obvious data dependency here, as the value loaded into D depends on
 the address retrieved from P by CPU 2.  At the end of the sequence, any of the
@@ -569,7 +569,7 @@ following sequence of events:
 	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
 	B = 4;
 	<write barrier>
-	WRITE_ONCE(P, &B)
+	WRITE_ONCE(P, &B);
 			      Q = READ_ONCE(P);
 			      D = *Q;
 
@@ -1721,7 +1721,7 @@ of optimizations:
      and WRITE_ONCE() are more selective:  With READ_ONCE() and
      WRITE_ONCE(), the compiler need only forget the contents of the
      indicated memory locations, while with barrier() the compiler must
-     discard the value of all memory locations that it has currented
+     discard the value of all memory locations that it has currently
      cached in any machine registers.  Of course, the compiler must also
      respect the order in which the READ_ONCE()s and WRITE_ONCE()s occur,
      though the CPU of course need not do so.
@@ -1833,7 +1833,7 @@ Aside: In the case of data dependencies, the compiler would be expected
 to issue the loads in the correct order (eg. `a[b]` would have to load
 the value of b before loading a[b]), however there is no guarantee in
 the C specification that the compiler may not speculate the value of b
-(eg. is equal to 1) and load a before b (eg. tmp = a[1]; if (b != 1)
+(eg. is equal to 1) and load a[b] before b (eg. tmp = a[1]; if (b != 1)
 tmp = a[b]; ).  There is also the problem of a compiler reloading b after
 having loaded a[b], thus having a newer copy of b than a[b].  A consensus
 has not yet been reached about these problems, however the READ_ONCE()
-- 
2.9.5



* [PATCH v2 tip/core/rcu 1/6] srcu: Make Tiny SRCU use multi-bit grace-period counter
From: paulmck @ 2020-11-21  0:59 UTC (permalink / raw)
  To: rcu
  Cc: linux-kernel, kernel-team, mingo, jiangshanlai, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel, kent.overstreet,
	Paul E. McKenney

From: "Paul E. McKenney" <paulmck@kernel.org>

There is a need for a polling interface for SRCU grace periods.  This
polling needs to distinguish between an SRCU instance being idle on the
one hand and in the middle of a grace period on the other.  This commit
therefore converts the Tiny SRCU srcu_struct structure's srcu_idx from
a de facto boolean to a free-running counter, using the bottom bit to
indicate that a grace period is in progress.  The second-from-bottom
bit is thus used as the index returned by srcu_read_lock().
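
For illustration only (the helper names below are invented for this sketch
and are not part of the patch), the two low-order bits of the counter can
be read as follows:

	/* Sketch: how the free-running srcu_idx encodes Tiny SRCU state. */
	static inline bool my_srcu_gp_in_progress(unsigned short srcu_idx)
	{
		return srcu_idx & 0x1;		/* Bottom bit: grace period running. */
	}

	static inline int my_srcu_reader_slot(unsigned short srcu_idx)
	{
		return (srcu_idx & 0x2) >> 1;	/* Bit 0x2: reader array index. */
	}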

Link: https://lore.kernel.org/rcu/20201112201547.GF3365678@moria.home.lan/
Reported-by: Kent Overstreet <kent.overstreet@gmail.com>
[ paulmck: Fix __srcu_read_lock() idx computation per Neeraj Upadhyay. ]
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
 include/linux/srcutiny.h | 4 ++--
 kernel/rcu/srcutiny.c    | 5 +++--
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/include/linux/srcutiny.h b/include/linux/srcutiny.h
index 5a5a194..d9edb67 100644
--- a/include/linux/srcutiny.h
+++ b/include/linux/srcutiny.h
@@ -15,7 +15,7 @@
 
 struct srcu_struct {
 	short srcu_lock_nesting[2];	/* srcu_read_lock() nesting depth. */
-	short srcu_idx;			/* Current reader array element. */
+	unsigned short srcu_idx;	/* Current reader array element in bit 0x2. */
 	u8 srcu_gp_running;		/* GP workqueue running? */
 	u8 srcu_gp_waiting;		/* GP waiting for readers? */
 	struct swait_queue_head srcu_wq;
@@ -59,7 +59,7 @@ static inline int __srcu_read_lock(struct srcu_struct *ssp)
 {
 	int idx;
 
-	idx = READ_ONCE(ssp->srcu_idx);
+	idx = ((READ_ONCE(ssp->srcu_idx) + 1) & 0x2) >> 1;
 	WRITE_ONCE(ssp->srcu_lock_nesting[idx], ssp->srcu_lock_nesting[idx] + 1);
 	return idx;
 }
diff --git a/kernel/rcu/srcutiny.c b/kernel/rcu/srcutiny.c
index 6208c1d..5598cf6 100644
--- a/kernel/rcu/srcutiny.c
+++ b/kernel/rcu/srcutiny.c
@@ -124,11 +124,12 @@ void srcu_drive_gp(struct work_struct *wp)
 	ssp->srcu_cb_head = NULL;
 	ssp->srcu_cb_tail = &ssp->srcu_cb_head;
 	local_irq_enable();
-	idx = ssp->srcu_idx;
-	WRITE_ONCE(ssp->srcu_idx, !ssp->srcu_idx);
+	idx = (ssp->srcu_idx & 0x2) / 2;
+	WRITE_ONCE(ssp->srcu_idx, ssp->srcu_idx + 1);
 	WRITE_ONCE(ssp->srcu_gp_waiting, true);  /* srcu_read_unlock() wakes! */
 	swait_event_exclusive(ssp->srcu_wq, !READ_ONCE(ssp->srcu_lock_nesting[idx]));
 	WRITE_ONCE(ssp->srcu_gp_waiting, false); /* srcu_read_unlock() cheap. */
+	WRITE_ONCE(ssp->srcu_idx, ssp->srcu_idx + 1);
 
 	/* Invoke the callbacks we removed above. */
 	while (lh) {
-- 
2.9.5


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH v2 tip/core/rcu 2/6] srcu: Provide internal interface to start a Tiny SRCU grace period
       [not found] <@@@>
                   ` (11 preceding siblings ...)
  2020-11-21  0:59 ` [PATCH v2 tip/core/rcu 1/6] srcu: Make Tiny SRCU use multi-bit grace-period counter paulmck
@ 2020-11-21  0:59 ` paulmck
  2020-11-21  0:59 ` [PATCH v2 tip/core/rcu 3/6] srcu: Provide internal interface to start a Tree " paulmck
                   ` (3 subsequent siblings)
  16 siblings, 0 replies; 32+ messages in thread
From: paulmck @ 2020-11-21  0:59 UTC (permalink / raw)
  To: rcu
  Cc: linux-kernel, kernel-team, mingo, jiangshanlai, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel, kent.overstreet,
	Paul E. McKenney

From: "Paul E. McKenney" <paulmck@kernel.org>

There is a need for a polling interface for SRCU grace periods.
This polling needs to initiate an SRCU grace period without
having to queue (and manage) a callback.  This commit therefore
splits the Tiny SRCU call_srcu() function into callback-queuing and
start-grace-period portions, with the latter in a new function named
srcu_gp_start_if_needed().

Link: https://lore.kernel.org/rcu/20201112201547.GF3365678@moria.home.lan/
Reported-by: Kent Overstreet <kent.overstreet@gmail.com>
Reviewed-by: Neeraj Upadhyay <neeraju@codeaurora.org>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
 kernel/rcu/srcutiny.c | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/kernel/rcu/srcutiny.c b/kernel/rcu/srcutiny.c
index 5598cf6..3bac1db 100644
--- a/kernel/rcu/srcutiny.c
+++ b/kernel/rcu/srcutiny.c
@@ -152,6 +152,16 @@ void srcu_drive_gp(struct work_struct *wp)
 }
 EXPORT_SYMBOL_GPL(srcu_drive_gp);
 
+static void srcu_gp_start_if_needed(struct srcu_struct *ssp)
+{
+	if (!READ_ONCE(ssp->srcu_gp_running)) {
+		if (likely(srcu_init_done))
+			schedule_work(&ssp->srcu_work);
+		else if (list_empty(&ssp->srcu_work.entry))
+			list_add(&ssp->srcu_work.entry, &srcu_boot_list);
+	}
+}
+
 /*
  * Enqueue an SRCU callback on the specified srcu_struct structure,
  * initiating grace-period processing if it is not already running.
@@ -167,12 +177,7 @@ void call_srcu(struct srcu_struct *ssp, struct rcu_head *rhp,
 	*ssp->srcu_cb_tail = rhp;
 	ssp->srcu_cb_tail = &rhp->next;
 	local_irq_restore(flags);
-	if (!READ_ONCE(ssp->srcu_gp_running)) {
-		if (likely(srcu_init_done))
-			schedule_work(&ssp->srcu_work);
-		else if (list_empty(&ssp->srcu_work.entry))
-			list_add(&ssp->srcu_work.entry, &srcu_boot_list);
-	}
+	srcu_gp_start_if_needed(ssp);
 }
 EXPORT_SYMBOL_GPL(call_srcu);
 
-- 
2.9.5


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH v2 tip/core/rcu 3/6] srcu: Provide internal interface to start a Tree SRCU grace period
       [not found] <@@@>
                   ` (12 preceding siblings ...)
  2020-11-21  0:59 ` [PATCH v2 tip/core/rcu 2/6] srcu: Provide internal interface to start a Tiny SRCU grace period paulmck
@ 2020-11-21  0:59 ` paulmck
  2020-11-21  0:59 ` [PATCH v2 tip/core/rcu 4/6] srcu: Provide polling interfaces for Tiny SRCU grace periods paulmck
                   ` (2 subsequent siblings)
  16 siblings, 0 replies; 32+ messages in thread
From: paulmck @ 2020-11-21  0:59 UTC (permalink / raw)
  To: rcu
  Cc: linux-kernel, kernel-team, mingo, jiangshanlai, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel, kent.overstreet,
	Paul E. McKenney

From: "Paul E. McKenney" <paulmck@kernel.org>

There is a need for a polling interface for SRCU grace periods.
This polling needs to initiate an SRCU grace period without having
to queue (and manage) a callback.  This commit therefore splits the
Tree SRCU __call_srcu() function into callback-initialization and
queuing/start-grace-period portions, with the latter in a new function
named srcu_gp_start_if_needed().  This function may be passed a NULL
callback pointer, in which case it will refrain from queuing anything.

Why have the new function mess with queuing?  Locking considerations,
of course!

Link: https://lore.kernel.org/rcu/20201112201547.GF3365678@moria.home.lan/
Reported-by: Kent Overstreet <kent.overstreet@gmail.com>
Reviewed-by: Neeraj Upadhyay <neeraju@codeaurora.org>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
 kernel/rcu/srcutree.c | 66 +++++++++++++++++++++++++++++----------------------
 1 file changed, 37 insertions(+), 29 deletions(-)

diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index 79b7081..d930ece 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -808,6 +808,42 @@ static void srcu_leak_callback(struct rcu_head *rhp)
 }
 
 /*
+ * Start an SRCU grace period, and also queue the callback if non-NULL.
+ */
+static void srcu_gp_start_if_needed(struct srcu_struct *ssp, struct rcu_head *rhp, bool do_norm)
+{
+	unsigned long flags;
+	int idx;
+	bool needexp = false;
+	bool needgp = false;
+	unsigned long s;
+	struct srcu_data *sdp;
+
+	idx = srcu_read_lock(ssp);
+	sdp = raw_cpu_ptr(ssp->sda);
+	spin_lock_irqsave_rcu_node(sdp, flags);
+	rcu_segcblist_enqueue(&sdp->srcu_cblist, rhp);
+	rcu_segcblist_advance(&sdp->srcu_cblist,
+			      rcu_seq_current(&ssp->srcu_gp_seq));
+	s = rcu_seq_snap(&ssp->srcu_gp_seq);
+	(void)rcu_segcblist_accelerate(&sdp->srcu_cblist, s);
+	if (ULONG_CMP_LT(sdp->srcu_gp_seq_needed, s)) {
+		sdp->srcu_gp_seq_needed = s;
+		needgp = true;
+	}
+	if (!do_norm && ULONG_CMP_LT(sdp->srcu_gp_seq_needed_exp, s)) {
+		sdp->srcu_gp_seq_needed_exp = s;
+		needexp = true;
+	}
+	spin_unlock_irqrestore_rcu_node(sdp, flags);
+	if (needgp)
+		srcu_funnel_gp_start(ssp, sdp, s, do_norm);
+	else if (needexp)
+		srcu_funnel_exp_start(ssp, sdp->mynode, s);
+	srcu_read_unlock(ssp, idx);
+}
+
+/*
  * Enqueue an SRCU callback on the srcu_data structure associated with
  * the current CPU and the specified srcu_struct structure, initiating
  * grace-period processing if it is not already running.
@@ -838,13 +874,6 @@ static void srcu_leak_callback(struct rcu_head *rhp)
 static void __call_srcu(struct srcu_struct *ssp, struct rcu_head *rhp,
 			rcu_callback_t func, bool do_norm)
 {
-	unsigned long flags;
-	int idx;
-	bool needexp = false;
-	bool needgp = false;
-	unsigned long s;
-	struct srcu_data *sdp;
-
 	check_init_srcu_struct(ssp);
 	if (debug_rcu_head_queue(rhp)) {
 		/* Probable double call_srcu(), so leak the callback. */
@@ -853,28 +882,7 @@ static void __call_srcu(struct srcu_struct *ssp, struct rcu_head *rhp,
 		return;
 	}
 	rhp->func = func;
-	idx = srcu_read_lock(ssp);
-	sdp = raw_cpu_ptr(ssp->sda);
-	spin_lock_irqsave_rcu_node(sdp, flags);
-	rcu_segcblist_enqueue(&sdp->srcu_cblist, rhp);
-	rcu_segcblist_advance(&sdp->srcu_cblist,
-			      rcu_seq_current(&ssp->srcu_gp_seq));
-	s = rcu_seq_snap(&ssp->srcu_gp_seq);
-	(void)rcu_segcblist_accelerate(&sdp->srcu_cblist, s);
-	if (ULONG_CMP_LT(sdp->srcu_gp_seq_needed, s)) {
-		sdp->srcu_gp_seq_needed = s;
-		needgp = true;
-	}
-	if (!do_norm && ULONG_CMP_LT(sdp->srcu_gp_seq_needed_exp, s)) {
-		sdp->srcu_gp_seq_needed_exp = s;
-		needexp = true;
-	}
-	spin_unlock_irqrestore_rcu_node(sdp, flags);
-	if (needgp)
-		srcu_funnel_gp_start(ssp, sdp, s, do_norm);
-	else if (needexp)
-		srcu_funnel_exp_start(ssp, sdp->mynode, s);
-	srcu_read_unlock(ssp, idx);
+	srcu_gp_start_if_needed(ssp, rhp, do_norm);
 }
 
 /**
-- 
2.9.5


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH v2 tip/core/rcu 4/6] srcu: Provide polling interfaces for Tiny SRCU grace periods
       [not found] <@@@>
                   ` (13 preceding siblings ...)
  2020-11-21  0:59 ` [PATCH v2 tip/core/rcu 3/6] srcu: Provide internal interface to start a Tree " paulmck
@ 2020-11-21  0:59 ` paulmck
  2020-11-22 14:30   ` Neeraj Upadhyay
  2020-11-23  4:43   ` Neeraj Upadhyay
  2020-11-21  0:59 ` [PATCH v2 tip/core/rcu 5/6] srcu: Provide polling interfaces for Tree " paulmck
  2020-11-21  0:59 ` [PATCH v2 tip/core/rcu 6/6] srcu: Document " paulmck
  16 siblings, 2 replies; 32+ messages in thread
From: paulmck @ 2020-11-21  0:59 UTC (permalink / raw)
  To: rcu
  Cc: linux-kernel, kernel-team, mingo, jiangshanlai, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel, kent.overstreet,
	Paul E. McKenney

From: "Paul E. McKenney" <paulmck@kernel.org>

There is a need for a polling interface for SRCU grace
periods, so this commit supplies get_state_synchronize_srcu(),
start_poll_synchronize_srcu(), and poll_state_synchronize_srcu() for this
purpose.  The first can be used if future grace periods are inevitable
(perhaps due to a later call_srcu() invocation), the second if future
grace periods might not otherwise happen, and the third to check if a
grace period has elapsed since the corresponding call to either of the
first two.

As with get_state_synchronize_rcu() and cond_synchronize_rcu(),
the return value from either get_state_synchronize_srcu() or
start_poll_synchronize_srcu() must be passed in to a later call to
poll_state_synchronize_srcu().
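
As a rough usage sketch (illustrative only; my_srcu, my_cookie, and
my_try_cleanup() are invented names, not part of this patch):

	static unsigned long my_cookie;

	static void my_request_gp(struct srcu_struct *my_srcu)
	{
		/* Also starts a grace period if none is already in flight. */
		my_cookie = start_poll_synchronize_srcu(my_srcu);
	}

	static bool my_try_cleanup(struct srcu_struct *my_srcu)
	{
		if (!poll_state_synchronize_srcu(my_srcu, my_cookie))
			return false;	/* Grace period not yet complete, retry later. */
		/* Readers that began before my_request_gp() have all finished. */
		return true;
	}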

Link: https://lore.kernel.org/rcu/20201112201547.GF3365678@moria.home.lan/
Reported-by: Kent Overstreet <kent.overstreet@gmail.com>
[ paulmck: Add EXPORT_SYMBOL_GPL() per kernel test robot feedback. ]
[ paulmck: Apply feedback from Neeraj Upadhyay. ]
Link: https://lore.kernel.org/lkml/20201117004017.GA7444@paulmck-ThinkPad-P72/
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
 include/linux/rcupdate.h |  2 ++
 include/linux/srcu.h     |  3 +++
 include/linux/srcutiny.h |  1 +
 kernel/rcu/srcutiny.c    | 52 ++++++++++++++++++++++++++++++++++++++++++++++--
 4 files changed, 56 insertions(+), 2 deletions(-)

diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index de08264..e09c0d8 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -33,6 +33,8 @@
 #define ULONG_CMP_GE(a, b)	(ULONG_MAX / 2 >= (a) - (b))
 #define ULONG_CMP_LT(a, b)	(ULONG_MAX / 2 < (a) - (b))
 #define ulong2long(a)		(*(long *)(&(a)))
+#define USHORT_CMP_GE(a, b)	(USHRT_MAX / 2 >= (unsigned short)((a) - (b)))
+#define USHORT_CMP_LT(a, b)	(USHRT_MAX / 2 < (unsigned short)((a) - (b)))
 
 /* Exported common interfaces */
 void call_rcu(struct rcu_head *head, rcu_callback_t func);
diff --git a/include/linux/srcu.h b/include/linux/srcu.h
index e432cc9..a0895bb 100644
--- a/include/linux/srcu.h
+++ b/include/linux/srcu.h
@@ -60,6 +60,9 @@ void cleanup_srcu_struct(struct srcu_struct *ssp);
 int __srcu_read_lock(struct srcu_struct *ssp) __acquires(ssp);
 void __srcu_read_unlock(struct srcu_struct *ssp, int idx) __releases(ssp);
 void synchronize_srcu(struct srcu_struct *ssp);
+unsigned long get_state_synchronize_srcu(struct srcu_struct *ssp);
+unsigned long start_poll_synchronize_srcu(struct srcu_struct *ssp);
+bool poll_state_synchronize_srcu(struct srcu_struct *ssp, unsigned long cookie);
 
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
 
diff --git a/include/linux/srcutiny.h b/include/linux/srcutiny.h
index d9edb67..c7f0c1f 100644
--- a/include/linux/srcutiny.h
+++ b/include/linux/srcutiny.h
@@ -16,6 +16,7 @@
 struct srcu_struct {
 	short srcu_lock_nesting[2];	/* srcu_read_lock() nesting depth. */
 	unsigned short srcu_idx;	/* Current reader array element in bit 0x2. */
+	unsigned short srcu_idx_max;	/* Furthest future srcu_idx request. */
 	u8 srcu_gp_running;		/* GP workqueue running? */
 	u8 srcu_gp_waiting;		/* GP waiting for readers? */
 	struct swait_queue_head srcu_wq;
diff --git a/kernel/rcu/srcutiny.c b/kernel/rcu/srcutiny.c
index 3bac1db..b073175 100644
--- a/kernel/rcu/srcutiny.c
+++ b/kernel/rcu/srcutiny.c
@@ -34,6 +34,7 @@ static int init_srcu_struct_fields(struct srcu_struct *ssp)
 	ssp->srcu_gp_running = false;
 	ssp->srcu_gp_waiting = false;
 	ssp->srcu_idx = 0;
+	ssp->srcu_idx_max = 0;
 	INIT_WORK(&ssp->srcu_work, srcu_drive_gp);
 	INIT_LIST_HEAD(&ssp->srcu_work.entry);
 	return 0;
@@ -114,7 +115,7 @@ void srcu_drive_gp(struct work_struct *wp)
 	struct srcu_struct *ssp;
 
 	ssp = container_of(wp, struct srcu_struct, srcu_work);
-	if (ssp->srcu_gp_running || !READ_ONCE(ssp->srcu_cb_head))
+	if (ssp->srcu_gp_running || USHORT_CMP_GE(ssp->srcu_idx, READ_ONCE(ssp->srcu_idx_max)))
 		return; /* Already running or nothing to do. */
 
 	/* Remove recently arrived callbacks and wait for readers. */
@@ -147,13 +148,18 @@ void srcu_drive_gp(struct work_struct *wp)
 	 * straighten that out.
 	 */
 	WRITE_ONCE(ssp->srcu_gp_running, false);
-	if (READ_ONCE(ssp->srcu_cb_head))
+	if (USHORT_CMP_GE(ssp->srcu_idx, READ_ONCE(ssp->srcu_idx_max)))
 		schedule_work(&ssp->srcu_work);
 }
 EXPORT_SYMBOL_GPL(srcu_drive_gp);
 
 static void srcu_gp_start_if_needed(struct srcu_struct *ssp)
 {
+	unsigned short cookie;
+
+	cookie = get_state_synchronize_srcu(ssp);
+	if (USHORT_CMP_LT(READ_ONCE(ssp->srcu_idx_max), cookie))
+		WRITE_ONCE(ssp->srcu_idx_max, cookie);
 	if (!READ_ONCE(ssp->srcu_gp_running)) {
 		if (likely(srcu_init_done))
 			schedule_work(&ssp->srcu_work);
@@ -196,6 +202,48 @@ void synchronize_srcu(struct srcu_struct *ssp)
 }
 EXPORT_SYMBOL_GPL(synchronize_srcu);
 
+/*
+ * get_state_synchronize_srcu - Provide an end-of-grace-period cookie
+ */
+unsigned long get_state_synchronize_srcu(struct srcu_struct *ssp)
+{
+	unsigned long ret;
+
+	barrier();
+	ret = (READ_ONCE(ssp->srcu_idx) + 3) & ~0x1;
+	barrier();
+	return ret & USHRT_MAX;
+}
+EXPORT_SYMBOL_GPL(get_state_synchronize_srcu);
+
+/*
+ * start_poll_synchronize_srcu - Provide cookie and start grace period
+ *
+ * The difference between this and get_state_synchronize_srcu() is that
+ * this function ensures that the poll_state_synchronize_srcu() will
+ * eventually return the value true.
+ */
+unsigned long start_poll_synchronize_srcu(struct srcu_struct *ssp)
+{
+	unsigned long ret = get_state_synchronize_srcu(ssp);
+
+	srcu_gp_start_if_needed(ssp);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(start_poll_synchronize_srcu);
+
+/*
+ * poll_state_synchronize_srcu - Has cookie's grace period ended?
+ */
+bool poll_state_synchronize_srcu(struct srcu_struct *ssp, unsigned long cookie)
+{
+	bool ret = USHORT_CMP_GE(READ_ONCE(ssp->srcu_idx), cookie);
+
+	barrier();
+	return ret;
+}
+EXPORT_SYMBOL_GPL(poll_state_synchronize_srcu);
+
 /* Lockdep diagnostics.  */
 void __init rcu_scheduler_starting(void)
 {
-- 
2.9.5


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH v2 tip/core/rcu 5/6] srcu: Provide polling interfaces for Tree SRCU grace periods
       [not found] <@@@>
                   ` (14 preceding siblings ...)
  2020-11-21  0:59 ` [PATCH v2 tip/core/rcu 4/6] srcu: Provide polling interfaces for Tiny SRCU grace periods paulmck
@ 2020-11-21  0:59 ` paulmck
  2020-11-27  4:52   ` Neeraj Upadhyay
  2020-11-21  0:59 ` [PATCH v2 tip/core/rcu 6/6] srcu: Document " paulmck
  16 siblings, 1 reply; 32+ messages in thread
From: paulmck @ 2020-11-21  0:59 UTC (permalink / raw)
  To: rcu
  Cc: linux-kernel, kernel-team, mingo, jiangshanlai, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel, kent.overstreet,
	Paul E. McKenney

From: "Paul E. McKenney" <paulmck@kernel.org>

There is a need for a polling interface for SRCU grace
periods, so this commit supplies get_state_synchronize_srcu(),
start_poll_synchronize_srcu(), and poll_state_synchronize_srcu() for this
purpose.  The first can be used if future grace periods are inevitable
(perhaps due to a later call_srcu() invocation), the second if future
grace periods might not otherwise happen, and the third to check if a
grace period has elapsed since the corresponding call to either of the
first two.

As with get_state_synchronize_rcu() and cond_synchronize_rcu(),
the return value from either get_state_synchronize_srcu() or
start_poll_synchronize_srcu() must be passed in to a later call to
poll_state_synchronize_srcu().
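
A rough sketch of the other usage pattern (again with invented names, not
part of this patch): when a queued callback already makes a future grace
period inevitable, the cheaper get_state_synchronize_srcu() can be used in
place of start_poll_synchronize_srcu():

	struct my_block {
		struct rcu_head rh;
		unsigned long gp_cookie;
		/* ... payload ... */
	};

	static void my_age_out_cb(struct rcu_head *rhp)
	{
		/* Pre-existing readers are done; advance the age-out stage. */
	}

	static void my_hand_off(struct srcu_struct *my_srcu, struct my_block *bp)
	{
		bp->gp_cookie = get_state_synchronize_srcu(my_srcu);
		call_srcu(my_srcu, &bp->rh, my_age_out_cb); /* GP now inevitable. */
	}

	static bool my_block_gp_elapsed(struct srcu_struct *my_srcu, struct my_block *bp)
	{
		return poll_state_synchronize_srcu(my_srcu, bp->gp_cookie);
	}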

Link: https://lore.kernel.org/rcu/20201112201547.GF3365678@moria.home.lan/
Reported-by: Kent Overstreet <kent.overstreet@gmail.com>
[ paulmck: Add EXPORT_SYMBOL_GPL() per kernel test robot feedback. ]
[ paulmck: Apply feedback from Neeraj Upadhyay. ]
Link: https://lore.kernel.org/lkml/20201117004017.GA7444@paulmck-ThinkPad-P72/
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
 kernel/rcu/srcutiny.c |  2 +-
 kernel/rcu/srcutree.c | 67 ++++++++++++++++++++++++++++++++++++++++++++++++---
 2 files changed, 64 insertions(+), 5 deletions(-)

diff --git a/kernel/rcu/srcutiny.c b/kernel/rcu/srcutiny.c
index b073175..c8f4202 100644
--- a/kernel/rcu/srcutiny.c
+++ b/kernel/rcu/srcutiny.c
@@ -148,7 +148,7 @@ void srcu_drive_gp(struct work_struct *wp)
 	 * straighten that out.
 	 */
 	WRITE_ONCE(ssp->srcu_gp_running, false);
-	if (USHORT_CMP_GE(ssp->srcu_idx, READ_ONCE(ssp->srcu_idx_max)))
+	if (USHORT_CMP_LT(ssp->srcu_idx, READ_ONCE(ssp->srcu_idx_max)))
 		schedule_work(&ssp->srcu_work);
 }
 EXPORT_SYMBOL_GPL(srcu_drive_gp);
diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index d930ece..945c047 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -810,7 +810,8 @@ static void srcu_leak_callback(struct rcu_head *rhp)
 /*
  * Start an SRCU grace period, and also queue the callback if non-NULL.
  */
-static void srcu_gp_start_if_needed(struct srcu_struct *ssp, struct rcu_head *rhp, bool do_norm)
+static unsigned long srcu_gp_start_if_needed(struct srcu_struct *ssp,
+					     struct rcu_head *rhp, bool do_norm)
 {
 	unsigned long flags;
 	int idx;
@@ -819,10 +820,12 @@ static void srcu_gp_start_if_needed(struct srcu_struct *ssp, struct rcu_head *rh
 	unsigned long s;
 	struct srcu_data *sdp;
 
+	check_init_srcu_struct(ssp);
 	idx = srcu_read_lock(ssp);
 	sdp = raw_cpu_ptr(ssp->sda);
 	spin_lock_irqsave_rcu_node(sdp, flags);
-	rcu_segcblist_enqueue(&sdp->srcu_cblist, rhp);
+	if (rhp)
+		rcu_segcblist_enqueue(&sdp->srcu_cblist, rhp);
 	rcu_segcblist_advance(&sdp->srcu_cblist,
 			      rcu_seq_current(&ssp->srcu_gp_seq));
 	s = rcu_seq_snap(&ssp->srcu_gp_seq);
@@ -841,6 +844,7 @@ static void srcu_gp_start_if_needed(struct srcu_struct *ssp, struct rcu_head *rh
 	else if (needexp)
 		srcu_funnel_exp_start(ssp, sdp->mynode, s);
 	srcu_read_unlock(ssp, idx);
+	return s;
 }
 
 /*
@@ -874,7 +878,6 @@ static void srcu_gp_start_if_needed(struct srcu_struct *ssp, struct rcu_head *rh
 static void __call_srcu(struct srcu_struct *ssp, struct rcu_head *rhp,
 			rcu_callback_t func, bool do_norm)
 {
-	check_init_srcu_struct(ssp);
 	if (debug_rcu_head_queue(rhp)) {
 		/* Probable double call_srcu(), so leak the callback. */
 		WRITE_ONCE(rhp->func, srcu_leak_callback);
@@ -882,7 +885,7 @@ static void __call_srcu(struct srcu_struct *ssp, struct rcu_head *rhp,
 		return;
 	}
 	rhp->func = func;
-	srcu_gp_start_if_needed(ssp, rhp, do_norm);
+	(void)srcu_gp_start_if_needed(ssp, rhp, do_norm);
 }
 
 /**
@@ -1011,6 +1014,62 @@ void synchronize_srcu(struct srcu_struct *ssp)
 }
 EXPORT_SYMBOL_GPL(synchronize_srcu);
 
+/**
+ * get_state_synchronize_srcu - Provide an end-of-grace-period cookie
+ * @ssp: srcu_struct to provide cookie for.
+ *
+ * This function returns a cookie that can be passed to
+ * poll_state_synchronize_srcu(), which will return true if a full grace
+ * period has elapsed in the meantime.  It is the caller's responsibility
+ * to make sure that grace period happens, for example, by invoking
+ * call_srcu() after return from get_state_synchronize_srcu().
+ */
+unsigned long get_state_synchronize_srcu(struct srcu_struct *ssp)
+{
+	// Any prior manipulation of SRCU-protected data must happen
+	// before the load from ->srcu_gp_seq.
+	smp_mb();
+	return rcu_seq_snap(&ssp->srcu_gp_seq);
+}
+EXPORT_SYMBOL_GPL(get_state_synchronize_srcu);
+
+/**
+ * start_poll_synchronize_srcu - Provide cookie and start grace period
+ * @ssp: srcu_struct to provide cookie for.
+ *
+ * This function returns a cookie that can be passed to
+ * poll_state_synchronize_srcu(), which will return true if a full grace
+ * period has elapsed in the meantime.  Unlike get_state_synchronize_srcu(),
+ * this function also ensures that any needed SRCU grace period will be
+ * started.  This convenience does come at a cost in terms of CPU overhead.
+ */
+unsigned long start_poll_synchronize_srcu(struct srcu_struct *ssp)
+{
+	return srcu_gp_start_if_needed(ssp, NULL, true);
+}
+EXPORT_SYMBOL_GPL(start_poll_synchronize_srcu);
+
+/**
+ * poll_state_synchronize_srcu - Has cookie's grace period ended?
+ * @ssp: srcu_struct to provide cookie for.
+ * @cookie: Return value from get_state_synchronize_srcu() or start_poll_synchronize_srcu().
+ *
+ * This function takes the cookie that was returned from either
+ * get_state_synchronize_srcu() or start_poll_synchronize_srcu(), and
+ * returns @true if an SRCU grace period elapsed since the time that the
+ * cookie was created.
+ */
+bool poll_state_synchronize_srcu(struct srcu_struct *ssp, unsigned long cookie)
+{
+	if (!rcu_seq_done(&ssp->srcu_gp_seq, cookie))
+		return false;
+	// Ensure that the end of the SRCU grace period happens before
+	// any subsequent code that the caller might execute.
+	smp_mb(); // ^^^
+	return true;
+}
+EXPORT_SYMBOL_GPL(poll_state_synchronize_srcu);
+
 /*
  * Callback function for srcu_barrier() use.
  */
-- 
2.9.5


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH v2 tip/core/rcu 6/6] srcu: Document polling interfaces for Tree SRCU grace periods
       [not found] <@@@>
                   ` (15 preceding siblings ...)
  2020-11-21  0:59 ` [PATCH v2 tip/core/rcu 5/6] srcu: Provide polling interfaces for Tree " paulmck
@ 2020-11-21  0:59 ` paulmck
  2020-11-27  8:27   ` Neeraj Upadhyay
  16 siblings, 1 reply; 32+ messages in thread
From: paulmck @ 2020-11-21  0:59 UTC (permalink / raw)
  To: rcu
  Cc: linux-kernel, kernel-team, mingo, jiangshanlai, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel, kent.overstreet,
	Paul E. McKenney

From: "Paul E. McKenney" <paulmck@kernel.org>

This commit adds requirements documentation for the
get_state_synchronize_srcu(), start_poll_synchronize_srcu(), and
poll_state_synchronize_srcu() functions.

Link: https://lore.kernel.org/rcu/20201112201547.GF3365678@moria.home.lan/
Reported-by: Kent Overstreet <kent.overstreet@gmail.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
 Documentation/RCU/Design/Requirements/Requirements.rst | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/Documentation/RCU/Design/Requirements/Requirements.rst b/Documentation/RCU/Design/Requirements/Requirements.rst
index 1e3df77..2dce79d 100644
--- a/Documentation/RCU/Design/Requirements/Requirements.rst
+++ b/Documentation/RCU/Design/Requirements/Requirements.rst
@@ -2600,6 +2600,24 @@ also includes DEFINE_SRCU(), DEFINE_STATIC_SRCU(), and
 init_srcu_struct() APIs for defining and initializing
 ``srcu_struct`` structures.
 
+More recently, the SRCU API has added polling interfaces:
+
+#. start_poll_synchronize_srcu() returns a cookie identifying
+   the completion of a future SRCU grace period and ensures
+   that this grace period will be started.
+#. poll_state_synchronize_srcu() returns ``true`` iff the
+   specified cookie corresponds to an already-completed
+   SRCU grace period.
+#. get_state_synchronize_srcu() returns a cookie just like
+   start_poll_synchronize_srcu() does, but differs in that
+   it does nothing to ensure that any future SRCU grace period
+   will be started.
+
+These functions are used to avoid unnecessary SRCU grace periods in
+certain types of buffer-cache algorithms having multi-stage age-out
+mechanisms.  The idea is that by the time the block has aged completely
+from the cache, an SRCU grace period will be very likely to have elapsed.
+
 Tasks RCU
 ~~~~~~~~~
 
-- 
2.9.5


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* Re: [PATCH v2 tip/core/rcu 4/6] srcu: Provide polling interfaces for Tiny SRCU grace periods
  2020-11-21  0:59 ` [PATCH v2 tip/core/rcu 4/6] srcu: Provide polling interfaces for Tiny SRCU grace periods paulmck
@ 2020-11-22 14:30   ` Neeraj Upadhyay
  2020-11-22 17:57     ` Paul E. McKenney
  2020-11-23  4:43   ` Neeraj Upadhyay
  1 sibling, 1 reply; 32+ messages in thread
From: Neeraj Upadhyay @ 2020-11-22 14:30 UTC (permalink / raw)
  To: paulmck, rcu
  Cc: linux-kernel, kernel-team, mingo, jiangshanlai, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel, kent.overstreet



On 11/21/2020 6:29 AM, paulmck@kernel.org wrote:
> From: "Paul E. McKenney" <paulmck@kernel.org>
> 
> There is a need for a polling interface for SRCU grace
> periods, so this commit supplies get_state_synchronize_srcu(),
> start_poll_synchronize_srcu(), and poll_state_synchronize_srcu() for this
> purpose.  The first can be used if future grace periods are inevitable
> (perhaps due to a later call_srcu() invocation), the second if future
> grace periods might not otherwise happen, and the third to check if a
> grace period has elapsed since the corresponding call to either of the
> first two.
> 
> As with get_state_synchronize_rcu() and cond_synchronize_rcu(),
> the return value from either get_state_synchronize_srcu() or
> start_poll_synchronize_srcu() must be passed in to a later call to
> poll_state_synchronize_srcu().
> 
> Link: https://lore.kernel.org/rcu/20201112201547.GF3365678@moria.home.lan/
> Reported-by: Kent Overstreet <kent.overstreet@gmail.com>
> [ paulmck: Add EXPORT_SYMBOL_GPL() per kernel test robot feedback. ]
> [ paulmck: Apply feedback from Neeraj Upadhyay. ]
> Link: https://lore.kernel.org/lkml/20201117004017.GA7444@paulmck-ThinkPad-P72/
> Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
> ---
>   include/linux/rcupdate.h |  2 ++
>   include/linux/srcu.h     |  3 +++
>   include/linux/srcutiny.h |  1 +
>   kernel/rcu/srcutiny.c    | 52 ++++++++++++++++++++++++++++++++++++++++++++++--
>   4 files changed, 56 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> index de08264..e09c0d8 100644
> --- a/include/linux/rcupdate.h
> +++ b/include/linux/rcupdate.h
> @@ -33,6 +33,8 @@
>   #define ULONG_CMP_GE(a, b)	(ULONG_MAX / 2 >= (a) - (b))
>   #define ULONG_CMP_LT(a, b)	(ULONG_MAX / 2 < (a) - (b))
>   #define ulong2long(a)		(*(long *)(&(a)))
> +#define USHORT_CMP_GE(a, b)	(USHRT_MAX / 2 >= (unsigned short)((a) - (b)))
> +#define USHORT_CMP_LT(a, b)	(USHRT_MAX / 2 < (unsigned short)((a) - (b)))
>   
>   /* Exported common interfaces */
>   void call_rcu(struct rcu_head *head, rcu_callback_t func);
> diff --git a/include/linux/srcu.h b/include/linux/srcu.h
> index e432cc9..a0895bb 100644
> --- a/include/linux/srcu.h
> +++ b/include/linux/srcu.h
> @@ -60,6 +60,9 @@ void cleanup_srcu_struct(struct srcu_struct *ssp);
>   int __srcu_read_lock(struct srcu_struct *ssp) __acquires(ssp);
>   void __srcu_read_unlock(struct srcu_struct *ssp, int idx) __releases(ssp);
>   void synchronize_srcu(struct srcu_struct *ssp);
> +unsigned long get_state_synchronize_srcu(struct srcu_struct *ssp);
> +unsigned long start_poll_synchronize_srcu(struct srcu_struct *ssp);
> +bool poll_state_synchronize_srcu(struct srcu_struct *ssp, unsigned long cookie);
>   
>   #ifdef CONFIG_DEBUG_LOCK_ALLOC
>   
> diff --git a/include/linux/srcutiny.h b/include/linux/srcutiny.h
> index d9edb67..c7f0c1f 100644
> --- a/include/linux/srcutiny.h
> +++ b/include/linux/srcutiny.h
> @@ -16,6 +16,7 @@
>   struct srcu_struct {
>   	short srcu_lock_nesting[2];	/* srcu_read_lock() nesting depth. */
>   	unsigned short srcu_idx;	/* Current reader array element in bit 0x2. */
> +	unsigned short srcu_idx_max;	/* Furthest future srcu_idx request. */
>   	u8 srcu_gp_running;		/* GP workqueue running? */
>   	u8 srcu_gp_waiting;		/* GP waiting for readers? */
>   	struct swait_queue_head srcu_wq;
> diff --git a/kernel/rcu/srcutiny.c b/kernel/rcu/srcutiny.c
> index 3bac1db..b073175 100644
> --- a/kernel/rcu/srcutiny.c
> +++ b/kernel/rcu/srcutiny.c
> @@ -34,6 +34,7 @@ static int init_srcu_struct_fields(struct srcu_struct *ssp)
>   	ssp->srcu_gp_running = false;
>   	ssp->srcu_gp_waiting = false;
>   	ssp->srcu_idx = 0;
> +	ssp->srcu_idx_max = 0;
>   	INIT_WORK(&ssp->srcu_work, srcu_drive_gp);
>   	INIT_LIST_HEAD(&ssp->srcu_work.entry);
>   	return 0;
> @@ -114,7 +115,7 @@ void srcu_drive_gp(struct work_struct *wp)
>   	struct srcu_struct *ssp;
>   
>   	ssp = container_of(wp, struct srcu_struct, srcu_work);
> -	if (ssp->srcu_gp_running || !READ_ONCE(ssp->srcu_cb_head))
> +	if (ssp->srcu_gp_running || USHORT_CMP_GE(ssp->srcu_idx, READ_ONCE(ssp->srcu_idx_max)))
>   		return; /* Already running or nothing to do. */
>   
>   	/* Remove recently arrived callbacks and wait for readers. */
> @@ -147,13 +148,18 @@ void srcu_drive_gp(struct work_struct *wp)
>   	 * straighten that out.
>   	 */
>   	WRITE_ONCE(ssp->srcu_gp_running, false);
> -	if (READ_ONCE(ssp->srcu_cb_head))
> +	if (USHORT_CMP_GE(ssp->srcu_idx, READ_ONCE(ssp->srcu_idx_max)))

USHORT_CMP_LT ?

Thanks
Neeraj

>   		schedule_work(&ssp->srcu_work);

>   }
>   EXPORT_SYMBOL_GPL(srcu_drive_gp);
>   
>   static void srcu_gp_start_if_needed(struct srcu_struct *ssp)
>   {
> +	unsigned short cookie;
> +
> +	cookie = get_state_synchronize_srcu(ssp);
> +	if (USHORT_CMP_LT(READ_ONCE(ssp->srcu_idx_max), cookie))
> +		WRITE_ONCE(ssp->srcu_idx_max, cookie);
>   	if (!READ_ONCE(ssp->srcu_gp_running)) {
>   		if (likely(srcu_init_done))
>   			schedule_work(&ssp->srcu_work);
> @@ -196,6 +202,48 @@ void synchronize_srcu(struct srcu_struct *ssp)
>   }
>   EXPORT_SYMBOL_GPL(synchronize_srcu);
>   
> +/*
> + * get_state_synchronize_srcu - Provide an end-of-grace-period cookie
> + */
> +unsigned long get_state_synchronize_srcu(struct srcu_struct *ssp)
> +{
> +	unsigned long ret;
> +
> +	barrier();
> +	ret = (READ_ONCE(ssp->srcu_idx) + 3) & ~0x1;
> +	barrier();
> +	return ret & USHRT_MAX;
> +}
> +EXPORT_SYMBOL_GPL(get_state_synchronize_srcu);
> +
> +/*
> + * start_poll_synchronize_srcu - Provide cookie and start grace period
> + *
> + * The difference between this and get_state_synchronize_srcu() is that
> + * this function ensures that the poll_state_synchronize_srcu() will
> + * eventually return the value true.
> + */
> +unsigned long start_poll_synchronize_srcu(struct srcu_struct *ssp)
> +{
> +	unsigned long ret = get_state_synchronize_srcu(ssp);
> +
> +	srcu_gp_start_if_needed(ssp);
> +	return ret;
> +}
> +EXPORT_SYMBOL_GPL(start_poll_synchronize_srcu);
> +
> +/*
> + * poll_state_synchronize_srcu - Has cookie's grace period ended?
> + */
> +bool poll_state_synchronize_srcu(struct srcu_struct *ssp, unsigned long cookie)
> +{
> +	bool ret = USHORT_CMP_GE(READ_ONCE(ssp->srcu_idx), cookie);
> +
> +	barrier();
> +	return ret;
> +}
> +EXPORT_SYMBOL_GPL(poll_state_synchronize_srcu);
> +
>   /* Lockdep diagnostics.  */
>   void __init rcu_scheduler_starting(void)
>   {
> 

-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a 
member of the Code Aurora Forum, hosted by The Linux Foundation

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v2 tip/core/rcu 4/6] srcu: Provide polling interfaces for Tiny SRCU grace periods
  2020-11-22 14:30   ` Neeraj Upadhyay
@ 2020-11-22 17:57     ` Paul E. McKenney
  0 siblings, 0 replies; 32+ messages in thread
From: Paul E. McKenney @ 2020-11-22 17:57 UTC (permalink / raw)
  To: Neeraj Upadhyay
  Cc: rcu, linux-kernel, kernel-team, mingo, jiangshanlai, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel, kent.overstreet

On Sun, Nov 22, 2020 at 08:00:09PM +0530, Neeraj Upadhyay wrote:
> 
> 
> On 11/21/2020 6:29 AM, paulmck@kernel.org wrote:
> > From: "Paul E. McKenney" <paulmck@kernel.org>
> > 
> > There is a need for a polling interface for SRCU grace
> > periods, so this commit supplies get_state_synchronize_srcu(),
> > start_poll_synchronize_srcu(), and poll_state_synchronize_srcu() for this
> > purpose.  The first can be used if future grace periods are inevitable
> > (perhaps due to a later call_srcu() invocation), the second if future
> > grace periods might not otherwise happen, and the third to check if a
> > grace period has elapsed since the corresponding call to either of the
> > first two.
> > 
> > As with get_state_synchronize_rcu() and cond_synchronize_rcu(),
> > the return value from either get_state_synchronize_srcu() or
> > start_poll_synchronize_srcu() must be passed in to a later call to
> > poll_state_synchronize_srcu().
> > 
> > Link: https://lore.kernel.org/rcu/20201112201547.GF3365678@moria.home.lan/
> > Reported-by: Kent Overstreet <kent.overstreet@gmail.com>
> > [ paulmck: Add EXPORT_SYMBOL_GPL() per kernel test robot feedback. ]
> > [ paulmck: Apply feedback from Neeraj Upadhyay. ]
> > Link: https://lore.kernel.org/lkml/20201117004017.GA7444@paulmck-ThinkPad-P72/
> > Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
> > ---
> >   include/linux/rcupdate.h |  2 ++
> >   include/linux/srcu.h     |  3 +++
> >   include/linux/srcutiny.h |  1 +
> >   kernel/rcu/srcutiny.c    | 52 ++++++++++++++++++++++++++++++++++++++++++++++--
> >   4 files changed, 56 insertions(+), 2 deletions(-)
> > 
> > diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> > index de08264..e09c0d8 100644
> > --- a/include/linux/rcupdate.h
> > +++ b/include/linux/rcupdate.h
> > @@ -33,6 +33,8 @@
> >   #define ULONG_CMP_GE(a, b)	(ULONG_MAX / 2 >= (a) - (b))
> >   #define ULONG_CMP_LT(a, b)	(ULONG_MAX / 2 < (a) - (b))
> >   #define ulong2long(a)		(*(long *)(&(a)))
> > +#define USHORT_CMP_GE(a, b)	(USHRT_MAX / 2 >= (unsigned short)((a) - (b)))
> > +#define USHORT_CMP_LT(a, b)	(USHRT_MAX / 2 < (unsigned short)((a) - (b)))
> >   /* Exported common interfaces */
> >   void call_rcu(struct rcu_head *head, rcu_callback_t func);
> > diff --git a/include/linux/srcu.h b/include/linux/srcu.h
> > index e432cc9..a0895bb 100644
> > --- a/include/linux/srcu.h
> > +++ b/include/linux/srcu.h
> > @@ -60,6 +60,9 @@ void cleanup_srcu_struct(struct srcu_struct *ssp);
> >   int __srcu_read_lock(struct srcu_struct *ssp) __acquires(ssp);
> >   void __srcu_read_unlock(struct srcu_struct *ssp, int idx) __releases(ssp);
> >   void synchronize_srcu(struct srcu_struct *ssp);
> > +unsigned long get_state_synchronize_srcu(struct srcu_struct *ssp);
> > +unsigned long start_poll_synchronize_srcu(struct srcu_struct *ssp);
> > +bool poll_state_synchronize_srcu(struct srcu_struct *ssp, unsigned long cookie);
> >   #ifdef CONFIG_DEBUG_LOCK_ALLOC
> > diff --git a/include/linux/srcutiny.h b/include/linux/srcutiny.h
> > index d9edb67..c7f0c1f 100644
> > --- a/include/linux/srcutiny.h
> > +++ b/include/linux/srcutiny.h
> > @@ -16,6 +16,7 @@
> >   struct srcu_struct {
> >   	short srcu_lock_nesting[2];	/* srcu_read_lock() nesting depth. */
> >   	unsigned short srcu_idx;	/* Current reader array element in bit 0x2. */
> > +	unsigned short srcu_idx_max;	/* Furthest future srcu_idx request. */
> >   	u8 srcu_gp_running;		/* GP workqueue running? */
> >   	u8 srcu_gp_waiting;		/* GP waiting for readers? */
> >   	struct swait_queue_head srcu_wq;
> > diff --git a/kernel/rcu/srcutiny.c b/kernel/rcu/srcutiny.c
> > index 3bac1db..b073175 100644
> > --- a/kernel/rcu/srcutiny.c
> > +++ b/kernel/rcu/srcutiny.c
> > @@ -34,6 +34,7 @@ static int init_srcu_struct_fields(struct srcu_struct *ssp)
> >   	ssp->srcu_gp_running = false;
> >   	ssp->srcu_gp_waiting = false;
> >   	ssp->srcu_idx = 0;
> > +	ssp->srcu_idx_max = 0;
> >   	INIT_WORK(&ssp->srcu_work, srcu_drive_gp);
> >   	INIT_LIST_HEAD(&ssp->srcu_work.entry);
> >   	return 0;
> > @@ -114,7 +115,7 @@ void srcu_drive_gp(struct work_struct *wp)
> >   	struct srcu_struct *ssp;
> >   	ssp = container_of(wp, struct srcu_struct, srcu_work);
> > -	if (ssp->srcu_gp_running || !READ_ONCE(ssp->srcu_cb_head))
> > +	if (ssp->srcu_gp_running || USHORT_CMP_GE(ssp->srcu_idx, READ_ONCE(ssp->srcu_idx_max)))
> >   		return; /* Already running or nothing to do. */
> >   	/* Remove recently arrived callbacks and wait for readers. */
> > @@ -147,13 +148,18 @@ void srcu_drive_gp(struct work_struct *wp)
> >   	 * straighten that out.
> >   	 */
> >   	WRITE_ONCE(ssp->srcu_gp_running, false);
> > -	if (READ_ONCE(ssp->srcu_cb_head))
> > +	if (USHORT_CMP_GE(ssp->srcu_idx, READ_ONCE(ssp->srcu_idx_max)))
> 
> USHORT_CMP_LT ?

Gah!  I squashed that change with 5/6 rather than 4/6.  I will adjust, 
and thank you for checking!

							Thanx, Paul

> Thanks
> Neeraj
> 
> >   		schedule_work(&ssp->srcu_work);
> 
> >   }
> >   EXPORT_SYMBOL_GPL(srcu_drive_gp);
> >   static void srcu_gp_start_if_needed(struct srcu_struct *ssp)
> >   {
> > +	unsigned short cookie;
> > +
> > +	cookie = get_state_synchronize_srcu(ssp);
> > +	if (USHORT_CMP_LT(READ_ONCE(ssp->srcu_idx_max), cookie))
> > +		WRITE_ONCE(ssp->srcu_idx_max, cookie);
> >   	if (!READ_ONCE(ssp->srcu_gp_running)) {
> >   		if (likely(srcu_init_done))
> >   			schedule_work(&ssp->srcu_work);
> > @@ -196,6 +202,48 @@ void synchronize_srcu(struct srcu_struct *ssp)
> >   }
> >   EXPORT_SYMBOL_GPL(synchronize_srcu);
> > +/*
> > + * get_state_synchronize_srcu - Provide an end-of-grace-period cookie
> > + */
> > +unsigned long get_state_synchronize_srcu(struct srcu_struct *ssp)
> > +{
> > +	unsigned long ret;
> > +
> > +	barrier();
> > +	ret = (READ_ONCE(ssp->srcu_idx) + 3) & ~0x1;
> > +	barrier();
> > +	return ret & USHRT_MAX;
> > +}
> > +EXPORT_SYMBOL_GPL(get_state_synchronize_srcu);
> > +
> > +/*
> > + * start_poll_synchronize_srcu - Provide cookie and start grace period
> > + *
> > + * The difference between this and get_state_synchronize_srcu() is that
> > + * this function ensures that the poll_state_synchronize_srcu() will
> > + * eventually return the value true.
> > + */
> > +unsigned long start_poll_synchronize_srcu(struct srcu_struct *ssp)
> > +{
> > +	unsigned long ret = get_state_synchronize_srcu(ssp);
> > +
> > +	srcu_gp_start_if_needed(ssp);
> > +	return ret;
> > +}
> > +EXPORT_SYMBOL_GPL(start_poll_synchronize_srcu);
> > +
> > +/*
> > + * poll_state_synchronize_srcu - Has cookie's grace period ended?
> > + */
> > +bool poll_state_synchronize_srcu(struct srcu_struct *ssp, unsigned long cookie)
> > +{
> > +	bool ret = USHORT_CMP_GE(READ_ONCE(ssp->srcu_idx), cookie);
> > +
> > +	barrier();
> > +	return ret;
> > +}
> > +EXPORT_SYMBOL_GPL(poll_state_synchronize_srcu);
> > +
> >   /* Lockdep diagnostics.  */
> >   void __init rcu_scheduler_starting(void)
> >   {
> > 
> 
> -- 
> QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of
> the Code Aurora Forum, hosted by The Linux Foundation

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v2 tip/core/rcu 1/6] srcu: Make Tiny SRCU use multi-bit grace-period counter
  2020-11-21  0:59 ` [PATCH v2 tip/core/rcu 1/6] srcu: Make Tiny SRCU use multi-bit grace-period counter paulmck
@ 2020-11-23  4:31   ` Neeraj Upadhyay
  2020-11-23 19:55     ` Paul E. McKenney
  0 siblings, 1 reply; 32+ messages in thread
From: Neeraj Upadhyay @ 2020-11-23  4:31 UTC (permalink / raw)
  To: paulmck, rcu
  Cc: linux-kernel, kernel-team, mingo, jiangshanlai, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel, kent.overstreet



On 11/21/2020 6:29 AM, paulmck@kernel.org wrote:
> From: "Paul E. McKenney" <paulmck@kernel.org>
> 
> There is a need for a polling interface for SRCU grace periods.  This
> polling needs to distinguish between an SRCU instance being idle on the
> one hand and in the middle of a grace period on the other.  This commit
> therefore converts the Tiny SRCU srcu_struct structure's srcu_idx from
> a de facto boolean to a free-running counter, using the bottom bit to
> indicate that a grace period is in progress.  The second-from-bottom
> bit is thus used as the index returned by srcu_read_lock().
> 
> Link: https://lore.kernel.org/rcu/20201112201547.GF3365678@moria.home.lan/
> Reported-by: Kent Overstreet <kent.overstreet@gmail.com>
> [ paulmck: Fix __srcu_read_lock() idx computation per Neeraj Upadhyay. ]
> Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
> ---
>   include/linux/srcutiny.h | 4 ++--
>   kernel/rcu/srcutiny.c    | 5 +++--
>   2 files changed, 5 insertions(+), 4 deletions(-)
> 
> diff --git a/include/linux/srcutiny.h b/include/linux/srcutiny.h
> index 5a5a194..d9edb67 100644
> --- a/include/linux/srcutiny.h
> +++ b/include/linux/srcutiny.h
> @@ -15,7 +15,7 @@
>   
>   struct srcu_struct {
>   	short srcu_lock_nesting[2];	/* srcu_read_lock() nesting depth. */
> -	short srcu_idx;			/* Current reader array element. */
> +	unsigned short srcu_idx;	/* Current reader array element in bit 0x2. */
>   	u8 srcu_gp_running;		/* GP workqueue running? */
>   	u8 srcu_gp_waiting;		/* GP waiting for readers? */
>   	struct swait_queue_head srcu_wq;
> @@ -59,7 +59,7 @@ static inline int __srcu_read_lock(struct srcu_struct *ssp)
>   {
>   	int idx;
>   
> -	idx = READ_ONCE(ssp->srcu_idx);
> +	idx = ((READ_ONCE(ssp->srcu_idx) + 1) & 0x2) >> 1;
>   	WRITE_ONCE(ssp->srcu_lock_nesting[idx], ssp->srcu_lock_nesting[idx] + 1);
>   	return idx;
>   }

Need change in idx calculation in srcu_torture_stats_print()?

static inline void srcu_torture_stats_print(struct srcu_struct *ssp,
   idx = READ_ONCE(ssp->srcu_idx) & 0x1;
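
Presumably it would need to match __srcu_read_lock(), for example (sketch
only, not a patch):

   idx = ((READ_ONCE(ssp->srcu_idx) + 1) & 0x2) >> 1;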

Thanks
Neeraj

> diff --git a/kernel/rcu/srcutiny.c b/kernel/rcu/srcutiny.c
> index 6208c1d..5598cf6 100644
> --- a/kernel/rcu/srcutiny.c
> +++ b/kernel/rcu/srcutiny.c
> @@ -124,11 +124,12 @@ void srcu_drive_gp(struct work_struct *wp)
>   	ssp->srcu_cb_head = NULL;
>   	ssp->srcu_cb_tail = &ssp->srcu_cb_head;
>   	local_irq_enable();
> -	idx = ssp->srcu_idx;
> -	WRITE_ONCE(ssp->srcu_idx, !ssp->srcu_idx);
> +	idx = (ssp->srcu_idx & 0x2) / 2;
> +	WRITE_ONCE(ssp->srcu_idx, ssp->srcu_idx + 1);
>   	WRITE_ONCE(ssp->srcu_gp_waiting, true);  /* srcu_read_unlock() wakes! */
>   	swait_event_exclusive(ssp->srcu_wq, !READ_ONCE(ssp->srcu_lock_nesting[idx]));
>   	WRITE_ONCE(ssp->srcu_gp_waiting, false); /* srcu_read_unlock() cheap. */
> +	WRITE_ONCE(ssp->srcu_idx, ssp->srcu_idx + 1);
>   
>   	/* Invoke the callbacks we removed above. */
>   	while (lh) {
> 

-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a 
member of the Code Aurora Forum, hosted by The Linux Foundation

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v2 tip/core/rcu 4/6] srcu: Provide polling interfaces for Tiny SRCU grace periods
  2020-11-21  0:59 ` [PATCH v2 tip/core/rcu 4/6] srcu: Provide polling interfaces for Tiny SRCU grace periods paulmck
  2020-11-22 14:30   ` Neeraj Upadhyay
@ 2020-11-23  4:43   ` Neeraj Upadhyay
  2020-11-23 21:12     ` Paul E. McKenney
  1 sibling, 1 reply; 32+ messages in thread
From: Neeraj Upadhyay @ 2020-11-23  4:43 UTC (permalink / raw)
  To: paulmck, rcu
  Cc: linux-kernel, kernel-team, mingo, jiangshanlai, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel, kent.overstreet



On 11/21/2020 6:29 AM, paulmck@kernel.org wrote:
> From: "Paul E. McKenney" <paulmck@kernel.org>
> 
> There is a need for a polling interface for SRCU grace
> periods, so this commit supplies get_state_synchronize_srcu(),
> start_poll_synchronize_srcu(), and poll_state_synchronize_srcu() for this
> purpose.  The first can be used if future grace periods are inevitable
> (perhaps due to a later call_srcu() invocation), the second if future
> grace periods might not otherwise happen, and the third to check if a
> grace period has elapsed since the corresponding call to either of the
> first two.
> 
> As with get_state_synchronize_rcu() and cond_synchronize_rcu(),
> the return value from either get_state_synchronize_srcu() or
> start_poll_synchronize_srcu() must be passed in to a later call to
> poll_state_synchronize_srcu().
> 
> Link: https://lore.kernel.org/rcu/20201112201547.GF3365678@moria.home.lan/
> Reported-by: Kent Overstreet <kent.overstreet@gmail.com>
> [ paulmck: Add EXPORT_SYMBOL_GPL() per kernel test robot feedback. ]
> [ paulmck: Apply feedback from Neeraj Upadhyay. ]
> Link: https://lore.kernel.org/lkml/20201117004017.GA7444@paulmck-ThinkPad-P72/
> Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
> ---
>   include/linux/rcupdate.h |  2 ++
>   include/linux/srcu.h     |  3 +++
>   include/linux/srcutiny.h |  1 +
>   kernel/rcu/srcutiny.c    | 52 ++++++++++++++++++++++++++++++++++++++++++++++--
>   4 files changed, 56 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> index de08264..e09c0d8 100644
> --- a/include/linux/rcupdate.h
> +++ b/include/linux/rcupdate.h
> @@ -33,6 +33,8 @@
>   #define ULONG_CMP_GE(a, b)	(ULONG_MAX / 2 >= (a) - (b))
>   #define ULONG_CMP_LT(a, b)	(ULONG_MAX / 2 < (a) - (b))
>   #define ulong2long(a)		(*(long *)(&(a)))
> +#define USHORT_CMP_GE(a, b)	(USHRT_MAX / 2 >= (unsigned short)((a) - (b)))
> +#define USHORT_CMP_LT(a, b)	(USHRT_MAX / 2 < (unsigned short)((a) - (b)))
>   
>   /* Exported common interfaces */
>   void call_rcu(struct rcu_head *head, rcu_callback_t func);
> diff --git a/include/linux/srcu.h b/include/linux/srcu.h
> index e432cc9..a0895bb 100644
> --- a/include/linux/srcu.h
> +++ b/include/linux/srcu.h
> @@ -60,6 +60,9 @@ void cleanup_srcu_struct(struct srcu_struct *ssp);
>   int __srcu_read_lock(struct srcu_struct *ssp) __acquires(ssp);
>   void __srcu_read_unlock(struct srcu_struct *ssp, int idx) __releases(ssp);
>   void synchronize_srcu(struct srcu_struct *ssp);
> +unsigned long get_state_synchronize_srcu(struct srcu_struct *ssp);
> +unsigned long start_poll_synchronize_srcu(struct srcu_struct *ssp);
> +bool poll_state_synchronize_srcu(struct srcu_struct *ssp, unsigned long cookie);
>   
>   #ifdef CONFIG_DEBUG_LOCK_ALLOC
>   
> diff --git a/include/linux/srcutiny.h b/include/linux/srcutiny.h
> index d9edb67..c7f0c1f 100644
> --- a/include/linux/srcutiny.h
> +++ b/include/linux/srcutiny.h
> @@ -16,6 +16,7 @@
>   struct srcu_struct {
>   	short srcu_lock_nesting[2];	/* srcu_read_lock() nesting depth. */
>   	unsigned short srcu_idx;	/* Current reader array element in bit 0x2. */
> +	unsigned short srcu_idx_max;	/* Furthest future srcu_idx request. */
>   	u8 srcu_gp_running;		/* GP workqueue running? */
>   	u8 srcu_gp_waiting;		/* GP waiting for readers? */
>   	struct swait_queue_head srcu_wq;
> diff --git a/kernel/rcu/srcutiny.c b/kernel/rcu/srcutiny.c
> index 3bac1db..b073175 100644
> --- a/kernel/rcu/srcutiny.c
> +++ b/kernel/rcu/srcutiny.c
> @@ -34,6 +34,7 @@ static int init_srcu_struct_fields(struct srcu_struct *ssp)
>   	ssp->srcu_gp_running = false;
>   	ssp->srcu_gp_waiting = false;
>   	ssp->srcu_idx = 0;
> +	ssp->srcu_idx_max = 0;
>   	INIT_WORK(&ssp->srcu_work, srcu_drive_gp);
>   	INIT_LIST_HEAD(&ssp->srcu_work.entry);
>   	return 0;

Minor: cleanup_srcu_struct() can probably have 2 new sanity checks?

WARN_ON(ssp->srcu_idx != ssp->srcu_idx_max);
WARN_ON(ssp->srcu_idx & 1);

Thanks
Neeraj

> @@ -114,7 +115,7 @@ void srcu_drive_gp(struct work_struct *wp)
>   	struct srcu_struct *ssp;
>   
>   	ssp = container_of(wp, struct srcu_struct, srcu_work);
> -	if (ssp->srcu_gp_running || !READ_ONCE(ssp->srcu_cb_head))
> +	if (ssp->srcu_gp_running || USHORT_CMP_GE(ssp->srcu_idx, READ_ONCE(ssp->srcu_idx_max)))
>   		return; /* Already running or nothing to do. */
>   
>   	/* Remove recently arrived callbacks and wait for readers. */
> @@ -147,13 +148,18 @@ void srcu_drive_gp(struct work_struct *wp)
>   	 * straighten that out.
>   	 */
>   	WRITE_ONCE(ssp->srcu_gp_running, false);
> -	if (READ_ONCE(ssp->srcu_cb_head))
> +	if (USHORT_CMP_GE(ssp->srcu_idx, READ_ONCE(ssp->srcu_idx_max)))
>   		schedule_work(&ssp->srcu_work);
>   }
>   EXPORT_SYMBOL_GPL(srcu_drive_gp);
>   
>   static void srcu_gp_start_if_needed(struct srcu_struct *ssp)
>   {
> +	unsigned short cookie;
> +
> +	cookie = get_state_synchronize_srcu(ssp);
> +	if (USHORT_CMP_LT(READ_ONCE(ssp->srcu_idx_max), cookie))
> +		WRITE_ONCE(ssp->srcu_idx_max, cookie);

Minor: Maybe we can return in the else part of USHORT_CMP_LT check, to 
avoid scheduling work?

Thanks
Neeraj

>   	if (!READ_ONCE(ssp->srcu_gp_running)) {
>   		if (likely(srcu_init_done))
>   			schedule_work(&ssp->srcu_work);
> @@ -196,6 +202,48 @@ void synchronize_srcu(struct srcu_struct *ssp)
>   }
>   EXPORT_SYMBOL_GPL(synchronize_srcu);
>   
> +/*
> + * get_state_synchronize_srcu - Provide an end-of-grace-period cookie
> + */
> +unsigned long get_state_synchronize_srcu(struct srcu_struct *ssp)
> +{
> +	unsigned long ret;
> +
> +	barrier();
> +	ret = (READ_ONCE(ssp->srcu_idx) + 3) & ~0x1;
> +	barrier();
> +	return ret & USHRT_MAX;
> +}
> +EXPORT_SYMBOL_GPL(get_state_synchronize_srcu);
> +
> +/*
> + * start_poll_synchronize_srcu - Provide cookie and start grace period
> + *
> + * The difference between this and get_state_synchronize_srcu() is that
> + * this function ensures that the poll_state_synchronize_srcu() will
> + * eventually return the value true.
> + */
> +unsigned long start_poll_synchronize_srcu(struct srcu_struct *ssp)
> +{
> +	unsigned long ret = get_state_synchronize_srcu(ssp);
> +
> +	srcu_gp_start_if_needed(ssp);
> +	return ret;
> +}
> +EXPORT_SYMBOL_GPL(start_poll_synchronize_srcu);
> +
> +/*
> + * poll_state_synchronize_srcu - Has cookie's grace period ended?
> + */
> +bool poll_state_synchronize_srcu(struct srcu_struct *ssp, unsigned long cookie)
> +{
> +	bool ret = USHORT_CMP_GE(READ_ONCE(ssp->srcu_idx), cookie);
> +
> +	barrier();
> +	return ret;
> +}
> +EXPORT_SYMBOL_GPL(poll_state_synchronize_srcu);
> +
>   /* Lockdep diagnostics.  */
>   void __init rcu_scheduler_starting(void)
>   {
> 

-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a 
member of the Code Aurora Forum, hosted by The Linux Foundation

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v2 tip/core/rcu 1/6] srcu: Make Tiny SRCU use multi-bit grace-period counter
  2020-11-23  4:31   ` Neeraj Upadhyay
@ 2020-11-23 19:55     ` Paul E. McKenney
  2020-11-24  5:18       ` Neeraj Upadhyay
  0 siblings, 1 reply; 32+ messages in thread
From: Paul E. McKenney @ 2020-11-23 19:55 UTC (permalink / raw)
  To: Neeraj Upadhyay
  Cc: rcu, linux-kernel, kernel-team, mingo, jiangshanlai, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel, kent.overstreet

On Mon, Nov 23, 2020 at 10:01:13AM +0530, Neeraj Upadhyay wrote:
> On 11/21/2020 6:29 AM, paulmck@kernel.org wrote:
> > From: "Paul E. McKenney" <paulmck@kernel.org>
> > 
> > There is a need for a polling interface for SRCU grace periods.  This
> > polling needs to distinguish between an SRCU instance being idle on the
> > one hand and in the middle of a grace period on the other.  This commit
> > therefore converts the Tiny SRCU srcu_struct structure's srcu_idx from
> > a de facto boolean to a free-running counter, using the bottom bit to
> > indicate that a grace period is in progress.  The second-from-bottom
> > bit is thus used as the index returned by srcu_read_lock().
> > 
> > Link: https://lore.kernel.org/rcu/20201112201547.GF3365678@moria.home.lan/
> > Reported-by: Kent Overstreet <kent.overstreet@gmail.com>
> > [ paulmck: Fix __srcu_read_lock() idx computation per Neeraj Upadhyay. ]
> > Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
> > ---
> >   include/linux/srcutiny.h | 4 ++--
> >   kernel/rcu/srcutiny.c    | 5 +++--
> >   2 files changed, 5 insertions(+), 4 deletions(-)
> > 
> > diff --git a/include/linux/srcutiny.h b/include/linux/srcutiny.h
> > index 5a5a194..d9edb67 100644
> > --- a/include/linux/srcutiny.h
> > +++ b/include/linux/srcutiny.h
> > @@ -15,7 +15,7 @@
> >   struct srcu_struct {
> >   	short srcu_lock_nesting[2];	/* srcu_read_lock() nesting depth. */
> > -	short srcu_idx;			/* Current reader array element. */
> > +	unsigned short srcu_idx;	/* Current reader array element in bit 0x2. */
> >   	u8 srcu_gp_running;		/* GP workqueue running? */
> >   	u8 srcu_gp_waiting;		/* GP waiting for readers? */
> >   	struct swait_queue_head srcu_wq;
> > @@ -59,7 +59,7 @@ static inline int __srcu_read_lock(struct srcu_struct *ssp)
> >   {
> >   	int idx;
> > -	idx = READ_ONCE(ssp->srcu_idx);
> > +	idx = ((READ_ONCE(ssp->srcu_idx) + 1) & 0x2) >> 1;
> >   	WRITE_ONCE(ssp->srcu_lock_nesting[idx], ssp->srcu_lock_nesting[idx] + 1);
> >   	return idx;
> >   }
> 
> Need change in idx calculation in srcu_torture_stats_print()?
> 
> static inline void srcu_torture_stats_print(struct srcu_struct *ssp,
>   idx = READ_ONCE(ssp->srcu_idx) & 0x1;

Excellent point!  It should match the calculation in __srcu_read_lock(),
shouldn't it?  I have updated this, thank you!

							Thanx, Paul

> Thanks
> Neeraj
> 
> > diff --git a/kernel/rcu/srcutiny.c b/kernel/rcu/srcutiny.c
> > index 6208c1d..5598cf6 100644
> > --- a/kernel/rcu/srcutiny.c
> > +++ b/kernel/rcu/srcutiny.c
> > @@ -124,11 +124,12 @@ void srcu_drive_gp(struct work_struct *wp)
> >   	ssp->srcu_cb_head = NULL;
> >   	ssp->srcu_cb_tail = &ssp->srcu_cb_head;
> >   	local_irq_enable();
> > -	idx = ssp->srcu_idx;
> > -	WRITE_ONCE(ssp->srcu_idx, !ssp->srcu_idx);
> > +	idx = (ssp->srcu_idx & 0x2) / 2;
> > +	WRITE_ONCE(ssp->srcu_idx, ssp->srcu_idx + 1);
> >   	WRITE_ONCE(ssp->srcu_gp_waiting, true);  /* srcu_read_unlock() wakes! */
> >   	swait_event_exclusive(ssp->srcu_wq, !READ_ONCE(ssp->srcu_lock_nesting[idx]));
> >   	WRITE_ONCE(ssp->srcu_gp_waiting, false); /* srcu_read_unlock() cheap. */
> > +	WRITE_ONCE(ssp->srcu_idx, ssp->srcu_idx + 1);
> >   	/* Invoke the callbacks we removed above. */
> >   	while (lh) {
> > 
> 
> -- 
> QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of
> the Code Aurora Forum, hosted by The Linux Foundation

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v2 tip/core/rcu 4/6] srcu: Provide polling interfaces for Tiny SRCU grace periods
  2020-11-23  4:43   ` Neeraj Upadhyay
@ 2020-11-23 21:12     ` Paul E. McKenney
  2020-11-24  5:14       ` Neeraj Upadhyay
  0 siblings, 1 reply; 32+ messages in thread
From: Paul E. McKenney @ 2020-11-23 21:12 UTC (permalink / raw)
  To: Neeraj Upadhyay
  Cc: rcu, linux-kernel, kernel-team, mingo, jiangshanlai, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel, kent.overstreet

On Mon, Nov 23, 2020 at 10:13:13AM +0530, Neeraj Upadhyay wrote:
> 
> 
> On 11/21/2020 6:29 AM, paulmck@kernel.org wrote:
> > From: "Paul E. McKenney" <paulmck@kernel.org>
> > 
> > There is a need for a polling interface for SRCU grace
> > periods, so this commit supplies get_state_synchronize_srcu(),
> > start_poll_synchronize_srcu(), and poll_state_synchronize_srcu() for this
> > purpose.  The first can be used if future grace periods are inevitable
> > (perhaps due to a later call_srcu() invocation), the second if future
> > grace periods might not otherwise happen, and the third to check if a
> > grace period has elapsed since the corresponding call to either of the
> > first two.
> > 
> > As with get_state_synchronize_rcu() and cond_synchronize_rcu(),
> > the return value from either get_state_synchronize_srcu() or
> > start_poll_synchronize_srcu() must be passed in to a later call to
> > poll_state_synchronize_srcu().
> > 
> > Link: https://lore.kernel.org/rcu/20201112201547.GF3365678@moria.home.lan/
> > Reported-by: Kent Overstreet <kent.overstreet@gmail.com>
> > [ paulmck: Add EXPORT_SYMBOL_GPL() per kernel test robot feedback. ]
> > [ paulmck: Apply feedback from Neeraj Upadhyay. ]
> > Link: https://lore.kernel.org/lkml/20201117004017.GA7444@paulmck-ThinkPad-P72/
> > Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
> > ---
> >   include/linux/rcupdate.h |  2 ++
> >   include/linux/srcu.h     |  3 +++
> >   include/linux/srcutiny.h |  1 +
> >   kernel/rcu/srcutiny.c    | 52 ++++++++++++++++++++++++++++++++++++++++++++++--
> >   4 files changed, 56 insertions(+), 2 deletions(-)
> > 
> > diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> > index de08264..e09c0d8 100644
> > --- a/include/linux/rcupdate.h
> > +++ b/include/linux/rcupdate.h
> > @@ -33,6 +33,8 @@
> >   #define ULONG_CMP_GE(a, b)	(ULONG_MAX / 2 >= (a) - (b))
> >   #define ULONG_CMP_LT(a, b)	(ULONG_MAX / 2 < (a) - (b))
> >   #define ulong2long(a)		(*(long *)(&(a)))
> > +#define USHORT_CMP_GE(a, b)	(USHRT_MAX / 2 >= (unsigned short)((a) - (b)))
> > +#define USHORT_CMP_LT(a, b)	(USHRT_MAX / 2 < (unsigned short)((a) - (b)))
> >   /* Exported common interfaces */
> >   void call_rcu(struct rcu_head *head, rcu_callback_t func);
> > diff --git a/include/linux/srcu.h b/include/linux/srcu.h
> > index e432cc9..a0895bb 100644
> > --- a/include/linux/srcu.h
> > +++ b/include/linux/srcu.h
> > @@ -60,6 +60,9 @@ void cleanup_srcu_struct(struct srcu_struct *ssp);
> >   int __srcu_read_lock(struct srcu_struct *ssp) __acquires(ssp);
> >   void __srcu_read_unlock(struct srcu_struct *ssp, int idx) __releases(ssp);
> >   void synchronize_srcu(struct srcu_struct *ssp);
> > +unsigned long get_state_synchronize_srcu(struct srcu_struct *ssp);
> > +unsigned long start_poll_synchronize_srcu(struct srcu_struct *ssp);
> > +bool poll_state_synchronize_srcu(struct srcu_struct *ssp, unsigned long cookie);
> >   #ifdef CONFIG_DEBUG_LOCK_ALLOC
> > diff --git a/include/linux/srcutiny.h b/include/linux/srcutiny.h
> > index d9edb67..c7f0c1f 100644
> > --- a/include/linux/srcutiny.h
> > +++ b/include/linux/srcutiny.h
> > @@ -16,6 +16,7 @@
> >   struct srcu_struct {
> >   	short srcu_lock_nesting[2];	/* srcu_read_lock() nesting depth. */
> >   	unsigned short srcu_idx;	/* Current reader array element in bit 0x2. */
> > +	unsigned short srcu_idx_max;	/* Furthest future srcu_idx request. */
> >   	u8 srcu_gp_running;		/* GP workqueue running? */
> >   	u8 srcu_gp_waiting;		/* GP waiting for readers? */
> >   	struct swait_queue_head srcu_wq;
> > diff --git a/kernel/rcu/srcutiny.c b/kernel/rcu/srcutiny.c
> > index 3bac1db..b073175 100644
> > --- a/kernel/rcu/srcutiny.c
> > +++ b/kernel/rcu/srcutiny.c
> > @@ -34,6 +34,7 @@ static int init_srcu_struct_fields(struct srcu_struct *ssp)
> >   	ssp->srcu_gp_running = false;
> >   	ssp->srcu_gp_waiting = false;
> >   	ssp->srcu_idx = 0;
> > +	ssp->srcu_idx_max = 0;
> >   	INIT_WORK(&ssp->srcu_work, srcu_drive_gp);
> >   	INIT_LIST_HEAD(&ssp->srcu_work.entry);
> >   	return 0;
> 
> Minor: cleanup_srcu_struct() can probably have 2 new sanity checks?
> 
> WARN_ON(ssp->srcu_idx != ssp->srcu_idx_max);
> WARN_ON(ssp->srcu_idx & 1);

Good point, added and under test.
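
Concretely, that puts something like this at the end of
cleanup_srcu_struct() (sketch only; the existing checks are unchanged):

	void cleanup_srcu_struct(struct srcu_struct *ssp)
	{
		/* ... existing reader-nesting and callback-list checks ... */
		WARN_ON(ssp->srcu_idx != ssp->srcu_idx_max);	/* All requested GPs completed. */
		WARN_ON(ssp->srcu_idx & 1);			/* No GP still in progress. */
	}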

> Thanks
> Neeraj
> 
> > @@ -114,7 +115,7 @@ void srcu_drive_gp(struct work_struct *wp)
> >   	struct srcu_struct *ssp;
> >   	ssp = container_of(wp, struct srcu_struct, srcu_work);
> > -	if (ssp->srcu_gp_running || !READ_ONCE(ssp->srcu_cb_head))
> > +	if (ssp->srcu_gp_running || USHORT_CMP_GE(ssp->srcu_idx, READ_ONCE(ssp->srcu_idx_max)))
> >   		return; /* Already running or nothing to do. */
> >   	/* Remove recently arrived callbacks and wait for readers. */
> > @@ -147,13 +148,18 @@ void srcu_drive_gp(struct work_struct *wp)
> >   	 * straighten that out.
> >   	 */
> >   	WRITE_ONCE(ssp->srcu_gp_running, false);
> > -	if (READ_ONCE(ssp->srcu_cb_head))
> > +	if (USHORT_CMP_GE(ssp->srcu_idx, READ_ONCE(ssp->srcu_idx_max)))
> >   		schedule_work(&ssp->srcu_work);
> >   }
> >   EXPORT_SYMBOL_GPL(srcu_drive_gp);
> >   static void srcu_gp_start_if_needed(struct srcu_struct *ssp)
> >   {
> > +	unsigned short cookie;
> > +
> > +	cookie = get_state_synchronize_srcu(ssp);
> > +	if (USHORT_CMP_LT(READ_ONCE(ssp->srcu_idx_max), cookie))
> > +		WRITE_ONCE(ssp->srcu_idx_max, cookie);
> 
> Minor: Maybe we can return in the else part of USHORT_CMP_LT check, to avoid
> scheduling work?

How about like this?

	static void srcu_gp_start_if_needed(struct srcu_struct *ssp)
	{
		unsigned short cookie;

		cookie = get_state_synchronize_srcu(ssp);
		if (USHORT_CMP_GE(READ_ONCE(ssp->srcu_idx_max), cookie))
			return;
		WRITE_ONCE(ssp->srcu_idx_max, cookie);
		if (!READ_ONCE(ssp->srcu_gp_running)) {
			if (likely(srcu_init_done))
				schedule_work(&ssp->srcu_work);
			else if (list_empty(&ssp->srcu_work.entry))
				list_add(&ssp->srcu_work.entry, &srcu_boot_list);
		}
	}

Testing this next.  ;-)
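
For anyone following along, the intended use from a caller's viewpoint
is roughly as follows (illustrative only; "my_srcu" is a made-up
instance, not from the patch):

	unsigned long cookie;

	cookie = start_poll_synchronize_srcu(&my_srcu); /* Also starts a GP if needed. */

	/* ... time passes, perhaps several age-out passes ... */

	if (poll_state_synchronize_srcu(&my_srcu, cookie))
		pr_info("SRCU grace period has elapsed since the cookie was taken\n");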

							Thanx, Paul

> Thanks
> Neeraj
> 
> >   	if (!READ_ONCE(ssp->srcu_gp_running)) {
> >   		if (likely(srcu_init_done))
> >   			schedule_work(&ssp->srcu_work);
> > @@ -196,6 +202,48 @@ void synchronize_srcu(struct srcu_struct *ssp)
> >   }
> >   EXPORT_SYMBOL_GPL(synchronize_srcu);
> > +/*
> > + * get_state_synchronize_srcu - Provide an end-of-grace-period cookie
> > + */
> > +unsigned long get_state_synchronize_srcu(struct srcu_struct *ssp)
> > +{
> > +	unsigned long ret;
> > +
> > +	barrier();
> > +	ret = (READ_ONCE(ssp->srcu_idx) + 3) & ~0x1;
> > +	barrier();
> > +	return ret & USHRT_MAX;
> > +}
> > +EXPORT_SYMBOL_GPL(get_state_synchronize_srcu);
> > +
> > +/*
> > + * start_poll_synchronize_srcu - Provide cookie and start grace period
> > + *
> > + * The difference between this and get_state_synchronize_srcu() is that
> > + * this function ensures that the poll_state_synchronize_srcu() will
> > + * eventually return the value true.
> > + */
> > +unsigned long start_poll_synchronize_srcu(struct srcu_struct *ssp)
> > +{
> > +	unsigned long ret = get_state_synchronize_srcu(ssp);
> > +
> > +	srcu_gp_start_if_needed(ssp);
> > +	return ret;
> > +}
> > +EXPORT_SYMBOL_GPL(start_poll_synchronize_srcu);
> > +
> > +/*
> > + * poll_state_synchronize_srcu - Has cookie's grace period ended?
> > + */
> > +bool poll_state_synchronize_srcu(struct srcu_struct *ssp, unsigned long cookie)
> > +{
> > +	bool ret = USHORT_CMP_GE(READ_ONCE(ssp->srcu_idx), cookie);
> > +
> > +	barrier();
> > +	return ret;
> > +}
> > +EXPORT_SYMBOL_GPL(poll_state_synchronize_srcu);
> > +
> >   /* Lockdep diagnostics.  */
> >   void __init rcu_scheduler_starting(void)
> >   {
> > 
> 
> -- 
> QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of
> the Code Aurora Forum, hosted by The Linux Foundation

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v2 tip/core/rcu 4/6] srcu: Provide polling interfaces for Tiny SRCU grace periods
  2020-11-23 21:12     ` Paul E. McKenney
@ 2020-11-24  5:14       ` Neeraj Upadhyay
  2020-11-24 19:30         ` Paul E. McKenney
  0 siblings, 1 reply; 32+ messages in thread
From: Neeraj Upadhyay @ 2020-11-24  5:14 UTC (permalink / raw)
  To: paulmck
  Cc: rcu, linux-kernel, kernel-team, mingo, jiangshanlai, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel, kent.overstreet



On 11/24/2020 2:42 AM, Paul E. McKenney wrote:
> On Mon, Nov 23, 2020 at 10:13:13AM +0530, Neeraj Upadhyay wrote:
>>
>>
>> On 11/21/2020 6:29 AM, paulmck@kernel.org wrote:
>>> From: "Paul E. McKenney" <paulmck@kernel.org>
>>>
>>> There is a need for a polling interface for SRCU grace
>>> periods, so this commit supplies get_state_synchronize_srcu(),
>>> start_poll_synchronize_srcu(), and poll_state_synchronize_srcu() for this
>>> purpose.  The first can be used if future grace periods are inevitable
>>> (perhaps due to a later call_srcu() invocation), the second if future
>>> grace periods might not otherwise happen, and the third to check if a
>>> grace period has elapsed since the corresponding call to either of the
>>> first two.
>>>
>>> As with get_state_synchronize_rcu() and cond_synchronize_rcu(),
>>> the return value from either get_state_synchronize_srcu() or
>>> start_poll_synchronize_srcu() must be passed in to a later call to
>>> poll_state_synchronize_srcu().
>>>
>>> Link: https://lore.kernel.org/rcu/20201112201547.GF3365678@moria.home.lan/
>>> Reported-by: Kent Overstreet <kent.overstreet@gmail.com>
>>> [ paulmck: Add EXPORT_SYMBOL_GPL() per kernel test robot feedback. ]
>>> [ paulmck: Apply feedback from Neeraj Upadhyay. ]
>>> Link: https://lore.kernel.org/lkml/20201117004017.GA7444@paulmck-ThinkPad-P72/
>>> Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
>>> ---
>>>    include/linux/rcupdate.h |  2 ++
>>>    include/linux/srcu.h     |  3 +++
>>>    include/linux/srcutiny.h |  1 +
>>>    kernel/rcu/srcutiny.c    | 52 ++++++++++++++++++++++++++++++++++++++++++++++--
>>>    4 files changed, 56 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
>>> index de08264..e09c0d8 100644
>>> --- a/include/linux/rcupdate.h
>>> +++ b/include/linux/rcupdate.h
>>> @@ -33,6 +33,8 @@
>>>    #define ULONG_CMP_GE(a, b)	(ULONG_MAX / 2 >= (a) - (b))
>>>    #define ULONG_CMP_LT(a, b)	(ULONG_MAX / 2 < (a) - (b))
>>>    #define ulong2long(a)		(*(long *)(&(a)))
>>> +#define USHORT_CMP_GE(a, b)	(USHRT_MAX / 2 >= (unsigned short)((a) - (b)))
>>> +#define USHORT_CMP_LT(a, b)	(USHRT_MAX / 2 < (unsigned short)((a) - (b)))
>>>    /* Exported common interfaces */
>>>    void call_rcu(struct rcu_head *head, rcu_callback_t func);
>>> diff --git a/include/linux/srcu.h b/include/linux/srcu.h
>>> index e432cc9..a0895bb 100644
>>> --- a/include/linux/srcu.h
>>> +++ b/include/linux/srcu.h
>>> @@ -60,6 +60,9 @@ void cleanup_srcu_struct(struct srcu_struct *ssp);
>>>    int __srcu_read_lock(struct srcu_struct *ssp) __acquires(ssp);
>>>    void __srcu_read_unlock(struct srcu_struct *ssp, int idx) __releases(ssp);
>>>    void synchronize_srcu(struct srcu_struct *ssp);
>>> +unsigned long get_state_synchronize_srcu(struct srcu_struct *ssp);
>>> +unsigned long start_poll_synchronize_srcu(struct srcu_struct *ssp);
>>> +bool poll_state_synchronize_srcu(struct srcu_struct *ssp, unsigned long cookie);
>>>    #ifdef CONFIG_DEBUG_LOCK_ALLOC
>>> diff --git a/include/linux/srcutiny.h b/include/linux/srcutiny.h
>>> index d9edb67..c7f0c1f 100644
>>> --- a/include/linux/srcutiny.h
>>> +++ b/include/linux/srcutiny.h
>>> @@ -16,6 +16,7 @@
>>>    struct srcu_struct {
>>>    	short srcu_lock_nesting[2];	/* srcu_read_lock() nesting depth. */
>>>    	unsigned short srcu_idx;	/* Current reader array element in bit 0x2. */
>>> +	unsigned short srcu_idx_max;	/* Furthest future srcu_idx request. */
>>>    	u8 srcu_gp_running;		/* GP workqueue running? */
>>>    	u8 srcu_gp_waiting;		/* GP waiting for readers? */
>>>    	struct swait_queue_head srcu_wq;
>>> diff --git a/kernel/rcu/srcutiny.c b/kernel/rcu/srcutiny.c
>>> index 3bac1db..b073175 100644
>>> --- a/kernel/rcu/srcutiny.c
>>> +++ b/kernel/rcu/srcutiny.c
>>> @@ -34,6 +34,7 @@ static int init_srcu_struct_fields(struct srcu_struct *ssp)
>>>    	ssp->srcu_gp_running = false;
>>>    	ssp->srcu_gp_waiting = false;
>>>    	ssp->srcu_idx = 0;
>>> +	ssp->srcu_idx_max = 0;
>>>    	INIT_WORK(&ssp->srcu_work, srcu_drive_gp);
>>>    	INIT_LIST_HEAD(&ssp->srcu_work.entry);
>>>    	return 0;
>>
>> Minor: cleanup_srcu_struct() can probably have 2 new sanity checks?
>>
>> WARN_ON(ssp->srcu_idx != ssp->srcu_idx_max);
>> WARN_ON(ssp->srcu_idx & 1);
> 
> Good point, added and under test.
> 
>> Thanks
>> Neeraj
>>
>>> @@ -114,7 +115,7 @@ void srcu_drive_gp(struct work_struct *wp)
>>>    	struct srcu_struct *ssp;
>>>    	ssp = container_of(wp, struct srcu_struct, srcu_work);
>>> -	if (ssp->srcu_gp_running || !READ_ONCE(ssp->srcu_cb_head))
>>> +	if (ssp->srcu_gp_running || USHORT_CMP_GE(ssp->srcu_idx, READ_ONCE(ssp->srcu_idx_max)))
>>>    		return; /* Already running or nothing to do. */
>>>    	/* Remove recently arrived callbacks and wait for readers. */
>>> @@ -147,13 +148,18 @@ void srcu_drive_gp(struct work_struct *wp)
>>>    	 * straighten that out.
>>>    	 */
>>>    	WRITE_ONCE(ssp->srcu_gp_running, false);
>>> -	if (READ_ONCE(ssp->srcu_cb_head))
>>> +	if (USHORT_CMP_GE(ssp->srcu_idx, READ_ONCE(ssp->srcu_idx_max)))
>>>    		schedule_work(&ssp->srcu_work);
>>>    }
>>>    EXPORT_SYMBOL_GPL(srcu_drive_gp);
>>>    static void srcu_gp_start_if_needed(struct srcu_struct *ssp)
>>>    {
>>> +	unsigned short cookie;
>>> +
>>> +	cookie = get_state_synchronize_srcu(ssp);
>>> +	if (USHORT_CMP_LT(READ_ONCE(ssp->srcu_idx_max), cookie))
>>> +		WRITE_ONCE(ssp->srcu_idx_max, cookie);
>>
>> Minor: Maybe we can return in the else part of USHORT_CMP_LT check, to avoid
>> scheduling work?
> 
> How about like this?

Looks good!


Thanks
Neeraj

> 
> 	static void srcu_gp_start_if_needed(struct srcu_struct *ssp)
> 	{
> 		unsigned short cookie;
> 
> 		cookie = get_state_synchronize_srcu(ssp);
> 		if (USHORT_CMP_GE(READ_ONCE(ssp->srcu_idx_max), cookie))
> 			return;
> 		WRITE_ONCE(ssp->srcu_idx_max, cookie);
> 		if (!READ_ONCE(ssp->srcu_gp_running)) {
> 			if (likely(srcu_init_done))
> 				schedule_work(&ssp->srcu_work);
> 			else if (list_empty(&ssp->srcu_work.entry))
> 				list_add(&ssp->srcu_work.entry, &srcu_boot_list);
> 		}
> 	}
> 
> Testing this next.  ;-)
> 
> 							Thanx, Paul
> 
>> Thanks
>> Neeraj
>>
>>>    	if (!READ_ONCE(ssp->srcu_gp_running)) {
>>>    		if (likely(srcu_init_done))
>>>    			schedule_work(&ssp->srcu_work);
>>> @@ -196,6 +202,48 @@ void synchronize_srcu(struct srcu_struct *ssp)
>>>    }
>>>    EXPORT_SYMBOL_GPL(synchronize_srcu);
>>> +/*
>>> + * get_state_synchronize_srcu - Provide an end-of-grace-period cookie
>>> + */
>>> +unsigned long get_state_synchronize_srcu(struct srcu_struct *ssp)
>>> +{
>>> +	unsigned long ret;
>>> +
>>> +	barrier();
>>> +	ret = (READ_ONCE(ssp->srcu_idx) + 3) & ~0x1;
>>> +	barrier();
>>> +	return ret & USHRT_MAX;
>>> +}
>>> +EXPORT_SYMBOL_GPL(get_state_synchronize_srcu);
>>> +
>>> +/*
>>> + * start_poll_synchronize_srcu - Provide cookie and start grace period
>>> + *
>>> + * The difference between this and get_state_synchronize_srcu() is that
>>> + * this function ensures that the poll_state_synchronize_srcu() will
>>> + * eventually return the value true.
>>> + */
>>> +unsigned long start_poll_synchronize_srcu(struct srcu_struct *ssp)
>>> +{
>>> +	unsigned long ret = get_state_synchronize_srcu(ssp);
>>> +
>>> +	srcu_gp_start_if_needed(ssp);
>>> +	return ret;
>>> +}
>>> +EXPORT_SYMBOL_GPL(start_poll_synchronize_srcu);
>>> +
>>> +/*
>>> + * poll_state_synchronize_srcu - Has cookie's grace period ended?
>>> + */
>>> +bool poll_state_synchronize_srcu(struct srcu_struct *ssp, unsigned long cookie)
>>> +{
>>> +	bool ret = USHORT_CMP_GE(READ_ONCE(ssp->srcu_idx), cookie);
>>> +
>>> +	barrier();
>>> +	return ret;
>>> +}
>>> +EXPORT_SYMBOL_GPL(poll_state_synchronize_srcu);
>>> +
>>>    /* Lockdep diagnostics.  */
>>>    void __init rcu_scheduler_starting(void)
>>>    {
>>>
>>
>> -- 
>> QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of
>> the Code Aurora Forum, hosted by The Linux Foundation

-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a 
member of the Code Aurora Forum, hosted by The Linux Foundation

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v2 tip/core/rcu 1/6] srcu: Make Tiny SRCU use multi-bit grace-period counter
  2020-11-23 19:55     ` Paul E. McKenney
@ 2020-11-24  5:18       ` Neeraj Upadhyay
  2020-11-25  4:33         ` Neeraj Upadhyay
  0 siblings, 1 reply; 32+ messages in thread
From: Neeraj Upadhyay @ 2020-11-24  5:18 UTC (permalink / raw)
  To: paulmck
  Cc: rcu, linux-kernel, kernel-team, mingo, jiangshanlai, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel, kent.overstreet



On 11/24/2020 1:25 AM, Paul E. McKenney wrote:
> On Mon, Nov 23, 2020 at 10:01:13AM +0530, Neeraj Upadhyay wrote:
>> On 11/21/2020 6:29 AM, paulmck@kernel.org wrote:
>>> From: "Paul E. McKenney" <paulmck@kernel.org>
>>>
>>> There is a need for a polling interface for SRCU grace periods.  This
>>> polling needs to distinguish between an SRCU instance being idle on the
>>> one hand or in the middle of a grace period on the other.  This commit
>>> therefore converts the Tiny SRCU srcu_struct structure's srcu_idx from
>>> a de facto boolean to a free-running counter, using the bottom bit to
>>> indicate that a grace period is in progress.  The second-from-bottom
>>> bit is thus used as the index returned by srcu_read_lock().
>>>
>>> Link: https://lore.kernel.org/rcu/20201112201547.GF3365678@moria.home.lan/
>>> Reported-by: Kent Overstreet <kent.overstreet@gmail.com>
>>> [ paulmck: Fix __srcu_read_lock() idx computation per Neeraj Upadhyay. ]
>>> Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
>>> ---
>>>    include/linux/srcutiny.h | 4 ++--
>>>    kernel/rcu/srcutiny.c    | 5 +++--
>>>    2 files changed, 5 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/include/linux/srcutiny.h b/include/linux/srcutiny.h
>>> index 5a5a194..d9edb67 100644
>>> --- a/include/linux/srcutiny.h
>>> +++ b/include/linux/srcutiny.h
>>> @@ -15,7 +15,7 @@
>>>    struct srcu_struct {
>>>    	short srcu_lock_nesting[2];	/* srcu_read_lock() nesting depth. */
>>> -	short srcu_idx;			/* Current reader array element. */
>>> +	unsigned short srcu_idx;	/* Current reader array element in bit 0x2. */
>>>    	u8 srcu_gp_running;		/* GP workqueue running? */
>>>    	u8 srcu_gp_waiting;		/* GP waiting for readers? */
>>>    	struct swait_queue_head srcu_wq;
>>> @@ -59,7 +59,7 @@ static inline int __srcu_read_lock(struct srcu_struct *ssp)
>>>    {
>>>    	int idx;
>>> -	idx = READ_ONCE(ssp->srcu_idx);
>>> +	idx = ((READ_ONCE(ssp->srcu_idx) + 1) & 0x2) >> 1;
>>>    	WRITE_ONCE(ssp->srcu_lock_nesting[idx], ssp->srcu_lock_nesting[idx] + 1);
>>>    	return idx;
>>>    }
>>
>> Need change in idx calculation in srcu_torture_stats_print()?
>>
>> static inline void srcu_torture_stats_print(struct srcu_struct *ssp,
>>    idx = READ_ONCE(ssp->srcu_idx) & 0x1;
> 
> Excellent point!  It should match the calculation in __srcu_read_lock(),
> shouldn't it?  I have updated this, thank you!
> 
> 							Thanx, Paul
> 

Updated version looks good!


Thanks
Neeraj

>> Thanks
>> Neeraj
>>
>>> diff --git a/kernel/rcu/srcutiny.c b/kernel/rcu/srcutiny.c
>>> index 6208c1d..5598cf6 100644
>>> --- a/kernel/rcu/srcutiny.c
>>> +++ b/kernel/rcu/srcutiny.c
>>> @@ -124,11 +124,12 @@ void srcu_drive_gp(struct work_struct *wp)
>>>    	ssp->srcu_cb_head = NULL;
>>>    	ssp->srcu_cb_tail = &ssp->srcu_cb_head;
>>>    	local_irq_enable();
>>> -	idx = ssp->srcu_idx;
>>> -	WRITE_ONCE(ssp->srcu_idx, !ssp->srcu_idx);
>>> +	idx = (ssp->srcu_idx & 0x2) / 2;
>>> +	WRITE_ONCE(ssp->srcu_idx, ssp->srcu_idx + 1);
>>>    	WRITE_ONCE(ssp->srcu_gp_waiting, true);  /* srcu_read_unlock() wakes! */
>>>    	swait_event_exclusive(ssp->srcu_wq, !READ_ONCE(ssp->srcu_lock_nesting[idx]));
>>>    	WRITE_ONCE(ssp->srcu_gp_waiting, false); /* srcu_read_unlock() cheap. */
>>> +	WRITE_ONCE(ssp->srcu_idx, ssp->srcu_idx + 1);
>>>    	/* Invoke the callbacks we removed above. */
>>>    	while (lh) {
>>>
>>
>> -- 
>> QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of
>> the Code Aurora Forum, hosted by The Linux Foundation

-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a 
member of the Code Aurora Forum, hosted by The Linux Foundation

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v2 tip/core/rcu 4/6] srcu: Provide polling interfaces for Tiny SRCU grace periods
  2020-11-24  5:14       ` Neeraj Upadhyay
@ 2020-11-24 19:30         ` Paul E. McKenney
  2020-11-25  4:39           ` Neeraj Upadhyay
  0 siblings, 1 reply; 32+ messages in thread
From: Paul E. McKenney @ 2020-11-24 19:30 UTC (permalink / raw)
  To: Neeraj Upadhyay
  Cc: rcu, linux-kernel, kernel-team, mingo, jiangshanlai, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel, kent.overstreet

On Tue, Nov 24, 2020 at 10:44:24AM +0530, Neeraj Upadhyay wrote:
> 
> 
> On 11/24/2020 2:42 AM, Paul E. McKenney wrote:
> > On Mon, Nov 23, 2020 at 10:13:13AM +0530, Neeraj Upadhyay wrote:
> > > 
> > > 
> > > On 11/21/2020 6:29 AM, paulmck@kernel.org wrote:
> > > > From: "Paul E. McKenney" <paulmck@kernel.org>
> > > > 
> > > > There is a need for a polling interface for SRCU grace
> > > > periods, so this commit supplies get_state_synchronize_srcu(),
> > > > start_poll_synchronize_srcu(), and poll_state_synchronize_srcu() for this
> > > > purpose.  The first can be used if future grace periods are inevitable
> > > > (perhaps due to a later call_srcu() invocation), the second if future
> > > > grace periods might not otherwise happen, and the third to check if a
> > > > grace period has elapsed since the corresponding call to either of the
> > > > first two.
> > > > 
> > > > As with get_state_synchronize_rcu() and cond_synchronize_rcu(),
> > > > the return value from either get_state_synchronize_srcu() or
> > > > start_poll_synchronize_srcu() must be passed in to a later call to
> > > > poll_state_synchronize_srcu().
> > > > 
> > > > Link: https://lore.kernel.org/rcu/20201112201547.GF3365678@moria.home.lan/
> > > > Reported-by: Kent Overstreet <kent.overstreet@gmail.com>
> > > > [ paulmck: Add EXPORT_SYMBOL_GPL() per kernel test robot feedback. ]
> > > > [ paulmck: Apply feedback from Neeraj Upadhyay. ]
> > > > Link: https://lore.kernel.org/lkml/20201117004017.GA7444@paulmck-ThinkPad-P72/
> > > > Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
> > > > ---
> > > >    include/linux/rcupdate.h |  2 ++
> > > >    include/linux/srcu.h     |  3 +++
> > > >    include/linux/srcutiny.h |  1 +
> > > >    kernel/rcu/srcutiny.c    | 52 ++++++++++++++++++++++++++++++++++++++++++++++--
> > > >    4 files changed, 56 insertions(+), 2 deletions(-)
> > > > 
> > > > diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> > > > index de08264..e09c0d8 100644
> > > > --- a/include/linux/rcupdate.h
> > > > +++ b/include/linux/rcupdate.h
> > > > @@ -33,6 +33,8 @@
> > > >    #define ULONG_CMP_GE(a, b)	(ULONG_MAX / 2 >= (a) - (b))
> > > >    #define ULONG_CMP_LT(a, b)	(ULONG_MAX / 2 < (a) - (b))
> > > >    #define ulong2long(a)		(*(long *)(&(a)))
> > > > +#define USHORT_CMP_GE(a, b)	(USHRT_MAX / 2 >= (unsigned short)((a) - (b)))
> > > > +#define USHORT_CMP_LT(a, b)	(USHRT_MAX / 2 < (unsigned short)((a) - (b)))
> > > >    /* Exported common interfaces */
> > > >    void call_rcu(struct rcu_head *head, rcu_callback_t func);
> > > > diff --git a/include/linux/srcu.h b/include/linux/srcu.h
> > > > index e432cc9..a0895bb 100644
> > > > --- a/include/linux/srcu.h
> > > > +++ b/include/linux/srcu.h
> > > > @@ -60,6 +60,9 @@ void cleanup_srcu_struct(struct srcu_struct *ssp);
> > > >    int __srcu_read_lock(struct srcu_struct *ssp) __acquires(ssp);
> > > >    void __srcu_read_unlock(struct srcu_struct *ssp, int idx) __releases(ssp);
> > > >    void synchronize_srcu(struct srcu_struct *ssp);
> > > > +unsigned long get_state_synchronize_srcu(struct srcu_struct *ssp);
> > > > +unsigned long start_poll_synchronize_srcu(struct srcu_struct *ssp);
> > > > +bool poll_state_synchronize_srcu(struct srcu_struct *ssp, unsigned long cookie);
> > > >    #ifdef CONFIG_DEBUG_LOCK_ALLOC
> > > > diff --git a/include/linux/srcutiny.h b/include/linux/srcutiny.h
> > > > index d9edb67..c7f0c1f 100644
> > > > --- a/include/linux/srcutiny.h
> > > > +++ b/include/linux/srcutiny.h
> > > > @@ -16,6 +16,7 @@
> > > >    struct srcu_struct {
> > > >    	short srcu_lock_nesting[2];	/* srcu_read_lock() nesting depth. */
> > > >    	unsigned short srcu_idx;	/* Current reader array element in bit 0x2. */
> > > > +	unsigned short srcu_idx_max;	/* Furthest future srcu_idx request. */
> > > >    	u8 srcu_gp_running;		/* GP workqueue running? */
> > > >    	u8 srcu_gp_waiting;		/* GP waiting for readers? */
> > > >    	struct swait_queue_head srcu_wq;
> > > > diff --git a/kernel/rcu/srcutiny.c b/kernel/rcu/srcutiny.c
> > > > index 3bac1db..b073175 100644
> > > > --- a/kernel/rcu/srcutiny.c
> > > > +++ b/kernel/rcu/srcutiny.c
> > > > @@ -34,6 +34,7 @@ static int init_srcu_struct_fields(struct srcu_struct *ssp)
> > > >    	ssp->srcu_gp_running = false;
> > > >    	ssp->srcu_gp_waiting = false;
> > > >    	ssp->srcu_idx = 0;
> > > > +	ssp->srcu_idx_max = 0;
> > > >    	INIT_WORK(&ssp->srcu_work, srcu_drive_gp);
> > > >    	INIT_LIST_HEAD(&ssp->srcu_work.entry);
> > > >    	return 0;
> > > 
> > > Minor: cleanup_srcu_struct() can probably have 2 new sanity checks?
> > > 
> > > WARN_ON(ssp->srcu_idx != ssp->srcu_idx_max);
> > > WARN_ON(ssp->srcu_idx & 1);
> > 
> > Good point, added and under test.
> > 
> > > Thanks
> > > Neeraj
> > > 
> > > > @@ -114,7 +115,7 @@ void srcu_drive_gp(struct work_struct *wp)
> > > >    	struct srcu_struct *ssp;
> > > >    	ssp = container_of(wp, struct srcu_struct, srcu_work);
> > > > -	if (ssp->srcu_gp_running || !READ_ONCE(ssp->srcu_cb_head))
> > > > +	if (ssp->srcu_gp_running || USHORT_CMP_GE(ssp->srcu_idx, READ_ONCE(ssp->srcu_idx_max)))
> > > >    		return; /* Already running or nothing to do. */
> > > >    	/* Remove recently arrived callbacks and wait for readers. */
> > > > @@ -147,13 +148,18 @@ void srcu_drive_gp(struct work_struct *wp)
> > > >    	 * straighten that out.
> > > >    	 */
> > > >    	WRITE_ONCE(ssp->srcu_gp_running, false);
> > > > -	if (READ_ONCE(ssp->srcu_cb_head))
> > > > +	if (USHORT_CMP_GE(ssp->srcu_idx, READ_ONCE(ssp->srcu_idx_max)))
> > > >    		schedule_work(&ssp->srcu_work);
> > > >    }
> > > >    EXPORT_SYMBOL_GPL(srcu_drive_gp);
> > > >    static void srcu_gp_start_if_needed(struct srcu_struct *ssp)
> > > >    {
> > > > +	unsigned short cookie;
> > > > +
> > > > +	cookie = get_state_synchronize_srcu(ssp);
> > > > +	if (USHORT_CMP_LT(READ_ONCE(ssp->srcu_idx_max), cookie))
> > > > +		WRITE_ONCE(ssp->srcu_idx_max, cookie);
> > > 
> > > Minor: Maybe we can return in the else part of USHORT_CMP_LT check, to avoid
> > > scheduling work?
> > 
> > How about like this?
> 
> Looks good!

Are you willing to give an ack or reviewed-by for either patch?

							Thanx, Paul

> Thanks
> Neeraj
> 
> > 
> > 	static void srcu_gp_start_if_needed(struct srcu_struct *ssp)
> > 	{
> > 		unsigned short cookie;
> > 
> > 		cookie = get_state_synchronize_srcu(ssp);
> > 		if (USHORT_CMP_GE(READ_ONCE(ssp->srcu_idx_max), cookie))
> > 			return;
> > 		WRITE_ONCE(ssp->srcu_idx_max, cookie);
> > 		if (!READ_ONCE(ssp->srcu_gp_running)) {
> > 			if (likely(srcu_init_done))
> > 				schedule_work(&ssp->srcu_work);
> > 			else if (list_empty(&ssp->srcu_work.entry))
> > 				list_add(&ssp->srcu_work.entry, &srcu_boot_list);
> > 		}
> > 	}
> > 
> > Testing this next.  ;-)
> > 
> > 							Thanx, Paul
> > 
> > > Thanks
> > > Neeraj
> > > 
> > > >    	if (!READ_ONCE(ssp->srcu_gp_running)) {
> > > >    		if (likely(srcu_init_done))
> > > >    			schedule_work(&ssp->srcu_work);
> > > > @@ -196,6 +202,48 @@ void synchronize_srcu(struct srcu_struct *ssp)
> > > >    }
> > > >    EXPORT_SYMBOL_GPL(synchronize_srcu);
> > > > +/*
> > > > + * get_state_synchronize_srcu - Provide an end-of-grace-period cookie
> > > > + */
> > > > +unsigned long get_state_synchronize_srcu(struct srcu_struct *ssp)
> > > > +{
> > > > +	unsigned long ret;
> > > > +
> > > > +	barrier();
> > > > +	ret = (READ_ONCE(ssp->srcu_idx) + 3) & ~0x1;
> > > > +	barrier();
> > > > +	return ret & USHRT_MAX;
> > > > +}
> > > > +EXPORT_SYMBOL_GPL(get_state_synchronize_srcu);
> > > > +
> > > > +/*
> > > > + * start_poll_synchronize_srcu - Provide cookie and start grace period
> > > > + *
> > > > + * The difference between this and get_state_synchronize_srcu() is that
> > > > + * this function ensures that the poll_state_synchronize_srcu() will
> > > > + * eventually return the value true.
> > > > + */
> > > > +unsigned long start_poll_synchronize_srcu(struct srcu_struct *ssp)
> > > > +{
> > > > +	unsigned long ret = get_state_synchronize_srcu(ssp);
> > > > +
> > > > +	srcu_gp_start_if_needed(ssp);
> > > > +	return ret;
> > > > +}
> > > > +EXPORT_SYMBOL_GPL(start_poll_synchronize_srcu);
> > > > +
> > > > +/*
> > > > + * poll_state_synchronize_srcu - Has cookie's grace period ended?
> > > > + */
> > > > +bool poll_state_synchronize_srcu(struct srcu_struct *ssp, unsigned long cookie)
> > > > +{
> > > > +	bool ret = USHORT_CMP_GE(READ_ONCE(ssp->srcu_idx), cookie);
> > > > +
> > > > +	barrier();
> > > > +	return ret;
> > > > +}
> > > > +EXPORT_SYMBOL_GPL(poll_state_synchronize_srcu);
> > > > +
> > > >    /* Lockdep diagnostics.  */
> > > >    void __init rcu_scheduler_starting(void)
> > > >    {
> > > > 
> > > 
> > > -- 
> > > QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of
> > > the Code Aurora Forum, hosted by The Linux Foundation
> 
> -- 
> QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of
> the Code Aurora Forum, hosted by The Linux Foundation

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v2 tip/core/rcu 1/6] srcu: Make Tiny SRCU use multi-bit grace-period counter
  2020-11-24  5:18       ` Neeraj Upadhyay
@ 2020-11-25  4:33         ` Neeraj Upadhyay
  2020-11-28  2:16           ` Paul E. McKenney
  0 siblings, 1 reply; 32+ messages in thread
From: Neeraj Upadhyay @ 2020-11-25  4:33 UTC (permalink / raw)
  To: paulmck
  Cc: rcu, linux-kernel, kernel-team, mingo, jiangshanlai, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel, kent.overstreet



On 11/24/2020 10:48 AM, Neeraj Upadhyay wrote:
> 
> 
> On 11/24/2020 1:25 AM, Paul E. McKenney wrote:
>> On Mon, Nov 23, 2020 at 10:01:13AM +0530, Neeraj Upadhyay wrote:
>>> On 11/21/2020 6:29 AM, paulmck@kernel.org wrote:
>>>> From: "Paul E. McKenney" <paulmck@kernel.org>
>>>>
>>>> There is a need for a polling interface for SRCU grace periods.  This
>>>> polling needs to distinguish between an SRCU instance being idle on the
>>>> one hand or in the middle of a grace period on the other.  This commit
>>>> therefore converts the Tiny SRCU srcu_struct structure's srcu_idx from
>>>> a de facto boolean to a free-running counter, using the bottom bit to
>>>> indicate that a grace period is in progress.  The second-from-bottom
>>>> bit is thus used as the index returned by srcu_read_lock().
>>>>
>>>> Link: https://lore.kernel.org/rcu/20201112201547.GF3365678@moria.home.lan/
>>>> Reported-by: Kent Overstreet <kent.overstreet@gmail.com>
>>>> [ paulmck: Fix __srcu_read_lock() idx computation per Neeraj Upadhyay. ]
>>>> Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
>>>> ---
>>>>    include/linux/srcutiny.h | 4 ++--
>>>>    kernel/rcu/srcutiny.c    | 5 +++--
>>>>    2 files changed, 5 insertions(+), 4 deletions(-)
>>>>
>>>> diff --git a/include/linux/srcutiny.h b/include/linux/srcutiny.h
>>>> index 5a5a194..d9edb67 100644
>>>> --- a/include/linux/srcutiny.h
>>>> +++ b/include/linux/srcutiny.h
>>>> @@ -15,7 +15,7 @@
>>>>    struct srcu_struct {
>>>>        short srcu_lock_nesting[2];    /* srcu_read_lock() nesting depth. */
>>>> -    short srcu_idx;            /* Current reader array element. */
>>>> +    unsigned short srcu_idx;    /* Current reader array element in bit 0x2. */
>>>>        u8 srcu_gp_running;        /* GP workqueue running? */
>>>>        u8 srcu_gp_waiting;        /* GP waiting for readers? */
>>>>        struct swait_queue_head srcu_wq;
>>>> @@ -59,7 +59,7 @@ static inline int __srcu_read_lock(struct srcu_struct *ssp)
>>>>    {
>>>>        int idx;
>>>> -    idx = READ_ONCE(ssp->srcu_idx);
>>>> +    idx = ((READ_ONCE(ssp->srcu_idx) + 1) & 0x2) >> 1;
>>>>        WRITE_ONCE(ssp->srcu_lock_nesting[idx], ssp->srcu_lock_nesting[idx] + 1);
>>>>        return idx;
>>>>    }
>>>
>>> Need change in idx calculation in srcu_torture_stats_print()?
>>>
>>> static inline void srcu_torture_stats_print(struct srcu_struct *ssp,
>>>    idx = READ_ONCE(ssp->srcu_idx) & 0x1;
>>
>> Excellent point!  It should match the calculation in __srcu_read_lock(),
>> shouldn't it?  I have updated this, thank you!
>>
>>                             Thanx, Paul
>>
> 
> Updated version looks good!
> 
> 
> Thanks
> Neeraj
> 

For the version in rcu -dev:

Reviewed-by: Neeraj Upadhyay <neeraju@codeaurora.org>

The only minor point I have is that the idx calculation could be made
an inline function (though srcu_drive_gp() does not require READ_ONCE()
for ->srcu_idx):

__srcu_read_lock() and srcu_torture_stats_print() are using

idx = ((READ_ONCE(ssp->srcu_idx) + 1) & 0x2) >> 1;

whereas srcu_drive_gp() uses:

idx = (ssp->srcu_idx & 0x2) / 2;
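
Something like the following is what I had in mind (helper name and
placement purely illustrative):

	/* Map a grace-period counter value to the reader-array index. */
	static inline int __srcu_ctr_to_idx(unsigned short ctr)
	{
		return (ctr & 0x2) >> 1;
	}

with __srcu_read_lock() and srcu_torture_stats_print() passing
READ_ONCE(ssp->srcu_idx) + 1 and srcu_drive_gp() passing ssp->srcu_idx,
so each call site keeps selecting the same array element as today.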

Thanks
Neeraj

>>> Thanks
>>> Neeraj
>>>
>>>> diff --git a/kernel/rcu/srcutiny.c b/kernel/rcu/srcutiny.c
>>>> index 6208c1d..5598cf6 100644
>>>> --- a/kernel/rcu/srcutiny.c
>>>> +++ b/kernel/rcu/srcutiny.c
>>>> @@ -124,11 +124,12 @@ void srcu_drive_gp(struct work_struct *wp)
>>>>        ssp->srcu_cb_head = NULL;
>>>>        ssp->srcu_cb_tail = &ssp->srcu_cb_head;
>>>>        local_irq_enable();
>>>> -    idx = ssp->srcu_idx;
>>>> -    WRITE_ONCE(ssp->srcu_idx, !ssp->srcu_idx);
>>>> +    idx = (ssp->srcu_idx & 0x2) / 2;
>>>> +    WRITE_ONCE(ssp->srcu_idx, ssp->srcu_idx + 1);
>>>>        WRITE_ONCE(ssp->srcu_gp_waiting, true);  /* srcu_read_unlock() wakes! */
>>>>        swait_event_exclusive(ssp->srcu_wq, !READ_ONCE(ssp->srcu_lock_nesting[idx]));
>>>>        WRITE_ONCE(ssp->srcu_gp_waiting, false); /* srcu_read_unlock() cheap. */
>>>> +    WRITE_ONCE(ssp->srcu_idx, ssp->srcu_idx + 1);
>>>>        /* Invoke the callbacks we removed above. */
>>>>        while (lh) {
>>>>
>>>
>>> -- 
>>> QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a 
>>> member of
>>> the Code Aurora Forum, hosted by The Linux Foundation
> 

-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a 
member of the Code Aurora Forum, hosted by The Linux Foundation

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v2 tip/core/rcu 4/6] srcu: Provide polling interfaces for Tiny SRCU grace periods
  2020-11-24 19:30         ` Paul E. McKenney
@ 2020-11-25  4:39           ` Neeraj Upadhyay
  0 siblings, 0 replies; 32+ messages in thread
From: Neeraj Upadhyay @ 2020-11-25  4:39 UTC (permalink / raw)
  To: paulmck
  Cc: rcu, linux-kernel, kernel-team, mingo, jiangshanlai, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel, kent.overstreet



On 11/25/2020 1:00 AM, Paul E. McKenney wrote:
> On Tue, Nov 24, 2020 at 10:44:24AM +0530, Neeraj Upadhyay wrote:
>>
>>
>> On 11/24/2020 2:42 AM, Paul E. McKenney wrote:
>>> On Mon, Nov 23, 2020 at 10:13:13AM +0530, Neeraj Upadhyay wrote:
>>>>
>>>>
>>>> On 11/21/2020 6:29 AM, paulmck@kernel.org wrote:
>>>>> From: "Paul E. McKenney" <paulmck@kernel.org>
>>>>>
>>>>> There is a need for a polling interface for SRCU grace
>>>>> periods, so this commit supplies get_state_synchronize_srcu(),
>>>>> start_poll_synchronize_srcu(), and poll_state_synchronize_srcu() for this
>>>>> purpose.  The first can be used if future grace periods are inevitable
>>>>> (perhaps due to a later call_srcu() invocation), the second if future
>>>>> grace periods might not otherwise happen, and the third to check if a
>>>>> grace period has elapsed since the corresponding call to either of the
>>>>> first two.
>>>>>
>>>>> As with get_state_synchronize_rcu() and cond_synchronize_rcu(),
>>>>> the return value from either get_state_synchronize_srcu() or
>>>>> start_poll_synchronize_srcu() must be passed in to a later call to
>>>>> poll_state_synchronize_srcu().
>>>>>
>>>>> Link: https://lore.kernel.org/rcu/20201112201547.GF3365678@moria.home.lan/
>>>>> Reported-by: Kent Overstreet <kent.overstreet@gmail.com>
>>>>> [ paulmck: Add EXPORT_SYMBOL_GPL() per kernel test robot feedback. ]
>>>>> [ paulmck: Apply feedback from Neeraj Upadhyay. ]
>>>>> Link: https://lore.kernel.org/lkml/20201117004017.GA7444@paulmck-ThinkPad-P72/
>>>>> Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
>>>>> ---
>>>>>     include/linux/rcupdate.h |  2 ++
>>>>>     include/linux/srcu.h     |  3 +++
>>>>>     include/linux/srcutiny.h |  1 +
>>>>>     kernel/rcu/srcutiny.c    | 52 ++++++++++++++++++++++++++++++++++++++++++++++--
>>>>>     4 files changed, 56 insertions(+), 2 deletions(-)
>>>>>
>>>>> diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
>>>>> index de08264..e09c0d8 100644
>>>>> --- a/include/linux/rcupdate.h
>>>>> +++ b/include/linux/rcupdate.h
>>>>> @@ -33,6 +33,8 @@
>>>>>     #define ULONG_CMP_GE(a, b)	(ULONG_MAX / 2 >= (a) - (b))
>>>>>     #define ULONG_CMP_LT(a, b)	(ULONG_MAX / 2 < (a) - (b))
>>>>>     #define ulong2long(a)		(*(long *)(&(a)))
>>>>> +#define USHORT_CMP_GE(a, b)	(USHRT_MAX / 2 >= (unsigned short)((a) - (b)))
>>>>> +#define USHORT_CMP_LT(a, b)	(USHRT_MAX / 2 < (unsigned short)((a) - (b)))
>>>>>     /* Exported common interfaces */
>>>>>     void call_rcu(struct rcu_head *head, rcu_callback_t func);
>>>>> diff --git a/include/linux/srcu.h b/include/linux/srcu.h
>>>>> index e432cc9..a0895bb 100644
>>>>> --- a/include/linux/srcu.h
>>>>> +++ b/include/linux/srcu.h
>>>>> @@ -60,6 +60,9 @@ void cleanup_srcu_struct(struct srcu_struct *ssp);
>>>>>     int __srcu_read_lock(struct srcu_struct *ssp) __acquires(ssp);
>>>>>     void __srcu_read_unlock(struct srcu_struct *ssp, int idx) __releases(ssp);
>>>>>     void synchronize_srcu(struct srcu_struct *ssp);
>>>>> +unsigned long get_state_synchronize_srcu(struct srcu_struct *ssp);
>>>>> +unsigned long start_poll_synchronize_srcu(struct srcu_struct *ssp);
>>>>> +bool poll_state_synchronize_srcu(struct srcu_struct *ssp, unsigned long cookie);
>>>>>     #ifdef CONFIG_DEBUG_LOCK_ALLOC
>>>>> diff --git a/include/linux/srcutiny.h b/include/linux/srcutiny.h
>>>>> index d9edb67..c7f0c1f 100644
>>>>> --- a/include/linux/srcutiny.h
>>>>> +++ b/include/linux/srcutiny.h
>>>>> @@ -16,6 +16,7 @@
>>>>>     struct srcu_struct {
>>>>>     	short srcu_lock_nesting[2];	/* srcu_read_lock() nesting depth. */
>>>>>     	unsigned short srcu_idx;	/* Current reader array element in bit 0x2. */
>>>>> +	unsigned short srcu_idx_max;	/* Furthest future srcu_idx request. */
>>>>>     	u8 srcu_gp_running;		/* GP workqueue running? */
>>>>>     	u8 srcu_gp_waiting;		/* GP waiting for readers? */
>>>>>     	struct swait_queue_head srcu_wq;
>>>>> diff --git a/kernel/rcu/srcutiny.c b/kernel/rcu/srcutiny.c
>>>>> index 3bac1db..b073175 100644
>>>>> --- a/kernel/rcu/srcutiny.c
>>>>> +++ b/kernel/rcu/srcutiny.c
>>>>> @@ -34,6 +34,7 @@ static int init_srcu_struct_fields(struct srcu_struct *ssp)
>>>>>     	ssp->srcu_gp_running = false;
>>>>>     	ssp->srcu_gp_waiting = false;
>>>>>     	ssp->srcu_idx = 0;
>>>>> +	ssp->srcu_idx_max = 0;
>>>>>     	INIT_WORK(&ssp->srcu_work, srcu_drive_gp);
>>>>>     	INIT_LIST_HEAD(&ssp->srcu_work.entry);
>>>>>     	return 0;
>>>>
>>>> Minor: cleanup_srcu_struct() can probably have 2 new sanity checks?
>>>>
>>>> WARN_ON(ssp->srcu_idx != ssp->srcu_idx_max);
>>>> WARN_ON(ssp->srcu_idx & 1);
>>>
>>> Good point, added and under test.
>>>
>>>> Thanks
>>>> Neeraj
>>>>
>>>>> @@ -114,7 +115,7 @@ void srcu_drive_gp(struct work_struct *wp)
>>>>>     	struct srcu_struct *ssp;
>>>>>     	ssp = container_of(wp, struct srcu_struct, srcu_work);
>>>>> -	if (ssp->srcu_gp_running || !READ_ONCE(ssp->srcu_cb_head))
>>>>> +	if (ssp->srcu_gp_running || USHORT_CMP_GE(ssp->srcu_idx, READ_ONCE(ssp->srcu_idx_max)))
>>>>>     		return; /* Already running or nothing to do. */
>>>>>     	/* Remove recently arrived callbacks and wait for readers. */
>>>>> @@ -147,13 +148,18 @@ void srcu_drive_gp(struct work_struct *wp)
>>>>>     	 * straighten that out.
>>>>>     	 */
>>>>>     	WRITE_ONCE(ssp->srcu_gp_running, false);
>>>>> -	if (READ_ONCE(ssp->srcu_cb_head))
>>>>> +	if (USHORT_CMP_GE(ssp->srcu_idx, READ_ONCE(ssp->srcu_idx_max)))
>>>>>     		schedule_work(&ssp->srcu_work);
>>>>>     }
>>>>>     EXPORT_SYMBOL_GPL(srcu_drive_gp);
>>>>>     static void srcu_gp_start_if_needed(struct srcu_struct *ssp)
>>>>>     {
>>>>> +	unsigned short cookie;
>>>>> +
>>>>> +	cookie = get_state_synchronize_srcu(ssp);
>>>>> +	if (USHORT_CMP_LT(READ_ONCE(ssp->srcu_idx_max), cookie))
>>>>> +		WRITE_ONCE(ssp->srcu_idx_max, cookie);
>>>>
>>>> Minor: Maybe we can return in the else part of USHORT_CMP_LT check, to avoid
>>>> scheduling work?
>>>
>>> How about like this?
>>
>> Looks good!
> 
> Are you willing to give an ack or reviewed-by for either patch?
> 
> 							Thanx, Paul
> 

For the version in rcu -dev

Reviewed-by: Neeraj Upadhyay <neeraju@codeaurora.org>

Thanks
Neeraj

>> Thanks
>> Neeraj
>>
>>>
>>> 	static void srcu_gp_start_if_needed(struct srcu_struct *ssp)
>>> 	{
>>> 		unsigned short cookie;
>>>
>>> 		cookie = get_state_synchronize_srcu(ssp);
>>> 		if (USHORT_CMP_GE(READ_ONCE(ssp->srcu_idx_max), cookie))
>>> 			return;
>>> 		WRITE_ONCE(ssp->srcu_idx_max, cookie);
>>> 		if (!READ_ONCE(ssp->srcu_gp_running)) {
>>> 			if (likely(srcu_init_done))
>>> 				schedule_work(&ssp->srcu_work);
>>> 			else if (list_empty(&ssp->srcu_work.entry))
>>> 				list_add(&ssp->srcu_work.entry, &srcu_boot_list);
>>> 		}
>>> 	}
>>>
>>> Testing this next.  ;-)
>>>
>>> 							Thanx, Paul
>>>
>>>> Thanks
>>>> Neeraj
>>>>
>>>>>     	if (!READ_ONCE(ssp->srcu_gp_running)) {
>>>>>     		if (likely(srcu_init_done))
>>>>>     			schedule_work(&ssp->srcu_work);
>>>>> @@ -196,6 +202,48 @@ void synchronize_srcu(struct srcu_struct *ssp)
>>>>>     }
>>>>>     EXPORT_SYMBOL_GPL(synchronize_srcu);
>>>>> +/*
>>>>> + * get_state_synchronize_srcu - Provide an end-of-grace-period cookie
>>>>> + */
>>>>> +unsigned long get_state_synchronize_srcu(struct srcu_struct *ssp)
>>>>> +{
>>>>> +	unsigned long ret;
>>>>> +
>>>>> +	barrier();
>>>>> +	ret = (READ_ONCE(ssp->srcu_idx) + 3) & ~0x1;
>>>>> +	barrier();
>>>>> +	return ret & USHRT_MAX;
>>>>> +}
>>>>> +EXPORT_SYMBOL_GPL(get_state_synchronize_srcu);
>>>>> +
>>>>> +/*
>>>>> + * start_poll_synchronize_srcu - Provide cookie and start grace period
>>>>> + *
>>>>> + * The difference between this and get_state_synchronize_srcu() is that
>>>>> + * this function ensures that the poll_state_synchronize_srcu() will
>>>>> + * eventually return the value true.
>>>>> + */
>>>>> +unsigned long start_poll_synchronize_srcu(struct srcu_struct *ssp)
>>>>> +{
>>>>> +	unsigned long ret = get_state_synchronize_srcu(ssp);
>>>>> +
>>>>> +	srcu_gp_start_if_needed(ssp);
>>>>> +	return ret;
>>>>> +}
>>>>> +EXPORT_SYMBOL_GPL(start_poll_synchronize_srcu);
>>>>> +
>>>>> +/*
>>>>> + * poll_state_synchronize_srcu - Has cookie's grace period ended?
>>>>> + */
>>>>> +bool poll_state_synchronize_srcu(struct srcu_struct *ssp, unsigned long cookie)
>>>>> +{
>>>>> +	bool ret = USHORT_CMP_GE(READ_ONCE(ssp->srcu_idx), cookie);
>>>>> +
>>>>> +	barrier();
>>>>> +	return ret;
>>>>> +}
>>>>> +EXPORT_SYMBOL_GPL(poll_state_synchronize_srcu);
>>>>> +
>>>>>     /* Lockdep diagnostics.  */
>>>>>     void __init rcu_scheduler_starting(void)
>>>>>     {
>>>>>
>>>>
>>>> -- 
>>>> QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of
>>>> the Code Aurora Forum, hosted by The Linux Foundation
>>
>> -- 
>> QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of
>> the Code Aurora Forum, hosted by The Linux Foundation

-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a 
member of the Code Aurora Forum, hosted by The Linux Foundation

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v2 tip/core/rcu 5/6] srcu: Provide polling interfaces for Tree SRCU grace periods
  2020-11-21  0:59 ` [PATCH v2 tip/core/rcu 5/6] srcu: Provide polling interfaces for Tree " paulmck
@ 2020-11-27  4:52   ` Neeraj Upadhyay
  0 siblings, 0 replies; 32+ messages in thread
From: Neeraj Upadhyay @ 2020-11-27  4:52 UTC (permalink / raw)
  To: paulmck, rcu
  Cc: linux-kernel, kernel-team, mingo, jiangshanlai, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel, kent.overstreet



On 11/21/2020 6:29 AM, paulmck@kernel.org wrote:
> From: "Paul E. McKenney" <paulmck@kernel.org>
> 
> There is a need for a polling interface for SRCU grace
> periods, so this commit supplies get_state_synchronize_srcu(),
> start_poll_synchronize_srcu(), and poll_state_synchronize_srcu() for this
> purpose.  The first can be used if future grace periods are inevitable
> (perhaps due to a later call_srcu() invocation), the second if future
> grace periods might not otherwise happen, and the third to check if a
> grace period has elapsed since the corresponding call to either of the
> first two.
> 
> As with get_state_synchronize_rcu() and cond_synchronize_rcu(),
> the return value from either get_state_synchronize_srcu() or
> start_poll_synchronize_srcu() must be passed in to a later call to
> poll_state_synchronize_srcu().
> 
> Link: https://lore.kernel.org/rcu/20201112201547.GF3365678@moria.home.lan/
> Reported-by: Kent Overstreet <kent.overstreet@gmail.com>
> [ paulmck: Add EXPORT_SYMBOL_GPL() per kernel test robot feedback. ]
> [ paulmck: Apply feedback from Neeraj Upadhyay. ]
> Link: https://lore.kernel.org/lkml/20201117004017.GA7444@paulmck-ThinkPad-P72/
> Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
> ---

For the version in -rcu dev

Reviewed-by: Neeraj Upadhyay <neeraju@codeaurora.org>


Thanks
Neeraj

>   kernel/rcu/srcutiny.c |  2 +-
>   kernel/rcu/srcutree.c | 67 ++++++++++++++++++++++++++++++++++++++++++++++++---
>   2 files changed, 64 insertions(+), 5 deletions(-)
> 
> diff --git a/kernel/rcu/srcutiny.c b/kernel/rcu/srcutiny.c
> index b073175..c8f4202 100644
> --- a/kernel/rcu/srcutiny.c
> +++ b/kernel/rcu/srcutiny.c
> @@ -148,7 +148,7 @@ void srcu_drive_gp(struct work_struct *wp)
>   	 * straighten that out.
>   	 */
>   	WRITE_ONCE(ssp->srcu_gp_running, false);
> -	if (USHORT_CMP_GE(ssp->srcu_idx, READ_ONCE(ssp->srcu_idx_max)))
> +	if (USHORT_CMP_LT(ssp->srcu_idx, READ_ONCE(ssp->srcu_idx_max)))
>   		schedule_work(&ssp->srcu_work);
>   }
>   EXPORT_SYMBOL_GPL(srcu_drive_gp);
> diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
> index d930ece..945c047 100644
> --- a/kernel/rcu/srcutree.c
> +++ b/kernel/rcu/srcutree.c
> @@ -810,7 +810,8 @@ static void srcu_leak_callback(struct rcu_head *rhp)
>   /*
>    * Start an SRCU grace period, and also queue the callback if non-NULL.
>    */
> -static void srcu_gp_start_if_needed(struct srcu_struct *ssp, struct rcu_head *rhp, bool do_norm)
> +static unsigned long srcu_gp_start_if_needed(struct srcu_struct *ssp,
> +					     struct rcu_head *rhp, bool do_norm)
>   {
>   	unsigned long flags;
>   	int idx;
> @@ -819,10 +820,12 @@ static void srcu_gp_start_if_needed(struct srcu_struct *ssp, struct rcu_head *rh
>   	unsigned long s;
>   	struct srcu_data *sdp;
>   
> +	check_init_srcu_struct(ssp);
>   	idx = srcu_read_lock(ssp);
>   	sdp = raw_cpu_ptr(ssp->sda);
>   	spin_lock_irqsave_rcu_node(sdp, flags);
> -	rcu_segcblist_enqueue(&sdp->srcu_cblist, rhp);
> +	if (rhp)
> +		rcu_segcblist_enqueue(&sdp->srcu_cblist, rhp);
>   	rcu_segcblist_advance(&sdp->srcu_cblist,
>   			      rcu_seq_current(&ssp->srcu_gp_seq));
>   	s = rcu_seq_snap(&ssp->srcu_gp_seq);
> @@ -841,6 +844,7 @@ static void srcu_gp_start_if_needed(struct srcu_struct *ssp, struct rcu_head *rh
>   	else if (needexp)
>   		srcu_funnel_exp_start(ssp, sdp->mynode, s);
>   	srcu_read_unlock(ssp, idx);
> +	return s;
>   }
>   
>   /*
> @@ -874,7 +878,6 @@ static void srcu_gp_start_if_needed(struct srcu_struct *ssp, struct rcu_head *rh
>   static void __call_srcu(struct srcu_struct *ssp, struct rcu_head *rhp,
>   			rcu_callback_t func, bool do_norm)
>   {
> -	check_init_srcu_struct(ssp);
>   	if (debug_rcu_head_queue(rhp)) {
>   		/* Probable double call_srcu(), so leak the callback. */
>   		WRITE_ONCE(rhp->func, srcu_leak_callback);
> @@ -882,7 +885,7 @@ static void __call_srcu(struct srcu_struct *ssp, struct rcu_head *rhp,
>   		return;
>   	}
>   	rhp->func = func;
> -	srcu_gp_start_if_needed(ssp, rhp, do_norm);
> +	(void)srcu_gp_start_if_needed(ssp, rhp, do_norm);
>   }
>   
>   /**
> @@ -1011,6 +1014,62 @@ void synchronize_srcu(struct srcu_struct *ssp)
>   }
>   EXPORT_SYMBOL_GPL(synchronize_srcu);
>   
> +/**
> + * get_state_synchronize_srcu - Provide an end-of-grace-period cookie
> + * @ssp: srcu_struct to provide cookie for.
> + *
> + * This function returns a cookie that can be passed to
> + * poll_state_synchronize_srcu(), which will return true if a full grace
> + * period has elapsed in the meantime.  It is the caller's responsibility
> + * to make sure that grace period happens, for example, by invoking
> + * call_srcu() after return from get_state_synchronize_srcu().
> + */
> +unsigned long get_state_synchronize_srcu(struct srcu_struct *ssp)
> +{
> +	// Any prior manipulation of SRCU-protected data must happen
> +	// before the load from ->srcu_gp_seq.
> +	smp_mb();
> +	return rcu_seq_snap(&ssp->srcu_gp_seq);
> +}
> +EXPORT_SYMBOL_GPL(get_state_synchronize_srcu);
> +
> +/**
> + * start_poll_synchronize_srcu - Provide cookie and start grace period
> + * @ssp: srcu_struct to provide cookie for.
> + *
> + * This function returns a cookie that can be passed to
> + * poll_state_synchronize_srcu(), which will return true if a full grace
> + * period has elapsed in the meantime.  Unlike get_state_synchronize_srcu(),
> + * this function also ensures that any needed SRCU grace period will be
> + * started.  This convenience does come at a cost in terms of CPU overhead.
> + */
> +unsigned long start_poll_synchronize_srcu(struct srcu_struct *ssp)
> +{
> +	return srcu_gp_start_if_needed(ssp, NULL, true);
> +}
> +EXPORT_SYMBOL_GPL(start_poll_synchronize_srcu);
> +
> +/**
> + * poll_state_synchronize_srcu - Has cookie's grace period ended?
> + * @ssp: srcu_struct to provide cookie for.
> + * @cookie: Return value from get_state_synchronize_srcu() or start_poll_synchronize_srcu().
> + *
> + * This function takes the cookie that was returned from either
> + * get_state_synchronize_srcu() or start_poll_synchronize_srcu(), and
> + * returns @true if an SRCU grace period elapsed since the time that the
> + * cookie was created.
> + */
> +bool poll_state_synchronize_srcu(struct srcu_struct *ssp, unsigned long cookie)
> +{
> +	if (!rcu_seq_done(&ssp->srcu_gp_seq, cookie))
> +		return false;
> +	// Ensure that the end of the SRCU grace period happens before
> +	// any subsequent code that the caller might execute.
> +	smp_mb(); // ^^^
> +	return true;
> +}
> +EXPORT_SYMBOL_GPL(poll_state_synchronize_srcu);
> +
>   /*
>    * Callback function for srcu_barrier() use.
>    */
> 

-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a 
member of the Code Aurora Forum, hosted by The Linux Foundation

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v2 tip/core/rcu 6/6] srcu: Document polling interfaces for Tree SRCU grace periods
  2020-11-21  0:59 ` [PATCH v2 tip/core/rcu 6/6] srcu: Document " paulmck
@ 2020-11-27  8:27   ` Neeraj Upadhyay
  0 siblings, 0 replies; 32+ messages in thread
From: Neeraj Upadhyay @ 2020-11-27  8:27 UTC (permalink / raw)
  To: paulmck, rcu
  Cc: linux-kernel, kernel-team, mingo, jiangshanlai, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel, kent.overstreet



On 11/21/2020 6:29 AM, paulmck@kernel.org wrote:
> From: "Paul E. McKenney" <paulmck@kernel.org>
> 
> This commit adds requirements documentation for the
> get_state_synchronize_srcu(), start_poll_synchronize_srcu(), and
> poll_state_synchronize_srcu() functions.
> 
> Link: https://lore.kernel.org/rcu/20201112201547.GF3365678@moria.home.lan/
> Reported-by: Kent Overstreet <kent.overstreet@gmail.com>
> Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
> ---

Reviewed-by: Neeraj Upadhyay <neeraju@codeaurora.org>


Thanks
Neeraj

>   Documentation/RCU/Design/Requirements/Requirements.rst | 18 ++++++++++++++++++
>   1 file changed, 18 insertions(+)
> 
> diff --git a/Documentation/RCU/Design/Requirements/Requirements.rst b/Documentation/RCU/Design/Requirements/Requirements.rst
> index 1e3df77..2dce79d 100644
> --- a/Documentation/RCU/Design/Requirements/Requirements.rst
> +++ b/Documentation/RCU/Design/Requirements/Requirements.rst
> @@ -2600,6 +2600,24 @@ also includes DEFINE_SRCU(), DEFINE_STATIC_SRCU(), and
>   init_srcu_struct() APIs for defining and initializing
>   ``srcu_struct`` structures.
>   
> +More recently, the SRCU API has added polling interfaces:
> +
> +#. start_poll_synchronize_srcu() returns a cookie identifying
> +   the completion of a future SRCU grace period and ensures
> +   that this grace period will be started.
> +#. poll_state_synchronize_srcu() returns ``true`` iff the
> +   specified cookie corresponds to an already-completed
> +   SRCU grace period.
> +#. get_state_synchronize_srcu() returns a cookie just like
> +   start_poll_synchronize_srcu() does, but differs in that
> +   it does nothing to ensure that any future SRCU grace period
> +   will be started.
> +
> +These functions are used to avoid unnecessary SRCU grace periods in
> +certain types of buffer-cache algorithms having multi-stage age-out
> +mechanisms.  The idea is that by the time the block has aged completely
> +from the cache, an SRCU grace period will be very likely to have elapsed.
> +
>   Tasks RCU
>   ~~~~~~~~~
>   
> 

-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a 
member of the Code Aurora Forum, hosted by The Linux Foundation

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v2 tip/core/rcu 1/6] srcu: Make Tiny SRCU use multi-bit grace-period counter
  2020-11-25  4:33         ` Neeraj Upadhyay
@ 2020-11-28  2:16           ` Paul E. McKenney
  2020-11-28  4:12             ` Neeraj Upadhyay
  0 siblings, 1 reply; 32+ messages in thread
From: Paul E. McKenney @ 2020-11-28  2:16 UTC (permalink / raw)
  To: Neeraj Upadhyay
  Cc: rcu, linux-kernel, kernel-team, mingo, jiangshanlai, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel, kent.overstreet

On Wed, Nov 25, 2020 at 10:03:26AM +0530, Neeraj Upadhyay wrote:
> 
> 
> On 11/24/2020 10:48 AM, Neeraj Upadhyay wrote:
> > 
> > 
> > On 11/24/2020 1:25 AM, Paul E. McKenney wrote:
> > > On Mon, Nov 23, 2020 at 10:01:13AM +0530, Neeraj Upadhyay wrote:
> > > > On 11/21/2020 6:29 AM, paulmck@kernel.org wrote:
> > > > > From: "Paul E. McKenney" <paulmck@kernel.org>
> > > > > 
> > > > > There is a need for a polling interface for SRCU grace periods.  This
> > > > > polling needs to distinguish between an SRCU instance being idle on the
> > > > > one hand or in the middle of a grace period on the other.  This commit
> > > > > therefore converts the Tiny SRCU srcu_struct structure's srcu_idx from
> > > > > a de facto boolean to a free-running counter, using the bottom bit to
> > > > > indicate that a grace period is in progress.  The second-from-bottom
> > > > > bit is thus used as the index returned by srcu_read_lock().
> > > > > 
> > > > > Link: https://lore.kernel.org/rcu/20201112201547.GF3365678@moria.home.lan/
> > > > > Reported-by: Kent Overstreet <kent.overstreet@gmail.com>
> > > > > [ paulmck: Fix __srcu_read_lock() idx computation Neeraj per
> > > > > Upadhyay. ]
> > > > > Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
> > > > > ---
> > > > >    include/linux/srcutiny.h | 4 ++--
> > > > >    kernel/rcu/srcutiny.c    | 5 +++--
> > > > >    2 files changed, 5 insertions(+), 4 deletions(-)
> > > > > 
> > > > > diff --git a/include/linux/srcutiny.h b/include/linux/srcutiny.h
> > > > > index 5a5a194..d9edb67 100644
> > > > > --- a/include/linux/srcutiny.h
> > > > > +++ b/include/linux/srcutiny.h
> > > > > @@ -15,7 +15,7 @@
> > > > >    struct srcu_struct {
> > > > >        short srcu_lock_nesting[2];    /* srcu_read_lock() nesting depth. */
> > > > > -    short srcu_idx;            /* Current reader array element. */
> > > > > +    unsigned short srcu_idx;    /* Current reader array element in bit 0x2. */
> > > > >        u8 srcu_gp_running;        /* GP workqueue running? */
> > > > >        u8 srcu_gp_waiting;        /* GP waiting for readers? */
> > > > >        struct swait_queue_head srcu_wq;
> > > > > @@ -59,7 +59,7 @@ static inline int __srcu_read_lock(struct srcu_struct *ssp)
> > > > >    {
> > > > >        int idx;
> > > > > -    idx = READ_ONCE(ssp->srcu_idx);
> > > > > +    idx = ((READ_ONCE(ssp->srcu_idx) + 1) & 0x2) >> 1;
> > > > >        WRITE_ONCE(ssp->srcu_lock_nesting[idx], ssp->srcu_lock_nesting[idx] + 1);
> > > > >        return idx;
> > > > >    }
> > > > 
> > > > Need a change in the idx calculation in srcu_torture_stats_print()?
> > > > 
> > > > static inline void srcu_torture_stats_print(struct srcu_struct *ssp,
> > > >    idx = READ_ONCE(ssp->srcu_idx) & 0x1;
> > > 
> > > Excellent point!  It should match the calculation in __srcu_read_lock(),
> > > shouldn't it?  I have updated this, thank you!
> > > 
> > >                             Thanx, Paul
> > > 
> > 
> > Updated version looks good!
> > 
> > 
> > Thanks
> > Neeraj
> > 
> 
> For the version in rcu -dev:
> 
> Reviewed-by: Neeraj Upadhyay <neeraju@codeaurora.org>

I applied all of these, thank you very much!

> The only minor point I have is that the idx calculation could be made an inline
> function (though srcu_drive_gp() does not require READ_ONCE() for ->srcu_idx):
> 
> __srcu_read_lock() and srcu_torture_stats_print() are using
> 
> idx = ((READ_ONCE(ssp->srcu_idx) + 1) & 0x2) >> 1;
> 
> whereas srcu_drive_gp() uses:
> 
> idx = (ssp->srcu_idx & 0x2) / 2;

They do work on different elements of the various arrays.  Or do you
believe that the srcu_drive_gp() use needs adjusting?
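
To make that distinction concrete, here is a short walk-through of both expressions;
the expressions themselves are from the patch above, but the trace is only an
illustrative aside:

	/*
	 * Illustrative trace (editorial, not part of the patch):
	 *
	 *   srcu_idx == 0 (idle): __srcu_read_lock() computes
	 *                         ((0 + 1) & 0x2) >> 1 == 0.
	 *   GP starts:            srcu_drive_gp() computes (0 & 0x2) / 2 == 0,
	 *                         then increments srcu_idx to 1.
	 *   srcu_idx == 1 (GP):   new readers compute ((1 + 1) & 0x2) >> 1 == 1,
	 *                         while the GP drains srcu_lock_nesting[0].
	 *   GP ends:              srcu_idx is incremented to 2; readers stay on
	 *                         element 1 until the next grace period flips it.
	 *
	 * The "+ 1" on the reader side is what steers new readers away from
	 * the element that the in-progress grace period is waiting on.
	 */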

Either way, the overhead of READ_ONCE() is absolutely not at all
a problem.  Would you like to put together a patch so that I can see
exactly what you are suggesting?

							Thanx, Paul

> Thanks
> Neeraj
> 
> > > > Thanks
> > > > Neeraj
> > > > 
> > > > > diff --git a/kernel/rcu/srcutiny.c b/kernel/rcu/srcutiny.c
> > > > > index 6208c1d..5598cf6 100644
> > > > > --- a/kernel/rcu/srcutiny.c
> > > > > +++ b/kernel/rcu/srcutiny.c
> > > > > @@ -124,11 +124,12 @@ void srcu_drive_gp(struct work_struct *wp)
> > > > >        ssp->srcu_cb_head = NULL;
> > > > >        ssp->srcu_cb_tail = &ssp->srcu_cb_head;
> > > > >        local_irq_enable();
> > > > > -    idx = ssp->srcu_idx;
> > > > > -    WRITE_ONCE(ssp->srcu_idx, !ssp->srcu_idx);
> > > > > +    idx = (ssp->srcu_idx & 0x2) / 2;
> > > > > +    WRITE_ONCE(ssp->srcu_idx, ssp->srcu_idx + 1);
> > > > >        WRITE_ONCE(ssp->srcu_gp_waiting, true);  /* srcu_read_unlock() wakes! */
> > > > >        swait_event_exclusive(ssp->srcu_wq, !READ_ONCE(ssp->srcu_lock_nesting[idx]));
> > > > >        WRITE_ONCE(ssp->srcu_gp_waiting, false); /* srcu_read_unlock() cheap. */
> > > > > +    WRITE_ONCE(ssp->srcu_idx, ssp->srcu_idx + 1);
> > > > >        /* Invoke the callbacks we removed above. */
> > > > >        while (lh) {
> > > > > 
> > > > 
> > > > -- 
> > > > QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is
> > > > a member of
> > > > the Code Aurora Forum, hosted by The Linux Foundation
> > 
> 
> -- 
> QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of
> the Code Aurora Forum, hosted by The Linux Foundation

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v2 tip/core/rcu 1/6] srcu: Make Tiny SRCU use multi-bit grace-period counter
  2020-11-28  2:16           ` Paul E. McKenney
@ 2020-11-28  4:12             ` Neeraj Upadhyay
  0 siblings, 0 replies; 32+ messages in thread
From: Neeraj Upadhyay @ 2020-11-28  4:12 UTC (permalink / raw)
  To: paulmck
  Cc: rcu, linux-kernel, kernel-team, mingo, jiangshanlai, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel, kent.overstreet



On 11/28/2020 7:46 AM, Paul E. McKenney wrote:
> On Wed, Nov 25, 2020 at 10:03:26AM +0530, Neeraj Upadhyay wrote:
>>
>>
>> On 11/24/2020 10:48 AM, Neeraj Upadhyay wrote:
>>>
>>>
>>> On 11/24/2020 1:25 AM, Paul E. McKenney wrote:
>>>> On Mon, Nov 23, 2020 at 10:01:13AM +0530, Neeraj Upadhyay wrote:
>>>>> On 11/21/2020 6:29 AM, paulmck@kernel.org wrote:
>>>>>> From: "Paul E. McKenney" <paulmck@kernel.org>
>>>>>>
>>>>>> There is a need for a polling interface for SRCU grace periods.  This
>>>>>> polling needs to distinguish between an SRCU instance being idle on the
>>>>>> one hand or in the middle of a grace period on the other.  This commit
>>>>>> therefore converts the Tiny SRCU srcu_struct structure's srcu_idx from
>>>>>> a de facto boolean to a free-running counter, using the bottom bit to
>>>>>> indicate that a grace period is in progress.  The second-from-bottom
>>>>>> bit is thus used as the index returned by srcu_read_lock().
>>>>>>
>>>>>> Link: https://lore.kernel.org/rcu/20201112201547.GF3365678@moria.home.lan/
>>>>>> Reported-by: Kent Overstreet <kent.overstreet@gmail.com>
>>>>>> [ paulmck: Fix __srcu_read_lock() idx computation per Neeraj Upadhyay. ]
>>>>>> Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
>>>>>> ---
>>>>>>     include/linux/srcutiny.h | 4 ++--
>>>>>>     kernel/rcu/srcutiny.c    | 5 +++--
>>>>>>     2 files changed, 5 insertions(+), 4 deletions(-)
>>>>>>
>>>>>> diff --git a/include/linux/srcutiny.h b/include/linux/srcutiny.h
>>>>>> index 5a5a194..d9edb67 100644
>>>>>> --- a/include/linux/srcutiny.h
>>>>>> +++ b/include/linux/srcutiny.h
>>>>>> @@ -15,7 +15,7 @@
>>>>>>     struct srcu_struct {
>>>>>>         short srcu_lock_nesting[2];    /* srcu_read_lock() nesting depth. */
>>>>>> -    short srcu_idx;            /* Current reader array element. */
>>>>>> +    unsigned short srcu_idx;    /* Current reader array element in bit 0x2. */
>>>>>>         u8 srcu_gp_running;        /* GP workqueue running? */
>>>>>>         u8 srcu_gp_waiting;        /* GP waiting for readers? */
>>>>>>         struct swait_queue_head srcu_wq;
>>>>>> @@ -59,7 +59,7 @@ static inline int __srcu_read_lock(struct srcu_struct *ssp)
>>>>>>     {
>>>>>>         int idx;
>>>>>> -    idx = READ_ONCE(ssp->srcu_idx);
>>>>>> +    idx = ((READ_ONCE(ssp->srcu_idx) + 1) & 0x2) >> 1;
>>>>>>         WRITE_ONCE(ssp->srcu_lock_nesting[idx], ssp->srcu_lock_nesting[idx] + 1);
>>>>>>         return idx;
>>>>>>     }
>>>>>
>>>>> Need a change in the idx calculation in srcu_torture_stats_print()?
>>>>>
>>>>> static inline void srcu_torture_stats_print(struct srcu_struct *ssp,
>>>>>     idx = READ_ONCE(ssp->srcu_idx) & 0x1;
>>>>
>>>> Excellent point!  It should match the calculation in __srcu_read_lock(),
>>>> shouldn't it?  I have updated this, thank you!
>>>>
>>>>                              Thanx, Paul
>>>>
>>>
>>> Updated version looks good!
>>>
>>>
>>> Thanks
>>> Neeraj
>>>
>>
>> For the version in rcu -dev:
>>
>> Reviewed-by: Neeraj Upadhyay <neeraju@codeaurora.org>
> 
> I applied all of these, thank you very much!
> 

Welcome :)

>> The only minor point I have is that the idx calculation could be made an inline
>> function (though srcu_drive_gp() does not require READ_ONCE() for ->srcu_idx):
>>
>> __srcu_read_lock() and srcu_torture_stats_print() are using
>>
>> idx = ((READ_ONCE(ssp->srcu_idx) + 1) & 0x2) >> 1;
>>
>> whereas srcu_drive_gp() uses:
>>
>> idx = (ssp->srcu_idx & 0x2) / 2;
> 
> They do work on different elements of the various arrays.  Or do you
> believe that the srcu_drive_gp() use needs adjusting?

My bad, I missed that they are using different elements of the array.
Please ignore this comment.


Thanks
Neeraj

> 
> Either way, the overhead of READ_ONCE() is absolutely not at all
> a problem.  Would you like to put together a patch so that I can see
> exactly what you are suggesting?
> 
> 							Thanx, Paul
> 
>> Thanks
>> Neeraj
>>
>>>>> Thanks
>>>>> Neeraj
>>>>>
>>>>>> diff --git a/kernel/rcu/srcutiny.c b/kernel/rcu/srcutiny.c
>>>>>> index 6208c1d..5598cf6 100644
>>>>>> --- a/kernel/rcu/srcutiny.c
>>>>>> +++ b/kernel/rcu/srcutiny.c
>>>>>> @@ -124,11 +124,12 @@ void srcu_drive_gp(struct work_struct *wp)
>>>>>>         ssp->srcu_cb_head = NULL;
>>>>>>         ssp->srcu_cb_tail = &ssp->srcu_cb_head;
>>>>>>         local_irq_enable();
>>>>>> -    idx = ssp->srcu_idx;
>>>>>> -    WRITE_ONCE(ssp->srcu_idx, !ssp->srcu_idx);
>>>>>> +    idx = (ssp->srcu_idx & 0x2) / 2;
>>>>>> +    WRITE_ONCE(ssp->srcu_idx, ssp->srcu_idx + 1);
>>>>>>         WRITE_ONCE(ssp->srcu_gp_waiting, true);  /* srcu_read_unlock() wakes! */
>>>>>>         swait_event_exclusive(ssp->srcu_wq, !READ_ONCE(ssp->srcu_lock_nesting[idx]));
>>>>>>         WRITE_ONCE(ssp->srcu_gp_waiting, false); /* srcu_read_unlock() cheap. */
>>>>>> +    WRITE_ONCE(ssp->srcu_idx, ssp->srcu_idx + 1);
>>>>>>         /* Invoke the callbacks we removed above. */
>>>>>>         while (lh) {
>>>>>>
>>>>>
>>>>> -- 
>>>>> QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is
>>>>> a member of
>>>>> the Code Aurora Forum, hosted by The Linux Foundation
>>>
>>
>> -- 
>> QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of
>> the Code Aurora Forum, hosted by The Linux Foundation

-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a 
member of the Code Aurora Forum, hosted by The Linux Foundation

^ permalink raw reply	[flat|nested] 32+ messages in thread

end of thread, other threads:[~2020-11-28 18:03 UTC | newest]

Thread overview: 32+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <@@@>
2017-06-06 17:07 ` [PATCH v2 tip/core/rcu 1/2] srcu: Allow use of Tiny/Tree SRCU from both process and interrupt context Paul E. McKenney
2017-06-06 17:07 ` [PATCH v2 tip/core/rcu 2/2] srcu: Allow use of Classic " Paul E. McKenney
2020-02-14 23:38 ` [PATCH tip/core/rcu 1/9] doc: Add some more RCU list patterns in the kernel paulmck
2020-02-14 23:38 ` [PATCH tip/core/rcu 2/9] doc/RCU/Design: Remove remaining HTML tags in ReST files paulmck
2020-02-14 23:38 ` [PATCH tip/core/rcu 3/9] doc/RCU/listRCU: Fix typos in a example code snippets paulmck
2020-02-14 23:38 ` [PATCH tip/core/rcu 4/9] doc/RCU/listRCU: Update example function name paulmck
2020-02-14 23:38 ` [PATCH tip/core/rcu 5/9] doc/RCU/rcu: Use ':ref:' for links to other docs paulmck
2020-02-14 23:39 ` [PATCH tip/core/rcu 6/9] doc/RCU/rcu: Use absolute paths for non-rst files paulmck
2020-02-14 23:39 ` [PATCH tip/core/rcu 7/9] doc/RCU/rcu: Use https instead of http if possible paulmck
2020-02-14 23:39 ` [PATCH tip/core/rcu 8/9] doc: Add rcutorture scripting to torture.txt paulmck
2020-02-14 23:39 ` [PATCH tip/core/rcu 9/9] Documentation/memory-barriers: Fix typos paulmck
2020-11-21  0:59 ` [PATCH v2 tip/core/rcu 1/6] srcu: Make Tiny SRCU use multi-bit grace-period counter paulmck
2020-11-23  4:31   ` Neeraj Upadhyay
2020-11-23 19:55     ` Paul E. McKenney
2020-11-24  5:18       ` Neeraj Upadhyay
2020-11-25  4:33         ` Neeraj Upadhyay
2020-11-28  2:16           ` Paul E. McKenney
2020-11-28  4:12             ` Neeraj Upadhyay
2020-11-21  0:59 ` [PATCH v2 tip/core/rcu 2/6] srcu: Provide internal interface to start a Tiny SRCU grace period paulmck
2020-11-21  0:59 ` [PATCH v2 tip/core/rcu 3/6] srcu: Provide internal interface to start a Tree " paulmck
2020-11-21  0:59 ` [PATCH v2 tip/core/rcu 4/6] srcu: Provide polling interfaces for Tiny SRCU grace periods paulmck
2020-11-22 14:30   ` Neeraj Upadhyay
2020-11-22 17:57     ` Paul E. McKenney
2020-11-23  4:43   ` Neeraj Upadhyay
2020-11-23 21:12     ` Paul E. McKenney
2020-11-24  5:14       ` Neeraj Upadhyay
2020-11-24 19:30         ` Paul E. McKenney
2020-11-25  4:39           ` Neeraj Upadhyay
2020-11-21  0:59 ` [PATCH v2 tip/core/rcu 5/6] srcu: Provide polling interfaces for Tree " paulmck
2020-11-27  4:52   ` Neeraj Upadhyay
2020-11-21  0:59 ` [PATCH v2 tip/core/rcu 6/6] srcu: Document " paulmck
2020-11-27  8:27   ` Neeraj Upadhyay

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).