linux-kernel.vger.kernel.org archive mirror
* [PATCH tip/core/rcu 0/10] Do not IPI offline CPUs, de-emphasize cond_resched_rcu_qs()
@ 2017-12-01 19:21 Paul E. McKenney
  2017-12-01 19:21 ` [PATCH tip/core/rcu 01/10] sched: Stop resched_cpu() from sending IPIs to offline CPUs Paul E. McKenney
                   ` (9 more replies)
  0 siblings, 10 replies; 32+ messages in thread
From: Paul E. McKenney @ 2017-12-01 19:21 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, jiangshanlai, dipankar, akpm, mathieu.desnoyers, josh,
	tglx, peterz, rostedt, dhowells, edumazet, fweisbec, oleg

Hello!

This series provides some fixes to prevent sending IPIs to offline
CPUs and also replaces most uses of cond_resched_rcu_qs() with the new
and improved cond_resched().  There are still a few remaining uses of
cond_resched_rcu_qs() in rcutorture because the mechanisms that strengthen
cond_resched() rely on RCU making forward progress.  This series contains:

1.	Stop resched_cpu() from sending IPIs to offline CPUs.

2.	Stop switched_to_rt() from sending IPIs to offline CPUs.

3.	Move netfilter from cond_resched_rcu_qs() to cond_resched().

4.	Move mm from cond_resched_rcu_qs() to cond_resched().

5.	Move workqueue from cond_resched_rcu_qs() to cond_resched().

6.	Move trace from cond_resched_rcu_qs() to cond_resched().

7.	Move softirq from cond_resched_rcu_qs() to cond_resched().

8.	Move fs from cond_resched_rcu_qs() to cond_resched().

9.	Remove cond_resched_rcu_qs() from documentation.

10.	Improve performance by accounting for rcu_all_qs() in cond_resched().

							Thanx, Paul

------------------------------------------------------------------------

 Documentation/RCU/Design/Data-Structures/Data-Structures.html |    3 ++-
 Documentation/RCU/Design/Requirements/Requirements.html       |    4 ++--
 Documentation/RCU/stallwarn.txt                               |   10 ++++------
 fs/file.c                                                     |    2 +-
 include/linux/rcupdate.h                                      |    2 +-
 kernel/sched/core.c                                           |    3 ++-
 kernel/sched/rt.c                                             |    2 +-
 kernel/softirq.c                                              |    2 +-
 kernel/trace/trace_benchmark.c                                |    2 +-
 kernel/workqueue.c                                            |    2 +-
 mm/mlock.c                                                    |    2 +-
 net/netfilter/nf_conntrack_core.c                             |    2 +-
 12 files changed, 18 insertions(+), 18 deletions(-)

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH tip/core/rcu 01/10] sched: Stop resched_cpu() from sending IPIs to offline CPUs
  2017-12-01 19:21 [PATCH tip/core/rcu 0/10] Do not IPI offline CPUs, de-emphasize cond_resched_rcu_qs() Paul E. McKenney
@ 2017-12-01 19:21 ` Paul E. McKenney
  2017-12-01 19:21 ` [PATCH tip/core/rcu 02/10] sched: Stop switched_to_rt() " Paul E. McKenney
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 32+ messages in thread
From: Paul E. McKenney @ 2017-12-01 19:21 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, jiangshanlai, dipankar, akpm, mathieu.desnoyers, josh,
	tglx, peterz, rostedt, dhowells, edumazet, fweisbec, oleg,
	Paul E. McKenney, Ingo Molnar

The rcutorture test suite occasionally provokes a splat due to invoking
resched_cpu() on an offline CPU:

WARNING: CPU: 2 PID: 8 at /home/paulmck/public_git/linux-rcu/arch/x86/kernel/smp.c:128 native_smp_send_reschedule+0x37/0x40
Modules linked in:
CPU: 2 PID: 8 Comm: rcu_preempt Not tainted 4.14.0-rc4+ #1
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Ubuntu-1.8.2-1ubuntu1 04/01/2014
task: ffff902ede9daf00 task.stack: ffff96c50010c000
RIP: 0010:native_smp_send_reschedule+0x37/0x40
RSP: 0018:ffff96c50010fdb8 EFLAGS: 00010096
RAX: 000000000000002e RBX: ffff902edaab4680 RCX: 0000000000000003
RDX: 0000000080000003 RSI: 0000000000000000 RDI: 00000000ffffffff
RBP: ffff96c50010fdb8 R08: 0000000000000000 R09: 0000000000000001
R10: 0000000000000000 R11: 00000000299f36ae R12: 0000000000000001
R13: ffffffff9de64240 R14: 0000000000000001 R15: ffffffff9de64240
FS:  0000000000000000(0000) GS:ffff902edfc80000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00000000f7d4c642 CR3: 000000001e0e2000 CR4: 00000000000006e0
Call Trace:
 resched_curr+0x8f/0x1c0
 resched_cpu+0x2c/0x40
 rcu_implicit_dynticks_qs+0x152/0x220
 force_qs_rnp+0x147/0x1d0
 ? sync_rcu_exp_select_cpus+0x450/0x450
 rcu_gp_kthread+0x5a9/0x950
 kthread+0x142/0x180
 ? force_qs_rnp+0x1d0/0x1d0
 ? kthread_create_on_node+0x40/0x40
 ret_from_fork+0x27/0x40
Code: 14 01 0f 92 c0 84 c0 74 14 48 8b 05 14 4f f4 00 be fd 00 00 00 ff 90 a0 00 00 00 5d c3 89 fe 48 c7 c7 38 89 ca 9d e8 e5 56 08 00 <0f> ff 5d c3 0f 1f 44 00 00 8b 05 52 9e 37 02 85 c0 75 38 55 48
---[ end trace 26df9e5df4bba4ac ]---

This splat cannot be generated by expedited grace periods because they
always invoke resched_cpu() on the current CPU, which is good because
expedited grace periods require that resched_cpu() unconditionally
succeed.  However, other parts of RCU can tolerate resched_cpu() acting
as a no-op, at least as long as it doesn't happen too often.

This commit therefore makes resched_cpu() invoke resched_curr() only if
the CPU is either online or is the current CPU.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
---
 kernel/sched/core.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 75554f366fd3..c85dfb746f8c 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -508,7 +508,8 @@ void resched_cpu(int cpu)
 	unsigned long flags;
 
 	raw_spin_lock_irqsave(&rq->lock, flags);
-	resched_curr(rq);
+	if (cpu_online(cpu) || cpu == smp_processor_id())
+		resched_curr(rq);
 	raw_spin_unlock_irqrestore(&rq->lock, flags);
 }
 
-- 
2.5.2


* [PATCH tip/core/rcu 02/10] sched: Stop switched_to_rt() from sending IPIs to offline CPUs
  2017-12-01 19:21 [PATCH tip/core/rcu 0/10] Do not IPI offline CPUs, de-emphasize cond_resched_rcu_qs() Paul E. McKenney
  2017-12-01 19:21 ` [PATCH tip/core/rcu 01/10] sched: Stop resched_cpu() from sending IPIs to offline CPUs Paul E. McKenney
@ 2017-12-01 19:21 ` Paul E. McKenney
  2017-12-01 19:21 ` [PATCH tip/core/rcu 03/10] netfilter: Eliminate cond_resched_rcu_qs() in favor of cond_resched() Paul E. McKenney
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 32+ messages in thread
From: Paul E. McKenney @ 2017-12-01 19:21 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, jiangshanlai, dipankar, akpm, mathieu.desnoyers, josh,
	tglx, peterz, rostedt, dhowells, edumazet, fweisbec, oleg,
	Paul E. McKenney, Ingo Molnar

The rcutorture test suite occasionally provokes a splat due to invoking
rt_mutex_lock() which needs to boost the priority of a task currently
sitting on a runqueue that belongs to an offline CPU:

WARNING: CPU: 0 PID: 12 at /home/paulmck/public_git/linux-rcu/arch/x86/kernel/smp.c:128 native_smp_send_reschedule+0x37/0x40
Modules linked in:
CPU: 0 PID: 12 Comm: rcub/7 Not tainted 4.14.0-rc4+ #1
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Ubuntu-1.8.2-1ubuntu1 04/01/2014
task: ffff9ed3de5f8cc0 task.stack: ffffbbf80012c000
RIP: 0010:native_smp_send_reschedule+0x37/0x40
RSP: 0018:ffffbbf80012fd10 EFLAGS: 00010082
RAX: 000000000000002f RBX: ffff9ed3dd9cb300 RCX: 0000000000000004
RDX: 0000000080000004 RSI: 0000000000000086 RDI: 00000000ffffffff
RBP: ffffbbf80012fd10 R08: 000000000009da7a R09: 0000000000007b9d
R10: 0000000000000001 R11: ffffffffbb57c2cd R12: 000000000000000d
R13: ffff9ed3de5f8cc0 R14: 0000000000000061 R15: ffff9ed3ded59200
FS:  0000000000000000(0000) GS:ffff9ed3dea00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00000000080686f0 CR3: 000000001b9e0000 CR4: 00000000000006f0
Call Trace:
 resched_curr+0x61/0xd0
 switched_to_rt+0x8f/0xa0
 rt_mutex_setprio+0x25c/0x410
 task_blocks_on_rt_mutex+0x1b3/0x1f0
 rt_mutex_slowlock+0xa9/0x1e0
 rt_mutex_lock+0x29/0x30
 rcu_boost_kthread+0x127/0x3c0
 kthread+0x104/0x140
 ? rcu_report_unblock_qs_rnp+0x90/0x90
 ? kthread_create_on_node+0x40/0x40
 ret_from_fork+0x22/0x30
Code: f0 00 0f 92 c0 84 c0 74 14 48 8b 05 34 74 c5 00 be fd 00 00 00 ff 90 a0 00 00 00 5d c3 89 fe 48 c7 c7 a0 c6 fc b9 e8 d5 b5 06 00 <0f> ff 5d c3 0f 1f 44 00 00 8b 05 a2 d1 13 02 85 c0 75 38 55 48

But the target task's priority has already been adjusted, so the only
purpose of switched_to_rt() invoking resched_curr() is to wake up the
CPU running some task that needs to be preempted by the boosted task.
However, the CPU is offline, which presumably means that the task must be
migrated to some other CPU, and that this other CPU will undertake any
needed preemption at the time of migration.  Because the runqueue lock
is held when resched_curr() is invoked, we know that the boosted task
cannot go anywhere, so it is not necessary to invoke resched_curr()
in this particular case.

This commit therefore makes switched_to_rt() refrain from invoking
resched_curr() when the target CPU is offline.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
---
 kernel/sched/rt.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 4056c19ca3f0..f242f642ef53 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -2206,7 +2206,7 @@ static void switched_to_rt(struct rq *rq, struct task_struct *p)
 		if (p->nr_cpus_allowed > 1 && rq->rt.overloaded)
 			queue_push_tasks(rq);
 #endif /* CONFIG_SMP */
-		if (p->prio < rq->curr->prio)
+		if (p->prio < rq->curr->prio && cpu_online(cpu_of(rq)))
 			resched_curr(rq);
 	}
 }
-- 
2.5.2


* [PATCH tip/core/rcu 03/10] netfilter: Eliminate cond_resched_rcu_qs() in favor of cond_resched()
  2017-12-01 19:21 [PATCH tip/core/rcu 0/10] Do not IPI offline CPUs, de-emphasize cond_resched_rcu_qs() Paul E. McKenney
  2017-12-01 19:21 ` [PATCH tip/core/rcu 01/10] sched: Stop resched_cpu() from sending IPIs to offline CPUs Paul E. McKenney
  2017-12-01 19:21 ` [PATCH tip/core/rcu 02/10] sched: Stop switched_to_rt() " Paul E. McKenney
@ 2017-12-01 19:21 ` Paul E. McKenney
  2017-12-01 19:21 ` [PATCH tip/core/rcu 04/10] mm: " Paul E. McKenney
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 32+ messages in thread
From: Paul E. McKenney @ 2017-12-01 19:21 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, jiangshanlai, dipankar, akpm, mathieu.desnoyers, josh,
	tglx, peterz, rostedt, dhowells, edumazet, fweisbec, oleg,
	Paul E. McKenney, Pablo Neira Ayuso, Jozsef Kadlecsik,
	Florian Westphal, David S. Miller, netfilter-devel

Now that cond_resched() also provides RCU quiescent states when
needed, it can be used in place of cond_resched_rcu_qs().  This
commit therefore makes this change.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Pablo Neira Ayuso <pablo@netfilter.org>
Cc: Jozsef Kadlecsik <kadlec@blackhole.kfki.hu>
Cc: Florian Westphal <fw@strlen.de>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: <netfilter-devel@vger.kernel.org>
---
 net/netfilter/nf_conntrack_core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
index 85f643c1e227..4efaa3066c78 100644
--- a/net/netfilter/nf_conntrack_core.c
+++ b/net/netfilter/nf_conntrack_core.c
@@ -1044,7 +1044,7 @@ static void gc_worker(struct work_struct *work)
 		 * we will just continue with next hash slot.
 		 */
 		rcu_read_unlock();
-		cond_resched_rcu_qs();
+		cond_resched();
 	} while (++buckets < goal);
 
 	if (gc_work->exiting)
-- 
2.5.2


* [PATCH tip/core/rcu 04/10] mm: Eliminate cond_resched_rcu_qs() in favor of cond_resched()
  2017-12-01 19:21 [PATCH tip/core/rcu 0/10] Do not IPI offline CPUs, de-emphasize cond_resched_rcu_qs() Paul E. McKenney
                   ` (2 preceding siblings ...)
  2017-12-01 19:21 ` [PATCH tip/core/rcu 03/10] netfilter: Eliminate cond_resched_rcu_qs() in favor of cond_resched() Paul E. McKenney
@ 2017-12-01 19:21 ` Paul E. McKenney
  2017-12-01 19:21 ` [PATCH tip/core/rcu 05/10] workqueue: " Paul E. McKenney
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 32+ messages in thread
From: Paul E. McKenney @ 2017-12-01 19:21 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, jiangshanlai, dipankar, akpm, mathieu.desnoyers, josh,
	tglx, peterz, rostedt, dhowells, edumazet, fweisbec, oleg,
	Paul E. McKenney, Kirill A. Shutemov, Vlastimil Babka

Now that cond_resched() also provides RCU quiescent states when
needed, it can be used in place of cond_resched_rcu_qs().  This
commit therefore makes this change.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
---
 mm/mlock.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/mlock.c b/mm/mlock.c
index 30472d438794..f7f54fd2e13f 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -779,7 +779,7 @@ static int apply_mlockall_flags(int flags)
 
 		/* Ignore errors */
 		mlock_fixup(vma, &prev, vma->vm_start, vma->vm_end, newflags);
-		cond_resched_rcu_qs();
+		cond_resched();
 	}
 out:
 	return 0;
-- 
2.5.2


* [PATCH tip/core/rcu 05/10] workqueue: Eliminate cond_resched_rcu_qs() in favor of cond_resched()
  2017-12-01 19:21 [PATCH tip/core/rcu 0/10] Do not IPI offline CPUs, de-emphasize cond_resched_rcu_qs() Paul E. McKenney
                   ` (3 preceding siblings ...)
  2017-12-01 19:21 ` [PATCH tip/core/rcu 04/10] mm: " Paul E. McKenney
@ 2017-12-01 19:21 ` Paul E. McKenney
  2017-12-02  1:06   ` Lai Jiangshan
  2017-12-01 19:21 ` [PATCH tip/core/rcu 06/10] trace: " Paul E. McKenney
                   ` (4 subsequent siblings)
  9 siblings, 1 reply; 32+ messages in thread
From: Paul E. McKenney @ 2017-12-01 19:21 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, jiangshanlai, dipankar, akpm, mathieu.desnoyers, josh,
	tglx, peterz, rostedt, dhowells, edumazet, fweisbec, oleg,
	Paul E. McKenney, Tejun Heo

Now that cond_resched() also provides RCU quiescent states when
needed, it can be used in place of cond_resched_rcu_qs().  This
commit therefore makes this change.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Lai Jiangshan <jiangshanlai@gmail.com>
---
 kernel/workqueue.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 8fdb710bfdd7..aee7eaab05cb 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -2135,7 +2135,7 @@ __acquires(&pool->lock)
 	 * stop_machine. At the same time, report a quiescent RCU state so
 	 * the same condition doesn't freeze RCU.
 	 */
-	cond_resched_rcu_qs();
+	cond_resched();
 
 	spin_lock_irq(&pool->lock);
 
-- 
2.5.2


* [PATCH tip/core/rcu 06/10] trace: Eliminate cond_resched_rcu_qs() in favor of cond_resched()
  2017-12-01 19:21 [PATCH tip/core/rcu 0/10] Do not IPI offline CPUs, de-emphasize cond_resched_rcu_qs() Paul E. McKenney
                   ` (4 preceding siblings ...)
  2017-12-01 19:21 ` [PATCH tip/core/rcu 05/10] workqueue: " Paul E. McKenney
@ 2017-12-01 19:21 ` Paul E. McKenney
  2018-02-24 20:12   ` Steven Rostedt
  2017-12-01 19:21 ` [PATCH tip/core/rcu 07/10] softirq: " Paul E. McKenney
                   ` (3 subsequent siblings)
  9 siblings, 1 reply; 32+ messages in thread
From: Paul E. McKenney @ 2017-12-01 19:21 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, jiangshanlai, dipankar, akpm, mathieu.desnoyers, josh,
	tglx, peterz, rostedt, dhowells, edumazet, fweisbec, oleg,
	Paul E. McKenney, Ingo Molnar

Now that cond_resched() also provides RCU quiescent states when
needed, it can be used in place of cond_resched_rcu_qs().  This
commit therefore makes this change.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ingo Molnar <mingo@redhat.com>
---
 kernel/trace/trace_benchmark.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/trace/trace_benchmark.c b/kernel/trace/trace_benchmark.c
index 79f838a75077..22fee766081b 100644
--- a/kernel/trace/trace_benchmark.c
+++ b/kernel/trace/trace_benchmark.c
@@ -165,7 +165,7 @@ static int benchmark_event_kthread(void *arg)
 		 * this thread will never voluntarily schedule which would
 		 * block synchronize_rcu_tasks() indefinitely.
 		 */
-		cond_resched_rcu_qs();
+		cond_resched();
 	}
 
 	return 0;
-- 
2.5.2


* [PATCH tip/core/rcu 07/10] softirq: Eliminate cond_resched_rcu_qs() in favor of cond_resched()
  2017-12-01 19:21 [PATCH tip/core/rcu 0/10] Do not IPI offline CPUs, de-emphasize cond_resched_rcu_qs() Paul E. McKenney
                   ` (5 preceding siblings ...)
  2017-12-01 19:21 ` [PATCH tip/core/rcu 06/10] trace: " Paul E. McKenney
@ 2017-12-01 19:21 ` Paul E. McKenney
  2017-12-01 19:21 ` [PATCH tip/core/rcu 08/10] fs: " Paul E. McKenney
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 32+ messages in thread
From: Paul E. McKenney @ 2017-12-01 19:21 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, jiangshanlai, dipankar, akpm, mathieu.desnoyers, josh,
	tglx, peterz, rostedt, dhowells, edumazet, fweisbec, oleg,
	Paul E. McKenney, NeilBrown

Now that cond_resched() also provides RCU quiescent states when
needed, it can be used in place of cond_resched_rcu_qs().  This
commit therefore makes this change.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: NeilBrown <neilb@suse.com>
Cc: Ingo Molnar <mingo@kernel.org>
---
 kernel/softirq.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/softirq.c b/kernel/softirq.c
index 2f5e87f1bae2..24d243ef8e71 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -665,7 +665,7 @@ static void run_ksoftirqd(unsigned int cpu)
 		 */
 		__do_softirq();
 		local_irq_enable();
-		cond_resched_rcu_qs();
+		cond_resched();
 		return;
 	}
 	local_irq_enable();
-- 
2.5.2


* [PATCH tip/core/rcu 08/10] fs: Eliminate cond_resched_rcu_qs() in favor of cond_resched()
  2017-12-01 19:21 [PATCH tip/core/rcu 0/10] Do not IPI offline CPUs, de-emphasize cond_resched_rcu_qs() Paul E. McKenney
                   ` (6 preceding siblings ...)
  2017-12-01 19:21 ` [PATCH tip/core/rcu 07/10] softirq: " Paul E. McKenney
@ 2017-12-01 19:21 ` Paul E. McKenney
  2017-12-01 19:21 ` [PATCH tip/core/rcu 09/10] doc: " Paul E. McKenney
  2017-12-01 19:21 ` [PATCH tip/core/rcu 10/10] rcu: Account for rcu_all_qs() in cond_resched() Paul E. McKenney
  9 siblings, 0 replies; 32+ messages in thread
From: Paul E. McKenney @ 2017-12-01 19:21 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, jiangshanlai, dipankar, akpm, mathieu.desnoyers, josh,
	tglx, peterz, rostedt, dhowells, edumazet, fweisbec, oleg,
	Paul E. McKenney, Alexander Viro, linux-fsdevel

Now that cond_resched() also provides RCU quiescent states when
needed, it can be used in place of cond_resched_rcu_qs().  This
commit therefore makes this change.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: <linux-fsdevel@vger.kernel.org>
---
 fs/file.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/file.c b/fs/file.c
index 3b080834b870..fc0eeb812e2c 100644
--- a/fs/file.c
+++ b/fs/file.c
@@ -391,7 +391,7 @@ static struct fdtable *close_files(struct files_struct * files)
 				struct file * file = xchg(&fdt->fd[i], NULL);
 				if (file) {
 					filp_close(file, files);
-					cond_resched_rcu_qs();
+					cond_resched();
 				}
 			}
 			i++;
-- 
2.5.2


* [PATCH tip/core/rcu 09/10] doc: Eliminate cond_resched_rcu_qs() in favor of cond_resched()
  2017-12-01 19:21 [PATCH tip/core/rcu 0/10] Do not IPI offline CPUs, de-emphasize cond_resched_rcu_qs() Paul E. McKenney
                   ` (7 preceding siblings ...)
  2017-12-01 19:21 ` [PATCH tip/core/rcu 08/10] fs: " Paul E. McKenney
@ 2017-12-01 19:21 ` Paul E. McKenney
  2017-12-01 19:21 ` [PATCH tip/core/rcu 10/10] rcu: Account for rcu_all_qs() in cond_resched() Paul E. McKenney
  9 siblings, 0 replies; 32+ messages in thread
From: Paul E. McKenney @ 2017-12-01 19:21 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, jiangshanlai, dipankar, akpm, mathieu.desnoyers, josh,
	tglx, peterz, rostedt, dhowells, edumazet, fweisbec, oleg,
	Paul E. McKenney

Now that cond_resched() also provides RCU quiescent states when
needed, it can be used in place of cond_resched_rcu_qs().  This
commit therefore documents this change.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 Documentation/RCU/Design/Data-Structures/Data-Structures.html |  3 ++-
 Documentation/RCU/Design/Requirements/Requirements.html       |  4 ++--
 Documentation/RCU/stallwarn.txt                               | 10 ++++------
 3 files changed, 8 insertions(+), 9 deletions(-)

diff --git a/Documentation/RCU/Design/Data-Structures/Data-Structures.html b/Documentation/RCU/Design/Data-Structures/Data-Structures.html
index 38d6d800761f..412466e4967a 100644
--- a/Documentation/RCU/Design/Data-Structures/Data-Structures.html
+++ b/Documentation/RCU/Design/Data-Structures/Data-Structures.html
@@ -1097,7 +1097,8 @@ will cause the CPU to disregard the values of its counters on
 its next exit from idle.
 Finally, the <tt>rcu_qs_ctr_snap</tt> field is used to detect
 cases where a given operation has resulted in a quiescent state
-for all flavors of RCU, for example, <tt>cond_resched_rcu_qs()</tt>.
+for all flavors of RCU, for example, <tt>cond_resched()</tt>
+when RCU has indicated a need for quiescent states.
 
 <h5>RCU Callback Handling</h5>
 
diff --git a/Documentation/RCU/Design/Requirements/Requirements.html b/Documentation/RCU/Design/Requirements/Requirements.html
index 62e847bcdcdd..0372e6c54eef 100644
--- a/Documentation/RCU/Design/Requirements/Requirements.html
+++ b/Documentation/RCU/Design/Requirements/Requirements.html
@@ -2797,7 +2797,7 @@ RCU must avoid degrading real-time response for CPU-bound threads, whether
 executing in usermode (which is one use case for
 <tt>CONFIG_NO_HZ_FULL=y</tt>) or in the kernel.
 That said, CPU-bound loops in the kernel must execute
-<tt>cond_resched_rcu_qs()</tt> at least once per few tens of milliseconds
+<tt>cond_resched()</tt> at least once per few tens of milliseconds
 in order to avoid receiving an IPI from RCU.
 
 <p>
@@ -3128,7 +3128,7 @@ The solution, in the form of
 is to have implicit
 read-side critical sections that are delimited by voluntary context
 switches, that is, calls to <tt>schedule()</tt>,
-<tt>cond_resched_rcu_qs()</tt>, and
+<tt>cond_resched()</tt>, and
 <tt>synchronize_rcu_tasks()</tt>.
 In addition, transitions to and from userspace execution also delimit
 tasks-RCU read-side critical sections.
diff --git a/Documentation/RCU/stallwarn.txt b/Documentation/RCU/stallwarn.txt
index a08f928c8557..4259f95c3261 100644
--- a/Documentation/RCU/stallwarn.txt
+++ b/Documentation/RCU/stallwarn.txt
@@ -23,12 +23,10 @@ o	A CPU looping with preemption disabled.  This condition can
 o	A CPU looping with bottom halves disabled.  This condition can
 	result in RCU-sched and RCU-bh stalls.
 
-o	For !CONFIG_PREEMPT kernels, a CPU looping anywhere in the
-	kernel without invoking schedule().  Note that cond_resched()
-	does not necessarily prevent RCU CPU stall warnings.  Therefore,
-	if the looping in the kernel is really expected and desirable
-	behavior, you might need to replace some of the cond_resched()
-	calls with calls to cond_resched_rcu_qs().
+o	For !CONFIG_PREEMPT kernels, a CPU looping anywhere in the kernel
+	without invoking schedule().  If the looping in the kernel is
+	really expected and desirable behavior, you might need to add
+	some calls to cond_resched().
 
 o	Booting Linux using a console connection that is too slow to
 	keep up with the boot-time console-message rate.  For example,
-- 
2.5.2


* [PATCH tip/core/rcu 10/10] rcu: Account for rcu_all_qs() in cond_resched()
  2017-12-01 19:21 [PATCH tip/core/rcu 0/10] Do not IPI offline CPUs, de-emphasize cond_resched_rcu_qs() Paul E. McKenney
                   ` (8 preceding siblings ...)
  2017-12-01 19:21 ` [PATCH tip/core/rcu 09/10] doc: " Paul E. McKenney
@ 2017-12-01 19:21 ` Paul E. McKenney
  2017-12-02  8:56   ` Peter Zijlstra
  9 siblings, 1 reply; 32+ messages in thread
From: Paul E. McKenney @ 2017-12-01 19:21 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, jiangshanlai, dipankar, akpm, mathieu.desnoyers, josh,
	tglx, peterz, rostedt, dhowells, edumazet, fweisbec, oleg,
	Paul E. McKenney

If cond_resched() returns false, then it has already invoked
rcu_all_qs().  This is also invoked (now redundantly) by
rcu_note_voluntary_context_switch().  This commit therefore changes
cond_resched_rcu_qs() to invoke rcu_note_voluntary_context_switch_lite()
instead of rcu_note_voluntary_context_switch() to avoid the redundant
invocation of rcu_all_qs().

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 include/linux/rcupdate.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index a6ddc42f87a5..7bd8b5a6db10 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -197,7 +197,7 @@ static inline void exit_tasks_rcu_finish(void) { }
 #define cond_resched_rcu_qs() \
 do { \
 	if (!cond_resched()) \
-		rcu_note_voluntary_context_switch(current); \
+		rcu_note_voluntary_context_switch_lite(current); \
 } while (0)
 
 /*
-- 
2.5.2


* Re: [PATCH tip/core/rcu 05/10] workqueue: Eliminate cond_resched_rcu_qs() in favor of cond_resched()
  2017-12-01 19:21 ` [PATCH tip/core/rcu 05/10] workqueue: " Paul E. McKenney
@ 2017-12-02  1:06   ` Lai Jiangshan
  2017-12-04 18:28     ` Paul E. McKenney
  0 siblings, 1 reply; 32+ messages in thread
From: Lai Jiangshan @ 2017-12-02  1:06 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: LKML, Ingo Molnar, dipankar, akpm, Mathieu Desnoyers,
	Josh Triplett, Thomas Gleixner, Peter Zijlstra, Steven Rostedt,
	David Howells, Eric Dumazet, Frédéric Weisbecker, oleg,
	Tejun Heo

On Sat, Dec 2, 2017 at 3:21 AM, Paul E. McKenney
<paulmck@linux.vnet.ibm.com> wrote:
> Now that cond_resched() also provides RCU quiescent states when
> needed, it can be used in place of cond_resched_rcu_qs().  This
> commit therefore makes this change.
>
> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> Cc: Tejun Heo <tj@kernel.org>
> Cc: Lai Jiangshan <jiangshanlai@gmail.com>

Reviewed-by: Lai Jiangshan <jiangshanlai@gmail.com>

> ---
>  kernel/workqueue.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> index 8fdb710bfdd7..aee7eaab05cb 100644
> --- a/kernel/workqueue.c
> +++ b/kernel/workqueue.c
> @@ -2135,7 +2135,7 @@ __acquires(&pool->lock)
>          * stop_machine. At the same time, report a quiescent RCU state so
>          * the same condition doesn't freeze RCU.
>          */
> -       cond_resched_rcu_qs();
> +       cond_resched();
>
>         spin_lock_irq(&pool->lock);
>
> --
> 2.5.2
>


* Re: [PATCH tip/core/rcu 10/10] rcu: Account for rcu_all_qs() in cond_resched()
  2017-12-01 19:21 ` [PATCH tip/core/rcu 10/10] rcu: Account for rcu_all_qs() in cond_resched() Paul E. McKenney
@ 2017-12-02  8:56   ` Peter Zijlstra
  2017-12-02 12:22     ` Paul E. McKenney
  0 siblings, 1 reply; 32+ messages in thread
From: Peter Zijlstra @ 2017-12-02  8:56 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: linux-kernel, mingo, jiangshanlai, dipankar, akpm,
	mathieu.desnoyers, josh, tglx, rostedt, dhowells, edumazet,
	fweisbec, oleg

On Fri, Dec 01, 2017 at 11:21:44AM -0800, Paul E. McKenney wrote:
> If cond_resched() returns false, then it has already invoked
> rcu_all_qs().  This is also invoked (now redundantly) by
> rcu_note_voluntary_context_switch().  This commit therefore changes
> cond_resched_rcu_qs() to invoke rcu_note_voluntary_context_switch_lite()
> instead of rcu_note_voluntary_context_switch() to avoid the redundant
> invocation of rcu_all_qs().
> 
> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> ---
>  include/linux/rcupdate.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> index a6ddc42f87a5..7bd8b5a6db10 100644
> --- a/include/linux/rcupdate.h
> +++ b/include/linux/rcupdate.h
> @@ -197,7 +197,7 @@ static inline void exit_tasks_rcu_finish(void) { }
>  #define cond_resched_rcu_qs() \
>  do { \
>  	if (!cond_resched()) \
> -		rcu_note_voluntary_context_switch(current); \
> +		rcu_note_voluntary_context_switch_lite(current); \
>  } while (0)
>  

Maybe I'm confused, but why are we keeping cond_resched_rcu_qs() around
at all?


* Re: [PATCH tip/core/rcu 10/10] rcu: Account for rcu_all_qs() in cond_resched()
  2017-12-02  8:56   ` Peter Zijlstra
@ 2017-12-02 12:22     ` Paul E. McKenney
  2017-12-02 13:55       ` Peter Zijlstra
  2018-02-24 20:18       ` Steven Rostedt
  0 siblings, 2 replies; 32+ messages in thread
From: Paul E. McKenney @ 2017-12-02 12:22 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linux-kernel, mingo, jiangshanlai, dipankar, akpm,
	mathieu.desnoyers, josh, tglx, rostedt, dhowells, edumazet,
	fweisbec, oleg

On Sat, Dec 02, 2017 at 09:56:26AM +0100, Peter Zijlstra wrote:
> On Fri, Dec 01, 2017 at 11:21:44AM -0800, Paul E. McKenney wrote:
> > If cond_resched() returns false, then it has already invoked
> > rcu_all_qs().  This is also invoked (now redundantly) by
> > rcu_note_voluntary_context_switch().  This commit therefore changes
> > cond_resched_rcu_qs() to invoke rcu_note_voluntary_context_switch_lite()
> > instead of rcu_note_voluntary_context_switch() to avoid the redundant
> > invocation of rcu_all_qs().
> > 
> > Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> > ---
> >  include/linux/rcupdate.h | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> > index a6ddc42f87a5..7bd8b5a6db10 100644
> > --- a/include/linux/rcupdate.h
> > +++ b/include/linux/rcupdate.h
> > @@ -197,7 +197,7 @@ static inline void exit_tasks_rcu_finish(void) { }
> >  #define cond_resched_rcu_qs() \
> >  do { \
> >  	if (!cond_resched()) \
> > -		rcu_note_voluntary_context_switch(current); \
> > +		rcu_note_voluntary_context_switch_lite(current); \
> >  } while (0)
> >  
> 
> Maybe I'm confused, but why are we keeping cond_resched_rcu_qs() around
> at all?

Because there are a few key places within RCU and rcutorture that need it.
Without it, there are scenarios where the new cond_resched() never gets
activated, and thus doesn't take effect.

The key point is that with this series in place, it should not be necessary
to use cond_resched_rcu_qs() outside of kernel/rcu and kernel/torture.c.
Which is a valuable step forward, right?

							Thanx, Paul

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH tip/core/rcu 10/10] rcu: Account for rcu_all_qs() in cond_resched()
  2017-12-02 12:22     ` Paul E. McKenney
@ 2017-12-02 13:55       ` Peter Zijlstra
  2018-02-24 20:18       ` Steven Rostedt
  1 sibling, 0 replies; 32+ messages in thread
From: Peter Zijlstra @ 2017-12-02 13:55 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: linux-kernel, mingo, jiangshanlai, dipankar, akpm,
	mathieu.desnoyers, josh, tglx, rostedt, dhowells, edumazet,
	fweisbec, oleg

On Sat, Dec 02, 2017 at 04:22:20AM -0800, Paul E. McKenney wrote:
> Because there are a few key places within RCU and rcutorture that need it.
> Without it, there are scenarios where the new cond_resched() never gets
> activated, and thus doesn't take effect.

Ah, I missed that interaction.

> The key point is that with this series in place, it should not be necessary
> to use cond_resched_rcu_qs() outside of kernel/rcu and kernel/torture.c.
> Which is a valuable step forward, right?

Quite.

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH tip/core/rcu 05/10] workqueue: Eliminate cond_resched_rcu_qs() in favor of cond_resched()
  2017-12-02  1:06   ` Lai Jiangshan
@ 2017-12-04 18:28     ` Paul E. McKenney
  0 siblings, 0 replies; 32+ messages in thread
From: Paul E. McKenney @ 2017-12-04 18:28 UTC (permalink / raw)
  To: Lai Jiangshan
  Cc: LKML, Ingo Molnar, dipankar, akpm, Mathieu Desnoyers,
	Josh Triplett, Thomas Gleixner, Peter Zijlstra, Steven Rostedt,
	David Howells, Eric Dumazet, Frédéric Weisbecker, oleg,
	Tejun Heo

On Sat, Dec 02, 2017 at 09:06:29AM +0800, Lai Jiangshan wrote:
> On Sat, Dec 2, 2017 at 3:21 AM, Paul E. McKenney
> <paulmck@linux.vnet.ibm.com> wrote:
> > Now that cond_resched() also provides RCU quiescent states when
> > needed, it can be used in place of cond_resched_rcu_qs().  This
> > commit therefore makes this change.
> >
> > Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> > Cc: Tejun Heo <tj@kernel.org>
> > Cc: Lai Jiangshan <jiangshanlai@gmail.com>
> 
> Reviewed-by: Lai Jiangshan <jiangshanlai@gmail.com>

Applied, thank you for the review!

							Thanx, Paul

> > ---
> >  kernel/workqueue.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> > index 8fdb710bfdd7..aee7eaab05cb 100644
> > --- a/kernel/workqueue.c
> > +++ b/kernel/workqueue.c
> > @@ -2135,7 +2135,7 @@ __acquires(&pool->lock)
> >          * stop_machine. At the same time, report a quiescent RCU state so
> >          * the same condition doesn't freeze RCU.
> >          */
> > -       cond_resched_rcu_qs();
> > +       cond_resched();
> >
> >         spin_lock_irq(&pool->lock);
> >
> > --
> > 2.5.2
> >
> 

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH tip/core/rcu 06/10] trace: Eliminate cond_resched_rcu_qs() in favor of cond_resched()
  2017-12-01 19:21 ` [PATCH tip/core/rcu 06/10] trace: " Paul E. McKenney
@ 2018-02-24 20:12   ` Steven Rostedt
  2018-02-25 17:49     ` Paul E. McKenney
  0 siblings, 1 reply; 32+ messages in thread
From: Steven Rostedt @ 2018-02-24 20:12 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: linux-kernel, mingo, jiangshanlai, dipankar, akpm,
	mathieu.desnoyers, josh, tglx, peterz, dhowells, edumazet,
	fweisbec, oleg, Ingo Molnar

On Fri,  1 Dec 2017 11:21:40 -0800
"Paul E. McKenney" <paulmck@linux.vnet.ibm.com> wrote:

> Now that cond_resched() also provides RCU quiescent states when
> needed, it can be used in place of cond_resched_rcu_qs().  This
> commit therefore makes this change.

Are you sure this is true?

I just bisected a lock up on my machine down to this commit.

With CONFIG_TRACEPOINT_BENCHMARK=y

# cd linux.git/tools/testing/selftests/ftrace/
# ./ftracetest test.d/ftrace/func_traceonoff_triggers.tc

Locks up with a backtrace of:

[  614.186509] INFO: rcu_tasks detected stalls on tasks:
[  614.192253] 000000005834f2a5: .. nvcsw: 2/2 holdout: 1 idle_cpu: -1/1
[  614.199385] event_benchmark R  running task    15264  1507      2 0x90000000
[  614.207159] Call Trace:
[  614.210335]  ? trace_hardirqs_on_thunk+0x1a/0x1c
[  614.215653]  ? retint_kernel+0x2d/0x2d
[  614.220101]  ? ring_buffer_set_clock+0x10/0x10
[  614.225232]  ? benchmark_event_kthread+0x35/0x2d0
[  614.230624]  ? kthread+0x129/0x140
[  614.234708]  ? trace_benchmark_reg+0x80/0x80
[  614.239646]  ? kthread_create_worker_on_cpu+0x50/0x50
[  614.245361]  ? ret_from_fork+0x3a/0x50

The comment in the benchmark code that this commit affects is:

		 *
		 * Note the _rcu_qs() version of cond_resched() will
		 * notify synchronize_rcu_tasks() that this thread has
		 * passed a quiescent state for rcu_tasks. Otherwise
		 * this thread will never voluntarily schedule which would
		 * block synchronize_rcu_tasks() indefinitely.
		 */
		cond_resched();

Seems to me that cond_resched() isn't the same as cond_resched_rcu_qs().

-- Steve


> 
> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> Cc: Steven Rostedt <rostedt@goodmis.org>
> Cc: Ingo Molnar <mingo@redhat.com>
> ---
>  kernel/trace/trace_benchmark.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/kernel/trace/trace_benchmark.c b/kernel/trace/trace_benchmark.c
> index 79f838a75077..22fee766081b 100644
> --- a/kernel/trace/trace_benchmark.c
> +++ b/kernel/trace/trace_benchmark.c
> @@ -165,7 +165,7 @@ static int benchmark_event_kthread(void *arg)
>  		 * this thread will never voluntarily schedule which would
>  		 * block synchronize_rcu_tasks() indefinitely.
>  		 */
> -		cond_resched_rcu_qs();
> +		cond_resched();
>  	}
>  
>  	return 0;
> -

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH tip/core/rcu 10/10] rcu: Account for rcu_all_qs() in cond_resched()
  2017-12-02 12:22     ` Paul E. McKenney
  2017-12-02 13:55       ` Peter Zijlstra
@ 2018-02-24 20:18       ` Steven Rostedt
  2018-02-25 17:52         ` Paul E. McKenney
  1 sibling, 1 reply; 32+ messages in thread
From: Steven Rostedt @ 2018-02-24 20:18 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Peter Zijlstra, linux-kernel, mingo, jiangshanlai, dipankar,
	akpm, mathieu.desnoyers, josh, tglx, dhowells, edumazet,
	fweisbec, oleg

On Sat, 2 Dec 2017 04:22:20 -0800
"Paul E. McKenney" <paulmck@linux.vnet.ibm.com> wrote:
 
> Because there are a few key places within RCU and rcutorture that need it.
> Without it, there are scenarios where the new cond_resched() never gets
> activated, and thus doesn't take effect.
> 
> The key point is that with this series in place, it should not be necessary
> to use cond_resched_rcu_qs() outside of kernel/rcu and kernel/torture.c.
> Which is a valuable step forward, right?

I'm guessing the tracepoint benchmark is another such situation. Its only
purpose is to benchmark tracepoints, and it should not be enabled on any
production system. Thus, I think reverting patch 6 (the one removing
cond_resched_rcu_qs() from the benchmark code) is the proper solution.

-- Steve

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH tip/core/rcu 06/10] trace: Eliminate cond_resched_rcu_qs() in favor of cond_resched()
  2018-02-24 20:12   ` Steven Rostedt
@ 2018-02-25 17:49     ` Paul E. McKenney
  2018-02-25 18:17       ` Paul E. McKenney
  0 siblings, 1 reply; 32+ messages in thread
From: Paul E. McKenney @ 2018-02-25 17:49 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: linux-kernel, mingo, jiangshanlai, dipankar, akpm,
	mathieu.desnoyers, josh, tglx, peterz, dhowells, edumazet,
	fweisbec, oleg, Ingo Molnar

On Sat, Feb 24, 2018 at 03:12:40PM -0500, Steven Rostedt wrote:
> On Fri,  1 Dec 2017 11:21:40 -0800
> "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> wrote:
> 
> > Now that cond_resched() also provides RCU quiescent states when
> > needed, it can be used in place of cond_resched_rcu_qs().  This
> > commit therefore makes this change.
> 
> Are you sure this is true?

Up to a point.  If a given CPU has been blocking an RCU grace period for
long enough, that CPU's rcu_dynticks.rcu_need_heavy_qs will be set, and
then the next cond_resched() will be treated as a cond_resched_rcu_qs().

However, to your point, if there is no grace period in progress or if 
the current grace period is not waiting on the CPU in question or if
the grace-period kthread is starved of CPU, then cond_resched() has no
effect on RCU.  Unless of course it results in a context switch.

> I just bisected a lock up on my machine down to this commit.
> 
> With CONFIG_TRACEPOINT_BENCHMARK=y
> 
> # cd linux.git/tools/testing/selftests/ftrace/
> # ./ftracetest test.d/ftrace/func_traceonoff_triggers.tc
> 
> Locks up with a backtrace of:
> 
> [  614.186509] INFO: rcu_tasks detected stalls on tasks:

Ah, but this is RCU-tasks!  Which never sets rcu_dynticks.rcu_need_heavy_qs,
thus needing a real context switch.

Hey, when you said that synchronize_rcu_tasks() could take a very long
time, I took you at your word!  ;-)

Does the following (untested, probably does not even build) patch make
cond_resched() take a more peremptory approach to RCU-tasks?

							Thanx, Paul

------------------------------------------------------------------------

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 0c337f5ba3c4..5155fe5e7702 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -1088,12 +1088,16 @@ EXPORT_SYMBOL_GPL(rcu_is_watching);
 void rcu_request_urgent_qs_task(struct task_struct *t)
 {
 	int cpu;
+	struct rcu_dynticks *rdtp;
 
 	barrier();
 	cpu = task_cpu(t);
 	if (!task_curr(t))
 		return; /* This task is not running on that CPU. */
-	smp_store_release(per_cpu_ptr(&rcu_dynticks.rcu_urgent_qs, cpu), true);
+	rdtp = per_cpu_ptr(&rcu_dynticks, cpu);
+	WRITE_ONCE(rdtp->rcu_need_heavy_qs, true);
+	/* Store rcu_need_heavy_qs before rcu_urgent_qs. */
+	smp_store_release(&rdtp->rcu_urgent_qs, true);
 }
 
 #if defined(CONFIG_PROVE_RCU) && defined(CONFIG_HOTPLUG_CPU)

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH tip/core/rcu 10/10] rcu: Account for rcu_all_qs() in cond_resched()
  2018-02-24 20:18       ` Steven Rostedt
@ 2018-02-25 17:52         ` Paul E. McKenney
  0 siblings, 0 replies; 32+ messages in thread
From: Paul E. McKenney @ 2018-02-25 17:52 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Peter Zijlstra, linux-kernel, mingo, jiangshanlai, dipankar,
	akpm, mathieu.desnoyers, josh, tglx, dhowells, edumazet,
	fweisbec, oleg

On Sat, Feb 24, 2018 at 03:18:16PM -0500, Steven Rostedt wrote:
> On Sat, 2 Dec 2017 04:22:20 -0800
> "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> wrote:
> 
> > Because there are a few key places within RCU and rcutorture that need it.
> > Without it, there are scenarios where the new cond_resched() never gets
> > activated, and thus doesn't take effect.
> > 
> > The key point is that with this series in place, it should not be necessary
> > to use cond_resched_rcu_qs() outside of kernel/rcu and kernel/torture.c.
> > Which is a valuable step forward, right?
> 
> I'm guessing the tracepoint benchmark is another such situation. Its only
> purpose is to benchmark tracepoints, and it should not be enabled on any
> production system. Thus, I think reverting patch 6 (the one removing
> cond_resched_rcu_qs() from the benchmark code) is the proper solution.

I would rather make the existing cond_resched() machinery work for
RCU-tasks, but please let me know if my proposed fix isn't doing what
you need.

And in any case, please accept my apologies for the hassle!

							Thanx, Paul

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH tip/core/rcu 06/10] trace: Eliminate cond_resched_rcu_qs() in favor of cond_resched()
  2018-02-25 17:49     ` Paul E. McKenney
@ 2018-02-25 18:17       ` Paul E. McKenney
  2018-02-25 18:39         ` Paul E. McKenney
  2018-02-26  4:57         ` Steven Rostedt
  0 siblings, 2 replies; 32+ messages in thread
From: Paul E. McKenney @ 2018-02-25 18:17 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: linux-kernel, mingo, jiangshanlai, dipankar, akpm,
	mathieu.desnoyers, josh, tglx, peterz, dhowells, edumazet,
	fweisbec, oleg, Ingo Molnar

On Sun, Feb 25, 2018 at 09:49:27AM -0800, Paul E. McKenney wrote:
> On Sat, Feb 24, 2018 at 03:12:40PM -0500, Steven Rostedt wrote:
> > On Fri,  1 Dec 2017 11:21:40 -0800
> > "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> wrote:
> > 
> > > Now that cond_resched() also provides RCU quiescent states when
> > > needed, it can be used in place of cond_resched_rcu_qs().  This
> > > commit therefore makes this change.
> > 
> > Are you sure this is true?
> 
> Up to a point.  If a given CPU has been blocking an RCU grace period for
> long enough, that CPU's rcu_dynticks.rcu_need_heavy_qs will be set, and
> then the next cond_resched() will be treated as a cond_resched_rcu_qs().
> 
> However, to your point, if there is no grace period in progress or if 
> the current grace period is not waiting on the CPU in question or if
> the grace-period kthread is starved of CPU, then cond_resched() has no
> effect on RCU.  Unless of course it results in a context switch.
> 
> > I just bisected a lock up on my machine down to this commit.
> > 
> > With CONFIG_TRACEPOINT_BENCHMARK=y
> > 
> > # cd linux.git/tools/testing/selftests/ftrace/
> > # ./ftracetest test.d/ftrace/func_traceonoff_triggers.tc
> > 
> > Locks up with a backtrace of:
> > 
> > [  614.186509] INFO: rcu_tasks detected stalls on tasks:
> 
> Ah, but this is RCU-tasks!  Which never sets rcu_dynticks.rcu_need_heavy_qs,
> thus needing a real context switch.
> 
> Hey, when you said that synchronize_rcu_tasks() could take a very long
> time, I took you at your word!  ;-)
> 
> Does the following (untested, probably does not even build) patch make
> cond_resched() take a more peremptory approach to RCU-tasks?

And probably not.  You are probably running CONFIG_PREEMPT=y (otherwise
RCU-tasks is trivial), so cond_resched() is a complete no-op:

static inline int _cond_resched(void) { return 0; }

I could make this call rcu_all_qs(), but I would not expect Peter Zijlstra
to be at all happy with that sort of change.

And the people who asked for the cond_resched() work probably aren't
going to be happy with the resumed proliferation of cond_resched_rcu_qs().

Hmmm...  Grasping at straws...  Could we make cond_resched() be something
like a tracepoint and instrument them with cond_resched_rcu_qs() if the
current RCU-tasks grace period ran for more than (say) a minute of its
ten-minute stall-warning span?

							Thanx, Paul

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH tip/core/rcu 06/10] trace: Eliminate cond_resched_rcu_qs() in favor of cond_resched()
  2018-02-25 18:17       ` Paul E. McKenney
@ 2018-02-25 18:39         ` Paul E. McKenney
  2018-02-27  2:29           ` Steven Rostedt
  2018-02-26  4:57         ` Steven Rostedt
  1 sibling, 1 reply; 32+ messages in thread
From: Paul E. McKenney @ 2018-02-25 18:39 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: linux-kernel, mingo, jiangshanlai, dipankar, akpm,
	mathieu.desnoyers, josh, tglx, peterz, dhowells, edumazet,
	fweisbec, oleg, Ingo Molnar

On Sun, Feb 25, 2018 at 10:17:30AM -0800, Paul E. McKenney wrote:
> On Sun, Feb 25, 2018 at 09:49:27AM -0800, Paul E. McKenney wrote:
> > On Sat, Feb 24, 2018 at 03:12:40PM -0500, Steven Rostedt wrote:
> > > On Fri,  1 Dec 2017 11:21:40 -0800
> > > "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> wrote:
> > > 
> > > > Now that cond_resched() also provides RCU quiescent states when
> > > > needed, it can be used in place of cond_resched_rcu_qs().  This
> > > > commit therefore makes this change.
> > > 
> > > Are you sure this is true?
> > 
> > Up to a point.  If a given CPU has been blocking an RCU grace period for
> > long enough, that CPU's rcu_dynticks.rcu_need_heavy_qs will be set, and
> > then the next cond_resched() will be treated as a cond_resched_rcu_qs().
> > 
> > However, to your point, if there is no grace period in progress or if 
> > the current grace period is not waiting on the CPU in question or if
> > the grace-period kthread is starved of CPU, then cond_resched() has no
> > effect on RCU.  Unless of course it results in a context switch.
> > 
> > > I just bisected a lock up on my machine down to this commit.
> > > 
> > > With CONFIG_TRACEPOINT_BENCHMARK=y
> > > 
> > > # cd linux.git/tools/testing/selftests/ftrace/
> > > # ./ftracetest test.d/ftrace/func_traceonoff_triggers.tc
> > > 
> > > Locks up with a backtrace of:
> > > 
> > > [  614.186509] INFO: rcu_tasks detected stalls on tasks:
> > 
> > Ah, but this is RCU-tasks!  Which never sets rcu_dynticks.rcu_need_heavy_qs,
> > thus needing a real context switch.
> > 
> > Hey, when you said that synchronize_rcu_tasks() could take a very long
> > time, I took you at your word!  ;-)
> > 
> > Does the following (untested, probably does not even build) patch make
> > cond_resched() take a more peremptory approach to RCU-tasks?
> 
> And probably not.  You are probably running CONFIG_PREEMPT=y (otherwise
> RCU-tasks is trivial), so cond_resched() is a complete no-op:
> 
> static inline int _cond_resched(void) { return 0; }
> 
> I could make this call rcu_all_qs(), but I would not expect Peter Zijlstra
> to be at all happy with that sort of change.
> 
> And the people who asked for the cond_resched() work probably aren't
> going to be happy with the resumed proliferation of cond_resched_rcu_qs().
> 
> Hmmm...  Grasping at straws...  Could we make cond_resched() be something
> like a tracepoint and instrument them with cond_resched_rcu_qs() if the
> current RCU-tasks grace period ran for more than (say) a minute of its
> ten-minute stall-warning span?

On the other hand, you noted in your other email that the tracepoint
benchmark should not be enabled on production systems.  So how about
the following (again untested) patch?  The "defined(CONFIG_TASKS_RCU)"
might need to change, especially if RCU-tasks is used in production
kernels, but perhaps a starting point.

							Thanx, Paul

------------------------------------------------------------------------

diff --git a/include/linux/sched.h b/include/linux/sched.h
index b161ef8a902e..316c29c5e506 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1589,6 +1589,12 @@ static inline int test_tsk_need_resched(struct task_struct *tsk)
  */
 #ifndef CONFIG_PREEMPT
 extern int _cond_resched(void);
+#elif defined(CONFIG_TASKS_RCU)
+static inline int _cond_resched(void)
+{
+	rcu_note_voluntary_context_switch(current);
+	return 0;
+}
 #else
 static inline int _cond_resched(void) { return 0; }
 #endif

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH tip/core/rcu 06/10] trace: Eliminate cond_resched_rcu_qs() in favor of cond_resched()
  2018-02-25 18:17       ` Paul E. McKenney
  2018-02-25 18:39         ` Paul E. McKenney
@ 2018-02-26  4:57         ` Steven Rostedt
  2018-02-26  5:47           ` Paul E. McKenney
  1 sibling, 1 reply; 32+ messages in thread
From: Steven Rostedt @ 2018-02-26  4:57 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: linux-kernel, mingo, jiangshanlai, dipankar, akpm,
	mathieu.desnoyers, josh, tglx, peterz, dhowells, edumazet,
	fweisbec, oleg, Ingo Molnar

On Sun, 25 Feb 2018 10:17:30 -0800
"Paul E. McKenney" <paulmck@linux.vnet.ibm.com> wrote:


> And probably not.  You are probably running CONFIG_PREEMPT=y (otherwise
> RCU-tasks is trivial), so cond_resched() is a complete no-op:
> 
> static inline int _cond_resched(void) { return 0; }
> 
> I could make this call rcu_all_qs(), but I would not expect Peter Zijlstra
> to be at all happy with that sort of change.
> 
> And the people who asked for the cond_resched() work probably aren't
> going to be happy with the resumed proliferation of cond_resched_rcu_qs().
> 
> Hmmm...  Grasping at straws...  Could we make cond_resched() be something
> like a tracepoint and instrument them with cond_resched_rcu_qs() if the
> > current RCU-tasks grace period ran for more than (say) a minute of its
> ten-minute stall-warning span?
> 

Instead of monkeying with cond_resched(), since this is "special" code,
why don't I just have that code call it directly?

	cond_resched();
	rcu_note_voluntary_context_switch(current);

-- Steve

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH tip/core/rcu 06/10] trace: Eliminate cond_resched_rcu_qs() in favor of cond_resched()
  2018-02-26  4:57         ` Steven Rostedt
@ 2018-02-26  5:47           ` Paul E. McKenney
  0 siblings, 0 replies; 32+ messages in thread
From: Paul E. McKenney @ 2018-02-26  5:47 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: linux-kernel, mingo, jiangshanlai, dipankar, akpm,
	mathieu.desnoyers, josh, tglx, peterz, dhowells, edumazet,
	fweisbec, oleg, Ingo Molnar

On Sun, Feb 25, 2018 at 11:57:48PM -0500, Steven Rostedt wrote:
> On Sun, 25 Feb 2018 10:17:30 -0800
> "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> wrote:
> 
> 
> > And probably not.  You are probably running CONFIG_PREEMPT=y (otherwise
> > RCU-tasks is trivial), so cond_resched() is a complete no-op:
> > 
> > static inline int _cond_resched(void) { return 0; }
> > 
> > I could make this call rcu_all_qs(), but I would not expect Peter Zijlstra
> > to be at all happy with that sort of change.
> > 
> > And the people who asked for the cond_resched() work probably aren't
> > going to be happy with the resumed proliferation of cond_resched_rcu_qs().
> > 
> > Hmmm...  Grasping at straws...  Could we make cond_resched() be something
> > like a tracepoint and instrument them with cond_resched_rcu_qs() if the
> > current RCU-tasks grace period ran for more than (say) a minute of its
> > ten-minute stall-warning span?
> > 
> 
> Instead of monkeying with cond_resched(), since this is "special" code,
> why don't I just have that code call it directly?
> 
> 	cond_resched();
> 	rcu_note_voluntary_context_switch(current);

The advantage of the last patch that I sent is that the special call
is in one place.  (This is the one that adds the "special" definition
for _cond_resched().)

							Thanx, Paul

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH tip/core/rcu 06/10] trace: Eliminate cond_resched_rcu_qs() in favor of cond_resched()
  2018-02-25 18:39         ` Paul E. McKenney
@ 2018-02-27  2:29           ` Steven Rostedt
  2018-02-27 15:36             ` Paul E. McKenney
  0 siblings, 1 reply; 32+ messages in thread
From: Steven Rostedt @ 2018-02-27  2:29 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: linux-kernel, mingo, jiangshanlai, dipankar, akpm,
	mathieu.desnoyers, josh, tglx, peterz, dhowells, edumazet,
	fweisbec, oleg, Ingo Molnar

On Sun, 25 Feb 2018 10:39:44 -0800
"Paul E. McKenney" <paulmck@linux.vnet.ibm.com> wrote:

> > Hmmm...  Grasping at straws...  Could we make cond_resched() be something
> > like a tracepoint and instrument them with cond_resched_rcu_qs() if the
> > current RCU-tasks grace period ran for more than (say) a minute of its
> > ten-minute stall-warning span?  
> 
> On the other hand, you noted in your other email that the tracepoint
> benchmark should not be enabled on production systems.  So how about
> the following (again untested) patch?  The "defined(CONFIG_TASKS_RCU)"
> might need to change, especially if RCU-tasks is used in production
> kernels, but perhaps a starting point.

RCU-tasks is used in production systems if PREEMPT is enabled (it
allows for optimizations with ftrace, perf, and kprobes).

But the tracepoint is not used.

> 
> 							Thanx, Paul
> 
> ------------------------------------------------------------------------
> 
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index b161ef8a902e..316c29c5e506 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -1589,6 +1589,12 @@ static inline int test_tsk_need_resched(struct task_struct *tsk)
>   */
>  #ifndef CONFIG_PREEMPT
>  extern int _cond_resched(void);
> +#elif defined(CONFIG_TASKS_RCU)
> +static inline int _cond_resched(void)
> +{
> +	rcu_note_voluntary_context_switch(current);
> +	return 0;
> +}
>  #else
>  static inline int _cond_resched(void) { return 0; }
>  #endif


This does work, but so does the patch below, without turning cond_resched()
into something other than a no-op under CONFIG_PREEMPT.

-- Steve

diff --git a/kernel/trace/trace_benchmark.c b/kernel/trace/trace_benchmark.c
index 22fee766081b..82d83bb4874b 100644
--- a/kernel/trace/trace_benchmark.c
+++ b/kernel/trace/trace_benchmark.c
@@ -166,6 +166,7 @@ static int benchmark_event_kthread(void *arg)
 		 * block synchronize_rcu_tasks() indefinitely.
 		 */
 		cond_resched();
+		rcu_note_voluntary_context_switch(current);
 	}
 
 	return 0;

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH tip/core/rcu 06/10] trace: Eliminate cond_resched_rcu_qs() in favor of cond_resched()
  2018-02-27  2:29           ` Steven Rostedt
@ 2018-02-27 15:36             ` Paul E. McKenney
  2018-02-28 23:12               ` Steven Rostedt
  0 siblings, 1 reply; 32+ messages in thread
From: Paul E. McKenney @ 2018-02-27 15:36 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: linux-kernel, mingo, jiangshanlai, dipankar, akpm,
	mathieu.desnoyers, josh, tglx, peterz, dhowells, edumazet,
	fweisbec, oleg, Ingo Molnar

On Mon, Feb 26, 2018 at 09:29:20PM -0500, Steven Rostedt wrote:
> On Sun, 25 Feb 2018 10:39:44 -0800
> "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> wrote:
> 
> > > Hmmm...  Grasping at straws...  Could we make cond_resched() be something
> > > like a tracepoint and instrument them with cond_resched_rcu_qs() if the
> > > current RCU-tasks grace period ran for more than (say) a minute of its
> > > ten-minute stall-warning span?  
> > 
> > On the other hand, you noted in your other email that the tracepoint
> > benchmark should not be enabled on production systems.  So how about
> > the following (again untested) patch?  The "defined(CONFIG_TASKS_RCU)"
> > might need to change, especially if RCU-tasks is used in production
> > kernels, but perhaps a starting point.
> 
> RCU-tasks is used in production systems if PREEMPT is enabled (it
> allows for optimizations with ftrace, perf, and kprobes).
> 
> But the tracepoint is not used.

Right, so I should use defined(CONFIG_TRACEPOINT_BENCHMARK) instead of
defined(CONFIG_TASKS_RCU).

Or am I misinterpreting the code in kernel/trace?

> > ------------------------------------------------------------------------
> > 
> > diff --git a/include/linux/sched.h b/include/linux/sched.h
> > index b161ef8a902e..316c29c5e506 100644
> > --- a/include/linux/sched.h
> > +++ b/include/linux/sched.h
> > @@ -1589,6 +1589,12 @@ static inline int test_tsk_need_resched(struct task_struct *tsk)
> >   */
> >  #ifndef CONFIG_PREEMPT
> >  extern int _cond_resched(void);
> > +#elif defined(CONFIG_TASKS_RCU)
> > +static inline int _cond_resched(void)
> > +{
> > +	rcu_note_voluntary_context_switch(current);
> > +	return 0;
> > +}
> >  #else
> >  static inline int _cond_resched(void) { return 0; }
> >  #endif
> 
> 
> This does work, but so does the patch below, without turning cond_resched()
> into something other than a no-op under CONFIG_PREEMPT.

True, but based on the cond_resched_rcu_qs() experience, I bet that
trace_benchmark.c won't be the only place needing help.

							Thanx, Paul

> -- Steve
> 
> diff --git a/kernel/trace/trace_benchmark.c b/kernel/trace/trace_benchmark.c
> index 22fee766081b..82d83bb4874b 100644
> --- a/kernel/trace/trace_benchmark.c
> +++ b/kernel/trace/trace_benchmark.c
> @@ -166,6 +166,7 @@ static int benchmark_event_kthread(void *arg)
>  		 * block synchronize_rcu_tasks() indefinitely.
>  		 */
>  		cond_resched();
> +		rcu_note_voluntary_context_switch(current);
>  	}
> 
>  	return 0;
> 

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH tip/core/rcu 06/10] trace: Eliminate cond_resched_rcu_qs() in favor of cond_resched()
  2018-02-27 15:36             ` Paul E. McKenney
@ 2018-02-28 23:12               ` Steven Rostedt
  2018-03-01  1:21                 ` Paul E. McKenney
  0 siblings, 1 reply; 32+ messages in thread
From: Steven Rostedt @ 2018-02-28 23:12 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: linux-kernel, mingo, jiangshanlai, dipankar, akpm,
	mathieu.desnoyers, josh, tglx, peterz, dhowells, edumazet,
	fweisbec, oleg, Ingo Molnar

On Tue, 27 Feb 2018 07:36:46 -0800
"Paul E. McKenney" <paulmck@linux.vnet.ibm.com> wrote:

> > > --- a/include/linux/sched.h
> > > +++ b/include/linux/sched.h
> > > @@ -1589,6 +1589,12 @@ static inline int test_tsk_need_resched(struct task_struct *tsk)
> > >   */
> > >  #ifndef CONFIG_PREEMPT
> > >  extern int _cond_resched(void);
> > > +#elif defined(CONFIG_TASKS_RCU)
> > > +static inline int _cond_resched(void)
> > > +{
> > > +	rcu_note_voluntary_context_switch(current);
> > > +	return 0;
> > > +}
> > >  #else
> > >  static inline int _cond_resched(void) { return 0; }
> > >  #endif  
> > 
> > 
> > This does work, but so does the patch below, without turning cond_resched()
> > into something other than a no-op under CONFIG_PREEMPT.
> 
> True, but based on the cond_resched_rcu_qs() experience, I bet that
> trace_benchmark.c won't be the only place needing help.

Perhaps, but I still think this is a special case. That said, perhaps
cond_resched() isn't invoked in critical locations, as it is a place that
explicitly states that it's OK to schedule.

-- Steve

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH tip/core/rcu 06/10] trace: Eliminate cond_resched_rcu_qs() in favor of cond_resched()
  2018-02-28 23:12               ` Steven Rostedt
@ 2018-03-01  1:21                 ` Paul E. McKenney
  2018-03-01  5:04                   ` Steven Rostedt
  0 siblings, 1 reply; 32+ messages in thread
From: Paul E. McKenney @ 2018-03-01  1:21 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: linux-kernel, mingo, jiangshanlai, dipankar, akpm,
	mathieu.desnoyers, josh, tglx, peterz, dhowells, edumazet,
	fweisbec, oleg, Ingo Molnar

On Wed, Feb 28, 2018 at 06:12:52PM -0500, Steven Rostedt wrote:
> On Tue, 27 Feb 2018 07:36:46 -0800
> "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> wrote:
> 
> > > > --- a/include/linux/sched.h
> > > > +++ b/include/linux/sched.h
> > > > @@ -1589,6 +1589,12 @@ static inline int test_tsk_need_resched(struct task_struct *tsk)
> > > >   */
> > > >  #ifndef CONFIG_PREEMPT
> > > >  extern int _cond_resched(void);
> > > > +#elif defined(CONFIG_TASKS_RCU)
> > > > +static inline int _cond_resched(void)
> > > > +{
> > > > +	rcu_note_voluntary_context_switch(current);
> > > > +	return 0;
> > > > +}
> > > >  #else
> > > >  static inline int _cond_resched(void) { return 0; }
> > > >  #endif  
> > > 
> > > 
> > > This does work, but so does the patch below, without turning cond_resched()
> > > into something other than a no-op under CONFIG_PREEMPT.
> > 
> > True, but based on the cond_resched_rcu_qs() experience, I bet that
> > trace_benchmark.c won't be the only place needing help.
> 
> Perhaps, but I still think this is a special case. That said, perhaps
> cond_resched() isn't invoked in critical locations, as it is a place that
> explicitly states that it's OK to schedule.

Building on your second sentence, when you are running a non-production
stress test, adding an extra function call and conditional branch to
cond_resched() should not be a problem.

So how about the (still untested) patch below?

							Thanx, Paul

------------------------------------------------------------------------

commit e9a6ea9fc2542459f9a63cf2b3a0264d09fbc266
Author: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Date:   Sun Feb 25 10:40:44 2018 -0800

    EXP sched: Make non-production PREEMPT cond_resched() help Tasks RCU
    
    In CONFIG_PREEMPT=y kernels, cond_resched() is a complete no-op, and
    thus cannot help advance Tasks-RCU grace periods.  However, such grace
    periods are only an issue in non-production benchmarking runs of the
    Linux kernel.  This commit therefore makes cond_resched() invoke
    rcu_note_voluntary_context_switch() for kernels implementing Tasks RCU
    even in CONFIG_PREEMPT=y kernels.
    
    Reported-by: Steven Rostedt <rostedt@goodmis.org>
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>

diff --git a/include/linux/sched.h b/include/linux/sched.h
index b161ef8a902e..970dadefb86f 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1589,6 +1589,12 @@ static inline int test_tsk_need_resched(struct task_struct *tsk)
  */
 #ifndef CONFIG_PREEMPT
 extern int _cond_resched(void);
+#elif defined(CONFIG_TRACEPOINT_BENCHMARK)
+static inline int _cond_resched(void)
+{
+	rcu_note_voluntary_context_switch(current);
+	return 0;
+}
 #else
 static inline int _cond_resched(void) { return 0; }
 #endif

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH tip/core/rcu 06/10] trace: Eliminate cond_resched_rcu_qs() in favor of cond_resched()
  2018-03-01  1:21                 ` Paul E. McKenney
@ 2018-03-01  5:04                   ` Steven Rostedt
  2018-03-01 20:48                     ` Paul E. McKenney
  0 siblings, 1 reply; 32+ messages in thread
From: Steven Rostedt @ 2018-03-01  5:04 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: linux-kernel, mingo, jiangshanlai, dipankar, akpm,
	mathieu.desnoyers, josh, tglx, peterz, dhowells, edumazet,
	fweisbec, oleg, Ingo Molnar

On Wed, 28 Feb 2018 17:21:44 -0800
"Paul E. McKenney" <paulmck@linux.vnet.ibm.com> wrote:

> > Perhaps, but I still think this is a special case. That said, perhaps
> > cond_resched() isn't used in critical locations, as it's a place that
> > explicitly states that it's OK to schedule.  
> 
> Building on your second sentence, when you are running a non-production
> stress test, adding an extra function call and conditional branch to
> cond_resched() should not be a problem.
> 
> So how about the (still untested) patch below?
> 
> 							Thanx, Paul
> 
> ------------------------------------------------------------------------
> 
> commit e9a6ea9fc2542459f9a63cf2b3a0264d09fbc266
> Author: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> Date:   Sun Feb 25 10:40:44 2018 -0800
> 
>     EXP sched: Make non-production PREEMPT cond_resched() help Tasks RCU
>     
>     In CONFIG_PREEMPT=y kernels, cond_resched() is a complete no-op, and
>     thus cannot help advance Tasks-RCU grace periods.  However, such grace
>     periods are only an issue in non-production benchmarking runs of the
>     Linux kernel.  This commit therefore makes cond_resched() invoke
>     rcu_note_voluntary_context_switch() for kernels implementing Tasks RCU
>     even in CONFIG_PREEMPT=y kernels.
>     
>     Reported-by: Steven Rostedt <rostedt@goodmis.org>
>     Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> 
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index b161ef8a902e..970dadefb86f 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -1589,6 +1589,12 @@ static inline int test_tsk_need_resched(struct task_struct *tsk)
>   */
>  #ifndef CONFIG_PREEMPT
>  extern int _cond_resched(void);
> +#elif defined(CONFIG_TRACEPOINT_BENCHMARK)
> +static inline int _cond_resched(void)
> +{
> +	rcu_note_voluntary_context_switch(current);

The thing I hate about this is that it is invasive to code outside of
the tracepoint benchmark. Why do the rcu_note_voluntary_context_switch
here and not in the tracepoint code? Seems odd to have it called
everywhere in the kernel when it is only needed by the benchmark
tracepoint code.

-- Steve



> +	return 0;
> +}
>  #else
>  static inline int _cond_resched(void) { return 0; }
>  #endif


* Re: [PATCH tip/core/rcu 06/10] trace: Eliminate cond_resched_rcu_qs() in favor of cond_resched()
  2018-03-01  5:04                   ` Steven Rostedt
@ 2018-03-01 20:48                     ` Paul E. McKenney
  2018-03-02 20:06                       ` Steven Rostedt
  0 siblings, 1 reply; 32+ messages in thread
From: Paul E. McKenney @ 2018-03-01 20:48 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: linux-kernel, mingo, jiangshanlai, dipankar, akpm,
	mathieu.desnoyers, josh, tglx, peterz, dhowells, edumazet,
	fweisbec, oleg, Ingo Molnar

On Thu, Mar 01, 2018 at 12:04:04AM -0500, Steven Rostedt wrote:
> On Wed, 28 Feb 2018 17:21:44 -0800
> "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> wrote:
> 
> > > Perhaps, but I still think this is a special case. That said, perhaps
> > > cond_resched() isn't used in critical locations, as it's a place that
> > > explicitly states that it's OK to schedule.  
> > 
> > Building on your second sentence, when you are running a non-production
> > stress test, adding an extra function call and conditional branch to
> > cond_resched() should not be a problem.
> > 
> > So how about the (still untested) patch below?
> > 
> > 							Thanx, Paul
> > 
> > ------------------------------------------------------------------------
> > 
> > commit e9a6ea9fc2542459f9a63cf2b3a0264d09fbc266
> > Author: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> > Date:   Sun Feb 25 10:40:44 2018 -0800
> > 
> >     EXP sched: Make non-production PREEMPT cond_resched() help Tasks RCU
> >     
> >     In CONFIG_PREEMPT=y kernels, cond_resched() is a complete no-op, and
> >     thus cannot help advance Tasks-RCU grace periods.  However, such grace
> >     periods are only an issue in non-production benchmarking runs of the
> >     Linux kernel.  This commit therefore makes cond_resched() invoke
> >     rcu_note_voluntary_context_switch() for kernels implementing Tasks RCU
> >     even in CONFIG_PREEMPT=y kernels.
> >     
> >     Reported-by: Steven Rostedt <rostedt@goodmis.org>
> >     Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> > 
> > diff --git a/include/linux/sched.h b/include/linux/sched.h
> > index b161ef8a902e..970dadefb86f 100644
> > --- a/include/linux/sched.h
> > +++ b/include/linux/sched.h
> > @@ -1589,6 +1589,12 @@ static inline int test_tsk_need_resched(struct task_struct *tsk)
> >   */
> >  #ifndef CONFIG_PREEMPT
> >  extern int _cond_resched(void);
> > +#elif defined(CONFIG_TRACEPOINT_BENCHMARK)
> > +static inline int _cond_resched(void)
> > +{
> > +	rcu_note_voluntary_context_switch(current);
> 
> The thing I hate about this is that it is invasive to code outside of
> the tracepoint benchmark. Why do the rcu_note_voluntary_context_switch
> here and not in the tracepoint code? Seems odd to have it called
> everywhere in the kernel when it is only needed by the benchmark
> tracepoint code.

Understood, and I am not completely devoid of sympathy for that view.
My problem with adding rcu_note_voluntary_context_switch() is that it
is a pretty deep detail of RCU.

Hmmm...  I wasn't happy with your original use of cond_resched_rcu_qs()
because it is now a rather strange thing.  However, this discussion has
helped me to understand that its real distinction over cond_resched()
as things stand now is that it provides a quiescent state for Tasks RCU.

So how about I rename cond_resched_rcu_qs() to cond_resched_tasks_rcu_qs(),
which at least gives a hint as to where it needs to be used?

Would that work for you?

							Thanx, Paul

> -- Steve
> 
> 
> 
> > +	return 0;
> > +}
> >  #else
> >  static inline int _cond_resched(void) { return 0; }
> >  #endif
> 


* Re: [PATCH tip/core/rcu 06/10] trace: Eliminate cond_resched_rcu_qs() in favor of cond_resched()
  2018-03-01 20:48                     ` Paul E. McKenney
@ 2018-03-02 20:06                       ` Steven Rostedt
  2018-03-03  0:54                         ` Paul E. McKenney
  0 siblings, 1 reply; 32+ messages in thread
From: Steven Rostedt @ 2018-03-02 20:06 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: linux-kernel, mingo, jiangshanlai, dipankar, akpm,
	mathieu.desnoyers, josh, tglx, peterz, dhowells, edumazet,
	fweisbec, oleg, Ingo Molnar

On Thu, 1 Mar 2018 12:48:58 -0800
"Paul E. McKenney" <paulmck@linux.vnet.ibm.com> wrote:

> So how about I rename cond_resched_rcu_qs() to cond_resched_tasks_rcu_qs(),
> which at least gives a hint as to where it needs to be used?
> 
> Would that work for you?

Yes, definitely!

-- Steve


* Re: [PATCH tip/core/rcu 06/10] trace: Eliminate cond_resched_rcu_qs() in favor of cond_resched()
  2018-03-02 20:06                       ` Steven Rostedt
@ 2018-03-03  0:54                         ` Paul E. McKenney
  0 siblings, 0 replies; 32+ messages in thread
From: Paul E. McKenney @ 2018-03-03  0:54 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: linux-kernel, mingo, jiangshanlai, dipankar, akpm,
	mathieu.desnoyers, josh, tglx, peterz, dhowells, edumazet,
	fweisbec, oleg, Ingo Molnar

On Fri, Mar 02, 2018 at 03:06:21PM -0500, Steven Rostedt wrote:
> On Thu, 1 Mar 2018 12:48:58 -0800
> "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> wrote:
> 
> > So how about I rename cond_resched_rcu_qs() to cond_resched_tasks_rcu_qs(),
> > which at least gives a hint as to where it needs to be used?
> > 
> > Would that work for you?
> 
> Yes, definitely!

Like this?

							Thanx, Paul

------------------------------------------------------------------------

commit 4551cfd69a85393f478462fe5e16e42f0fa6391e
Author: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Date:   Fri Mar 2 16:35:27 2018 -0800

    rcu: Rename cond_resched_rcu_qs() to cond_resched_tasks_rcu_qs()
    
    Commit e31d28b6ab8f ("trace: Eliminate cond_resched_rcu_qs() in favor
    of cond_resched()") substituted cond_resched() for the earlier call
    to cond_resched_rcu_qs().  However, the new-age cond_resched() does
    not do anything to help RCU-tasks grace periods because (1) RCU-tasks
    is only enabled when CONFIG_PREEMPT=y and (2) cond_resched() is a
    complete no-op when preemption is enabled.  This situation results
    in hangs when running the trace benchmarks.
    
    A number of potential fixes were discussed on LKML
    (https://lkml.kernel.org/r/20180224151240.0d63a059@vmware.local.home),
    including making cond_resched() not be a no-op; making cond_resched()
    not be a no-op, but only when running tracing benchmarks; reverting
    the aforementioned commit (which works because cond_resched_rcu_qs()
    does provide an RCU-tasks quiescent state); and adding a call to the
    scheduler/RCU rcu_note_voluntary_context_switch() function.  All were
    deemed unsatisfactory, either due to added cond_resched() overhead or
    due to magic functions inviting cargo culting.
    
    This commit renames cond_resched_rcu_qs() to cond_resched_tasks_rcu_qs(),
    which provides a clear hint as to what this function is doing and
    why and where it should be used, and then replaces the call to
    cond_resched() with cond_resched_tasks_rcu_qs() in the trace benchmark's
    benchmark_event_kthread() function.
    
    Reported-by: Steven Rostedt <rostedt@goodmis.org>
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>

diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index 36360d07f25b..19d235fefdb9 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -188,13 +188,13 @@ static inline void exit_tasks_rcu_finish(void) { }
 #endif /* #else #ifdef CONFIG_TASKS_RCU */
 
 /**
- * cond_resched_rcu_qs - Report potential quiescent states to RCU
+ * cond_resched_tasks_rcu_qs - Report potential quiescent states to RCU
  *
  * This macro resembles cond_resched(), except that it is defined to
  * report potential quiescent states to RCU-tasks even if the cond_resched()
  * machinery were to be shut off, as some advocate for PREEMPT kernels.
  */
-#define cond_resched_rcu_qs() \
+#define cond_resched_tasks_rcu_qs() \
 do { \
 	if (!cond_resched()) \
 		rcu_note_voluntary_context_switch_lite(current); \
diff --git a/kernel/rcu/rcuperf.c b/kernel/rcu/rcuperf.c
index 777e7a6a0292..e232846516b3 100644
--- a/kernel/rcu/rcuperf.c
+++ b/kernel/rcu/rcuperf.c
@@ -369,7 +369,7 @@ static bool __maybe_unused torturing_tasks(void)
  */
 static void rcu_perf_wait_shutdown(void)
 {
-	cond_resched_rcu_qs();
+	cond_resched_tasks_rcu_qs();
 	if (atomic_read(&n_rcu_perf_writer_finished) < nrealwriters)
 		return;
 	while (!torture_must_stop())
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 8fde264e24aa..381b47a68ac6 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -1234,10 +1234,10 @@ static int rcu_implicit_dynticks_qs(struct rcu_data *rdp)
 	}
 
 	/*
-	 * Has this CPU encountered a cond_resched_rcu_qs() since the
-	 * beginning of the grace period?  For this to be the case,
-	 * the CPU has to have noticed the current grace period.  This
-	 * might not be the case for nohz_full CPUs looping in the kernel.
+	 * Has this CPU encountered a cond_resched() since the beginning
+	 * of the grace period?  For this to be the case, the CPU has to
+	 * have noticed the current grace period.  This might not be the
+	 * case for nohz_full CPUs looping in the kernel.
 	 */
 	jtsq = jiffies_till_sched_qs;
 	ruqp = per_cpu_ptr(&rcu_dynticks.rcu_urgent_qs, rdp->cpu);
@@ -2049,7 +2049,7 @@ static bool rcu_gp_init(struct rcu_state *rsp)
 					    rnp->level, rnp->grplo,
 					    rnp->grphi, rnp->qsmask);
 		raw_spin_unlock_irq_rcu_node(rnp);
-		cond_resched_rcu_qs();
+		cond_resched_tasks_rcu_qs();
 		WRITE_ONCE(rsp->gp_activity, jiffies);
 	}
 
@@ -2152,7 +2152,7 @@ static void rcu_gp_cleanup(struct rcu_state *rsp)
 		sq = rcu_nocb_gp_get(rnp);
 		raw_spin_unlock_irq_rcu_node(rnp);
 		rcu_nocb_gp_cleanup(sq);
-		cond_resched_rcu_qs();
+		cond_resched_tasks_rcu_qs();
 		WRITE_ONCE(rsp->gp_activity, jiffies);
 		rcu_gp_slow(rsp, gp_cleanup_delay);
 	}
@@ -2203,7 +2203,7 @@ static int __noreturn rcu_gp_kthread(void *arg)
 			/* Locking provides needed memory barrier. */
 			if (rcu_gp_init(rsp))
 				break;
-			cond_resched_rcu_qs();
+			cond_resched_tasks_rcu_qs();
 			WRITE_ONCE(rsp->gp_activity, jiffies);
 			WARN_ON(signal_pending(current));
 			trace_rcu_grace_period(rsp->name,
@@ -2248,7 +2248,7 @@ static int __noreturn rcu_gp_kthread(void *arg)
 				trace_rcu_grace_period(rsp->name,
 						       READ_ONCE(rsp->gpnum),
 						       TPS("fqsend"));
-				cond_resched_rcu_qs();
+				cond_resched_tasks_rcu_qs();
 				WRITE_ONCE(rsp->gp_activity, jiffies);
 				ret = 0; /* Force full wait till next FQS. */
 				j = jiffies_till_next_fqs;
@@ -2261,7 +2261,7 @@ static int __noreturn rcu_gp_kthread(void *arg)
 				}
 			} else {
 				/* Deal with stray signal. */
-				cond_resched_rcu_qs();
+				cond_resched_tasks_rcu_qs();
 				WRITE_ONCE(rsp->gp_activity, jiffies);
 				WARN_ON(signal_pending(current));
 				trace_rcu_grace_period(rsp->name,
@@ -2784,7 +2784,7 @@ static void force_qs_rnp(struct rcu_state *rsp, int (*f)(struct rcu_data *rsp))
 	struct rcu_node *rnp;
 
 	rcu_for_each_leaf_node(rsp, rnp) {
-		cond_resched_rcu_qs();
+		cond_resched_tasks_rcu_qs();
 		mask = 0;
 		raw_spin_lock_irqsave_rcu_node(rnp, flags);
 		if (rnp->qsmask == 0) {
diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index 3695c12cfcdc..9accacffd138 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -1632,7 +1632,7 @@ static int rcu_oom_notify(struct notifier_block *self,
 
 	for_each_online_cpu(cpu) {
 		smp_call_function_single(cpu, rcu_oom_notify_cpu, NULL, 1);
-		cond_resched_rcu_qs();
+		cond_resched_tasks_rcu_qs();
 	}
 
 	/* Unconditionally decrement: no need to wake ourselves up. */
@@ -2261,7 +2261,7 @@ static int rcu_nocb_kthread(void *arg)
 				cl++;
 			c++;
 			local_bh_enable();
-			cond_resched_rcu_qs();
+			cond_resched_tasks_rcu_qs();
 			list = next;
 		}
 		trace_rcu_batch_end(rdp->rsp->name, c, !!list, 0, 0, 1);
diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
index 68fa19a5e7bd..e401960c7f51 100644
--- a/kernel/rcu/update.c
+++ b/kernel/rcu/update.c
@@ -624,7 +624,7 @@ EXPORT_SYMBOL_GPL(call_rcu_tasks);
  * grace period has elapsed, in other words after all currently
  * executing rcu-tasks read-side critical sections have elapsed.  These
  * read-side critical sections are delimited by calls to schedule(),
- * cond_resched_rcu_qs(), idle execution, userspace execution, calls
+ * cond_resched_tasks_rcu_qs(), idle execution, userspace execution, calls
  * to synchronize_rcu_tasks(), and (in theory, anyway) cond_resched().
  *
  * This is a very specialized primitive, intended only for a few uses in
diff --git a/kernel/torture.c b/kernel/torture.c
index 37b94012a3f8..3de1efbecd6a 100644
--- a/kernel/torture.c
+++ b/kernel/torture.c
@@ -574,7 +574,7 @@ void stutter_wait(const char *title)
 {
 	int spt;
 
-	cond_resched_rcu_qs();
+	cond_resched_tasks_rcu_qs();
 	spt = READ_ONCE(stutter_pause_test);
 	for (; spt; spt = READ_ONCE(stutter_pause_test)) {
 		if (spt == 1) {
diff --git a/kernel/trace/trace_benchmark.c b/kernel/trace/trace_benchmark.c
index 22fee766081b..80e0b2aca703 100644
--- a/kernel/trace/trace_benchmark.c
+++ b/kernel/trace/trace_benchmark.c
@@ -159,13 +159,13 @@ static int benchmark_event_kthread(void *arg)
 		 * wants to run, schedule in, but if the CPU is idle,
 		 * we'll keep burning cycles.
 		 *
-		 * Note the _rcu_qs() version of cond_resched() will
+		 * Note the tasks_rcu_qs() version of cond_resched() will
 		 * notify synchronize_rcu_tasks() that this thread has
 		 * passed a quiescent state for rcu_tasks. Otherwise
 		 * this thread will never voluntarily schedule which would
 		 * block synchronize_rcu_tasks() indefinitely.
 		 */
-		cond_resched();
+		cond_resched_tasks_rcu_qs();
 	}
 
 	return 0;


end of thread, other threads:[~2018-03-03  0:53 UTC | newest]

Thread overview: 32+ messages
2017-12-01 19:21 [PATCH tip/core/rcu 0/10] Don't IPI offline CPUs, de-emphasize cond_resched_rcu_qs() Paul E. McKenney
2017-12-01 19:21 ` [PATCH tip/core/rcu 01/10] sched: Stop resched_cpu() from sending IPIs to offline CPUs Paul E. McKenney
2017-12-01 19:21 ` [PATCH tip/core/rcu 02/10] sched: Stop switched_to_rt() " Paul E. McKenney
2017-12-01 19:21 ` [PATCH tip/core/rcu 03/10] netfilter: Eliminate cond_resched_rcu_qs() in favor of cond_resched() Paul E. McKenney
2017-12-01 19:21 ` [PATCH tip/core/rcu 04/10] mm: " Paul E. McKenney
2017-12-01 19:21 ` [PATCH tip/core/rcu 05/10] workqueue: " Paul E. McKenney
2017-12-02  1:06   ` Lai Jiangshan
2017-12-04 18:28     ` Paul E. McKenney
2017-12-01 19:21 ` [PATCH tip/core/rcu 06/10] trace: " Paul E. McKenney
2018-02-24 20:12   ` Steven Rostedt
2018-02-25 17:49     ` Paul E. McKenney
2018-02-25 18:17       ` Paul E. McKenney
2018-02-25 18:39         ` Paul E. McKenney
2018-02-27  2:29           ` Steven Rostedt
2018-02-27 15:36             ` Paul E. McKenney
2018-02-28 23:12               ` Steven Rostedt
2018-03-01  1:21                 ` Paul E. McKenney
2018-03-01  5:04                   ` Steven Rostedt
2018-03-01 20:48                     ` Paul E. McKenney
2018-03-02 20:06                       ` Steven Rostedt
2018-03-03  0:54                         ` Paul E. McKenney
2018-02-26  4:57         ` Steven Rostedt
2018-02-26  5:47           ` Paul E. McKenney
2017-12-01 19:21 ` [PATCH tip/core/rcu 07/10] softirq: " Paul E. McKenney
2017-12-01 19:21 ` [PATCH tip/core/rcu 08/10] fs: " Paul E. McKenney
2017-12-01 19:21 ` [PATCH tip/core/rcu 09/10] doc: " Paul E. McKenney
2017-12-01 19:21 ` [PATCH tip/core/rcu 10/10] rcu: Account for rcu_all_qs() in cond_resched() Paul E. McKenney
2017-12-02  8:56   ` Peter Zijlstra
2017-12-02 12:22     ` Paul E. McKenney
2017-12-02 13:55       ` Peter Zijlstra
2018-02-24 20:18       ` Steven Rostedt
2018-02-25 17:52         ` Paul E. McKenney
