* [PATCH v2 tip/core/rcu 0/10] RCU-tasks implementation
@ 2014-07-31  0:39 Paul E. McKenney
  2014-07-31  0:39 ` [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks() Paul E. McKenney
                   ` (2 more replies)
  0 siblings, 3 replies; 42+ messages in thread
From: Paul E. McKenney @ 2014-07-31  0:39 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, tglx,
	peterz, rostedt, dhowells, edumazet, dvhart, fweisbec, oleg,
	bobby.prani

Hello!

This series provides a prototype of an RCU-tasks implementation, which has
been requested to assist with trampoline removal.  This flavor of RCU
is task-based rather than CPU-based, and has voluntary context switch,
usermode execution, and the idle loops as its only quiescent states.
This selection of quiescent states ensures that at the end of a grace
period, there will no longer be any tasks depending on a trampoline that
was removed before the beginning of that grace period.  This works because
such trampolines do not contain function calls, do not contain voluntary
context switches, do not switch to usermode, and do not switch to idle.

The patches in this series are as follows:

1.	Adds the basic call_rcu_tasks() functionality.

2.	Provides cond_resched_rcu_qs() to force quiescent states, including
	RCU-tasks quiescent states, in long loops.

3.	Adds synchronous APIs: synchronize_rcu_tasks() and
	rcu_barrier_tasks().

4.	Adds GPL exports for the above APIs, courtesy of Steven Rostedt.

5.	Adds rcutorture tests for RCU-tasks.

6.	Adds RCU-tasks test cases to rcutorture scripting.

7.	Adds stall-warning checks for RCU-tasks.

8.	Improves RCU-tasks energy efficiency by replacing polling with
	wait/wakeup.

9.	Documents RCU-tasks stall-warning messages.

10.	Adds synchronization with exiting tasks, preventing RCU-tasks from
	waiting on exited tasks.

Changes from v1:

o	The lockdep issue with list locking was finessed by ditching
	list locking in favor of having the list manipulated by a single
	kthread.  This change trimmed about 150 highly concurrent lines
	from the implementation.

o	Passes more aggressive rcutorture runs, which indicates that
	an increase in rcutorture's aggression is called for.

o	Handled review comments from Peter Zijlstra, Lai Jiangshan,
	Frederic Weisbecker, and Oleg Nesterov.

o	Added RCU-tasks stall-warning documentation.

Remaining issues include:

o	It is not clear that trampolines in functions called from the
	idle loop are correctly handled.  Or if anyone cares about
	trampolines in functions called from the idle loop.

o	The current implementation does not yet recognize tasks that start
	out executing in usermode.  Instead, it waits for the next
	scheduling-clock tick to note them.

o	As a result, the current implementation does not handle nohz_full=
	CPUs executing tasks running in usermode.  There are a couple of
	possible fixes under consideration.

o	If a task is preempted while executing in usermode, the RCU-tasks
	grace period will not end until that task resumes.  (Is there
	some reasonable way to determine that a given preempted task
	was preempted from usermode execution?)

o	More about RCU-tasks needs to be added to Documentation/RCU.

o	There are probably still bugs.

							Thanx, Paul

------------------------------------------------------------------------

 b/Documentation/RCU/stallwarn.txt                             |   33 
 b/Documentation/kernel-parameters.txt                         |    5 
 b/fs/file.c                                                   |    2 
 b/include/linux/init_task.h                                   |   15 
 b/include/linux/rcupdate.h                                    |   60 +
 b/include/linux/sched.h                                       |   30 
 b/init/Kconfig                                                |   10 
 b/kernel/exit.c                                               |    1 
 b/kernel/rcu/rcutorture.c                                     |   44 -
 b/kernel/rcu/tiny.c                                           |    2 
 b/kernel/rcu/tree.c                                           |   14 
 b/kernel/rcu/tree_plugin.h                                    |    2 
 b/kernel/rcu/update.c                                         |  438 ++++++++--
 b/mm/mlock.c                                                  |    2 
 b/tools/testing/selftests/rcutorture/configs/rcu/TASKS01      |    7 
 b/tools/testing/selftests/rcutorture/configs/rcu/TASKS01.boot |    1 
 b/tools/testing/selftests/rcutorture/configs/rcu/TASKS02      |    6 
 b/tools/testing/selftests/rcutorture/configs/rcu/TASKS02.boot |    1 
 18 files changed, 590 insertions(+), 83 deletions(-)


^ permalink raw reply	[flat|nested] 42+ messages in thread

* [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks()
  2014-07-31  0:39 [PATCH v2 tip/core/rcu 0/10] RCU-tasks implementation Paul E. McKenney
@ 2014-07-31  0:39 ` Paul E. McKenney
  2014-07-31  0:39   ` [PATCH v2 tip/core/rcu 02/10] rcu: Provide cond_resched_rcu_qs() to force quiescent states in long loops Paul E. McKenney
                     ` (10 more replies)
  2014-07-31 16:19 ` [PATCH v2 tip/core/rcu 0/10] RCU-tasks implementation josh
  2014-07-31 19:29 ` Andi Kleen
  2 siblings, 11 replies; 42+ messages in thread
From: Paul E. McKenney @ 2014-07-31  0:39 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, tglx,
	peterz, rostedt, dhowells, edumazet, dvhart, fweisbec, oleg,
	bobby.prani, Paul E. McKenney

From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

This commit adds a new RCU-tasks flavor of RCU, which provides
call_rcu_tasks().  This RCU flavor's quiescent states are voluntary
context switch (not preemption!), userspace execution, and the idle loop.
Note that unlike other RCU flavors, these quiescent states occur in tasks,
not necessarily CPUs.  Includes fixes from Steven Rostedt.

This RCU flavor is assumed to have very infrequent latency-tolerant
updaters.  This assumption permits significant simplifications, including
a single global callback list protected by a single global lock, along
with a single linked list containing all tasks that have not yet passed
through a quiescent state.  If experience shows this assumption to be
incorrect, the required additional complexity will be added.

Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 include/linux/init_task.h |   9 +++
 include/linux/rcupdate.h  |  36 ++++++++++
 include/linux/sched.h     |  23 +++----
 init/Kconfig              |  10 +++
 kernel/rcu/tiny.c         |   2 +
 kernel/rcu/tree.c         |   2 +
 kernel/rcu/update.c       | 164 ++++++++++++++++++++++++++++++++++++++++++++++
 7 files changed, 235 insertions(+), 11 deletions(-)

diff --git a/include/linux/init_task.h b/include/linux/init_task.h
index 6df7f9fe0d01..78715ea7c30c 100644
--- a/include/linux/init_task.h
+++ b/include/linux/init_task.h
@@ -124,6 +124,14 @@ extern struct group_info init_groups;
 #else
 #define INIT_TASK_RCU_PREEMPT(tsk)
 #endif
+#ifdef CONFIG_TASKS_RCU
+#define INIT_TASK_RCU_TASKS(tsk)					\
+	.rcu_tasks_holdout = false,					\
+	.rcu_tasks_holdout_list =					\
+		LIST_HEAD_INIT(tsk.rcu_tasks_holdout_list),
+#else
+#define INIT_TASK_RCU_TASKS(tsk)
+#endif
 
 extern struct cred init_cred;
 
@@ -231,6 +239,7 @@ extern struct task_group root_task_group;
 	INIT_FTRACE_GRAPH						\
 	INIT_TRACE_RECURSION						\
 	INIT_TASK_RCU_PREEMPT(tsk)					\
+	INIT_TASK_RCU_TASKS(tsk)					\
 	INIT_CPUSET_SEQ(tsk)						\
 	INIT_RT_MUTEXES(tsk)						\
 	INIT_VTIME(tsk)							\
diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index 6a94cc8b1ca0..05656b504295 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -197,6 +197,26 @@ void call_rcu_sched(struct rcu_head *head,
 
 void synchronize_sched(void);
 
+/**
+ * call_rcu_tasks() - Queue an RCU callback for invocation after a task-based grace period
+ * @head: structure to be used for queueing the RCU updates.
+ * @func: actual callback function to be invoked after the grace period
+ *
+ * The callback function will be invoked some time after a full grace
+ * period elapses, in other words after all currently executing RCU
+ * read-side critical sections have completed. call_rcu_tasks() assumes
+ * that the read-side critical sections end at a voluntary context
+ * switch (not a preemption!), entry into idle, or transition to usermode
+ * execution.  As such, there are no read-side primitives analogous to
+ * rcu_read_lock() and rcu_read_unlock() because this primitive is intended
+ * to determine that all tasks have passed through a safe state, not so
+ * much for data-structure synchronization.
+ *
+ * See the description of call_rcu() for more detailed information on
+ * memory ordering guarantees.
+ */
+void call_rcu_tasks(struct rcu_head *head, void (*func)(struct rcu_head *head));
+
 #ifdef CONFIG_PREEMPT_RCU
 
 void __rcu_read_lock(void);
@@ -294,6 +314,22 @@ static inline void rcu_user_hooks_switch(struct task_struct *prev,
 		rcu_irq_exit(); \
 	} while (0)
 
+/*
+ * Note a voluntary context switch for RCU-tasks benefit.  This is a
+ * macro rather than an inline function to avoid #include hell.
+ */
+#ifdef CONFIG_TASKS_RCU
+#define rcu_note_voluntary_context_switch(t) \
+	do { \
+		preempt_disable(); /* Exclude synchronize_sched(); */ \
+		if ((t)->rcu_tasks_holdout) \
+			smp_store_release(&(t)->rcu_tasks_holdout, 0); \
+		preempt_enable(); \
+	} while (0)
+#else /* #ifdef CONFIG_TASKS_RCU */
+#define rcu_note_voluntary_context_switch(t)	do { } while (0)
+#endif /* #else #ifdef CONFIG_TASKS_RCU */
+
 #if defined(CONFIG_DEBUG_LOCK_ALLOC) || defined(CONFIG_RCU_TRACE) || defined(CONFIG_SMP)
 bool __rcu_is_watching(void);
 #endif /* #if defined(CONFIG_DEBUG_LOCK_ALLOC) || defined(CONFIG_RCU_TRACE) || defined(CONFIG_SMP) */
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 306f4f0c987a..3cf124389ec7 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1273,6 +1273,11 @@ struct task_struct {
 #ifdef CONFIG_RCU_BOOST
 	struct rt_mutex *rcu_boost_mutex;
 #endif /* #ifdef CONFIG_RCU_BOOST */
+#ifdef CONFIG_TASKS_RCU
+	unsigned long rcu_tasks_nvcsw;
+	int rcu_tasks_holdout;
+	struct list_head rcu_tasks_holdout_list;
+#endif /* #ifdef CONFIG_TASKS_RCU */
 
 #if defined(CONFIG_SCHEDSTATS) || defined(CONFIG_TASK_DELAY_ACCT)
 	struct sched_info sched_info;
@@ -1998,31 +2003,27 @@ extern void task_clear_jobctl_pending(struct task_struct *task,
 				      unsigned int mask);
 
 #ifdef CONFIG_PREEMPT_RCU
-
 #define RCU_READ_UNLOCK_BLOCKED (1 << 0) /* blocked while in RCU read-side. */
 #define RCU_READ_UNLOCK_NEED_QS (1 << 1) /* RCU core needs CPU response. */
+#endif /* #ifdef CONFIG_PREEMPT_RCU */
 
 static inline void rcu_copy_process(struct task_struct *p)
 {
+#ifdef CONFIG_PREEMPT_RCU
 	p->rcu_read_lock_nesting = 0;
 	p->rcu_read_unlock_special = 0;
-#ifdef CONFIG_TREE_PREEMPT_RCU
 	p->rcu_blocked_node = NULL;
-#endif /* #ifdef CONFIG_TREE_PREEMPT_RCU */
 #ifdef CONFIG_RCU_BOOST
 	p->rcu_boost_mutex = NULL;
 #endif /* #ifdef CONFIG_RCU_BOOST */
 	INIT_LIST_HEAD(&p->rcu_node_entry);
+#endif /* #ifdef CONFIG_PREEMPT_RCU */
+#ifdef CONFIG_TASKS_RCU
+	p->rcu_tasks_holdout = false;
+	INIT_LIST_HEAD(&p->rcu_tasks_holdout_list);
+#endif /* #ifdef CONFIG_TASKS_RCU */
 }
 
-#else
-
-static inline void rcu_copy_process(struct task_struct *p)
-{
-}
-
-#endif
-
 static inline void tsk_restore_flags(struct task_struct *task,
 				unsigned long orig_flags, unsigned long flags)
 {
diff --git a/init/Kconfig b/init/Kconfig
index 9d76b99af1b9..c56cb62a2df1 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -507,6 +507,16 @@ config PREEMPT_RCU
 	  This option enables preemptible-RCU code that is common between
 	  the TREE_PREEMPT_RCU and TINY_PREEMPT_RCU implementations.
 
+config TASKS_RCU
+	bool "Task-based RCU implementation using voluntary context switch"
+	default n
+	help
+	  This option enables a task-based RCU implementation that uses
+	  only voluntary context switch (not preemption!), idle, and
+	  user-mode execution as quiescent states.
+
+	  If unsure, say N.
+
 config RCU_STALL_COMMON
 	def_bool ( TREE_RCU || TREE_PREEMPT_RCU || RCU_TRACE )
 	help
diff --git a/kernel/rcu/tiny.c b/kernel/rcu/tiny.c
index d9efcc13008c..717f00854fc0 100644
--- a/kernel/rcu/tiny.c
+++ b/kernel/rcu/tiny.c
@@ -254,6 +254,8 @@ void rcu_check_callbacks(int cpu, int user)
 		rcu_sched_qs(cpu);
 	else if (!in_softirq())
 		rcu_bh_qs(cpu);
+	if (user)
+		rcu_note_voluntary_context_switch(current);
 }
 
 /*
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 625d0b0cd75a..f958c52f644d 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2413,6 +2413,8 @@ void rcu_check_callbacks(int cpu, int user)
 	rcu_preempt_check_callbacks(cpu);
 	if (rcu_pending(cpu))
 		invoke_rcu_core();
+	if (user)
+		rcu_note_voluntary_context_switch(current);
 	trace_rcu_utilization(TPS("End scheduler-tick"));
 }
 
diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
index bc7883570530..b92268647a01 100644
--- a/kernel/rcu/update.c
+++ b/kernel/rcu/update.c
@@ -47,6 +47,7 @@
 #include <linux/hardirq.h>
 #include <linux/delay.h>
 #include <linux/module.h>
+#include <linux/kthread.h>
 
 #define CREATE_TRACE_POINTS
 
@@ -350,3 +351,166 @@ static int __init check_cpu_stall_init(void)
 early_initcall(check_cpu_stall_init);
 
 #endif /* #ifdef CONFIG_RCU_STALL_COMMON */
+
+#ifdef CONFIG_TASKS_RCU
+
+/*
+ * Simple variant of RCU whose quiescent states are voluntary context switch,
+ * user-space execution, and idle.  As such, grace periods can take one good
+ * long time.  There are no read-side primitives similar to rcu_read_lock()
+ * and rcu_read_unlock() because this implementation is intended to get
+ * the system into a safe state for some of the manipulations involved in
+ * tracing and the like.  Finally, this implementation does not support
+ * high call_rcu_tasks() rates from multiple CPUs.  If this is required,
+ * per-CPU callback lists will be needed.
+ */
+
+/* Lists of tasks that we are still waiting for during this grace period. */
+static LIST_HEAD(rcu_tasks_holdouts);
+
+/* Global list of callbacks and associated lock. */
+static struct rcu_head *rcu_tasks_cbs_head;
+static struct rcu_head **rcu_tasks_cbs_tail = &rcu_tasks_cbs_head;
+static DEFINE_RAW_SPINLOCK(rcu_tasks_cbs_lock);
+
+/* Post an RCU-tasks callback. */
+void call_rcu_tasks(struct rcu_head *rhp, void (*func)(struct rcu_head *rhp))
+{
+	unsigned long flags;
+
+	rhp->next = NULL;
+	rhp->func = func;
+	raw_spin_lock_irqsave(&rcu_tasks_cbs_lock, flags);
+	*rcu_tasks_cbs_tail = rhp;
+	rcu_tasks_cbs_tail = &rhp->next;
+	raw_spin_unlock_irqrestore(&rcu_tasks_cbs_lock, flags);
+}
+EXPORT_SYMBOL_GPL(call_rcu_tasks);
+
+/* RCU-tasks kthread that detects grace periods and invokes callbacks. */
+static int __noreturn rcu_tasks_kthread(void *arg)
+{
+	unsigned long flags;
+	struct task_struct *g, *t;
+	struct rcu_head *list;
+	struct rcu_head *next;
+
+	/* FIXME: Add housekeeping affinity. */
+
+	/*
+	 * Each pass through the following loop makes one check for
+	 * newly arrived callbacks, and, if there are some, waits for
+	 * one RCU-tasks grace period and then invokes the callbacks.
+	 * This loop is terminated by the system going down.  ;-)
+	 */
+	for (;;) {
+
+		/* Pick up any new callbacks. */
+		raw_spin_lock_irqsave(&rcu_tasks_cbs_lock, flags);
+		smp_mb__after_unlock_lock(); /* Enforce GP memory ordering. */
+		list = rcu_tasks_cbs_head;
+		rcu_tasks_cbs_head = NULL;
+		rcu_tasks_cbs_tail = &rcu_tasks_cbs_head;
+		raw_spin_unlock_irqrestore(&rcu_tasks_cbs_lock, flags);
+
+		/* If there were none, wait a bit and start over. */
+		if (!list) {
+			schedule_timeout_interruptible(HZ);
+			flush_signals(current);
+			continue;
+		}
+
+		/*
+		 * There were callbacks, so we need to wait for an
+		 * RCU-tasks grace period.  Start off by scanning
+		 * the task list for tasks that are not already
+		 * voluntarily blocked.  Mark these tasks and make
+		 * a list of them in rcu_tasks_holdouts.
+		 */
+		rcu_read_lock();
+		for_each_process_thread(g, t) {
+			if (t != current && ACCESS_ONCE(t->on_rq) &&
+			    !is_idle_task(t)) {
+				t->rcu_tasks_nvcsw = ACCESS_ONCE(t->nvcsw);
+				t->rcu_tasks_holdout = 1;
+				list_add(&t->rcu_tasks_holdout_list,
+					 &rcu_tasks_holdouts);
+			}
+		}
+		rcu_read_unlock();
+
+		/*
+		 * The "t != current" and "!is_idle_task()" comparisons
+		 * above are stable, but the "t->on_rq" value could
+		 * change at any time, and is generally unordered.
+		 * Therefore, we need some ordering.  The trick is
+		 * that t->on_rq is updated with a runqueue lock held,
+		 * and thus with interrupts disabled.  So the following
+		 * synchronize_sched() provides the needed ordering by:
+		 * (1) Waiting for all interrupts-disabled code sections
+		 * to complete and (2) The synchronize_sched() ordering
+		 * guarantees, which provide for a memory barrier on each
+		 * CPU since the completion of its last read-side critical
+		 * section, including interrupt-disabled code sections.
+		 */
+		synchronize_sched();
+
+		/*
+		 * Each pass through the following loop scans the list
+		 * of holdout tasks, removing any that are no longer
+		 * holdouts.  When the list is empty, we are done.
+		 */
+		while (!list_empty(&rcu_tasks_holdouts)) {
+			schedule_timeout_interruptible(HZ / 10);
+			flush_signals(current);
+			rcu_read_lock();
+			list_for_each_entry_rcu(t, &rcu_tasks_holdouts,
+						rcu_tasks_holdout_list) {
+				if (smp_load_acquire(&t->rcu_tasks_holdout)) {
+					if (t->rcu_tasks_nvcsw ==
+					    ACCESS_ONCE(t->nvcsw))
+						continue;
+					ACCESS_ONCE(t->rcu_tasks_holdout) = 0;
+				}
+				list_del_init(&t->rcu_tasks_holdout_list);
+				/* @@@ need to check for usermode on CPU. */
+			}
+			rcu_read_unlock();
+		}
+
+		/*
+		 * Because ->nvcsw is not guaranteed to have a full
+		 * memory barrier prior to it in the schedule() path,
+		 * memory reordering on other CPUs could cause their
+		 * RCU-tasks read-side critical sections to extend past
+		 * the end of the grace period.  However, because these
+		 * ->nvcsw updates are carried out with interrupts
+		 * disabled, we can use synchronize_sched() to force
+		 * the needed ordering on all such CPUs.
+		 */
+		synchronize_sched();
+
+		/* Invoke the callbacks. */
+		while (list) {
+			next = list->next;
+			local_bh_disable();
+			list->func(list);
+			local_bh_enable();
+			list = next;
+			cond_resched();
+		}
+	}
+}
+
+/* Spawn rcu_tasks_kthread() at boot time. */
+static int __init rcu_spawn_tasks_kthread(void)
+{
+	struct task_struct __maybe_unused *t;
+
+	t = kthread_run(rcu_tasks_kthread, NULL, "rcu_tasks_kthread");
+	BUG_ON(IS_ERR(t));
+	return 0;
+}
+early_initcall(rcu_spawn_tasks_kthread);
+
+#endif /* #ifdef CONFIG_TASKS_RCU */
-- 
1.8.1.5



* [PATCH v2 tip/core/rcu 02/10] rcu: Provide cond_resched_rcu_qs() to force quiescent states in long loops
  2014-07-31  0:39 ` [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks() Paul E. McKenney
@ 2014-07-31  0:39   ` Paul E. McKenney
  2014-07-31  0:39   ` [PATCH v2 tip/core/rcu 03/10] rcu: Add synchronous grace-period waiting for RCU-tasks Paul E. McKenney
                     ` (9 subsequent siblings)
  10 siblings, 0 replies; 42+ messages in thread
From: Paul E. McKenney @ 2014-07-31  0:39 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, tglx,
	peterz, rostedt, dhowells, edumazet, dvhart, fweisbec, oleg,
	bobby.prani, Paul E. McKenney

From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

RCU-tasks requires the occasional voluntary context switch
from CPU-bound in-kernel tasks.  In some cases, this requires
instrumenting cond_resched().  However, there is some reluctance
to countenance unconditionally instrumenting cond_resched() (see
http://lwn.net/Articles/603252/), so this commit creates a separate
cond_resched_rcu_qs() that may be used in place of cond_resched() in
locations prone to long-duration in-kernel looping.

This commit currently instruments only RCU-tasks.  Future possibilities
include also instrumenting RCU, RCU-bh, and RCU-sched in order to reduce
IPI usage.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 fs/file.c                |  2 +-
 include/linux/rcupdate.h | 13 +++++++++++++
 kernel/rcu/rcutorture.c  |  4 ++--
 kernel/rcu/tree.c        | 12 ++++++------
 kernel/rcu/tree_plugin.h |  2 +-
 mm/mlock.c               |  2 +-
 6 files changed, 24 insertions(+), 11 deletions(-)

diff --git a/fs/file.c b/fs/file.c
index 66923fe3176e..1cafc4c9275b 100644
--- a/fs/file.c
+++ b/fs/file.c
@@ -367,7 +367,7 @@ static struct fdtable *close_files(struct files_struct * files)
 				struct file * file = xchg(&fdt->fd[i], NULL);
 				if (file) {
 					filp_close(file, files);
-					cond_resched();
+					cond_resched_rcu_qs();
 				}
 			}
 			i++;
diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index 05656b504295..3299ff98ad03 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -330,6 +330,19 @@ static inline void rcu_user_hooks_switch(struct task_struct *prev,
 #define rcu_note_voluntary_context_switch(t)	do { } while (0)
 #endif /* #else #ifdef CONFIG_TASKS_RCU */
 
+/**
+ * cond_resched_rcu_qs - Report potential quiescent states to RCU
+ *
+ * This macro resembles cond_resched(), except that it is defined to
+ * report potential quiescent states to RCU-tasks even if the cond_resched()
+ * machinery were to be shut off, as some advocate for PREEMPT kernels.
+ */
+#define cond_resched_rcu_qs() \
+do { \
+	rcu_note_voluntary_context_switch(current); \
+	cond_resched(); \
+} while (0)
+
 #if defined(CONFIG_DEBUG_LOCK_ALLOC) || defined(CONFIG_RCU_TRACE) || defined(CONFIG_SMP)
 bool __rcu_is_watching(void);
 #endif /* #if defined(CONFIG_DEBUG_LOCK_ALLOC) || defined(CONFIG_RCU_TRACE) || defined(CONFIG_SMP) */
diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
index 7fa34f86e5ba..febe07062ac5 100644
--- a/kernel/rcu/rcutorture.c
+++ b/kernel/rcu/rcutorture.c
@@ -667,7 +667,7 @@ static int rcu_torture_boost(void *arg)
 				}
 				call_rcu_time = jiffies;
 			}
-			cond_resched();
+			cond_resched_rcu_qs();
 			stutter_wait("rcu_torture_boost");
 			if (torture_must_stop())
 				goto checkwait;
@@ -1019,7 +1019,7 @@ rcu_torture_reader(void *arg)
 		__this_cpu_inc(rcu_torture_batch[completed]);
 		preempt_enable();
 		cur_ops->readunlock(idx);
-		cond_resched();
+		cond_resched_rcu_qs();
 		stutter_wait("rcu_torture_reader");
 	} while (!torture_must_stop());
 	if (irqreader && cur_ops->irq_capable) {
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index f958c52f644d..645a33efc0d4 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -1650,7 +1650,7 @@ static int rcu_gp_init(struct rcu_state *rsp)
 		    system_state == SYSTEM_RUNNING)
 			udelay(200);
 #endif /* #ifdef CONFIG_PROVE_RCU_DELAY */
-		cond_resched();
+		cond_resched_rcu_qs();
 	}
 
 	mutex_unlock(&rsp->onoff_mutex);
@@ -1739,7 +1739,7 @@ static void rcu_gp_cleanup(struct rcu_state *rsp)
 		/* smp_mb() provided by prior unlock-lock pair. */
 		nocb += rcu_future_gp_cleanup(rsp, rnp);
 		raw_spin_unlock_irq(&rnp->lock);
-		cond_resched();
+		cond_resched_rcu_qs();
 	}
 	rnp = rcu_get_root(rsp);
 	raw_spin_lock_irq(&rnp->lock);
@@ -1788,7 +1788,7 @@ static int __noreturn rcu_gp_kthread(void *arg)
 			/* Locking provides needed memory barrier. */
 			if (rcu_gp_init(rsp))
 				break;
-			cond_resched();
+			cond_resched_rcu_qs();
 			flush_signals(current);
 			trace_rcu_grace_period(rsp->name,
 					       ACCESS_ONCE(rsp->gpnum),
@@ -1831,10 +1831,10 @@ static int __noreturn rcu_gp_kthread(void *arg)
 				trace_rcu_grace_period(rsp->name,
 						       ACCESS_ONCE(rsp->gpnum),
 						       TPS("fqsend"));
-				cond_resched();
+				cond_resched_rcu_qs();
 			} else {
 				/* Deal with stray signal. */
-				cond_resched();
+				cond_resched_rcu_qs();
 				flush_signals(current);
 				trace_rcu_grace_period(rsp->name,
 						       ACCESS_ONCE(rsp->gpnum),
@@ -2437,7 +2437,7 @@ static void force_qs_rnp(struct rcu_state *rsp,
 	struct rcu_node *rnp;
 
 	rcu_for_each_leaf_node(rsp, rnp) {
-		cond_resched();
+		cond_resched_rcu_qs();
 		mask = 0;
 		raw_spin_lock_irqsave(&rnp->lock, flags);
 		smp_mb__after_unlock_lock();
diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index 02ac0fb186b8..a86a363ea453 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -1842,7 +1842,7 @@ static int rcu_oom_notify(struct notifier_block *self,
 	get_online_cpus();
 	for_each_online_cpu(cpu) {
 		smp_call_function_single(cpu, rcu_oom_notify_cpu, NULL, 1);
-		cond_resched();
+		cond_resched_rcu_qs();
 	}
 	put_online_cpus();
 
diff --git a/mm/mlock.c b/mm/mlock.c
index b1eb53634005..bc386a22d647 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -782,7 +782,7 @@ static int do_mlockall(int flags)
 
 		/* Ignore errors */
 		mlock_fixup(vma, &prev, vma->vm_start, vma->vm_end, newflags);
-		cond_resched();
+		cond_resched_rcu_qs();
 	}
 out:
 	return 0;
-- 
1.8.1.5



* [PATCH v2 tip/core/rcu 03/10] rcu: Add synchronous grace-period waiting for RCU-tasks
  2014-07-31  0:39 ` [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks() Paul E. McKenney
  2014-07-31  0:39   ` [PATCH v2 tip/core/rcu 02/10] rcu: Provide cond_resched_rcu_qs() to force quiescent states in long loops Paul E. McKenney
@ 2014-07-31  0:39   ` Paul E. McKenney
  2014-07-31 16:58     ` josh
  2014-07-31  0:39   ` [PATCH v2 tip/core/rcu 04/10] rcu: Export RCU-tasks APIs to GPL modules Paul E. McKenney
                     ` (8 subsequent siblings)
  10 siblings, 1 reply; 42+ messages in thread
From: Paul E. McKenney @ 2014-07-31  0:39 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, tglx,
	peterz, rostedt, dhowells, edumazet, dvhart, fweisbec, oleg,
	bobby.prani, Paul E. McKenney

From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

It turns out to be easier to add the synchronous grace-period waiting
functions to RCU-tasks than to work around their absence in rcutorture,
so this commit adds them.  The key point is that the existence of
call_rcu_tasks() means that rcutorture needs an rcu_barrier_tasks().

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 include/linux/rcupdate.h |  2 ++
 kernel/rcu/update.c      | 55 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 57 insertions(+)

diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index 3299ff98ad03..17c7e25c38be 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -216,6 +216,8 @@ void synchronize_sched(void);
  * memory ordering guarantees.
  */
 void call_rcu_tasks(struct rcu_head *head, void (*func)(struct rcu_head *head));
+void synchronize_rcu_tasks(void);
+void rcu_barrier_tasks(void);
 
 #ifdef CONFIG_PREEMPT_RCU
 
diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
index b92268647a01..c8d304dc6d8a 100644
--- a/kernel/rcu/update.c
+++ b/kernel/rcu/update.c
@@ -387,6 +387,61 @@ void call_rcu_tasks(struct rcu_head *rhp, void (*func)(struct rcu_head *rhp))
 }
 EXPORT_SYMBOL_GPL(call_rcu_tasks);
 
+/**
+ * synchronize_rcu_tasks - wait until an rcu-tasks grace period has elapsed.
+ *
+ * Control will return to the caller some time after a full rcu-tasks
+ * grace period has elapsed, in other words after all currently
+ * executing rcu-tasks read-side critical sections have completed.  These
+ * read-side critical sections are delimited by calls to schedule(),
+ * cond_resched_rcu_qs(), idle execution, userspace execution, calls
+ * to synchronize_rcu_tasks(), and (in theory, anyway) cond_resched().
+ *
+ * This is a very specialized primitive, intended only for a few uses in
+ * tracing and other situations requiring manipulation of function
+ * preambles and profiling hooks.  The synchronize_rcu_tasks() function
+ * is not (yet) intended for heavy use from multiple CPUs.
+ *
+ * Note that this guarantee implies further memory-ordering guarantees.
+ * On systems with more than one CPU, when synchronize_rcu_tasks() returns,
+ * each CPU is guaranteed to have executed a full memory barrier since the
+ * end of its last RCU-tasks read-side critical section whose beginning
+ * preceded the call to synchronize_rcu_tasks().  In addition, each CPU
+ * having an RCU-tasks read-side critical section that extends beyond
+ * the return from synchronize_rcu_tasks() is guaranteed to have executed
+ * a full memory barrier after the beginning of synchronize_rcu_tasks()
+ * and before the beginning of that RCU-tasks read-side critical section.
+ * Note that these guarantees include CPUs that are offline, idle, or
+ * executing in user mode, as well as CPUs that are executing in the kernel.
+ *
+ * Furthermore, if CPU A invoked synchronize_rcu_tasks(), which returned
+ * to its caller on CPU B, then both CPU A and CPU B are guaranteed
+ * to have executed a full memory barrier during the execution of
+ * synchronize_rcu_tasks() -- even if CPU A and CPU B are the same CPU
+ * (but again only if the system has more than one CPU).
+ */
+void synchronize_rcu_tasks(void)
+{
+	/* Complain if the scheduler has not started.  */
+	rcu_lockdep_assert(rcu_scheduler_active,
+			   "synchronize_rcu_tasks called too soon");
+
+	/* Wait for the grace period. */
+	wait_rcu_gp(call_rcu_tasks);
+}
+
+/**
+ * rcu_barrier_tasks - Wait for in-flight call_rcu_tasks() callbacks.
+ *
+ * Although the current implementation is guaranteed to wait, it is not
+ * obligated to, for example, if there are no pending callbacks.
+ */
+void rcu_barrier_tasks(void)
+{
+	/* There is only one callback queue, so this is easy.  ;-) */
+	synchronize_rcu_tasks();
+}
+
 /* RCU-tasks kthread that detects grace periods and invokes callbacks. */
 static int __noreturn rcu_tasks_kthread(void *arg)
 {
-- 
1.8.1.5



* [PATCH v2 tip/core/rcu 04/10] rcu: Export RCU-tasks APIs to GPL modules
  2014-07-31  0:39 ` [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks() Paul E. McKenney
  2014-07-31  0:39   ` [PATCH v2 tip/core/rcu 02/10] rcu: Provide cond_resched_rcu_qs() to force quiescent states in long loops Paul E. McKenney
  2014-07-31  0:39   ` [PATCH v2 tip/core/rcu 03/10] rcu: Add synchronous grace-period waiting for RCU-tasks Paul E. McKenney
@ 2014-07-31  0:39   ` Paul E. McKenney
  2014-07-31 16:56     ` josh
  2014-07-31  0:39   ` [PATCH v2 tip/core/rcu 05/10] rcutorture: Add torture tests for RCU-tasks Paul E. McKenney
                     ` (7 subsequent siblings)
  10 siblings, 1 reply; 42+ messages in thread
From: Paul E. McKenney @ 2014-07-31  0:39 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, tglx,
	peterz, rostedt, dhowells, edumazet, dvhart, fweisbec, oleg,
	bobby.prani, Paul E. McKenney

From: Steven Rostedt <rostedt@goodmis.org>

This commit exports the RCU-tasks APIs, call_rcu_tasks(),
synchronize_rcu_tasks(), and rcu_barrier_tasks(), to GPL-licensed
kernel modules.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 kernel/rcu/update.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
index c8d304dc6d8a..1bfc07ed854e 100644
--- a/kernel/rcu/update.c
+++ b/kernel/rcu/update.c
@@ -429,6 +429,7 @@ void synchronize_rcu_tasks(void)
 	/* Wait for the grace period. */
 	wait_rcu_gp(call_rcu_tasks);
 }
+EXPORT_SYMBOL_GPL(synchronize_rcu_tasks);
 
 /**
  * rcu_barrier_tasks - Wait for in-flight call_rcu_tasks() callbacks.
@@ -441,6 +442,7 @@ void rcu_barrier_tasks(void)
 	/* There is only one callback queue, so this is easy.  ;-) */
 	synchronize_rcu_tasks();
 }
+EXPORT_SYMBOL_GPL(rcu_barrier_tasks);
 
 /* RCU-tasks kthread that detects grace periods and invokes callbacks. */
 static int __noreturn rcu_tasks_kthread(void *arg)
-- 
1.8.1.5



* [PATCH v2 tip/core/rcu 05/10] rcutorture: Add torture tests for RCU-tasks
  2014-07-31  0:39 ` [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks() Paul E. McKenney
                     ` (2 preceding siblings ...)
  2014-07-31  0:39   ` [PATCH v2 tip/core/rcu 04/10] rcu: Export RCU-tasks APIs to GPL modules Paul E. McKenney
@ 2014-07-31  0:39   ` Paul E. McKenney
  2014-07-31 17:01     ` josh
  2014-07-31  0:39   ` [PATCH v2 tip/core/rcu 06/10] rcutorture: Add RCU-tasks test cases Paul E. McKenney
                     ` (6 subsequent siblings)
  10 siblings, 1 reply; 42+ messages in thread
From: Paul E. McKenney @ 2014-07-31  0:39 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, tglx,
	peterz, rostedt, dhowells, edumazet, dvhart, fweisbec, oleg,
	bobby.prani, Paul E. McKenney

From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

This commit adds torture tests for RCU-tasks.  It also fixes a bug that
would segfault for an RCU flavor lacking a callback-barrier function.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 include/linux/rcupdate.h |  1 +
 kernel/rcu/rcutorture.c  | 40 +++++++++++++++++++++++++++++++++++++++-
 2 files changed, 40 insertions(+), 1 deletion(-)

diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index 17c7e25c38be..ecb2198849e0 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -55,6 +55,7 @@ enum rcutorture_type {
 	RCU_FLAVOR,
 	RCU_BH_FLAVOR,
 	RCU_SCHED_FLAVOR,
+	RCU_TASKS_FLAVOR,
 	SRCU_FLAVOR,
 	INVALID_RCU_FLAVOR
 };
diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
index febe07062ac5..6d12ab6675bc 100644
--- a/kernel/rcu/rcutorture.c
+++ b/kernel/rcu/rcutorture.c
@@ -602,6 +602,42 @@ static struct rcu_torture_ops sched_ops = {
 };
 
 /*
+ * Definitions for RCU-tasks torture testing.
+ */
+
+static int tasks_torture_read_lock(void)
+{
+	return 0;
+}
+
+static void tasks_torture_read_unlock(int idx)
+{
+}
+
+static void rcu_tasks_torture_deferred_free(struct rcu_torture *p)
+{
+	call_rcu_tasks(&p->rtort_rcu, rcu_torture_cb);
+}
+
+static struct rcu_torture_ops tasks_ops = {
+	.ttype		= RCU_TASKS_FLAVOR,
+	.init		= rcu_sync_torture_init,
+	.readlock	= tasks_torture_read_lock,
+	.read_delay	= rcu_read_delay,  /* just reuse rcu's version. */
+	.readunlock	= tasks_torture_read_unlock,
+	.completed	= rcu_no_completed,
+	.deferred_free	= rcu_tasks_torture_deferred_free,
+	.sync		= synchronize_rcu_tasks,
+	.exp_sync	= synchronize_rcu_tasks,
+	.call		= call_rcu_tasks,
+	.cb_barrier	= rcu_barrier_tasks,
+	.fqs		= NULL,
+	.stats		= NULL,
+	.irq_capable	= 1,
+	.name		= "tasks"
+};
+
+/*
  * RCU torture priority-boost testing.  Runs one real-time thread per
  * CPU for moderate bursts, repeatedly registering RCU callbacks and
  * spinning waiting for them to be invoked.  If a given callback takes
@@ -1295,7 +1331,8 @@ static int rcu_torture_barrier_cbs(void *arg)
 		if (atomic_dec_and_test(&barrier_cbs_count))
 			wake_up(&barrier_wq);
 	} while (!torture_must_stop());
-	cur_ops->cb_barrier();
+	if (cur_ops->cb_barrier != NULL)
+		cur_ops->cb_barrier();
 	destroy_rcu_head_on_stack(&rcu);
 	torture_kthread_stopping("rcu_torture_barrier_cbs");
 	return 0;
@@ -1534,6 +1571,7 @@ rcu_torture_init(void)
 	int firsterr = 0;
 	static struct rcu_torture_ops *torture_ops[] = {
 		&rcu_ops, &rcu_bh_ops, &rcu_busted_ops, &srcu_ops, &sched_ops,
+		&tasks_ops,
 	};
 
 	if (!torture_init_begin(torture_type, verbose, &rcutorture_runnable))
-- 
1.8.1.5



* [PATCH v2 tip/core/rcu 06/10] rcutorture: Add RCU-tasks test cases
  2014-07-31  0:39 ` [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks() Paul E. McKenney
                     ` (3 preceding siblings ...)
  2014-07-31  0:39   ` [PATCH v2 tip/core/rcu 05/10] rcutorture: Add torture tests for RCU-tasks Paul E. McKenney
@ 2014-07-31  0:39   ` Paul E. McKenney
  2014-07-31  0:39   ` [PATCH v2 tip/core/rcu 07/10] rcu: Add stall-warning checks for RCU-tasks Paul E. McKenney
                     ` (5 subsequent siblings)
  10 siblings, 0 replies; 42+ messages in thread
From: Paul E. McKenney @ 2014-07-31  0:39 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, tglx,
	peterz, rostedt, dhowells, edumazet, dvhart, fweisbec, oleg,
	bobby.prani, Paul E. McKenney

From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

This commit adds the TASKS01 and TASKS02 Kconfig fragments, along with
the corresponding TASKS01.boot and TASKS02.boot boot-parameter files
specifying that rcutorture test RCU-tasks instead of the default flavor.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 tools/testing/selftests/rcutorture/configs/rcu/TASKS01      | 7 +++++++
 tools/testing/selftests/rcutorture/configs/rcu/TASKS01.boot | 1 +
 tools/testing/selftests/rcutorture/configs/rcu/TASKS02      | 6 ++++++
 tools/testing/selftests/rcutorture/configs/rcu/TASKS02.boot | 1 +
 4 files changed, 15 insertions(+)
 create mode 100644 tools/testing/selftests/rcutorture/configs/rcu/TASKS01
 create mode 100644 tools/testing/selftests/rcutorture/configs/rcu/TASKS01.boot
 create mode 100644 tools/testing/selftests/rcutorture/configs/rcu/TASKS02
 create mode 100644 tools/testing/selftests/rcutorture/configs/rcu/TASKS02.boot

diff --git a/tools/testing/selftests/rcutorture/configs/rcu/TASKS01 b/tools/testing/selftests/rcutorture/configs/rcu/TASKS01
new file mode 100644
index 000000000000..263a20f01fae
--- /dev/null
+++ b/tools/testing/selftests/rcutorture/configs/rcu/TASKS01
@@ -0,0 +1,7 @@
+CONFIG_SMP=y
+CONFIG_NR_CPUS=2
+CONFIG_HOTPLUG_CPU=y
+CONFIG_PREEMPT_NONE=n
+CONFIG_PREEMPT_VOLUNTARY=n
+CONFIG_PREEMPT=y
+CONFIG_TASKS_RCU=y
diff --git a/tools/testing/selftests/rcutorture/configs/rcu/TASKS01.boot b/tools/testing/selftests/rcutorture/configs/rcu/TASKS01.boot
new file mode 100644
index 000000000000..cd2a188eeb6d
--- /dev/null
+++ b/tools/testing/selftests/rcutorture/configs/rcu/TASKS01.boot
@@ -0,0 +1 @@
+rcutorture.torture_type=tasks
diff --git a/tools/testing/selftests/rcutorture/configs/rcu/TASKS02 b/tools/testing/selftests/rcutorture/configs/rcu/TASKS02
new file mode 100644
index 000000000000..17b669c8833c
--- /dev/null
+++ b/tools/testing/selftests/rcutorture/configs/rcu/TASKS02
@@ -0,0 +1,6 @@
+CONFIG_SMP=n
+CONFIG_HOTPLUG_CPU=y
+CONFIG_PREEMPT_NONE=y
+CONFIG_PREEMPT_VOLUNTARY=n
+CONFIG_PREEMPT=n
+CONFIG_TASKS_RCU=y
diff --git a/tools/testing/selftests/rcutorture/configs/rcu/TASKS02.boot b/tools/testing/selftests/rcutorture/configs/rcu/TASKS02.boot
new file mode 100644
index 000000000000..cd2a188eeb6d
--- /dev/null
+++ b/tools/testing/selftests/rcutorture/configs/rcu/TASKS02.boot
@@ -0,0 +1 @@
+rcutorture.torture_type=tasks
-- 
1.8.1.5



* [PATCH v2 tip/core/rcu 07/10] rcu: Add stall-warning checks for RCU-tasks
  2014-07-31  0:39 ` [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks() Paul E. McKenney
                     ` (4 preceding siblings ...)
  2014-07-31  0:39   ` [PATCH v2 tip/core/rcu 06/10] rcutorture: Add RCU-tasks test cases Paul E. McKenney
@ 2014-07-31  0:39   ` Paul E. McKenney
  2014-07-31  0:39   ` [PATCH v2 tip/core/rcu 08/10] rcu: Improve RCU-tasks energy efficiency Paul E. McKenney
                     ` (4 subsequent siblings)
  10 siblings, 0 replies; 42+ messages in thread
From: Paul E. McKenney @ 2014-07-31  0:39 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, tglx,
	peterz, rostedt, dhowells, edumazet, dvhart, fweisbec, oleg,
	bobby.prani, Paul E. McKenney

From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

This commit adds a three-minute RCU-tasks stall warning.  The actual
time is controlled by the boot/sysfs parameter rcu_task_stall_timeout,
with values less than or equal to zero disabling the stall warnings.
The default value is three minutes, which means that the tasks that
have not yet responded will get their stacks dumped every three minutes,
until they pass through a voluntary context switch.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>

Conflicts:
	kernel/rcu/update.c
---
 Documentation/kernel-parameters.txt |  5 ++++
 kernel/rcu/update.c                 | 48 +++++++++++++++++++++++++++++--------
 2 files changed, 43 insertions(+), 10 deletions(-)

diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt
index 910c3829f81d..8cdbde7b17f5 100644
--- a/Documentation/kernel-parameters.txt
+++ b/Documentation/kernel-parameters.txt
@@ -2921,6 +2921,11 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
 	rcupdate.rcu_cpu_stall_timeout= [KNL]
 			Set timeout for RCU CPU stall warning messages.
 
+	rcupdate.rcu_task_stall_timeout= [KNL]
+			Set timeout in jiffies for RCU task stall warning
+			messages.  Disable with a value less than or equal
+			to zero.
+
 	rdinit=		[KNL]
 			Format: <full_path>
 			Run specified binary instead of /init from the ramdisk,
diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
index 1bfc07ed854e..aef581854239 100644
--- a/kernel/rcu/update.c
+++ b/kernel/rcu/update.c
@@ -373,6 +373,10 @@ static struct rcu_head *rcu_tasks_cbs_head;
 static struct rcu_head **rcu_tasks_cbs_tail = &rcu_tasks_cbs_head;
 static DEFINE_RAW_SPINLOCK(rcu_tasks_cbs_lock);
 
+/* Control stall timeouts.  Disable with <= 0, otherwise jiffies till stall. */
+static int rcu_task_stall_timeout __read_mostly = HZ * 60 * 3;
+module_param(rcu_task_stall_timeout, int, 0644);
+
 /* Post an RCU-tasks callback. */
 void call_rcu_tasks(struct rcu_head *rhp, void (*func)(struct rcu_head *rhp))
 {
@@ -444,11 +448,32 @@ void rcu_barrier_tasks(void)
 }
 EXPORT_SYMBOL_GPL(rcu_barrier_tasks);
 
+/* See if tasks are still holding out, complain if so. */
+static void check_holdout_task(struct task_struct *t,
+			       bool needreport, bool *firstreport)
+{
+	if (!smp_load_acquire(&t->rcu_tasks_holdout) ||
+	    t->rcu_tasks_nvcsw != ACCESS_ONCE(t->nvcsw)) {
+		ACCESS_ONCE(t->rcu_tasks_holdout) = 0;
+		/* @@@ need to check for usermode on CPU. */
+		list_del_rcu(&t->rcu_tasks_holdout_list);
+		return;
+	}
+	if (!needreport)
+		return;
+	if (*firstreport) {
+		pr_err("INFO: rcu_tasks detected stalls on tasks:\n");
+		*firstreport = false;
+	}
+	sched_show_task(current);
+}
+
 /* RCU-tasks kthread that detects grace periods and invokes callbacks. */
 static int __noreturn rcu_tasks_kthread(void *arg)
 {
 	unsigned long flags;
 	struct task_struct *g, *t;
+	unsigned long lastreport;
 	struct rcu_head *list;
 	struct rcu_head *next;
 
@@ -517,21 +542,24 @@ static int __noreturn rcu_tasks_kthread(void *arg)
 		 * of holdout tasks, removing any that are no longer
 		 * holdouts.  When the list is empty, we are done.
 		 */
+		lastreport = jiffies;
 		while (!list_empty(&rcu_tasks_holdouts)) {
+			bool firstreport;
+			bool needreport;
+			int rtst;
+
 			schedule_timeout_interruptible(HZ / 10);
+			rtst = ACCESS_ONCE(rcu_task_stall_timeout);
+			needreport = rtst > 0 &&
+				     time_after(jiffies, lastreport + rtst);
+			if (needreport)
+				lastreport = jiffies;
+			firstreport = true;
 			flush_signals(current);
 			rcu_read_lock();
 			list_for_each_entry_rcu(t, &rcu_tasks_holdouts,
-						rcu_tasks_holdout_list) {
-				if (smp_load_acquire(&t->rcu_tasks_holdout)) {
-					if (t->rcu_tasks_nvcsw ==
-					    ACCESS_ONCE(t->nvcsw))
-						continue;
-					ACCESS_ONCE(t->rcu_tasks_holdout) = 0;
-				}
-				list_del_init(&t->rcu_tasks_holdout_list);
-				/* @@@ need to check for usermode on CPU. */
-			}
+						rcu_tasks_holdout_list)
+				check_holdout_task(t, needreport, &firstreport);
 			rcu_read_unlock();
 		}
 
-- 
1.8.1.5



* [PATCH v2 tip/core/rcu 08/10] rcu: Improve RCU-tasks energy efficiency
  2014-07-31  0:39 ` [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks() Paul E. McKenney
                     ` (5 preceding siblings ...)
  2014-07-31  0:39   ` [PATCH v2 tip/core/rcu 07/10] rcu: Add stall-warning checks for RCU-tasks Paul E. McKenney
@ 2014-07-31  0:39   ` Paul E. McKenney
  2014-07-31  0:39   ` [PATCH v2 tip/core/rcu 09/10] documentation: Add verbiage on RCU-tasks stall warning messages Paul E. McKenney
                     ` (3 subsequent siblings)
  10 siblings, 0 replies; 42+ messages in thread
From: Paul E. McKenney @ 2014-07-31  0:39 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, tglx,
	peterz, rostedt, dhowells, edumazet, dvhart, fweisbec, oleg,
	bobby.prani, Paul E. McKenney

From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

The current RCU-tasks implementation uses strict polling to detect
callback arrivals.  This works quite well, but is not so good for
energy efficiency.  This commit therefore replaces the strict polling
with a wait queue.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 kernel/rcu/update.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
index aef581854239..030494690c93 100644
--- a/kernel/rcu/update.c
+++ b/kernel/rcu/update.c
@@ -371,6 +371,7 @@ static LIST_HEAD(rcu_tasks_holdouts);
 /* Global list of callbacks and associated lock. */
 static struct rcu_head *rcu_tasks_cbs_head;
 static struct rcu_head **rcu_tasks_cbs_tail = &rcu_tasks_cbs_head;
+static DECLARE_WAIT_QUEUE_HEAD(rcu_tasks_cbs_wq);
 static DEFINE_RAW_SPINLOCK(rcu_tasks_cbs_lock);
 
 /* Control stall timeouts.  Disable with <= 0, otherwise jiffies till stall. */
@@ -381,13 +382,17 @@ module_param(rcu_task_stall_timeout, int, 0644);
 void call_rcu_tasks(struct rcu_head *rhp, void (*func)(struct rcu_head *rhp))
 {
 	unsigned long flags;
+	bool needwake;
 
 	rhp->next = NULL;
 	rhp->func = func;
 	raw_spin_lock_irqsave(&rcu_tasks_cbs_lock, flags);
+	needwake = !rcu_tasks_cbs_head;
 	*rcu_tasks_cbs_tail = rhp;
 	rcu_tasks_cbs_tail = &rhp->next;
 	raw_spin_unlock_irqrestore(&rcu_tasks_cbs_lock, flags);
+	if (needwake)
+		wake_up(&rcu_tasks_cbs_wq);
 }
 EXPORT_SYMBOL_GPL(call_rcu_tasks);
 
@@ -497,8 +502,12 @@ static int __noreturn rcu_tasks_kthread(void *arg)
 
 		/* If there were none, wait a bit and start over. */
 		if (!list) {
-			schedule_timeout_interruptible(HZ);
-			flush_signals(current);
+			wait_event_interruptible(rcu_tasks_cbs_wq,
+						 rcu_tasks_cbs_head);
+			if (!rcu_tasks_cbs_head) {
+				flush_signals(current);
+				schedule_timeout_interruptible(HZ/10);
+			}
 			continue;
 		}
 
@@ -584,6 +593,7 @@ static int __noreturn rcu_tasks_kthread(void *arg)
 			list = next;
 			cond_resched();
 		}
+		schedule_timeout_uninterruptible(HZ/10);
 	}
 }
 
-- 
1.8.1.5



* [PATCH v2 tip/core/rcu 09/10] documentation: Add verbiage on RCU-tasks stall warning messages
  2014-07-31  0:39 ` [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks() Paul E. McKenney
                     ` (6 preceding siblings ...)
  2014-07-31  0:39   ` [PATCH v2 tip/core/rcu 08/10] rcu: Improve RCU-tasks energy efficiency Paul E. McKenney
@ 2014-07-31  0:39   ` Paul E. McKenney
  2014-07-31  0:39   ` [PATCH v2 tip/core/rcu 10/10] rcu: Make RCU-tasks track exiting tasks Paul E. McKenney
                     ` (2 subsequent siblings)
  10 siblings, 0 replies; 42+ messages in thread
From: Paul E. McKenney @ 2014-07-31  0:39 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, tglx,
	peterz, rostedt, dhowells, edumazet, dvhart, fweisbec, oleg,
	bobby.prani, Paul E. McKenney

From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

This commit documents RCU-tasks stall warning messages and also describes
when to use the new cond_resched_rcu_qs() API.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 Documentation/RCU/stallwarn.txt | 33 ++++++++++++++++++++++++---------
 1 file changed, 24 insertions(+), 9 deletions(-)

diff --git a/Documentation/RCU/stallwarn.txt b/Documentation/RCU/stallwarn.txt
index 68fe3ad27015..ef5a2fd4ff70 100644
--- a/Documentation/RCU/stallwarn.txt
+++ b/Documentation/RCU/stallwarn.txt
@@ -56,8 +56,20 @@ RCU_STALL_RAT_DELAY
 	two jiffies.  (This is a cpp macro, not a kernel configuration
 	parameter.)
 
-When a CPU detects that it is stalling, it will print a message similar
-to the following:
+rcupdate.rcu_task_stall_timeout
+
+	This boot/sysfs parameter controls the RCU-tasks stall warning
+	interval.  A value of zero or less suppresses RCU-tasks stall
+	warnings.  A positive value sets the stall-warning interval
+	in jiffies.  An RCU-tasks stall warning starts with the line:
+
+		INFO: rcu_tasks detected stalls on tasks:
+
+	And continues with the output of sched_show_task() for each
+	task stalling the current RCU-tasks grace period.
+
+For non-RCU-tasks flavors of RCU, when a CPU detects that it is stalling,
+it will print a message similar to the following:
 
 INFO: rcu_sched_state detected stall on CPU 5 (t=2500 jiffies)
 
@@ -174,8 +186,12 @@ o	A CPU looping with preemption disabled.  This condition can
 o	A CPU looping with bottom halves disabled.  This condition can
 	result in RCU-sched and RCU-bh stalls.
 
-o	For !CONFIG_PREEMPT kernels, a CPU looping anywhere in the kernel
-	without invoking schedule().
+o	For !CONFIG_PREEMPT kernels, a CPU looping anywhere in the
+	kernel without invoking schedule().  Note that cond_resched()
+	does not necessarily prevent RCU CPU stall warnings.  Therefore,
+	if the looping in the kernel is really expected and desirable
+	behavior, you might need to replace some of the cond_resched()
+	calls with calls to cond_resched_rcu_qs().
 
 o	A CPU-bound real-time task in a CONFIG_PREEMPT kernel, which might
 	happen to preempt a low-priority task in the middle of an RCU
@@ -208,11 +224,10 @@ o	A hardware failure.  This is quite unlikely, but has occurred
 	This resulted in a series of RCU CPU stall warnings, eventually
 	leading the realization that the CPU had failed.
 
-The RCU, RCU-sched, and RCU-bh implementations have CPU stall warning.
-SRCU does not have its own CPU stall warnings, but its calls to
-synchronize_sched() will result in RCU-sched detecting RCU-sched-related
-CPU stalls.  Please note that RCU only detects CPU stalls when there is
-a grace period in progress.  No grace period, no CPU stall warnings.
+The RCU, RCU-sched, RCU-bh, and RCU-tasks implementations have CPU stall
+warnings.  Note that SRCU does -not- have CPU stall warnings.  Please note
+that RCU only detects CPU stalls when there is a grace period in progress.
+No grace period, no CPU stall warnings.
 
 To diagnose the cause of the stall, inspect the stack traces.
 The offending function will usually be near the top of the stack.
-- 
1.8.1.5



* [PATCH v2 tip/core/rcu 10/10] rcu: Make RCU-tasks track exiting tasks
  2014-07-31  0:39 ` [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks() Paul E. McKenney
                     ` (7 preceding siblings ...)
  2014-07-31  0:39   ` [PATCH v2 tip/core/rcu 09/10] documentation: Add verbiage on RCU-tasks stall warning messages Paul E. McKenney
@ 2014-07-31  0:39   ` Paul E. McKenney
  2014-07-31  7:30   ` [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks() Lai Jiangshan
  2014-08-01 15:53   ` Oleg Nesterov
  10 siblings, 0 replies; 42+ messages in thread
From: Paul E. McKenney @ 2014-07-31  0:39 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, tglx,
	peterz, rostedt, dhowells, edumazet, dvhart, fweisbec, oleg,
	bobby.prani, Paul E. McKenney

From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

This commit adds synchronization with exiting tasks, so that RCU-tasks
avoids waiting on tasks that no longer exist.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 include/linux/init_task.h |   6 +-
 include/linux/rcupdate.h  |   8 +++
 include/linux/sched.h     |   7 ++-
 kernel/exit.c             |   1 +
 kernel/rcu/update.c       | 155 +++++++++++++++++++++++++++++++++++-----------
 5 files changed, 138 insertions(+), 39 deletions(-)

diff --git a/include/linux/init_task.h b/include/linux/init_task.h
index 78715ea7c30c..26322200937d 100644
--- a/include/linux/init_task.h
+++ b/include/linux/init_task.h
@@ -127,8 +127,10 @@ extern struct group_info init_groups;
 #ifdef CONFIG_TASKS_RCU
 #define INIT_TASK_RCU_TASKS(tsk)					\
 	.rcu_tasks_holdout = false,					\
-	.rcu_tasks_holdout_list =					\
-		LIST_HEAD_INIT(tsk.rcu_tasks_holdout_list),
+	.rcu_tasks_holdout_list.prev = LIST_POISON2,			\
+	.rcu_tasks_lock = __SPIN_LOCK_UNLOCKED(tsk.rcu_tasks_lock),	\
+	.rcu_tasks_exiting = 0,						\
+	.rcu_tasks_exit_wq = NULL,
 #else
 #define INIT_TASK_RCU_TASKS(tsk)
 #endif
diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index ecb2198849e0..0805a74f88ca 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -292,6 +292,14 @@ static inline void rcu_user_hooks_switch(struct task_struct *prev,
 					 struct task_struct *next) { }
 #endif /* CONFIG_RCU_USER_QS */
 
+#ifdef CONFIG_TASKS_RCU
+void exit_rcu_tasks(void);
+#else /* #ifdef CONFIG_TASKS_RCU */
+static inline void exit_rcu_tasks(void)
+{
+}
+#endif /* #else #ifdef CONFIG_TASKS_RCU */
+
 /**
  * RCU_NONIDLE - Indicate idle-loop code that needs RCU readers
  * @a: Code that RCU needs to pay attention to.
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 3cf124389ec7..8c02508c9e47 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1277,6 +1277,9 @@ struct task_struct {
 	unsigned long rcu_tasks_nvcsw;
 	int rcu_tasks_holdout;
 	struct list_head rcu_tasks_holdout_list;
+	spinlock_t rcu_tasks_lock;
+	int rcu_tasks_exiting;
+	wait_queue_head_t *rcu_tasks_exit_wq;
 #endif /* #ifdef CONFIG_TASKS_RCU */
 
 #if defined(CONFIG_SCHEDSTATS) || defined(CONFIG_TASK_DELAY_ACCT)
@@ -2020,7 +2023,9 @@ static inline void rcu_copy_process(struct task_struct *p)
 #endif /* #ifdef CONFIG_PREEMPT_RCU */
 #ifdef CONFIG_TASKS_RCU
 	p->rcu_tasks_holdout = false;
-	INIT_LIST_HEAD(&p->rcu_tasks_holdout_list);
+	p->rcu_tasks_holdout_list.prev = LIST_POISON2;
+	spin_lock_init(&p->rcu_tasks_lock);
+	p->rcu_tasks_exit_wq = NULL;
 #endif /* #ifdef CONFIG_TASKS_RCU */
 }
 
diff --git a/kernel/exit.c b/kernel/exit.c
index e5c4668f1799..b50b1afc4092 100644
--- a/kernel/exit.c
+++ b/kernel/exit.c
@@ -805,6 +805,7 @@ void do_exit(long code)
 		put_page(tsk->task_frag.page);
 
 	validate_creds_for_do_exit(tsk);
+	exit_rcu_tasks();
 
 	check_stack_usage();
 	preempt_disable();
diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
index 030494690c93..9d2cf41f3161 100644
--- a/kernel/rcu/update.c
+++ b/kernel/rcu/update.c
@@ -453,15 +453,103 @@ void rcu_barrier_tasks(void)
 }
 EXPORT_SYMBOL_GPL(rcu_barrier_tasks);
 
+/*
+ * Note an RCU-tasks quiescent state, which might require interacting
+ * with an exiting task.
+ */
+static void rcu_tasks_note_qs(struct task_struct *t)
+{
+	spin_lock(&t->rcu_tasks_lock);
+	list_del_rcu(&t->rcu_tasks_holdout_list);
+	t->rcu_tasks_holdout = 0;
+	if (t->rcu_tasks_exit_wq)
+		wake_up(t->rcu_tasks_exit_wq);
+	spin_unlock(&t->rcu_tasks_lock);
+}
+
+/*
+ * Build the list of tasks that must be waited on for this RCU-tasks
+ * grace period.  Note that we must wait for pre-existing exiting tasks
+ * to finish exiting in order to avoid the ABA problem.
+ */
+static void rcu_tasks_build_list(void)
+{
+	struct task_struct *g, *t;
+	int n_exiting = 0;
+
+	/*
+	 * Wait for all pre-existing t->on_rq transitions to complete.
+	 * Invoking synchronize_sched() suffices because all t->on_rq
+	 * transitions occur with interrupts disabled.
+	 */
+	synchronize_sched();
+
+	/*
+	 * Scan the task list under RCU protection, accumulating
+	 * tasks that are currently running or preempted that are
+	 * not also in the process of exiting.
+	 */
+	rcu_read_lock();
+	for_each_process_thread(g, t) {
+		/* Acquire this thread's lock to synchronize with exit. */
+		spin_lock(&t->rcu_tasks_lock);
+		/* Assume that we must wait for this task. */
+		t->rcu_tasks_nvcsw = ACCESS_ONCE(t->nvcsw);
+		ACCESS_ONCE(t->rcu_tasks_holdout) = 1;
+		if (t->rcu_tasks_exiting) {
+			/*
+			 * Task is exiting, so don't add to list.  Instead,
+			 * set up to wait for its exiting to complete.
+			 */
+			n_exiting++;
+			t->rcu_tasks_exiting = 1; /* Task already exiting. */
+			spin_unlock(&t->rcu_tasks_lock);
+			goto next_thread;
+		}
+
+		spin_unlock(&t->rcu_tasks_lock);
+		smp_mb();  /* Order ->rcu_tasks_holdout store before "if". */
+		if (t == current || !ACCESS_ONCE(t->on_rq) || is_idle_task(t))
+			smp_store_release(&t->rcu_tasks_holdout, 0);
+		else
+			list_add_tail_rcu(&t->rcu_tasks_holdout_list,
+					  &rcu_tasks_holdouts);
+next_thread:;
+	}
+	rcu_read_unlock();
+
+	/*
+	 * OK, we have our candidate list of threads.  Now wait for
+	 * the threads that were in the process of exiting to finish
+	 * doing so.
+	 */
+	while (n_exiting) {
+		n_exiting = 0;
+		rcu_read_lock();
+		for_each_process_thread(g, t) {
+			int am_exiting = ACCESS_ONCE(t->rcu_tasks_exiting);
+
+			if (am_exiting == 1 &&
+			    ACCESS_ONCE(t->rcu_tasks_holdout)) {
+				n_exiting++; /* Started exit before GP. */
+			} else if (am_exiting == 2) {
+				/* Holdout exited after GP, dequeue & wake. */
+				rcu_tasks_note_qs(t);
+			}
+		}
+		rcu_read_unlock();
+		schedule_timeout_interruptible(HZ / 10);
+	}
+}
+
 /* See if tasks are still holding out, complain if so. */
 static void check_holdout_task(struct task_struct *t,
 			       bool needreport, bool *firstreport)
 {
 	if (!smp_load_acquire(&t->rcu_tasks_holdout) ||
-	    t->rcu_tasks_nvcsw != ACCESS_ONCE(t->nvcsw)) {
-		ACCESS_ONCE(t->rcu_tasks_holdout) = 0;
-		/* @@@ need to check for usermode on CPU. */
-		list_del_rcu(&t->rcu_tasks_holdout_list);
+	    t->rcu_tasks_nvcsw != ACCESS_ONCE(t->nvcsw) ||
+	    !ACCESS_ONCE(t->on_rq)) {
+		rcu_tasks_note_qs(t);
 		return;
 	}
 	if (!needreport)
@@ -477,7 +565,7 @@ static void check_holdout_task(struct task_struct *t,
 static int __noreturn rcu_tasks_kthread(void *arg)
 {
 	unsigned long flags;
-	struct task_struct *g, *t;
+	struct task_struct *t;
 	unsigned long lastreport;
 	struct rcu_head *list;
 	struct rcu_head *next;
@@ -513,38 +601,10 @@ static int __noreturn rcu_tasks_kthread(void *arg)
 
 		/*
 		 * There were callbacks, so we need to wait for an
-		 * RCU-tasks grace period.  Start off by scanning
-		 * the task list for tasks that are not already
-		 * voluntarily blocked.  Mark these tasks and make
-		 * a list of them in rcu_tasks_holdouts.
-		 */
-		rcu_read_lock();
-		for_each_process_thread(g, t) {
-			if (t != current && ACCESS_ONCE(t->on_rq) &&
-			    !is_idle_task(t)) {
-				t->rcu_tasks_nvcsw = ACCESS_ONCE(t->nvcsw);
-				t->rcu_tasks_holdout = 1;
-				list_add(&t->rcu_tasks_holdout_list,
-					 &rcu_tasks_holdouts);
-			}
-		}
-		rcu_read_unlock();
-
-		/*
-		 * The "t != current" and "!is_idle_task()" comparisons
-		 * above are stable, but the "t->on_rq" value could
-		 * change at any time, and is generally unordered.
-		 * Therefore, we need some ordering.  The trick is
-		 * that t->on_rq is updated with a runqueue lock held,
-		 * and thus with interrupts disabled.  So the following
-		 * synchronize_sched() provides the needed ordering by:
-		 * (1) Waiting for all interrupts-disabled code sections
-		 * to complete and (2) The synchronize_sched() ordering
-		 * guarantees, which provide for a memory barrier on each
-		 * CPU since the completion of its last read-side critical
-		 * section, including interrupt-disabled code sections.
+		 * RCU-tasks grace period.  Go build the list of
+		 * tasks that must be waited for.
 		 */
-		synchronize_sched();
+		rcu_tasks_build_list();
 
 		/*
 		 * Each pass through the following loop scans the list
@@ -608,4 +668,27 @@ static int __init rcu_spawn_tasks_kthread(void)
 }
 early_initcall(rcu_spawn_tasks_kthread);
 
+/*
+ * RCU-tasks hook for exiting tasks.  This hook prevents the current
+ * task from being added to the RCU-tasks list, and also ensures that
+ * any future RCU-tasks grace period will wait for the current task
+ * to finish exiting.
+ */
+void exit_rcu_tasks(void)
+{
+	int exitcode;
+	struct task_struct *t = current;
+	DECLARE_WAIT_QUEUE_HEAD(wq);
+
+	spin_lock(&t->rcu_tasks_lock);
+	exitcode = t->rcu_tasks_holdout + 1;
+	t->rcu_tasks_exiting = exitcode;
+	if (exitcode)
+		t->rcu_tasks_exit_wq = &wq;
+	spin_unlock(&t->rcu_tasks_lock);
+	wait_event(wq,
+		   ACCESS_ONCE(t->rcu_tasks_holdout_list.prev) == LIST_POISON2);
+	t->rcu_tasks_exit_wq = NULL;
+}
+
 #endif /* #ifdef CONFIG_TASKS_RCU */
-- 
1.8.1.5



* Re: [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks()
  2014-07-31  0:39 ` [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks() Paul E. McKenney
                     ` (8 preceding siblings ...)
  2014-07-31  0:39   ` [PATCH v2 tip/core/rcu 10/10] rcu: Make RCU-tasks track exiting tasks Paul E. McKenney
@ 2014-07-31  7:30   ` Lai Jiangshan
  2014-07-31 16:09     ` Paul E. McKenney
  2014-08-01 15:53   ` Oleg Nesterov
  10 siblings, 1 reply; 42+ messages in thread
From: Lai Jiangshan @ 2014-07-31  7:30 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: linux-kernel, mingo, dipankar, akpm, mathieu.desnoyers, josh,
	tglx, peterz, rostedt, dhowells, edumazet, dvhart, fweisbec,
	oleg, bobby.prani

On 07/31/2014 08:39 AM, Paul E. McKenney wrote:
> From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> 
> This commit adds a new RCU-tasks flavor of RCU, which provides
> call_rcu_tasks().  This RCU flavor's quiescent states are voluntary
> context switch (not preemption!), userspace execution, and the idle loop.
> Note that unlike other RCU flavors, these quiescent states occur in tasks,
> not necessarily CPUs.  Includes fixes from Steven Rostedt.
> 
> This RCU flavor is assumed to have very infrequent latency-tolerant
> updaters.  This assumption permits significant simplifications, including
> a single global callback list protected by a single global lock, along
> with a single linked list containing all tasks that have not yet passed
> through a quiescent state.  If experience shows this assumption to be
> incorrect, the required additional complexity will be added.
> 
> Suggested-by: Steven Rostedt <rostedt@goodmis.org>
> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> ---
>  include/linux/init_task.h |   9 +++
>  include/linux/rcupdate.h  |  36 ++++++++++
>  include/linux/sched.h     |  23 +++----
>  init/Kconfig              |  10 +++
>  kernel/rcu/tiny.c         |   2 +
>  kernel/rcu/tree.c         |   2 +
>  kernel/rcu/update.c       | 164 ++++++++++++++++++++++++++++++++++++++++++++++
>  7 files changed, 235 insertions(+), 11 deletions(-)
> 
> diff --git a/include/linux/init_task.h b/include/linux/init_task.h
> index 6df7f9fe0d01..78715ea7c30c 100644
> --- a/include/linux/init_task.h
> +++ b/include/linux/init_task.h
> @@ -124,6 +124,14 @@ extern struct group_info init_groups;
>  #else
>  #define INIT_TASK_RCU_PREEMPT(tsk)
>  #endif
> +#ifdef CONFIG_TASKS_RCU
> +#define INIT_TASK_RCU_TASKS(tsk)					\
> +	.rcu_tasks_holdout = false,					\
> +	.rcu_tasks_holdout_list =					\
> +		LIST_HEAD_INIT(tsk.rcu_tasks_holdout_list),
> +#else
> +#define INIT_TASK_RCU_TASKS(tsk)
> +#endif
>  
>  extern struct cred init_cred;
>  
> @@ -231,6 +239,7 @@ extern struct task_group root_task_group;
>  	INIT_FTRACE_GRAPH						\
>  	INIT_TRACE_RECURSION						\
>  	INIT_TASK_RCU_PREEMPT(tsk)					\
> +	INIT_TASK_RCU_TASKS(tsk)					\
>  	INIT_CPUSET_SEQ(tsk)						\
>  	INIT_RT_MUTEXES(tsk)						\
>  	INIT_VTIME(tsk)							\
> diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> index 6a94cc8b1ca0..05656b504295 100644
> --- a/include/linux/rcupdate.h
> +++ b/include/linux/rcupdate.h
> @@ -197,6 +197,26 @@ void call_rcu_sched(struct rcu_head *head,
>  
>  void synchronize_sched(void);
>  
> +/**
> + * call_rcu_tasks() - Queue an RCU callback for invocation after a task-based grace period
> + * @head: structure to be used for queueing the RCU updates.
> + * @func: actual callback function to be invoked after the grace period
> + *
> + * The callback function will be invoked some time after a full grace
> + * period elapses, in other words after all currently executing RCU
> + * read-side critical sections have completed. call_rcu_tasks() assumes
> + * that the read-side critical sections end at a voluntary context
> + * switch (not a preemption!), entry into idle, or transition to usermode
> + * execution.  As such, there are no read-side primitives analogous to
> + * rcu_read_lock() and rcu_read_unlock() because this primitive is intended
> + * to determine that all tasks have passed through a safe state, not so
> + * much for data-structure synchronization.
> + *
> + * See the description of call_rcu() for more detailed information on
> + * memory ordering guarantees.
> + */
> +void call_rcu_tasks(struct rcu_head *head, void (*func)(struct rcu_head *head));
> +
>  #ifdef CONFIG_PREEMPT_RCU
>  
>  void __rcu_read_lock(void);
> @@ -294,6 +314,22 @@ static inline void rcu_user_hooks_switch(struct task_struct *prev,
>  		rcu_irq_exit(); \
>  	} while (0)
>  
> +/*
> + * Note a voluntary context switch for RCU-tasks benefit.  This is a
> + * macro rather than an inline function to avoid #include hell.
> + */
> +#ifdef CONFIG_TASKS_RCU
> +#define rcu_note_voluntary_context_switch(t) \
> +	do { \
> +		preempt_disable(); /* Exclude synchronize_sched(); */ \
> +		if ((t)->rcu_tasks_holdout) \
> +			smp_store_release(&(t)->rcu_tasks_holdout, 0); \
> +		preempt_enable(); \
> +	} while (0)
> +#else /* #ifdef CONFIG_TASKS_RCU */
> +#define rcu_note_voluntary_context_switch(t)	do { } while (0)
> +#endif /* #else #ifdef CONFIG_TASKS_RCU */
> +
>  #if defined(CONFIG_DEBUG_LOCK_ALLOC) || defined(CONFIG_RCU_TRACE) || defined(CONFIG_SMP)
>  bool __rcu_is_watching(void);
>  #endif /* #if defined(CONFIG_DEBUG_LOCK_ALLOC) || defined(CONFIG_RCU_TRACE) || defined(CONFIG_SMP) */
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index 306f4f0c987a..3cf124389ec7 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -1273,6 +1273,11 @@ struct task_struct {
>  #ifdef CONFIG_RCU_BOOST
>  	struct rt_mutex *rcu_boost_mutex;
>  #endif /* #ifdef CONFIG_RCU_BOOST */
> +#ifdef CONFIG_TASKS_RCU
> +	unsigned long rcu_tasks_nvcsw;
> +	int rcu_tasks_holdout;
> +	struct list_head rcu_tasks_holdout_list;
> +#endif /* #ifdef CONFIG_TASKS_RCU */
>  
>  #if defined(CONFIG_SCHEDSTATS) || defined(CONFIG_TASK_DELAY_ACCT)
>  	struct sched_info sched_info;
> @@ -1998,31 +2003,27 @@ extern void task_clear_jobctl_pending(struct task_struct *task,
>  				      unsigned int mask);
>  
>  #ifdef CONFIG_PREEMPT_RCU
> -
>  #define RCU_READ_UNLOCK_BLOCKED (1 << 0) /* blocked while in RCU read-side. */
>  #define RCU_READ_UNLOCK_NEED_QS (1 << 1) /* RCU core needs CPU response. */
> +#endif /* #ifdef CONFIG_PREEMPT_RCU */
>  
>  static inline void rcu_copy_process(struct task_struct *p)
>  {
> +#ifdef CONFIG_PREEMPT_RCU
>  	p->rcu_read_lock_nesting = 0;
>  	p->rcu_read_unlock_special = 0;
> -#ifdef CONFIG_TREE_PREEMPT_RCU
>  	p->rcu_blocked_node = NULL;
> -#endif /* #ifdef CONFIG_TREE_PREEMPT_RCU */
>  #ifdef CONFIG_RCU_BOOST
>  	p->rcu_boost_mutex = NULL;
>  #endif /* #ifdef CONFIG_RCU_BOOST */
>  	INIT_LIST_HEAD(&p->rcu_node_entry);
> +#endif /* #ifdef CONFIG_PREEMPT_RCU */
> +#ifdef CONFIG_TASKS_RCU
> +	p->rcu_tasks_holdout = false;
> +	INIT_LIST_HEAD(&p->rcu_tasks_holdout_list);
> +#endif /* #ifdef CONFIG_TASKS_RCU */
>  }
>  
> -#else
> -
> -static inline void rcu_copy_process(struct task_struct *p)
> -{
> -}
> -
> -#endif
> -
>  static inline void tsk_restore_flags(struct task_struct *task,
>  				unsigned long orig_flags, unsigned long flags)
>  {
> diff --git a/init/Kconfig b/init/Kconfig
> index 9d76b99af1b9..c56cb62a2df1 100644
> --- a/init/Kconfig
> +++ b/init/Kconfig
> @@ -507,6 +507,16 @@ config PREEMPT_RCU
>  	  This option enables preemptible-RCU code that is common between
>  	  the TREE_PREEMPT_RCU and TINY_PREEMPT_RCU implementations.
>  
> +config TASKS_RCU
> +	bool "Task-based RCU implementation using voluntary context switch"
> +	default n
> +	help
> +	  This option enables a task-based RCU implementation that uses
> +	  only voluntary context switch (not preemption!), idle, and
> +	  user-mode execution as quiescent states.
> +
> +	  If unsure, say N.
> +
>  config RCU_STALL_COMMON
>  	def_bool ( TREE_RCU || TREE_PREEMPT_RCU || RCU_TRACE )
>  	help
> diff --git a/kernel/rcu/tiny.c b/kernel/rcu/tiny.c
> index d9efcc13008c..717f00854fc0 100644
> --- a/kernel/rcu/tiny.c
> +++ b/kernel/rcu/tiny.c
> @@ -254,6 +254,8 @@ void rcu_check_callbacks(int cpu, int user)
>  		rcu_sched_qs(cpu);
>  	else if (!in_softirq())
>  		rcu_bh_qs(cpu);
> +	if (user)
> +		rcu_note_voluntary_context_switch(current);
>  }
>  
>  /*
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index 625d0b0cd75a..f958c52f644d 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -2413,6 +2413,8 @@ void rcu_check_callbacks(int cpu, int user)
>  	rcu_preempt_check_callbacks(cpu);
>  	if (rcu_pending(cpu))
>  		invoke_rcu_core();
> +	if (user)
> +		rcu_note_voluntary_context_switch(current);
>  	trace_rcu_utilization(TPS("End scheduler-tick"));
>  }
>  
> diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
> index bc7883570530..b92268647a01 100644
> --- a/kernel/rcu/update.c
> +++ b/kernel/rcu/update.c
> @@ -47,6 +47,7 @@
>  #include <linux/hardirq.h>
>  #include <linux/delay.h>
>  #include <linux/module.h>
> +#include <linux/kthread.h>
>  
>  #define CREATE_TRACE_POINTS
>  
> @@ -350,3 +351,166 @@ static int __init check_cpu_stall_init(void)
>  early_initcall(check_cpu_stall_init);
>  
>  #endif /* #ifdef CONFIG_RCU_STALL_COMMON */
> +
> +#ifdef CONFIG_TASKS_RCU
> +
> +/*
> + * Simple variant of RCU whose quiescent states are voluntary context switch,
> + * user-space execution, and idle.  As such, grace periods can take one good
> + * long time.  There are no read-side primitives similar to rcu_read_lock()
> + * and rcu_read_unlock() because this implementation is intended to get
> + * the system into a safe state for some of the manipulations involved in
> + * tracing and the like.  Finally, this implementation does not support
> + * high call_rcu_tasks() rates from multiple CPUs.  If this is required,
> + * per-CPU callback lists will be needed.
> + */
> +
> +/* Lists of tasks that we are still waiting for during this grace period. */
> +static LIST_HEAD(rcu_tasks_holdouts);
> +
> +/* Global list of callbacks and associated lock. */
> +static struct rcu_head *rcu_tasks_cbs_head;
> +static struct rcu_head **rcu_tasks_cbs_tail = &rcu_tasks_cbs_head;
> +static DEFINE_RAW_SPINLOCK(rcu_tasks_cbs_lock);
> +
> +/* Post an RCU-tasks callback. */
> +void call_rcu_tasks(struct rcu_head *rhp, void (*func)(struct rcu_head *rhp))
> +{
> +	unsigned long flags;
> +
> +	rhp->next = NULL;
> +	rhp->func = func;
> +	raw_spin_lock_irqsave(&rcu_tasks_cbs_lock, flags);
> +	*rcu_tasks_cbs_tail = rhp;
> +	rcu_tasks_cbs_tail = &rhp->next;
> +	raw_spin_unlock_irqrestore(&rcu_tasks_cbs_lock, flags);
> +}
> +EXPORT_SYMBOL_GPL(call_rcu_tasks);
> +
> +/* RCU-tasks kthread that detects grace periods and invokes callbacks. */
> +static int __noreturn rcu_tasks_kthread(void *arg)
> +{
> +	unsigned long flags;
> +	struct task_struct *g, *t;
> +	struct rcu_head *list;
> +	struct rcu_head *next;
> +
> +	/* FIXME: Add housekeeping affinity. */
> +
> +	/*
> +	 * Each pass through the following loop makes one check for
> +	 * newly arrived callbacks, and, if there are some, waits for
> +	 * one RCU-tasks grace period and then invokes the callbacks.
> +	 * This loop is terminated by the system going down.  ;-)
> +	 */
> +	for (;;) {
> +
> +		/* Pick up any new callbacks. */
> +		raw_spin_lock_irqsave(&rcu_tasks_cbs_lock, flags);
> +		smp_mb__after_unlock_lock(); /* Enforce GP memory ordering. */
> +		list = rcu_tasks_cbs_head;
> +		rcu_tasks_cbs_head = NULL;
> +		rcu_tasks_cbs_tail = &rcu_tasks_cbs_head;
> +		raw_spin_unlock_irqrestore(&rcu_tasks_cbs_lock, flags);
> +
> +		/* If there were none, wait a bit and start over. */
> +		if (!list) {
> +			schedule_timeout_interruptible(HZ);
> +			flush_signals(current);
> +			continue;
> +		}
> +

A "synchronize_sched();" may be needed here; otherwise the following code may
read t->on_rq == 0 while the task is actually still running and being traced
through a trampoline.

> +		/*
> +		 * There were callbacks, so we need to wait for an
> +		 * RCU-tasks grace period.  Start off by scanning
> +		 * the task list for tasks that are not already
> +		 * voluntarily blocked.  Mark these tasks and make
> +		 * a list of them in rcu_tasks_holdouts.
> +		 */
> +		rcu_read_lock();
> +		for_each_process_thread(g, t) {
> +			if (t != current && ACCESS_ONCE(t->on_rq) &&
> +			    !is_idle_task(t)) {

What happens when the trampoline is on the idle task?

I think we need to use schedule_on_each_cpu() to replace one of the
synchronize_sched() calls in this function (or some other mechanism that
forces a real schedule on *ALL* online CPUs).

> +				t->rcu_tasks_nvcsw = ACCESS_ONCE(t->nvcsw);
> +				t->rcu_tasks_holdout = 1;
> +				list_add(&t->rcu_tasks_holdout_list,
> +					 &rcu_tasks_holdouts);

I think get_task_struct() is needed here to prevent the task from disappearing.

> +			}
> +		}
> +		rcu_read_unlock();
> +
> +		/*
> +		 * The "t != current" and "!is_idle_task()" comparisons
> +		 * above are stable, but the "t->on_rq" value could
> +		 * change at any time, and is generally unordered.
> +		 * Therefore, we need some ordering.  The trick is
> +		 * that t->on_rq is updated with a runqueue lock held,
> +		 * and thus with interrupts disabled.  So the following
> +		 * synchronize_sched() provides the needed ordering by:
> +		 * (1) Waiting for all interrupts-disabled code sections
> +		 * to complete and (2) The synchronize_sched() ordering
> +		 * guarantees, which provide for a memory barrier on each
> +		 * CPU since the completion of its last read-side critical
> +		 * section, including interrupt-disabled code sections.
> +		 */
> +		synchronize_sched();
> +
> +		/*
> +		 * Each pass through the following loop scans the list
> +		 * of holdout tasks, removing any that are no longer
> +		 * holdouts.  When the list is empty, we are done.
> +		 */
> +		while (!list_empty(&rcu_tasks_holdouts)) {
> +			schedule_timeout_interruptible(HZ / 10);
> +			flush_signals(current);
> +			rcu_read_lock();
> +			list_for_each_entry_rcu(t, &rcu_tasks_holdouts,
> +						rcu_tasks_holdout_list) {
> +				if (smp_load_acquire(&t->rcu_tasks_holdout)) {
> +					if (t->rcu_tasks_nvcsw ==
> +					    ACCESS_ONCE(t->nvcsw))
> +						continue;
> +					ACCESS_ONCE(t->rcu_tasks_holdout) = 0;
> +				}

t->on_rq can be rechecked here.

I think the recheck for t->on_rq and get_task_struct()/put_task_struct() can
handle exiting tasks, so we don't need the complexity of patch 10/10.

Quick and superficial thought,

Thanks,
Lai

> +				list_del_init(&t->rcu_tasks_holdout_list);
> +				/* @@@ need to check for usermode on CPU. */
> +			}
> +			rcu_read_unlock();
> +		}
> +
> +		/*
> +		 * Because ->nvcsw is not guaranteed to have a full
> +		 * memory barrier prior to it in the schedule() path,
> +		 * memory reordering on other CPUs could cause their
> +		 * RCU-tasks read-side critical sections to extend past
> +		 * the end of the grace period.  However, because these
> +		 * ->nvcsw updates are carried out with interrupts
> +		 * disabled, we can use synchronize_sched() to force
> +		 * the needed ordering on all such CPUs.
> +		 */
> +		synchronize_sched();
> +
> +		/* Invoke the callbacks. */
> +		while (list) {
> +			next = list->next;
> +			local_bh_disable();
> +			list->func(list);
> +			local_bh_enable();
> +			list = next;
> +			cond_resched();
> +		}
> +	}
> +}
> +
> +/* Spawn rcu_tasks_kthread() at boot time. */
> +static int __init rcu_spawn_tasks_kthread(void)
> +{
> +	struct task_struct __maybe_unused *t;
> +
> +	t = kthread_run(rcu_tasks_kthread, NULL, "rcu_tasks_kthread");
> +	BUG_ON(IS_ERR(t));
> +	return 0;
> +}
> +early_initcall(rcu_spawn_tasks_kthread);
> +
> +#endif /* #ifdef CONFIG_TASKS_RCU */


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks()
  2014-07-31  7:30   ` [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks() Lai Jiangshan
@ 2014-07-31 16:09     ` Paul E. McKenney
  2014-07-31 16:20       ` Peter Zijlstra
                         ` (2 more replies)
  0 siblings, 3 replies; 42+ messages in thread
From: Paul E. McKenney @ 2014-07-31 16:09 UTC (permalink / raw)
  To: Lai Jiangshan
  Cc: linux-kernel, mingo, dipankar, akpm, mathieu.desnoyers, josh,
	tglx, peterz, rostedt, dhowells, edumazet, dvhart, fweisbec,
	oleg, bobby.prani

On Thu, Jul 31, 2014 at 03:30:12PM +0800, Lai Jiangshan wrote:
> On 07/31/2014 08:39 AM, Paul E. McKenney wrote:
> > From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> > 
> > This commit adds a new RCU-tasks flavor of RCU, which provides
> > call_rcu_tasks().  This RCU flavor's quiescent states are voluntary
> > context switch (not preemption!), userspace execution, and the idle loop.
> > Note that unlike other RCU flavors, these quiescent states occur in tasks,
> > not necessarily CPUs.  Includes fixes from Steven Rostedt.
> > 
> > This RCU flavor is assumed to have very infrequent latency-tolerant
> > updaters.  This assumption permits significant simplifications, including
> > a single global callback list protected by a single global lock, along
> > with a single linked list containing all tasks that have not yet passed
> > through a quiescent state.  If experience shows this assumption to be
> > incorrect, the required additional complexity will be added.
> > 
> > Suggested-by: Steven Rostedt <rostedt@goodmis.org>
> > Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> > ---
> >  include/linux/init_task.h |   9 +++
> >  include/linux/rcupdate.h  |  36 ++++++++++
> >  include/linux/sched.h     |  23 +++----
> >  init/Kconfig              |  10 +++
> >  kernel/rcu/tiny.c         |   2 +
> >  kernel/rcu/tree.c         |   2 +
> >  kernel/rcu/update.c       | 164 ++++++++++++++++++++++++++++++++++++++++++++++
> >  7 files changed, 235 insertions(+), 11 deletions(-)
> > 
> > diff --git a/include/linux/init_task.h b/include/linux/init_task.h
> > index 6df7f9fe0d01..78715ea7c30c 100644
> > --- a/include/linux/init_task.h
> > +++ b/include/linux/init_task.h
> > @@ -124,6 +124,14 @@ extern struct group_info init_groups;
> >  #else
> >  #define INIT_TASK_RCU_PREEMPT(tsk)
> >  #endif
> > +#ifdef CONFIG_TASKS_RCU
> > +#define INIT_TASK_RCU_TASKS(tsk)					\
> > +	.rcu_tasks_holdout = false,					\
> > +	.rcu_tasks_holdout_list =					\
> > +		LIST_HEAD_INIT(tsk.rcu_tasks_holdout_list),
> > +#else
> > +#define INIT_TASK_RCU_TASKS(tsk)
> > +#endif
> >  
> >  extern struct cred init_cred;
> >  
> > @@ -231,6 +239,7 @@ extern struct task_group root_task_group;
> >  	INIT_FTRACE_GRAPH						\
> >  	INIT_TRACE_RECURSION						\
> >  	INIT_TASK_RCU_PREEMPT(tsk)					\
> > +	INIT_TASK_RCU_TASKS(tsk)					\
> >  	INIT_CPUSET_SEQ(tsk)						\
> >  	INIT_RT_MUTEXES(tsk)						\
> >  	INIT_VTIME(tsk)							\
> > diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> > index 6a94cc8b1ca0..05656b504295 100644
> > --- a/include/linux/rcupdate.h
> > +++ b/include/linux/rcupdate.h
> > @@ -197,6 +197,26 @@ void call_rcu_sched(struct rcu_head *head,
> >  
> >  void synchronize_sched(void);
> >  
> > +/**
> > + * call_rcu_tasks() - Queue an RCU callback for invocation after a task-based grace period
> > + * @head: structure to be used for queueing the RCU updates.
> > + * @func: actual callback function to be invoked after the grace period
> > + *
> > + * The callback function will be invoked some time after a full grace
> > + * period elapses, in other words after all currently executing RCU
> > + * read-side critical sections have completed. call_rcu_tasks() assumes
> > + * that the read-side critical sections end at a voluntary context
> > + * switch (not a preemption!), entry into idle, or transition to usermode
> > + * execution.  As such, there are no read-side primitives analogous to
> > + * rcu_read_lock() and rcu_read_unlock() because this primitive is intended
> > + * to determine that all tasks have passed through a safe state, not so
> > + * much for data-structure synchronization.
> > + *
> > + * See the description of call_rcu() for more detailed information on
> > + * memory ordering guarantees.
> > + */
> > +void call_rcu_tasks(struct rcu_head *head, void (*func)(struct rcu_head *head));
> > +
> >  #ifdef CONFIG_PREEMPT_RCU
> >  
> >  void __rcu_read_lock(void);
> > @@ -294,6 +314,22 @@ static inline void rcu_user_hooks_switch(struct task_struct *prev,
> >  		rcu_irq_exit(); \
> >  	} while (0)
> >  
> > +/*
> > + * Note a voluntary context switch for RCU-tasks benefit.  This is a
> > + * macro rather than an inline function to avoid #include hell.
> > + */
> > +#ifdef CONFIG_TASKS_RCU
> > +#define rcu_note_voluntary_context_switch(t) \
> > +	do { \
> > +		preempt_disable(); /* Exclude synchronize_sched(); */ \
> > +		if ((t)->rcu_tasks_holdout) \
> > +			smp_store_release(&(t)->rcu_tasks_holdout, 0); \
> > +		preempt_enable(); \
> > +	} while (0)
> > +#else /* #ifdef CONFIG_TASKS_RCU */
> > +#define rcu_note_voluntary_context_switch(t)	do { } while (0)
> > +#endif /* #else #ifdef CONFIG_TASKS_RCU */
> > +
> >  #if defined(CONFIG_DEBUG_LOCK_ALLOC) || defined(CONFIG_RCU_TRACE) || defined(CONFIG_SMP)
> >  bool __rcu_is_watching(void);
> >  #endif /* #if defined(CONFIG_DEBUG_LOCK_ALLOC) || defined(CONFIG_RCU_TRACE) || defined(CONFIG_SMP) */
> > diff --git a/include/linux/sched.h b/include/linux/sched.h
> > index 306f4f0c987a..3cf124389ec7 100644
> > --- a/include/linux/sched.h
> > +++ b/include/linux/sched.h
> > @@ -1273,6 +1273,11 @@ struct task_struct {
> >  #ifdef CONFIG_RCU_BOOST
> >  	struct rt_mutex *rcu_boost_mutex;
> >  #endif /* #ifdef CONFIG_RCU_BOOST */
> > +#ifdef CONFIG_TASKS_RCU
> > +	unsigned long rcu_tasks_nvcsw;
> > +	int rcu_tasks_holdout;
> > +	struct list_head rcu_tasks_holdout_list;
> > +#endif /* #ifdef CONFIG_TASKS_RCU */
> >  
> >  #if defined(CONFIG_SCHEDSTATS) || defined(CONFIG_TASK_DELAY_ACCT)
> >  	struct sched_info sched_info;
> > @@ -1998,31 +2003,27 @@ extern void task_clear_jobctl_pending(struct task_struct *task,
> >  				      unsigned int mask);
> >  
> >  #ifdef CONFIG_PREEMPT_RCU
> > -
> >  #define RCU_READ_UNLOCK_BLOCKED (1 << 0) /* blocked while in RCU read-side. */
> >  #define RCU_READ_UNLOCK_NEED_QS (1 << 1) /* RCU core needs CPU response. */
> > +#endif /* #ifdef CONFIG_PREEMPT_RCU */
> >  
> >  static inline void rcu_copy_process(struct task_struct *p)
> >  {
> > +#ifdef CONFIG_PREEMPT_RCU
> >  	p->rcu_read_lock_nesting = 0;
> >  	p->rcu_read_unlock_special = 0;
> > -#ifdef CONFIG_TREE_PREEMPT_RCU
> >  	p->rcu_blocked_node = NULL;
> > -#endif /* #ifdef CONFIG_TREE_PREEMPT_RCU */
> >  #ifdef CONFIG_RCU_BOOST
> >  	p->rcu_boost_mutex = NULL;
> >  #endif /* #ifdef CONFIG_RCU_BOOST */
> >  	INIT_LIST_HEAD(&p->rcu_node_entry);
> > +#endif /* #ifdef CONFIG_PREEMPT_RCU */
> > +#ifdef CONFIG_TASKS_RCU
> > +	p->rcu_tasks_holdout = false;
> > +	INIT_LIST_HEAD(&p->rcu_tasks_holdout_list);
> > +#endif /* #ifdef CONFIG_TASKS_RCU */
> >  }
> >  
> > -#else
> > -
> > -static inline void rcu_copy_process(struct task_struct *p)
> > -{
> > -}
> > -
> > -#endif
> > -
> >  static inline void tsk_restore_flags(struct task_struct *task,
> >  				unsigned long orig_flags, unsigned long flags)
> >  {
> > diff --git a/init/Kconfig b/init/Kconfig
> > index 9d76b99af1b9..c56cb62a2df1 100644
> > --- a/init/Kconfig
> > +++ b/init/Kconfig
> > @@ -507,6 +507,16 @@ config PREEMPT_RCU
> >  	  This option enables preemptible-RCU code that is common between
> >  	  the TREE_PREEMPT_RCU and TINY_PREEMPT_RCU implementations.
> >  
> > +config TASKS_RCU
> > +	bool "Task-based RCU implementation using voluntary context switch"
> > +	default n
> > +	help
> > +	  This option enables a task-based RCU implementation that uses
> > +	  only voluntary context switch (not preemption!), idle, and
> > +	  user-mode execution as quiescent states.
> > +
> > +	  If unsure, say N.
> > +
> >  config RCU_STALL_COMMON
> >  	def_bool ( TREE_RCU || TREE_PREEMPT_RCU || RCU_TRACE )
> >  	help
> > diff --git a/kernel/rcu/tiny.c b/kernel/rcu/tiny.c
> > index d9efcc13008c..717f00854fc0 100644
> > --- a/kernel/rcu/tiny.c
> > +++ b/kernel/rcu/tiny.c
> > @@ -254,6 +254,8 @@ void rcu_check_callbacks(int cpu, int user)
> >  		rcu_sched_qs(cpu);
> >  	else if (!in_softirq())
> >  		rcu_bh_qs(cpu);
> > +	if (user)
> > +		rcu_note_voluntary_context_switch(current);
> >  }
> >  
> >  /*
> > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> > index 625d0b0cd75a..f958c52f644d 100644
> > --- a/kernel/rcu/tree.c
> > +++ b/kernel/rcu/tree.c
> > @@ -2413,6 +2413,8 @@ void rcu_check_callbacks(int cpu, int user)
> >  	rcu_preempt_check_callbacks(cpu);
> >  	if (rcu_pending(cpu))
> >  		invoke_rcu_core();
> > +	if (user)
> > +		rcu_note_voluntary_context_switch(current);
> >  	trace_rcu_utilization(TPS("End scheduler-tick"));
> >  }
> >  
> > diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
> > index bc7883570530..b92268647a01 100644
> > --- a/kernel/rcu/update.c
> > +++ b/kernel/rcu/update.c
> > @@ -47,6 +47,7 @@
> >  #include <linux/hardirq.h>
> >  #include <linux/delay.h>
> >  #include <linux/module.h>
> > +#include <linux/kthread.h>
> >  
> >  #define CREATE_TRACE_POINTS
> >  
> > @@ -350,3 +351,166 @@ static int __init check_cpu_stall_init(void)
> >  early_initcall(check_cpu_stall_init);
> >  
> >  #endif /* #ifdef CONFIG_RCU_STALL_COMMON */
> > +
> > +#ifdef CONFIG_TASKS_RCU
> > +
> > +/*
> > + * Simple variant of RCU whose quiescent states are voluntary context switch,
> > + * user-space execution, and idle.  As such, grace periods can take one good
> > + * long time.  There are no read-side primitives similar to rcu_read_lock()
> > + * and rcu_read_unlock() because this implementation is intended to get
> > + * the system into a safe state for some of the manipulations involved in
> > + * tracing and the like.  Finally, this implementation does not support
> > + * high call_rcu_tasks() rates from multiple CPUs.  If this is required,
> > + * per-CPU callback lists will be needed.
> > + */
> > +
> > +/* Lists of tasks that we are still waiting for during this grace period. */
> > +static LIST_HEAD(rcu_tasks_holdouts);
> > +
> > +/* Global list of callbacks and associated lock. */
> > +static struct rcu_head *rcu_tasks_cbs_head;
> > +static struct rcu_head **rcu_tasks_cbs_tail = &rcu_tasks_cbs_head;
> > +static DEFINE_RAW_SPINLOCK(rcu_tasks_cbs_lock);
> > +
> > +/* Post an RCU-tasks callback. */
> > +void call_rcu_tasks(struct rcu_head *rhp, void (*func)(struct rcu_head *rhp))
> > +{
> > +	unsigned long flags;
> > +
> > +	rhp->next = NULL;
> > +	rhp->func = func;
> > +	raw_spin_lock_irqsave(&rcu_tasks_cbs_lock, flags);
> > +	*rcu_tasks_cbs_tail = rhp;
> > +	rcu_tasks_cbs_tail = &rhp->next;
> > +	raw_spin_unlock_irqrestore(&rcu_tasks_cbs_lock, flags);
> > +}
> > +EXPORT_SYMBOL_GPL(call_rcu_tasks);
> > +
> > +/* RCU-tasks kthread that detects grace periods and invokes callbacks. */
> > +static int __noreturn rcu_tasks_kthread(void *arg)
> > +{
> > +	unsigned long flags;
> > +	struct task_struct *g, *t;
> > +	struct rcu_head *list;
> > +	struct rcu_head *next;
> > +
> > +	/* FIXME: Add housekeeping affinity. */
> > +
> > +	/*
> > +	 * Each pass through the following loop makes one check for
> > +	 * newly arrived callbacks, and, if there are some, waits for
> > +	 * one RCU-tasks grace period and then invokes the callbacks.
> > +	 * This loop is terminated by the system going down.  ;-)
> > +	 */
> > +	for (;;) {
> > +
> > +		/* Pick up any new callbacks. */
> > +		raw_spin_lock_irqsave(&rcu_tasks_cbs_lock, flags);
> > +		smp_mb__after_unlock_lock(); /* Enforce GP memory ordering. */
> > +		list = rcu_tasks_cbs_head;
> > +		rcu_tasks_cbs_head = NULL;
> > +		rcu_tasks_cbs_tail = &rcu_tasks_cbs_head;
> > +		raw_spin_unlock_irqrestore(&rcu_tasks_cbs_lock, flags);
> > +
> > +		/* If there were none, wait a bit and start over. */
> > +		if (!list) {
> > +			schedule_timeout_interruptible(HZ);
> > +			flush_signals(current);
> > +			continue;
> > +		}
> > +
> 
> A "synchronize_sched();" may be needed here; otherwise the following code may
> read t->on_rq == 0 while the task is actually still running and being traced
> through a trampoline.

Good point!  This is fixed by later patches in the series, but it would
be good to have it set up initially.

> > +		/*
> > +		 * There were callbacks, so we need to wait for an
> > +		 * RCU-tasks grace period.  Start off by scanning
> > +		 * the task list for tasks that are not already
> > +		 * voluntarily blocked.  Mark these tasks and make
> > +		 * a list of them in rcu_tasks_holdouts.
> > +		 */
> > +		rcu_read_lock();
> > +		for_each_process_thread(g, t) {
> > +			if (t != current && ACCESS_ONCE(t->on_rq) &&
> > +			    !is_idle_task(t)) {
> 
> What happens when the trampoline is on the idle task?
> 
> I think we need to use schedule_on_each_cpu() to replace one of the
> synchronize_sched() calls in this function (or some other mechanism that
> forces a real schedule on *ALL* online CPUs).

Well, that is one of the questions in the 0/10 cover letter.  If it turns
out to be necessary to worry about idle-task trampolines, it should be
possible to avoid hammering all idle CPUs in the common case.  Though maybe
battery-powered devices won't need RCU-tasks.

> > +				t->rcu_tasks_nvcsw = ACCESS_ONCE(t->nvcsw);
> > +				t->rcu_tasks_holdout = 1;
> > +				list_add(&t->rcu_tasks_holdout_list,
> > +					 &rcu_tasks_holdouts);
> 
> I think get_task_struct() is needed here to prevent the task from disappearing.

Hmmm...  Let's see...

Looks like get_task_struct() does a blind atomic increment of ->usage.
And put_task_struct() does an atomic_dec_and_test().  So one question
is "what prevents us from doing get_task_struct() after the final
put_task_struct() has pushed ->usage down to zero?"

Hopefully there is a grace period in there somewhere, otherwise it will
be necessary to take the task-list lock, which I would like to avoid.

Looks like the call_rcu() of delayed_put_task_struct() in release_task()
might be doing this.  And release_task() invokes __exit_signals() before
that call_rcu(), and __exit_signals() does an __unhash_process(), which
looks to remove the task from all the RCU-protected lists.  In the case
of for_each_process_thread(), the ->tasks and ->thread_node lists are
relevant, and __unhash_process() does RCU-safe removal from both lists.

This means that when for_each_process_thread() finds a given task under
RCU protection, we can indeed expect get_task_struct() to hold on to the
task structure.  The corresponding put_task_struct() gets called as soon
as we remove the task from the list.  No tricky exit-side synchronization
is then required.

That does indeed look way simpler, thank you!

> > +			}
> > +		}
> > +		rcu_read_unlock();
> > +
> > +		/*
> > +		 * The "t != current" and "!is_idle_task()" comparisons
> > +		 * above are stable, but the "t->on_rq" value could
> > +		 * change at any time, and is generally unordered.
> > +		 * Therefore, we need some ordering.  The trick is
> > +		 * that t->on_rq is updated with a runqueue lock held,
> > +		 * and thus with interrupts disabled.  So the following
> > +		 * synchronize_sched() provides the needed ordering by:
> > +		 * (1) Waiting for all interrupts-disabled code sections
> > +		 * to complete and (2) The synchronize_sched() ordering
> > +		 * guarantees, which provide for a memory barrier on each
> > +		 * CPU since the completion of its last read-side critical
> > +		 * section, including interrupt-disabled code sections.
> > +		 */
> > +		synchronize_sched();
> > +
> > +		/*
> > +		 * Each pass through the following loop scans the list
> > +		 * of holdout tasks, removing any that are no longer
> > +		 * holdouts.  When the list is empty, we are done.
> > +		 */
> > +		while (!list_empty(&rcu_tasks_holdouts)) {
> > +			schedule_timeout_interruptible(HZ / 10);
> > +			flush_signals(current);
> > +			rcu_read_lock();
> > +			list_for_each_entry_rcu(t, &rcu_tasks_holdouts,
> > +						rcu_tasks_holdout_list) {
> > +				if (smp_load_acquire(&t->rcu_tasks_holdout)) {
> > +					if (t->rcu_tasks_nvcsw ==
> > +					    ACCESS_ONCE(t->nvcsw))
> > +						continue;
> > +					ACCESS_ONCE(t->rcu_tasks_holdout) = 0;
> > +				}
> 
> t->on_rq can be rechecked here.

Yep, I would need to pull some of the logic from 10/10 to this point
in 1/10.

> I think the recheck for t->on_rq and get_task_struct()/put_task_struct() can
> handle exiting tasks, so we don't need the complexity of patch 10/10.
> 
> Quick and superficial thought,

Seems likely, will give it a try.

							Thanx, Paul

> Thanks,
> Lai
> 
> > +				list_del_init(&t->rcu_tasks_holdout_list);
> > +				/* @@@ need to check for usermode on CPU. */
> > +			}
> > +			rcu_read_unlock();
> > +		}
> > +
> > +		/*
> > +		 * Because ->nvcsw is not guaranteed to have a full
> > +		 * memory barrier prior to it in the schedule() path,
> > +		 * memory reordering on other CPUs could cause their
> > +		 * RCU-tasks read-side critical sections to extend past
> > +		 * the end of the grace period.  However, because these
> > +		 * ->nvcsw updates are carried out with interrupts
> > +		 * disabled, we can use synchronize_sched() to force
> > +		 * the needed ordering on all such CPUs.
> > +		 */
> > +		synchronize_sched();
> > +
> > +		/* Invoke the callbacks. */
> > +		while (list) {
> > +			next = list->next;
> > +			local_bh_disable();
> > +			list->func(list);
> > +			local_bh_enable();
> > +			list = next;
> > +			cond_resched();
> > +		}
> > +	}
> > +}
> > +
> > +/* Spawn rcu_tasks_kthread() at boot time. */
> > +static int __init rcu_spawn_tasks_kthread(void)
> > +{
> > +	struct task_struct __maybe_unused *t;
> > +
> > +	t = kthread_run(rcu_tasks_kthread, NULL, "rcu_tasks_kthread");
> > +	BUG_ON(IS_ERR(t));
> > +	return 0;
> > +}
> > +early_initcall(rcu_spawn_tasks_kthread);
> > +
> > +#endif /* #ifdef CONFIG_TASKS_RCU */
> 



* Re: [PATCH v2 tip/core/rcu 0/10] RCU-tasks implementation
  2014-07-31  0:39 [PATCH v2 tip/core/rcu 0/10] RCU-tasks implementation Paul E. McKenney
  2014-07-31  0:39 ` [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks() Paul E. McKenney
@ 2014-07-31 16:19 ` josh
  2014-07-31 16:30   ` Peter Zijlstra
  2014-07-31 16:58   ` Paul E. McKenney
  2014-07-31 19:29 ` Andi Kleen
  2 siblings, 2 replies; 42+ messages in thread
From: josh @ 2014-07-31 16:19 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	tglx, peterz, rostedt, dhowells, edumazet, dvhart, fweisbec,
	oleg, bobby.prani

On Wed, Jul 30, 2014 at 05:39:14PM -0700, Paul E. McKenney wrote:
> This series provides a prototype of an RCU-tasks implementation, which has
> > been requested to assist with trampoline removal.  This flavor of RCU
> is task-based rather than CPU-based, and has voluntary context switch,
> usermode execution, and the idle loops as its only quiescent states.
> This selection of quiescent states ensures that at the end of a grace
> period, there will no longer be any tasks depending on a trampoline that
> was removed before the beginning of that grace period.  This works because
> such trampolines do not contain function calls, do not contain voluntary
> context switches, do not switch to usermode, and do not switch to idle.

I'm concerned about the amount of system overhead this introduces.
Polling for holdout tasks seems quite excessive.  If I understand the
intended use case correctly, the users of this will want to free
relatively small amounts of memory; thus, waiting a while to do so seems
fine, especially if the system isn't under any particular memory
pressure.

Thus, rather than polling, could you simply flag the holdout
tasks, telling the scheduler "hey, next time you don't have anything
better to do..."?  Then don't bother with them again unless the system
runs low on memory and asks you to free some.  (And mandate that you can
only use this to free memory rather than for any other purpose.)

Also, ideally this should remain entirely optional; nothing in the core
kernel should depend on it.

- Josh Triplett


* Re: [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks()
  2014-07-31 16:09     ` Paul E. McKenney
@ 2014-07-31 16:20       ` Peter Zijlstra
  2014-07-31 16:47         ` Paul E. McKenney
  2014-07-31 16:31       ` Oleg Nesterov
  2014-08-01  0:53       ` Lai Jiangshan
  2 siblings, 1 reply; 42+ messages in thread
From: Peter Zijlstra @ 2014-07-31 16:20 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Lai Jiangshan, linux-kernel, mingo, dipankar, akpm,
	mathieu.desnoyers, josh, tglx, rostedt, dhowells, edumazet,
	dvhart, fweisbec, oleg, bobby.prani


On Thu, Jul 31, 2014 at 09:09:24AM -0700, Paul E. McKenney wrote:
> Well, that is one of the questions in the 0/10 cover letter.  If it turns
> out to be necessary to worry about idle-task trampolines, it should be
> possible to avoid hammering all idle CPUs in the common case.  Though maybe
> battery-powered devices won't need RCU-tasks.

So do we really need this to be a full blown RCU implementation? Could
we not live with something like a task_barrier() function that
guarantees the same as this rcu_task_synchronize() or somesuch?

So for rostedt's case he could patch out all trampolines, call
task_barrier() and then free them.

We can make task_barrier() force each cpu to resched once -- which, if
we do that after the tasks have all scheduled once, could be the empty
set.

It also pushes the cost of this machinery into the caller of
task_barrier() rather than into some random kthread (yet another one).

Now, if only task_work would work for kernel threads, then we could do
something far nicer...



* Re: [PATCH v2 tip/core/rcu 0/10] RCU-tasks implementation
  2014-07-31 16:19 ` [PATCH v2 tip/core/rcu 0/10] RCU-tasks implementation josh
@ 2014-07-31 16:30   ` Peter Zijlstra
  2014-07-31 16:43     ` josh
  2014-07-31 16:49     ` Paul E. McKenney
  2014-07-31 16:58   ` Paul E. McKenney
  1 sibling, 2 replies; 42+ messages in thread
From: Peter Zijlstra @ 2014-07-31 16:30 UTC (permalink / raw)
  To: josh
  Cc: Paul E. McKenney, linux-kernel, mingo, laijs, dipankar, akpm,
	mathieu.desnoyers, tglx, rostedt, dhowells, edumazet, dvhart,
	fweisbec, oleg, bobby.prani


On Thu, Jul 31, 2014 at 09:19:02AM -0700, josh@joshtriplett.org wrote:
> Also, ideally this should remain entirely optional; nothing in the core
> kernel should depend on it.

I suspect Steve wants it for ftrace :-)



* Re: [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks()
  2014-07-31 16:09     ` Paul E. McKenney
  2014-07-31 16:20       ` Peter Zijlstra
@ 2014-07-31 16:31       ` Oleg Nesterov
  2014-07-31 17:02         ` Paul E. McKenney
  2014-08-01  0:53       ` Lai Jiangshan
  2 siblings, 1 reply; 42+ messages in thread
From: Oleg Nesterov @ 2014-07-31 16:31 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Lai Jiangshan, linux-kernel, mingo, dipankar, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, dvhart, fweisbec, bobby.prani

Again, sorry, I didn't read the patches yet, just noticed your discussion...

On 07/31, Paul E. McKenney wrote:
>
> On Thu, Jul 31, 2014 at 03:30:12PM +0800, Lai Jiangshan wrote:
>
> > > +				t->rcu_tasks_nvcsw = ACCESS_ONCE(t->nvcsw);
> > > +				t->rcu_tasks_holdout = 1;
> > > +				list_add(&t->rcu_tasks_holdout_list,
> > > +					 &rcu_tasks_holdouts);
> >
> > I think get_task_struct() is needed here to avoid the task disappears.
>
> Hmmm...  Let's see...
>
> Looks like get_task_struct() does a blind atomic increment of ->usage.
> And put_task_struct() does an atomic_dec_and_test().  So one question
> is "what prevents us from doing get_task_struct() after the final
> put_task_struct() has pushed ->usage down to zero?"
>
> Hopefully there is a grace period in there somewhere, otherwise it will
> be necessary to take the task-list lock, which I would like to avoid.
>
> Looks like the call_rcu() of delayed_put_task_struct() in release_task()
> might be doing this.

Yes, exactly, so get_task_struct() is always fine as long as task_struct
itself is protected by RCU.

But can't we avoid get_task_struct()? This can pin a lot of task_struct's.
Can't we just add list_del_rcu(holdout_list) into __unhash_process() ?

We only need to ensure that list_add() above can't race with that list_del(),
perhaps we can tolerate lock_task_sighand() ?

Oleg.



* Re: [PATCH v2 tip/core/rcu 0/10] RCU-tasks implementation
  2014-07-31 16:30   ` Peter Zijlstra
@ 2014-07-31 16:43     ` josh
  2014-07-31 16:49     ` Paul E. McKenney
  1 sibling, 0 replies; 42+ messages in thread
From: josh @ 2014-07-31 16:43 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Paul E. McKenney, linux-kernel, mingo, laijs, dipankar, akpm,
	mathieu.desnoyers, tglx, rostedt, dhowells, edumazet, dvhart,
	fweisbec, oleg, bobby.prani

On Thu, Jul 31, 2014 at 06:30:12PM +0200, Peter Zijlstra wrote:
> On Thu, Jul 31, 2014 at 09:19:02AM -0700, josh@joshtriplett.org wrote:
> > Also, ideally this should remain entirely optional; nothing in the core
> > kernel should depend on it.
> 
> I suspect Steve wants it for ftrace :-)

Which can be compiled out. :)

- Josh Triplett


* Re: [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks()
  2014-07-31 16:20       ` Peter Zijlstra
@ 2014-07-31 16:47         ` Paul E. McKenney
  2014-07-31 16:57           ` Peter Zijlstra
  0 siblings, 1 reply; 42+ messages in thread
From: Paul E. McKenney @ 2014-07-31 16:47 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Lai Jiangshan, linux-kernel, mingo, dipankar, akpm,
	mathieu.desnoyers, josh, tglx, rostedt, dhowells, edumazet,
	dvhart, fweisbec, oleg, bobby.prani

On Thu, Jul 31, 2014 at 06:20:30PM +0200, Peter Zijlstra wrote:
> On Thu, Jul 31, 2014 at 09:09:24AM -0700, Paul E. McKenney wrote:
> > Well, that is one of the questions in the 0/10 cover letter.  If it turns
> > out to be necessary to worry about idle-task trampolines, it should be
> > possible to avoid hammering all idle CPUs in the common case.  Though maybe
> > battery-powered devices won't need RCU-tasks.
> 
> So do we really need this to be a full blown RCU implementation? Could
> we not live with something like a task_barrier() function that
> guarantees the same as this rcu_task_synchronize() or somesuch?
> 
> So for rostedt's case he could patch out all trampolines, call
> task_barrier() and then free them.
> 
> We can make task_barrier() force each cpu to resched once -- which, if
> we do that after the tasks have all scheduled once, could be the empty
> set.
> 
> It also pushes the cost of this machinery into the caller of
> task_barrier() rather than into some random kthread (yet another one).

The idea here is to avoid having the kthread and to avoid providing
callbacks, but to instead do the work currently done by the kthread
in a synchronous function called by the updater?

My concern with this approach is that I bet that Steven will want
to have multiple concurrent calls to remove trampolines, and if I
need to support that efficiently, I end up with more-painful counter
tricks instead of the current callbacks.

Or am I confused about what you are suggesting?

> Now, if only task_work would work for kernel threads, then we could do
> something far nicer...

OK, I'll bite...  What is task_work?  I am guessing that you are not
talking about struct ql4_task_data in drivers/scsi/qla4xxx/ql4_def.h.  ;-)

							Thanx, Paul



* Re: [PATCH v2 tip/core/rcu 0/10] RCU-tasks implementation
  2014-07-31 16:30   ` Peter Zijlstra
  2014-07-31 16:43     ` josh
@ 2014-07-31 16:49     ` Paul E. McKenney
  1 sibling, 0 replies; 42+ messages in thread
From: Paul E. McKenney @ 2014-07-31 16:49 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: josh, linux-kernel, mingo, laijs, dipankar, akpm,
	mathieu.desnoyers, tglx, rostedt, dhowells, edumazet, dvhart,
	fweisbec, oleg, bobby.prani

On Thu, Jul 31, 2014 at 06:30:12PM +0200, Peter Zijlstra wrote:
> On Thu, Jul 31, 2014 at 09:19:02AM -0700, josh@joshtriplett.org wrote:
> > Also, ideally this should remain entirely optional; nothing in the core
> > kernel should depend on it.
> 
> I suspect Steve wants it for ftrace :-)

This is my guess as well.  ;-)

Steven should be back next week, and perhaps we will learn more then.

							Thanx, Paul



* Re: [PATCH v2 tip/core/rcu 04/10] rcu: Export RCU-tasks APIs to GPL modules
  2014-07-31  0:39   ` [PATCH v2 tip/core/rcu 04/10] rcu: Export RCU-tasks APIs to GPL modules Paul E. McKenney
@ 2014-07-31 16:56     ` josh
  2014-07-31 20:55       ` Paul E. McKenney
  0 siblings, 1 reply; 42+ messages in thread
From: josh @ 2014-07-31 16:56 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	tglx, peterz, rostedt, dhowells, edumazet, dvhart, fweisbec,
	oleg, bobby.prani

On Wed, Jul 30, 2014 at 05:39:36PM -0700, Paul E. McKenney wrote:
> From: Steven Rostedt <rostedt@goodmis.org>
> 
> This commit exports the RCU-tasks APIs, call_rcu_tasks(),
> synchronize_rcu_tasks(), and rcu_barrier_tasks(), to GPL-licensed
> kernel modules.
> 
> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>

Reviewed-by: Josh Triplett <josh@joshtriplett.org>

Should this remain a separate patch, or go into the patch that creates
these APIs?

>  kernel/rcu/update.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
> index c8d304dc6d8a..1bfc07ed854e 100644
> --- a/kernel/rcu/update.c
> +++ b/kernel/rcu/update.c
> @@ -429,6 +429,7 @@ void synchronize_rcu_tasks(void)
>  	/* Wait for the grace period. */
>  	wait_rcu_gp(call_rcu_tasks);
>  }
> +EXPORT_SYMBOL_GPL(synchronize_rcu_tasks);
>  
>  /**
>   * rcu_barrier_tasks - Wait for in-flight call_rcu_tasks() callbacks.
> @@ -441,6 +442,7 @@ void rcu_barrier_tasks(void)
>  	/* There is only one callback queue, so this is easy.  ;-) */
>  	synchronize_rcu_tasks();
>  }
> +EXPORT_SYMBOL_GPL(rcu_barrier_tasks);
>  
>  /* RCU-tasks kthread that detects grace periods and invokes callbacks. */
>  static int __noreturn rcu_tasks_kthread(void *arg)
> -- 
> 1.8.1.5
> 


* Re: [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks()
  2014-07-31 16:47         ` Paul E. McKenney
@ 2014-07-31 16:57           ` Peter Zijlstra
  0 siblings, 0 replies; 42+ messages in thread
From: Peter Zijlstra @ 2014-07-31 16:57 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Lai Jiangshan, linux-kernel, mingo, dipankar, akpm,
	mathieu.desnoyers, josh, tglx, rostedt, dhowells, edumazet,
	dvhart, fweisbec, oleg, bobby.prani


On Thu, Jul 31, 2014 at 09:47:39AM -0700, Paul E. McKenney wrote:

> The idea here is to avoid having the kthread and to avoid providing
> callbacks, but to instead do the work currently done by the kthread
> in a synchronous function called by the updater?

yep.

> My concern with this approach is that I bet that Steven will want
> to have multiple concurrent calls to remove trampolines, and if I
> need to support that efficiently, I end up with more-painful counter
> tricks instead of the current callbacks.
> 
> Or am I confused about what you are suggesting?

Nope, I don't think you are.

Typically things like ftrace are global and so we don't end up
with concurrency like that, but who knows, we'll have to wait for Steve
to get back.

> > Now, if only task_work would work for kernel threads, then we could do
> > something far nicer...
> 
> OK, I'll bite...  What is task_work?  I am guessing that you are not
> talking about struct ql4_task_data in drivers/scsi/qla4xxx/ql4_def.h.  ;-)

kernel/task_work.c and include/linux/task_work.h

it's a worklet run on return to userspace or exit -- i.e. safe points for
our tasks.

problem of course is that kernel threads never do the userspace part and
typically do not do the exit thing either.

But the idea was: do the task scan, add a task_work to each that needs
attention, have the work call complete(), and then wait for the completion.
Avoids the polling entirely.

If only kernel threads..

[-- Attachment #2: Type: application/pgp-signature, Size: 836 bytes --]


* Re: [PATCH v2 tip/core/rcu 0/10] RCU-tasks implementation
  2014-07-31 16:19 ` [PATCH v2 tip/core/rcu 0/10] RCU-tasks implementation josh
  2014-07-31 16:30   ` Peter Zijlstra
@ 2014-07-31 16:58   ` Paul E. McKenney
  2014-07-31 17:20     ` josh
  1 sibling, 1 reply; 42+ messages in thread
From: Paul E. McKenney @ 2014-07-31 16:58 UTC (permalink / raw)
  To: josh
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	tglx, peterz, rostedt, dhowells, edumazet, dvhart, fweisbec,
	oleg, bobby.prani

On Thu, Jul 31, 2014 at 09:19:02AM -0700, josh@joshtriplett.org wrote:
> On Wed, Jul 30, 2014 at 05:39:14PM -0700, Paul E. McKenney wrote:
> > This series provides a prototype of an RCU-tasks implementation, which has
> > been requested to assist with trampoline removal.  This flavor of RCU
> > is task-based rather than CPU-based, and has voluntary context switch,
> > usermode execution, and the idle loops as its only quiescent states.
> > This selection of quiescent states ensures that at the end of a grace
> > period, there will no longer be any tasks depending on a trampoline that
> > was removed before the beginning of that grace period.  This works because
> > such trampolines do not contain function calls, do not contain voluntary
> > context switches, do not switch to usermode, and do not switch to idle.
> 
> I'm concerned about the amount of system overhead this introduces.
> Polling for holdout tasks seems quite excessive.  If I understand the
> intended use case correctly, the users of this will want to free
> relatively small amounts of memory; thus, waiting a while to do so seems
> fine, especially if the system isn't under any particular memory
> pressure.
> 
> Thus, rather than polling, could you simply flag the holdout
> tasks, telling the scheduler "hey, next time you don't have anything
> better to do..."?  Then don't bother with them again unless the system
> runs low on memory and asks you to free some.  (And mandate that you can
> only use this to free memory rather than for any other purpose.)

One of my many alternative suggestions that Steven rejected was
to simply leak the memory.  ;-)

But from what I can see, if we simply flag the holdout tasks, we
are either holding onto the task_struct structures, re-introducing
concurrency to the list of holdout tasks, or requiring that the eventual
scan for holdout tasks scan the entire task list.  Neither of these seems
particularly appetizing to me.

The nice thing about Lai Jiangshan's suggestion is that it allows the
scan of the holdout list to be done completely unsynchronized, which
allows pauses during the scan, thus allowing the loop to check for
competing work on that CPU.  This should get almost all the effect
of indefinite delay without the indefinite delay (at least in the
common case).

Or am I missing something here?

> Also, ideally this should remain entirely optional; nothing in the core
> kernel should depend on it.

Agreed, the CONFIG_TASKS_RCU is not likely to disappear anytime soon.
I therefore do not see RCU-tasks as an obstacle to kernel tinification.
I would also guess that you might complain if someone does try to
use it from the tinified core of the Linux kernel.  ;-)

							Thanx, Paul



* Re: [PATCH v2 tip/core/rcu 03/10] rcu: Add synchronous grace-period waiting for RCU-tasks
  2014-07-31  0:39   ` [PATCH v2 tip/core/rcu 03/10] rcu: Add synchronous grace-period waiting for RCU-tasks Paul E. McKenney
@ 2014-07-31 16:58     ` josh
  2014-07-31 18:34       ` Paul E. McKenney
  0 siblings, 1 reply; 42+ messages in thread
From: josh @ 2014-07-31 16:58 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	tglx, peterz, rostedt, dhowells, edumazet, dvhart, fweisbec,
	oleg, bobby.prani

On Wed, Jul 30, 2014 at 05:39:35PM -0700, Paul E. McKenney wrote:
> From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> 
> It turns out to be easier to add the synchronous grace-period waiting
> functions to RCU-tasks than to work around their absence in rcutorture,
> so this commit adds them.  The key point is that the existence of
> call_rcu_tasks() means that rcutorture needs an rcu_barrier_tasks().
> 
> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>

With rcu_barrier_tasks being a trivial wrapper, why not just let
rcutorture call synchronize_rcu_tasks directly?

>  include/linux/rcupdate.h |  2 ++
>  kernel/rcu/update.c      | 55 ++++++++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 57 insertions(+)
> 
> diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> index 3299ff98ad03..17c7e25c38be 100644
> --- a/include/linux/rcupdate.h
> +++ b/include/linux/rcupdate.h
> @@ -216,6 +216,8 @@ void synchronize_sched(void);
>   * memory ordering guarantees.
>   */
>  void call_rcu_tasks(struct rcu_head *head, void (*func)(struct rcu_head *head));
> +void synchronize_rcu_tasks(void);
> +void rcu_barrier_tasks(void);
>  
>  #ifdef CONFIG_PREEMPT_RCU
>  
> diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
> index b92268647a01..c8d304dc6d8a 100644
> --- a/kernel/rcu/update.c
> +++ b/kernel/rcu/update.c
> @@ -387,6 +387,61 @@ void call_rcu_tasks(struct rcu_head *rhp, void (*func)(struct rcu_head *rhp))
>  }
>  EXPORT_SYMBOL_GPL(call_rcu_tasks);
>  
> +/**
> + * synchronize_rcu_tasks - wait until an rcu-tasks grace period has elapsed.
> + *
> + * Control will return to the caller some time after a full rcu-tasks
> + * grace period has elapsed, in other words after all currently
> + * executing rcu-tasks read-side critical sections have elapsed.  These
> + * read-side critical sections are delimited by calls to schedule(),
> + * cond_resched_rcu_qs(), idle execution, userspace execution, calls
> + * to synchronize_rcu_tasks(), and (in theory, anyway) cond_resched().
> + *
> + * This is a very specialized primitive, intended only for a few uses in
> + * tracing and other situations requiring manipulation of function
> + * preambles and profiling hooks.  The synchronize_rcu_tasks() function
> + * is not (yet) intended for heavy use from multiple CPUs.
> + *
> + * Note that this guarantee implies further memory-ordering guarantees.
> + * On systems with more than one CPU, when synchronize_rcu_tasks() returns,
> + * each CPU is guaranteed to have executed a full memory barrier since the
> + * end of its last RCU-tasks read-side critical section whose beginning
> + * preceded the call to synchronize_rcu_tasks().  In addition, each CPU
> + * having an RCU-tasks read-side critical section that extends beyond
> + * the return from synchronize_rcu_tasks() is guaranteed to have executed
> + * a full memory barrier after the beginning of synchronize_rcu_tasks()
> + * and before the beginning of that RCU-tasks read-side critical section.
> + * Note that these guarantees include CPUs that are offline, idle, or
> + * executing in user mode, as well as CPUs that are executing in the kernel.
> + *
> + * Furthermore, if CPU A invoked synchronize_rcu_tasks(), which returned
> + * to its caller on CPU B, then both CPU A and CPU B are guaranteed
> + * to have executed a full memory barrier during the execution of
> + * synchronize_rcu_tasks() -- even if CPU A and CPU B are the same CPU
> + * (but again only if the system has more than one CPU).
> + */
> +void synchronize_rcu_tasks(void)
> +{
> +	/* Complain if the scheduler has not started.  */
> +	rcu_lockdep_assert(rcu_scheduler_active,
> +			   "synchronize_rcu_tasks called too soon");
> +
> +	/* Wait for the grace period. */
> +	wait_rcu_gp(call_rcu_tasks);
> +}
> +
> +/**
> + * rcu_barrier_tasks - Wait for in-flight call_rcu_tasks() callbacks.
> + *
> + * Although the current implementation is guaranteed to wait, it is not
> + * obligated to, for example, if there are no pending callbacks.
> + */
> +void rcu_barrier_tasks(void)
> +{
> +	/* There is only one callback queue, so this is easy.  ;-) */
> +	synchronize_rcu_tasks();
> +}
> +
>  /* RCU-tasks kthread that detects grace periods and invokes callbacks. */
>  static int __noreturn rcu_tasks_kthread(void *arg)
>  {
> -- 
> 1.8.1.5
> 


* Re: [PATCH v2 tip/core/rcu 05/10] rcutorture: Add torture tests for RCU-tasks
  2014-07-31  0:39   ` [PATCH v2 tip/core/rcu 05/10] rcutorture: Add torture tests for RCU-tasks Paul E. McKenney
@ 2014-07-31 17:01     ` josh
  2014-07-31 20:55       ` Paul E. McKenney
  0 siblings, 1 reply; 42+ messages in thread
From: josh @ 2014-07-31 17:01 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	tglx, peterz, rostedt, dhowells, edumazet, dvhart, fweisbec,
	oleg, bobby.prani

On Wed, Jul 30, 2014 at 05:39:37PM -0700, Paul E. McKenney wrote:
> From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> 
> This commit adds torture tests for RCU-tasks.  It also fixes a bug that
> would segfault for an RCU flavor lacking a callback-barrier function.
> 
> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>

Seems reasonable; however, you could also set .cb_barrier =
synchronize_rcu_tasks and drop rcu_barrier_tasks.

Either way:
Reviewed-by: Josh Triplett <josh@joshtriplett.org>

>  include/linux/rcupdate.h |  1 +
>  kernel/rcu/rcutorture.c  | 40 +++++++++++++++++++++++++++++++++++++++-
>  2 files changed, 40 insertions(+), 1 deletion(-)
> 
> diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> index 17c7e25c38be..ecb2198849e0 100644
> --- a/include/linux/rcupdate.h
> +++ b/include/linux/rcupdate.h
> @@ -55,6 +55,7 @@ enum rcutorture_type {
>  	RCU_FLAVOR,
>  	RCU_BH_FLAVOR,
>  	RCU_SCHED_FLAVOR,
> +	RCU_TASKS_FLAVOR,
>  	SRCU_FLAVOR,
>  	INVALID_RCU_FLAVOR
>  };
> diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
> index febe07062ac5..6d12ab6675bc 100644
> --- a/kernel/rcu/rcutorture.c
> +++ b/kernel/rcu/rcutorture.c
> @@ -602,6 +602,42 @@ static struct rcu_torture_ops sched_ops = {
>  };
>  
>  /*
> + * Definitions for RCU-tasks torture testing.
> + */
> +
> +static int tasks_torture_read_lock(void)
> +{
> +	return 0;
> +}
> +
> +static void tasks_torture_read_unlock(int idx)
> +{
> +}
> +
> +static void rcu_tasks_torture_deferred_free(struct rcu_torture *p)
> +{
> +	call_rcu_tasks(&p->rtort_rcu, rcu_torture_cb);
> +}
> +
> +static struct rcu_torture_ops tasks_ops = {
> +	.ttype		= RCU_TASKS_FLAVOR,
> +	.init		= rcu_sync_torture_init,
> +	.readlock	= tasks_torture_read_lock,
> +	.read_delay	= rcu_read_delay,  /* just reuse rcu's version. */
> +	.readunlock	= tasks_torture_read_unlock,
> +	.completed	= rcu_no_completed,
> +	.deferred_free	= rcu_tasks_torture_deferred_free,
> +	.sync		= synchronize_rcu_tasks,
> +	.exp_sync	= synchronize_rcu_tasks,
> +	.call		= call_rcu_tasks,
> +	.cb_barrier	= rcu_barrier_tasks,
> +	.fqs		= NULL,
> +	.stats		= NULL,
> +	.irq_capable	= 1,
> +	.name		= "tasks"
> +};
> +
> +/*
>   * RCU torture priority-boost testing.  Runs one real-time thread per
>   * CPU for moderate bursts, repeatedly registering RCU callbacks and
>   * spinning waiting for them to be invoked.  If a given callback takes
> @@ -1295,7 +1331,8 @@ static int rcu_torture_barrier_cbs(void *arg)
>  		if (atomic_dec_and_test(&barrier_cbs_count))
>  			wake_up(&barrier_wq);
>  	} while (!torture_must_stop());
> -	cur_ops->cb_barrier();
> +	if (cur_ops->cb_barrier != NULL)
> +		cur_ops->cb_barrier();
>  	destroy_rcu_head_on_stack(&rcu);
>  	torture_kthread_stopping("rcu_torture_barrier_cbs");
>  	return 0;
> @@ -1534,6 +1571,7 @@ rcu_torture_init(void)
>  	int firsterr = 0;
>  	static struct rcu_torture_ops *torture_ops[] = {
>  		&rcu_ops, &rcu_bh_ops, &rcu_busted_ops, &srcu_ops, &sched_ops,
> +		&tasks_ops,
>  	};
>  
>  	if (!torture_init_begin(torture_type, verbose, &rcutorture_runnable))
> -- 
> 1.8.1.5
> 


* Re: [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks()
  2014-07-31 16:31       ` Oleg Nesterov
@ 2014-07-31 17:02         ` Paul E. McKenney
  2014-07-31 17:27           ` Oleg Nesterov
  0 siblings, 1 reply; 42+ messages in thread
From: Paul E. McKenney @ 2014-07-31 17:02 UTC (permalink / raw)
  To: Oleg Nesterov
  Cc: Lai Jiangshan, linux-kernel, mingo, dipankar, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, dvhart, fweisbec, bobby.prani

On Thu, Jul 31, 2014 at 06:31:38PM +0200, Oleg Nesterov wrote:
> Again, sorry, I didn't read the patches yet, just noticed your discussion...
> 
> On 07/31, Paul E. McKenney wrote:
> >
> > On Thu, Jul 31, 2014 at 03:30:12PM +0800, Lai Jiangshan wrote:
> >
> > > > +				t->rcu_tasks_nvcsw = ACCESS_ONCE(t->nvcsw);
> > > > +				t->rcu_tasks_holdout = 1;
> > > > +				list_add(&t->rcu_tasks_holdout_list,
> > > > +					 &rcu_tasks_holdouts);
> > >
> > > I think get_task_struct() is needed here to avoid the task disappears.
> >
> > Hmmm...  Let's see...
> >
> > Looks like get_task_struct() does a blind atomic increment of ->usage.
> > And put_task_struct() does an atomic_dec_and_test().  So one question
> > is "what prevents us from doing get_task_struct() after the final
> > put_task_struct() has pushed ->usage down to zero?"
> >
> > Hopefully there is a grace period in there somewhere, otherwise it will
> > be necessary to take the task-list lock, which I would like to avoid.
> >
> > Looks like the call_rcu() of delayed_put_task_struct() in release_task()
> > might be doing this.
> 
> Yes, exactly, so get_task_struct() is always fine as long as task_struct
> itself is protected by RCU.
> 
> But can't we avoid get_task_struct()? This can pin a lot of task_struct's.
> Can't we just add list_del_rcu(holdout_list) into __unhash_process() ?

If I add the list_del_rcu() there, then I am back to a concurrent list,
which I would like to avoid.  Don't get me wrong, it was fun playing with
the list-locked stuff, but best to avoid it if we can.

Also, the current implementation implicitly locks down the task_structs
via the exit path.  The nice thing about using get_task_struct to lock
them down is that -only- the task_struct itself is locked down -- the
task can be reaped and so on.  In contrast, the current exit_rcu_tasks()
approach delays the semantic exit instead of just hanging onto the
underlying task_struct a bit longer.

> We only need to ensure that list_add() above can't race with that list_del(),
> perhaps we can tolerate lock_task_sighand() ?

I am worried about a task that does a voluntary context switch, then exits.
This could result in rcu_tasks_kthread() and __unhash_process() both
wanting to dequeue at the same time, right?

							Thanx, Paul



* Re: [PATCH v2 tip/core/rcu 0/10] RCU-tasks implementation
  2014-07-31 16:58   ` Paul E. McKenney
@ 2014-07-31 17:20     ` josh
  2014-07-31 18:38       ` Paul E. McKenney
  0 siblings, 1 reply; 42+ messages in thread
From: josh @ 2014-07-31 17:20 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	tglx, peterz, rostedt, dhowells, edumazet, dvhart, fweisbec,
	oleg, bobby.prani

On Thu, Jul 31, 2014 at 09:58:43AM -0700, Paul E. McKenney wrote:
> On Thu, Jul 31, 2014 at 09:19:02AM -0700, josh@joshtriplett.org wrote:
> > On Wed, Jul 30, 2014 at 05:39:14PM -0700, Paul E. McKenney wrote:
> > > This series provides a prototype of an RCU-tasks implementation, which has
> > > been requested to assist with trampoline removal.  This flavor of RCU
> > > is task-based rather than CPU-based, and has voluntary context switch,
> > > usermode execution, and the idle loops as its only quiescent states.
> > > This selection of quiescent states ensures that at the end of a grace
> > > period, there will no longer be any tasks depending on a trampoline that
> > > was removed before the beginning of that grace period.  This works because
> > > such trampolines do not contain function calls, do not contain voluntary
> > > context switches, do not switch to usermode, and do not switch to idle.
> > 
> > I'm concerned about the amount of system overhead this introduces.
> > Polling for holdout tasks seems quite excessive.  If I understand the
> > intended use case correctly, the users of this will want to free
> > relatively small amounts of memory; thus, waiting a while to do so seems
> > fine, especially if the system isn't under any particular memory
> > pressure.
> > 
> > Thus, rather than polling, could you simply flag the holdout
> > tasks, telling the scheduler "hey, next time you don't have anything
> > better to do..."?  Then don't bother with them again unless the system
> > runs low on memory and asks you to free some.  (And mandate that you can
> > only use this to free memory rather than for any other purpose.)
> 
> One of my many alternative suggestions that Steven rejected was
> to simply leak the memory.  ;-)
> 
> But from what I can see, if we simply flag the holdout tasks, we
> are either holding onto the task_struct structures, re-introducing
> concurrency to the list of holdout tasks, or requiring that the eventual
> scan for holdout tasks scan the entire task list.  Neither of these seems
> particularly appetizing to me.
> 
> The nice thing about Lai Jiangshan's suggestion is that it allows the
> scan of the holdout list to be done completely unsynchronized, which
> allows pauses during the scan, thus allowing the loop to check for
> competing work on that CPU.  This should get almost all the effect
> of indefinite delay without the indefinite delay (at least in the
> common case).
> 
> Or am I missing something here?

If you only allow a single outstanding set of callbacks at a time, you
could have a single flag stored in the task, combined with a count
stored with the set of callbacks.  Each time one of the holdout tasks
comes up, clear the flag and decrement the count.  If and only if you
get asked to free up memory, start poking the scheduler to bring up
those tasks.  When the count hits 0, free the memory.

The set of trampolines won't change often, and presumably only changes
in response to user-driven requests to trace or stop tracing things.
So, if you have to wait for the existing set of callbacks to go away
before adding more, that seems fine.  And you could then ditch polling
entirely.
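To make the bookkeeping concrete, here is a minimal userspace sketch of
that flag-plus-count scheme.  All struct and function names here are
invented for illustration; the kernel version would keep the flag in
task_struct and the count with the outstanding set of callbacks.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-ins, not kernel data structures. */
struct task {
	bool rcu_tasks_holdout;		/* set at grace-period start */
};

struct cb_set {
	int holdout_count;		/* tasks still blocking this set */
};

/* Grace period begins: flag every task and count them. */
static void start_grace_period(struct cb_set *set, struct task *tasks, int n)
{
	set->holdout_count = n;
	for (int i = 0; i < n; i++)
		tasks[i].rcu_tasks_holdout = true;
}

/* A flagged task reaches a quiescent state: clear the flag and
 * decrement the count.  Returns true once the set may be freed. */
static bool task_quiescent(struct cb_set *set, struct task *t)
{
	if (t->rcu_tasks_holdout) {
		t->rcu_tasks_holdout = false;
		set->holdout_count--;
	}
	return set->holdout_count == 0;
}
```

With only one outstanding set at a time, the memory-pressure path then
reduces to poking the still-flagged tasks until holdout_count reaches
zero.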

> > Also, ideally this should remain entirely optional; nothing in the core
> > kernel should depend on it.
> 
> Agreed, CONFIG_TASKS_RCU is not likely to disappear anytime soon.
> I therefore do not see RCU-tasks as an obstacle to kernel tinification.
> I would also guess that you might complain if someone does try to
> use it from the tinified core of the Linux kernel.  ;-)

Yes. :)

- Josh Triplett

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks()
  2014-07-31 17:02         ` Paul E. McKenney
@ 2014-07-31 17:27           ` Oleg Nesterov
  2014-07-31 17:44             ` Paul E. McKenney
  0 siblings, 1 reply; 42+ messages in thread
From: Oleg Nesterov @ 2014-07-31 17:27 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Lai Jiangshan, linux-kernel, mingo, dipankar, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, dvhart, fweisbec, bobby.prani

On 07/31, Paul E. McKenney wrote:
>
> On Thu, Jul 31, 2014 at 06:31:38PM +0200, Oleg Nesterov wrote:
>
> > But can't we avoid get_task_struct()? This can pin a lot of task_struct's.
> > Can't we just add list_del_rcu(holdout_list) into __unhash_process() ?
>
> If I add the list_del_rcu() there, then I am back to a concurrent list,
> which I would like to avoid.  Don't get me wrong, it was fun playing with
> the list-locked stuff, but best to avoid it if we can.

OK,

> The nice thing about using get_task_struct to lock
> them down is that -only- the task_struct itself is locked down -- the
> task can be reaped and so on.

I understand, but OTOH it would be nice not to pin this memory if the
task was already (auto)reaped.

And afaics the number of pinned task_struct's is not bounded. In theory
it is not even limited by, say, PID_MAX_LIMIT. A thread can exit and reap
itself right after get_task_struct() but create another running thread
which can be noticed by rcu_tasks_kthread() too.

> > We only need to ensure that list_add() above can't race with that list_del(),
> > perhaps we can tolerate lock_task_sighand() ?
>
> I am worried about a task that does a voluntary context switch, then exits.
> This could result in rcu_tasks_kthread() and __unhash_process() both
> wanting to dequeue at the same time, right?

Oh yes, I was very wrong. And we do not want to abuse tasklist_lock...

OK, let me try to read the patch first.

Oleg.


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks()
  2014-07-31 17:27           ` Oleg Nesterov
@ 2014-07-31 17:44             ` Paul E. McKenney
  0 siblings, 0 replies; 42+ messages in thread
From: Paul E. McKenney @ 2014-07-31 17:44 UTC (permalink / raw)
  To: Oleg Nesterov
  Cc: Lai Jiangshan, linux-kernel, mingo, dipankar, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, dvhart, fweisbec, bobby.prani

On Thu, Jul 31, 2014 at 07:27:52PM +0200, Oleg Nesterov wrote:
> On 07/31, Paul E. McKenney wrote:
> >
> > On Thu, Jul 31, 2014 at 06:31:38PM +0200, Oleg Nesterov wrote:
> >
> > > But can't we avoid get_task_struct()? This can pin a lot of task_struct's.
> > > Can't we just add list_del_rcu(holdout_list) into __unhash_process() ?
> >
> > If I add the list_del_rcu() there, then I am back to a concurrent list,
> > which I would like to avoid.  Don't get me wrong, it was fun playing with
> > the list-locked stuff, but best to avoid it if we can.
> 
> OK,
> 
> > The nice thing about using get_task_struct to lock
> > them down is that -only- the task_struct itself is locked down -- the
> > task can be reaped and so on.
> 
> I understand, but OTOH it would be nice not to pin this memory if the
> task was already (auto)reaped.
> 
> And afaics the number of pinned task_struct's is not bounded. In theory
> it is not even limited by, say, PID_MAX_LIMIT. A thread can exit and reap
> itself right after get_task_struct() but create another running thread
> which can be noticed by rcu_tasks_kthread() too.

Good point!  Maybe this means that I need to have rcu_tasks_kthread()
be more energetic if memory runs low, perhaps via an OOM handler.
Would that help?

> > > We only need to ensure that list_add() above can't race with that list_del(),
> > > perhaps we can tolerate lock_task_sighand() ?
> >
> > I am worried about a task that does a voluntary context switch, then exits.
> > This could result in rcu_tasks_kthread() and __unhash_process() both
> > wanting to dequeue at the same time, right?
> 
> Oh yes, I was very wrong. And we do not want to abuse tasklist_lock...
> 
> OK, let me try to read the patch first.

Not a problem, looking forward to your feedback!

							Thanx, Paul


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v2 tip/core/rcu 03/10] rcu: Add synchronous grace-period waiting for RCU-tasks
  2014-07-31 16:58     ` josh
@ 2014-07-31 18:34       ` Paul E. McKenney
  0 siblings, 0 replies; 42+ messages in thread
From: Paul E. McKenney @ 2014-07-31 18:34 UTC (permalink / raw)
  To: josh
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	tglx, peterz, rostedt, dhowells, edumazet, dvhart, fweisbec,
	oleg, bobby.prani

On Thu, Jul 31, 2014 at 09:58:52AM -0700, josh@joshtriplett.org wrote:
> On Wed, Jul 30, 2014 at 05:39:35PM -0700, Paul E. McKenney wrote:
> > From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> > 
> > It turns out to be easier to add the synchronous grace-period waiting
> > functions to RCU-tasks than to work around their absence in rcutorture,
> > so this commit adds them.  The key point is that the existence of
> > call_rcu_tasks() means that rcutorture needs an rcu_barrier_tasks().
> > 
> > Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> 
> With rcu_barrier_tasks being a trivial wrapper, why not just let
> rcutorture call synchronize_rcu_tasks directly?

I considered that, but took the rcu_barrier_tasks() approach so that
should anyone ever use call_rcu_tasks() from a module, they have the
rcu_barrier_tasks() call to use at module-exit time.

But I don't feel all that strongly about it.

							Thanx, Paul

> >  include/linux/rcupdate.h |  2 ++
> >  kernel/rcu/update.c      | 55 ++++++++++++++++++++++++++++++++++++++++++++++++
> >  2 files changed, 57 insertions(+)
> > 
> > diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> > index 3299ff98ad03..17c7e25c38be 100644
> > --- a/include/linux/rcupdate.h
> > +++ b/include/linux/rcupdate.h
> > @@ -216,6 +216,8 @@ void synchronize_sched(void);
> >   * memory ordering guarantees.
> >   */
> >  void call_rcu_tasks(struct rcu_head *head, void (*func)(struct rcu_head *head));
> > +void synchronize_rcu_tasks(void);
> > +void rcu_barrier_tasks(void);
> >  
> >  #ifdef CONFIG_PREEMPT_RCU
> >  
> > diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
> > index b92268647a01..c8d304dc6d8a 100644
> > --- a/kernel/rcu/update.c
> > +++ b/kernel/rcu/update.c
> > @@ -387,6 +387,61 @@ void call_rcu_tasks(struct rcu_head *rhp, void (*func)(struct rcu_head *rhp))
> >  }
> >  EXPORT_SYMBOL_GPL(call_rcu_tasks);
> >  
> > +/**
> > + * synchronize_rcu_tasks - wait until an rcu-tasks grace period has elapsed.
> > + *
> > + * Control will return to the caller some time after a full rcu-tasks
> > + * grace period has elapsed, in other words after all currently
> > + * executing rcu-tasks read-side critical sections have completed.  These
> > + * read-side critical sections are delimited by calls to schedule(),
> > + * cond_resched_rcu_qs(), idle execution, userspace execution, calls
> > + * to synchronize_rcu_tasks(), and (in theory, anyway) cond_resched().
> > + *
> > + * This is a very specialized primitive, intended only for a few uses in
> > + * tracing and other situations requiring manipulation of function
> > + * preambles and profiling hooks.  The synchronize_rcu_tasks() function
> > + * is not (yet) intended for heavy use from multiple CPUs.
> > + *
> > + * Note that this guarantee implies further memory-ordering guarantees.
> > + * On systems with more than one CPU, when synchronize_rcu_tasks() returns,
> > + * each CPU is guaranteed to have executed a full memory barrier since the
> > + * end of its last RCU-tasks read-side critical section whose beginning
> > + * preceded the call to synchronize_rcu_tasks().  In addition, each CPU
> > + * having an RCU-tasks read-side critical section that extends beyond
> > + * the return from synchronize_rcu_tasks() is guaranteed to have executed
> > + * a full memory barrier after the beginning of synchronize_rcu_tasks()
> > + * and before the beginning of that RCU-tasks read-side critical section.
> > + * Note that these guarantees include CPUs that are offline, idle, or
> > + * executing in user mode, as well as CPUs that are executing in the kernel.
> > + *
> > + * Furthermore, if CPU A invoked synchronize_rcu_tasks(), which returned
> > + * to its caller on CPU B, then both CPU A and CPU B are guaranteed
> > + * to have executed a full memory barrier during the execution of
> > + * synchronize_rcu_tasks() -- even if CPU A and CPU B are the same CPU
> > + * (but again only if the system has more than one CPU).
> > + */
> > +void synchronize_rcu_tasks(void)
> > +{
> > +	/* Complain if the scheduler has not started.  */
> > +	rcu_lockdep_assert(!rcu_scheduler_active,
> > +			   "synchronize_rcu_tasks called too soon");
> > +
> > +	/* Wait for the grace period. */
> > +	wait_rcu_gp(call_rcu_tasks);
> > +}
> > +
> > +/**
> > + * rcu_barrier_tasks - Wait for in-flight call_rcu_tasks() callbacks.
> > + *
> > + * Although the current implementation is guaranteed to wait, it is not
> > + * obligated to, for example, if there are no pending callbacks.
> > + */
> > +void rcu_barrier_tasks(void)
> > +{
> > +	/* There is only one callback queue, so this is easy.  ;-) */
> > +	synchronize_rcu_tasks();
> > +}
> > +
> >  /* RCU-tasks kthread that detects grace periods and invokes callbacks. */
> >  static int __noreturn rcu_tasks_kthread(void *arg)
> >  {
> > -- 
> > 1.8.1.5
> > 
> 


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v2 tip/core/rcu 0/10] RCU-tasks implementation
  2014-07-31 17:20     ` josh
@ 2014-07-31 18:38       ` Paul E. McKenney
  2014-07-31 20:58         ` josh
  0 siblings, 1 reply; 42+ messages in thread
From: Paul E. McKenney @ 2014-07-31 18:38 UTC (permalink / raw)
  To: josh
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	tglx, peterz, rostedt, dhowells, edumazet, dvhart, fweisbec,
	oleg, bobby.prani

On Thu, Jul 31, 2014 at 10:20:24AM -0700, josh@joshtriplett.org wrote:
> On Thu, Jul 31, 2014 at 09:58:43AM -0700, Paul E. McKenney wrote:
> > On Thu, Jul 31, 2014 at 09:19:02AM -0700, josh@joshtriplett.org wrote:
> > > On Wed, Jul 30, 2014 at 05:39:14PM -0700, Paul E. McKenney wrote:
> > > > This series provides a prototype of an RCU-tasks implementation, which has
> > > > been requested to assist with trampoline removal.  This flavor of RCU
> > > > is task-based rather than CPU-based, and has voluntary context switch,
> > > > usermode execution, and the idle loops as its only quiescent states.
> > > > This selection of quiescent states ensures that at the end of a grace
> > > > period, there will no longer be any tasks depending on a trampoline that
> > > > was removed before the beginning of that grace period.  This works because
> > > > such trampolines do not contain function calls, do not contain voluntary
> > > > context switches, do not switch to usermode, and do not switch to idle.
> > > 
> > > I'm concerned about the amount of system overhead this introduces.
> > > Polling for holdout tasks seems quite excessive.  If I understand the
> > > intended use case correctly, the users of this will want to free
> > > relatively small amounts of memory; thus, waiting a while to do so seems
> > > fine, especially if the system isn't under any particular memory
> > > pressure.
> > > 
> > > Thus, rather than polling, could you simply flag the holdout
> > > tasks, telling the scheduler "hey, next time you don't have anything
> > > better to do..."?  Then don't bother with them again unless the system
> > > runs low on memory and asks you to free some.  (And mandate that you can
> > > only use this to free memory rather than for any other purpose.)
> > 
> > One of my many alternative suggestions that Steven rejected was
> > to simply leak the memory.  ;-)
> > 
> > But from what I can see, if we simply flag the holdout tasks, we are
> > either also holding onto the task_struct structures, re-introducing
> > concurrency to the list of holdout tasks, or requiring that the
> > eventual scan for holdout tasks cover the entire task list.  Neither
> > of these seems particularly appetizing to me.
> > 
> > The nice thing about Lai Jiangshan's suggestion is that it allows the
> > scan of the holdout list to be done completely unsynchronized, which
> > allows pauses during the scan, thus allowing the loop to check for
> > competing work on that CPU.  This should get almost all the effect
> > of indefinite delay without the indefinite delay (at least in the
> > common case).
> > 
> > Or am I missing something here?
> 
> If you only allow a single outstanding set of callbacks at a time, you
> could have a single flag stored in the task, combined with a count
> stored with the set of callbacks.  Each time one of the holdout tasks
> comes up, clear the flag and decrement the count.  If and only if you
> get asked to free up memory, start poking the scheduler to bring up
> those tasks.  When the count hits 0, free the memory.
> 
> The set of trampolines won't change often, and presumably only changes
> in response to user-driven requests to trace or stop tracing things.
> So, if you have to wait for the existing set of callbacks to go away
> before adding more, that seems fine.  And you could then ditch polling
> entirely.

If I understand what you are suggesting, this requires hooks in the
scheduler.  I used to have hooks in the scheduler, but I dropped them in
favor of polling the voluntary context-switch count in response to Peter
Zijlstra's concerns about adding overhead to the scheduler's fastpaths.

Therefore, although the flags are sometimes cleared externally from the
scheduling-clock interrupt (for usermode execution), it is quite possible
that a given task might never have its flag cleared asynchronously.

Another approach might be to poll more slowly or to make the polling
evict itself if it detects that this CPU has something else to do.
Would either or both of these help?
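As a rough model of that snapshot-and-compare polling, the kernel's
per-task voluntary context-switch counter (nvcsw) can be recorded at
grace-period start and compared on each scan.  The code below is an
illustrative userspace sketch, not the actual implementation:

```c
#include <stdbool.h>

/* Illustrative model: poll a per-task voluntary context-switch count
 * rather than adding hooks to the scheduler fastpaths. */
struct task {
	unsigned long nvcsw;		/* bumped on each voluntary switch */
	unsigned long nvcsw_snap;	/* snapshot at grace-period start */
	bool holdout;
};

/* Grace period begins: snapshot the counter and mark the task. */
static void note_holdout(struct task *t)
{
	t->nvcsw_snap = t->nvcsw;
	t->holdout = true;
}

/* One pass of the unsynchronized holdout scan: release the task once
 * its counter has advanced past the snapshot.  Returns true if the
 * task is still holding up the grace period. */
static bool check_holdout(struct task *t)
{
	if (t->holdout && t->nvcsw != t->nvcsw_snap)
		t->holdout = false;
	return t->holdout;
}
```

Because the scan only reads and compares counters, it can pause between
tasks and yield to competing work on that CPU without any locking.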

> > > Also, ideally this should remain entirely optional; nothing in the core
> > > kernel should depend on it.
> > 
> > Agreed, CONFIG_TASKS_RCU is not likely to disappear anytime soon.
> > I therefore do not see RCU-tasks as an obstacle to kernel tinification.
> > I would also guess that you might complain if someone does try to
> > use it from the tinified core of the Linux kernel.  ;-)
> 
> Yes. :)

Very good!  ;-) ;-) ;-)

							Thanx, Paul


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v2 tip/core/rcu 0/10] RCU-tasks implementation
  2014-07-31  0:39 [PATCH v2 tip/core/rcu 0/10] RCU-tasks implementation Paul E. McKenney
  2014-07-31  0:39 ` [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks() Paul E. McKenney
  2014-07-31 16:19 ` [PATCH v2 tip/core/rcu 0/10] RCU-tasks implementation josh
@ 2014-07-31 19:29 ` Andi Kleen
  2014-07-31 21:08   ` Paul E. McKenney
  2 siblings, 1 reply; 42+ messages in thread
From: Andi Kleen @ 2014-07-31 19:29 UTC (permalink / raw)
  To: paulmck
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, tglx, peterz, rostedt, dhowells, edumazet, dvhart,
	fweisbec, oleg, bobby.prani

"Paul E. McKenney" <paulmck@linux.vnet.ibm.com> writes:

> Hello!
>
> This series provides a prototype of an RCU-tasks implementation, which has
> been requested to assist with trampoline removal.

Is that trampoline removal (whatever it is) important enough to justify
adding that much new (and complicated) code? Are there other potential
users of this? Is the long term maintenance cost justified?
Will RCU keep growing forever?

-Andi

-- 
ak@linux.intel.com -- Speaking for myself only

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v2 tip/core/rcu 04/10] rcu: Export RCU-tasks APIs to GPL modules
  2014-07-31 16:56     ` josh
@ 2014-07-31 20:55       ` Paul E. McKenney
  0 siblings, 0 replies; 42+ messages in thread
From: Paul E. McKenney @ 2014-07-31 20:55 UTC (permalink / raw)
  To: josh
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	tglx, peterz, rostedt, dhowells, edumazet, dvhart, fweisbec,
	oleg, bobby.prani

On Thu, Jul 31, 2014 at 09:56:20AM -0700, josh@joshtriplett.org wrote:
> On Wed, Jul 30, 2014 at 05:39:36PM -0700, Paul E. McKenney wrote:
> > From: Steven Rostedt <rostedt@goodmis.org>
> > 
> > This commit exports the RCU-tasks APIs, call_rcu_tasks(),
> > synchronize_rcu_tasks(), and rcu_barrier_tasks(), to GPL-licensed
> > kernel modules.
> > 
> > Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
> > Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> 
> Reviewed-by: Josh Triplett <josh@joshtriplett.org>
> 
> Should this remain a separate patch, or go into the patch that creates
> these APIs?

Excellent question.  Keeping them separate for now, but...

							Thanx, Paul

> >  kernel/rcu/update.c | 2 ++
> >  1 file changed, 2 insertions(+)
> > 
> > diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
> > index c8d304dc6d8a..1bfc07ed854e 100644
> > --- a/kernel/rcu/update.c
> > +++ b/kernel/rcu/update.c
> > @@ -429,6 +429,7 @@ void synchronize_rcu_tasks(void)
> >  	/* Wait for the grace period. */
> >  	wait_rcu_gp(call_rcu_tasks);
> >  }
> > +EXPORT_SYMBOL_GPL(synchronize_rcu_tasks);
> >  
> >  /**
> >   * rcu_barrier_tasks - Wait for in-flight call_rcu_tasks() callbacks.
> > @@ -441,6 +442,7 @@ void rcu_barrier_tasks(void)
> >  	/* There is only one callback queue, so this is easy.  ;-) */
> >  	synchronize_rcu_tasks();
> >  }
> > +EXPORT_SYMBOL_GPL(rcu_barrier_tasks);
> >  
> >  /* RCU-tasks kthread that detects grace periods and invokes callbacks. */
> >  static int __noreturn rcu_tasks_kthread(void *arg)
> > -- 
> > 1.8.1.5
> > 
> 


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v2 tip/core/rcu 05/10] rcutorture: Add torture tests for RCU-tasks
  2014-07-31 17:01     ` josh
@ 2014-07-31 20:55       ` Paul E. McKenney
  0 siblings, 0 replies; 42+ messages in thread
From: Paul E. McKenney @ 2014-07-31 20:55 UTC (permalink / raw)
  To: josh
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	tglx, peterz, rostedt, dhowells, edumazet, dvhart, fweisbec,
	oleg, bobby.prani

On Thu, Jul 31, 2014 at 10:01:08AM -0700, josh@joshtriplett.org wrote:
> On Wed, Jul 30, 2014 at 05:39:37PM -0700, Paul E. McKenney wrote:
> > From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> > 
> > This commit adds torture tests for RCU-tasks.  It also fixes a bug that
> > would segfault for an RCU flavor lacking a callback-barrier function.
> > 
> > Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> 
> Seems reasonable; however, you could also set .cb_barrier =
> synchronize_rcu_tasks and drop rcu_barrier_tasks.

Good point, though I like the idea of matching the existing RCU API
template.

> Either way:
> Reviewed-by: Josh Triplett <josh@joshtriplett.org>

And thank you for the review!

							Thanx, Paul

> >  include/linux/rcupdate.h |  1 +
> >  kernel/rcu/rcutorture.c  | 40 +++++++++++++++++++++++++++++++++++++++-
> >  2 files changed, 40 insertions(+), 1 deletion(-)
> > 
> > diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> > index 17c7e25c38be..ecb2198849e0 100644
> > --- a/include/linux/rcupdate.h
> > +++ b/include/linux/rcupdate.h
> > @@ -55,6 +55,7 @@ enum rcutorture_type {
> >  	RCU_FLAVOR,
> >  	RCU_BH_FLAVOR,
> >  	RCU_SCHED_FLAVOR,
> > +	RCU_TASKS_FLAVOR,
> >  	SRCU_FLAVOR,
> >  	INVALID_RCU_FLAVOR
> >  };
> > diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
> > index febe07062ac5..6d12ab6675bc 100644
> > --- a/kernel/rcu/rcutorture.c
> > +++ b/kernel/rcu/rcutorture.c
> > @@ -602,6 +602,42 @@ static struct rcu_torture_ops sched_ops = {
> >  };
> >  
> >  /*
> > + * Definitions for RCU-tasks torture testing.
> > + */
> > +
> > +static int tasks_torture_read_lock(void)
> > +{
> > +	return 0;
> > +}
> > +
> > +static void tasks_torture_read_unlock(int idx)
> > +{
> > +}
> > +
> > +static void rcu_tasks_torture_deferred_free(struct rcu_torture *p)
> > +{
> > +	call_rcu_tasks(&p->rtort_rcu, rcu_torture_cb);
> > +}
> > +
> > +static struct rcu_torture_ops tasks_ops = {
> > +	.ttype		= RCU_TASKS_FLAVOR,
> > +	.init		= rcu_sync_torture_init,
> > +	.readlock	= tasks_torture_read_lock,
> > +	.read_delay	= rcu_read_delay,  /* just reuse rcu's version. */
> > +	.readunlock	= tasks_torture_read_unlock,
> > +	.completed	= rcu_no_completed,
> > +	.deferred_free	= rcu_tasks_torture_deferred_free,
> > +	.sync		= synchronize_rcu_tasks,
> > +	.exp_sync	= synchronize_rcu_tasks,
> > +	.call		= call_rcu_tasks,
> > +	.cb_barrier	= rcu_barrier_tasks,
> > +	.fqs		= NULL,
> > +	.stats		= NULL,
> > +	.irq_capable	= 1,
> > +	.name		= "tasks"
> > +};
> > +
> > +/*
> >   * RCU torture priority-boost testing.  Runs one real-time thread per
> >   * CPU for moderate bursts, repeatedly registering RCU callbacks and
> >   * spinning waiting for them to be invoked.  If a given callback takes
> > @@ -1295,7 +1331,8 @@ static int rcu_torture_barrier_cbs(void *arg)
> >  		if (atomic_dec_and_test(&barrier_cbs_count))
> >  			wake_up(&barrier_wq);
> >  	} while (!torture_must_stop());
> > -	cur_ops->cb_barrier();
> > +	if (cur_ops->cb_barrier != NULL)
> > +		cur_ops->cb_barrier();
> >  	destroy_rcu_head_on_stack(&rcu);
> >  	torture_kthread_stopping("rcu_torture_barrier_cbs");
> >  	return 0;
> > @@ -1534,6 +1571,7 @@ rcu_torture_init(void)
> >  	int firsterr = 0;
> >  	static struct rcu_torture_ops *torture_ops[] = {
> >  		&rcu_ops, &rcu_bh_ops, &rcu_busted_ops, &srcu_ops, &sched_ops,
> > +		&tasks_ops,
> >  	};
> >  
> >  	if (!torture_init_begin(torture_type, verbose, &rcutorture_runnable))
> > -- 
> > 1.8.1.5
> > 
> 


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v2 tip/core/rcu 0/10] RCU-tasks implementation
  2014-07-31 18:38       ` Paul E. McKenney
@ 2014-07-31 20:58         ` josh
  2014-07-31 21:11           ` Paul E. McKenney
  0 siblings, 1 reply; 42+ messages in thread
From: josh @ 2014-07-31 20:58 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	tglx, peterz, rostedt, dhowells, edumazet, dvhart, fweisbec,
	oleg, bobby.prani

On Thu, Jul 31, 2014 at 11:38:16AM -0700, Paul E. McKenney wrote:
> On Thu, Jul 31, 2014 at 10:20:24AM -0700, josh@joshtriplett.org wrote:
> > On Thu, Jul 31, 2014 at 09:58:43AM -0700, Paul E. McKenney wrote:
> > > On Thu, Jul 31, 2014 at 09:19:02AM -0700, josh@joshtriplett.org wrote:
> > > > On Wed, Jul 30, 2014 at 05:39:14PM -0700, Paul E. McKenney wrote:
> > > > > This series provides a prototype of an RCU-tasks implementation, which has
> > > > > > been requested to assist with trampoline removal.  This flavor of RCU
> > > > > is task-based rather than CPU-based, and has voluntary context switch,
> > > > > usermode execution, and the idle loops as its only quiescent states.
> > > > > This selection of quiescent states ensures that at the end of a grace
> > > > > period, there will no longer be any tasks depending on a trampoline that
> > > > > was removed before the beginning of that grace period.  This works because
> > > > > such trampolines do not contain function calls, do not contain voluntary
> > > > > context switches, do not switch to usermode, and do not switch to idle.
> > > > 
> > > > I'm concerned about the amount of system overhead this introduces.
> > > > Polling for holdout tasks seems quite excessive.  If I understand the
> > > > intended use case correctly, the users of this will want to free
> > > > relatively small amounts of memory; thus, waiting a while to do so seems
> > > > fine, especially if the system isn't under any particular memory
> > > > pressure.
> > > > 
> > > > Thus, rather than polling, could you simply flag the holdout
> > > > tasks, telling the scheduler "hey, next time you don't have anything
> > > > better to do..."?  Then don't bother with them again unless the system
> > > > runs low on memory and asks you to free some.  (And mandate that you can
> > > > only use this to free memory rather than for any other purpose.)
> > > 
> > > One of my many alternative suggestions that Steven rejected was
> > > to simply leak the memory.  ;-)
> > > 
> > > But from what I can see, if we simply flag the holdout tasks, we are
> > > either also holding onto the task_struct structures, re-introducing
> > > concurrency to the list of holdout tasks, or requiring that the
> > > eventual scan for holdout tasks cover the entire task list.  Neither
> > > of these seems particularly appetizing to me.
> > > 
> > > The nice thing about Lai Jiangshan's suggestion is that it allows the
> > > scan of the holdout list to be done completely unsynchronized, which
> > > allows pauses during the scan, thus allowing the loop to check for
> > > competing work on that CPU.  This should get almost all the effect
> > > of indefinite delay without the indefinite delay (at least in the
> > > common case).
> > > 
> > > Or am I missing something here?
> > 
> > If you only allow a single outstanding set of callbacks at a time, you
> > could have a single flag stored in the task, combined with a count
> > stored with the set of callbacks.  Each time one of the holdout tasks
> > comes up, clear the flag and decrement the count.  If and only if you
> > get asked to free up memory, start poking the scheduler to bring up
> > those tasks.  When the count hits 0, free the memory.
> > 
> > The set of trampolines won't change often, and presumably only changes
> > in response to user-driven requests to trace or stop tracing things.
> > So, if you have to wait for the existing set of callbacks to go away
> > before adding more, that seems fine.  And you could then ditch polling
> > entirely.
> 
> If I understand what you are suggesting, this requires hooks in the
> scheduler.  I used to have hooks in the scheduler, but I dropped them in
> favor of polling the voluntary context-switch count in response to Peter
> Zijlstra's concerns about adding overhead to the scheduler's fastpaths.
> 
> Therefore, although the flags are sometimes cleared externally from the
> scheduling-clock interrupt (for usermode execution), it is quite possible
> that a given task might never have its flag cleared asynchronously.
> 
> Another approach might be to poll more slowly or to make the polling
> evict itself if it detects that this CPU has something else to do.
> Would either or both of these help?

As discussed at lunch today, another option would be to drop the thread
and handle cleanup synchronously from the caller in the tracing code, or
fire off a kthread *on request* to handle it asynchronously.  That would
avoid paying the startup and overhead on a system that has tracing in
the kernel but never uses it, as will likely occur with distro kernels.

- Josh Triplett

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v2 tip/core/rcu 0/10] RCU-tasks implementation
  2014-07-31 19:29 ` Andi Kleen
@ 2014-07-31 21:08   ` Paul E. McKenney
  0 siblings, 0 replies; 42+ messages in thread
From: Paul E. McKenney @ 2014-07-31 21:08 UTC (permalink / raw)
  To: Andi Kleen
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, tglx, peterz, rostedt, dhowells, edumazet, dvhart,
	fweisbec, oleg, bobby.prani

On Thu, Jul 31, 2014 at 12:29:36PM -0700, Andi Kleen wrote:
> "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> writes:
> 
> > Hello!
> >
> > This series provides a prototype of an RCU-tasks implementation, which has
> > been requested to assist with trampoline removal.
> 
> Is that trampoline removal (whatever it is) important enough to justify
> adding that much new (and complicated) code? Are there other potential
> users of this? Is the long term maintenance cost justified?
> Will RCU keep growing forever?

My answers to all of these questions is "I don't know."

That said, I cannot resist pointing out that similar questions could be
asked of a great many other subsystems of the Linux kernel, or indeed
of the Linux kernel as a whole.  ;-)

And things do appear to be moving in a good direction for RCU-tasks:

	v1: 18 files changed, 749 insertions(+), 55 deletions(-)
	v2: 18 files changed, 590 insertions(+), 83 deletions(-)
	v3: 17 files changed, 447 insertions(+), 32 deletions(-)

The numbers for v3 are tentative, and might change based on test results.

							Thanx, Paul


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v2 tip/core/rcu 0/10] RCU-tasks implementation
  2014-07-31 20:58         ` josh
@ 2014-07-31 21:11           ` Paul E. McKenney
  0 siblings, 0 replies; 42+ messages in thread
From: Paul E. McKenney @ 2014-07-31 21:11 UTC (permalink / raw)
  To: josh
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	tglx, peterz, rostedt, dhowells, edumazet, dvhart, fweisbec,
	oleg, bobby.prani

On Thu, Jul 31, 2014 at 01:58:17PM -0700, josh@joshtriplett.org wrote:
> On Thu, Jul 31, 2014 at 11:38:16AM -0700, Paul E. McKenney wrote:
> > On Thu, Jul 31, 2014 at 10:20:24AM -0700, josh@joshtriplett.org wrote:
> > > On Thu, Jul 31, 2014 at 09:58:43AM -0700, Paul E. McKenney wrote:
> > > > On Thu, Jul 31, 2014 at 09:19:02AM -0700, josh@joshtriplett.org wrote:
> > > > > On Wed, Jul 30, 2014 at 05:39:14PM -0700, Paul E. McKenney wrote:
> > > > > > This series provides a prototype of an RCU-tasks implementation, which has
> > > > > > been requested to assist with trampoline removal.  This flavor of RCU
> > > > > > is task-based rather than CPU-based, and has voluntary context switch,
> > > > > > usermode execution, and the idle loops as its only quiescent states.
> > > > > > This selection of quiescent states ensures that at the end of a grace
> > > > > > period, there will no longer be any tasks depending on a trampoline that
> > > > > > was removed before the beginning of that grace period.  This works because
> > > > > > such trampolines do not contain function calls, do not contain voluntary
> > > > > > context switches, do not switch to usermode, and do not switch to idle.
> > > > > 
> > > > > I'm concerned about the amount of system overhead this introduces.
> > > > > Polling for holdout tasks seems quite excessive.  If I understand the
> > > > > intended use case correctly, the users of this will want to free
> > > > > relatively small amounts of memory; thus, waiting a while to do so seems
> > > > > fine, especially if the system isn't under any particular memory
> > > > > pressure.
> > > > > 
> > > > > Thus, rather than polling, could you simply flag the holdout
> > > > > tasks, telling the scheduler "hey, next time you don't have anything
> > > > > better to do..."?  Then don't bother with them again unless the system
> > > > > runs low on memory and asks you to free some.  (And mandate that you can
> > > > > only use this to free memory rather than for any other purpose.)
> > > > 
> > > > One of my many alternative suggestions that Steven rejected was
> > > > to simply leak the memory.  ;-)
> > > > 
> > > > But from what I can see, if we simply flag the holdout tasks, we
> > > > either are also holding onto the task_struct structures, re-introducing
> > > > concurrency to the list of holdout tasks, or requiring that the eventual
> > > > scan for holdout tasks scan the entire task list.  Neither of these seems
> > > > particularly appetizing to me.
> > > > 
> > > > The nice thing about Lai Jiangshan's suggestion is that it allows the
> > > > scan of the holdout list to be done completely unsynchronized, which
> > > > allows pauses during the scan, thus allowing the loop to check for
> > > > competing work on that CPU.  This should get almost all the effect
> > > > of indefinite delay without the indefinite delay (at least in the
> > > > common case).
> > > > 
> > > > Or am I missing something here?
> > > 
> > > If you only allow a single outstanding set of callbacks at a time, you
> > > could have a single flag stored in the task, combined with a count
> > > stored with the set of callbacks.  Each time one of the holdout tasks
> > > comes up, clear the flag and decrement the count.  If and only if you
> > > get asked to free up memory, start poking the scheduler to bring up
> > > those tasks.  When the count hits 0, free the memory.
> > > 
> > > The set of trampolines won't change often, and presumably only changes
> > > in response to user-driven requests to trace or stop tracing things.
> > > So, if you have to wait for the existing set of callbacks to go away
> > > before adding more, that seems fine.  And you could then ditch polling
> > > entirely.
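
[Josh's flag-and-count scheme can be sketched as a stand-alone userspace
model.  The struct and function names below are hypothetical illustrations,
not kernel code, and the single-threaded model ignores the atomicity the
real decrement would need:]

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of a single outstanding callback set. */
struct cb_set {
	int holdout_count;		/* tasks still flagged */
	void (*free_fn)(void *);	/* runs when the count hits zero */
	void *mem;			/* memory awaiting the grace period */
};

struct mtask {
	bool rcu_holdout;		/* the single per-task flag */
	struct cb_set *set;
};

/*
 * Called when a flagged task next comes up (reaches a quiescent
 * state): clear the flag and decrement the count; the last task to
 * clear its flag frees the memory.
 */
static void mtask_quiescent(struct mtask *t)
{
	if (!t->rcu_holdout)
		return;
	t->rcu_holdout = false;
	if (--t->set->holdout_count == 0)
		t->set->free_fn(t->set->mem);
}
```

[The "poke the scheduler only under memory pressure" part of the
suggestion is not modeled; only the flag/count bookkeeping is.]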
> > 
> > If I understand what you are suggesting, this requires hooks in the
> > scheduler.  I used to have hooks in the scheduler, but I dropped them in
> > favor of polling the voluntary context-switch count in response to Peter
> > Zijlstra's concerns about adding overhead to the scheduler's fastpaths.
> > 
> > Therefore, although the flags are sometimes cleared externally from the
> > scheduling-clock interrupt (for usermode execution), it is quite possible
> > that a given task might never have its flag cleared asynchronously.
> > 
> > Another approach might be to poll more slowly or to make the polling
> > evict itself if it detects that this CPU has something else to do.
> > Would either or both of these help?
> 
> As discussed at lunch today, another option would be to drop the thread
> and handle cleanup synchronously from the caller in the tracing code, or
> fire off a kthread *on request* to handle it asynchronously.  That would
> avoid paying the startup cost and overhead on a system that has tracing in
> the kernel but never uses it, as will likely occur with distro kernels.

Good points!

As discussed, I am more inclined towards the second approach than
towards the first, but let me get v3 tested and posted and then see how
things look.

							Thanx, Paul


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks()
  2014-07-31 16:09     ` Paul E. McKenney
  2014-07-31 16:20       ` Peter Zijlstra
  2014-07-31 16:31       ` Oleg Nesterov
@ 2014-08-01  0:53       ` Lai Jiangshan
  2014-08-01  2:09         ` Paul E. McKenney
  2 siblings, 1 reply; 42+ messages in thread
From: Lai Jiangshan @ 2014-08-01  0:53 UTC (permalink / raw)
  To: paulmck
  Cc: linux-kernel, mingo, dipankar, akpm, mathieu.desnoyers, josh,
	tglx, peterz, rostedt, dhowells, edumazet, dvhart, fweisbec,
	oleg, bobby.prani

On 08/01/2014 12:09 AM, Paul E. McKenney wrote:

> 
>>> +		/*
>>> +		 * There were callbacks, so we need to wait for an
>>> +		 * RCU-tasks grace period.  Start off by scanning
>>> +		 * the task list for tasks that are not already
>>> +		 * voluntarily blocked.  Mark these tasks and make
>>> +		 * a list of them in rcu_tasks_holdouts.
>>> +		 */
>>> +		rcu_read_lock();
>>> +		for_each_process_thread(g, t) {
>>> +			if (t != current && ACCESS_ONCE(t->on_rq) &&
>>> +			    !is_idle_task(t)) {
>>
>> What happens when the trampoline is on the idle task?
>>
>> I think we need to use schedule_on_each_cpu() to replace one of
>> the synchronize_sched() in this function. (or other stuff which can
>> cause real schedule for *ALL* online CPUs).
> 
> Well, that is one of the questions in the 0/10 cover letter.  If it turns
> out to be necessary to worry about idle-task trampolines, it should be
> possible to avoid hammering all idle CPUs in the common case.  Though maybe
> battery-powered devices won't need RCU-tasks.
> 

Trampolines on a NO_HZ idle CPU can run for an arbitrarily long time (for
example, if an SMI happens inside the trampoline).  So only a real schedule
on the idle CPU seems reliable to me.
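
[Lai's objection can be seen in a small userspace model of the
nvcsw-snapshot check (names hypothetical): the idle task never voluntarily
context-switches, so polling alone can never clear it from the holdout
list, while forcing a real schedule, as schedule_on_each_cpu() would, can:]

```c
#include <assert.h>
#include <stdbool.h>

/*
 * A holdout clears only when its voluntary-context-switch count has
 * moved past the snapshot taken at grace-period start.
 */
struct mtask {
	unsigned long nvcsw;		/* voluntary context switches */
	unsigned long nvcsw_snap;	/* snapshot at GP start */
	bool holdout;
};

static void check_holdout(struct mtask *t)
{
	if (t->holdout && t->nvcsw != t->nvcsw_snap)
		t->holdout = false;
}

/*
 * Model of forcing a real schedule on a CPU: the idle task is made
 * to context-switch, so its counter finally advances.
 */
static void force_schedule(struct mtask *t)
{
	t->nvcsw++;
}
```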


* Re: [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks()
  2014-08-01  0:53       ` Lai Jiangshan
@ 2014-08-01  2:09         ` Paul E. McKenney
  0 siblings, 0 replies; 42+ messages in thread
From: Paul E. McKenney @ 2014-08-01  2:09 UTC (permalink / raw)
  To: Lai Jiangshan
  Cc: linux-kernel, mingo, dipankar, akpm, mathieu.desnoyers, josh,
	tglx, peterz, rostedt, dhowells, edumazet, dvhart, fweisbec,
	oleg, bobby.prani

On Fri, Aug 01, 2014 at 08:53:38AM +0800, Lai Jiangshan wrote:
> On 08/01/2014 12:09 AM, Paul E. McKenney wrote:
> 
> > 
> >>> +		/*
> >>> +		 * There were callbacks, so we need to wait for an
> >>> +		 * RCU-tasks grace period.  Start off by scanning
> >>> +		 * the task list for tasks that are not already
> >>> +		 * voluntarily blocked.  Mark these tasks and make
> >>> +		 * a list of them in rcu_tasks_holdouts.
> >>> +		 */
> >>> +		rcu_read_lock();
> >>> +		for_each_process_thread(g, t) {
> >>> +			if (t != current && ACCESS_ONCE(t->on_rq) &&
> >>> +			    !is_idle_task(t)) {
> >>
> >> What happens when the trampoline is on the idle task?
> >>
> >> I think we need to use schedule_on_each_cpu() to replace one of
> >> the synchronize_sched() in this function. (or other stuff which can
> >> cause real schedule for *ALL* online CPUs).
> > 
> > Well, that is one of the questions in the 0/10 cover letter.  If it turns
> > out to be necessary to worry about idle-task trampolines, it should be
> > possible to avoid hammering all idle CPUs in the common case.  Though maybe
> > battery-powered devices won't need RCU-tasks.
> 
> Trampolines on a NO_HZ idle CPU can run for an arbitrarily long time (for
> example, if an SMI happens inside the trampoline).  So only a real schedule
> on the idle CPU seems reliable to me.

You might well be right, but first let's see if Steven needs this to
work in the idle task to begin with.  If he doesn't, then there is no
point in worrying about it.  If he does, I bet I can come up with a
trick or two.  ;-)

							Thanx, Paul



* Re: [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks()
  2014-07-31  0:39 ` [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks() Paul E. McKenney
                     ` (9 preceding siblings ...)
  2014-07-31  7:30   ` [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks() Lai Jiangshan
@ 2014-08-01 15:53   ` Oleg Nesterov
  2014-08-01 18:19     ` Paul E. McKenney
  10 siblings, 1 reply; 42+ messages in thread
From: Oleg Nesterov @ 2014-08-01 15:53 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, tglx, peterz, rostedt, dhowells, edumazet, dvhart,
	fweisbec, bobby.prani

On 07/30, Paul E. McKenney wrote:
>
> +		rcu_read_lock();
> +		for_each_process_thread(g, t) {
> +			if (t != current && ACCESS_ONCE(t->on_rq) &&
> +			    !is_idle_task(t)) {
> +				t->rcu_tasks_nvcsw = ACCESS_ONCE(t->nvcsw);
> +				t->rcu_tasks_holdout = 1;
> +				list_add(&t->rcu_tasks_holdout_list,
> +					 &rcu_tasks_holdouts);
> +			}
> +		}
> +		rcu_read_unlock();

Wait, unless I missed something this can't work...

The problem is, once the exiting task passes exit_notify() it can
be removed from rcu lists.

Now suppose that (say) proc_exit_connector() has a probe, and this
task has jumped into trampoline and it was preempted there.

No?

Oleg.



* Re: [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks()
  2014-08-01 15:53   ` Oleg Nesterov
@ 2014-08-01 18:19     ` Paul E. McKenney
  2014-08-01 18:36       ` Oleg Nesterov
  0 siblings, 1 reply; 42+ messages in thread
From: Paul E. McKenney @ 2014-08-01 18:19 UTC (permalink / raw)
  To: Oleg Nesterov
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, tglx, peterz, rostedt, dhowells, edumazet, dvhart,
	fweisbec, bobby.prani

On Fri, Aug 01, 2014 at 05:53:10PM +0200, Oleg Nesterov wrote:
> On 07/30, Paul E. McKenney wrote:
> >
> > +		rcu_read_lock();
> > +		for_each_process_thread(g, t) {
> > +			if (t != current && ACCESS_ONCE(t->on_rq) &&
> > +			    !is_idle_task(t)) {
> > +				t->rcu_tasks_nvcsw = ACCESS_ONCE(t->nvcsw);
> > +				t->rcu_tasks_holdout = 1;
> > +				list_add(&t->rcu_tasks_holdout_list,
> > +					 &rcu_tasks_holdouts);
> > +			}
> > +		}
> > +		rcu_read_unlock();
> 
> Wait, unless I missed something this can't work...
> 
> The problem is, once the exiting task passes exit_notify() it can
> be removed from rcu lists.
> 
> Now suppose that (say) proc_exit_connector() has a probe, and this
> task has jumped into trampoline and it was preempted there.
> 
> No?

OK, this sounds to me like another vote for get_task_struct() as used
in the v3 series.  Or am I missing something?
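
[The get_task_struct() approach can be modeled with a plain reference
count (userspace sketch, hypothetical names): the holdout scan takes its
own reference, so the structure stays valid even after the exiting task
drops its last one, avoiding the use-after-free Oleg describes:]

```c
#include <assert.h>
#include <stdlib.h>

static int tasks_freed;		/* observability for the model only */

struct mtask_struct {
	int refcount;
};

static struct mtask_struct *mtask_alloc(void)
{
	struct mtask_struct *t = malloc(sizeof(*t));
	t->refcount = 1;		/* the task's own reference */
	return t;
}

static void get_mtask(struct mtask_struct *t)
{
	t->refcount++;
}

static void put_mtask(struct mtask_struct *t)
{
	if (--t->refcount == 0) {
		free(t);
		tasks_freed++;
	}
}
```

[In the kernel the count manipulation is atomic; this model only shows
the lifetime rule: whoever holds a reference may keep examining the
structure, and only the final put frees it.]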

							Thanx, Paul



* Re: [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks()
  2014-08-01 18:19     ` Paul E. McKenney
@ 2014-08-01 18:36       ` Oleg Nesterov
  0 siblings, 0 replies; 42+ messages in thread
From: Oleg Nesterov @ 2014-08-01 18:36 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, tglx, peterz, rostedt, dhowells, edumazet, dvhart,
	fweisbec, bobby.prani

On 08/01, Paul E. McKenney wrote:
>
> On Fri, Aug 01, 2014 at 05:53:10PM +0200, Oleg Nesterov wrote:
> > On 07/30, Paul E. McKenney wrote:
> > >
> > > +		rcu_read_lock();
> > > +		for_each_process_thread(g, t) {
> > > +			if (t != current && ACCESS_ONCE(t->on_rq) &&
> > > +			    !is_idle_task(t)) {
> > > +				t->rcu_tasks_nvcsw = ACCESS_ONCE(t->nvcsw);
> > > +				t->rcu_tasks_holdout = 1;
> > > +				list_add(&t->rcu_tasks_holdout_list,
> > > +					 &rcu_tasks_holdouts);
> > > +			}
> > > +		}
> > > +		rcu_read_unlock();
> >
> > Wait, unless I missed something this can't work...
> >
> > The problem is, once the exiting task passes exit_notify() it can
> > be removed from rcu lists.
> >
> > Now suppose that (say) proc_exit_connector() has a probe, and this
> > task has jumped into trampoline and it was preempted there.
> >
> > No?
>
> OK, this sounds to me like another vote for get_task_struct() as used
> in the v3 series.  Or am I missing something?

Ah. Sorry for confusion, I replied to the wrong email, let me resend...

Oleg.



end of thread, other threads:[~2014-08-01 18:40 UTC | newest]

Thread overview: 42+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-07-31  0:39 [PATCH v2 tip/core/rcu 0/10] RCU-tasks implementation Paul E. McKenney
2014-07-31  0:39 ` [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks() Paul E. McKenney
2014-07-31  0:39   ` [PATCH v2 tip/core/rcu 02/10] rcu: Provide cond_resched_rcu_qs() to force quiescent states in long loops Paul E. McKenney
2014-07-31  0:39   ` [PATCH v2 tip/core/rcu 03/10] rcu: Add synchronous grace-period waiting for RCU-tasks Paul E. McKenney
2014-07-31 16:58     ` josh
2014-07-31 18:34       ` Paul E. McKenney
2014-07-31  0:39   ` [PATCH v2 tip/core/rcu 04/10] rcu: Export RCU-tasks APIs to GPL modules Paul E. McKenney
2014-07-31 16:56     ` josh
2014-07-31 20:55       ` Paul E. McKenney
2014-07-31  0:39   ` [PATCH v2 tip/core/rcu 05/10] rcutorture: Add torture tests for RCU-tasks Paul E. McKenney
2014-07-31 17:01     ` josh
2014-07-31 20:55       ` Paul E. McKenney
2014-07-31  0:39   ` [PATCH v2 tip/core/rcu 06/10] rcutorture: Add RCU-tasks test cases Paul E. McKenney
2014-07-31  0:39   ` [PATCH v2 tip/core/rcu 07/10] rcu: Add stall-warning checks for RCU-tasks Paul E. McKenney
2014-07-31  0:39   ` [PATCH v2 tip/core/rcu 08/10] rcu: Improve RCU-tasks energy efficiency Paul E. McKenney
2014-07-31  0:39   ` [PATCH v2 tip/core/rcu 09/10] documentation: Add verbiage on RCU-tasks stall warning messages Paul E. McKenney
2014-07-31  0:39   ` [PATCH v2 tip/core/rcu 10/10] rcu: Make RCU-tasks track exiting tasks Paul E. McKenney
2014-07-31  7:30   ` [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks() Lai Jiangshan
2014-07-31 16:09     ` Paul E. McKenney
2014-07-31 16:20       ` Peter Zijlstra
2014-07-31 16:47         ` Paul E. McKenney
2014-07-31 16:57           ` Peter Zijlstra
2014-07-31 16:31       ` Oleg Nesterov
2014-07-31 17:02         ` Paul E. McKenney
2014-07-31 17:27           ` Oleg Nesterov
2014-07-31 17:44             ` Paul E. McKenney
2014-08-01  0:53       ` Lai Jiangshan
2014-08-01  2:09         ` Paul E. McKenney
2014-08-01 15:53   ` Oleg Nesterov
2014-08-01 18:19     ` Paul E. McKenney
2014-08-01 18:36       ` Oleg Nesterov
2014-07-31 16:19 ` [PATCH v2 tip/core/rcu 0/10] RCU-tasks implementation josh
2014-07-31 16:30   ` Peter Zijlstra
2014-07-31 16:43     ` josh
2014-07-31 16:49     ` Paul E. McKenney
2014-07-31 16:58   ` Paul E. McKenney
2014-07-31 17:20     ` josh
2014-07-31 18:38       ` Paul E. McKenney
2014-07-31 20:58         ` josh
2014-07-31 21:11           ` Paul E. McKenney
2014-07-31 19:29 ` Andi Kleen
2014-07-31 21:08   ` Paul E. McKenney
