From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: linux-kernel@vger.kernel.org
Cc: mingo@elte.hu, laijs@cn.fujitsu.com, dipankar@in.ibm.com,
	akpm@linux-foundation.org, mathieu.desnoyers@polymtl.ca,
	josh@joshtriplett.org, niv@us.ibm.com, tglx@linutronix.de,
	peterz@infradead.org, rostedt@goodmis.org,
	Valdis.Kletnieks@vt.edu, dhowells@redhat.com,
	eric.dumazet@gmail.com, darren@dvhart.com, fweisbec@gmail.com,
	patches@linaro.org, "Paul E. McKenney" <paul.mckenney@linaro.org>,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Subject: [PATCH RFC tip/core/rcu 35/41] rcu: Move synchronize_sched_expedited() to rcutree.c
Date: Wed,  1 Feb 2012 11:41:53 -0800
Message-ID: <1328125319-5205-35-git-send-email-paulmck@linux.vnet.ibm.com>
In-Reply-To: <1328125319-5205-1-git-send-email-paulmck@linux.vnet.ibm.com>

From: "Paul E. McKenney" <paul.mckenney@linaro.org>

Now that TREE_RCU and TREE_PREEMPT_RCU no longer do anything different
for the single-CPU case, there is no need for multiple definitions of
synchronize_sched_expedited().  It is no longer in any sense a plug-in,
so move it from kernel/rcutree_plugin.h to kernel/rcutree.c.

Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 kernel/rcutree.c        |  117 +++++++++++++++++++++++++++++++++++++++++++++++
 kernel/rcutree_plugin.h |  116 ----------------------------------------------
 2 files changed, 117 insertions(+), 116 deletions(-)
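
As a reading aid for the function being moved: the comment block below
describes the started/done counters as the two halves of a ticket lock.
Here is a minimal userspace sketch of that counter protocol, with C11
atomics standing in for atomic_t and a stub force_quiescent() standing
in for try_stop_cpus().  All names in the sketch are illustrative, not
kernel API, and it omits the kernel's explicit memory barriers, the
CPU-hotplug handling, and the synchronize_sched() fallback, none of
which change the counter protocol itself:

	#include <limits.h>
	#include <stdatomic.h>
	#include <stdbool.h>

	static atomic_uint started;	/* plays sync_sched_expedited_started */
	static atomic_uint done;	/* plays sync_sched_expedited_done */

	/* Wraparound-safe "a >= b", as in the kernel's UINT_CMP_GE(). */
	static bool cmp_ge(unsigned int a, unsigned int b)
	{
		return a - b <= UINT_MAX / 2;
	}

	/*
	 * Stand-in for try_stop_cpus(): returns true if it forced every
	 * CPU through a context switch, false if it lost a race and the
	 * caller must retry.  Always succeeds in this sketch; a real
	 * implementation could fail and exercise the retry path below.
	 */
	static bool force_quiescent(void)
	{
		return true;
	}

	void expedited(void)
	{
		/* Take a ticket; the increment timestamps our entry. */
		unsigned int firstsnap = atomic_fetch_add(&started, 1) + 1;
		unsigned int snap = firstsnap;
		unsigned int s;

		while (!force_quiescent()) {
			/* Did a concurrent caller's success cover us? */
			if (cmp_ge(atomic_load(&done), firstsnap))
				return;
			/* Refetch so that our eventual success also
			 * covers everyone holding a ticket by now. */
			snap = atomic_load(&started);
		}

		/* Advance done to our snapshot, unless a later caller
		 * already advanced it past our snapshot. */
		s = atomic_load(&done);
		while (!cmp_ge(s, snap) &&
		       !atomic_compare_exchange_weak(&done, &s, snap))
			continue;	/* failed CAS reloaded s; recheck */
	}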

diff --git a/kernel/rcutree.c b/kernel/rcutree.c
index 243ceb1..632b1c3 100644
--- a/kernel/rcutree.c
+++ b/kernel/rcutree.c
@@ -50,6 +50,8 @@
 #include <linux/wait.h>
 #include <linux/kthread.h>
 #include <linux/prefetch.h>
+#include <linux/delay.h>
+#include <linux/stop_machine.h>
 
 #include "rcutree.h"
 #include <trace/events/rcu.h>
@@ -1919,6 +1921,121 @@ void synchronize_rcu_bh(void)
 }
 EXPORT_SYMBOL_GPL(synchronize_rcu_bh);
 
+static atomic_t sync_sched_expedited_started = ATOMIC_INIT(0);
+static atomic_t sync_sched_expedited_done = ATOMIC_INIT(0);
+
+static int synchronize_sched_expedited_cpu_stop(void *data)
+{
+	/*
+	 * There must be a full memory barrier on each affected CPU
+	 * between the time that try_stop_cpus() is called and the
+	 * time that it returns.
+	 *
+	 * In the current initial implementation of cpu_stop, the
+	 * above condition is already met when the control reaches
+	 * this point and the following smp_mb() is not strictly
+	 * necessary.  Do smp_mb() anyway for documentation and
+	 * robustness against future implementation changes.
+	 */
+	smp_mb(); /* See above comment block. */
+	return 0;
+}
+
+/*
+ * Wait for an rcu-sched grace period to elapse, but use "big hammer"
+ * approach to force grace period to end quickly.  This consumes
+ * significant time on all CPUs, and is thus not recommended for
+ * any sort of common-case code.
+ *
+ * Note that it is illegal to call this function while holding any
+ * lock that is acquired by a CPU-hotplug notifier.  Failing to
+ * observe this restriction will result in deadlock.
+ *
+ * This implementation can be thought of as an application of ticket
+ * locking to RCU, with sync_sched_expedited_started and
+ * sync_sched_expedited_done taking on the roles of the halves
+ * of the ticket-lock word.  Each task atomically increments
+ * sync_sched_expedited_started upon entry, snapshotting the old value,
+ * then attempts to stop all the CPUs.  If this succeeds, then each
+ * CPU will have executed a context switch, resulting in an RCU-sched
+ * grace period.  We are then done, so we use atomic_cmpxchg() to
+ * update sync_sched_expedited_done to match our snapshot -- but
+ * only if someone else has not already advanced past our snapshot.
+ *
+ * On the other hand, if try_stop_cpus() fails, we check the value
+ * of sync_sched_expedited_done.  If it has advanced past our
+ * initial snapshot, then someone else must have forced a grace period
+ * some time after we took our snapshot.  In this case, our work is
+ * done for us, and we can simply return.  Otherwise, we try again,
+ * but keep our initial snapshot for purposes of checking for someone
+ * doing our work for us.
+ *
+ * If we fail too many times in a row, we fall back to synchronize_sched().
+ */
+void synchronize_sched_expedited(void)
+{
+	int firstsnap, s, snap, trycount = 0;
+
+	/* Note that atomic_inc_return() implies full memory barrier. */
+	firstsnap = snap = atomic_inc_return(&sync_sched_expedited_started);
+	get_online_cpus();
+	WARN_ON_ONCE(cpu_is_offline(smp_processor_id()));
+
+	/*
+	 * Each pass through the following loop attempts to force a
+	 * context switch on each CPU.
+	 */
+	while (try_stop_cpus(cpu_online_mask,
+			     synchronize_sched_expedited_cpu_stop,
+			     NULL) == -EAGAIN) {
+		put_online_cpus();
+
+		/* No joy, try again later.  Or just synchronize_sched(). */
+		if (trycount++ < 10)
+			udelay(trycount * num_online_cpus());
+		else {
+			synchronize_sched();
+			return;
+		}
+
+		/* Check to see if someone else did our work for us. */
+		s = atomic_read(&sync_sched_expedited_done);
+		if (UINT_CMP_GE((unsigned)s, (unsigned)firstsnap)) {
+			smp_mb(); /* ensure test happens before caller kfree */
+			return;
+		}
+
+		/*
+		 * Refetching sync_sched_expedited_started allows later
+		 * callers to piggyback on our grace period.  The refetched
+		 * value is the token that the most recent incrementer got.
+		 * We retry after they started, so our grace period works
+		 * for them, and they started after our first try, so their
+		 * grace period works for us.
+		 */
+		get_online_cpus();
+		snap = atomic_read(&sync_sched_expedited_started);
+		smp_mb(); /* ensure read is before try_stop_cpus(). */
+	}
+
+	/*
+	 * Everyone up to our most recent fetch is covered by our grace
+	 * period.  Update the counter, but only if our work is still
+	 * relevant -- which it won't be if someone who started later
+	 * than we did beat us to the punch.
+	 */
+	do {
+		s = atomic_read(&sync_sched_expedited_done);
+		if (UINT_CMP_GE((unsigned)s, (unsigned)snap)) {
+			smp_mb(); /* ensure test happens before caller kfree */
+			break;
+		}
+	} while (atomic_cmpxchg(&sync_sched_expedited_done, s, snap) != s);
+
+	put_online_cpus();
+}
+EXPORT_SYMBOL_GPL(synchronize_sched_expedited);
+
 /*
  * Check to see if there is any immediate RCU-related work to be done
  * by the current CPU, for the specified type of RCU, returning 1 if so.
diff --git a/kernel/rcutree_plugin.h b/kernel/rcutree_plugin.h
index 255ef887..9706a12 100644
--- a/kernel/rcutree_plugin.h
+++ b/kernel/rcutree_plugin.h
@@ -25,7 +25,6 @@
  */
 
 #include <linux/delay.h>
-#include <linux/stop_machine.h>
 
 #define RCU_KTHREAD_PRIO 1
 
@@ -1887,121 +1886,6 @@ static void __cpuinit rcu_prepare_kthreads(int cpu)
 
 #endif /* #else #ifdef CONFIG_RCU_BOOST */
 
-static atomic_t sync_sched_expedited_started = ATOMIC_INIT(0);
-static atomic_t sync_sched_expedited_done = ATOMIC_INIT(0);
-
-static int synchronize_sched_expedited_cpu_stop(void *data)
-{
-	/*
-	 * There must be a full memory barrier on each affected CPU
-	 * between the time that try_stop_cpus() is called and the
-	 * time that it returns.
-	 *
-	 * In the current initial implementation of cpu_stop, the
-	 * above condition is already met when the control reaches
-	 * this point and the following smp_mb() is not strictly
-	 * necessary.  Do smp_mb() anyway for documentation and
-	 * robustness against future implementation changes.
-	 */
-	smp_mb(); /* See above comment block. */
-	return 0;
-}
-
-/*
- * Wait for an rcu-sched grace period to elapse, but use "big hammer"
- * approach to force grace period to end quickly.  This consumes
- * significant time on all CPUs, and is thus not recommended for
- * any sort of common-case code.
- *
- * Note that it is illegal to call this function while holding any
- * lock that is acquired by a CPU-hotplug notifier.  Failing to
- * observe this restriction will result in deadlock.
- *
- * This implementation can be thought of as an application of ticket
- * locking to RCU, with sync_sched_expedited_started and
- * sync_sched_expedited_done taking on the roles of the halves
- * of the ticket-lock word.  Each task atomically increments
- * sync_sched_expedited_started upon entry, snapshotting the old value,
- * then attempts to stop all the CPUs.  If this succeeds, then each
- * CPU will have executed a context switch, resulting in an RCU-sched
- * grace period.  We are then done, so we use atomic_cmpxchg() to
- * update sync_sched_expedited_done to match our snapshot -- but
- * only if someone else has not already advanced past our snapshot.
- *
- * On the other hand, if try_stop_cpus() fails, we check the value
- * of sync_sched_expedited_done.  If it has advanced past our
- * initial snapshot, then someone else must have forced a grace period
- * some time after we took our snapshot.  In this case, our work is
- * done for us, and we can simply return.  Otherwise, we try again,
- * but keep our initial snapshot for purposes of checking for someone
- * doing our work for us.
- *
- * If we fail too many times in a row, we fall back to synchronize_sched().
- */
-void synchronize_sched_expedited(void)
-{
-	int firstsnap, s, snap, trycount = 0;
-
-	/* Note that atomic_inc_return() implies full memory barrier. */
-	firstsnap = snap = atomic_inc_return(&sync_sched_expedited_started);
-	get_online_cpus();
-	WARN_ON_ONCE(cpu_is_offline(smp_processor_id()));
-
-	/*
-	 * Each pass through the following loop attempts to force a
-	 * context switch on each CPU.
-	 */
-	while (try_stop_cpus(cpu_online_mask,
-			     synchronize_sched_expedited_cpu_stop,
-			     NULL) == -EAGAIN) {
-		put_online_cpus();
-
-		/* No joy, try again later.  Or just synchronize_sched(). */
-		if (trycount++ < 10)
-			udelay(trycount * num_online_cpus());
-		else {
-			synchronize_sched();
-			return;
-		}
-
-		/* Check to see if someone else did our work for us. */
-		s = atomic_read(&sync_sched_expedited_done);
-		if (UINT_CMP_GE((unsigned)s, (unsigned)firstsnap)) {
-			smp_mb(); /* ensure test happens before caller kfree */
-			return;
-		}
-
-		/*
-		 * Refetching sync_sched_expedited_started allows later
-		 * callers to piggyback on our grace period.  The refetched
-		 * value is the token that the most recent incrementer got.
-		 * We retry after they started, so our grace period works
-		 * for them, and they started after our first try, so their
-		 * grace period works for us.
-		 */
-		get_online_cpus();
-		snap = atomic_read(&sync_sched_expedited_started);
-		smp_mb(); /* ensure read is before try_stop_cpus(). */
-	}
-
-	/*
-	 * Everyone up to our most recent fetch is covered by our grace
-	 * period.  Update the counter, but only if our work is still
-	 * relevant -- which it won't be if someone who started later
-	 * than we did beat us to the punch.
-	 */
-	do {
-		s = atomic_read(&sync_sched_expedited_done);
-		if (UINT_CMP_GE((unsigned)s, (unsigned)snap)) {
-			smp_mb(); /* ensure test happens before caller kfree */
-			break;
-		}
-	} while (atomic_cmpxchg(&sync_sched_expedited_done, s, snap) != s);
-
-	put_online_cpus();
-}
-EXPORT_SYMBOL_GPL(synchronize_sched_expedited);
-
 #if !defined(CONFIG_RCU_FAST_NO_HZ)
 
 /*
-- 
1.7.8
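
For completeness, a tiny demonstration of why the snapshot checks use
UINT_CMP_GE() rather than a plain ">=": the started/done counters are
free-running and may wrap.  UINT_CMP_GE() lives in
include/linux/rcupdate.h and is essentially "UINT_MAX / 2 >= (a) - (b)";
the COUNTER_CMP_GE() name below is illustrative only:

	#include <limits.h>
	#include <stdio.h>

	/*
	 * The unsigned subtraction (a) - (b) wraps modulo UINT_MAX + 1,
	 * so while two counters stay within UINT_MAX / 2 of each other,
	 * a small difference means "a is at or ahead of b".
	 */
	#define COUNTER_CMP_GE(a, b) \
		(UINT_MAX / 2 >= (unsigned int)(a) - (unsigned int)(b))

	int main(void)
	{
		unsigned int done = UINT_MAX;	/* about to wrap */
		unsigned int snap = done + 2;	/* wraps around to 1 */

		printf("plain:   %d\n", snap >= done);	/* 0: wrong after wrap */
		printf("modular: %d\n", COUNTER_CMP_GE(snap, done));	/* 1: correct */
		return 0;
	}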

