* [PATCH RFC 0/6] Miscellaneous RCU fixes for 3.5
@ 2012-04-23 16:41 Paul E. McKenney
  2012-04-23 16:42 ` [PATCH RFC tip/core/rcu 1/6] rcu: Stabilize use of num_online_cpus() for GP short circuit Paul E. McKenney
  0 siblings, 1 reply; 32+ messages in thread
From: Paul E. McKenney @ 2012-04-23 16:41 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, niv, tglx,
	peterz, rostedt, Valdis.Kletnieks, dhowells, eric.dumazet,
	darren, fweisbec, patches

Hello!

This series includes several miscellaneous fixes for RCU as follows:

1.	Make the rcu_blocking_is_gp() function properly guard against
	concurrent CPU-hotplug operations.
2.	Make __list_add_rcu() do pointer checks like __list_add() does,
	courtesy of Dave Jones.
3.	Replace the "unsafe at any speed" list_first_entry_rcu() macro
	with a safe and sane list_first_or_null_rcu(), courtesy of
	Michel Machado.
4.	Clarify help text for the RCU_BOOST_PRIO kernel config parameter.
5.	Convert __kfree_rcu() from an inline function to a macro in order
	to avoid spurious build failures should gcc fail to inline it.
	Courtesy of Jan Engelhardt.
6.	Reduce cache-miss initialization latencies for large systems.
	Given the widespread use of NR_CPUS=4096, there are a lot of
	large systems out there.

							Thanx, Paul


 b/include/linux/rculist.h  |    7 ++++++-
 b/include/linux/rcupdate.h |   18 ++++++++++++++++++
 b/include/linux/rcutree.h  |    7 ++++++-
 b/init/Kconfig             |   23 +++++++++++++++++++----
 b/kernel/rcutree.c         |    2 +-
 b/kernel/rcutree.h         |   10 +++-------
 b/lib/list_debug.c         |   22 ++++++++++++++++++++++
 include/linux/rculist.h    |   33 +++++++++++++++++++++++++++++----
 init/Kconfig               |   27 +++++++++++++++++++++++++++
 9 files changed, 131 insertions(+), 18 deletions(-)


^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH RFC tip/core/rcu 1/6] rcu: Stabilize use of num_online_cpus() for GP short circuit
  2012-04-23 16:41 [PATCH RFC 0/6] Miscellaneous RCU fixes for 3.5 Paul E. McKenney
@ 2012-04-23 16:42 ` Paul E. McKenney
  2012-04-23 16:42   ` [PATCH RFC tip/core/rcu 2/6] rcu: List-debug variants of rcu list routines Paul E. McKenney
                     ` (5 more replies)
  0 siblings, 6 replies; 32+ messages in thread
From: Paul E. McKenney @ 2012-04-23 16:42 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, niv, tglx,
	peterz, rostedt, Valdis.Kletnieks, dhowells, eric.dumazet,
	darren, fweisbec, patches, Paul E. McKenney, Paul E. McKenney

From: "Paul E. McKenney" <paul.mckenney@linaro.org>

The rcu_blocking_is_gp() function tests to see if there is only one
online CPU, and if so, synchronize_sched() and friends become no-ops.
However, for larger systems, num_online_cpus() scans a large vector,
and might be preempted while doing so.  While preempted, any number
of CPUs might come online and go offline, potentially resulting in
num_online_cpus() returning 1 even though there was never only one
CPU online.  This would result in a too-short RCU grace period, which
could in turn result in total failure.

This commit therefore protects the num_online_cpus() with preempt_disable().
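
For context, here is a simplified sketch (not part of this patch, and
eliding the real function's documentation and debug checks) of how this
fast path is consumed by synchronize_sched():

void synchronize_sched(void)
{
	if (rcu_blocking_is_gp())
		return;			/* Single CPU: every context switch
					 * already acted as a grace period. */
	wait_rcu_gp(call_rcu_sched);	/* Otherwise wait for a real grace period. */
}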

Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 include/linux/rcutree.h |    7 ++++++-
 1 files changed, 6 insertions(+), 1 deletions(-)

diff --git a/include/linux/rcutree.h b/include/linux/rcutree.h
index e8ee5dd..997179f 100644
--- a/include/linux/rcutree.h
+++ b/include/linux/rcutree.h
@@ -101,8 +101,13 @@ extern void rcu_sched_force_quiescent_state(void);
 /* A context switch is a grace period for RCU-sched and RCU-bh. */
 static inline int rcu_blocking_is_gp(void)
 {
+	int ret;
+
 	might_sleep();  /* Check for RCU read-side critical section. */
-	return num_online_cpus() == 1;
+	preempt_disable();  /* Prevent CPU hotplug from confusing us. */
+	ret = num_online_cpus() == 1;
+	preempt_enable();
+	return ret;
 }
 
 extern void rcu_scheduler_starting(void);
-- 
1.7.8


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH RFC tip/core/rcu 2/6] rcu: List-debug variants of rcu list routines.
  2012-04-23 16:42 ` [PATCH RFC tip/core/rcu 1/6] rcu: Stabilize use of num_online_cpus() for GP short circuit Paul E. McKenney
@ 2012-04-23 16:42   ` Paul E. McKenney
  2012-04-23 16:42   ` [PATCH RFC tip/core/rcu 3/6] rcu: Replace list_first_entry_rcu() with list_first_or_null_rcu() Paul E. McKenney
                     ` (4 subsequent siblings)
  5 siblings, 0 replies; 32+ messages in thread
From: Paul E. McKenney @ 2012-04-23 16:42 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, niv, tglx,
	peterz, rostedt, Valdis.Kletnieks, dhowells, eric.dumazet,
	darren, fweisbec, patches, Dave Jones, Paul E. McKenney

From: Dave Jones <davej@redhat.com>

* Make __list_add_rcu check the next->prev and prev->next pointers
  just like __list_add does.
* Make list_del_rcu use __list_del_entry, which does the same checking
  at deletion time.

Has been running for a week here without anything being tripped up,
but it seems worth adding for completeness just in case something
ever does corrupt those lists.
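
For illustration only (mylist, mylist_lock, and the item pointers below
are hypothetical names, not from this patch), these are the update-side
calls that gain the checks when CONFIG_DEBUG_LIST is enabled:

static void example_update(struct foo *new_item, struct foo *old_item)
{
	spin_lock(&mylist_lock);		/* writers still exclude each other */
	list_add_rcu(&new_item->list, &mylist);	/* __list_add_rcu() now WARNs on
						 * corrupted next->prev/prev->next */
	list_del_rcu(&old_item->list);		/* now routed through __list_del_entry()
						 * and its corruption checks */
	spin_unlock(&mylist_lock);
}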

Signed-off-by: Dave Jones <davej@redhat.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 include/linux/rculist.h |    7 ++++++-
 lib/list_debug.c        |   22 ++++++++++++++++++++++
 2 files changed, 28 insertions(+), 1 deletions(-)

diff --git a/include/linux/rculist.h b/include/linux/rculist.h
index d079290..a20c050 100644
--- a/include/linux/rculist.h
+++ b/include/linux/rculist.h
@@ -30,6 +30,7 @@
  * This is only for internal list manipulation where we know
  * the prev/next entries already!
  */
+#ifndef CONFIG_DEBUG_LIST
 static inline void __list_add_rcu(struct list_head *new,
 		struct list_head *prev, struct list_head *next)
 {
@@ -38,6 +39,10 @@ static inline void __list_add_rcu(struct list_head *new,
 	rcu_assign_pointer(list_next_rcu(prev), new);
 	next->prev = new;
 }
+#else
+extern void __list_add_rcu(struct list_head *new,
+		struct list_head *prev, struct list_head *next);
+#endif
 
 /**
  * list_add_rcu - add a new entry to rcu-protected list
@@ -108,7 +113,7 @@ static inline void list_add_tail_rcu(struct list_head *new,
  */
 static inline void list_del_rcu(struct list_head *entry)
 {
-	__list_del(entry->prev, entry->next);
+	__list_del_entry(entry);
 	entry->prev = LIST_POISON2;
 }
 
diff --git a/lib/list_debug.c b/lib/list_debug.c
index 982b850..3810b48 100644
--- a/lib/list_debug.c
+++ b/lib/list_debug.c
@@ -10,6 +10,7 @@
 #include <linux/list.h>
 #include <linux/bug.h>
 #include <linux/kernel.h>
+#include <linux/rculist.h>
 
 /*
  * Insert a new entry between two known consecutive entries.
@@ -75,3 +76,24 @@ void list_del(struct list_head *entry)
 	entry->prev = LIST_POISON2;
 }
 EXPORT_SYMBOL(list_del);
+
+/*
+ * RCU variants.
+ */
+void __list_add_rcu(struct list_head *new,
+		    struct list_head *prev, struct list_head *next)
+{
+	WARN(next->prev != prev,
+		"list_add_rcu corruption. next->prev should be "
+		"prev (%p), but was %p. (next=%p).\n",
+		prev, next->prev, next);
+	WARN(prev->next != next,
+		"list_add_rcu corruption. prev->next should be "
+		"next (%p), but was %p. (prev=%p).\n",
+		next, prev->next, prev);
+	new->next = next;
+	new->prev = prev;
+	rcu_assign_pointer(list_next_rcu(prev), new);
+	next->prev = new;
+}
+EXPORT_SYMBOL(__list_add_rcu);
-- 
1.7.8


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH RFC tip/core/rcu 3/6] rcu: Replace list_first_entry_rcu() with list_first_or_null_rcu()
  2012-04-23 16:42 ` [PATCH RFC tip/core/rcu 1/6] rcu: Stabilize use of num_online_cpus() for GP short circuit Paul E. McKenney
  2012-04-23 16:42   ` [PATCH RFC tip/core/rcu 2/6] rcu: List-debug variants of rcu list routines Paul E. McKenney
@ 2012-04-23 16:42   ` Paul E. McKenney
  2012-04-23 16:42   ` [PATCH RFC tip/core/rcu 4/6] rcu: Clarify help text for RCU_BOOST_PRIO Paul E. McKenney
                     ` (3 subsequent siblings)
  5 siblings, 0 replies; 32+ messages in thread
From: Paul E. McKenney @ 2012-04-23 16:42 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, niv, tglx,
	peterz, rostedt, Valdis.Kletnieks, dhowells, eric.dumazet,
	darren, fweisbec, patches, Michel Machado, Paul E. McKenney

From: Michel Machado <michel@digirati.com.br>

The list_first_entry_rcu() macro is inherently unsafe because it cannot
be applied to an empty list.  But because RCU readers do not exclude
updaters, a list might become empty between the time that list_empty()
claimed it was non-empty and the time that list_first_entry_rcu() is
invoked.  Therefore, the list_empty() test cannot be separated from the
list_first_entry_rcu() call.  This commit therefore combines these two
macros to create a new list_first_or_null_rcu() macro that replaces
the old (and unsafe) list_first_entry_rcu() macro.

This patch incorporates Paul's review comments on the previous version of
this patch available here:

https://lkml.org/lkml/2012/4/2/536

This patch cannot break any upstream code because list_first_entry_rcu()
is not being used anywhere in the kernel (tested with grep(1)), and any
external code using it is probably broken as a result of using it.
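
A usage sketch, reusing the placeholder names from the comment added
below (struct foo, mylist, list_member, do_something()):

struct foo *bar;

rcu_read_lock();
bar = list_first_or_null_rcu(&mylist, struct foo, list_member);
if (bar)
	do_something(bar);	/* NULL would have meant the list was empty
				 * at the moment the ->next pointer was read */
rcu_read_unlock();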

Signed-off-by: Michel Machado <michel@digirati.com.br>
CC: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
CC: Dipankar Sarma <dipankar@in.ibm.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 include/linux/rculist.h |   33 +++++++++++++++++++++++++++++----
 1 files changed, 29 insertions(+), 4 deletions(-)

diff --git a/include/linux/rculist.h b/include/linux/rculist.h
index a20c050..e0f0fab 100644
--- a/include/linux/rculist.h
+++ b/include/linux/rculist.h
@@ -233,18 +233,43 @@ static inline void list_splice_init_rcu(struct list_head *list,
 	})
 
 /**
- * list_first_entry_rcu - get the first element from a list
+ * Where are list_empty_rcu() and list_first_entry_rcu()?
+ *
+ * Implementing those functions following their counterparts list_empty() and
+ * list_first_entry() is not advisable because they lead to subtle race
+ * conditions as the following snippet shows:
+ *
+ * if (!list_empty_rcu(mylist)) {
+ *	struct foo *bar = list_first_entry_rcu(mylist, struct foo, list_member);
+ *	do_something(bar);
+ * }
+ *
+ * The list may not be empty when list_empty_rcu checks it, but it may be when
+ * list_first_entry_rcu rereads the ->next pointer.
+ *
+ * Rereading the ->next pointer is not a problem for list_empty() and
+ * list_first_entry() because they would be protected by a lock that blocks
+ * writers.
+ *
+ * See list_first_or_null_rcu for an alternative.
+ */
+
+/**
+ * list_first_or_null_rcu - get the first element from a list
  * @ptr:        the list head to take the element from.
  * @type:       the type of the struct this is embedded in.
  * @member:     the name of the list_struct within the struct.
  *
- * Note, that list is expected to be not empty.
+ * Note that if the list is empty, it returns NULL.
  *
  * This primitive may safely run concurrently with the _rcu list-mutation
  * primitives such as list_add_rcu() as long as it's guarded by rcu_read_lock().
  */
-#define list_first_entry_rcu(ptr, type, member) \
-	list_entry_rcu((ptr)->next, type, member)
+#define list_first_or_null_rcu(ptr, type, member) \
+	({struct list_head *__ptr = (ptr); \
+	  struct list_head __rcu *__next = list_next_rcu(__ptr); \
+	  likely(__ptr != __next) ? container_of(__next, type, member) : NULL; \
+	})
 
 /**
  * list_for_each_entry_rcu	-	iterate over rcu list of given type
-- 
1.7.8


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH RFC tip/core/rcu 4/6] rcu: Clarify help text for RCU_BOOST_PRIO
  2012-04-23 16:42 ` [PATCH RFC tip/core/rcu 1/6] rcu: Stabilize use of num_online_cpus() for GP short circuit Paul E. McKenney
  2012-04-23 16:42   ` [PATCH RFC tip/core/rcu 2/6] rcu: List-debug variants of rcu list routines Paul E. McKenney
  2012-04-23 16:42   ` [PATCH RFC tip/core/rcu 3/6] rcu: Replace list_first_entry_rcu() with list_first_or_null_rcu() Paul E. McKenney
@ 2012-04-23 16:42   ` Paul E. McKenney
  2012-04-26 12:46     ` Peter Zijlstra
  2012-04-23 16:42   ` [PATCH RFC tip/core/rcu 5/6] rcu: Make __kfree_rcu() less dependent on compiler choices Paul E. McKenney
                     ` (2 subsequent siblings)
  5 siblings, 1 reply; 32+ messages in thread
From: Paul E. McKenney @ 2012-04-23 16:42 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, niv, tglx,
	peterz, rostedt, Valdis.Kletnieks, dhowells, eric.dumazet,
	darren, fweisbec, patches, Paul E. McKenney, Paul E. McKenney

From: "Paul E. McKenney" <paul.mckenney@linaro.org>

The old text confused real-time applications with real-time threads, so
that you pretty much needed to understand how this kernel configuration
parameter worked to understand the help text.  This commit therefore
attempts to make the help text human-readable.
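
For the worked example in the new help text below (threads at priorities
10 and 5 jointly saturating a CPU), the corresponding configuration
fragment would be along these lines:

CONFIG_RCU_BOOST=y
CONFIG_RCU_BOOST_PRIO=6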

Reported-by: Jörn Engel <joern@purestorage.com>
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 init/Kconfig |   23 +++++++++++++++++++----
 1 files changed, 19 insertions(+), 4 deletions(-)

diff --git a/init/Kconfig b/init/Kconfig
index 6cfd71d..85c6870 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -515,10 +515,25 @@ config RCU_BOOST_PRIO
 	depends on RCU_BOOST
 	default 1
 	help
-	  This option specifies the real-time priority to which preempted
-	  RCU readers are to be boosted.  If you are working with CPU-bound
-	  real-time applications, you should specify a priority higher then
-	  the highest-priority CPU-bound application.
+	  This option specifies the real-time priority to which long-term
+	  preempted RCU readers are to be boosted.  If you are working
+	  with a real-time application that has one or more CPU-bound
+	  threads running at a real-time priority level, you should set
+	  RCU_BOOST_PRIO to a priority higher than the highest-priority
+	  real-time CPU-bound thread.  The default RCU_BOOST_PRIO value
+	  of 1 is appropriate in the common case, which is real-time
+	  applications that do not have any CPU-bound threads.
+
+	  Some real-time applications might not have a single real-time
+	  thread that saturates a given CPU, but instead might have
+	  multiple real-time threads that, taken together, fully utilize
+	  that CPU.  In this case, you should set RCU_BOOST_PRIO to
+	  a priority higher than the lowest-priority thread that is
+	  conspiring to prevent the CPU from running any non-real-time
+	  tasks.  For example, if one thread at priority 10 and another
+	  thread at priority 5 are between themselves fully consuming
+	  the CPU time on a given CPU, then RCU_BOOST_PRIO should be
+	  set to priority 6 or higher.
 
 	  Specify the real-time priority, or take the default if unsure.
 
-- 
1.7.8


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH RFC tip/core/rcu 5/6] rcu: Make __kfree_rcu() less dependent on compiler choices
  2012-04-23 16:42 ` [PATCH RFC tip/core/rcu 1/6] rcu: Stabilize use of num_online_cpus() for GP short circuit Paul E. McKenney
                     ` (2 preceding siblings ...)
  2012-04-23 16:42   ` [PATCH RFC tip/core/rcu 4/6] rcu: Clarify help text for RCU_BOOST_PRIO Paul E. McKenney
@ 2012-04-23 16:42   ` Paul E. McKenney
  2012-04-26 12:48     ` Peter Zijlstra
  2012-04-23 16:42   ` [PATCH RFC tip/core/rcu 6/6] rcu: Reduce cache-miss initialization latencies for large systems Paul E. McKenney
  2012-04-24 15:35   ` [PATCH RFC tip/core/rcu 1/6] rcu: Stabilize use of num_online_cpus() for GP short circuit Srivatsa S. Bhat
  5 siblings, 1 reply; 32+ messages in thread
From: Paul E. McKenney @ 2012-04-23 16:42 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, niv, tglx,
	peterz, rostedt, Valdis.Kletnieks, dhowells, eric.dumazet,
	darren, fweisbec, patches, Jan Engelhardt, Paul E. McKenney

From: Jan Engelhardt <jengelh@medozas.de>

Currently, __kfree_rcu() is implemented as an inline function, and
contains a BUILD_BUG_ON() that malfunctions if __kfree_rcu() is compiled
as an out-of-line function.  Unfortunately, there are compiler settings
(e.g., -O0) that can result in __kfree_rcu() being compiled out of line,
resulting in annoying build breakage.  This commit therefore converts
both __kfree_rcu() and __is_kfree_rcu_offset() from inline functions to
macros to prevent such misbehavior on the part of the compiler.
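
For reference, a typical caller looks like the following sketch (struct
foo and foo_release() are placeholder names); the point of the change is
that the BUILD_BUG_ON() now expands at the call site rather than inside
an inline function:

struct foo {
	int a;
	struct rcu_head rcu;
};

static void foo_release(struct foo *p)
{
	kfree_rcu(p, rcu);	/* expands to __kfree_rcu(&p->rcu,
				 * offsetof(struct foo, rcu)) */
}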

Signed-off-by: Jan Engelhardt <jengelh@medozas.de>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
---
 include/linux/rcupdate.h |   18 ++++++++++++++++++
 1 files changed, 18 insertions(+), 0 deletions(-)

diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index 20fb776..d5dfb10 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -922,6 +922,21 @@ void __kfree_rcu(struct rcu_head *head, unsigned long offset)
 	kfree_call_rcu(head, (rcu_callback)offset);
 }
 
+/*
+ * Does the specified offset indicate that the corresponding rcu_head
+ * structure can be handled by kfree_rcu()?
+ */
+#define __is_kfree_rcu_offset(offset) ((offset) < 4096)
+
+/*
+ * Helper macro for kfree_rcu() to prevent argument-expansion eyestrain.
+ */
+#define __kfree_rcu(head, offset) \
+	do { \
+		BUILD_BUG_ON(!__is_kfree_rcu_offset(offset)); \
+		call_rcu(head, (void (*)(struct rcu_head *))(unsigned long)(offset)); \
+	} while (0)
+
 /**
  * kfree_rcu() - kfree an object after a grace period.
  * @ptr:	pointer to kfree
@@ -944,6 +959,9 @@ void __kfree_rcu(struct rcu_head *head, unsigned long offset)
  *
  * Note that the allowable offset might decrease in the future, for example,
  * to allow something like kmem_cache_free_rcu().
+ *
+ * The BUILD_BUG_ON check must not involve any function calls, hence the
+ * checks are done in macros here.
  */
 #define kfree_rcu(ptr, rcu_head)					\
 	__kfree_rcu(&((ptr)->rcu_head), offsetof(typeof(*(ptr)), rcu_head))
-- 
1.7.8


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH RFC tip/core/rcu 6/6] rcu: Reduce cache-miss initialization latencies for large systems
  2012-04-23 16:42 ` [PATCH RFC tip/core/rcu 1/6] rcu: Stabilize use of num_online_cpus() for GP short circuit Paul E. McKenney
                     ` (3 preceding siblings ...)
  2012-04-23 16:42   ` [PATCH RFC tip/core/rcu 5/6] rcu: Make __kfree_rcu() less dependent on compiler choices Paul E. McKenney
@ 2012-04-23 16:42   ` Paul E. McKenney
  2012-04-26 12:51     ` Peter Zijlstra
  2012-04-27  4:36     ` Mike Galbraith
  2012-04-24 15:35   ` [PATCH RFC tip/core/rcu 1/6] rcu: Stabilize use of num_online_cpus() for GP short circuit Srivatsa S. Bhat
  5 siblings, 2 replies; 32+ messages in thread
From: Paul E. McKenney @ 2012-04-23 16:42 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, niv, tglx,
	peterz, rostedt, Valdis.Kletnieks, dhowells, eric.dumazet,
	darren, fweisbec, patches, Paul E. McKenney

From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

Commit #0209f649 (rcu: limit rcu_node leaf-level fanout) set an upper
limit of 16 on the leaf-level fanout for the rcu_node tree.  This was
needed to reduce lock contention that was induced by the synchronization
of scheduling-clock interrupts, which was in turn needed to improve
energy efficiency for moderate-sized lightly loaded servers.

However, reducing the leaf-level fanout means that there are more
leaf-level rcu_node structures in the tree, which in turn means that
RCU's grace-period initialization incurs more cache misses.  This is
not a problem on moderate-sized servers with only a few tens of CPUs,
but becomes a major source of real-time latency spikes on systems with
many hundreds of CPUs.  In addition, the workloads running on these large
systems tend to be CPU-bound, which eliminates the energy-efficiency
advantages of synchronizing scheduling-clock interrupts.  Therefore,
these systems need maximal values for the rcu_node leaf-level fanout.

This commit addresses this problem by introducing a new kernel parameter
named RCU_FANOUT_LEAF that directly controls the leaf-level fanout.
This parameter defaults to 16 to handle the common case of
moderate-sized, lightly loaded servers, but may be set higher on
larger systems.
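
As a rough worked example (assuming NR_CPUS=4096 and the 64-bit default
CONFIG_RCU_FANOUT=64; these numbers are illustrative, not taken from this
patch):

	RCU_FANOUT_LEAF=16:  ceil(4096/16)   = 256 leaf rcu_node structures
	                     ceil(4096/1024) =   4 interior nodes, plus 1 root
	                     => three-level tree of 261 rcu_node structures

	RCU_FANOUT_LEAF=64:  ceil(4096/64)   =  64 leaf rcu_node structures,
	                     plus 1 root
	                     => two-level tree of only 65 rcu_node structures
	                        to walk during grace-period initialization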

Reported-by: Mike Galbraith <efault@gmx.de>
Reported-by: Dimitri Sivanich <sivanich@sgi.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 init/Kconfig     |   27 +++++++++++++++++++++++++++
 kernel/rcutree.c |    2 +-
 kernel/rcutree.h |   10 +++-------
 3 files changed, 31 insertions(+), 8 deletions(-)

diff --git a/init/Kconfig b/init/Kconfig
index 85c6870..6d18ef8 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -458,6 +458,33 @@ config RCU_FANOUT
 	  Select a specific number if testing RCU itself.
 	  Take the default if unsure.
 
+config RCU_FANOUT_LEAF
+	int "Tree-based hierarchical RCU leaf-level fanout value"
+	range 2 RCU_FANOUT if 64BIT
+	range 2 RCU_FANOUT if !64BIT
+	depends on TREE_RCU || TREE_PREEMPT_RCU
+	default 16
+	help
+	  This option controls the leaf-level fanout of hierarchical
+	  implementations of RCU, and allows trading off cache misses
+	  against lock contention.  Systems that synchronize their
+	  scheduling-clock interrupts for energy-efficiency reasons will
+	  want the default because the smaller leaf-level fanout keeps
+	  lock contention levels acceptably low.  Very large systems
+	  (hundreds or thousands of CPUs) will instead want to set this
+	  value to the maximum value possible in order to reduce the
+	  number of cache misses incurred during RCU's grace-period
+	  initialization.  These systems tend to run CPU-bound, so they
+	  do not benefit from synchronized interrupts and therefore tend
+	  to skew them, which reduces lock contention enough that large
+	  leaf-level fanouts work well.
+
+	  Select a specific number if testing RCU itself.
+
+	  Select the maximum permissible value for large systems.
+
+	  Take the default if unsure.
+
 config RCU_FANOUT_EXACT
 	bool "Disable tree-based hierarchical RCU auto-balancing"
 	depends on TREE_RCU || TREE_PREEMPT_RCU
diff --git a/kernel/rcutree.c b/kernel/rcutree.c
index 1050d6d..780acf8 100644
--- a/kernel/rcutree.c
+++ b/kernel/rcutree.c
@@ -2418,7 +2418,7 @@ static void __init rcu_init_levelspread(struct rcu_state *rsp)
 
 	for (i = NUM_RCU_LVLS - 1; i > 0; i--)
 		rsp->levelspread[i] = CONFIG_RCU_FANOUT;
-	rsp->levelspread[0] = RCU_FANOUT_LEAF;
+	rsp->levelspread[0] = CONFIG_RCU_FANOUT_LEAF;
 }
 #else /* #ifdef CONFIG_RCU_FANOUT_EXACT */
 static void __init rcu_init_levelspread(struct rcu_state *rsp)
diff --git a/kernel/rcutree.h b/kernel/rcutree.h
index cdd1be0..a905c20 100644
--- a/kernel/rcutree.h
+++ b/kernel/rcutree.h
@@ -29,18 +29,14 @@
 #include <linux/seqlock.h>
 
 /*
- * Define shape of hierarchy based on NR_CPUS and CONFIG_RCU_FANOUT.
+ * Define shape of hierarchy based on NR_CPUS, CONFIG_RCU_FANOUT, and
+ * CONFIG_RCU_FANOUT_LEAF.
  * In theory, it should be possible to add more levels straightforwardly.
  * In practice, this did work well going from three levels to four.
  * Of course, your mileage may vary.
  */
 #define MAX_RCU_LVLS 4
-#if CONFIG_RCU_FANOUT > 16
-#define RCU_FANOUT_LEAF       16
-#else /* #if CONFIG_RCU_FANOUT > 16 */
-#define RCU_FANOUT_LEAF       (CONFIG_RCU_FANOUT)
-#endif /* #else #if CONFIG_RCU_FANOUT > 16 */
-#define RCU_FANOUT_1	      (RCU_FANOUT_LEAF)
+#define RCU_FANOUT_1	      (CONFIG_RCU_FANOUT_LEAF)
 #define RCU_FANOUT_2	      (RCU_FANOUT_1 * CONFIG_RCU_FANOUT)
 #define RCU_FANOUT_3	      (RCU_FANOUT_2 * CONFIG_RCU_FANOUT)
 #define RCU_FANOUT_4	      (RCU_FANOUT_3 * CONFIG_RCU_FANOUT)
-- 
1.7.8


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* Re: [PATCH RFC tip/core/rcu 1/6] rcu: Stabilize use of num_online_cpus() for GP short circuit
  2012-04-23 16:42 ` [PATCH RFC tip/core/rcu 1/6] rcu: Stabilize use of num_online_cpus() for GP short circuit Paul E. McKenney
                     ` (4 preceding siblings ...)
  2012-04-23 16:42   ` [PATCH RFC tip/core/rcu 6/6] rcu: Reduce cache-miss initialization latencies for large systems Paul E. McKenney
@ 2012-04-24 15:35   ` Srivatsa S. Bhat
  2012-04-24 16:50     ` Paul E. McKenney
  5 siblings, 1 reply; 32+ messages in thread
From: Srivatsa S. Bhat @ 2012-04-24 15:35 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, niv, tglx, peterz, rostedt, Valdis.Kletnieks, dhowells,
	eric.dumazet, darren, fweisbec, patches, Paul E. McKenney, venki,
	KOSAKI Motohiro, rusty

On 04/23/2012 10:12 PM, Paul E. McKenney wrote:

> From: "Paul E. McKenney" <paul.mckenney@linaro.org>
> 
> The rcu_blocking_is_gp() function tests to see if there is only one
> online CPU, and if so, synchronize_sched() and friends become no-ops.
> However, for larger systems, num_online_cpus() scans a large vector,


Venki had posted a patch to optimize that by using a variable, so that we
don't calculate the value each and every time.
http://thread.gmane.org/gmane.linux.kernel/1240569/focus=1246659

However, unfortunately there was some confusion around that patch and
even though it made it to akpm's tree and stayed there briefly, it didn't
go upstream. Venki had attempted to resolve the confusion here:
http://thread.gmane.org/gmane.linux.kernel/1240569/focus=1260702

> and might be preempted while doing so.  While preempted, any number
> of CPUs might come online and go offline, potentially resulting in
> num_online_cpus() returning 1 when there never had only been one
> CPU online.  This would result in a too-short RCU grace period, which
> could in turn result in total failure.
> 


So, if I understand correctly, the "too-short RCU grace period being
troublesome" case occurs when we move from UP to SMP (ie., RCU thought
that the system is in UP judging by the value returned by num_online_cpus(),
but underneath, some CPUs got onlined in the meantime and hence the system
now became SMP).

> This commit therefore protects the num_online_cpus() with preempt_disable().
> 
> Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> ---
>  include/linux/rcutree.h |    7 ++++++-
>  1 files changed, 6 insertions(+), 1 deletions(-)
> 
> diff --git a/include/linux/rcutree.h b/include/linux/rcutree.h
> index e8ee5dd..997179f 100644
> --- a/include/linux/rcutree.h
> +++ b/include/linux/rcutree.h
> @@ -101,8 +101,13 @@ extern void rcu_sched_force_quiescent_state(void);
>  /* A context switch is a grace period for RCU-sched and RCU-bh. */
>  static inline int rcu_blocking_is_gp(void)
>  {
> +	int ret;
> +


Wouldn't it be more appropriate to return a bool? (and of course change the
function prototype as well..)

>  	might_sleep();  /* Check for RCU read-side critical section. */
> -	return num_online_cpus() == 1;
> +	preempt_disable();  /* Prevent CPU hotplug from confusing us. */


Preempt disable only prevents CPU offline from occurring concurrently.  So
even if we use preempt_disable/enable(), new CPUs can always come online!
And going by the above concern about RCU grace periods being too-short, I
guess we should also protect against CPUs from coming online concurrently
right?

[By the way, we don't really need to protect against CPU offlining, because
an SMP -> UP switch is harmless; we'll just miss a fast-path in the worst
case, but the code correctness is still guaranteed.]

> +	ret = num_online_cpus() == 1;
> +	preempt_enable();
> +	return ret;
>  }
>


Maybe I am missing something, but how is this code safe? Isn't it still racy?
Even if we use:

{
	might_sleep();
	get_online_cpus(); /* Prevent both CPU offline and CPU online */
	ret = num_online_cpus() == 1;
	put_online_cpus();
			<===============
	return ret;
}

What prevents a CPU Hotplug from happening right before we return from this
function, thereby rendering our num_online_cpus() calculation out-dated and
hence useless?

Regards,
Srivatsa S. Bhat


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH RFC tip/core/rcu 1/6] rcu: Stabilize use of num_online_cpus() for GP short circuit
  2012-04-24 15:35   ` [PATCH RFC tip/core/rcu 1/6] rcu: Stabilize use of num_online_cpus() for GP short circuit Srivatsa S. Bhat
@ 2012-04-24 16:50     ` Paul E. McKenney
  2012-04-24 17:46       ` Srivatsa S. Bhat
  2012-05-07  3:47       ` Rusty Russell
  0 siblings, 2 replies; 32+ messages in thread
From: Paul E. McKenney @ 2012-04-24 16:50 UTC (permalink / raw)
  To: Srivatsa S. Bhat
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, niv, tglx, peterz, rostedt, Valdis.Kletnieks, dhowells,
	eric.dumazet, darren, fweisbec, patches, Paul E. McKenney, venki,
	KOSAKI Motohiro, rusty

On Tue, Apr 24, 2012 at 09:05:20PM +0530, Srivatsa S. Bhat wrote:

Thank you for looking this over!

> On 04/23/2012 10:12 PM, Paul E. McKenney wrote:
> 
> > From: "Paul E. McKenney" <paul.mckenney@linaro.org>
> > 
> > The rcu_blocking_is_gp() function tests to see if there is only one
> > online CPU, and if so, synchronize_sched() and friends become no-ops.
> > However, for larger systems, num_online_cpus() scans a large vector,
> 
> 
> Venki had posted a patch to optimize that by using a variable, so that we
> don't calculate the value each and every time.
> http://thread.gmane.org/gmane.linux.kernel/1240569/focus=1246659
> 
> However, unfortunately there was some confusion around that patch and
> even though it made it to akpm's tree and stayed there briefly, it didn't
> go upstream. Venki had attempted to resolve the confusion here:
> http://thread.gmane.org/gmane.linux.kernel/1240569/focus=1260702

Having a single variable tracking the online state would be good,
but as you say it isn't there yet.

> > and might be preempted while doing so.  While preempted, any number
> > of CPUs might come online and go offline, potentially resulting in
> > num_online_cpus() returning 1 even though there was never only one
> > CPU online.  This would result in a too-short RCU grace period, which
> > could in turn result in total failure.
> > 
> 
> 
> So, if I understand correctly, the "too-short RCU grace period being
> troublesome" case occurs when we move from UP to SMP (ie., RCU thought
> that the system is in UP judging by the value returned by num_online_cpus(),
> but underneath, some CPUs got onlined in the meantime and hence the system
> now became SMP).

No, the UP-to-SMP case is OK.  If we were UP at any point during the
execution of synchronize_rcu(), a non-preemptible RCU grace period can
be a no-op.

The problematic case is instead the one where we were SMP throughout,
but rcu_blocking_is_gp() mistakenly decides that we were UP.  For example,
consider the following sequence of events, based on the commit log's
sentence "While preempted, any number of CPUs might come online and go
offline, potentially resulting in num_online_cpus() returning 1 even
though there was never only one CPU online":

o	CPUs 100 and 150 are initially online, with a long-running RCU
	read-side critical section on CPU 100 and rcu_blocking_is_gp()
	initially running on CPU 150.

o	The rcu_blocking_is_gp() function checks the bits for CPUs
	0-63, and counts zero online CPUs.

o	CPU 1 comes online.

o	The rcu_blocking_is_gp() function checks the bits for CPUs
	64-127, and counts one online CPU, for a total thus far
	of one CPU online.

o	CPU 150 goes offline.  Ah, but it cannot do this, because
	this is non-preemptible RCU, which means that the RCU
	read-side critical section has disabled preemption on
	CPU 100, which prevents CPU 150 from going offline, which
	prevents this scenario from occurring.

	So, yes, rcu_blocking_is_gp() can be fooled into incorrectly
	stating that the system has only one CPU (or even that it has
	only zero CPUs), but not while there is actually a non-preemptible
	RCU read-side critical section running.  Yow!

	I clearly had not thought this change through far enough,
	thank you for calling it to my attention!

So I could replace this patch with a patch that adds a comment
explaining why this works.  Though this patch might be simpler and
easier to understand.  ;-)  But not so good for real-time response
on large systems, I suppose.

And rcu_blocking_is_gp() is called only from synchronize_sched() and
synchronize_rcu_bh(), so it is safe for all current callers.  But it is
defined publicly, so I should move it to somewhere like kernel/rcutree.c
to keep new users from being fatally confused and disappointed.

I can also change the comparison from "num_online_cpus() == 1" to
"num_online_cpus() <= 1".  That should at least serve as a caution to
people who might attempt to use it where it shouldn't be used.  ;-)
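
For illustration only, one possible shape of such a follow-up (a sketch,
not an actual patch from this thread):

/* A context switch is a grace period for RCU-sched and RCU-bh. */
static inline int rcu_blocking_is_gp(void)
{
	/*
	 * A transiently low count from num_online_cpus() is harmless:
	 * non-preemptible RCU readers disable preemption, which prevents
	 * CPUs from going offline, so the count can be fooled only when
	 * no such reader is running anyway.
	 */
	might_sleep();	/* Check for RCU read-side critical section. */
	return num_online_cpus() <= 1;	/* "<=" hints that this is subtle. */
}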

> > This commit therefore protects the num_online_cpus() with preempt_disable().
> > 
> > Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
> > Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> > ---
> >  include/linux/rcutree.h |    7 ++++++-
> >  1 files changed, 6 insertions(+), 1 deletions(-)
> > 
> > diff --git a/include/linux/rcutree.h b/include/linux/rcutree.h
> > index e8ee5dd..997179f 100644
> > --- a/include/linux/rcutree.h
> > +++ b/include/linux/rcutree.h
> > @@ -101,8 +101,13 @@ extern void rcu_sched_force_quiescent_state(void);
> >  /* A context switch is a grace period for RCU-sched and RCU-bh. */
> >  static inline int rcu_blocking_is_gp(void)
> >  {
> > +	int ret;
> > +
> 
> 
> Wouldn't it be more appropriate to return a bool? (and of course change the
> function prototype as well..)

Sounds like a separate patch, but yes it does.

> >  	might_sleep();  /* Check for RCU read-side critical section. */
> > -	return num_online_cpus() == 1;
> > +	preempt_disable();  /* Prevent CPU hotplug from confusing us. */
> 
> 
> Preempt disable only prevents CPU offline from occurring concurrently.  So
> even if we use preempt_disable/enable(), new CPUs can always come online!
> And going by the above concern about RCU grace periods being too-short, I
> guess we should also protect against CPUs from coming online concurrently
> right?
> 
> [By the way, we don't really need to protect against CPU offlining, because
> an SMP -> UP switch is harmless; we'll just miss a fast-path in the worst
> case, but the code correctness is still guaranteed.]

Yes, CPUs coming online are harmless, so it is OK that disabling preemption
does not prevent this.

> > +	ret = num_online_cpus() == 1;
> > +	preempt_enable();
> > +	return ret;
> >  }
> 
> Maybe I am missing something, but how is this code safe? Isn't it still racy?
> Even if we use:
> 
> {
> 	might_sleep();
> 	get_online_cpus(); /* Prevent both CPU offline and CPU online */
> 	ret = num_online_cpus() == 1;
> 	put_online_cpus();
> 			<===============
> 	return ret;
> }
> 
> What prevents a CPU Hotplug from happening right before we return from this
> function, thereby rendering our num_online_cpus() calculation out-dated and
> hence useless?

Nothing prevents CPU hotplug from happening at the indicated point,
but the function is still safe.

The reason for this safety is that if we were UP at any time during the
execution of synchronize_rcu(), we had to have been running on that single
CPU, which means that a non-preemptible RCU read-side critical section
could not possibly have also been running.  Because non-preemptible
RCU read-side critical sections run with preemption disabled, this in
turn means that all prior RCU read-side critical sections have to have
completed, which means that synchronize_rcu() doesn't need to do anything.
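
A sketch of the reader-side property being relied on here (gp, struct
foo, and do_something_with() are placeholders):

static void reader(void)
{
	struct foo *p;

	rcu_read_lock_sched();		/* Disables preemption, so stop_machine-
					 * based CPU offlining cannot complete */
	p = rcu_dereference_sched(gp);	/* anywhere while this section runs,   */
	do_something_with(p);		/* meaning the system cannot shrink to */
	rcu_read_unlock_sched();	/* a single online CPU behind our back. */
}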

							Thanx, Paul


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH RFC tip/core/rcu 1/6] rcu: Stabilize use of num_online_cpus() for GP short circuit
  2012-04-24 16:50     ` Paul E. McKenney
@ 2012-04-24 17:46       ` Srivatsa S. Bhat
  2012-05-07  3:47       ` Rusty Russell
  1 sibling, 0 replies; 32+ messages in thread
From: Srivatsa S. Bhat @ 2012-04-24 17:46 UTC (permalink / raw)
  To: paulmck
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, niv, tglx, peterz, rostedt, Valdis.Kletnieks, dhowells,
	eric.dumazet, darren, fweisbec, patches, Paul E. McKenney, venki,
	KOSAKI Motohiro, rusty

On 04/24/2012 10:20 PM, Paul E. McKenney wrote:

> On Tue, Apr 24, 2012 at 09:05:20PM +0530, Srivatsa S. Bhat wrote:
> 
>>
>>> From: "Paul E. McKenney" <paul.mckenney@linaro.org>
>>>
>>> The rcu_blocking_is_gp() function tests to see if there is only one
>>> online CPU, and if so, synchronize_sched() and friends become no-ops.
>>> However, for larger systems, num_online_cpus() scans a large vector,
>>
>>
>> Venki had posted a patch to optimize that by using a variable, so that we
>> don't calculate the value each and every time.
>> http://thread.gmane.org/gmane.linux.kernel/1240569/focus=1246659
>>
>> However, unfortunately there was some confusion around that patch and
>> even though it made it to akpm's tree and stayed there briefly, it didn't
>> go upstream. Venki had attempted to resolve the confusion here:
>> http://thread.gmane.org/gmane.linux.kernel/1240569/focus=1260702
> 
> Having a single variable tracking the online state would be good,
> but as you say it isn't there yet.
> 
>>> and might be preempted while doing so.  While preempted, any number
>>> of CPUs might come online and go offline, potentially resulting in
>>> num_online_cpus() returning 1 even though there was never only one
>>> CPU online.  This would result in a too-short RCU grace period, which
>>> could in turn result in total failure.
>>
>>
[...]

> 
> The problematic case is instead the one where we were SMP throughout,
> but rcu_blocking_is_gp() mistakenly decides that we were UP.  For example,
> consider the following sequence of events, based on the commit log's
> sentence "While preempted, any number of CPUs might come online and go
> offline, potentially resulting in num_online_cpus() returning 1 even
> though there was never only one CPU online":
> 


Oh, I didn't think in the direction illustrated below when reading that
sentence.. :-(

> o	CPUs 100 and 150 are initially online, with a long-running RCU
> 	read-side critical section on CPU 100 and rcu_blocking_is_gp()
> 	initially running on CPU 150.
> 
> o	The rcu_blocking_is_gp() function checks the bits for CPUs
> 	0-63, and counts zero online CPUs.
> 
> o	CPU 1 comes online.
> 
> o	The rcu_blocking_is_gp() function checks the bits for CPUs
> 	64-127, and counts one online CPU, for a total thus far
> 	of one CPU online.
> 
> o	CPU 150 goes offline.  Ah, but it cannot do this, because
> 	this is non-preemptible RCU, which means that the RCU
> 	read-side critical section has disabled preemption on
> 	CPU 100, which prevents CPU 150 from going offline, which
> 	prevents this scenario from occurring.
> 
> 	So, yes, rcu_blocking_is_gp() can be fooled into incorrectly
> 	stating that the system has only one CPU (or even that it has
> 	only zero CPUs), but not while there is actually a non-preemptible
> 	RCU read-side critical section running.  Yow!
>


Awesome :-)

 
> 	I clearly had not thought this change through far enough,
> 	thank you for calling it to my attention!
> 
> So I could replace this patch with a patch that adds a comment
> explaining why this works.


Yes, that would be great..

>  Though this patch might be simpler and
> easier to understand.  ;-)


Oh well, but I completely missed the intention behind the patch!
So I guess a comment would be better ;-)

>  But not so good for real-time response
> on large systems, I suppose.
> 
> And rcu_blocking_is_gp() is called only from synchronize_sched() and
> synchronize_rcu_bh(), so it is safe for all current callers.  But it is
> defined publicly, so I should move it to somewhere like kernel/rcutree.c
> to keep new users from being fatally confused and disappointed.
> 
> I can also change the comparison from "num_online_cpus() == 1" to
> "num_online_cpus() <= 1".  That should at least serve as a caution to
> people who might attempt to use it where it shouldn't be used.  ;-)
> 


Hehe, yeah!

Regards,
Srivatsa S. Bhat


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH RFC tip/core/rcu 4/6] rcu: Clarify help text for RCU_BOOST_PRIO
  2012-04-23 16:42   ` [PATCH RFC tip/core/rcu 4/6] rcu: Clarify help text for RCU_BOOST_PRIO Paul E. McKenney
@ 2012-04-26 12:46     ` Peter Zijlstra
  2012-04-26 17:28       ` Paul E. McKenney
  0 siblings, 1 reply; 32+ messages in thread
From: Peter Zijlstra @ 2012-04-26 12:46 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, niv, tglx, rostedt, Valdis.Kletnieks, dhowells,
	eric.dumazet, darren, fweisbec, patches, Paul E. McKenney

On Mon, 2012-04-23 at 09:42 -0700, Paul E. McKenney wrote:
> +         This option specifies the real-time priority to which long-term
> +         preempted RCU readers are to be boosted.  If you are working
> +         with a real-time application that has one or more CPU-bound
> +         threads running at a real-time priority level,

Then your application is broken ;-) the kernel is known to mis-behave
under these circumstances since it doesn't get to run house-keeping
tasks. RCU is just one of these and elevating it doesn't make it work.

>  you should set
> +         RCU_BOOST_PRIO to a priority higher than the highest-priority
> +         real-time CPU-bound thread.  The default RCU_BOOST_PRIO value
> +         of 1 is appropriate in the common case, which is real-time
> +         applications that do not have any CPU-bound threads.

Alternatively, 1 is the worst possible choice forcing people to consider
the issue.

> +         Some real-time applications might not have a single real-time
> +         thread that saturates a given CPU, but instead might have
> +         multiple real-time threads that, taken together, fully utilize
> +         that CPU.  In this case, you should set RCU_BOOST_PRIO to
> +         a priority higher than the lowest-priority thread that is
> +         conspiring to prevent the CPU from running any non-real-time
> +         tasks.  For example, if one thread at priority 10 and another
> +         thread at priority 5 are between themselves fully consuming
> +         the CPU time on a given CPU, then RCU_BOOST_PRIO should be
> +         set to priority 6 or higher. 

I'd call this misleading, who's to say that preempting the 5 would yield
enough time to complete the RCU work?

This all gets us back to the fun question of RCU delayed bandwidth
budgeting.. ideally every 'task' that does call_rcu() should donate some
of its budget towards the thread running the callback.

Anyway, I'd argue both the old and new description are bonkers.

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH RFC tip/core/rcu 5/6] rcu: Make __kfree_rcu() less dependent on compiler choices
  2012-04-23 16:42   ` [PATCH RFC tip/core/rcu 5/6] rcu: Make __kfree_rcu() less dependent on compiler choices Paul E. McKenney
@ 2012-04-26 12:48     ` Peter Zijlstra
  2012-04-26 13:29       ` Jan Engelhardt
  0 siblings, 1 reply; 32+ messages in thread
From: Peter Zijlstra @ 2012-04-26 12:48 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, niv, tglx, rostedt, Valdis.Kletnieks, dhowells,
	eric.dumazet, darren, fweisbec, patches, Jan Engelhardt

On Mon, 2012-04-23 at 09:42 -0700, Paul E. McKenney wrote:
> Currently, __kfree_rcu() is implemented as an inline function, and
> contains a BUILD_BUG_ON() that malfunctions if __kfree_rcu() is compiled
> as an out-of-line function.  Unfortunately, there are compiler settings
> (e.g., -O0) that can result in __kfree_rcu() being compiled out of line,
> resulting in annoying build breakage.  This commit therefore converts
> both __kfree_rcu() and __is_kfree_rcu_offset() from inline functions to
> macros to prevent such misbehavior on the part of the compiler.

The kernel very explicitly doesn't support being compiled with -O0, so
this is a non-issue, I think you can make it work if you add
-finline-functions.

I'd drop this, either make the entire kernel compile or don't bother.

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH RFC tip/core/rcu 6/6] rcu: Reduce cache-miss initialization latencies for large systems
  2012-04-23 16:42   ` [PATCH RFC tip/core/rcu 6/6] rcu: Reduce cache-miss initialization latencies for large systems Paul E. McKenney
@ 2012-04-26 12:51     ` Peter Zijlstra
  2012-04-26 14:12       ` Paul E. McKenney
  2012-04-27  4:36     ` Mike Galbraith
  1 sibling, 1 reply; 32+ messages in thread
From: Peter Zijlstra @ 2012-04-26 12:51 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, niv, tglx, rostedt, Valdis.Kletnieks, dhowells,
	eric.dumazet, darren, fweisbec, patches

On Mon, 2012-04-23 at 09:42 -0700, Paul E. McKenney wrote:
> From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> 
> Commit #0209f649 (rcu: limit rcu_node leaf-level fanout) set an upper
> limit of 16 on the leaf-level fanout for the rcu_node tree.  This was
> needed to reduce lock contention that was induced by the synchronization
> of scheduling-clock interrupts, which was in turn needed to improve
> energy efficiency for moderate-sized lightly loaded servers.
> 
> However, reducing the leaf-level fanout means that there are more
> leaf-level rcu_node structures in the tree, which in turn means that
> RCU's grace-period initialization incurs more cache misses.  This is
> not a problem on moderate-sized servers with only a few tens of CPUs,
> but becomes a major source of real-time latency spikes on systems with
> many hundreds of CPUs.  In addition, the workloads running on these large
> systems tend to be CPU-bound, which eliminates the energy-efficiency
> advantages of synchronizing scheduling-clock interrupts.  Therefore,
> these systems need maximal values for the rcu_node leaf-level fanout.
> 
> This commit addresses this problem by introducing a new kernel parameter
> named RCU_FANOUT_LEAF that directly controls the leaf-level fanout.
> This parameter defaults to 16 to handle the common case of
> moderate-sized, lightly loaded servers, but may be set higher on
> larger systems.

Wouldn't it be much better to match the rcu fanout tree to the physical
topology of the machine?

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH RFC tip/core/rcu 5/6] rcu: Make __kfree_rcu() less dependent on compiler choices
  2012-04-26 12:48     ` Peter Zijlstra
@ 2012-04-26 13:29       ` Jan Engelhardt
  2012-04-26 13:50         ` Peter Zijlstra
  0 siblings, 1 reply; 32+ messages in thread
From: Jan Engelhardt @ 2012-04-26 13:29 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Paul E. McKenney, linux-kernel, mingo, laijs, dipankar, akpm,
	mathieu.desnoyers, josh, niv, tglx, rostedt, Valdis.Kletnieks,
	dhowells, eric.dumazet, darren, fweisbec, patches


On Thursday 2012-04-26 14:48, Peter Zijlstra wrote:
>On Mon, 2012-04-23 at 09:42 -0700, Paul E. McKenney wrote:
>> Currently, __kfree_rcu() is implemented as an inline function, and
>> contains a BUILD_BUG_ON() that malfunctions if __kfree_rcu() is compiled
>> as an out-of-line function.  Unfortunately, there are compiler settings
>> (e.g., -O0) that can result in __kfree_rcu() being compiled out of line,
>> resulting in annoying build breakage.  This commit therefore converts
>> both __kfree_rcu() and __is_kfree_rcu_offset() from inline functions to
>> macros to prevent such misbehavior on the part of the compiler.
>
>The kernel very explicitly doesn't support being compiled with -O0, so
>this is a non-issue, I think you can make it work if you add
>-finline-functions.
>
>I'd drop this, either make the entire kernel compile or don't bother.

It was not originally meant to make the entire kernel compile with -O0,
but only select modules/.c files.
That, or __always_inline just does not always inline.


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH RFC tip/core/rcu 5/6] rcu: Make __kfree_rcu() less dependent on compiler choices
  2012-04-26 13:29       ` Jan Engelhardt
@ 2012-04-26 13:50         ` Peter Zijlstra
  0 siblings, 0 replies; 32+ messages in thread
From: Peter Zijlstra @ 2012-04-26 13:50 UTC (permalink / raw)
  To: Jan Engelhardt
  Cc: Paul E. McKenney, linux-kernel, mingo, laijs, dipankar, akpm,
	mathieu.desnoyers, josh, niv, tglx, rostedt, Valdis.Kletnieks,
	dhowells, eric.dumazet, darren, fweisbec, patches

On Thu, 2012-04-26 at 15:29 +0200, Jan Engelhardt wrote:
> On Thursday 2012-04-26 14:48, Peter Zijlstra wrote:
> >On Mon, 2012-04-23 at 09:42 -0700, Paul E. McKenney wrote:
> >> Currently, __kfree_rcu() is implemented as an inline function, and
> >> contains a BUILD_BUG_ON() that malfunctions if __kfree_rcu() is compiled
> >> as an out-of-line function.  Unfortunately, there are compiler settings
> >> (e.g., -O0) that can result in __kfree_rcu() being compiled out of line,
> >> resulting in annoying build breakage.  This commit therefore converts
> >> both __kfree_rcu() and __is_kfree_rcu_offset() from inline functions to
> >> macros to prevent such misbehavior on the part of the compiler.
> >
> >The kernel very explicitly doesn't support being compiled with -O0, so
> >this is a non-issue, I think you can make it work if you add
> >-finline-functions.
> >
> >I'd drop this, either make the entire kernel compile or don't bother.
> 
> It was not originally meant to make the entire kernel compile with -O0,
> but only select modules/.c files.

Same problem, who cares about select modules/.c files? Why should they
have different rules?

> That, or __always_inline just does not always inline.

It does indeed not without -finline-functions.

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH RFC tip/core/rcu 6/6] rcu: Reduce cache-miss initialization latencies for large systems
  2012-04-26 12:51     ` Peter Zijlstra
@ 2012-04-26 14:12       ` Paul E. McKenney
  2012-04-26 15:28         ` Peter Zijlstra
  0 siblings, 1 reply; 32+ messages in thread
From: Paul E. McKenney @ 2012-04-26 14:12 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, niv, tglx, rostedt, Valdis.Kletnieks, dhowells,
	eric.dumazet, darren, fweisbec, patches

On Thu, Apr 26, 2012 at 02:51:47PM +0200, Peter Zijlstra wrote:
> On Mon, 2012-04-23 at 09:42 -0700, Paul E. McKenney wrote:
> > From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> > 
> > Commit #0209f649 (rcu: limit rcu_node leaf-level fanout) set an upper
> > limit of 16 on the leaf-level fanout for the rcu_node tree.  This was
> > needed to reduce lock contention that was induced by the synchronization
> > of scheduling-clock interrupts, which was in turn needed to improve
> > energy efficiency for moderate-sized lightly loaded servers.
> > 
> > However, reducing the leaf-level fanout means that there are more
> > leaf-level rcu_node structures in the tree, which in turn means that
> > RCU's grace-period initialization incurs more cache misses.  This is
> > not a problem on moderate-sized servers with only a few tens of CPUs,
> > but becomes a major source of real-time latency spikes on systems with
> > many hundreds of CPUs.  In addition, the workloads running on these large
> > systems tend to be CPU-bound, which eliminates the energy-efficiency
> > advantages of synchronizing scheduling-clock interrupts.  Therefore,
> > these systems need maximal values for the rcu_node leaf-level fanout.
> > 
> > This commit addresses this problem by introducing a new kernel parameter
> > named RCU_FANOUT_LEAF that directly controls the leaf-level fanout.
> > This parameter defaults to 16 to handle the common case of
> > moderate-sized, lightly loaded servers, but may be set higher on
> > larger systems.
> 
> Wouldn't it be much better to match the rcu fanout tree to the physical
> topology of the machine?

From what I am hearing, doing so requires me to morph the rcu_node tree
at run time.  I might eventually become courageous/inspired/senile
enough to try this, but not yet.  ;-)

Actually, some of this topology shifting seems to me like a firmware
bug.  Why not arrange the Linux-visible numbering in a way to promote
locality for code sequencing through the CPUs?

							Thanx, Paul


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH RFC tip/core/rcu 6/6] rcu: Reduce cache-miss initialization latencies for large systems
  2012-04-26 14:12       ` Paul E. McKenney
@ 2012-04-26 15:28         ` Peter Zijlstra
  2012-04-26 16:15           ` Paul E. McKenney
  0 siblings, 1 reply; 32+ messages in thread
From: Peter Zijlstra @ 2012-04-26 15:28 UTC (permalink / raw)
  To: paulmck
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, niv, tglx, rostedt, Valdis.Kletnieks, dhowells,
	eric.dumazet, darren, fweisbec, patches

On Thu, 2012-04-26 at 07:12 -0700, Paul E. McKenney wrote:
> On Thu, Apr 26, 2012 at 02:51:47PM +0200, Peter Zijlstra wrote:

> > Wouldn't it be much better to match the rcu fanout tree to the physical
> > topology of the machine?
> 
> From what I am hearing, doing so requires me to morph the rcu_node tree
> at run time.  I might eventually become courageous/inspired/senile
> enough to try this, but not yet.  ;-)

Yes, boot time with possibly some hotplug hooks.

> Actually, some of this topology shifting seems to me like a firmware
> bug.  Why not arrange the Linux-visible numbering in a way to promote
> locality for code sequencing through the CPUs?

I'm not sure.. but it seems well established on x86 to first enumerate
the cores (thread 0) and then the sibling threads (thread 1) -- one
'advantage' is that if you boot with max_cpus=$half you get all cores
instead of half the cores.

OTOH it does make linear iteration of the cpus 'funny' :-)

Also, a fanout of 16 is nice when your machine doesn't have HT and has a
2^n core count, but some popular machines these days have 6/10 cores per
socket, resulting in your fanout splitting caches.



^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH RFC tip/core/rcu 6/6] rcu: Reduce cache-miss initialization latencies for large systems
  2012-04-26 15:28         ` Peter Zijlstra
@ 2012-04-26 16:15           ` Paul E. McKenney
  2012-04-26 19:41             ` Peter Zijlstra
  0 siblings, 1 reply; 32+ messages in thread
From: Paul E. McKenney @ 2012-04-26 16:15 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, niv, tglx, rostedt, Valdis.Kletnieks, dhowells,
	eric.dumazet, darren, fweisbec, patches

On Thu, Apr 26, 2012 at 05:28:57PM +0200, Peter Zijlstra wrote:
> On Thu, 2012-04-26 at 07:12 -0700, Paul E. McKenney wrote:
> > On Thu, Apr 26, 2012 at 02:51:47PM +0200, Peter Zijlstra wrote:
> 
> > > Wouldn't it be much better to match the rcu fanout tree to the physical
> > > topology of the machine?
> > 
> > From what I am hearing, doing so requires me to morph the rcu_node tree
> > at run time.  I might eventually become courageous/inspired/senile
> > enough to try this, but not yet.  ;-)
> 
> Yes, boot time with possibly some hotplug hooks.

Has anyone actually measured any slowdown due to the rcu_node structure
not matching the topology?  (But see also below.)

> > Actually, some of this topology shifting seems to me like a firmware
> > bug.  Why not arrange the Linux-visible numbering in a way to promote
> > locality for code sequencing through the CPUs?
> 
> I'm not sure.. but it seems well established on x86 to first enumerate
> the cores (thread 0) and then the sibling threads (thread 1) -- one
> 'advantage' is that if you boot with max_cpus=$half you get all cores
> instead of half the cores.
> 
> OTOH it does make linear iteration of the cpus 'funny' :-)

Like I said, firmware bug.  Seems like the fix should be there as well.
Perhaps there need to be two CPU numberings, one for people wanting
whole cores and another for people who want cache locality.  Yes, this
could be confusing, but keep in mind that you are asking every kernel
subsystem to keep its own version of the cache-locality numbering,
and that will be even more confusing.

> Also, a fanout of 16 is nice when your machine doesn't have HT and has a
> 2^n core count, but some popular machines these days have 6/10 cores per
> socket, resulting in your fanout splitting caches.

That is easy.  Such systems can set CONFIG_RCU_FANOUT to 6, 12, 10,
or 20, depending on preference.  With a patch intended for 3.6, they
could set the smallest reasonable value at build time and adjust to
the hardware using the boot parameter.

http://www.gossamer-threads.com/lists/linux/kernel/1524864
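
For instance, on a four-socket box with ten cores per socket (40 CPUs), and assuming CPUs are numbered consecutively within each socket, a leaf fanout of 16 gives leaf rcu_node structures spanning CPUs 0-15, 16-31, and 32-39, each straddling a socket boundary, whereas CONFIG_RCU_FANOUT=10 gives one leaf per socket and CONFIG_RCU_FANOUT=20 one leaf per pair of sockets.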

I expect to make other similar changes over time, but will be proceeding
cautiously.

							Thanx, Paul


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH RFC tip/core/rcu 4/6] rcu: Clarify help text for RCU_BOOST_PRIO
  2012-04-26 12:46     ` Peter Zijlstra
@ 2012-04-26 17:28       ` Paul E. McKenney
  0 siblings, 0 replies; 32+ messages in thread
From: Paul E. McKenney @ 2012-04-26 17:28 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, niv, tglx, rostedt, Valdis.Kletnieks, dhowells,
	eric.dumazet, darren, fweisbec, patches, Paul E. McKenney

On Thu, Apr 26, 2012 at 02:46:31PM +0200, Peter Zijlstra wrote:
> On Mon, 2012-04-23 at 09:42 -0700, Paul E. McKenney wrote:
> > +         This option specifies the real-time priority to which long-term
> > +         preempted RCU readers are to be boosted.  If you are working
> > +         with a real-time application that has one or more CPU-bound
> > +         threads running at a real-time priority level,
> 
> Then your application is broken ;-) the kernel is known to mis-behave
> under these circumstances since it doesn't get to run house-keeping
> tasks. RCU is just one of these and elevating it doesn't make it work.

As you say, CPU-bound RT tasks have a number of problems, and RCU is but
one of them.  That said, an RCU-induced memory-exhaustion system hang
is an extremely unfriendly diagnostic message, and use of RCU priority
boosting allows them a better debugging environment.

> >  you should set
> > +         RCU_BOOST_PRIO to a priority higher than the highest-priority
> > +         real-time CPU-bound thread.  The default RCU_BOOST_PRIO value
> > +         of 1 is appropriate in the common case, which is real-time
> > +         applications that do not have any CPU-bound threads.
> 
> Alternatively, 1 is the worst possible choice forcing people to consider
> the issue.

You say that as if forcing people to consider the issue was a
bad thing.  ;-)

> > +         Some real-time applications might not have a single real-time
> > +         thread that saturates a given CPU, but instead might have
> > +         multiple real-time threads that, taken together, fully utilize
> > +         that CPU.  In this case, you should set RCU_BOOST_PRIO to
> > +         a priority higher than the lowest-priority thread that is
> > +         conspiring to prevent the CPU from running any non-real-time
> > +         tasks.  For example, if one thread at priority 10 and another
> > +         thread at priority 5 are between themselves fully consuming
> > +         the CPU time on a given CPU, then RCU_BOOST_PRIO should be
> > +         set to priority 6 or higher. 
> 
> I'd call this misleading; who's to say that preempting the 5 would yield
> enough time to complete the RCU work?

Yep, hence the "or higher".

> This all gets us back to the fun question of RCU delayed bandwidth
> budgeting.. ideally every 'task' that does call_rcu() should donate some
> of its budget towards the thread running the callback.

There was an academic interested in that topic a few years ago, but
I don't believe anything came of it.  An interesting approach would
be to do EDF scheduling on the callbacks themselves, but having a
separate thread for each callback sounds like overkill.

> Anyway, I'd argue both the old and new description are bonkers.

Indeed, my goal was "less bonkers" rather than "not bonkers".  A
"not bonkers" description remains a long-term aspiration rather than
a short-term goal for the moment.  I can only hope that the timeframe
is shorter than it was for RCU back in the early 1990s.  ;-)

							Thanx, Paul


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH RFC tip/core/rcu 6/6] rcu: Reduce cache-miss initialization latencies for large systems
  2012-04-26 16:15           ` Paul E. McKenney
@ 2012-04-26 19:41             ` Peter Zijlstra
  2012-04-26 19:47               ` Peter Zijlstra
  2012-04-26 20:28               ` Paul E. McKenney
  0 siblings, 2 replies; 32+ messages in thread
From: Peter Zijlstra @ 2012-04-26 19:41 UTC (permalink / raw)
  To: paulmck
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, niv, tglx, rostedt, Valdis.Kletnieks, dhowells,
	eric.dumazet, darren, fweisbec, patches

On Thu, 2012-04-26 at 09:15 -0700, Paul E. McKenney wrote:
> On Thu, Apr 26, 2012 at 05:28:57PM +0200, Peter Zijlstra wrote:
> > On Thu, 2012-04-26 at 07:12 -0700, Paul E. McKenney wrote:
> > > On Thu, Apr 26, 2012 at 02:51:47PM +0200, Peter Zijlstra wrote:
> > 
> > > > Wouldn't it be much better to match the rcu fanout tree to the physical
> > > > topology of the machine?
> > > 
> > > From what I am hearing, doing so requires me to morph the rcu_node tree
> > > at run time.  I might eventually become courageous/inspired/senile
> > > enough to try this, but not yet.  ;-)
> > 
> > Yes, boot time with possibly some hotplug hooks.
> 
> Has anyone actually measured any slowdown due to the rcu_node structure
> not matching the topology?  (But see also below.)

Nope, I'm just whinging ;-)

> > > Actually, some of this topology shifting seems to me like a firmware
> > > bug.  Why not arrange the Linux-visible numbering in a way to promote
> > > locality for code sequencing through the CPUs?
> > 
> > I'm not sure.. but it seems well established on x86 to first enumerate
> > the cores (thread 0) and then the sibling threads (thread 1) -- one
> > 'advantage' is that if you boot with max_cpus=$half you get all cores
> > instead of half the cores.
> > 
> > OTOH it does make linear iteration of the cpus 'funny' :-)
> 
> Like I said, firmware bug.  Seems like the fix should be there as well.
> Perhaps there needs to be two CPU numberings, one for people wanting
> whole cores and another for people who want cache locality.  Yes, this
> could be confusing, but keep in mind that you are asking every kernel
> subsystem to keep its own version of the cache-locality numbering,
> and that will be even more confusing.

I really don't see why it would matter; as far as I care they're completely
randomized on boot, it's all done using cpu-bitmasks anyway.

Suppose the linear thing would have threads/cores/cache continuity like
you want, that still leaves the node interconnects, and we all know
people love to be creative with those, no way you're going to fold that
into a linear scheme :-)

Anyway, as it currently stands I can offer you: cpus_share_cache(),
which will return true/false depending on if the two cpus do indeed
share a cache.

On top of that there's node_distance(), which will (hopefully) reflect
the interconnect topology between nodes.

Using these you can construct enough of the topology layout to be
useful. NOTE: node topologies don't need to be symmetric!
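
As a rough sketch of what a consumer can already do with just those two
interfaces (illustrative only, not from any posted patch; the function
name build_cache_groups() and its arguments are invented), something
along these lines buckets the online CPUs into cache-sharing groups,
and node_distance() could then be used to order the groups themselves:

#include <linux/cpumask.h>
#include <linux/sched.h>	/* cpus_share_cache() */

/*
 * Sketch only: bucket the online CPUs into groups whose members share
 * a cache, using nothing but cpus_share_cache().  The caller supplies
 * the groups[] array (one struct cpumask per potential group); hotplug
 * locking and error handling are omitted.
 */
static int build_cache_groups(struct cpumask *groups, int max_groups)
{
	int cpu, g, nr_groups = 0;

	for_each_online_cpu(cpu) {
		for (g = 0; g < nr_groups; g++) {
			if (cpus_share_cache(cpu, cpumask_first(&groups[g]))) {
				cpumask_set_cpu(cpu, &groups[g]);
				break;
			}
		}
		if (g == nr_groups && nr_groups < max_groups) {
			cpumask_clear(&groups[nr_groups]);
			cpumask_set_cpu(cpu, &groups[nr_groups]);
			nr_groups++;
		}
	}

	return nr_groups;
}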

> > Also, a fanout of 16 is nice when your machine doesn't have HT and has a
> > 2^n core count, but some popular machines these days have 6/10 cores per
> > socket, resulting in your fanout splitting caches.
> 
> That is easy.  Such systems can set CONFIG_RCU_FANOUT to 6, 12, 10,
> or 20, depending on preference.  With a patch intended for 3.6, they
> could set the smallest reasonable value at build time and adjust to
> the hardware using the boot parameter.
> 
> http://www.gossamer-threads.com/lists/linux/kernel/1524864
> 
> I expect to make other similar changes over time, but will be proceeding
> cautiously.

I can very easily give you the size (nr cpus in) a node, still as long
as you iterate the cpu space linearly that's not going to be much help.

I can also offer you access to the scheduler topology if you want.. I've
got the below patch pending which should (hopefully) improve the
scheduler's node topology -- it implements that detection based on
node_distance().

I've tested it using some fake-numa hacks to feed it the distance table
of an AMD quad-socket Magny-Cours:

 "numa=fake=8:10,16,16,22,16,22,16,22,
              16,10,22,16,22,16,22,16,
              16,22,10,16,16,22,16,22,
              22,16,16,10,22,16,22,16,
              16,22,16,22,10,16,16,22,
              22,16,22,16,16,10,22,16,
              16,22,16,22,16,22,10,16,
              22,16,22,16,22,16,16,10"
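
(Reading the table: the 10s on the diagonal are the local distance, 16
marks directly connected node pairs, and 22 marks pairs needing an
intermediate hop; each of the eight nodes is directly connected to four
others and is two hops away from the remaining three.)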



---
Subject: sched: Rewrite the CONFIG_NUMA sched domain support.
From: Peter Zijlstra <a.p.zijlstra@chello.nl>
Date: Tue Apr 17 15:49:36 CEST 2012

The current code groups up to 16 nodes in a level and then puts an
ALLNODES domain spanning the entire tree on top of that. This doesn't
reflect the numa topology and esp for the smaller not-fully-connected
machines out there today this might make a difference.

Therefore, build a proper numa topology based on node_distance().

TODO: figure out a way to set SD_flags based on distance such that
      we disable various expensive load-balancing features at some
      point and increase the balance interval prop. to the distance.

Cc: Anton Blanchard <anton@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: David Howells <dhowells@redhat.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: linux-alpha@vger.kernel.org
Cc: linux-ia64@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-mips@linux-mips.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-sh@vger.kernel.org
Cc: Matt Turner <mattst88@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Richard Henderson <rth@twiddle.net>
Cc: sparclinux@vger.kernel.org
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: x86@kernel.org
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/n/tip-fgj6245hxj61qe8vy7c6cmjj@git.kernel.org
---
 arch/powerpc/include/asm/topology.h |    6 
 arch/x86/include/asm/topology.h     |   38 -----
 include/linux/topology.h            |   36 -----
 kernel/sched/core.c                 |  253 ++++++++++++++++++++++--------------
 4 files changed, 158 insertions(+), 175 deletions(-)

Index: linux-2.6/include/linux/topology.h
===================================================================
--- linux-2.6.orig/include/linux/topology.h
+++ linux-2.6/include/linux/topology.h
@@ -176,48 +176,12 @@ int arch_update_cpu_topology(void);
 }
 #endif
 
-/* sched_domains SD_ALLNODES_INIT for NUMA machines */
-#define SD_ALLNODES_INIT (struct sched_domain) {			\
-	.min_interval		= 64,					\
-	.max_interval		= 64*num_online_cpus(),			\
-	.busy_factor		= 128,					\
-	.imbalance_pct		= 133,					\
-	.cache_nice_tries	= 1,					\
-	.busy_idx		= 3,					\
-	.idle_idx		= 3,					\
-	.flags			= 1*SD_LOAD_BALANCE			\
-				| 1*SD_BALANCE_NEWIDLE			\
-				| 0*SD_BALANCE_EXEC			\
-				| 0*SD_BALANCE_FORK			\
-				| 0*SD_BALANCE_WAKE			\
-				| 0*SD_WAKE_AFFINE			\
-				| 0*SD_SHARE_CPUPOWER			\
-				| 0*SD_POWERSAVINGS_BALANCE		\
-				| 0*SD_SHARE_PKG_RESOURCES		\
-				| 1*SD_SERIALIZE			\
-				| 0*SD_PREFER_SIBLING			\
-				,					\
-	.last_balance		= jiffies,				\
-	.balance_interval	= 64,					\
-}
-
-#ifndef SD_NODES_PER_DOMAIN
-#define SD_NODES_PER_DOMAIN 16
-#endif
-
 #ifdef CONFIG_SCHED_BOOK
 #ifndef SD_BOOK_INIT
 #error Please define an appropriate SD_BOOK_INIT in include/asm/topology.h!!!
 #endif
 #endif /* CONFIG_SCHED_BOOK */
 
-#ifdef CONFIG_NUMA
-#ifndef SD_NODE_INIT
-#error Please define an appropriate SD_NODE_INIT in include/asm/topology.h!!!
-#endif
-
-#endif /* CONFIG_NUMA */
-
 #ifdef CONFIG_USE_PERCPU_NUMA_NODE_ID
 DECLARE_PER_CPU(int, numa_node);
 
Index: linux-2.6/kernel/sched/core.c
===================================================================
--- linux-2.6.orig/kernel/sched/core.c
+++ linux-2.6/kernel/sched/core.c
@@ -5560,7 +5560,8 @@ static int sched_domain_debug_one(struct
 			break;
 		}
 
-		if (cpumask_intersects(groupmask, sched_group_cpus(group))) {
+		if (!(sd->flags & SD_OVERLAP) &&
+		    cpumask_intersects(groupmask, sched_group_cpus(group))) {
 			printk(KERN_CONT "\n");
 			printk(KERN_ERR "ERROR: repeated CPUs\n");
 			break;
@@ -5898,92 +5899,6 @@ static int __init isolated_cpu_setup(cha
 
 __setup("isolcpus=", isolated_cpu_setup);
 
-#ifdef CONFIG_NUMA
-
-/**
- * find_next_best_node - find the next node to include in a sched_domain
- * @node: node whose sched_domain we're building
- * @used_nodes: nodes already in the sched_domain
- *
- * Find the next node to include in a given scheduling domain. Simply
- * finds the closest node not already in the @used_nodes map.
- *
- * Should use nodemask_t.
- */
-static int find_next_best_node(int node, nodemask_t *used_nodes)
-{
-	int i, n, val, min_val, best_node = -1;
-
-	min_val = INT_MAX;
-
-	for (i = 0; i < nr_node_ids; i++) {
-		/* Start at @node */
-		n = (node + i) % nr_node_ids;
-
-		if (!nr_cpus_node(n))
-			continue;
-
-		/* Skip already used nodes */
-		if (node_isset(n, *used_nodes))
-			continue;
-
-		/* Simple min distance search */
-		val = node_distance(node, n);
-
-		if (val < min_val) {
-			min_val = val;
-			best_node = n;
-		}
-	}
-
-	if (best_node != -1)
-		node_set(best_node, *used_nodes);
-	return best_node;
-}
-
-/**
- * sched_domain_node_span - get a cpumask for a node's sched_domain
- * @node: node whose cpumask we're constructing
- * @span: resulting cpumask
- *
- * Given a node, construct a good cpumask for its sched_domain to span. It
- * should be one that prevents unnecessary balancing, but also spreads tasks
- * out optimally.
- */
-static void sched_domain_node_span(int node, struct cpumask *span)
-{
-	nodemask_t used_nodes;
-	int i;
-
-	cpumask_clear(span);
-	nodes_clear(used_nodes);
-
-	cpumask_or(span, span, cpumask_of_node(node));
-	node_set(node, used_nodes);
-
-	for (i = 1; i < SD_NODES_PER_DOMAIN; i++) {
-		int next_node = find_next_best_node(node, &used_nodes);
-		if (next_node < 0)
-			break;
-		cpumask_or(span, span, cpumask_of_node(next_node));
-	}
-}
-
-static const struct cpumask *cpu_node_mask(int cpu)
-{
-	lockdep_assert_held(&sched_domains_mutex);
-
-	sched_domain_node_span(cpu_to_node(cpu), sched_domains_tmpmask);
-
-	return sched_domains_tmpmask;
-}
-
-static const struct cpumask *cpu_allnodes_mask(int cpu)
-{
-	return cpu_possible_mask;
-}
-#endif /* CONFIG_NUMA */
-
 static const struct cpumask *cpu_cpu_mask(int cpu)
 {
 	return cpumask_of_node(cpu_to_node(cpu));
@@ -6020,6 +5935,7 @@ struct sched_domain_topology_level {
 	sched_domain_init_f init;
 	sched_domain_mask_f mask;
 	int		    flags;
+	int		    numa_level;
 	struct sd_data      data;
 };
 
@@ -6211,10 +6127,6 @@ sd_init_##type(struct sched_domain_topol
 }
 
 SD_INIT_FUNC(CPU)
-#ifdef CONFIG_NUMA
- SD_INIT_FUNC(ALLNODES)
- SD_INIT_FUNC(NODE)
-#endif
 #ifdef CONFIG_SCHED_SMT
  SD_INIT_FUNC(SIBLING)
 #endif
@@ -6336,15 +6248,164 @@ static struct sched_domain_topology_leve
 	{ sd_init_BOOK, cpu_book_mask, },
 #endif
 	{ sd_init_CPU, cpu_cpu_mask, },
-#ifdef CONFIG_NUMA
-	{ sd_init_NODE, cpu_node_mask, SDTL_OVERLAP, },
-	{ sd_init_ALLNODES, cpu_allnodes_mask, },
-#endif
 	{ NULL, },
 };
 
 static struct sched_domain_topology_level *sched_domain_topology = default_topology;
 
+#ifdef CONFIG_NUMA
+
+static int sched_domains_numa_levels;
+static int sched_domains_numa_scale;
+static int *sched_domains_numa_distance;
+static struct cpumask ** __percpu sched_domains_numa_masks;
+static int sched_domains_curr_level;
+
+#define NUMA_SCALE(x, level)	\
+
+static inline unsigned long numa_scale(unsigned long x, int level)
+{
+	return x * sched_domains_numa_distance[level] / sched_domains_numa_scale;
+}
+
+static inline int sd_local_flags(int level)
+{
+	if (sched_domains_numa_distance[level] > REMOTE_DISTANCE)
+		return 0;
+
+	return SD_BALANCE_EXEC | SD_BALANCE_FORK | SD_WAKE_AFFINE;
+}
+
+static struct sched_domain *
+sd_init_NUMA(struct sched_domain_topology_level *tl, int cpu)
+{
+	struct sched_domain *sd = *per_cpu_ptr(tl->data.sd, cpu);
+	int level = tl->numa_level;
+	int sd_weight = cpumask_weight(per_cpu_ptr(
+				sched_domains_numa_masks[level],
+				cpu));
+
+	*sd = (struct sched_domain){
+		.min_interval		= sd_weight,
+		.max_interval		= 2*sd_weight,
+		.busy_factor		= 32,
+		.imbalance_pct		= 100 + numa_scale(25, level),
+		.cache_nice_tries	= 2,
+		.busy_idx		= 3,
+		.idle_idx		= 2,
+		.newidle_idx		= 0,
+		.wake_idx		= 0,
+		.forkexec_idx		= 0,
+
+		.flags			= 1*SD_LOAD_BALANCE
+					| 1*SD_BALANCE_NEWIDLE
+					| 0*SD_BALANCE_EXEC
+					| 0*SD_BALANCE_FORK
+					| 0*SD_BALANCE_WAKE
+					| 0*SD_WAKE_AFFINE
+					| 0*SD_PREFER_LOCAL
+					| 0*SD_SHARE_CPUPOWER
+					| 0*SD_POWERSAVINGS_BALANCE
+					| 0*SD_SHARE_PKG_RESOURCES
+					| 1*SD_SERIALIZE
+					| 0*SD_PREFER_SIBLING
+					| sd_local_flags(level)
+					,
+		.last_balance		= jiffies,
+		.balance_interval	= sd_weight,
+	};
+	SD_INIT_NAME(sd, NUMA);
+	sd->private = &tl->data;
+
+	/*
+	 * Ugly hack to pass state to sd_numa_mask()...
+	 */
+	sched_domains_curr_level = tl->numa_level;
+
+	return sd;
+}
+
+static const struct cpumask *sd_numa_mask(int cpu)
+{
+	return per_cpu_ptr(sched_domains_numa_masks[sched_domains_curr_level], cpu);
+}
+
+static void sched_init_numa(void)
+{
+	int next_distance, curr_distance = node_distance(0, 0);
+	struct sched_domain_topology_level *tl;
+	int level = 0;
+	int i, j, k;
+
+	sched_domains_numa_scale = curr_distance;
+	sched_domains_numa_distance = kzalloc(sizeof(int) * nr_node_ids, GFP_KERNEL);
+	if (!sched_domains_numa_distance)
+		return;
+
+	next_distance = curr_distance;
+	for (i = 0; i < nr_node_ids; i++) {
+		for (j = 0; j < nr_node_ids; j++) {
+			int distance = node_distance(0, j);
+			if (distance > curr_distance &&
+					(distance < next_distance ||
+					 next_distance == curr_distance))
+				next_distance = distance;
+		}
+		if (next_distance != curr_distance) {
+			sched_domains_numa_distance[level++] = next_distance;
+			sched_domains_numa_levels = level;
+			curr_distance = next_distance;
+		} else break;
+	}
+
+	sched_domains_numa_masks = kzalloc(sizeof(void *) * level, GFP_KERNEL);
+	if (!sched_domains_numa_masks)
+		return;
+
+	for (i = 0; i < level; i++) {
+		sched_domains_numa_masks[i] = alloc_percpu(cpumask_t);
+		if (!sched_domains_numa_masks[i])
+			return;
+
+		for_each_possible_cpu(j) {
+			struct cpumask *mask =
+				per_cpu_ptr(sched_domains_numa_masks[i], j);
+
+			for (k = 0; k < nr_node_ids; k++) {
+				if (node_distance(cpu_to_node(j), k) >
+						sched_domains_numa_distance[i])
+					continue;
+
+				cpumask_or(mask, mask, cpumask_of_node(k));
+			}
+		}
+	}
+
+	tl = kzalloc((ARRAY_SIZE(default_topology) + level) *
+			sizeof(struct sched_domain_topology_level), GFP_KERNEL);
+	if (!tl)
+		return;
+
+	for (i = 0; default_topology[i].init; i++)
+		tl[i] = default_topology[i];
+
+	for (j = 0; j < level; i++, j++) {
+		tl[i] = (struct sched_domain_topology_level){
+			.init = sd_init_NUMA,
+			.mask = sd_numa_mask,
+			.flags = SDTL_OVERLAP,
+			.numa_level = j,
+		};
+	}
+
+	sched_domain_topology = tl;
+}
+#else
+static inline void sched_init_numa(void)
+{
+}
+#endif /* CONFIG_NUMA */
+
 static int __sdt_alloc(const struct cpumask *cpu_map)
 {
 	struct sched_domain_topology_level *tl;
@@ -6828,6 +6889,8 @@ void __init sched_init_smp(void)
 	alloc_cpumask_var(&non_isolated_cpus, GFP_KERNEL);
 	alloc_cpumask_var(&fallback_doms, GFP_KERNEL);
 
+	sched_init_numa();
+
 	get_online_cpus();
 	mutex_lock(&sched_domains_mutex);
 	init_sched_domains(cpu_active_mask);
Index: linux-2.6/arch/powerpc/include/asm/topology.h
===================================================================
--- linux-2.6.orig/arch/powerpc/include/asm/topology.h
+++ linux-2.6/arch/powerpc/include/asm/topology.h
@@ -18,12 +18,6 @@ struct device_node;
  */
 #define RECLAIM_DISTANCE 10
 
-/*
- * Avoid creating an extra level of balancing (SD_ALLNODES) on the largest
- * POWER7 boxes which have a maximum of 32 nodes.
- */
-#define SD_NODES_PER_DOMAIN 32
-
 #include <asm/mmzone.h>
 
 static inline int cpu_to_node(int cpu)
Index: linux-2.6/arch/x86/include/asm/topology.h
===================================================================
--- linux-2.6.orig/arch/x86/include/asm/topology.h
+++ linux-2.6/arch/x86/include/asm/topology.h
@@ -92,44 +92,6 @@ extern void setup_node_to_cpumask_map(vo
 
 #define pcibus_to_node(bus) __pcibus_to_node(bus)
 
-#ifdef CONFIG_X86_32
-# define SD_CACHE_NICE_TRIES	1
-# define SD_IDLE_IDX		1
-#else
-# define SD_CACHE_NICE_TRIES	2
-# define SD_IDLE_IDX		2
-#endif
-
-/* sched_domains SD_NODE_INIT for NUMA machines */
-#define SD_NODE_INIT (struct sched_domain) {				\
-	.min_interval		= 8,					\
-	.max_interval		= 32,					\
-	.busy_factor		= 32,					\
-	.imbalance_pct		= 125,					\
-	.cache_nice_tries	= SD_CACHE_NICE_TRIES,			\
-	.busy_idx		= 3,					\
-	.idle_idx		= SD_IDLE_IDX,				\
-	.newidle_idx		= 0,					\
-	.wake_idx		= 0,					\
-	.forkexec_idx		= 0,					\
-									\
-	.flags			= 1*SD_LOAD_BALANCE			\
-				| 1*SD_BALANCE_NEWIDLE			\
-				| 1*SD_BALANCE_EXEC			\
-				| 1*SD_BALANCE_FORK			\
-				| 0*SD_BALANCE_WAKE			\
-				| 1*SD_WAKE_AFFINE			\
-				| 0*SD_PREFER_LOCAL			\
-				| 0*SD_SHARE_CPUPOWER			\
-				| 0*SD_POWERSAVINGS_BALANCE		\
-				| 0*SD_SHARE_PKG_RESOURCES		\
-				| 1*SD_SERIALIZE			\
-				| 0*SD_PREFER_SIBLING			\
-				,					\
-	.last_balance		= jiffies,				\
-	.balance_interval	= 1,					\
-}
-
 extern int __node_distance(int, int);
 #define node_distance(a, b) __node_distance(a, b)
 



^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH RFC tip/core/rcu 6/6] rcu: Reduce cache-miss initialization latencies for large systems
  2012-04-26 19:41             ` Peter Zijlstra
@ 2012-04-26 19:47               ` Peter Zijlstra
  2012-04-26 20:29                 ` Paul E. McKenney
  2012-04-26 20:28               ` Paul E. McKenney
  1 sibling, 1 reply; 32+ messages in thread
From: Peter Zijlstra @ 2012-04-26 19:47 UTC (permalink / raw)
  To: paulmck
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, niv, tglx, rostedt, Valdis.Kletnieks, dhowells,
	eric.dumazet, darren, fweisbec, patches

On Thu, 2012-04-26 at 21:41 +0200, Peter Zijlstra wrote:
> 
> I can very easily give you the size (nr cpus in) a node, still as long
> as you iterate the cpu space linearly that's not going to be much
> help.
> 
Oh, I forgot, the numa masks etc are available, depending on
CONFIG_NUMA, as cpumask_of_node(n).


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH RFC tip/core/rcu 6/6] rcu: Reduce cache-miss initialization latencies for large systems
  2012-04-26 19:41             ` Peter Zijlstra
  2012-04-26 19:47               ` Peter Zijlstra
@ 2012-04-26 20:28               ` Paul E. McKenney
  2012-04-26 22:01                 ` Peter Zijlstra
  1 sibling, 1 reply; 32+ messages in thread
From: Paul E. McKenney @ 2012-04-26 20:28 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, niv, tglx, rostedt, Valdis.Kletnieks, dhowells,
	eric.dumazet, darren, fweisbec, patches

On Thu, Apr 26, 2012 at 09:41:59PM +0200, Peter Zijlstra wrote:
> On Thu, 2012-04-26 at 09:15 -0700, Paul E. McKenney wrote:
> > On Thu, Apr 26, 2012 at 05:28:57PM +0200, Peter Zijlstra wrote:
> > > On Thu, 2012-04-26 at 07:12 -0700, Paul E. McKenney wrote:
> > > > On Thu, Apr 26, 2012 at 02:51:47PM +0200, Peter Zijlstra wrote:
> > > 
> > > > > Wouldn't it be much better to match the rcu fanout tree to the physical
> > > > > topology of the machine?
> > > > 
> > > > From what I am hearing, doing so requires me to morph the rcu_node tree
> > > > at run time.  I might eventually become courageous/inspired/senile
> > > > enough to try this, but not yet.  ;-)
> > > 
> > > Yes, boot time with possibly some hotplug hooks.
> > 
> > Has anyone actually measured any slowdown due to the rcu_node structure
> > not matching the topology?  (But see also below.)
> 
> Nope, I'm just whinging ;-)

Hmmm, I suppose that I am whinging as well...

> > > > Actually, some of this topology shifting seems to me like a firmware
> > > > bug.  Why not arrange the Linux-visible numbering in a way to promote
> > > > locality for code sequencing through the CPUs?
> > > 
> > > I'm not sure.. but it seems well established on x86 to first enumerate
> > > the cores (thread 0) and then the sibling threads (thread 1) -- one
> > > 'advantage' is that if you boot with max_cpus=$half you get all cores
> > > instead of half the cores.
> > > 
> > > OTOH it does make linear iteration of the cpus 'funny' :-)
> > 
> > Like I said, firmware bug.  Seems like the fix should be there as well.
> > Perhaps there needs to be two CPU numberings, one for people wanting
> > whole cores and another for people who want cache locality.  Yes, this
> > could be confusing, but keep in mind that you are asking every kernel
> > subsystem to keep its own version of the cache-locality numbering,
> > and that will be even more confusing.
> 
> I really don't see why it would matter; as far as I care they're completely
> randomized on boot, it's all done using cpu-bitmasks anyway.
> 
> Suppose the linear thing would have threads/cores/cache continuity like
> you want, that still leaves the node interconnects, and we all know
> people love to be creative with those, no way you're going to fold that
> into a linear scheme :-)
> 
> Anyway, as it currently stands I can offer you: cpus_share_cache(),
> which will return true/false depending on if the two cpus do indeed
> share a cache.
> 
> On top of that there's node_distance(), which will (hopefully) reflect
> the interconnect topology between nodes.
> 
> Using these you can construct enough of the topology layout to be
> useful. NOTE: node topologies don't need to be symmetric!

One difference between RCU and the scheduler is that the scheduler needs
to worry not just about its own accesses, but also those of the user
processes.  If RCU chooses a suboptimal mapping for the rcu_node tree,
it matters only a few times per grace period per CPU.  On the other hand,
if the scheduler chooses a suboptimal migration for a task, or a suboptimal
placement for a pair of tasks, a potentially very large number of accesses
by those tasks is affected.

> > > Also, a fanout of 16 is nice when your machine doesn't have HT and has a
> > > 2^n core count, but some popular machines these days have 6/10 cores per
> > > socket, resulting in your fanout splitting caches.
> > 
> > That is easy.  Such systems can set CONFIG_RCU_FANOUT to 6, 12, 10,
> > or 20, depending on preference.  With a patch intended for 3.6, they
> > could set the smallest reasonable value at build time and adjust to
> > the hardware using the boot parameter.
> > 
> > http://www.gossamer-threads.com/lists/linux/kernel/1524864
> > 
> > I expect to make other similar changes over time, but will be proceeding
> > cautiously.
> 
> I can very easily give you the size (nr cpus in) a node, still as long
> as you iterate the cpu space linearly that's not going to be much help.

Grace-period setup and quiescent-state forcing do iterate linearly.

> I can also offer you access to the scheduler topology if you want.. I've
> got the below patch pending which should (hopefully) improve the
> scheduler's node topology -- it implements that detection based on
> node_distance().
> 
> I've tested it using some fake-numa hacks to feed it the distance table
> of an AMD quad-socket Magny-Cours:
> 
>  "numa=fake=8:10,16,16,22,16,22,16,22,
>               16,10,22,16,22,16,22,16,
>               16,22,10,16,16,22,16,22,
>               22,16,16,10,22,16,22,16,
>               16,22,16,22,10,16,16,22,
>               22,16,22,16,16,10,22,16,
>               16,22,16,22,16,22,10,16,
>               22,16,22,16,22,16,16,10"

Interesting!  A few probably clueless questions and comments below.

							Thanx, Paul

> ---
> Subject: sched: Rewrite the CONFIG_NUMA sched domain support.
> From: Peter Zijlstra <a.p.zijlstra@chello.nl>
> Date: Tue Apr 17 15:49:36 CEST 2012
> 
> The current code groups up to 16 nodes in a level and then puts an
> ALLNODES domain spanning the entire tree on top of that. This doesn't
> reflect the numa topology and esp for the smaller not-fully-connected
> machines out there today this might make a difference.
> 
> Therefore, build a proper numa topology based on node_distance().
> 
> TODO: figure out a way to set SD_flags based on distance such that
>       we disable various expensive load-balancing features at some
>       point and increase the balance interval prop. to the distance.
> 
> Cc: Anton Blanchard <anton@samba.org>
> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
> Cc: Chris Metcalf <cmetcalf@tilera.com>
> Cc: David Howells <dhowells@redhat.com>
> Cc: "David S. Miller" <davem@davemloft.net>
> Cc: Fenghua Yu <fenghua.yu@intel.com>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
> Cc: linux-alpha@vger.kernel.org
> Cc: linux-ia64@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> Cc: linux-mips@linux-mips.org
> Cc: linuxppc-dev@lists.ozlabs.org
> Cc: linux-sh@vger.kernel.org
> Cc: Matt Turner <mattst88@gmail.com>
> Cc: Paul Mackerras <paulus@samba.org>
> Cc: Paul Mundt <lethal@linux-sh.org>
> Cc: Ralf Baechle <ralf@linux-mips.org>
> Cc: Richard Henderson <rth@twiddle.net>
> Cc: sparclinux@vger.kernel.org
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Tony Luck <tony.luck@intel.com>
> Cc: x86@kernel.org
> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
> Link: http://lkml.kernel.org/n/tip-fgj6245hxj61qe8vy7c6cmjj@git.kernel.org
> ---
>  arch/powerpc/include/asm/topology.h |    6 
>  arch/x86/include/asm/topology.h     |   38 -----
>  include/linux/topology.h            |   36 -----
>  kernel/sched/core.c                 |  253 ++++++++++++++++++++++--------------
>  4 files changed, 158 insertions(+), 175 deletions(-)
> 
> Index: linux-2.6/include/linux/topology.h
> ===================================================================
> --- linux-2.6.orig/include/linux/topology.h
> +++ linux-2.6/include/linux/topology.h
> @@ -176,48 +176,12 @@ int arch_update_cpu_topology(void);
>  }
>  #endif
> 
> -/* sched_domains SD_ALLNODES_INIT for NUMA machines */
> -#define SD_ALLNODES_INIT (struct sched_domain) {			\
> -	.min_interval		= 64,					\
> -	.max_interval		= 64*num_online_cpus(),			\
> -	.busy_factor		= 128,					\
> -	.imbalance_pct		= 133,					\
> -	.cache_nice_tries	= 1,					\
> -	.busy_idx		= 3,					\
> -	.idle_idx		= 3,					\
> -	.flags			= 1*SD_LOAD_BALANCE			\
> -				| 1*SD_BALANCE_NEWIDLE			\
> -				| 0*SD_BALANCE_EXEC			\
> -				| 0*SD_BALANCE_FORK			\
> -				| 0*SD_BALANCE_WAKE			\
> -				| 0*SD_WAKE_AFFINE			\
> -				| 0*SD_SHARE_CPUPOWER			\
> -				| 0*SD_POWERSAVINGS_BALANCE		\
> -				| 0*SD_SHARE_PKG_RESOURCES		\
> -				| 1*SD_SERIALIZE			\
> -				| 0*SD_PREFER_SIBLING			\
> -				,					\
> -	.last_balance		= jiffies,				\
> -	.balance_interval	= 64,					\
> -}
> -
> -#ifndef SD_NODES_PER_DOMAIN
> -#define SD_NODES_PER_DOMAIN 16
> -#endif
> -
>  #ifdef CONFIG_SCHED_BOOK
>  #ifndef SD_BOOK_INIT
>  #error Please define an appropriate SD_BOOK_INIT in include/asm/topology.h!!!
>  #endif
>  #endif /* CONFIG_SCHED_BOOK */
> 
> -#ifdef CONFIG_NUMA
> -#ifndef SD_NODE_INIT
> -#error Please define an appropriate SD_NODE_INIT in include/asm/topology.h!!!
> -#endif
> -
> -#endif /* CONFIG_NUMA */
> -
>  #ifdef CONFIG_USE_PERCPU_NUMA_NODE_ID
>  DECLARE_PER_CPU(int, numa_node);
> 
> Index: linux-2.6/kernel/sched/core.c
> ===================================================================
> --- linux-2.6.orig/kernel/sched/core.c
> +++ linux-2.6/kernel/sched/core.c
> @@ -5560,7 +5560,8 @@ static int sched_domain_debug_one(struct
>  			break;
>  		}
> 
> -		if (cpumask_intersects(groupmask, sched_group_cpus(group))) {
> +		if (!(sd->flags & SD_OVERLAP) &&
> +		    cpumask_intersects(groupmask, sched_group_cpus(group))) {
>  			printk(KERN_CONT "\n");
>  			printk(KERN_ERR "ERROR: repeated CPUs\n");
>  			break;
> @@ -5898,92 +5899,6 @@ static int __init isolated_cpu_setup(cha
> 
>  __setup("isolcpus=", isolated_cpu_setup);
> 
> -#ifdef CONFIG_NUMA
> -
> -/**
> - * find_next_best_node - find the next node to include in a sched_domain
> - * @node: node whose sched_domain we're building
> - * @used_nodes: nodes already in the sched_domain
> - *
> - * Find the next node to include in a given scheduling domain. Simply
> - * finds the closest node not already in the @used_nodes map.
> - *
> - * Should use nodemask_t.
> - */
> -static int find_next_best_node(int node, nodemask_t *used_nodes)
> -{
> -	int i, n, val, min_val, best_node = -1;
> -
> -	min_val = INT_MAX;
> -
> -	for (i = 0; i < nr_node_ids; i++) {
> -		/* Start at @node */
> -		n = (node + i) % nr_node_ids;
> -
> -		if (!nr_cpus_node(n))
> -			continue;
> -
> -		/* Skip already used nodes */
> -		if (node_isset(n, *used_nodes))
> -			continue;
> -
> -		/* Simple min distance search */
> -		val = node_distance(node, n);
> -
> -		if (val < min_val) {
> -			min_val = val;
> -			best_node = n;
> -		}
> -	}
> -
> -	if (best_node != -1)
> -		node_set(best_node, *used_nodes);
> -	return best_node;
> -}
> -
> -/**
> - * sched_domain_node_span - get a cpumask for a node's sched_domain
> - * @node: node whose cpumask we're constructing
> - * @span: resulting cpumask
> - *
> - * Given a node, construct a good cpumask for its sched_domain to span. It
> - * should be one that prevents unnecessary balancing, but also spreads tasks
> - * out optimally.
> - */
> -static void sched_domain_node_span(int node, struct cpumask *span)
> -{
> -	nodemask_t used_nodes;
> -	int i;
> -
> -	cpumask_clear(span);
> -	nodes_clear(used_nodes);
> -
> -	cpumask_or(span, span, cpumask_of_node(node));
> -	node_set(node, used_nodes);
> -
> -	for (i = 1; i < SD_NODES_PER_DOMAIN; i++) {
> -		int next_node = find_next_best_node(node, &used_nodes);
> -		if (next_node < 0)
> -			break;
> -		cpumask_or(span, span, cpumask_of_node(next_node));
> -	}
> -}
> -
> -static const struct cpumask *cpu_node_mask(int cpu)
> -{
> -	lockdep_assert_held(&sched_domains_mutex);
> -
> -	sched_domain_node_span(cpu_to_node(cpu), sched_domains_tmpmask);
> -
> -	return sched_domains_tmpmask;
> -}
> -
> -static const struct cpumask *cpu_allnodes_mask(int cpu)
> -{
> -	return cpu_possible_mask;
> -}
> -#endif /* CONFIG_NUMA */
> -
>  static const struct cpumask *cpu_cpu_mask(int cpu)
>  {
>  	return cpumask_of_node(cpu_to_node(cpu));
> @@ -6020,6 +5935,7 @@ struct sched_domain_topology_level {
>  	sched_domain_init_f init;
>  	sched_domain_mask_f mask;
>  	int		    flags;
> +	int		    numa_level;
>  	struct sd_data      data;
>  };
> 
> @@ -6211,10 +6127,6 @@ sd_init_##type(struct sched_domain_topol
>  }
> 
>  SD_INIT_FUNC(CPU)
> -#ifdef CONFIG_NUMA
> - SD_INIT_FUNC(ALLNODES)
> - SD_INIT_FUNC(NODE)
> -#endif
>  #ifdef CONFIG_SCHED_SMT
>   SD_INIT_FUNC(SIBLING)
>  #endif
> @@ -6336,15 +6248,164 @@ static struct sched_domain_topology_leve
>  	{ sd_init_BOOK, cpu_book_mask, },
>  #endif
>  	{ sd_init_CPU, cpu_cpu_mask, },
> -#ifdef CONFIG_NUMA
> -	{ sd_init_NODE, cpu_node_mask, SDTL_OVERLAP, },
> -	{ sd_init_ALLNODES, cpu_allnodes_mask, },
> -#endif
>  	{ NULL, },
>  };
> 
>  static struct sched_domain_topology_level *sched_domain_topology = default_topology;
> 
> +#ifdef CONFIG_NUMA
> +
> +static int sched_domains_numa_levels;
> +static int sched_domains_numa_scale;
> +static int *sched_domains_numa_distance;
> +static struct cpumask ** __percpu sched_domains_numa_masks;
> +static int sched_domains_curr_level;
> +
> +#define NUMA_SCALE(x, level)	\
> +
> +static inline unsigned long numa_scale(unsigned long x, int level)
> +{
> +	return x * sched_domains_numa_distance[level] / sched_domains_numa_scale;
> +}
> +
> +static inline int sd_local_flags(int level)
> +{
> +	if (sched_domains_numa_distance[level] > REMOTE_DISTANCE)
> +		return 0;
> +
> +	return SD_BALANCE_EXEC | SD_BALANCE_FORK | SD_WAKE_AFFINE;
> +}
> +
> +static struct sched_domain *
> +sd_init_NUMA(struct sched_domain_topology_level *tl, int cpu)
> +{
> +	struct sched_domain *sd = *per_cpu_ptr(tl->data.sd, cpu);
> +	int level = tl->numa_level;
> +	int sd_weight = cpumask_weight(per_cpu_ptr(
> +				sched_domains_numa_masks[level],
> +				cpu));
> +
> +	*sd = (struct sched_domain){
> +		.min_interval		= sd_weight,
> +		.max_interval		= 2*sd_weight,
> +		.busy_factor		= 32,
> +		.imbalance_pct		= 100 + numa_scale(25, level),
> +		.cache_nice_tries	= 2,
> +		.busy_idx		= 3,
> +		.idle_idx		= 2,
> +		.newidle_idx		= 0,
> +		.wake_idx		= 0,
> +		.forkexec_idx		= 0,
> +
> +		.flags			= 1*SD_LOAD_BALANCE
> +					| 1*SD_BALANCE_NEWIDLE
> +					| 0*SD_BALANCE_EXEC
> +					| 0*SD_BALANCE_FORK
> +					| 0*SD_BALANCE_WAKE
> +					| 0*SD_WAKE_AFFINE
> +					| 0*SD_PREFER_LOCAL
> +					| 0*SD_SHARE_CPUPOWER
> +					| 0*SD_POWERSAVINGS_BALANCE
> +					| 0*SD_SHARE_PKG_RESOURCES
> +					| 1*SD_SERIALIZE
> +					| 0*SD_PREFER_SIBLING
> +					| sd_local_flags(level)
> +					,
> +		.last_balance		= jiffies,
> +		.balance_interval	= sd_weight,
> +	};
> +	SD_INIT_NAME(sd, NUMA);
> +	sd->private = &tl->data;
> +
> +	/*
> +	 * Ugly hack to pass state to sd_numa_mask()...
> +	 */
> +	sched_domains_curr_level = tl->numa_level;
> +
> +	return sd;
> +}
> +
> +static const struct cpumask *sd_numa_mask(int cpu)
> +{
> +	return per_cpu_ptr(sched_domains_numa_masks[sched_domains_curr_level], cpu);
> +}
> +
> +static void sched_init_numa(void)
> +{
> +	int next_distance, curr_distance = node_distance(0, 0);
> +	struct sched_domain_topology_level *tl;
> +	int level = 0;
> +	int i, j, k;
> +
> +	sched_domains_numa_scale = curr_distance;
> +	sched_domains_numa_distance = kzalloc(sizeof(int) * nr_node_ids, GFP_KERNEL);
> +	if (!sched_domains_numa_distance)
> +		return;
> +
> +	next_distance = curr_distance;
> +	for (i = 0; i < nr_node_ids; i++) {
> +		for (j = 0; j < nr_node_ids; j++) {
> +			int distance = node_distance(0, j);

Shouldn't this be "node_distance(i, j)"?

Actually, I don't see any use of "i" in this loop anywhere, which seems
strange.

> +			if (distance > curr_distance &&
> +					(distance < next_distance ||
> +					 next_distance == curr_distance))
> +				next_distance = distance;
> +		}
> +		if (next_distance != curr_distance) {
> +			sched_domains_numa_distance[level++] = next_distance;
> +			sched_domains_numa_levels = level;

I don't understand the intent here.  One approach would be to have
one sched_domains_numa_distance[] array per node, with distances
in each array sorted by increasing distance from the array's node.

As it is, I believe you get N copies of the sorted distances from
node 0 in an array that is only guaranteed to be big enough for 1 copy.
Unless you know something about the node_distance() matrix that limits
the number of distinct values.

> +			curr_distance = next_distance;
> +		} else break;
> +	}

The above loop seems to be doing a selection sort eliminating duplicates.
Would it make sense to put a real sort implementation into lib/?

> +
> +	sched_domains_numa_masks = kzalloc(sizeof(void *) * level, GFP_KERNEL);
> +	if (!sched_domains_numa_masks)
> +		return;
> +
> +	for (i = 0; i < level; i++) {
> +		sched_domains_numa_masks[i] = alloc_percpu(cpumask_t);
> +		if (!sched_domains_numa_masks[i])
> +			return;
> +
> +		for_each_possible_cpu(j) {
> +			struct cpumask *mask =
> +				per_cpu_ptr(sched_domains_numa_masks[i], j);
> +
> +			for (k = 0; k < nr_node_ids; k++) {
> +				if (node_distance(cpu_to_node(j), k) >
> +						sched_domains_numa_distance[i])
> +					continue;
> +
> +				cpumask_or(mask, mask, cpumask_of_node(k));
> +			}
> +		}
> +	}

OK, if we can assume that the node distances are symmetric, so that
node_distance(i, j) == node_distance(j, i), I think I see what you
are getting at, though I am having a hard time seeing how to pack
it into a linear array.

The idea seems to be to compute a per-CPU list of CPU masks, with the first
entry having bits set for the CPUs closest to the CPU corresponding to
the list, and subsequent entries adding more-distant CPUs.  The last
CPU mask would presumably have bits set for all CPUs.

I take it that there is no data structure listing per-node CPU masks,
indicating which CPUs are members of a given node?  Or is something else
going on here?

> +
> +	tl = kzalloc((ARRAY_SIZE(default_topology) + level) *
> +			sizeof(struct sched_domain_topology_level), GFP_KERNEL);
> +	if (!tl)
> +		return;
> +
> +	for (i = 0; default_topology[i].init; i++)
> +		tl[i] = default_topology[i];
> +
> +	for (j = 0; j < level; i++, j++) {
> +		tl[i] = (struct sched_domain_topology_level){

tl[j]?

> +			.init = sd_init_NUMA,
> +			.mask = sd_numa_mask,
> +			.flags = SDTL_OVERLAP,
> +			.numa_level = j,
> +		};
> +	}
> +
> +	sched_domain_topology = tl;
> +}
> +#else
> +static inline void sched_init_numa(void)
> +{
> +}
> +#endif /* CONFIG_NUMA */
> +
>  static int __sdt_alloc(const struct cpumask *cpu_map)
>  {
>  	struct sched_domain_topology_level *tl;
> @@ -6828,6 +6889,8 @@ void __init sched_init_smp(void)
>  	alloc_cpumask_var(&non_isolated_cpus, GFP_KERNEL);
>  	alloc_cpumask_var(&fallback_doms, GFP_KERNEL);
> 
> +	sched_init_numa();
> +
>  	get_online_cpus();
>  	mutex_lock(&sched_domains_mutex);
>  	init_sched_domains(cpu_active_mask);
> Index: linux-2.6/arch/powerpc/include/asm/topology.h
> ===================================================================
> --- linux-2.6.orig/arch/powerpc/include/asm/topology.h
> +++ linux-2.6/arch/powerpc/include/asm/topology.h
> @@ -18,12 +18,6 @@ struct device_node;
>   */
>  #define RECLAIM_DISTANCE 10
> 
> -/*
> - * Avoid creating an extra level of balancing (SD_ALLNODES) on the largest
> - * POWER7 boxes which have a maximum of 32 nodes.
> - */
> -#define SD_NODES_PER_DOMAIN 32
> -
>  #include <asm/mmzone.h>
> 
>  static inline int cpu_to_node(int cpu)
> Index: linux-2.6/arch/x86/include/asm/topology.h
> ===================================================================
> --- linux-2.6.orig/arch/x86/include/asm/topology.h
> +++ linux-2.6/arch/x86/include/asm/topology.h
> @@ -92,44 +92,6 @@ extern void setup_node_to_cpumask_map(vo
> 
>  #define pcibus_to_node(bus) __pcibus_to_node(bus)
> 
> -#ifdef CONFIG_X86_32
> -# define SD_CACHE_NICE_TRIES	1
> -# define SD_IDLE_IDX		1
> -#else
> -# define SD_CACHE_NICE_TRIES	2
> -# define SD_IDLE_IDX		2
> -#endif
> -
> -/* sched_domains SD_NODE_INIT for NUMA machines */
> -#define SD_NODE_INIT (struct sched_domain) {				\
> -	.min_interval		= 8,					\
> -	.max_interval		= 32,					\
> -	.busy_factor		= 32,					\
> -	.imbalance_pct		= 125,					\
> -	.cache_nice_tries	= SD_CACHE_NICE_TRIES,			\
> -	.busy_idx		= 3,					\
> -	.idle_idx		= SD_IDLE_IDX,				\
> -	.newidle_idx		= 0,					\
> -	.wake_idx		= 0,					\
> -	.forkexec_idx		= 0,					\
> -									\
> -	.flags			= 1*SD_LOAD_BALANCE			\
> -				| 1*SD_BALANCE_NEWIDLE			\
> -				| 1*SD_BALANCE_EXEC			\
> -				| 1*SD_BALANCE_FORK			\
> -				| 0*SD_BALANCE_WAKE			\
> -				| 1*SD_WAKE_AFFINE			\
> -				| 0*SD_PREFER_LOCAL			\
> -				| 0*SD_SHARE_CPUPOWER			\
> -				| 0*SD_POWERSAVINGS_BALANCE		\
> -				| 0*SD_SHARE_PKG_RESOURCES		\
> -				| 1*SD_SERIALIZE			\
> -				| 0*SD_PREFER_SIBLING			\
> -				,					\
> -	.last_balance		= jiffies,				\
> -	.balance_interval	= 1,					\
> -}
> -
>  extern int __node_distance(int, int);
>  #define node_distance(a, b) __node_distance(a, b)
> 
> 
> 
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/
> 


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH RFC tip/core/rcu 6/6] rcu: Reduce cache-miss initialization latencies for large systems
  2012-04-26 19:47               ` Peter Zijlstra
@ 2012-04-26 20:29                 ` Paul E. McKenney
  2012-04-26 22:04                   ` Peter Zijlstra
  0 siblings, 1 reply; 32+ messages in thread
From: Paul E. McKenney @ 2012-04-26 20:29 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, niv, tglx, rostedt, Valdis.Kletnieks, dhowells,
	eric.dumazet, darren, fweisbec, patches

On Thu, Apr 26, 2012 at 09:47:44PM +0200, Peter Zijlstra wrote:
> On Thu, 2012-04-26 at 21:41 +0200, Peter Zijlstra wrote:
> > 
> > I can very easily give you the size (nr cpus in) a node, still as long
> > as you iterate the cpu space linearly that's not going to be much
> > help.
> > 
> Oh, I forgot, the numa masks etc are available, depending on
> CONFIG_NUMA, as cpumask_of_node(n).

Do these change with each CPU-hotplug operation?  Or is a given CPU
hotplug operation guaranteed to change only those node masks that
the CPU was (or will be, in the case of online) a member of?

							Thanx, Paul


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH RFC tip/core/rcu 6/6] rcu: Reduce cache-miss initialization latencies for large systems
  2012-04-26 20:28               ` Paul E. McKenney
@ 2012-04-26 22:01                 ` Peter Zijlstra
  2012-04-27 14:17                   ` Paul E. McKenney
  0 siblings, 1 reply; 32+ messages in thread
From: Peter Zijlstra @ 2012-04-26 22:01 UTC (permalink / raw)
  To: paulmck
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, niv, tglx, rostedt, Valdis.Kletnieks, dhowells,
	eric.dumazet, darren, fweisbec, patches

On Thu, 2012-04-26 at 13:28 -0700, Paul E. McKenney wrote:
> Interesting!  A few probably clueless questions and comments below.

Thanks, shows me where to put comments in :-)

> > +static void sched_init_numa(void)
> > +{
> > +     int next_distance, curr_distance = node_distance(0, 0);
> > +     struct sched_domain_topology_level *tl;
> > +     int level = 0;
> > +     int i, j, k;
> > +
> > +     sched_domains_numa_scale = curr_distance;
> > +     sched_domains_numa_distance = kzalloc(sizeof(int) * nr_node_ids, GFP_KERNEL);
> > +     if (!sched_domains_numa_distance)
> > +             return;
> > +
> > +     next_distance = curr_distance;
> > +     for (i = 0; i < nr_node_ids; i++) {
> > +             for (j = 0; j < nr_node_ids; j++) {
> > +                     int distance = node_distance(0, j);
> 
> Shouldn't this be "node_distance(i, j)"?
> 
> Actually, I don't see any use of "i" in this loop anywhere, which seems
> strange.

It assumes all distances in node_distance(i,j) also occur in
node_distance(0,j). This is to avoid a cubic loop.
             
> > +                     if (distance > curr_distance &&
> > +                                     (distance < next_distance ||
> > +                                      next_distance == curr_distance))
> > +                             next_distance = distance;
> > +             }
> > +             if (next_distance != curr_distance) {
> > +                     sched_domains_numa_distance[level++] = next_distance;
> > +                     sched_domains_numa_levels = level;
> 
> I don't understand the intent here.  One approach would be to have
> one sched_domains_numa_distance[] array per node, with distances
> in each array sorted by increasing distance from the array's node.
> 
> As it is, I believe you get N copies of the sorted distances from
> node 0 in an array that is only guaranteed to be big enough for 1 copy.
> Unless you know something about the node_distance() matrix that limits
> the number of distinct values.

Note this is in the 'i' loop, not in the nested i,j loop. The nested
loop finds the next smallest distance, the outer loop records it.

> > +                     curr_distance = next_distance;
> > +             } else break;
> > +     }
> 
> The above loop seems to be doing a selection sort eliminating duplicates.

Correct, and leaving out the identity distance, see below.

> Would it make sense to put a real sort implementation into lib/?

There's a heapsort in lib/sort.c. And I guess we could write the whole
lot in 3 stages, 1) dump node_distance(0,i) in an array, 2) sort array,
3) remove duplicates from array, but I was lazy and did the O(n^2)
thing.

So we have 'level' be the number of unique distances larger than the
identity distance node_distance(i,i) -- in specific (0,0). And
sched_domains_numa_distance[0..level-1] be the array recording the
specific distance numbers.
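
The three-stage version mentioned above might look roughly like this
(my own sketch, not part of the patch; it reuses sort() from lib/sort.c,
and cmp_int() and build_distance_table() are made-up names):

#include <linux/sort.h>
#include <linux/topology.h>

static int cmp_int(const void *a, const void *b)
{
	return *(const int *)a - *(const int *)b;
}

/*
 * Sketch: fill dist[] (nr_node_ids entries, caller-allocated) with the
 * distances from node 0, sort them, and squeeze out duplicates.
 * Returns the number of distinct distances, including the identity
 * distance node_distance(0, 0); subtract one to get 'level'.
 */
static int build_distance_table(int *dist)
{
	int i, n = 0;

	for (i = 0; i < nr_node_ids; i++)
		dist[i] = node_distance(0, i);

	sort(dist, nr_node_ids, sizeof(int), cmp_int, NULL);

	for (i = 1; i < nr_node_ids; i++)
		if (dist[i] != dist[n])
			dist[++n] = dist[i];

	return n + 1;
}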

> > +
> > +     sched_domains_numa_masks = kzalloc(sizeof(void *) * level, GFP_KERNEL);
> > +     if (!sched_domains_numa_masks)
> > +             return;
> > +
> > +     for (i = 0; i < level; i++) {
> > +             sched_domains_numa_masks[i] = alloc_percpu(cpumask_t);
> > +             if (!sched_domains_numa_masks[i])
> > +                     return;
> > +
> > +             for_each_possible_cpu(j) {
> > +                     struct cpumask *mask =
> > +                             per_cpu_ptr(sched_domains_numa_masks[i], j);
> > +
> > +                     for (k = 0; k < nr_node_ids; k++) {
> > +                             if (node_distance(cpu_to_node(j), k) >
> > +                                             sched_domains_numa_distance[i])
> > +                                     continue;
> > +
> > +                             cpumask_or(mask, mask, cpumask_of_node(k));
> > +                     }
> > +             }
> > +     }
> 
> OK, if we can assume that the node distances are symmetric, so that
> node_distance(i, j) == node_distance(j, i), 

Yeah, I do assume that, not quite sure this is true, but we'll see. I'll
have to ask someone at SGI/HP/etc. for node_distance() tables for
'interesting' hardware.

> I think I see what you
> are getting at, though I am having a hard time seeing how to pack
> it into a linear array.

Yeah, I'm not sure you can either. Hence me building a tree ;-) But you
too have a tree, it's tree-rcu after all.

> The idea seems to be to compute a per-CPU list of CPU masks, with the first
> entry having bits set for the CPUs closest to the CPU corresponding to
> the list, and subsequent entries adding more-distant CPUs.  The last
> CPU mask would presumably have bits set for all CPUs.

Indeed. So the scheduler already knows about nodes (included in the
default_topology thing), here we're constructing masks spanning nodes
based on distance.

So the first level is all nodes that are directly connected, the second
level are all nodes that have one intermediate hop, etc.. with indeed
the last level being the entire machine.

> I take it that there is no data structure listing per-node CPU masks,
> indicating which CPUs are members of a given node?  Or is something else
> going on here?

There is, it's cpumask_of_node(); you'll find it used in the above
code :-) We do the for_each_cpu loop because we need the mask per-node
and there's no such thing as per-node memory so we fudge it using
per-cpu memory.

This could be optimized to reduce overhead if this all turns out to work
well.

So in short: for every 'i < level', for every cpu, we build a mask of
which cpus are '<= i' hops away from our current node.
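
To make that concrete with the fake Magny-Cours table posted earlier:
the distinct non-identity distances are 16 and 22, so level is 2.  For a
CPU on node 0 (whose row is 10,16,16,22,16,22,16,22), the i=0 mask covers
the CPUs of nodes {0,1,2,4,6} (distance <= 16), and the i=1 mask covers
all eight nodes (distance <= 22), i.e. the whole machine.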

> > +
> > +     tl = kzalloc((ARRAY_SIZE(default_topology) + level) *
> > +                     sizeof(struct sched_domain_topology_level), GFP_KERNEL);
> > +     if (!tl)
> > +             return;
> > +
> > +     for (i = 0; default_topology[i].init; i++)
> > +             tl[i] = default_topology[i];
> > +
> > +     for (j = 0; j < level; i++, j++) {
> > +             tl[i] = (struct sched_domain_topology_level){
> 
> tl[j]?

No, [i]. See how we allocate an array of ARRAY_SIZE(default_topology) +
level, then copy the default topology array then continue i by j
additional levels.

> > +                     .init = sd_init_NUMA,
> > +                     .mask = sd_numa_mask,
> > +                     .flags = SDTL_OVERLAP,
> > +                     .numa_level = j,
> > +             };
> > +     }
> > +
> > +     sched_domain_topology = tl;
> > +} 


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH RFC tip/core/rcu 6/6] rcu: Reduce cache-miss initialization latencies for large systems
  2012-04-26 20:29                 ` Paul E. McKenney
@ 2012-04-26 22:04                   ` Peter Zijlstra
  0 siblings, 0 replies; 32+ messages in thread
From: Peter Zijlstra @ 2012-04-26 22:04 UTC (permalink / raw)
  To: paulmck
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, niv, tglx, rostedt, Valdis.Kletnieks, dhowells,
	eric.dumazet, darren, fweisbec, patches

On Thu, 2012-04-26 at 13:29 -0700, Paul E. McKenney wrote:
> On Thu, Apr 26, 2012 at 09:47:44PM +0200, Peter Zijlstra wrote:
> > On Thu, 2012-04-26 at 21:41 +0200, Peter Zijlstra wrote:
> > > 
> > > I can very easily give you the size (nr cpus in) a node, still as long
> > > as you iterate the cpu space linearly that's not going to be much
> > > help.
> > > 
> > Oh, I forgot, the numa masks etc are available, depending on
> > CONFIG_NUMA, as cpumask_of_node(n).
> 
> Do these change with each CPU-hotplug operation?  Or is a given CPU
> hotplug operation guaranteed to change only those node masks that
> the CPU was (or will be, in the case of online) a member of?

I'd have to check, but it's either the latter, or they don't change at all
and you have to mask out the offline cpus yourself.

NUMA topology doesn't actually change due to hotplug, so there's no
reason to update masks that do not (or should not) contain the cpu under
operation.
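
If it is the latter, the caller-side masking is small; a minimal sketch,
with node_online_cpus() being an invented name:

#include <linux/cpumask.h>
#include <linux/topology.h>

/* Sketch: restrict a node's CPU mask to the CPUs currently online. */
static void node_online_cpus(int node, struct cpumask *dst)
{
	cpumask_and(dst, cpumask_of_node(node), cpu_online_mask);
}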




^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH RFC tip/core/rcu 6/6] rcu: Reduce cache-miss initialization latencies for large systems
  2012-04-23 16:42   ` [PATCH RFC tip/core/rcu 6/6] rcu: Reduce cache-miss initialization latencies for large systems Paul E. McKenney
  2012-04-26 12:51     ` Peter Zijlstra
@ 2012-04-27  4:36     ` Mike Galbraith
  2012-04-27 15:15       ` Paul E. McKenney
  1 sibling, 1 reply; 32+ messages in thread
From: Mike Galbraith @ 2012-04-27  4:36 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, niv, tglx, peterz, rostedt, Valdis.Kletnieks, dhowells,
	eric.dumazet, darren, fweisbec, patches

On Mon, 2012-04-23 at 09:42 -0700, Paul E. McKenney wrote: 
> From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> 
> Commit #0209f649 (rcu: limit rcu_node leaf-level fanout) set an upper
> limit of 16 on the leaf-level fanout for the rcu_node tree.  This was
> needed to reduce lock contention that was induced by the synchronization
> of scheduling-clock interrupts, which was in turn needed to improve
> energy efficiency for moderate-sized lightly loaded servers.
> 
> However, reducing the leaf-level fanout means that there are more
> leaf-level rcu_node structures in the tree, which in turn means that
> RCU's grace-period initialization incurs more cache misses.  This is
> not a problem on moderate-sized servers with only a few tens of CPUs,

With a distro config (NR_CPUS=4096), interrupt latency is bad even on a
quad.  Traversing empty nodes, taking locks and incurring cache misses, hurts.

-Mike


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH RFC tip/core/rcu 6/6] rcu: Reduce cache-miss initialization latencies for large systems
  2012-04-26 22:01                 ` Peter Zijlstra
@ 2012-04-27 14:17                   ` Paul E. McKenney
  0 siblings, 0 replies; 32+ messages in thread
From: Paul E. McKenney @ 2012-04-27 14:17 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, niv, tglx, rostedt, Valdis.Kletnieks, dhowells,
	eric.dumazet, darren, fweisbec, patches

On Fri, Apr 27, 2012 at 12:01:25AM +0200, Peter Zijlstra wrote:
> On Thu, 2012-04-26 at 13:28 -0700, Paul E. McKenney wrote:

[ . . . ]

> > I think I see what you
> > are getting at, though I am having a hard time seeing how to pack
> > it into a linear array.
> 
> Yeah, I'm not sure you can either. Hence me building a tree ;-) But you
> too have a tree, its tree-rcu after all.
> 
> > The idea seems to be to compute a per-CPU list of CPU masks, with the first
> > entry having bits set for the CPUs closest to the CPU corresponding to
> > the list, and subsequent entries adding more-distant CPUs.  The last
> > CPU mask would presumably have bits set for all CPUs.
> 
> Indeed. So the scheduler already knows about nodes (included in the
> default_topology thing), here we're constructing masks spanning nodes
> based on distance.
> 
> So the first level is all nodes that are directly connected, the second
> level are all nodes that have one intermediate hop, etc.. with indeed
> the last level being the entire machine.
> 
> > I take it that there is no data structure listing per-node CPU masks,
> > indicating which CPUs are members of a given node?  Or is something else
> > going on here?
> 
> There is, it's cpumask_of_node(), you'll find it used in the above
> code :-) We do the for_each_cpu loop because we need the mask per node,
> and there's no such thing as per-node memory, so we fudge it using
> per-cpu memory.
> 
> This could be optimized to reduce overhead if this all turns out to work
> well.
> 
> So in short: for every 'i < level', for every cpu, we build a mask of
> which cpus are '<= i' hops away from our current node.
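
(A loose sketch of that construction, spelled out; the function name,
the masks[][] layout and the distances[] table are illustrative rather
than the actual scheduler code -- only the cpumask/NUMA helpers are real:)

#include <linux/cpumask.h>
#include <linux/nodemask.h>
#include <linux/topology.h>

/*
 * For each distance level i and each cpu, collect every CPU whose node
 * lies within distances[i] of cpu's node.  masks[i] points to a per-cpu
 * array of cpumasks; distances[] holds the sorted unique NUMA distances
 * that define the levels.
 */
static void build_numa_level_masks(struct cpumask **masks,
				   const int *distances, int nr_levels)
{
	int i, cpu, node;

	for (i = 0; i < nr_levels; i++) {
		for_each_possible_cpu(cpu) {
			struct cpumask *mask = &masks[i][cpu];

			cpumask_clear(mask);
			for_each_node(node)
				if (node_distance(cpu_to_node(cpu), node) <= distances[i])
					cpumask_or(mask, mask, cpumask_of_node(node));
		}
	}
}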

So this information could be used to create a cache-friendly CPU ordering,
such that CPU i and CPU i+1 tend to be electrically close to each other.
One could solve the traveling salesman problem, but doing a traversal
of the CPUs following the node tree should be much simpler and come
pretty close.

If someone were to show significant performance degradation due to
RCU's using the smp_processor_id() ordering for its rcu_node tree,
I would try this ordering.  It would cause the rcu_node tree performance
to be much less sensitive to the rcu_node tree's geometry.
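
(A hypothetical sketch of that traversal; rcu_cpu_order[] and the
function name are made up for illustration, the iterators are the real
ones:)

#include <linux/cpumask.h>
#include <linux/init.h>
#include <linux/nodemask.h>
#include <linux/topology.h>

static int rcu_cpu_order[NR_CPUS];	/* made-up CPU-numbering table */

/*
 * Number CPUs node by node, so that adjacent indices in the ordering
 * tend to be electrically close to each other.
 */
static void __init rcu_build_cpu_order(void)
{
	int node, cpu, idx = 0;

	for_each_node(node)
		for_each_cpu(cpu, cpumask_of_node(node))
			rcu_cpu_order[idx++] = cpu;
}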

> > > +
> > > +     tl = kzalloc((ARRAY_SIZE(default_topology) + level) *
> > > +                     sizeof(struct sched_domain_topology_level), GFP_KERNEL);
> > > +     if (!tl)
> > > +             return;
> > > +
> > > +     for (i = 0; default_topology[i].init; i++)
> > > +             tl[i] = default_topology[i];
> > > +
> > > +     for (j = 0; j < level; i++, j++) {
> > > +             tl[i] = (struct sched_domain_topology_level){
> > 
> > tl[j]?
> 
> No, [i]. See how we allocate an array of ARRAY_SIZE(default_topology) +
> level, then copy the default topology array then continue i by j
> additional levels.

OK, good thing I correctly characterized my comments.  ;-)

							Thanx, Paul

> > > +                     .init = sd_init_NUMA,
> > > +                     .mask = sd_numa_mask,
> > > +                     .flags = SDTL_OVERLAP,
> > > +                     .numa_level = j,
> > > +             };
> > > +     }
> > > +
> > > +     sched_domain_topology = tl;
> > > +} 
> 


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH RFC tip/core/rcu 6/6] rcu: Reduce cache-miss initialization latencies for large systems
  2012-04-27  4:36     ` Mike Galbraith
@ 2012-04-27 15:15       ` Paul E. McKenney
  2012-04-28  4:42         ` Mike Galbraith
  0 siblings, 1 reply; 32+ messages in thread
From: Paul E. McKenney @ 2012-04-27 15:15 UTC (permalink / raw)
  To: Mike Galbraith
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, niv, tglx, peterz, rostedt, Valdis.Kletnieks, dhowells,
	eric.dumazet, darren, fweisbec, patches

On Fri, Apr 27, 2012 at 06:36:11AM +0200, Mike Galbraith wrote:
> On Mon, 2012-04-23 at 09:42 -0700, Paul E. McKenney wrote: 
> > From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> > 
> > Commit #0209f649 (rcu: limit rcu_node leaf-level fanout) set an upper
> > limit of 16 on the leaf-level fanout for the rcu_node tree.  This was
> > needed to reduce lock contention that was induced by the synchronization
> > of scheduling-clock interrupts, which was in turn needed to improve
> > energy efficiency for moderate-sized lightly loaded servers.
> > 
> > However, reducing the leaf-level fanout means that there are more
> > leaf-level rcu_node structures in the tree, which in turn means that
> > RCU's grace-period initialization incurs more cache misses.  This is
> > not a problem on moderate-sized servers with only a few tens of CPUs,
> 
> With a distro config (4096 CPUs), interrupt latency is bad even on a
> quad.  Traversing empty nodes, taking locks and incurring cache misses
> along the way, hurts.

Agreed -- and I will be working on an additional patch that makes RCU
avoid initializing its data structures for CPUs that don't exist.

That said, increasing the leaf-level fanout from 16 to 64 should reduce
the latency pain by a factor of four.  In addition, I would expect that
real-time builds of the kernel would set NR_CPUS to some value much
smaller than 4096.  ;-)
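
(The back-of-the-envelope arithmetic behind "a factor of four", as a
stand-alone user-space check rather than kernel code:)

#include <stdio.h>

int main(void)
{
	int nr_cpus = 4096;

	/* Leaf rcu_node count is roughly NR_CPUS / leaf_fanout, rounded up. */
	printf("fanout 16: %d leaves\n", (nr_cpus + 15) / 16);	/* 256 */
	printf("fanout 64: %d leaves\n", (nr_cpus + 63) / 64);	/*  64 */
	return 0;	/* 256 / 64 == 4, hence the factor of four */
}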

							Thanx, Paul


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH RFC tip/core/rcu 6/6] rcu: Reduce cache-miss initialization latencies for large systems
  2012-04-27 15:15       ` Paul E. McKenney
@ 2012-04-28  4:42         ` Mike Galbraith
  2012-04-28 17:21           ` Paul E. McKenney
  0 siblings, 1 reply; 32+ messages in thread
From: Mike Galbraith @ 2012-04-28  4:42 UTC (permalink / raw)
  To: paulmck
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, niv, tglx, peterz, rostedt, Valdis.Kletnieks, dhowells,
	eric.dumazet, darren, fweisbec, patches

On Fri, 2012-04-27 at 08:15 -0700, Paul E. McKenney wrote: 
> On Fri, Apr 27, 2012 at 06:36:11AM +0200, Mike Galbraith wrote:
> > On Mon, 2012-04-23 at 09:42 -0700, Paul E. McKenney wrote: 
> > > From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> > > 
> > > Commit #0209f649 (rcu: limit rcu_node leaf-level fanout) set an upper
> > > limit of 16 on the leaf-level fanout for the rcu_node tree.  This was
> > > needed to reduce lock contention that was induced by the synchronization
> > > of scheduling-clock interrupts, which was in turn needed to improve
> > > energy efficiency for moderate-sized lightly loaded servers.
> > > 
> > > However, reducing the leaf-level fanout means that there are more
> > > leaf-level rcu_node structures in the tree, which in turn means that
> > > RCU's grace-period initialization incurs more cache misses.  This is
> > > not a problem on moderate-sized servers with only a few tens of CPUs,
> > 
> > With a distro config (4096 CPUs), interrupt latency is bad even on a
> > quad.  Traversing empty nodes, taking locks and incurring cache misses
> > along the way, hurts.
> 
> Agreed -- and I will be working on an additional patch that makes RCU
> avoid initializing its data structures for CPUs that don't exist.

That's still on my todo list too; your initial patch (and my butchery
thereof to skip taking the lock) showed this helps a heap.

> That said, increasing the leaf-level fanout from 16 to 64 should reduce
> the latency pain by a factor of four.  In addition, I would expect that
> real-time builds of the kernel would set NR_CPUS to some value much
> smaller than 4096.  ;-)

Yup, else you would have heard whimpering months ago ;-)

-Mike


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH RFC tip/core/rcu 6/6] rcu: Reduce cache-miss initialization latencies for large systems
  2012-04-28  4:42         ` Mike Galbraith
@ 2012-04-28 17:21           ` Paul E. McKenney
  2012-04-29  3:54             ` Mike Galbraith
  0 siblings, 1 reply; 32+ messages in thread
From: Paul E. McKenney @ 2012-04-28 17:21 UTC (permalink / raw)
  To: Mike Galbraith
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, niv, tglx, peterz, rostedt, Valdis.Kletnieks, dhowells,
	eric.dumazet, darren, fweisbec, patches

On Sat, Apr 28, 2012 at 06:42:22AM +0200, Mike Galbraith wrote:
> On Fri, 2012-04-27 at 08:15 -0700, Paul E. McKenney wrote: 
> > On Fri, Apr 27, 2012 at 06:36:11AM +0200, Mike Galbraith wrote:
> > > On Mon, 2012-04-23 at 09:42 -0700, Paul E. McKenney wrote: 
> > > > From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> > > > 
> > > > Commit #0209f649 (rcu: limit rcu_node leaf-level fanout) set an upper
> > > > limit of 16 on the leaf-level fanout for the rcu_node tree.  This was
> > > > needed to reduce lock contention that was induced by the synchronization
> > > > of scheduling-clock interrupts, which was in turn needed to improve
> > > > energy efficiency for moderate-sized lightly loaded servers.
> > > > 
> > > > However, reducing the leaf-level fanout means that there are more
> > > > leaf-level rcu_node structures in the tree, which in turn means that
> > > > RCU's grace-period initialization incurs more cache misses.  This is
> > > > not a problem on moderate-sized servers with only a few tens of CPUs,
> > > 
> > > With a distro config (4096 CPUs), interrupt latency is bad even on a
> > > quad.  Traversing empty nodes, taking locks and incurring cache misses
> > > along the way, hurts.
> > 
> > Agreed -- and I will be working on an additional patch that makes RCU
> > avoid initializing its data structures for CPUs that don't exist.
> 
> That's still on my todo list too; your initial patch (and my butchery
> thereof to skip taking the lock) showed this helps a heap.

Indeed, I am a bit worried about the safety of skipping the lock in
that case.  Either way, if you would rather keep working on this, I
have no problem pushing it further down my todo list.

> > That said, increasing the leaf-level fanout from 16 to 64 should reduce
> > the latency pain by a factor of four.  In addition, I would expect that
> > real-time builds of the kernel would set NR_CPUS to some value much
> > smaller than 4096.  ;-)
> 
> Yup, else you would have heard whimpering months ago ;-)

I am not sure that it would have been exactly whimpering, but yes.  ;-)

							Thanx, Paul


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH RFC tip/core/rcu 6/6] rcu: Reduce cache-miss initialization latencies for large systems
  2012-04-28 17:21           ` Paul E. McKenney
@ 2012-04-29  3:54             ` Mike Galbraith
  0 siblings, 0 replies; 32+ messages in thread
From: Mike Galbraith @ 2012-04-29  3:54 UTC (permalink / raw)
  To: paulmck
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, niv, tglx, peterz, rostedt, Valdis.Kletnieks, dhowells,
	eric.dumazet, darren, fweisbec, patches

On Sat, 2012-04-28 at 10:21 -0700, Paul E. McKenney wrote: 
> On Sat, Apr 28, 2012 at 06:42:22AM +0200, Mike Galbraith wrote:
> > On Fri, 2012-04-27 at 08:15 -0700, Paul E. McKenney wrote: 
> > > On Fri, Apr 27, 2012 at 06:36:11AM +0200, Mike Galbraith wrote:
> > > > On Mon, 2012-04-23 at 09:42 -0700, Paul E. McKenney wrote: 
> > > > > From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> > > > > 
> > > > > Commit #0209f649 (rcu: limit rcu_node leaf-level fanout) set an upper
> > > > > limit of 16 on the leaf-level fanout for the rcu_node tree.  This was
> > > > > needed to reduce lock contention that was induced by the synchronization
> > > > > of scheduling-clock interrupts, which was in turn needed to improve
> > > > > energy efficiency for moderate-sized lightly loaded servers.
> > > > > 
> > > > > However, reducing the leaf-level fanout means that there are more
> > > > > leaf-level rcu_node structures in the tree, which in turn means that
> > > > > RCU's grace-period initialization incurs more cache misses.  This is
> > > > > not a problem on moderate-sized servers with only a few tens of CPUs,
> > > > 
> > > > With a distro config (4096 CPUs), interrupt latency is bad even on a
> > > > quad.  Traversing empty nodes, taking locks and incurring cache misses
> > > > along the way, hurts.
> > > 
> > > Agreed -- and I will be working on an additional patch that makes RCU
> > > avoid initializing its data structures for CPUs that don't exist.
> > 
> > That's still on my todo list too; your initial patch (and my butchery
> > thereof to skip taking the lock) showed this helps a heap.
> 
> Indeed, I am a bit worried about the safety of skipping the lock in
> that case.  Either way, if you would rather keep working on this, I
> have no problem pushing it further down my todo list.

It's on my todo list because Paul is a busy fellow.  I'd much prefer that
RCU tweaking be done by somebody who fully understands RCU's gizzard (!me).
IOW, I'll fiddle with RCU as time permits/gripes require, but waiting to
test your implementation is the more prudent thing to do.  Heck, as yet,
I haven't even scraped together the time to test your other patches :-/

For the nonce, I've done the safe thing: reverted 0209f649 and enabled
tick skew.  Per my measurements, the regression is thus dead, so it's
not as pressing an issue, though some users do still want and need more.

-Mike


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH RFC tip/core/rcu 1/6] rcu: Stabilize use of num_online_cpus() for GP short circuit
  2012-04-24 16:50     ` Paul E. McKenney
  2012-04-24 17:46       ` Srivatsa S. Bhat
@ 2012-05-07  3:47       ` Rusty Russell
  1 sibling, 0 replies; 32+ messages in thread
From: Rusty Russell @ 2012-05-07  3:47 UTC (permalink / raw)
  To: paulmck, Srivatsa S. Bhat
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, niv, tglx, peterz, rostedt, Valdis.Kletnieks, dhowells,
	eric.dumazet, darren, fweisbec, patches, Paul E. McKenney, venki,
	KOSAKI Motohiro

On Tue, 24 Apr 2012 09:50:56 -0700, "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> wrote:
> On Tue, Apr 24, 2012 at 09:05:20PM +0530, Srivatsa S. Bhat wrote:
> 
> Thank you for looking this over!
> 
> > On 04/23/2012 10:12 PM, Paul E. McKenney wrote:
> > 
> > > From: "Paul E. McKenney" <paul.mckenney@linaro.org>
> > > 
> > > The rcu_blocking_is_gp() function tests to see if there is only one
> > > online CPU, and if so, synchronize_sched() and friends become no-ops.
> > > However, for larger systems, num_online_cpus() scans a large vector,
> > 
> > 
> > Venki had posted a patch to optimize that by using a variable, so that we
> > don't calculate the value each and every time.
> > http://thread.gmane.org/gmane.linux.kernel/1240569/focus=1246659
> > 
> > However, unfortunately there was some confusion around that patch and
> > even though it made it to akpm's tree and stayed there briefly, it didn't
> > go upstream. Venki had attempted to resolve the confusion here:
> > http://thread.gmane.org/gmane.linux.kernel/1240569/focus=1260702
> 
> Having a single variable tracking the online state would be good,
> but as you say it isn't there yet.

I think you could have a flag set by a notifier, but you'd have to
re-think the races...
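
(Something like the following, using the old cpu-notifier API; a rough
sketch only -- the flag name is made up, and the check-vs-grace-period
race you'd have to re-think is deliberately left unsolved:)

#include <linux/cpu.h>
#include <linux/cpumask.h>
#include <linux/notifier.h>

static bool rcu_more_than_one_cpu;	/* made-up flag */

static int rcu_online_notify(struct notifier_block *nb,
			     unsigned long action, void *hcpu)
{
	switch (action & ~CPU_TASKS_FROZEN) {
	case CPU_ONLINE:
	case CPU_DEAD:
		/* Recompute the flag only on hotplug, not on every read. */
		rcu_more_than_one_cpu = num_online_cpus() > 1;
		break;
	}
	return NOTIFY_OK;
}

static struct notifier_block rcu_online_nb = {
	.notifier_call = rcu_online_notify,
};

/* register_cpu_notifier(&rcu_online_nb) would be called at init time. */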

Cheers,
Rusty.

^ permalink raw reply	[flat|nested] 32+ messages in thread

end of thread, other threads:[~2012-05-07  5:24 UTC | newest]

Thread overview: 32+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2012-04-23 16:41 [PATCH RFC 0/6] Miscellaneous RCU fixes for 3.5 Paul E. McKenney
2012-04-23 16:42 ` [PATCH RFC tip/core/rcu 1/6] rcu: Stabilize use of num_online_cpus() for GP short circuit Paul E. McKenney
2012-04-23 16:42   ` [PATCH RFC tip/core/rcu 2/6] rcu: List-debug variants of rcu list routines Paul E. McKenney
2012-04-23 16:42   ` [PATCH RFC tip/core/rcu 3/6] rcu: Replace list_first_entry_rcu() with list_first_or_null_rcu() Paul E. McKenney
2012-04-23 16:42   ` [PATCH RFC tip/core/rcu 4/6] rcu: Clarify help text for RCU_BOOST_PRIO Paul E. McKenney
2012-04-26 12:46     ` Peter Zijlstra
2012-04-26 17:28       ` Paul E. McKenney
2012-04-23 16:42   ` [PATCH RFC tip/core/rcu 5/6] rcu: Make __kfree_rcu() less dependent on compiler choices Paul E. McKenney
2012-04-26 12:48     ` Peter Zijlstra
2012-04-26 13:29       ` Jan Engelhardt
2012-04-26 13:50         ` Peter Zijlstra
2012-04-23 16:42   ` [PATCH RFC tip/core/rcu 6/6] rcu: Reduce cache-miss initialization latencies for large systems Paul E. McKenney
2012-04-26 12:51     ` Peter Zijlstra
2012-04-26 14:12       ` Paul E. McKenney
2012-04-26 15:28         ` Peter Zijlstra
2012-04-26 16:15           ` Paul E. McKenney
2012-04-26 19:41             ` Peter Zijlstra
2012-04-26 19:47               ` Peter Zijlstra
2012-04-26 20:29                 ` Paul E. McKenney
2012-04-26 22:04                   ` Peter Zijlstra
2012-04-26 20:28               ` Paul E. McKenney
2012-04-26 22:01                 ` Peter Zijlstra
2012-04-27 14:17                   ` Paul E. McKenney
2012-04-27  4:36     ` Mike Galbraith
2012-04-27 15:15       ` Paul E. McKenney
2012-04-28  4:42         ` Mike Galbraith
2012-04-28 17:21           ` Paul E. McKenney
2012-04-29  3:54             ` Mike Galbraith
2012-04-24 15:35   ` [PATCH RFC tip/core/rcu 1/6] rcu: Stabilize use of num_online_cpus() for GP short circuit Srivatsa S. Bhat
2012-04-24 16:50     ` Paul E. McKenney
2012-04-24 17:46       ` Srivatsa S. Bhat
2012-05-07  3:47       ` Rusty Russell
