linux-kernel.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
* [PATCH documentation 0/2] OS-jitter documentation
@ 2013-04-11 16:05 Paul E. McKenney
  2013-04-11 16:05 ` [PATCH documentation 1/2] nohz1: Add documentation Paul E. McKenney
  0 siblings, 1 reply; 35+ messages in thread
From: Paul E. McKenney @ 2013-04-11 16:05 UTC (permalink / raw)
  To: linux-kernel; +Cc: fweisbec, rostedt, bp, arjan, khilman, cl, pradeep

Hello!

This is v2 of the OS-jitter-reduction documentation.  Changes from
v1 (https://lkml.org/lkml/2013/3/18/462):

o	Updated the nohz1 patch based on feedback from Frederic Weisbecker,
	Steven Rostedt, Borislav Petkov, Arjan van de Ven, Kevin Hilman,
	and Christoph Lameter.

o	Added a second file describing how to reduce OS jitter from
	per-CPU kthreads.  This is quite rough, but is hopefully a
	good starting point.

							Thanx, Paul

------------------------------------------------------------------------

 b/Documentation/kernel-per-CPU-kthreads.txt |  159 ++++++++++++++++++
 b/Documentation/timers/NO_HZ.txt            |  245 ++++++++++++++++++++++++++++
 2 files changed, 404 insertions(+)


^ permalink raw reply	[flat|nested] 35+ messages in thread

* [PATCH documentation 1/2] nohz1: Add documentation.
  2013-04-11 16:05 [PATCH documentation 0/2] OS-jitter documentation Paul E. McKenney
@ 2013-04-11 16:05 ` Paul E. McKenney
  2013-04-11 16:05   ` [PATCH documentation 2/2] kthread: Document ways of reducing OS jitter due to per-CPU kthreads Paul E. McKenney
                     ` (5 more replies)
  0 siblings, 6 replies; 35+ messages in thread
From: Paul E. McKenney @ 2013-04-11 16:05 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, niv, tglx,
	peterz, rostedt, Valdis.Kletnieks, dhowells, edumazet, darren,
	fweisbec, sbw, Paul E. McKenney, Borislav Petkov,
	Arjan van de Ven, Kevin Hilman, Christoph Lameter

From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Kevin Hilman <khilman@linaro.org>
Cc: Christoph Lameter <cl@linux.com>
---
 Documentation/timers/NO_HZ.txt | 245 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 245 insertions(+)
 create mode 100644 Documentation/timers/NO_HZ.txt

diff --git a/Documentation/timers/NO_HZ.txt b/Documentation/timers/NO_HZ.txt
new file mode 100644
index 0000000..6b33f6b
--- /dev/null
+++ b/Documentation/timers/NO_HZ.txt
@@ -0,0 +1,245 @@
+		NO_HZ: Reducing Scheduling-Clock Ticks
+
+
+This document describes Kconfig options and boot parameters that can
+reduce the number of scheduling-clock interrupts, thereby improving energy
+efficiency and reducing OS jitter.  Reducing OS jitter is important for
+some types of computationally intensive high-performance computing (HPC)
+applications and for real-time applications.
+
+There are two major aspects of scheduling-clock interrupt reduction:
+
+1.	Idle CPUs.
+
+2.	CPUs having only one runnable task.
+
+These two cases are described in the following sections.
+
+
+IDLE CPUs
+
+If a CPU is idle, there is little point in sending it a scheduling-clock
+interrupt.  After all, the primary purpose of a scheduling-clock interrupt
+is to force a busy CPU to shift its attention among multiple duties,
+but an idle CPU by definition has no duties to shift its attention among.
+
+The CONFIG_NO_HZ=y Kconfig option causes the kernel to avoid sending
+scheduling-clock interrupts to idle CPUs, which is critically important
+both to battery-powered devices and to highly virtualized mainframes.
+A battery-powered device running a CONFIG_NO_HZ=n kernel would drain
+its battery very quickly, easily 2-3x as fast as would the same device
+running a CONFIG_NO_HZ=y kernel.  A mainframe running 1,500 OS instances
+might find that half of its CPU time was consumed by scheduling-clock
+interrupts.  In these situations, there is strong motivation to avoid
+sending scheduling-clock interrupts to idle CPUs.  That said, dyntick-idle
+mode is not free:
+
+1.	It increases the number of instructions executed on the path
+	to and from the idle loop.
+
+2.	Many architectures will place dyntick-idle CPUs into deep sleep
+	states, which further degrades from-idle transition latencies.
+
+Therefore, systems with aggressive real-time response constraints
+often run CONFIG_NO_HZ=n kernels in order to avoid degrading from-idle
+transition latencies.
+
+An idle CPU that is not receiving scheduling-clock interrupts is said to
+be "dyntick-idle", "in dyntick-idle mode", "in nohz mode", or "running
+tickless".  The remainder of this document will use "dyntick-idle mode".
+
+There is also a boot parameter "nohz=" that can be used to disable
+dyntick-idle mode in CONFIG_NO_HZ=y kernels by specifying "nohz=off".
+By default, CONFIG_NO_HZ=y kernels boot with "nohz=on", enabling
+dyntick-idle mode.
+
+
+CPUs WITH ONLY ONE RUNNABLE TASK
+
+If a CPU has only one runnable task, there is again little point in
+sending it a scheduling-clock interrupt because there is nowhere else
+for a CPU with but one runnable task to shift its attention to.
+
+The CONFIG_NO_HZ_EXTENDED=y Kconfig option causes the kernel to avoid
+sending scheduling-clock interrupts to CPUs with a single runnable task,
+and such CPUs are said to be "adaptive-ticks CPUs".  This is important
+for applications with aggressive real-time response constraints because
+it allows them to improve their worst-case response times by the maximum
+duration of a scheduling-clock interrupt.  It is also important for
+computationally intensive iterative workloads with short iterations:  If
+any CPU is delayed during a given iteration, all the other CPUs will be
+forced to wait idle while the delayed CPU finished.  Thus, the delay is
+multiplied by one less than the number of CPUs.  In these situations,
+there is again strong motivation to avoid sending scheduling-clock
+interrupts.
+
+The "nohz_extended=" boot parameter specifies which CPUs are to be
+adaptive-ticks CPUs.  For example, "nohz_extended=1,6-8" says that CPUs
+1, 6, 7, and 8 are to be adaptive-ticks CPUs.  By default, no CPUs will
+be adaptive-ticks CPUs.  Note that you are prohibited from marking all
+of the CPUs as adaptive-tick CPUs:  At least one non-adaptive-tick CPU
+must remain online to handle timekeeping tasks in order to ensure that
+gettimeofday() returns sane values on adaptive-tick CPUs.
+
+Transitioning to kernel mode does not automatically force that CPU out
+of adaptive-ticks mode.  The CPU will exit adaptive-ticks mode only if
+needed, for example, if that CPU enqueues an RCU callback.
+
+Just as with dyntick-idle mode, the benefits of adaptive-tick mode do
+not come for free:
+
+1.	CONFIG_NO_HZ_EXTENDED depends on CONFIG_NO_HZ, so you cannot run
+	adaptive ticks without also running dyntick idle.  This dependency
+	of CONFIG_NO_HZ_EXTENDED on CONFIG_NO_HZ extends down into the
+	implementation.  Therefore, all of the costs of CONFIG_NO_HZ
+	are also incurred by CONFIG_NO_HZ_EXTENDED.
+
+2.	The user/kernel transitions are slightly more expensive due
+	to the need to inform kernel subsystems (such as RCU) about
+	the change in mode.
+
+3.	POSIX CPU timers on adaptive-tick CPUs may fire late (or even
+	not at all) because they currently rely on scheduling-tick
+	interrupts.  This will likely be fixed in one of two ways: (1)
+	Prevent CPUs with POSIX CPU timers from entering adaptive-tick
+	mode, or (2) Use hrtimers or other adaptive-ticks-immune mechanism
+	to cause the POSIX CPU timer to fire properly.
+
+4.	If there are more perf events pending than the hardware can
+	accommodate, they are normally round-robined so as to collect
+	all of them over time.  Adaptive-tick mode may prevent this
+	round-robining from happening.  This will likely be fixed by
+	preventing CPUs with large numbers of perf events pending from
+	entering adaptive-tick mode.
+
+5.	Scheduler statistics for adaptive-idle CPUs may be computed
+	slightly differently than those for non-adaptive-idle CPUs.
+	This may in turn perturb load-balancing of real-time tasks.
+
+6.	The LB_BIAS scheduler feature is disabled by adaptive ticks.
+
+Although improvements are expected over time, adaptive ticks is quite
+useful for many types of real-time and compute-intensive applications.
+However, the drawbacks listed above mean that adaptive ticks should not
+(yet) be enabled by default.
+
+
+RCU IMPLICATIONS
+
+There are situations in which idle CPUs cannot be permitted to
+enter either dyntick-idle mode or adaptive-tick mode, the most
+familiar being the case where that CPU has RCU callbacks pending.
+
+The CONFIG_RCU_FAST_NO_HZ=y Kconfig option may be used to cause such
+CPUs to enter dyntick-idle mode or adaptive-tick mode anyway, though a
+timer will awaken these CPUs every four jiffies in order to ensure that
+the RCU callbacks are processed in a timely fashion.
+
+Another approach is to offload RCU callback processing to "rcuo" kthreads
+using the CONFIG_RCU_NOCB_CPU=y.  The specific CPUs to offload may be
+selected via several methods:
+
+1.	One of three mutually exclusive Kconfig options specify a
+	build-time default for the CPUs to offload:
+
+	a.	The RCU_NOCB_CPU_NONE=y Kconfig option results in
+		no CPUs being offloaded.
+
+	b.	The RCU_NOCB_CPU_ZERO=y Kconfig option causes CPU 0 to
+		be offloaded.
+
+	c.	The RCU_NOCB_CPU_ALL=y Kconfig option causes all CPUs
+		to be offloaded.  Note that the callbacks will be
+		offloaded to "rcuo" kthreads, and that those kthreads
+		will in fact run on some CPU.  However, this approach
+		gives fine-grained control on exactly which CPUs the
+		callbacks run on, the priority that they run at (including
+		the default of SCHED_OTHER), and it further allows
+		this control to be varied dynamically at runtime.
+
+2.	The "rcu_nocbs=" kernel boot parameter, which takes a comma-separated
+	list of CPUs and CPU ranges, for example, "1,3-5" selects CPUs 1,
+	3, 4, and 5.  The specified CPUs will be offloaded in addition
+	to any CPUs specified as offloaded by RCU_NOCB_CPU_ZERO or
+	RCU_NOCB_CPU_ALL.
+
+The offloaded CPUs never have RCU callbacks queued, and therefore RCU
+never prevents offloaded CPUs from entering either dyntick-idle mode or
+adaptive-tick mode.  That said, note that it is up to userspace to
+pin the "rcuo" kthreads to specific CPUs if desired.  Otherwise, the
+scheduler will decide where to run them, which might or might not be
+where you want them to run.
+
+
+KNOWN ISSUES
+
+o	Dyntick-idle slows transitions to and from idle slightly.
+	In practice, this has not been a problem except for the most
+	aggressive real-time workloads, which have the option of disabling
+	dyntick-idle mode, an option that most of them take.  However,
+	some workloads will no doubt want to use adaptive ticks to
+	eliminate scheduling-clock-tick latencies.  Here are some
+	options for these workloads:
+
+	a.	Use PMQOS from userspace to inform the kernel of your
+		latency requirements (preferred).
+
+	b.	On x86 systems, use the "idle=mwait" boot parameter.
+
+	c.	On x86 systems, use the "intel_idle.max_cstate=" to limit
+	`	the maximum depth C-state depth.
+
+	d.	On x86 systems, use the "idle=poll" boot parameter.
+		However, please note that use of this parameter can cause
+		your CPU to overheat, which may cause thermal throttling
+		to degrade your latencies -- and that this degradation can
+		be even worse than that of dyntick-idle.  Furthermore,
+		this parameter effectively disables Turbo Mode on Intel
+		CPUs, which can significantly reduce maximum performance.
+
+o	Adaptive-ticks slows user/kernel transitions slightly.
+	This is not expected to be a problem for computational-intensive
+	workloads, which have few such transitions.  Careful benchmarking
+	will be required to determine whether or not other workloads
+	are significantly affected by this effect.
+
+o	Adaptive-ticks does not do anything unless there is only one
+	runnable task for a given CPU, even though there are a number
+	of other situations where the scheduling-clock tick is not
+	needed.  To give but one example, consider a CPU that has one
+	runnable high-priority SCHED_FIFO task and an arbitrary number
+	of low-priority SCHED_OTHER tasks.  In this case, the CPU is
+	required to run the SCHED_FIFO task until either it blocks or
+	some other higher-priority task awakens on (or is assigned to)
+	this CPU, so there is no point in sending a scheduling-clock
+	interrupt to this CPU.	However, the current implementation
+	prohibits CPU with a single runnable SCHED_FIFO task and multiple
+	runnable SCHED_OTHER tasks from entering adaptive-ticks mode,
+	even though it would be correct to allow it to do so.
+
+	Better handling of these sorts of situations is future work.
+
+o	A reboot is required to reconfigure both adaptive idle and RCU
+	callback offloading.  Runtime reconfiguration could be provided
+	if needed, however, due to the complexity of reconfiguring RCU
+	at runtime, there would need to be an earthshakingly good reason.
+	Especially given the option of simply offloading RCU callbacks
+	from all CPUs.
+
+o	Additional configuration is required to deal with other sources
+	of OS jitter, including interrupts and system-utility tasks
+	and processes.  This configuration normally involves binding
+	interrupts and tasks to particular CPUs.
+
+o	Some sources of OS jitter can currently be eliminated only by
+	constraining the workload.  For example, the only way to eliminate
+	OS jitter due to global TLB shootdowns is to avoid the unmapping
+	operations (such as kernel module unload operations) that result
+	in these shootdowns.  For another example, page faults and TLB
+	misses can be reduced (and in some cases eliminated) by using
+	huge pages and by constraining the amount of memory used by the
+	application.
+
+o	Unless all CPUs are idle, at least one CPU must keep the
+	scheduling-clock interrupt going in order to support accurate
+	timekeeping.
-- 
1.8.1.5


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH documentation 2/2] kthread: Document ways of reducing OS jitter due to per-CPU kthreads
  2013-04-11 16:05 ` [PATCH documentation 1/2] nohz1: Add documentation Paul E. McKenney
@ 2013-04-11 16:05   ` Paul E. McKenney
  2013-04-11 17:18     ` Randy Dunlap
  2013-04-11 16:48   ` [PATCH documentation 1/2] nohz1: Add documentation Randy Dunlap
                     ` (4 subsequent siblings)
  5 siblings, 1 reply; 35+ messages in thread
From: Paul E. McKenney @ 2013-04-11 16:05 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, laijs, dipankar, akpm, mathieu.desnoyers, josh, niv, tglx,
	peterz, rostedt, Valdis.Kletnieks, dhowells, edumazet, darren,
	fweisbec, sbw, Paul E. McKenney, Borislav Petkov,
	Arjan van de Ven, Kevin Hilman, Christoph Lameter

From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

The Linux kernel uses a number of per-CPU kthreads, any of which might
contribute to OS jitter at any time.  The usual approach to normal
kthreads, namely to affinity them to a "housekeeping" CPU, does not
work with these kthreads because they cannot operate correctly if moved
to some other CPU.  This commit therefore lists ways of controlling OS
jitter from the Linux kernel's per-CPU kthreads.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Kevin Hilman <khilman@linaro.org>
Cc: Christoph Lameter <cl@linux.com>
---
 Documentation/kernel-per-CPU-kthreads.txt | 159 ++++++++++++++++++++++++++++++
 1 file changed, 159 insertions(+)
 create mode 100644 Documentation/kernel-per-CPU-kthreads.txt

diff --git a/Documentation/kernel-per-CPU-kthreads.txt b/Documentation/kernel-per-CPU-kthreads.txt
new file mode 100644
index 0000000..495dacf
--- /dev/null
+++ b/Documentation/kernel-per-CPU-kthreads.txt
@@ -0,0 +1,159 @@
+REDUCING OS JITTER DUE TO PER-CPU KTHREADS
+
+This document lists per-CPU kthreads in the Linux kernel and presents
+options to control OS jitter due to these kthreads.  Note that kthreads
+that are not per-CPU are not listed here -- to reduce OS jitter from
+non-per-CPU kthreads, bind them to a "housekeeping" CPU that is dedicated
+to such work.
+
+
+Name: ehca_comp/%u
+Purpose: Periodically process Infiniband-related work.
+To reduce corresponding OS jitter, do any of the following:
+1.	Don't use EHCA Infiniband hardware.  This will prevent these
+	kthreads from being created in the first place.  (This will
+	work for most people, as this hardware, though important,
+	is relatively old as is produced in relatively low unit
+	volumes.)
+2.	Do all EHCA-Infiniband-related work on other CPUs, including
+	interrupts.
+
+
+Name: irq/%d-%s
+Purpose: Handle threaded interrupts.
+To reduce corresponding OS jitter, do the following:
+1.	Use irq affinity to force the irq threads to execute on
+	some other CPU.
+
+Name: kcmtpd_ctr_%d
+Purpose: Handle Bluetooth work.
+To reduce corresponding OS jitter, do one of the following:
+1.	Don't use Bluetooth, in cwhich case these kthreads won't be
+	created in the first place.
+2.	Use irq affinity to force Bluetooth-related interrupts to
+	occur on some other CPU and furthermore initiate all
+	Bluetooth activity from some other CPU.
+
+Name: ksoftirqd/%u
+Purpose: Execute softirq handlers when threaded or when under heavy load.
+To reduce corresponding OS jitter, each softirq vector must be handled
+separately as follows:
+TIMER_SOFTIRQ:
+1.	Build with CONFIG_HOTPLUG_CPU=y.
+2.	To the extent possible, keep the CPU out of the kernel when it
+	is non-idle, for example, by forcing user and kernel threads as
+	well as interrupts to execute elsewhere.
+3.	Force the CPU offline, then bring it back online.  This forces
+	recurring timers to migrate elsewhere.  If you are concerned
+	with multiple CPUs, force them all offline before bringing the
+	first one back online.
+NET_TX_SOFTIRQ and NET_RX_SOFTIRQ:  Do all of the following:
+1.	Force networking interrupts onto other CPUs.
+2.	Initiate any network I/O on other CPUs.
+3.	Prevent CPU-hotplug operations from being initiated from tasks
+	that might run on the CPU to be de-jittered.
+BLOCK_SOFTIRQ:  Do all of the following:
+1.	Force block-device interrupts onto some other CPU.
+2.	Initiate any block I/O on other CPUs.
+3.	Prevent CPU-hotplug operations from being initiated from tasks
+	that might run on the CPU to be de-jittered.
+BLOCK_IOPOLL_SOFTIRQ:  Do all of the following:
+1.	Force block-device interrupts onto some other CPU.
+2.	Initiate any block I/O and block-I/O polling on other CPUs.
+3.	Prevent CPU-hotplug operations from being initiated from tasks
+	that might run on the CPU to be de-jittered.
+TASKLET_SOFTIRQ: Do one or more of the following:
+1.	Avoid use of drivers that use tasklets.
+2.	Convert all drivers that you must use from tasklets to workqueues.
+3.	Force interrupts for drivers using tasklets onto other CPUs,
+	and also do I/O involving these drivers on other CPUs.
+SCHED_SOFTIRQ: Do all of the following:
+1.	Avoid sending scheduler IPIs to the CPU to be de-jittered,
+	for example, ensure that at most one runnable kthread is
+	present on that CPU.  If a thread awakens that expects
+	to run on the de-jittered CPU, the scheduler will send
+	an IPI that can result in a subsequent SCHED_SOFTIRQ.
+2.	Build with CONFIG_RCU_NOCB_CPU=y, CONFIG_RCU_NOCB_CPU_ALL=y,
+	CONFIG_NO_HZ_EXTENDED=y, and in addition ensure that the CPU
+	to be de-jittered is marked as an adaptive-ticks CPU using the
+	"nohz_extended=" boot parameter.  This reduces the number of
+	scheduler-clock interrupts that the de-jittered CPU receives,
+	minimizing its chances of being selected to do load balancing,
+	which happens in SCHED_SOFTIRQ context.
+3.	To the extent possible, keep the CPU out of the kernel when it
+	is non-idle, for example, by forcing user and kernel threads as
+	well as interrupts to execute elsewhere.  This further reduces
+	the number of scheduler-clock interrupts that the de-jittered
+	CPU receives.
+HRTIMER_SOFTIRQ:  Do all of the following:
+1.	Build with CONFIG_HOTPLUG_CPU=y.
+2.	To the extent possible, keep the CPU out of the kernel when it
+	is non-idle, for example, by forcing user and kernel threads as
+	well as interrupts to execute elsewhere.
+3.	Force the CPU offline, then bring it back online.  This forces
+	recurring timers to migrate elsewhere.  If you are concerned
+	with multiple CPUs, force them all offline before bringing the
+	first one back online.
+RCU_SOFTIRQ:  Do at least one of the following:
+1.	Offload callbacks and keep the CPU in either dyntick-idle or
+	adaptive-ticks state by doing all of the following:
+	a.	Build with CONFIG_RCU_NOCB_CPU=y, CONFIG_RCU_NOCB_CPU_ALL=y,
+		CONFIG_NO_HZ_EXTENDED=y, and in addition ensure that
+		the CPU to be de-jittered is marked as an adaptive-ticks CPU
+		using the "nohz_extended=" boot parameter.
+	b.	To the extent possible, keep the CPU out of the kernel
+		when it is non-idle, for example, by forcing user and
+		kernel threads as well as interrupts to execute elsewhere.
+2.	Enable RCU to do its processing remotely via dyntick-idle by
+	doing all of the following:
+	a.	Build with CONFIG_NO_HZ=y and CONFIG_RCU_FAST_NO_HZ=y.
+	b.	To the extent possible, keep the CPU out of the kernel
+		when it is non-idle, for example, by forcing user and
+		kernel threads as well as interrupts to execute elsewhere.
+	c.	Ensure that the CPU goes idle frequently, allowing other
+		CPUs to detect that it has passed through an RCU
+		quiescent state.
+
+Name: rcuc/%u
+Purpose: Execute RCU callbacks in CONFIG_RCU_BOOST=y kernels.
+To reduce corresponding OS jitter, do at least one of the following:
+1.	Build the kernel with CONFIG_PREEMPT=n.  This prevents these
+	kthreads from being created in the first place, and also prevents
+	RCU priority boosting from ever being required.  This approach
+	is feasible for workloads that do not require high degrees of
+	responsiveness.
+2.	Build the kernel with CONFIG_RCU_BOOST=n.  This prevents these
+	kthreads from being created in the first place.  This approach
+	is feasible only if your workload never requires RCU priority
+	boosting, for example, if you ensure ample idle time on all CPUs
+	that might execute within the kernel.
+3.	Build with CONFIG_RCU_NOCB_CPU=y and CONFIG_RCU_NOCB_CPU_ALL=y,
+	which offloads all RCU callbacks to kthreads that can be moved
+	off of CPUs susceptible to OS jitter.  This approach prevents the
+	rcuc/%u kthreads from having any work to do, and are therefore
+	never awakened.
+4.	Ensure that then CPU never enters the kernel and avoid any
+	CPU hotplug operations.  This is another way of preventing any
+	callbacks from being queued on the CPU, again preventing the
+	rcuc/%u kthreads from having any work to do.
+
+Name: rcuob/%d, rcuop/%d, and rcuos/%d
+Purpose: Offload RCU callbacks from the corresponding CPU.
+To reduce corresponding OS jitter, do at least one of the following:
+1.	Use affinity, cgroups, or other mechanism to force these kthreads
+	to execute on some other CPU.
+2.	Build with CONFIG_RCU_NOCB_CPUS=n, which will prevent these
+	kthreads from being created in the first place.  However,
+	please note that this will not eliminate the corresponding
+	OS jitter, but will instead merely shift it to softirq.
+
+Name: watchdog/%u
+Purpose: Detect software lockups on each CPU.
+To reduce corresponding OS jitter, do at least one of the following:
+1.	Build with CONFIG_LOCKUP_DETECTOR=n, which will prevent these
+	kthreads from being created in the first place.
+2.	Echo a zero to /proc/sys/kernel/watchdog to disable the
+	watchdog timer.
+3.	Echo a large number of /proc/sys/kernel/watchdog_thresh in
+	order to reduce the frequency of OS jitter due to the watchdog
+	timer down to a level that is acceptable for your workload.
-- 
1.8.1.5


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* Re: [PATCH documentation 1/2] nohz1: Add documentation.
  2013-04-11 16:05 ` [PATCH documentation 1/2] nohz1: Add documentation Paul E. McKenney
  2013-04-11 16:05   ` [PATCH documentation 2/2] kthread: Document ways of reducing OS jitter due to per-CPU kthreads Paul E. McKenney
@ 2013-04-11 16:48   ` Randy Dunlap
  2013-04-11 17:09     ` Paul E. McKenney
  2013-04-11 17:14   ` Arjan van de Ven
                     ` (3 subsequent siblings)
  5 siblings, 1 reply; 35+ messages in thread
From: Randy Dunlap @ 2013-04-11 16:48 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, niv, tglx, peterz, rostedt, Valdis.Kletnieks, dhowells,
	edumazet, darren, fweisbec, sbw, Borislav Petkov,
	Arjan van de Ven, Kevin Hilman, Christoph Lameter

On 04/11/2013 09:05 AM, Paul E. McKenney wrote:
> From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
>
> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> Cc: Frederic Weisbecker <fweisbec@gmail.com>
> Cc: Steven Rostedt <rostedt@goodmis.org>
> Cc: Borislav Petkov <bp@alien8.de>
> Cc: Arjan van de Ven <arjan@linux.intel.com>
> Cc: Kevin Hilman <khilman@linaro.org>
> Cc: Christoph Lameter <cl@linux.com>
> ---
>   Documentation/timers/NO_HZ.txt | 245 +++++++++++++++++++++++++++++++++++++++++
>   1 file changed, 245 insertions(+)
>   create mode 100644 Documentation/timers/NO_HZ.txt
>
> diff --git a/Documentation/timers/NO_HZ.txt b/Documentation/timers/NO_HZ.txt
> new file mode 100644
> index 0000000..6b33f6b
> --- /dev/null
> +++ b/Documentation/timers/NO_HZ.txt
> @@ -0,0 +1,245 @@
> +		NO_HZ: Reducing Scheduling-Clock Ticks
> +
> +
> +This document describes Kconfig options and boot parameters that can
> +reduce the number of scheduling-clock interrupts, thereby improving energy
> +efficiency and reducing OS jitter.  Reducing OS jitter is important for
> +some types of computationally intensive high-performance computing (HPC)
> +applications and for real-time applications.
> +
> +There are two major aspects of scheduling-clock interrupt reduction:
> +
> +1.	Idle CPUs.
> +
> +2.	CPUs having only one runnable task.
> +
> +These two cases are described in the following sections.
> +
> +
> +IDLE CPUs
> +
> +If a CPU is idle, there is little point in sending it a scheduling-clock
> +interrupt.  After all, the primary purpose of a scheduling-clock interrupt
> +is to force a busy CPU to shift its attention among multiple duties,
> +but an idle CPU by definition has no duties to shift its attention among.
> +
> +The CONFIG_NO_HZ=y Kconfig option causes the kernel to avoid sending
> +scheduling-clock interrupts to idle CPUs, which is critically important
> +both to battery-powered devices and to highly virtualized mainframes.
> +A battery-powered device running a CONFIG_NO_HZ=n kernel would drain
> +its battery very quickly, easily 2-3x as fast as would the same device
> +running a CONFIG_NO_HZ=y kernel.  A mainframe running 1,500 OS instances
> +might find that half of its CPU time was consumed by scheduling-clock
> +interrupts.  In these situations, there is strong motivation to avoid
> +sending scheduling-clock interrupts to idle CPUs.  That said, dyntick-idle
> +mode is not free:
> +
> +1.	It increases the number of instructions executed on the path
> +	to and from the idle loop.
> +
> +2.	Many architectures will place dyntick-idle CPUs into deep sleep
> +	states, which further degrades from-idle transition latencies.
> +
> +Therefore, systems with aggressive real-time response constraints
> +often run CONFIG_NO_HZ=n kernels in order to avoid degrading from-idle
> +transition latencies.
> +
> +An idle CPU that is not receiving scheduling-clock interrupts is said to
> +be "dyntick-idle", "in dyntick-idle mode", "in nohz mode", or "running
> +tickless".  The remainder of this document will use "dyntick-idle mode".
> +
> +There is also a boot parameter "nohz=" that can be used to disable
> +dyntick-idle mode in CONFIG_NO_HZ=y kernels by specifying "nohz=off".
> +By default, CONFIG_NO_HZ=y kernels boot with "nohz=on", enabling
> +dyntick-idle mode.
> +
> +
> +CPUs WITH ONLY ONE RUNNABLE TASK
> +
> +If a CPU has only one runnable task, there is again little point in
> +sending it a scheduling-clock interrupt because there is nowhere else
> +for a CPU with but one runnable task to shift its attention to.
> +
> +The CONFIG_NO_HZ_EXTENDED=y Kconfig option causes the kernel to avoid
> +sending scheduling-clock interrupts to CPUs with a single runnable task,
> +and such CPUs are said to be "adaptive-ticks CPUs".  This is important
> +for applications with aggressive real-time response constraints because
> +it allows them to improve their worst-case response times by the maximum
> +duration of a scheduling-clock interrupt.  It is also important for
> +computationally intensive iterative workloads with short iterations:  If
> +any CPU is delayed during a given iteration, all the other CPUs will be
> +forced to wait idle while the delayed CPU finished.  Thus, the delay is

I would say:                                 finishes.


> +multiplied by one less than the number of CPUs.  In these situations,
> +there is again strong motivation to avoid sending scheduling-clock
> +interrupts.
> +
> +The "nohz_extended=" boot parameter specifies which CPUs are to be
> +adaptive-ticks CPUs.  For example, "nohz_extended=1,6-8" says that CPUs
> +1, 6, 7, and 8 are to be adaptive-ticks CPUs.  By default, no CPUs will
> +be adaptive-ticks CPUs.  Note that you are prohibited from marking all
> +of the CPUs as adaptive-tick CPUs:  At least one non-adaptive-tick CPU
> +must remain online to handle timekeeping tasks in order to ensure that
> +gettimeofday() returns sane values on adaptive-tick CPUs.
> +
> +Transitioning to kernel mode does not automatically force that CPU out
> +of adaptive-ticks mode.  The CPU will exit adaptive-ticks mode only if
> +needed, for example, if that CPU enqueues an RCU callback.
> +
> +Just as with dyntick-idle mode, the benefits of adaptive-tick mode do
> +not come for free:
> +
> +1.	CONFIG_NO_HZ_EXTENDED depends on CONFIG_NO_HZ, so you cannot run
> +	adaptive ticks without also running dyntick idle.  This dependency
> +	of CONFIG_NO_HZ_EXTENDED on CONFIG_NO_HZ extends down into the
> +	implementation.  Therefore, all of the costs of CONFIG_NO_HZ
> +	are also incurred by CONFIG_NO_HZ_EXTENDED.
> +
> +2.	The user/kernel transitions are slightly more expensive due
> +	to the need to inform kernel subsystems (such as RCU) about
> +	the change in mode.
> +
> +3.	POSIX CPU timers on adaptive-tick CPUs may fire late (or even
> +	not at all) because they currently rely on scheduling-tick
> +	interrupts.  This will likely be fixed in one of two ways: (1)
> +	Prevent CPUs with POSIX CPU timers from entering adaptive-tick
> +	mode, or (2) Use hrtimers or other adaptive-ticks-immune mechanism
> +	to cause the POSIX CPU timer to fire properly.
> +
> +4.	If there are more perf events pending than the hardware can
> +	accommodate, they are normally round-robined so as to collect
> +	all of them over time.  Adaptive-tick mode may prevent this
> +	round-robining from happening.  This will likely be fixed by
> +	preventing CPUs with large numbers of perf events pending from
> +	entering adaptive-tick mode.
> +
> +5.	Scheduler statistics for adaptive-idle CPUs may be computed
> +	slightly differently than those for non-adaptive-idle CPUs.
> +	This may in turn perturb load-balancing of real-time tasks.
> +
> +6.	The LB_BIAS scheduler feature is disabled by adaptive ticks.
> +
> +Although improvements are expected over time, adaptive ticks is quite
> +useful for many types of real-time and compute-intensive applications.
> +However, the drawbacks listed above mean that adaptive ticks should not
> +(yet) be enabled by default.
> +
> +
> +RCU IMPLICATIONS
> +
> +There are situations in which idle CPUs cannot be permitted to
> +enter either dyntick-idle mode or adaptive-tick mode, the most
> +familiar being the case where that CPU has RCU callbacks pending.
> +
> +The CONFIG_RCU_FAST_NO_HZ=y Kconfig option may be used to cause such
> +CPUs to enter dyntick-idle mode or adaptive-tick mode anyway, though a
> +timer will awaken these CPUs every four jiffies in order to ensure that
> +the RCU callbacks are processed in a timely fashion.
> +
> +Another approach is to offload RCU callback processing to "rcuo" kthreads
> +using the CONFIG_RCU_NOCB_CPU=y.  The specific CPUs to offload may be
> +selected via several methods:
> +
> +1.	One of three mutually exclusive Kconfig options specify a
> +	build-time default for the CPUs to offload:
> +
> +	a.	The RCU_NOCB_CPU_NONE=y Kconfig option results in
> +		no CPUs being offloaded.
> +
> +	b.	The RCU_NOCB_CPU_ZERO=y Kconfig option causes CPU 0 to
> +		be offloaded.
> +
> +	c.	The RCU_NOCB_CPU_ALL=y Kconfig option causes all CPUs
> +		to be offloaded.  Note that the callbacks will be
> +		offloaded to "rcuo" kthreads, and that those kthreads
> +		will in fact run on some CPU.  However, this approach
> +		gives fine-grained control on exactly which CPUs the
> +		callbacks run on, the priority that they run at (including
> +		the default of SCHED_OTHER), and it further allows
> +		this control to be varied dynamically at runtime.
> +
> +2.	The "rcu_nocbs=" kernel boot parameter, which takes a comma-separated
> +	list of CPUs and CPU ranges, for example, "1,3-5" selects CPUs 1,
> +	3, 4, and 5.  The specified CPUs will be offloaded in addition
> +	to any CPUs specified as offloaded by RCU_NOCB_CPU_ZERO or
> +	RCU_NOCB_CPU_ALL.
> +
> +The offloaded CPUs never have RCU callbacks queued, and therefore RCU
> +never prevents offloaded CPUs from entering either dyntick-idle mode or
> +adaptive-tick mode.  That said, note that it is up to userspace to
> +pin the "rcuo" kthreads to specific CPUs if desired.  Otherwise, the
> +scheduler will decide where to run them, which might or might not be
> +where you want them to run.
> +
> +
> +KNOWN ISSUES
> +
> +o	Dyntick-idle slows transitions to and from idle slightly.
> +	In practice, this has not been a problem except for the most
> +	aggressive real-time workloads, which have the option of disabling
> +	dyntick-idle mode, an option that most of them take.  However,
> +	some workloads will no doubt want to use adaptive ticks to
> +	eliminate scheduling-clock-tick latencies.  Here are some
> +	options for these workloads:
> +
> +	a.	Use PMQOS from userspace to inform the kernel of your
> +		latency requirements (preferred).
> +
> +	b.	On x86 systems, use the "idle=mwait" boot parameter.
> +
> +	c.	On x86 systems, use the "intel_idle.max_cstate=" to limit
> +	`	the maximum depth C-state depth.
> +
> +	d.	On x86 systems, use the "idle=poll" boot parameter.
> +		However, please note that use of this parameter can cause
> +		your CPU to overheat, which may cause thermal throttling
> +		to degrade your latencies -- and that this degradation can
> +		be even worse than that of dyntick-idle.  Furthermore,
> +		this parameter effectively disables Turbo Mode on Intel
> +		CPUs, which can significantly reduce maximum performance.
> +
> +o	Adaptive-ticks slows user/kernel transitions slightly.
> +	This is not expected to be a problem for computational-intensive
> +	workloads, which have few such transitions.  Careful benchmarking
> +	will be required to determine whether or not other workloads
> +	are significantly affected by this effect.
> +
> +o	Adaptive-ticks does not do anything unless there is only one
> +	runnable task for a given CPU, even though there are a number
> +	of other situations where the scheduling-clock tick is not
> +	needed.  To give but one example, consider a CPU that has one
> +	runnable high-priority SCHED_FIFO task and an arbitrary number
> +	of low-priority SCHED_OTHER tasks.  In this case, the CPU is
> +	required to run the SCHED_FIFO task until either it blocks or
> +	some other higher-priority task awakens on (or is assigned to)
> +	this CPU, so there is no point in sending a scheduling-clock
> +	interrupt to this CPU.	However, the current implementation
> +	prohibits CPU with a single runnable SCHED_FIFO task and multiple

	prohibits a CPU or prohibits CPUs

> +	runnable SCHED_OTHER tasks from entering adaptive-ticks mode,
> +	even though it would be correct to allow it to do so.
> +
> +	Better handling of these sorts of situations is future work.
> +
> +o	A reboot is required to reconfigure both adaptive idle and RCU
> +	callback offloading.  Runtime reconfiguration could be provided
> +	if needed, however, due to the complexity of reconfiguring RCU
> +	at runtime, there would need to be an earthshakingly good reason.
> +	Especially given the option of simply offloading RCU callbacks
> +	from all CPUs.
> +
> +o	Additional configuration is required to deal with other sources
> +	of OS jitter, including interrupts and system-utility tasks
> +	and processes.  This configuration normally involves binding
> +	interrupts and tasks to particular CPUs.
> +
> +o	Some sources of OS jitter can currently be eliminated only by
> +	constraining the workload.  For example, the only way to eliminate
> +	OS jitter due to global TLB shootdowns is to avoid the unmapping
> +	operations (such as kernel module unload operations) that result
> +	in these shootdowns.  For another example, page faults and TLB
> +	misses can be reduced (and in some cases eliminated) by using
> +	huge pages and by constraining the amount of memory used by the
> +	application.
> +
> +o	Unless all CPUs are idle, at least one CPU must keep the
> +	scheduling-clock interrupt going in order to support accurate
> +	timekeeping.
>

Nicely written.

Reviewed-by: Randy Dunlap <rdunlap@infradead.org>


-- 
~Randy

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH documentation 1/2] nohz1: Add documentation.
  2013-04-11 16:48   ` [PATCH documentation 1/2] nohz1: Add documentation Randy Dunlap
@ 2013-04-11 17:09     ` Paul E. McKenney
  0 siblings, 0 replies; 35+ messages in thread
From: Paul E. McKenney @ 2013-04-11 17:09 UTC (permalink / raw)
  To: Randy Dunlap
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, niv, tglx, peterz, rostedt, Valdis.Kletnieks, dhowells,
	edumazet, darren, fweisbec, sbw, Borislav Petkov,
	Arjan van de Ven, Kevin Hilman, Christoph Lameter

On Thu, Apr 11, 2013 at 09:48:45AM -0700, Randy Dunlap wrote:
> On 04/11/2013 09:05 AM, Paul E. McKenney wrote:
> >From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> >
> >Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> >Cc: Frederic Weisbecker <fweisbec@gmail.com>
> >Cc: Steven Rostedt <rostedt@goodmis.org>
> >Cc: Borislav Petkov <bp@alien8.de>
> >Cc: Arjan van de Ven <arjan@linux.intel.com>
> >Cc: Kevin Hilman <khilman@linaro.org>
> >Cc: Christoph Lameter <cl@linux.com>
> >---
> >  Documentation/timers/NO_HZ.txt | 245 +++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 245 insertions(+)
> >  create mode 100644 Documentation/timers/NO_HZ.txt
> >
> >diff --git a/Documentation/timers/NO_HZ.txt b/Documentation/timers/NO_HZ.txt
> >new file mode 100644
> >index 0000000..6b33f6b
> >--- /dev/null
> >+++ b/Documentation/timers/NO_HZ.txt
> >@@ -0,0 +1,245 @@
> >+		NO_HZ: Reducing Scheduling-Clock Ticks
> >+
> >+
> >+This document describes Kconfig options and boot parameters that can
> >+reduce the number of scheduling-clock interrupts, thereby improving energy
> >+efficiency and reducing OS jitter.  Reducing OS jitter is important for
> >+some types of computationally intensive high-performance computing (HPC)
> >+applications and for real-time applications.
> >+
> >+There are two major aspects of scheduling-clock interrupt reduction:
> >+
> >+1.	Idle CPUs.
> >+
> >+2.	CPUs having only one runnable task.
> >+
> >+These two cases are described in the following sections.
> >+
> >+
> >+IDLE CPUs
> >+
> >+If a CPU is idle, there is little point in sending it a scheduling-clock
> >+interrupt.  After all, the primary purpose of a scheduling-clock interrupt
> >+is to force a busy CPU to shift its attention among multiple duties,
> >+but an idle CPU by definition has no duties to shift its attention among.
> >+
> >+The CONFIG_NO_HZ=y Kconfig option causes the kernel to avoid sending
> >+scheduling-clock interrupts to idle CPUs, which is critically important
> >+both to battery-powered devices and to highly virtualized mainframes.
> >+A battery-powered device running a CONFIG_NO_HZ=n kernel would drain
> >+its battery very quickly, easily 2-3x as fast as would the same device
> >+running a CONFIG_NO_HZ=y kernel.  A mainframe running 1,500 OS instances
> >+might find that half of its CPU time was consumed by scheduling-clock
> >+interrupts.  In these situations, there is strong motivation to avoid
> >+sending scheduling-clock interrupts to idle CPUs.  That said, dyntick-idle
> >+mode is not free:
> >+
> >+1.	It increases the number of instructions executed on the path
> >+	to and from the idle loop.
> >+
> >+2.	Many architectures will place dyntick-idle CPUs into deep sleep
> >+	states, which further degrades from-idle transition latencies.
> >+
> >+Therefore, systems with aggressive real-time response constraints
> >+often run CONFIG_NO_HZ=n kernels in order to avoid degrading from-idle
> >+transition latencies.
> >+
> >+An idle CPU that is not receiving scheduling-clock interrupts is said to
> >+be "dyntick-idle", "in dyntick-idle mode", "in nohz mode", or "running
> >+tickless".  The remainder of this document will use "dyntick-idle mode".
> >+
> >+There is also a boot parameter "nohz=" that can be used to disable
> >+dyntick-idle mode in CONFIG_NO_HZ=y kernels by specifying "nohz=off".
> >+By default, CONFIG_NO_HZ=y kernels boot with "nohz=on", enabling
> >+dyntick-idle mode.
> >+
> >+
> >+CPUs WITH ONLY ONE RUNNABLE TASK
> >+
> >+If a CPU has only one runnable task, there is again little point in
> >+sending it a scheduling-clock interrupt because there is nowhere else
> >+for a CPU with but one runnable task to shift its attention to.
> >+
> >+The CONFIG_NO_HZ_EXTENDED=y Kconfig option causes the kernel to avoid
> >+sending scheduling-clock interrupts to CPUs with a single runnable task,
> >+and such CPUs are said to be "adaptive-ticks CPUs".  This is important
> >+for applications with aggressive real-time response constraints because
> >+it allows them to improve their worst-case response times by the maximum
> >+duration of a scheduling-clock interrupt.  It is also important for
> >+computationally intensive iterative workloads with short iterations:  If
> >+any CPU is delayed during a given iteration, all the other CPUs will be
> >+forced to wait idle while the delayed CPU finished.  Thus, the delay is
> 
> I would say:                                 finishes.

Good eyes, fixed!

> >+multiplied by one less than the number of CPUs.  In these situations,
> >+there is again strong motivation to avoid sending scheduling-clock
> >+interrupts.
> >+
> >+The "nohz_extended=" boot parameter specifies which CPUs are to be
> >+adaptive-ticks CPUs.  For example, "nohz_extended=1,6-8" says that CPUs
> >+1, 6, 7, and 8 are to be adaptive-ticks CPUs.  By default, no CPUs will
> >+be adaptive-ticks CPUs.  Note that you are prohibited from marking all
> >+of the CPUs as adaptive-tick CPUs:  At least one non-adaptive-tick CPU
> >+must remain online to handle timekeeping tasks in order to ensure that
> >+gettimeofday() returns sane values on adaptive-tick CPUs.
> >+
> >+Transitioning to kernel mode does not automatically force that CPU out
> >+of adaptive-ticks mode.  The CPU will exit adaptive-ticks mode only if
> >+needed, for example, if that CPU enqueues an RCU callback.
> >+
> >+Just as with dyntick-idle mode, the benefits of adaptive-tick mode do
> >+not come for free:
> >+
> >+1.	CONFIG_NO_HZ_EXTENDED depends on CONFIG_NO_HZ, so you cannot run
> >+	adaptive ticks without also running dyntick idle.  This dependency
> >+	of CONFIG_NO_HZ_EXTENDED on CONFIG_NO_HZ extends down into the
> >+	implementation.  Therefore, all of the costs of CONFIG_NO_HZ
> >+	are also incurred by CONFIG_NO_HZ_EXTENDED.
> >+
> >+2.	The user/kernel transitions are slightly more expensive due
> >+	to the need to inform kernel subsystems (such as RCU) about
> >+	the change in mode.
> >+
> >+3.	POSIX CPU timers on adaptive-tick CPUs may fire late (or even
> >+	not at all) because they currently rely on scheduling-tick
> >+	interrupts.  This will likely be fixed in one of two ways: (1)
> >+	Prevent CPUs with POSIX CPU timers from entering adaptive-tick
> >+	mode, or (2) Use hrtimers or other adaptive-ticks-immune mechanism
> >+	to cause the POSIX CPU timer to fire properly.
> >+
> >+4.	If there are more perf events pending than the hardware can
> >+	accommodate, they are normally round-robined so as to collect
> >+	all of them over time.  Adaptive-tick mode may prevent this
> >+	round-robining from happening.  This will likely be fixed by
> >+	preventing CPUs with large numbers of perf events pending from
> >+	entering adaptive-tick mode.
> >+
> >+5.	Scheduler statistics for adaptive-idle CPUs may be computed
> >+	slightly differently than those for non-adaptive-idle CPUs.
> >+	This may in turn perturb load-balancing of real-time tasks.
> >+
> >+6.	The LB_BIAS scheduler feature is disabled by adaptive ticks.
> >+
> >+Although improvements are expected over time, adaptive ticks is quite
> >+useful for many types of real-time and compute-intensive applications.
> >+However, the drawbacks listed above mean that adaptive ticks should not
> >+(yet) be enabled by default.
> >+
> >+
> >+RCU IMPLICATIONS
> >+
> >+There are situations in which idle CPUs cannot be permitted to
> >+enter either dyntick-idle mode or adaptive-tick mode, the most
> >+familiar being the case where that CPU has RCU callbacks pending.
> >+
> >+The CONFIG_RCU_FAST_NO_HZ=y Kconfig option may be used to cause such
> >+CPUs to enter dyntick-idle mode or adaptive-tick mode anyway, though a
> >+timer will awaken these CPUs every four jiffies in order to ensure that
> >+the RCU callbacks are processed in a timely fashion.
> >+
> >+Another approach is to offload RCU callback processing to "rcuo" kthreads
> >+using the CONFIG_RCU_NOCB_CPU=y.  The specific CPUs to offload may be
> >+selected via several methods:
> >+
> >+1.	One of three mutually exclusive Kconfig options specify a
> >+	build-time default for the CPUs to offload:
> >+
> >+	a.	The RCU_NOCB_CPU_NONE=y Kconfig option results in
> >+		no CPUs being offloaded.
> >+
> >+	b.	The RCU_NOCB_CPU_ZERO=y Kconfig option causes CPU 0 to
> >+		be offloaded.
> >+
> >+	c.	The RCU_NOCB_CPU_ALL=y Kconfig option causes all CPUs
> >+		to be offloaded.  Note that the callbacks will be
> >+		offloaded to "rcuo" kthreads, and that those kthreads
> >+		will in fact run on some CPU.  However, this approach
> >+		gives fine-grained control on exactly which CPUs the
> >+		callbacks run on, the priority that they run at (including
> >+		the default of SCHED_OTHER), and it further allows
> >+		this control to be varied dynamically at runtime.
> >+
> >+2.	The "rcu_nocbs=" kernel boot parameter, which takes a comma-separated
> >+	list of CPUs and CPU ranges, for example, "1,3-5" selects CPUs 1,
> >+	3, 4, and 5.  The specified CPUs will be offloaded in addition
> >+	to any CPUs specified as offloaded by RCU_NOCB_CPU_ZERO or
> >+	RCU_NOCB_CPU_ALL.
> >+
> >+The offloaded CPUs never have RCU callbacks queued, and therefore RCU
> >+never prevents offloaded CPUs from entering either dyntick-idle mode or
> >+adaptive-tick mode.  That said, note that it is up to userspace to
> >+pin the "rcuo" kthreads to specific CPUs if desired.  Otherwise, the
> >+scheduler will decide where to run them, which might or might not be
> >+where you want them to run.
> >+
> >+
> >+KNOWN ISSUES
> >+
> >+o	Dyntick-idle slows transitions to and from idle slightly.
> >+	In practice, this has not been a problem except for the most
> >+	aggressive real-time workloads, which have the option of disabling
> >+	dyntick-idle mode, an option that most of them take.  However,
> >+	some workloads will no doubt want to use adaptive ticks to
> >+	eliminate scheduling-clock-tick latencies.  Here are some
> >+	options for these workloads:
> >+
> >+	a.	Use PMQOS from userspace to inform the kernel of your
> >+		latency requirements (preferred).
> >+
> >+	b.	On x86 systems, use the "idle=mwait" boot parameter.
> >+
> >+	c.	On x86 systems, use the "intel_idle.max_cstate=" to limit
> >+	`	the maximum depth C-state depth.
> >+
> >+	d.	On x86 systems, use the "idle=poll" boot parameter.
> >+		However, please note that use of this parameter can cause
> >+		your CPU to overheat, which may cause thermal throttling
> >+		to degrade your latencies -- and that this degradation can
> >+		be even worse than that of dyntick-idle.  Furthermore,
> >+		this parameter effectively disables Turbo Mode on Intel
> >+		CPUs, which can significantly reduce maximum performance.
> >+
> >+o	Adaptive-ticks slows user/kernel transitions slightly.
> >+	This is not expected to be a problem for computational-intensive
> >+	workloads, which have few such transitions.  Careful benchmarking
> >+	will be required to determine whether or not other workloads
> >+	are significantly affected by this effect.
> >+
> >+o	Adaptive-ticks does not do anything unless there is only one
> >+	runnable task for a given CPU, even though there are a number
> >+	of other situations where the scheduling-clock tick is not
> >+	needed.  To give but one example, consider a CPU that has one
> >+	runnable high-priority SCHED_FIFO task and an arbitrary number
> >+	of low-priority SCHED_OTHER tasks.  In this case, the CPU is
> >+	required to run the SCHED_FIFO task until either it blocks or
> >+	some other higher-priority task awakens on (or is assigned to)
> >+	this CPU, so there is no point in sending a scheduling-clock
> >+	interrupt to this CPU.	However, the current implementation
> >+	prohibits CPU with a single runnable SCHED_FIFO task and multiple
> 
> 	prohibits a CPU or prohibits CPUs

Good eyes, I took option A to agree with the "it" two lines below.

> >+	runnable SCHED_OTHER tasks from entering adaptive-ticks mode,
> >+	even though it would be correct to allow it to do so.
> >+
> >+	Better handling of these sorts of situations is future work.
> >+
> >+o	A reboot is required to reconfigure both adaptive idle and RCU
> >+	callback offloading.  Runtime reconfiguration could be provided
> >+	if needed, however, due to the complexity of reconfiguring RCU
> >+	at runtime, there would need to be an earthshakingly good reason.
> >+	Especially given the option of simply offloading RCU callbacks
> >+	from all CPUs.
> >+
> >+o	Additional configuration is required to deal with other sources
> >+	of OS jitter, including interrupts and system-utility tasks
> >+	and processes.  This configuration normally involves binding
> >+	interrupts and tasks to particular CPUs.
> >+
> >+o	Some sources of OS jitter can currently be eliminated only by
> >+	constraining the workload.  For example, the only way to eliminate
> >+	OS jitter due to global TLB shootdowns is to avoid the unmapping
> >+	operations (such as kernel module unload operations) that result
> >+	in these shootdowns.  For another example, page faults and TLB
> >+	misses can be reduced (and in some cases eliminated) by using
> >+	huge pages and by constraining the amount of memory used by the
> >+	application.
> >+
> >+o	Unless all CPUs are idle, at least one CPU must keep the
> >+	scheduling-clock interrupt going in order to support accurate
> >+	timekeeping.
> 
> Nicely written.

Glad you like it!  I have added your Reviewed-by.

							Thanx, Paul

> Reviewed-by: Randy Dunlap <rdunlap@infradead.org>
> 
> 
> -- 
> ~Randy
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/
> 


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH documentation 1/2] nohz1: Add documentation.
  2013-04-11 16:05 ` [PATCH documentation 1/2] nohz1: Add documentation Paul E. McKenney
  2013-04-11 16:05   ` [PATCH documentation 2/2] kthread: Document ways of reducing OS jitter due to per-CPU kthreads Paul E. McKenney
  2013-04-11 16:48   ` [PATCH documentation 1/2] nohz1: Add documentation Randy Dunlap
@ 2013-04-11 17:14   ` Arjan van de Ven
  2013-04-11 18:27     ` Paul E. McKenney
  2013-04-11 18:25   ` Borislav Petkov
                     ` (2 subsequent siblings)
  5 siblings, 1 reply; 35+ messages in thread
From: Arjan van de Ven @ 2013-04-11 17:14 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, niv, tglx, peterz, rostedt, Valdis.Kletnieks, dhowells,
	edumazet, darren, fweisbec, sbw, Borislav Petkov, Kevin Hilman,
	Christoph Lameter

> +2.	Many architectures will place dyntick-idle CPUs into deep sleep
> +	states, which further degrades from-idle transition latencies.
> +
I think this part should just be deleted.
On x86, the deeper idle states are even used with non-tickless system (the break even times are
quite a bit less than even 1 msec).
I can't imagine that ARM is worse on this, at which point the statement above is highly dubious



^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH documentation 2/2] kthread: Document ways of reducing OS jitter due to per-CPU kthreads
  2013-04-11 16:05   ` [PATCH documentation 2/2] kthread: Document ways of reducing OS jitter due to per-CPU kthreads Paul E. McKenney
@ 2013-04-11 17:18     ` Randy Dunlap
  2013-04-11 18:40       ` Paul E. McKenney
  0 siblings, 1 reply; 35+ messages in thread
From: Randy Dunlap @ 2013-04-11 17:18 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, niv, tglx, peterz, rostedt, Valdis.Kletnieks, dhowells,
	edumazet, darren, fweisbec, sbw, Borislav Petkov,
	Arjan van de Ven, Kevin Hilman, Christoph Lameter

On 04/11/2013 09:05 AM, Paul E. McKenney wrote:
> From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
>
> The Linux kernel uses a number of per-CPU kthreads, any of which might
> contribute to OS jitter at any time.  The usual approach to normal
> kthreads, namely to affinity them to a "housekeeping" CPU, does not

ugh.               to affine them

> work with these kthreads because they cannot operate correctly if moved
> to some other CPU.  This commit therefore lists ways of controlling OS
> jitter from the Linux kernel's per-CPU kthreads.
>
> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> Cc: Frederic Weisbecker <fweisbec@gmail.com>
> Cc: Steven Rostedt <rostedt@goodmis.org>
> Cc: Borislav Petkov <bp@alien8.de>
> Cc: Arjan van de Ven <arjan@linux.intel.com>
> Cc: Kevin Hilman <khilman@linaro.org>
> Cc: Christoph Lameter <cl@linux.com>
> ---
>   Documentation/kernel-per-CPU-kthreads.txt | 159 ++++++++++++++++++++++++++++++
>   1 file changed, 159 insertions(+)
>   create mode 100644 Documentation/kernel-per-CPU-kthreads.txt
>
> diff --git a/Documentation/kernel-per-CPU-kthreads.txt b/Documentation/kernel-per-CPU-kthreads.txt
> new file mode 100644
> index 0000000..495dacf
> --- /dev/null
> +++ b/Documentation/kernel-per-CPU-kthreads.txt
> @@ -0,0 +1,159 @@
> +REDUCING OS JITTER DUE TO PER-CPU KTHREADS
> +
> +This document lists per-CPU kthreads in the Linux kernel and presents
> +options to control OS jitter due to these kthreads.  Note that kthreads
> +that are not per-CPU are not listed here -- to reduce OS jitter from
> +non-per-CPU kthreads, bind them to a "housekeeping" CPU that is dedicated
> +to such work.
> +
> +
> +Name: ehca_comp/%u
> +Purpose: Periodically process Infiniband-related work.
> +To reduce corresponding OS jitter, do any of the following:
> +1.	Don't use EHCA Infiniband hardware.  This will prevent these
> +	kthreads from being created in the first place.  (This will
> +	work for most people, as this hardware, though important,
> +	is relatively old as is produced in relatively low unit
> +	volumes.)
> +2.	Do all EHCA-Infiniband-related work on other CPUs, including
> +	interrupts.
> +
> +
> +Name: irq/%d-%s
> +Purpose: Handle threaded interrupts.
> +To reduce corresponding OS jitter, do the following:
> +1.	Use irq affinity to force the irq threads to execute on
> +	some other CPU.

It would be very nice to explain here how that is done.

> +
> +Name: kcmtpd_ctr_%d
> +Purpose: Handle Bluetooth work.
> +To reduce corresponding OS jitter, do one of the following:
> +1.	Don't use Bluetooth, in cwhich case these kthreads won't be

	                        which

> +	created in the first place.
> +2.	Use irq affinity to force Bluetooth-related interrupts to
> +	occur on some other CPU and furthermore initiate all
> +	Bluetooth activity from some other CPU.
> +
> +Name: ksoftirqd/%u
> +Purpose: Execute softirq handlers when threaded or when under heavy load.
> +To reduce corresponding OS jitter, each softirq vector must be handled
> +separately as follows:
> +TIMER_SOFTIRQ:
> +1.	Build with CONFIG_HOTPLUG_CPU=y.
> +2.	To the extent possible, keep the CPU out of the kernel when it

I guess I have a different viewpoint.  I would say:  keep the kernel
off of that CPU ....

> +	is non-idle, for example, by forcing user and kernel threads as
> +	well as interrupts to execute elsewhere.
> +3.	Force the CPU offline, then bring it back online.  This forces
> +	recurring timers to migrate elsewhere.  If you are concerned
> +	with multiple CPUs, force them all offline before bringing the
> +	first one back online.
> +NET_TX_SOFTIRQ and NET_RX_SOFTIRQ:  Do all of the following:
> +1.	Force networking interrupts onto other CPUs.
> +2.	Initiate any network I/O on other CPUs.
> +3.	Prevent CPU-hotplug operations from being initiated from tasks
> +	that might run on the CPU to be de-jittered.
> +BLOCK_SOFTIRQ:  Do all of the following:
> +1.	Force block-device interrupts onto some other CPU.
> +2.	Initiate any block I/O on other CPUs.
> +3.	Prevent CPU-hotplug operations from being initiated from tasks
> +	that might run on the CPU to be de-jittered.
> +BLOCK_IOPOLL_SOFTIRQ:  Do all of the following:
> +1.	Force block-device interrupts onto some other CPU.
> +2.	Initiate any block I/O and block-I/O polling on other CPUs.
> +3.	Prevent CPU-hotplug operations from being initiated from tasks
> +	that might run on the CPU to be de-jittered.
> +TASKLET_SOFTIRQ: Do one or more of the following:
> +1.	Avoid use of drivers that use tasklets.
> +2.	Convert all drivers that you must use from tasklets to workqueues.
> +3.	Force interrupts for drivers using tasklets onto other CPUs,
> +	and also do I/O involving these drivers on other CPUs.
> +SCHED_SOFTIRQ: Do all of the following:
> +1.	Avoid sending scheduler IPIs to the CPU to be de-jittered,
> +	for example, ensure that at most one runnable kthread is
> +	present on that CPU.  If a thread awakens that expects
> +	to run on the de-jittered CPU, the scheduler will send
> +	an IPI that can result in a subsequent SCHED_SOFTIRQ.
> +2.	Build with CONFIG_RCU_NOCB_CPU=y, CONFIG_RCU_NOCB_CPU_ALL=y,
> +	CONFIG_NO_HZ_EXTENDED=y, and in addition ensure that the CPU
> +	to be de-jittered is marked as an adaptive-ticks CPU using the
> +	"nohz_extended=" boot parameter.  This reduces the number of
> +	scheduler-clock interrupts that the de-jittered CPU receives,
> +	minimizing its chances of being selected to do load balancing,
> +	which happens in SCHED_SOFTIRQ context.
> +3.	To the extent possible, keep the CPU out of the kernel when it

same viewpoint point.

> +	is non-idle, for example, by forcing user and kernel threads as
> +	well as interrupts to execute elsewhere.  This further reduces
> +	the number of scheduler-clock interrupts that the de-jittered
> +	CPU receives.
> +HRTIMER_SOFTIRQ:  Do all of the following:
> +1.	Build with CONFIG_HOTPLUG_CPU=y.
> +2.	To the extent possible, keep the CPU out of the kernel when it
> +	is non-idle, for example, by forcing user and kernel threads as
> +	well as interrupts to execute elsewhere.
> +3.	Force the CPU offline, then bring it back online.  This forces
> +	recurring timers to migrate elsewhere.  If you are concerned
> +	with multiple CPUs, force them all offline before bringing the
> +	first one back online.
> +RCU_SOFTIRQ:  Do at least one of the following:
> +1.	Offload callbacks and keep the CPU in either dyntick-idle or
> +	adaptive-ticks state by doing all of the following:
> +	a.	Build with CONFIG_RCU_NOCB_CPU=y, CONFIG_RCU_NOCB_CPU_ALL=y,
> +		CONFIG_NO_HZ_EXTENDED=y, and in addition ensure that
> +		the CPU to be de-jittered is marked as an adaptive-ticks CPU
> +		using the "nohz_extended=" boot parameter.
> +	b.	To the extent possible, keep the CPU out of the kernel

viewpoint?

> +		when it is non-idle, for example, by forcing user and
> +		kernel threads as well as interrupts to execute elsewhere.
> +2.	Enable RCU to do its processing remotely via dyntick-idle by
> +	doing all of the following:
> +	a.	Build with CONFIG_NO_HZ=y and CONFIG_RCU_FAST_NO_HZ=y.
> +	b.	To the extent possible, keep the CPU out of the kernel

viewpoint?

> +		when it is non-idle, for example, by forcing user and
> +		kernel threads as well as interrupts to execute elsewhere.
> +	c.	Ensure that the CPU goes idle frequently, allowing other
> +		CPUs to detect that it has passed through an RCU
> +		quiescent state.
> +
> +Name: rcuc/%u
> +Purpose: Execute RCU callbacks in CONFIG_RCU_BOOST=y kernels.
> +To reduce corresponding OS jitter, do at least one of the following:
> +1.	Build the kernel with CONFIG_PREEMPT=n.  This prevents these
> +	kthreads from being created in the first place, and also prevents
> +	RCU priority boosting from ever being required.  This approach
> +	is feasible for workloads that do not require high degrees of
> +	responsiveness.
> +2.	Build the kernel with CONFIG_RCU_BOOST=n.  This prevents these
> +	kthreads from being created in the first place.  This approach
> +	is feasible only if your workload never requires RCU priority
> +	boosting, for example, if you ensure ample idle time on all CPUs
> +	that might execute within the kernel.
> +3.	Build with CONFIG_RCU_NOCB_CPU=y and CONFIG_RCU_NOCB_CPU_ALL=y,
> +	which offloads all RCU callbacks to kthreads that can be moved
> +	off of CPUs susceptible to OS jitter.  This approach prevents the
> +	rcuc/%u kthreads from having any work to do, and are therefore
> +	never awakened.
> +4.	Ensure that then CPU never enters the kernel and avoid any

	            the
viewpoint?

> +	CPU hotplug operations.  This is another way of preventing any
> +	callbacks from being queued on the CPU, again preventing the
> +	rcuc/%u kthreads from having any work to do.
> +
> +Name: rcuob/%d, rcuop/%d, and rcuos/%d
> +Purpose: Offload RCU callbacks from the corresponding CPU.
> +To reduce corresponding OS jitter, do at least one of the following:
> +1.	Use affinity, cgroups, or other mechanism to force these kthreads
> +	to execute on some other CPU.
> +2.	Build with CONFIG_RCU_NOCB_CPUS=n, which will prevent these
> +	kthreads from being created in the first place.  However,
> +	please note that this will not eliminate the corresponding
> +	OS jitter, but will instead merely shift it to softirq.
> +
> +Name: watchdog/%u
> +Purpose: Detect software lockups on each CPU.
> +To reduce corresponding OS jitter, do at least one of the following:
> +1.	Build with CONFIG_LOCKUP_DETECTOR=n, which will prevent these
> +	kthreads from being created in the first place.
> +2.	Echo a zero to /proc/sys/kernel/watchdog to disable the
> +	watchdog timer.
> +3.	Echo a large number of /proc/sys/kernel/watchdog_thresh in
> +	order to reduce the frequency of OS jitter due to the watchdog
> +	timer down to a level that is acceptable for your workload.
>


Reviewed-by: Randy Dunlap <rdunlap@infradead.org>


-- 
~Randy

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH documentation 1/2] nohz1: Add documentation.
  2013-04-11 16:05 ` [PATCH documentation 1/2] nohz1: Add documentation Paul E. McKenney
                     ` (2 preceding siblings ...)
  2013-04-11 17:14   ` Arjan van de Ven
@ 2013-04-11 18:25   ` Borislav Petkov
  2013-04-11 19:13     ` Paul E. McKenney
  2013-04-19 21:01   ` Kevin Hilman
  2013-04-27 13:26   ` Frederic Weisbecker
  5 siblings, 1 reply; 35+ messages in thread
From: Borislav Petkov @ 2013-04-11 18:25 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, niv, tglx, peterz, rostedt, Valdis.Kletnieks, dhowells,
	edumazet, darren, fweisbec, sbw, Arjan van de Ven, Kevin Hilman,
	Christoph Lameter

Ok,

here's some more Savel fun, feel free to take whatever you like. :)

On Thu, Apr 11, 2013 at 09:05:58AM -0700, Paul E. McKenney wrote:
> From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> 
> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> Cc: Frederic Weisbecker <fweisbec@gmail.com>
> Cc: Steven Rostedt <rostedt@goodmis.org>
> Cc: Borislav Petkov <bp@alien8.de>
> Cc: Arjan van de Ven <arjan@linux.intel.com>
> Cc: Kevin Hilman <khilman@linaro.org>
> Cc: Christoph Lameter <cl@linux.com>
> ---
>  Documentation/timers/NO_HZ.txt | 245 +++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 245 insertions(+)
>  create mode 100644 Documentation/timers/NO_HZ.txt
> 
> diff --git a/Documentation/timers/NO_HZ.txt b/Documentation/timers/NO_HZ.txt
> new file mode 100644
> index 0000000..6b33f6b
> --- /dev/null
> +++ b/Documentation/timers/NO_HZ.txt
> @@ -0,0 +1,245 @@
> +		NO_HZ: Reducing Scheduling-Clock Ticks
> +
> +
> +This document describes Kconfig options and boot parameters that can
> +reduce the number of scheduling-clock interrupts, thereby improving energy
> +efficiency and reducing OS jitter.  Reducing OS jitter is important for
> +some types of computationally intensive high-performance computing (HPC)
> +applications and for real-time applications.
> +
> +There are two major aspects of scheduling-clock interrupt reduction:

I'd simplify this:

There are two main reasons for reducing the amount of scheduling-clock
interrupts:

> +
> +1.	Idle CPUs.
> +
> +2.	CPUs having only one runnable task.
> +
> +These two cases are described in the following sections.

Not really needed this sentence is, huh, since the two aspects simply
follow.

> +
> +
> +IDLE CPUs
> +
> +If a CPU is idle, there is little point in sending it a scheduling-clock
> +interrupt.  After all, the primary purpose of a scheduling-clock interrupt
> +is to force a busy CPU to shift its attention among multiple duties,
> +but an idle CPU by definition has no duties to shift its attention among.

simplify:

"... but an idle CPU has, by definition, no duties."

> +
> +The CONFIG_NO_HZ=y Kconfig option causes the kernel to avoid sending

I'm guessing you're keeping those CONFIG_* options in sync with
Frederic's Kconfig changes...

> +scheduling-clock interrupts to idle CPUs, which is critically important
> +both to battery-powered devices and to highly virtualized mainframes.
> +A battery-powered device running a CONFIG_NO_HZ=n kernel would drain
> +its battery very quickly, easily 2-3x as fast as would the same device

let's write it out:
			 " ... easily 2-3 times as fast..."

> +running a CONFIG_NO_HZ=y kernel.  A mainframe running 1,500 OS instances
> +might find that half of its CPU time was consumed by scheduling-clock
> +interrupts.  In these situations, there is strong motivation to avoid
> +sending scheduling-clock interrupts to idle CPUs.  That said, dyntick-idle

I hate "that said" :-)

						      However, dyntick-idle mode
						      doesn't come for free:

> +mode is not free:
> +
> +1.	It increases the number of instructions executed on the path
> +	to and from the idle loop.
> +
> +2.	Many architectures will place dyntick-idle CPUs into deep sleep
> +	states, which further degrades from-idle transition latencies.

Above you say "to and from the idle loop", now it is from-idle. Simply say:

"... which further degrades idle transision latencies" which means both :).

> +
> +Therefore, systems with aggressive real-time response constraints
> +often run CONFIG_NO_HZ=n kernels in order to avoid degrading from-idle
> +transition latencies.
> +
> +An idle CPU that is not receiving scheduling-clock interrupts is said to
> +be "dyntick-idle", "in dyntick-idle mode", "in nohz mode", or "runninga
> +tickless".  The remainder of this document will use "dyntick-idle mode".

Very good terminology sort-out. :)

> +
> +There is also a boot parameter "nohz=" that can be used to disable
> +dyntick-idle mode in CONFIG_NO_HZ=y kernels by specifying "nohz=off".
> +By default, CONFIG_NO_HZ=y kernels boot with "nohz=on", enabling
> +dyntick-idle mode.
> +
> +
> +CPUs WITH ONLY ONE RUNNABLE TASK
> +
> +If a CPU has only one runnable task, there is again little point in
> +sending it a scheduling-clock interrupt because there is nowhere else
> +for a CPU with but one runnable task to shift its attention to.

Simplify:

"For a very similar reason, there's little point in sending
scheduling-clock interrupts to a CPU with a single runnable task because
there's no other task to switch to."

> +
> +The CONFIG_NO_HZ_EXTENDED=y Kconfig option causes the kernel to avoid
> +sending scheduling-clock interrupts to CPUs with a single runnable task,
> +and such CPUs are said to be "adaptive-ticks CPUs".  This is important
> +for applications with aggressive real-time response constraints because
> +it allows them to improve their worst-case response times by the maximum
> +duration of a scheduling-clock interrupt.  It is also important for
> +computationally intensive iterative workloads with short iterations:  If

"iterative" twice. Maybe:

"computationally-intensive, short-iteration workloads"?

Also, s/If/if/

> +any CPU is delayed during a given iteration, all the other CPUs will be
> +forced to wait idle while the delayed CPU finished.  Thus, the delay is
> +multiplied by one less than the number of CPUs.  In these situations,
> +there is again strong motivation to avoid sending scheduling-clock
> +interrupts.
> +
> +The "nohz_extended=" boot parameter specifies which CPUs are to be
> +adaptive-ticks CPUs.  For example, "nohz_extended=1,6-8" says that CPUs
> +1, 6, 7, and 8 are to be adaptive-ticks CPUs.  By default, no CPUs will
> +be adaptive-ticks CPUs.

Let's put that last sentence above at the beginning of the paragraph.

> Note that you are prohibited from marking all
> +of the CPUs as adaptive-tick CPUs:  At least one non-adaptive-tick CPU
> +must remain online to handle timekeeping tasks in order to ensure that
> +gettimeofday() returns sane values on adaptive-tick CPUs.

"... gettimeofday(), for example, ..."

> +
> +Transitioning to kernel mode does not automatically force that CPU out
> +of adaptive-ticks mode.  The CPU will exit adaptive-ticks mode only if
> +needed, for example, if that CPU enqueues an RCU callback.

This paragraph sounds funny, let's flip it:

Normally, a CPU remains in adaptive-ticks mode as long as possible.
Transitioning into the kernel doesn't automatically force it out of
said mode. One possible exit, though, is when this CPU enqueues an RCU
callback.

> +
> +Just as with dyntick-idle mode, the benefits of adaptive-tick mode do
> +not come for free:
> +
> +1.	CONFIG_NO_HZ_EXTENDED depends on CONFIG_NO_HZ, so you cannot run
> +	adaptive ticks without also running dyntick idle.  This dependency
> +	of CONFIG_NO_HZ_EXTENDED on CONFIG_NO_HZ extends down into the
> +	implementation.  Therefore, all of the costs of CONFIG_NO_HZ
> +	are also incurred by CONFIG_NO_HZ_EXTENDED.

"... are also transitively incurred by CONFIG_NO_HZ_EXTENDED."

Q: are we talking the same costs here or magnified costs due to the
NO_HZ_EXTENDED addition?

> +2.	The user/kernel transitions are slightly more expensive due
> +	to the need to inform kernel subsystems (such as RCU) about
> +	the change in mode.

Ah, here it is, NO_HZ_EXTENDED is more expensive than NO_HZ?

> +3.	POSIX CPU timers on adaptive-tick CPUs may fire late (or even

					 "... may miss their deadline..."?

> +	not at all) because they currently rely on scheduling-tick
> +	interrupts.  This will likely be fixed in one of two ways: (1)
> +	Prevent CPUs with POSIX CPU timers from entering adaptive-tick
> +	mode, or (2) Use hrtimers or other adaptive-ticks-immune mechanism
> +	to cause the POSIX CPU timer to fire properly.
> +
> +4.	If there are more perf events pending than the hardware can
> +	accommodate, they are normally round-robined so as to collect
> +	all of them over time.  Adaptive-tick mode may prevent this
> +	round-robining from happening.  This will likely be fixed by
> +	preventing CPUs with large numbers of perf events pending from
> +	entering adaptive-tick mode.
> +
> +5.	Scheduler statistics for adaptive-idle CPUs may be computed

"adaptive-idle"? new term huh?

> +	slightly differently than those for non-adaptive-idle CPUs.
> +	This may in turn perturb load-balancing of real-time tasks.
> +
> +6.	The LB_BIAS scheduler feature is disabled by adaptive ticks.
> +
> +Although improvements are expected over time, adaptive ticks is quite
> +useful for many types of real-time and compute-intensive applications.
> +However, the drawbacks listed above mean that adaptive ticks should not
> +(yet) be enabled by default.
> +
> +
> +RCU IMPLICATIONS
> +
> +There are situations in which idle CPUs cannot be permitted to
> +enter either dyntick-idle mode or adaptive-tick mode, the most
> +familiar being the case where that CPU has RCU callbacks pending.

"... the common cause being where..."

> +
> +The CONFIG_RCU_FAST_NO_HZ=y Kconfig option may be used to cause such
> +CPUs to enter dyntick-idle mode or adaptive-tick mode anyway, though a
> +timer will awaken these CPUs every four jiffies in order to ensure that
> +the RCU callbacks are processed in a timely fashion.
> +
> +Another approach is to offload RCU callback processing to "rcuo" kthreads
> +using the CONFIG_RCU_NOCB_CPU=y.  The specific CPUs to offload may be

				" ... option."

> +selected via several methods:
> +
> +1.	One of three mutually exclusive Kconfig options specify a
> +	build-time default for the CPUs to offload:
> +
> +	a.	The RCU_NOCB_CPU_NONE=y Kconfig option results in
> +		no CPUs being offloaded.
> +
> +	b.	The RCU_NOCB_CPU_ZERO=y Kconfig option causes CPU 0 to
> +		be offloaded.
> +
> +	c.	The RCU_NOCB_CPU_ALL=y Kconfig option causes all CPUs
> +		to be offloaded.  Note that the callbacks will be
> +		offloaded to "rcuo" kthreads, and that those kthreads
> +		will in fact run on some CPU.  However, this approach
> +		gives fine-grained control on exactly which CPUs the
> +		callbacks run on, the priority that they run at (including

simpler:

"... the callbacks will run along with their priority (including..."

> +		the default of SCHED_OTHER), and it further allows
> +		this control to be varied dynamically at runtime.
> +
> +2.	The "rcu_nocbs=" kernel boot parameter, which takes a comma-separated
> +	list of CPUs and CPU ranges, for example, "1,3-5" selects CPUs 1,
> +	3, 4, and 5.  The specified CPUs will be offloaded in addition
> +	to any CPUs specified as offloaded by RCU_NOCB_CPU_ZERO or
> +	RCU_NOCB_CPU_ALL.
> +
> +The offloaded CPUs never have RCU callbacks queued, and therefore RCU

"The offloaded CPUs then do not queue RCU callbacks, ..."

> +never prevents offloaded CPUs from entering either dyntick-idle mode or
> +adaptive-tick mode.  That said, note that it is up to userspace to
> +pin the "rcuo" kthreads to specific CPUs if desired.  Otherwise, the
> +scheduler will decide where to run them, which might or might not be
> +where you want them to run.
> +
> +
> +KNOWN ISSUES
> +
> +o	Dyntick-idle slows transitions to and from idle slightly.
> +	In practice, this has not been a problem except for the most
> +	aggressive real-time workloads, which have the option of disabling
> +	dyntick-idle mode, an option that most of them take.  However,
> +	some workloads will no doubt want to use adaptive ticks to

			   undoubtedly

> +	eliminate scheduling-clock-tick latencies.  Here are some

scheduling-clock interrupt latencies?

> +	options for these workloads:
> +
> +	a.	Use PMQOS from userspace to inform the kernel of your
> +		latency requirements (preferred).
> +
> +	b.	On x86 systems, use the "idle=mwait" boot parameter.
> +
> +	c.	On x86 systems, use the "intel_idle.max_cstate=" to limit
> +	`	the maximum depth C-state depth.

remove first "depth"

> +
> +	d.	On x86 systems, use the "idle=poll" boot parameter.
> +		However, please note that use of this parameter can cause
> +		your CPU to overheat, which may cause thermal throttling
> +		to degrade your latencies -- and that this degradation can
> +		be even worse than that of dyntick-idle.  Furthermore,
> +		this parameter effectively disables Turbo Mode on Intel
> +		CPUs, which can significantly reduce maximum performance.
> +
> +o	Adaptive-ticks slows user/kernel transitions slightly.
> +	This is not expected to be a problem for computational-intensive

computationally intensive

> +	workloads, which have few such transitions.  Careful benchmarking
> +	will be required to determine whether or not other workloads
> +	are significantly affected by this effect.
> +
> +o	Adaptive-ticks does not do anything unless there is only one
> +	runnable task for a given CPU, even though there are a number
> +	of other situations where the scheduling-clock tick is not
> +	needed.  To give but one example, consider a CPU that has one
> +	runnable high-priority SCHED_FIFO task and an arbitrary number
> +	of low-priority SCHED_OTHER tasks.  In this case, the CPU is
> +	required to run the SCHED_FIFO task until either it blocks or

					    until it either blocks

> +	some other higher-priority task awakens on (or is assigned to)
> +	this CPU, so there is no point in sending a scheduling-clock
> +	interrupt to this CPU.	However, the current implementation
> +	prohibits CPU with a single runnable SCHED_FIFO task and multiple
> +	runnable SCHED_OTHER tasks from entering adaptive-ticks mode,
> +	even though it would be correct to allow it to do so.
> +
> +	Better handling of these sorts of situations is future work.
> +
> +o	A reboot is required to reconfigure both adaptive idle and RCU
> +	callback offloading.  Runtime reconfiguration could be provided
> +	if needed, however, due to the complexity of reconfiguring RCU
> +	at runtime, there would need to be an earthshakingly good reason.
> +	Especially given the option of simply offloading RCU callbacks
> +	from all CPUs.
> +
> +o	Additional configuration is required to deal with other sources
> +	of OS jitter, including interrupts and system-utility tasks
> +	and processes.  This configuration normally involves binding
> +	interrupts and tasks to particular CPUs.
> +
> +o	Some sources of OS jitter can currently be eliminated only by
> +	constraining the workload.  For example, the only way to eliminate
> +	OS jitter due to global TLB shootdowns is to avoid the unmapping
> +	operations (such as kernel module unload operations) that result
> +	in these shootdowns.  For another example, page faults and TLB
> +	misses can be reduced (and in some cases eliminated) by using
> +	huge pages and by constraining the amount of memory used by the
> +	application.

Good. What about prefaulting the working set of each piece of work?

> +
> +o	Unless all CPUs are idle, at least one CPU must keep the
> +	scheduling-clock interrupt going in order to support accurate
> +	timekeeping.
> -- 
> 1.8.1.5
> 
> 

-- 
Regards/Gruss,
    Boris.

Sent from a fat crate under my desk. Formatting is fine.
--

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH documentation 1/2] nohz1: Add documentation.
  2013-04-11 17:14   ` Arjan van de Ven
@ 2013-04-11 18:27     ` Paul E. McKenney
  2013-04-11 18:43       ` Dipankar Sarma
  0 siblings, 1 reply; 35+ messages in thread
From: Paul E. McKenney @ 2013-04-11 18:27 UTC (permalink / raw)
  To: Arjan van de Ven
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, niv, tglx, peterz, rostedt, Valdis.Kletnieks, dhowells,
	edumazet, darren, fweisbec, sbw, Borislav Petkov, Kevin Hilman,
	Christoph Lameter, arnd, Robin.Randhawa, linux-rt-users

On Thu, Apr 11, 2013 at 10:14:28AM -0700, Arjan van de Ven wrote:
> >+2.	Many architectures will place dyntick-idle CPUs into deep sleep
> >+	states, which further degrades from-idle transition latencies.
> >+
> I think this part should just be deleted.
> On x86, the deeper idle states are even used with non-tickless system (the break even times are
> quite a bit less than even 1 msec).
> I can't imagine that ARM is worse on this, at which point the statement above is highly dubious

Interesting point, and I freely admit that I don't have full knowledge
of the energy-consumption characteristics of all the architectures that
Linux supports.  Adding a few of the ARM guys on CC for their take,
plus linux-rt-users.

If there are no objections, I will delete point 2 above as Arjan suggests.

							Thanx, Paul


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH documentation 2/2] kthread: Document ways of reducing OS jitter due to per-CPU kthreads
  2013-04-11 17:18     ` Randy Dunlap
@ 2013-04-11 18:40       ` Paul E. McKenney
  2013-04-11 20:09         ` Randy Dunlap
  0 siblings, 1 reply; 35+ messages in thread
From: Paul E. McKenney @ 2013-04-11 18:40 UTC (permalink / raw)
  To: Randy Dunlap
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, niv, tglx, peterz, rostedt, Valdis.Kletnieks, dhowells,
	edumazet, darren, fweisbec, sbw, Borislav Petkov,
	Arjan van de Ven, Kevin Hilman, Christoph Lameter

On Thu, Apr 11, 2013 at 10:18:26AM -0700, Randy Dunlap wrote:
> On 04/11/2013 09:05 AM, Paul E. McKenney wrote:
> >From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> >
> >The Linux kernel uses a number of per-CPU kthreads, any of which might
> >contribute to OS jitter at any time.  The usual approach to normal
> >kthreads, namely to affinity them to a "housekeeping" CPU, does not
> 
> ugh.               to affine them

How about s/affinity/bind/ instead?

> >work with these kthreads because they cannot operate correctly if moved
> >to some other CPU.  This commit therefore lists ways of controlling OS
> >jitter from the Linux kernel's per-CPU kthreads.
> >
> >Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> >Cc: Frederic Weisbecker <fweisbec@gmail.com>
> >Cc: Steven Rostedt <rostedt@goodmis.org>
> >Cc: Borislav Petkov <bp@alien8.de>
> >Cc: Arjan van de Ven <arjan@linux.intel.com>
> >Cc: Kevin Hilman <khilman@linaro.org>
> >Cc: Christoph Lameter <cl@linux.com>
> >---
> >  Documentation/kernel-per-CPU-kthreads.txt | 159 ++++++++++++++++++++++++++++++
> >  1 file changed, 159 insertions(+)
> >  create mode 100644 Documentation/kernel-per-CPU-kthreads.txt
> >
> >diff --git a/Documentation/kernel-per-CPU-kthreads.txt b/Documentation/kernel-per-CPU-kthreads.txt
> >new file mode 100644
> >index 0000000..495dacf
> >--- /dev/null
> >+++ b/Documentation/kernel-per-CPU-kthreads.txt
> >@@ -0,0 +1,159 @@
> >+REDUCING OS JITTER DUE TO PER-CPU KTHREADS
> >+
> >+This document lists per-CPU kthreads in the Linux kernel and presents
> >+options to control OS jitter due to these kthreads.  Note that kthreads
> >+that are not per-CPU are not listed here -- to reduce OS jitter from
> >+non-per-CPU kthreads, bind them to a "housekeeping" CPU that is dedicated
> >+to such work.
> >+
> >+
> >+Name: ehca_comp/%u
> >+Purpose: Periodically process Infiniband-related work.
> >+To reduce corresponding OS jitter, do any of the following:
> >+1.	Don't use EHCA Infiniband hardware.  This will prevent these
> >+	kthreads from being created in the first place.  (This will
> >+	work for most people, as this hardware, though important,
> >+	is relatively old as is produced in relatively low unit
> >+	volumes.)
> >+2.	Do all EHCA-Infiniband-related work on other CPUs, including
> >+	interrupts.
> >+
> >+
> >+Name: irq/%d-%s
> >+Purpose: Handle threaded interrupts.
> >+To reduce corresponding OS jitter, do the following:
> >+1.	Use irq affinity to force the irq threads to execute on
> >+	some other CPU.
> 
> It would be very nice to explain here how that is done.

Documentation/IRQ-affinity.txt

I added a pointer to this near the beginning.

> >+
> >+Name: kcmtpd_ctr_%d
> >+Purpose: Handle Bluetooth work.
> >+To reduce corresponding OS jitter, do one of the following:
> >+1.	Don't use Bluetooth, in cwhich case these kthreads won't be
> 
> 	                        which

Good catch, fixed.

> >+	created in the first place.
> >+2.	Use irq affinity to force Bluetooth-related interrupts to
> >+	occur on some other CPU and furthermore initiate all
> >+	Bluetooth activity from some other CPU.
> >+
> >+Name: ksoftirqd/%u
> >+Purpose: Execute softirq handlers when threaded or when under heavy load.
> >+To reduce corresponding OS jitter, each softirq vector must be handled
> >+separately as follows:
> >+TIMER_SOFTIRQ:
> >+1.	Build with CONFIG_HOTPLUG_CPU=y.
> >+2.	To the extent possible, keep the CPU out of the kernel when it
> 
> I guess I have a different viewpoint.  I would say:  keep the kernel
> off of that CPU ....

The rationale for the viewpoint that I chose is that many workloads that
care about OS jitter run CPU-bound userspace threads.  The more that
these threads avoid system calls, the less opportunity for OS jitter to
slip in.  So in this case, the application writer really is keeping the
CPU out of the kernel.

> >+	is non-idle, for example, by forcing user and kernel threads as
> >+	well as interrupts to execute elsewhere.
> >+3.	Force the CPU offline, then bring it back online.  This forces
> >+	recurring timers to migrate elsewhere.  If you are concerned
> >+	with multiple CPUs, force them all offline before bringing the
> >+	first one back online.
> >+NET_TX_SOFTIRQ and NET_RX_SOFTIRQ:  Do all of the following:
> >+1.	Force networking interrupts onto other CPUs.
> >+2.	Initiate any network I/O on other CPUs.
> >+3.	Prevent CPU-hotplug operations from being initiated from tasks
> >+	that might run on the CPU to be de-jittered.
> >+BLOCK_SOFTIRQ:  Do all of the following:
> >+1.	Force block-device interrupts onto some other CPU.
> >+2.	Initiate any block I/O on other CPUs.
> >+3.	Prevent CPU-hotplug operations from being initiated from tasks
> >+	that might run on the CPU to be de-jittered.
> >+BLOCK_IOPOLL_SOFTIRQ:  Do all of the following:
> >+1.	Force block-device interrupts onto some other CPU.
> >+2.	Initiate any block I/O and block-I/O polling on other CPUs.
> >+3.	Prevent CPU-hotplug operations from being initiated from tasks
> >+	that might run on the CPU to be de-jittered.
> >+TASKLET_SOFTIRQ: Do one or more of the following:
> >+1.	Avoid use of drivers that use tasklets.
> >+2.	Convert all drivers that you must use from tasklets to workqueues.
> >+3.	Force interrupts for drivers using tasklets onto other CPUs,
> >+	and also do I/O involving these drivers on other CPUs.
> >+SCHED_SOFTIRQ: Do all of the following:
> >+1.	Avoid sending scheduler IPIs to the CPU to be de-jittered,
> >+	for example, ensure that at most one runnable kthread is
> >+	present on that CPU.  If a thread awakens that expects
> >+	to run on the de-jittered CPU, the scheduler will send
> >+	an IPI that can result in a subsequent SCHED_SOFTIRQ.
> >+2.	Build with CONFIG_RCU_NOCB_CPU=y, CONFIG_RCU_NOCB_CPU_ALL=y,
> >+	CONFIG_NO_HZ_EXTENDED=y, and in addition ensure that the CPU
> >+	to be de-jittered is marked as an adaptive-ticks CPU using the
> >+	"nohz_extended=" boot parameter.  This reduces the number of
> >+	scheduler-clock interrupts that the de-jittered CPU receives,
> >+	minimizing its chances of being selected to do load balancing,
> >+	which happens in SCHED_SOFTIRQ context.
> >+3.	To the extent possible, keep the CPU out of the kernel when it
> 
> same viewpoint point.

Same rationale.  ;-)

> >+	is non-idle, for example, by forcing user and kernel threads as
> >+	well as interrupts to execute elsewhere.  This further reduces
> >+	the number of scheduler-clock interrupts that the de-jittered
> >+	CPU receives.
> >+HRTIMER_SOFTIRQ:  Do all of the following:
> >+1.	Build with CONFIG_HOTPLUG_CPU=y.
> >+2.	To the extent possible, keep the CPU out of the kernel when it
> >+	is non-idle, for example, by forcing user and kernel threads as
> >+	well as interrupts to execute elsewhere.
> >+3.	Force the CPU offline, then bring it back online.  This forces
> >+	recurring timers to migrate elsewhere.  If you are concerned
> >+	with multiple CPUs, force them all offline before bringing the
> >+	first one back online.
> >+RCU_SOFTIRQ:  Do at least one of the following:
> >+1.	Offload callbacks and keep the CPU in either dyntick-idle or
> >+	adaptive-ticks state by doing all of the following:
> >+	a.	Build with CONFIG_RCU_NOCB_CPU=y, CONFIG_RCU_NOCB_CPU_ALL=y,
> >+		CONFIG_NO_HZ_EXTENDED=y, and in addition ensure that
> >+		the CPU to be de-jittered is marked as an adaptive-ticks CPU
> >+		using the "nohz_extended=" boot parameter.
> >+	b.	To the extent possible, keep the CPU out of the kernel
> 
> viewpoint?

Ditto.

> >+		when it is non-idle, for example, by forcing user and
> >+		kernel threads as well as interrupts to execute elsewhere.
> >+2.	Enable RCU to do its processing remotely via dyntick-idle by
> >+	doing all of the following:
> >+	a.	Build with CONFIG_NO_HZ=y and CONFIG_RCU_FAST_NO_HZ=y.
> >+	b.	To the extent possible, keep the CPU out of the kernel
> 
> viewpoint?

Ditto.

> >+		when it is non-idle, for example, by forcing user and
> >+		kernel threads as well as interrupts to execute elsewhere.
> >+	c.	Ensure that the CPU goes idle frequently, allowing other
> >+		CPUs to detect that it has passed through an RCU
> >+		quiescent state.
> >+
> >+Name: rcuc/%u
> >+Purpose: Execute RCU callbacks in CONFIG_RCU_BOOST=y kernels.
> >+To reduce corresponding OS jitter, do at least one of the following:
> >+1.	Build the kernel with CONFIG_PREEMPT=n.  This prevents these
> >+	kthreads from being created in the first place, and also prevents
> >+	RCU priority boosting from ever being required.  This approach
> >+	is feasible for workloads that do not require high degrees of
> >+	responsiveness.
> >+2.	Build the kernel with CONFIG_RCU_BOOST=n.  This prevents these
> >+	kthreads from being created in the first place.  This approach
> >+	is feasible only if your workload never requires RCU priority
> >+	boosting, for example, if you ensure ample idle time on all CPUs
> >+	that might execute within the kernel.
> >+3.	Build with CONFIG_RCU_NOCB_CPU=y and CONFIG_RCU_NOCB_CPU_ALL=y,
> >+	which offloads all RCU callbacks to kthreads that can be moved
> >+	off of CPUs susceptible to OS jitter.  This approach prevents the
> >+	rcuc/%u kthreads from having any work to do, and are therefore
> >+	never awakened.
> >+4.	Ensure that then CPU never enters the kernel and avoid any
> 
> 	            the

Good catch, fixed.

> viewpoint?

Rationale.

> >+	CPU hotplug operations.  This is another way of preventing any
> >+	callbacks from being queued on the CPU, again preventing the
> >+	rcuc/%u kthreads from having any work to do.
> >+
> >+Name: rcuob/%d, rcuop/%d, and rcuos/%d
> >+Purpose: Offload RCU callbacks from the corresponding CPU.
> >+To reduce corresponding OS jitter, do at least one of the following:
> >+1.	Use affinity, cgroups, or other mechanism to force these kthreads
> >+	to execute on some other CPU.
> >+2.	Build with CONFIG_RCU_NOCB_CPUS=n, which will prevent these
> >+	kthreads from being created in the first place.  However,
> >+	please note that this will not eliminate the corresponding
> >+	OS jitter, but will instead merely shift it to softirq.
> >+
> >+Name: watchdog/%u
> >+Purpose: Detect software lockups on each CPU.
> >+To reduce corresponding OS jitter, do at least one of the following:
> >+1.	Build with CONFIG_LOCKUP_DETECTOR=n, which will prevent these
> >+	kthreads from being created in the first place.
> >+2.	Echo a zero to /proc/sys/kernel/watchdog to disable the
> >+	watchdog timer.
> >+3.	Echo a large number of /proc/sys/kernel/watchdog_thresh in
> >+	order to reduce the frequency of OS jitter due to the watchdog
> >+	timer down to a level that is acceptable for your workload.

Thank you for your review and comments!  Given my rationale above,
are you still comfortable with my applying your Reviewed-by?

							Thanx, Paul

> Reviewed-by: Randy Dunlap <rdunlap@infradead.org>
> 
> 
> -- 
> ~Randy
> 


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH documentation 1/2] nohz1: Add documentation.
  2013-04-11 18:27     ` Paul E. McKenney
@ 2013-04-11 18:43       ` Dipankar Sarma
  2013-04-11 19:14         ` Paul E. McKenney
  0 siblings, 1 reply; 35+ messages in thread
From: Dipankar Sarma @ 2013-04-11 18:43 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Arjan van de Ven, linux-kernel, mingo, laijs, akpm,
	mathieu.desnoyers, josh, niv, tglx, peterz, rostedt,
	Valdis.Kletnieks, dhowells, edumazet, darren, fweisbec, sbw,
	Borislav Petkov, Kevin Hilman, Christoph Lameter, arnd,
	Robin.Randhawa, linux-rt-users

On Thu, Apr 11, 2013 at 11:27:27AM -0700, Paul E. McKenney wrote:
> On Thu, Apr 11, 2013 at 10:14:28AM -0700, Arjan van de Ven wrote:
> > >+2.	Many architectures will place dyntick-idle CPUs into deep sleep
> > >+	states, which further degrades from-idle transition latencies.
> > >+
> > I think this part should just be deleted.
> > On x86, the deeper idle states are even used with non-tickless system (the break even times are
> > quite a bit less than even 1 msec).
> > I can't imagine that ARM is worse on this, at which point the statement above is highly dubious
> 
> Interesting point, and I freely admit that I don't have full knowledge
> of the energy-consumption characteristics of all the architectures that
> Linux supports.  Adding a few of the ARM guys on CC for their take,
> plus linux-rt-users.
> 
> If there are no objections, I will delete point 2 above as Arjan suggests.

What Arjan said will also be true for Linux on Power systems. I am not
sure "many architectures" would be the right way to characterize it.

Thanks
Dipankar


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH documentation 1/2] nohz1: Add documentation.
  2013-04-11 18:25   ` Borislav Petkov
@ 2013-04-11 19:13     ` Paul E. McKenney
  2013-04-11 20:19       ` Borislav Petkov
  2013-04-12  8:05       ` Peter Zijlstra
  0 siblings, 2 replies; 35+ messages in thread
From: Paul E. McKenney @ 2013-04-11 19:13 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, niv, tglx, peterz, rostedt, Valdis.Kletnieks, dhowells,
	edumazet, darren, fweisbec, sbw, Arjan van de Ven, Kevin Hilman,
	Christoph Lameter

On Thu, Apr 11, 2013 at 08:25:02PM +0200, Borislav Petkov wrote:
> Ok,
> 
> here's some more Savel fun, feel free to take whatever you like. :)

;-) ;-) ;-)

> On Thu, Apr 11, 2013 at 09:05:58AM -0700, Paul E. McKenney wrote:
> > From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> > 
> > Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> > Cc: Frederic Weisbecker <fweisbec@gmail.com>
> > Cc: Steven Rostedt <rostedt@goodmis.org>
> > Cc: Borislav Petkov <bp@alien8.de>
> > Cc: Arjan van de Ven <arjan@linux.intel.com>
> > Cc: Kevin Hilman <khilman@linaro.org>
> > Cc: Christoph Lameter <cl@linux.com>
> > ---
> >  Documentation/timers/NO_HZ.txt | 245 +++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 245 insertions(+)
> >  create mode 100644 Documentation/timers/NO_HZ.txt
> > 
> > diff --git a/Documentation/timers/NO_HZ.txt b/Documentation/timers/NO_HZ.txt
> > new file mode 100644
> > index 0000000..6b33f6b
> > --- /dev/null
> > +++ b/Documentation/timers/NO_HZ.txt
> > @@ -0,0 +1,245 @@
> > +		NO_HZ: Reducing Scheduling-Clock Ticks
> > +
> > +
> > +This document describes Kconfig options and boot parameters that can
> > +reduce the number of scheduling-clock interrupts, thereby improving energy
> > +efficiency and reducing OS jitter.  Reducing OS jitter is important for
> > +some types of computationally intensive high-performance computing (HPC)
> > +applications and for real-time applications.
> > +
> > +There are two major aspects of scheduling-clock interrupt reduction:
> 
> I'd simplify this:
> 
> There are two main reasons for reducing the amount of scheduling-clock
> interrupts:

How about "There are two main contexts in which the number of
scheduling-clock interrupts can be reduced:"?

> > +
> > +1.	Idle CPUs.
> > +
> > +2.	CPUs having only one runnable task.
> > +
> > +These two cases are described in the following sections.
> 
> Not really needed this sentence is, huh, since the two aspects simply
> follow.

Good point.  How about if I also mention the two additional sections
following this:

	These two cases are described in the following two sections,
	followed by a third section on RCU issues and a fourth and final
	section listing known issues.

> > +
> > +
> > +IDLE CPUs
> > +
> > +If a CPU is idle, there is little point in sending it a scheduling-clock
> > +interrupt.  After all, the primary purpose of a scheduling-clock interrupt
> > +is to force a busy CPU to shift its attention among multiple duties,
> > +but an idle CPU by definition has no duties to shift its attention among.
> 
> simplify:
> 
> "... but an idle CPU has, by definition, no duties."

I feel the need to close the loop back to shifting attention, but the
"by definition" could be dropped.  How about the following?

	If a CPU is idle, there is little point in sending it a
	scheduling-clock interrupt.  After all, the primary purpose of a
	scheduling-clock interrupt is to force a busy CPU to shift its
	attention among multiple duties, and an idle CPU has no duties
	to shift its attention among.

> > +
> > +The CONFIG_NO_HZ=y Kconfig option causes the kernel to avoid sending
> 
> I'm guessing you're keeping those CONFIG_* options in sync with
> Frederic's Kconfig changes...

I am trying to, but probably failing.  But that is OK, as I suspect
that there are more changes on the way.  ;-)

> > +scheduling-clock interrupts to idle CPUs, which is critically important
> > +both to battery-powered devices and to highly virtualized mainframes.
> > +A battery-powered device running a CONFIG_NO_HZ=n kernel would drain
> > +its battery very quickly, easily 2-3x as fast as would the same device
> 
> let's write it out:
> 			 " ... easily 2-3 times as fast..."

OK, done.

> > +running a CONFIG_NO_HZ=y kernel.  A mainframe running 1,500 OS instances
> > +might find that half of its CPU time was consumed by scheduling-clock
> > +interrupts.  In these situations, there is strong motivation to avoid
> > +sending scheduling-clock interrupts to idle CPUs.  That said, dyntick-idle
> 
> I hate "that said" :-)

Interesting.  What don't you like about it?

> 						      However, dyntick-idle mode
> 						      doesn't come for free:
> 
> > +mode is not free:
> > +
> > +1.	It increases the number of instructions executed on the path
> > +	to and from the idle loop.
> > +
> > +2.	Many architectures will place dyntick-idle CPUs into deep sleep
> > +	states, which further degrades from-idle transition latencies.
> 
> Above you say "to and from the idle loop", now it is from-idle. Simply say:
> 
> "... which further degrades idle transision latencies" which means both :).

If people speak for this item, I will update it.  Arjan suggested removing
it entirely.

> > +
> > +Therefore, systems with aggressive real-time response constraints
> > +often run CONFIG_NO_HZ=n kernels in order to avoid degrading from-idle
> > +transition latencies.
> > +
> > +An idle CPU that is not receiving scheduling-clock interrupts is said to
> > +be "dyntick-idle", "in dyntick-idle mode", "in nohz mode", or "running
> > +tickless".  The remainder of this document will use "dyntick-idle mode".
> 
> Very good terminology sort-out. :)

Glad you like it.  ;-)

> > +
> > +There is also a boot parameter "nohz=" that can be used to disable
> > +dyntick-idle mode in CONFIG_NO_HZ=y kernels by specifying "nohz=off".
> > +By default, CONFIG_NO_HZ=y kernels boot with "nohz=on", enabling
> > +dyntick-idle mode.
> > +
> > +
> > +CPUs WITH ONLY ONE RUNNABLE TASK
> > +
> > +If a CPU has only one runnable task, there is again little point in
> > +sending it a scheduling-clock interrupt because there is nowhere else
> > +for a CPU with but one runnable task to shift its attention to.
> 
> Simplify:
> 
> "For a very similar reason, there's little point in sending
> scheduling-clock interrupts to a CPU with a single runnable task because
> there's no other task to switch to."

The original was a bit contorted.  How about the following?

	If a CPU has only one runnable task, there is little point in
	sending it a scheduling-clock interrupt because there is no
	other task to switch to.

> > +
> > +The CONFIG_NO_HZ_EXTENDED=y Kconfig option causes the kernel to avoid
> > +sending scheduling-clock interrupts to CPUs with a single runnable task,
> > +and such CPUs are said to be "adaptive-ticks CPUs".  This is important
> > +for applications with aggressive real-time response constraints because
> > +it allows them to improve their worst-case response times by the maximum
> > +duration of a scheduling-clock interrupt.  It is also important for
> > +computationally intensive iterative workloads with short iterations:  If
> 
> "iterative" twice. Maybe:
> 
> "computationally-intensive, short-iteration workloads"?

Good point, updated as suggested.

> Also, s/If/if/

No, the word following colon is capitalized.

> > +any CPU is delayed during a given iteration, all the other CPUs will be
> > +forced to wait idle while the delayed CPU finished.  Thus, the delay is
> > +multiplied by one less than the number of CPUs.  In these situations,
> > +there is again strong motivation to avoid sending scheduling-clock
> > +interrupts.
> > +
> > +The "nohz_extended=" boot parameter specifies which CPUs are to be
> > +adaptive-ticks CPUs.  For example, "nohz_extended=1,6-8" says that CPUs
> > +1, 6, 7, and 8 are to be adaptive-ticks CPUs.  By default, no CPUs will
> > +be adaptive-ticks CPUs.
> 
> Let's put that last sentence above at the beginning of the paragraph.

Good point, done.

> > Note that you are prohibited from marking all
> > +of the CPUs as adaptive-tick CPUs:  At least one non-adaptive-tick CPU
> > +must remain online to handle timekeeping tasks in order to ensure that
> > +gettimeofday() returns sane values on adaptive-tick CPUs.
> 
> "... gettimeofday(), for example, ..."

How about "system calls like gettimeofday()"?

> > +
> > +Transitioning to kernel mode does not automatically force that CPU out
> > +of adaptive-ticks mode.  The CPU will exit adaptive-ticks mode only if
> > +needed, for example, if that CPU enqueues an RCU callback.
> 
> This paragraph sounds funny, let's flip it:
> 
> Normally, a CPU remains in adaptive-ticks mode as long as possible.
> Transitioning into the kernel doesn't automatically force it out of
> said mode. One possible exit, though, is when this CPU enqueues an RCU
> callback.

Good point -- how about the following?

	Normally, a CPU remains in adaptive-ticks mode as long as
	possible.  In particular, transitioning to kernel mode does
	not automatically change the mode.  Instead, the CPU will exit
	adaptive-ticks mode only if needed, for example, if that CPU
	enqueues an RCU callback.

> > +
> > +Just as with dyntick-idle mode, the benefits of adaptive-tick mode do
> > +not come for free:
> > +
> > +1.	CONFIG_NO_HZ_EXTENDED depends on CONFIG_NO_HZ, so you cannot run
> > +	adaptive ticks without also running dyntick idle.  This dependency
> > +	of CONFIG_NO_HZ_EXTENDED on CONFIG_NO_HZ extends down into the
> > +	implementation.  Therefore, all of the costs of CONFIG_NO_HZ
> > +	are also incurred by CONFIG_NO_HZ_EXTENDED.
> 
> "... are also transitively incurred by CONFIG_NO_HZ_EXTENDED."

Not sure that adding "transitively" helps here.

> Q: are we talking the same costs here or magnified costs due to the
> NO_HZ_EXTENDED addition?

The same costs, from what I can see.

> > +2.	The user/kernel transitions are slightly more expensive due
> > +	to the need to inform kernel subsystems (such as RCU) about
> > +	the change in mode.
> 
> Ah, here it is, NO_HZ_EXTENDED is more expensive than NO_HZ?

In theory, yes.  In practice, it might or might not be measurable.

> > +3.	POSIX CPU timers on adaptive-tick CPUs may fire late (or even
> 
> 					 "... may miss their deadline..."?

Good point, changed.  (And I will need to update this list as well.)

> > +	not at all) because they currently rely on scheduling-tick
> > +	interrupts.  This will likely be fixed in one of two ways: (1)
> > +	Prevent CPUs with POSIX CPU timers from entering adaptive-tick
> > +	mode, or (2) Use hrtimers or other adaptive-ticks-immune mechanism
> > +	to cause the POSIX CPU timer to fire properly.
> > +
> > +4.	If there are more perf events pending than the hardware can
> > +	accommodate, they are normally round-robined so as to collect
> > +	all of them over time.  Adaptive-tick mode may prevent this
> > +	round-robining from happening.  This will likely be fixed by
> > +	preventing CPUs with large numbers of perf events pending from
> > +	entering adaptive-tick mode.
> > +
> > +5.	Scheduler statistics for adaptive-idle CPUs may be computed
> 
> "adaptive-idle"? new term huh?

Good catch!  Changed to "adaptive-tick".

> > +	slightly differently than those for non-adaptive-idle CPUs.
> > +	This may in turn perturb load-balancing of real-time tasks.
> > +
> > +6.	The LB_BIAS scheduler feature is disabled by adaptive ticks.
> > +
> > +Although improvements are expected over time, adaptive ticks is quite
> > +useful for many types of real-time and compute-intensive applications.
> > +However, the drawbacks listed above mean that adaptive ticks should not
> > +(yet) be enabled by default.
> > +
> > +
> > +RCU IMPLICATIONS
> > +
> > +There are situations in which idle CPUs cannot be permitted to
> > +enter either dyntick-idle mode or adaptive-tick mode, the most
> > +familiar being the case where that CPU has RCU callbacks pending.
> 
> "... the common cause being where..."

Good point, I changed "familiar" to "common".

> > +
> > +The CONFIG_RCU_FAST_NO_HZ=y Kconfig option may be used to cause such
> > +CPUs to enter dyntick-idle mode or adaptive-tick mode anyway, though a
> > +timer will awaken these CPUs every four jiffies in order to ensure that
> > +the RCU callbacks are processed in a timely fashion.
> > +
> > +Another approach is to offload RCU callback processing to "rcuo" kthreads
> > +using the CONFIG_RCU_NOCB_CPU=y.  The specific CPUs to offload may be
> 
> 				" ... option."

Good catch, fixed.

> > +selected via several methods:
> > +
> > +1.	One of three mutually exclusive Kconfig options specify a
> > +	build-time default for the CPUs to offload:
> > +
> > +	a.	The RCU_NOCB_CPU_NONE=y Kconfig option results in
> > +		no CPUs being offloaded.
> > +
> > +	b.	The RCU_NOCB_CPU_ZERO=y Kconfig option causes CPU 0 to
> > +		be offloaded.
> > +
> > +	c.	The RCU_NOCB_CPU_ALL=y Kconfig option causes all CPUs
> > +		to be offloaded.  Note that the callbacks will be
> > +		offloaded to "rcuo" kthreads, and that those kthreads
> > +		will in fact run on some CPU.  However, this approach
> > +		gives fine-grained control on exactly which CPUs the
> > +		callbacks run on, the priority that they run at (including
> 
> simpler:
> 
> "... the callbacks will run along with their priority (including..."

Good point.  I reworded as follows:

	 c.	The RCU_NOCB_CPU_ALL=y Kconfig option causes all CPUs
		to be offloaded.  Note that the callbacks will be
		offloaded to "rcuo" kthreads, and that those kthreads
		will in fact run on some CPU.  However, this approach
		gives fine-grained control on exactly which CPUs the
		callbacks run on, along with their scheduling priority
		(including the default of SCHED_OTHER), and it further
		allows this control to be varied dynamically at runtime.

> > +		the default of SCHED_OTHER), and it further allows
> > +		this control to be varied dynamically at runtime.
> > +
> > +2.	The "rcu_nocbs=" kernel boot parameter, which takes a comma-separated
> > +	list of CPUs and CPU ranges, for example, "1,3-5" selects CPUs 1,
> > +	3, 4, and 5.  The specified CPUs will be offloaded in addition
> > +	to any CPUs specified as offloaded by RCU_NOCB_CPU_ZERO or
> > +	RCU_NOCB_CPU_ALL.
> > +
> > +The offloaded CPUs never have RCU callbacks queued, and therefore RCU
> 
> "The offloaded CPUs then do not queue RCU callbacks, ..."

How about this?

	The offloaded CPUs will never queue RCU callbacks, and
	therefore RCU never prevents offloaded CPUs from entering either
	dyntick-idle mode or adaptive-tick mode.  That said, note that
	it is up to userspace to pin the "rcuo" kthreads to specific
	CPUs if desired.  Otherwise, the scheduler will decide where to
	run them, which might or might not be where you want them to run.

> > +never prevents offloaded CPUs from entering either dyntick-idle mode or
> > +adaptive-tick mode.  That said, note that it is up to userspace to
> > +pin the "rcuo" kthreads to specific CPUs if desired.  Otherwise, the
> > +scheduler will decide where to run them, which might or might not be
> > +where you want them to run.
> > +
> > +
> > +KNOWN ISSUES
> > +
> > +o	Dyntick-idle slows transitions to and from idle slightly.
> > +	In practice, this has not been a problem except for the most
> > +	aggressive real-time workloads, which have the option of disabling
> > +	dyntick-idle mode, an option that most of them take.  However,
> > +	some workloads will no doubt want to use adaptive ticks to
> 
> 			   undoubtedly

I like the connotations of "no doubt" in this case.  ;-)

> > +	eliminate scheduling-clock-tick latencies.  Here are some
> 
> scheduling-clock interrupt latencies?

Good, updated.

> > +	options for these workloads:
> > +
> > +	a.	Use PMQOS from userspace to inform the kernel of your
> > +		latency requirements (preferred).
> > +
> > +	b.	On x86 systems, use the "idle=mwait" boot parameter.
> > +
> > +	c.	On x86 systems, use the "intel_idle.max_cstate=" to limit
> > +	`	the maximum depth C-state depth.
> 
> remove first "depth"

Good catch, I must have been out of my depth.

> > +
> > +	d.	On x86 systems, use the "idle=poll" boot parameter.
> > +		However, please note that use of this parameter can cause
> > +		your CPU to overheat, which may cause thermal throttling
> > +		to degrade your latencies -- and that this degradation can
> > +		be even worse than that of dyntick-idle.  Furthermore,
> > +		this parameter effectively disables Turbo Mode on Intel
> > +		CPUs, which can significantly reduce maximum performance.
> > +
> > +o	Adaptive-ticks slows user/kernel transitions slightly.
> > +	This is not expected to be a problem for computational-intensive
> 
> computationally intensive

Good catch, fixed.

> > +	workloads, which have few such transitions.  Careful benchmarking
> > +	will be required to determine whether or not other workloads
> > +	are significantly affected by this effect.
> > +
> > +o	Adaptive-ticks does not do anything unless there is only one
> > +	runnable task for a given CPU, even though there are a number
> > +	of other situations where the scheduling-clock tick is not
> > +	needed.  To give but one example, consider a CPU that has one
> > +	runnable high-priority SCHED_FIFO task and an arbitrary number
> > +	of low-priority SCHED_OTHER tasks.  In this case, the CPU is
> > +	required to run the SCHED_FIFO task until either it blocks or
> 
> 					    until it either blocks

Good, fixed.

> > +	some other higher-priority task awakens on (or is assigned to)
> > +	this CPU, so there is no point in sending a scheduling-clock
> > +	interrupt to this CPU.	However, the current implementation
> > +	prohibits CPU with a single runnable SCHED_FIFO task and multiple
> > +	runnable SCHED_OTHER tasks from entering adaptive-ticks mode,
> > +	even though it would be correct to allow it to do so.
> > +
> > +	Better handling of these sorts of situations is future work.
> > +
> > +o	A reboot is required to reconfigure both adaptive idle and RCU
> > +	callback offloading.  Runtime reconfiguration could be provided
> > +	if needed, however, due to the complexity of reconfiguring RCU
> > +	at runtime, there would need to be an earthshakingly good reason.
> > +	Especially given the option of simply offloading RCU callbacks
> > +	from all CPUs.
> > +
> > +o	Additional configuration is required to deal with other sources
> > +	of OS jitter, including interrupts and system-utility tasks
> > +	and processes.  This configuration normally involves binding
> > +	interrupts and tasks to particular CPUs.
> > +
> > +o	Some sources of OS jitter can currently be eliminated only by
> > +	constraining the workload.  For example, the only way to eliminate
> > +	OS jitter due to global TLB shootdowns is to avoid the unmapping
> > +	operations (such as kernel module unload operations) that result
> > +	in these shootdowns.  For another example, page faults and TLB
> > +	misses can be reduced (and in some cases eliminated) by using
> > +	huge pages and by constraining the amount of memory used by the
> > +	application.
> 
> Good. What about prefaulting the working set of each piece of work?

Fair point.  I added the following sentence:

	Pre-faulting the working set can also be helpful, as can the
	mlock() and mlockall() system calls.

Thank you for the careful review and helpful comments!

							Thanx, Paul

> > +
> > +o	Unless all CPUs are idle, at least one CPU must keep the
> > +	scheduling-clock interrupt going in order to support accurate
> > +	timekeeping.
> > -- 
> > 1.8.1.5
> > 
> > 
> 
> -- 
> Regards/Gruss,
>     Boris.
> 
> Sent from a fat crate under my desk. Formatting is fine.
> --
> 


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH documentation 1/2] nohz1: Add documentation.
  2013-04-11 18:43       ` Dipankar Sarma
@ 2013-04-11 19:14         ` Paul E. McKenney
  0 siblings, 0 replies; 35+ messages in thread
From: Paul E. McKenney @ 2013-04-11 19:14 UTC (permalink / raw)
  To: Dipankar Sarma
  Cc: Arjan van de Ven, linux-kernel, mingo, laijs, akpm,
	mathieu.desnoyers, josh, niv, tglx, peterz, rostedt,
	Valdis.Kletnieks, dhowells, edumazet, darren, fweisbec, sbw,
	Borislav Petkov, Kevin Hilman, Christoph Lameter, arnd,
	Robin.Randhawa, linux-rt-users

On Fri, Apr 12, 2013 at 12:13:13AM +0530, Dipankar Sarma wrote:
> On Thu, Apr 11, 2013 at 11:27:27AM -0700, Paul E. McKenney wrote:
> > On Thu, Apr 11, 2013 at 10:14:28AM -0700, Arjan van de Ven wrote:
> > > >+2.	Many architectures will place dyntick-idle CPUs into deep sleep
> > > >+	states, which further degrades from-idle transition latencies.
> > > >+
> > > I think this part should just be deleted.
> > > On x86, the deeper idle states are even used with non-tickless system (the break even times are
> > > quite a bit less than even 1 msec).
> > > I can't imagine that ARM is worse on this, at which point the statement above is highly dubious
> > 
> > Interesting point, and I freely admit that I don't have full knowledge
> > of the energy-consumption characteristics of all the architectures that
> > Linux supports.  Adding a few of the ARM guys on CC for their take,
> > plus linux-rt-users.
> > 
> > If there are no objections, I will delete point 2 above as Arjan suggests.
> 
> What Arjan said will also be true for Linux on Power systems. I am not
> sure "many architectures" would be the right way to characterize it.

Very well, I count one non-objection to Arjan's suggestion.  ;-)

							Thanx, Paul


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH documentation 2/2] kthread: Document ways of reducing OS jitter due to per-CPU kthreads
  2013-04-11 18:40       ` Paul E. McKenney
@ 2013-04-11 20:09         ` Randy Dunlap
  2013-04-11 21:00           ` Paul E. McKenney
  0 siblings, 1 reply; 35+ messages in thread
From: Randy Dunlap @ 2013-04-11 20:09 UTC (permalink / raw)
  To: paulmck
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, niv, tglx, peterz, rostedt, Valdis.Kletnieks, dhowells,
	edumazet, darren, fweisbec, sbw, Borislav Petkov,
	Arjan van de Ven, Kevin Hilman, Christoph Lameter

On 04/11/13 11:40, Paul E. McKenney wrote:
> On Thu, Apr 11, 2013 at 10:18:26AM -0700, Randy Dunlap wrote:
>> On 04/11/2013 09:05 AM, Paul E. McKenney wrote:
>>> From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
>>>
>>> The Linux kernel uses a number of per-CPU kthreads, any of which might
>>> contribute to OS jitter at any time.  The usual approach to normal
>>> kthreads, namely to affinity them to a "housekeeping" CPU, does not
>>
>> ugh.               to affine them
> 
> How about s/affinity/bind/ instead?

Yes, that's good.

>>> work with these kthreads because they cannot operate correctly if moved
>>> to some other CPU.  This commit therefore lists ways of controlling OS
>>> jitter from the Linux kernel's per-CPU kthreads.
>>>
>>> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
>>> Cc: Frederic Weisbecker <fweisbec@gmail.com>
>>> Cc: Steven Rostedt <rostedt@goodmis.org>
>>> Cc: Borislav Petkov <bp@alien8.de>
>>> Cc: Arjan van de Ven <arjan@linux.intel.com>
>>> Cc: Kevin Hilman <khilman@linaro.org>
>>> Cc: Christoph Lameter <cl@linux.com>
>>> ---
>>>  Documentation/kernel-per-CPU-kthreads.txt | 159 ++++++++++++++++++++++++++++++
>>>  1 file changed, 159 insertions(+)
>>>  create mode 100644 Documentation/kernel-per-CPU-kthreads.txt
>>>
>>> diff --git a/Documentation/kernel-per-CPU-kthreads.txt b/Documentation/kernel-per-CPU-kthreads.txt
>>> new file mode 100644
>>> index 0000000..495dacf
>>> --- /dev/null
>>> +++ b/Documentation/kernel-per-CPU-kthreads.txt
>>> @@ -0,0 +1,159 @@
>>> +REDUCING OS JITTER DUE TO PER-CPU KTHREADS
>>> +
>>> +
>>> +Name: irq/%d-%s
>>> +Purpose: Handle threaded interrupts.
>>> +To reduce corresponding OS jitter, do the following:
>>> +1.	Use irq affinity to force the irq threads to execute on
>>> +	some other CPU.
>>
>> It would be very nice to explain here how that is done.
> 
> Documentation/IRQ-affinity.txt
> 
> I added a pointer to this near the beginning.
> 

Good.

> Thank you for your review and comments!  Given my rationale above,
> are you still comfortable with my applying your Reviewed-by?

Sure.  Thanks.

>> Reviewed-by: Randy Dunlap <rdunlap@infradead.org>


-- 
~Randy

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH documentation 1/2] nohz1: Add documentation.
  2013-04-11 19:13     ` Paul E. McKenney
@ 2013-04-11 20:19       ` Borislav Petkov
  2013-04-11 21:01         ` Paul E. McKenney
  2013-04-12  8:05       ` Peter Zijlstra
  1 sibling, 1 reply; 35+ messages in thread
From: Borislav Petkov @ 2013-04-11 20:19 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, niv, tglx, peterz, rostedt, Valdis.Kletnieks, dhowells,
	edumazet, darren, fweisbec, sbw, Arjan van de Ven, Kevin Hilman,
	Christoph Lameter

On Thu, Apr 11, 2013 at 12:13:55PM -0700, Paul E. McKenney wrote:
> Thank you for the careful review and helpful comments!

Sure, agreed with all your comments - I think we've so beaten the hell
out of this, it's not even funny. ;-)

Reviewed-by: Borislav Petkov <bp@suse.de>

-- 
Regards/Gruss,
    Boris.

Sent from a fat crate under my desk. Formatting is fine.
--

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH documentation 2/2] kthread: Document ways of reducing OS jitter due to per-CPU kthreads
  2013-04-11 20:09         ` Randy Dunlap
@ 2013-04-11 21:00           ` Paul E. McKenney
  0 siblings, 0 replies; 35+ messages in thread
From: Paul E. McKenney @ 2013-04-11 21:00 UTC (permalink / raw)
  To: Randy Dunlap
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, niv, tglx, peterz, rostedt, Valdis.Kletnieks, dhowells,
	edumazet, darren, fweisbec, sbw, Borislav Petkov,
	Arjan van de Ven, Kevin Hilman, Christoph Lameter

On Thu, Apr 11, 2013 at 01:09:28PM -0700, Randy Dunlap wrote:
> On 04/11/13 11:40, Paul E. McKenney wrote:
> > On Thu, Apr 11, 2013 at 10:18:26AM -0700, Randy Dunlap wrote:
> >> On 04/11/2013 09:05 AM, Paul E. McKenney wrote:
> >>> From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> >>>
> >>> The Linux kernel uses a number of per-CPU kthreads, any of which might
> >>> contribute to OS jitter at any time.  The usual approach to normal
> >>> kthreads, namely to affinity them to a "housekeeping" CPU, does not
> >>
> >> ugh.               to affine them
> > 
> > How about s/affinity/bind/ instead?
> 
> Yes, that's good.
> 
> >>> work with these kthreads because they cannot operate correctly if moved
> >>> to some other CPU.  This commit therefore lists ways of controlling OS
> >>> jitter from the Linux kernel's per-CPU kthreads.
> >>>
> >>> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> >>> Cc: Frederic Weisbecker <fweisbec@gmail.com>
> >>> Cc: Steven Rostedt <rostedt@goodmis.org>
> >>> Cc: Borislav Petkov <bp@alien8.de>
> >>> Cc: Arjan van de Ven <arjan@linux.intel.com>
> >>> Cc: Kevin Hilman <khilman@linaro.org>
> >>> Cc: Christoph Lameter <cl@linux.com>
> >>> ---
> >>>  Documentation/kernel-per-CPU-kthreads.txt | 159 ++++++++++++++++++++++++++++++
> >>>  1 file changed, 159 insertions(+)
> >>>  create mode 100644 Documentation/kernel-per-CPU-kthreads.txt
> >>>
> >>> diff --git a/Documentation/kernel-per-CPU-kthreads.txt b/Documentation/kernel-per-CPU-kthreads.txt
> >>> new file mode 100644
> >>> index 0000000..495dacf
> >>> --- /dev/null
> >>> +++ b/Documentation/kernel-per-CPU-kthreads.txt
> >>> @@ -0,0 +1,159 @@
> >>> +REDUCING OS JITTER DUE TO PER-CPU KTHREADS
> >>> +
> >>> +
> >>> +Name: irq/%d-%s
> >>> +Purpose: Handle threaded interrupts.
> >>> +To reduce corresponding OS jitter, do the following:
> >>> +1.	Use irq affinity to force the irq threads to execute on
> >>> +	some other CPU.
> >>
> >> It would be very nice to explain here how that is done.
> > 
> > Documentation/IRQ-affinity.txt
> > 
> > I added a pointer to this near the beginning.
> > 
> 
> Good.
> 
> > Thank you for your review and comments!  Given my rationale above,
> > are you still comfortable with my applying your Reviewed-by?
> 
> Sure.  Thanks.
> 
> >> Reviewed-by: Randy Dunlap <rdunlap@infradead.org>

I have added your Reviewed-by, thank you again!

							Thanx, Paul


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH documentation 1/2] nohz1: Add documentation.
  2013-04-11 20:19       ` Borislav Petkov
@ 2013-04-11 21:01         ` Paul E. McKenney
  0 siblings, 0 replies; 35+ messages in thread
From: Paul E. McKenney @ 2013-04-11 21:01 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, niv, tglx, peterz, rostedt, Valdis.Kletnieks, dhowells,
	edumazet, darren, fweisbec, sbw, Arjan van de Ven, Kevin Hilman,
	Christoph Lameter

On Thu, Apr 11, 2013 at 10:19:56PM +0200, Borislav Petkov wrote:
> On Thu, Apr 11, 2013 at 12:13:55PM -0700, Paul E. McKenney wrote:
> > Thank you for the careful review and helpful comments!
> 
> Sure, agreed with all your comments - I think we've so beaten the hell
> out of this, it's not even funny. ;-)

;-) ;-) ;-)

> Reviewed-by: Borislav Petkov <bp@suse.de>

I have added your Reviewed-by, thank you!

							Thanx, Paul


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH documentation 1/2] nohz1: Add documentation.
  2013-04-11 19:13     ` Paul E. McKenney
  2013-04-11 20:19       ` Borislav Petkov
@ 2013-04-12  8:05       ` Peter Zijlstra
  2013-04-12 17:54         ` Paul E. McKenney
  1 sibling, 1 reply; 35+ messages in thread
From: Peter Zijlstra @ 2013-04-12  8:05 UTC (permalink / raw)
  To: paulmck
  Cc: Borislav Petkov, linux-kernel, mingo, laijs, dipankar, akpm,
	mathieu.desnoyers, josh, niv, tglx, rostedt, Valdis.Kletnieks,
	dhowells, edumazet, darren, fweisbec, sbw, Arjan van de Ven,
	Kevin Hilman, Christoph Lameter

On Thu, 2013-04-11 at 12:13 -0700, Paul E. McKenney wrote:
> > > +2. Many architectures will place dyntick-idle CPUs into deep sleep
> > > +   states, which further degrades from-idle transition latencies.
> > 
> > Above you say "to and from the idle loop", now it is from-idle. Simply say:
> > 
> > "... which further degrades idle transision latencies" which means both :).
> 
> If people speak for this item, I will update it.  Arjan suggested removing
> it entirely.

So I haven't yet read the entire document, but:

+2.     Many architectures will place dyntick-idle CPUs into deep sleep
+       states, which further degrades from-idle transition latencies.
+
+Therefore, systems with aggressive real-time response constraints
+often run CONFIG_NO_HZ=n kernels in order to avoid degrading from-idle
+transition latencies.

I'm not sure that's the reason.. We can (and do) limit C states to curb
the idle-exit times. The reason we often turn off NOHZ all together is
to further reduce the cost of the idle paths.

All the mucking about with clock states and such is a rather expensive
thing
to do all the time.




^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH documentation 1/2] nohz1: Add documentation.
  2013-04-12  8:05       ` Peter Zijlstra
@ 2013-04-12 17:54         ` Paul E. McKenney
  2013-04-12 17:56           ` Arjan van de Ven
  0 siblings, 1 reply; 35+ messages in thread
From: Paul E. McKenney @ 2013-04-12 17:54 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Borislav Petkov, linux-kernel, mingo, laijs, dipankar, akpm,
	mathieu.desnoyers, josh, niv, tglx, rostedt, Valdis.Kletnieks,
	dhowells, edumazet, darren, fweisbec, sbw, Arjan van de Ven,
	Kevin Hilman, Christoph Lameter

On Fri, Apr 12, 2013 at 10:05:04AM +0200, Peter Zijlstra wrote:
> On Thu, 2013-04-11 at 12:13 -0700, Paul E. McKenney wrote:
> > > > +2. Many architectures will place dyntick-idle CPUs into deep sleep
> > > > +   states, which further degrades from-idle transition latencies.
> > > 
> > > Above you say "to and from the idle loop", now it is from-idle. Simply say:
> > > 
> > > "... which further degrades idle transision latencies" which means both :).
> > 
> > If people speak for this item, I will update it.  Arjan suggested removing
> > it entirely.
> 
> So I haven't yet read the entire document, but:
> 
> +2.     Many architectures will place dyntick-idle CPUs into deep sleep
> +       states, which further degrades from-idle transition latencies.
> +
> +Therefore, systems with aggressive real-time response constraints
> +often run CONFIG_NO_HZ=n kernels in order to avoid degrading from-idle
> +transition latencies.
> 
> I'm not sure that's the reason.. We can (and do) limit C states to curb
> the idle-exit times. The reason we often turn off NOHZ all together is
> to further reduce the cost of the idle paths.
> 
> All the mucking about with clock states and such is a rather expensive
> thing
> to do all the time.

Ah, thank you!  This might help me address Arjan's concerns as well.
How about the following for the disadvantages of CONFIG_NO_HZ=y?

							Thanx, Paul

------------------------------------------------------------------------

1.	It increases the number of instructions executed on the path
	to and from the idle loop.

2.	On many architectures, dyntick-idle mode also increases the
	number of times that clocks must be reprogrammed, and this
	reprogramming can be quite expensive.


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH documentation 1/2] nohz1: Add documentation.
  2013-04-12 17:54         ` Paul E. McKenney
@ 2013-04-12 17:56           ` Arjan van de Ven
  2013-04-12 20:39             ` Paul E. McKenney
  2013-04-15 16:00             ` Christoph Lameter
  0 siblings, 2 replies; 35+ messages in thread
From: Arjan van de Ven @ 2013-04-12 17:56 UTC (permalink / raw)
  To: paulmck
  Cc: Peter Zijlstra, Borislav Petkov, linux-kernel, mingo, laijs,
	dipankar, akpm, mathieu.desnoyers, josh, niv, tglx, rostedt,
	Valdis.Kletnieks, dhowells, edumazet, darren, fweisbec, sbw,
	Kevin Hilman, Christoph Lameter


> ------------------------------------------------------------------------
>
> 1.	It increases the number of instructions executed on the path
> 	to and from the idle loop.
>
> 2.	On many architectures, dyntick-idle mode also increases the
> 	number of times that clocks must be reprogrammed, and this
> 	reprogramming can be quite expensive.


it's really that we're no longer using periodic clocks, but only one-shot clocks only.
(which then leads to having to program them every time)

but arguably, that's because of HRTIMERS more than NOHZ
(e.g. I bet we still turn off periodic even for nohz as long as hrtimers are enabled)


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH documentation 1/2] nohz1: Add documentation.
  2013-04-12 17:56           ` Arjan van de Ven
@ 2013-04-12 20:39             ` Paul E. McKenney
  2013-04-15 16:00             ` Christoph Lameter
  1 sibling, 0 replies; 35+ messages in thread
From: Paul E. McKenney @ 2013-04-12 20:39 UTC (permalink / raw)
  To: Arjan van de Ven
  Cc: Peter Zijlstra, Borislav Petkov, linux-kernel, mingo, laijs,
	dipankar, akpm, mathieu.desnoyers, josh, niv, tglx, rostedt,
	Valdis.Kletnieks, dhowells, edumazet, darren, fweisbec, sbw,
	Kevin Hilman, Christoph Lameter

On Fri, Apr 12, 2013 at 10:56:35AM -0700, Arjan van de Ven wrote:
> 
> >------------------------------------------------------------------------
> >
> >1.	It increases the number of instructions executed on the path
> >	to and from the idle loop.
> >
> >2.	On many architectures, dyntick-idle mode also increases the
> >	number of times that clocks must be reprogrammed, and this
> >	reprogramming can be quite expensive.
> 
> 
> it's really that we're no longer using periodic clocks, but only one-shot clocks only.
> (which then leads to having to program them every time)

Fair enough, but I believe that I have captured this.

> but arguably, that's because of HRTIMERS more than NOHZ
> (e.g. I bet we still turn off periodic even for nohz as long as hrtimers are enabled)

Might be, but the more detail I add, the higher the maintenance burden
keeping this document up to date.  ;-)

							Thanx, Paul


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH documentation 1/2] nohz1: Add documentation.
  2013-04-12 17:56           ` Arjan van de Ven
  2013-04-12 20:39             ` Paul E. McKenney
@ 2013-04-15 16:00             ` Christoph Lameter
  2013-04-15 16:41               ` Arjan van de Ven
  1 sibling, 1 reply; 35+ messages in thread
From: Christoph Lameter @ 2013-04-15 16:00 UTC (permalink / raw)
  To: Arjan van de Ven
  Cc: paulmck, Peter Zijlstra, Borislav Petkov, linux-kernel, mingo,
	laijs, dipankar, akpm, mathieu.desnoyers, josh, niv, tglx,
	rostedt, Valdis.Kletnieks, dhowells, edumazet, darren, fweisbec,
	sbw, Kevin Hilman

On Fri, 12 Apr 2013, Arjan van de Ven wrote:

> but arguably, that's because of HRTIMERS more than NOHZ
> (e.g. I bet we still turn off periodic even for nohz as long as hrtimers are
> enabled)

If we are able to only get rid of one timer tick on average with dynticks
then I would think that is enough to justify having it on by default.

If the scheduling period from the schduler is around 20ms then one may be
able to save processing 20 timer ticks by going to htimers.

The main issue with hrtimers is likely going to be that is it is too much
effort for small timerframes less than 10ms. Could we  only switch off the
timer tick if the next event is more than 10 ticks aways?


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH documentation 1/2] nohz1: Add documentation.
  2013-04-15 16:00             ` Christoph Lameter
@ 2013-04-15 16:41               ` Arjan van de Ven
  2013-04-15 16:53                 ` Christoph Lameter
  0 siblings, 1 reply; 35+ messages in thread
From: Arjan van de Ven @ 2013-04-15 16:41 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: paulmck, Peter Zijlstra, Borislav Petkov, linux-kernel, mingo,
	laijs, dipankar, akpm, mathieu.desnoyers, josh, niv, tglx,
	rostedt, Valdis.Kletnieks, dhowells, edumazet, darren, fweisbec,
	sbw, Kevin Hilman

On 4/15/2013 9:00 AM, Christoph Lameter wrote:
> On Fri, 12 Apr 2013, Arjan van de Ven wrote:
>
>> but arguably, that's because of HRTIMERS more than NOHZ
>> (e.g. I bet we still turn off periodic even for nohz as long as hrtimers are
>> enabled)
>
> If we are able to only get rid of one timer tick on average with dynticks
> then I would think that is enough to justify having it on by default.
>
> If the scheduling period from the schduler is around 20ms then one may be
> able to save processing 20 timer ticks by going to htimers.
>
> The main issue with hrtimers is likely going to be that is it is too much
> effort for small timerframes less than 10ms. Could we  only switch off the
> timer tick if the next event is more than 10 ticks aways?
>

to put the "cost" into perspective; programming a timer in one-shot mode
is some math on the cpu (to go from kernel time to hardware time),
which is a multiply and a shift (or a divide), and then actually
programming the hardware, which is at the cost of (approximately) a cachemiss or two
(so give or take in the "hundreds" of cycles)
at least on moderately modern hardware (e.g. last few years)

not cheap. But also not INSANE expensive... and it breaks-even already if you only
save one or two cache misses elsewhere.


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH documentation 1/2] nohz1: Add documentation.
  2013-04-15 16:41               ` Arjan van de Ven
@ 2013-04-15 16:53                 ` Christoph Lameter
  2013-04-15 17:21                   ` Arjan van de Ven
  0 siblings, 1 reply; 35+ messages in thread
From: Christoph Lameter @ 2013-04-15 16:53 UTC (permalink / raw)
  To: Arjan van de Ven
  Cc: paulmck, Peter Zijlstra, Borislav Petkov, linux-kernel, mingo,
	laijs, dipankar, akpm, mathieu.desnoyers, josh, niv, tglx,
	rostedt, Valdis.Kletnieks, dhowells, edumazet, darren, fweisbec,
	sbw, Kevin Hilman

On Mon, 15 Apr 2013, Arjan van de Ven wrote:

> to put the "cost" into perspective; programming a timer in one-shot mode
> is some math on the cpu (to go from kernel time to hardware time),
> which is a multiply and a shift (or a divide), and then actually
> programming the hardware, which is at the cost of (approximately) a cachemiss
> or two
> (so give or take in the "hundreds" of cycles)
> at least on moderately modern hardware (e.g. last few years)

Well these are PCI transactions which are bound to be high latency
reaching may be more than microscond in total. A timer interrupt may last
2-4 microsecond at best without PCI transactions.

> not cheap. But also not INSANE expensive... and it breaks-even already if you
> only
> save one or two cache misses elsewhere.

Ok then maybe go dynticks if we can save at least one timer tick?

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH documentation 1/2] nohz1: Add documentation.
  2013-04-15 16:53                 ` Christoph Lameter
@ 2013-04-15 17:21                   ` Arjan van de Ven
  0 siblings, 0 replies; 35+ messages in thread
From: Arjan van de Ven @ 2013-04-15 17:21 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: paulmck, Peter Zijlstra, Borislav Petkov, linux-kernel, mingo,
	laijs, dipankar, akpm, mathieu.desnoyers, josh, niv, tglx,
	rostedt, Valdis.Kletnieks, dhowells, edumazet, darren, fweisbec,
	sbw, Kevin Hilman

On 4/15/2013 9:53 AM, Christoph Lameter wrote:
> On Mon, 15 Apr 2013, Arjan van de Ven wrote:
>
>> to put the "cost" into perspective; programming a timer in one-shot mode
>> is some math on the cpu (to go from kernel time to hardware time),
>> which is a multiply and a shift (or a divide), and then actually
>> programming the hardware, which is at the cost of (approximately) a cachemiss
>> or two
>> (so give or take in the "hundreds" of cycles)
>> at least on moderately modern hardware (e.g. last few years)
>
> Well these are PCI transactions

eh no not on anything modern

they're touching the local apic which is core-local

> Ok then maybe go dynticks if we can save at least one timer tick?

switching between periodic versus not is actually non-trivial and much more expensive
(and complex) so not something you want to do all the time.
once during early boot is hard enough already


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH documentation 1/2] nohz1: Add documentation.
  2013-04-11 16:05 ` [PATCH documentation 1/2] nohz1: Add documentation Paul E. McKenney
                     ` (3 preceding siblings ...)
  2013-04-11 18:25   ` Borislav Petkov
@ 2013-04-19 21:01   ` Kevin Hilman
  2013-04-19 21:47     ` Paul E. McKenney
  2013-04-27 13:26   ` Frederic Weisbecker
  5 siblings, 1 reply; 35+ messages in thread
From: Kevin Hilman @ 2013-04-19 21:01 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, niv, tglx, peterz, rostedt, Valdis.Kletnieks, dhowells,
	edumazet, darren, fweisbec, sbw, Borislav Petkov,
	Arjan van de Ven, Christoph Lameter

"Paul E. McKenney" <paulmck@linux.vnet.ibm.com> writes:

> +KNOWN ISSUES

[...]

> +o	Unless all CPUs are idle, at least one CPU must keep the
> +	scheduling-clock interrupt going in order to support accurate
> +	timekeeping.

At least with the implementation I'm using (Frederic's 3.9-nohz1
branch), at least one CPU is forced to stay out of dyntick-idle
*always*, even if all CPUs are idle.

IMO, this is important to list as a known issue since this will have
its own power implications when the system is mostly idle.

Otherwise, document looks great.  

Reviewed-by: Kevin Hilman <khilman@linaro.org>

Kevin


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH documentation 1/2] nohz1: Add documentation.
  2013-04-19 21:01   ` Kevin Hilman
@ 2013-04-19 21:47     ` Paul E. McKenney
  0 siblings, 0 replies; 35+ messages in thread
From: Paul E. McKenney @ 2013-04-19 21:47 UTC (permalink / raw)
  To: Kevin Hilman
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, niv, tglx, peterz, rostedt, Valdis.Kletnieks, dhowells,
	edumazet, darren, fweisbec, sbw, Borislav Petkov,
	Arjan van de Ven, Christoph Lameter

On Fri, Apr 19, 2013 at 02:01:49PM -0700, Kevin Hilman wrote:
> "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> writes:
> 
> > +KNOWN ISSUES
> 
> [...]
> 
> > +o	Unless all CPUs are idle, at least one CPU must keep the
> > +	scheduling-clock interrupt going in order to support accurate
> > +	timekeeping.
> 
> At least with the implementation I'm using (Frederic's 3.9-nohz1
> branch), at least one CPU is forced to stay out of dyntick-idle
> *always*, even if all CPUs are idle.
> 
> IMO, this is important to list as a known issue since this will have
> its own power implications when the system is mostly idle.

Good point!  I added the following at the end of the known issues:

o	If there are adaptive-ticks CPUs, there will be at least one
	CPU keeping the scheduling-clock interrupt going, even if all
	CPUs are otherwise idle.

> Otherwise, document looks great.  
> 
> Reviewed-by: Kevin Hilman <khilman@linaro.org>

Added, thank you for the review and comments!

							Thanx, Paul


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH documentation 1/2] nohz1: Add documentation.
  2013-04-11 16:05 ` [PATCH documentation 1/2] nohz1: Add documentation Paul E. McKenney
                     ` (4 preceding siblings ...)
  2013-04-19 21:01   ` Kevin Hilman
@ 2013-04-27 13:26   ` Frederic Weisbecker
  5 siblings, 0 replies; 35+ messages in thread
From: Frederic Weisbecker @ 2013-04-27 13:26 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: linux-kernel, mingo, laijs, dipankar, akpm, mathieu.desnoyers,
	josh, niv, tglx, peterz, rostedt, Valdis.Kletnieks, dhowells,
	edumazet, darren, sbw, Borislav Petkov, Arjan van de Ven,
	Kevin Hilman, Christoph Lameter

2013/4/11 Paul E. McKenney <paulmck@linux.vnet.ibm.com>:
> From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
>
> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> Cc: Frederic Weisbecker <fweisbec@gmail.com>
> Cc: Steven Rostedt <rostedt@goodmis.org>
> Cc: Borislav Petkov <bp@alien8.de>
> Cc: Arjan van de Ven <arjan@linux.intel.com>
> Cc: Kevin Hilman <khilman@linaro.org>
> Cc: Christoph Lameter <cl@linux.com>
> ---

There have been some significant interest and amount of review on this
document. That's a good sign ;-)

We should probably merge the next version of this into tip:timers/nohz
and then iteratively address the remaining reviews. Documentation for
that new full dynticks stuff (and of course dynticks in general) is
critically important. I'm especially worried about warning the users
concerning the current limitations: scheduler stats, fairness, load
balancing and scheduler features in general are not yet well handled
with full dynticks. We'll improve that over time but these issues need
to be visible early.

Thanks!

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH documentation 2/2] kthread: Document ways of reducing OS jitter due to per-CPU kthreads
  2013-04-25 20:59             ` Thomas Gleixner
@ 2013-04-25 21:23               ` Paul E. McKenney
  0 siblings, 0 replies; 35+ messages in thread
From: Paul E. McKenney @ 2013-04-25 21:23 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Borislav Petkov, linux-kernel, mingo, sbw, Frederic Weisbecker,
	Steven Rostedt, Arjan van de Ven, Kevin Hilman,
	Christoph Lameter, Olivier Baetz

On Thu, Apr 25, 2013 at 10:59:05PM +0200, Thomas Gleixner wrote:
> On Thu, 25 Apr 2013, Paul E. McKenney wrote:
> > On Thu, Apr 25, 2013 at 12:23:12PM +0200, Borislav Petkov wrote:
> > > On Mon, Apr 22, 2013 at 09:03:29PM -0700, Paul E. McKenney wrote:
> > > > > > +Name: ehca_comp/%u
> > > > > > +Purpose: Periodically process Infiniband-related work.
> > > > > > +To reduce corresponding OS jitter, do any of the following:
> > > > > > +1.	Don't use EHCA Infiniband hardware.  This will prevent these
> > > > > 
> > > > > Sounds like this particular hardware is slow and its IRQ handler/softirq
> > > > > needs a lot of time. Yes, no?
> > > > > 
> > > > > Can we have a reason why people shouldn't use that hw.
> > > > 
> > > > Because it has per-CPU kthreads that can cause OS jitter.  ;-)
> > > 
> > > Yeah, I stumbled over this specific brand of Infiniband hw. It looks
> > > like this particular Infiniband driver uses per-CPU kthreads and the
> > > others in drivers/infiniband/hw/ don't?
> > > 
> > > I hope this explains my head-scratching moment here...
> > 
> > Ah!  I rewrote the first sentence to read:
> > 
> > 	Don't use eHCA Infiniband hardware, instead choosing hardware
> > 	that does not require per-CPU kthreads.
> 
> Another option would be to teach that eHCA driver to be configurable
> on which cpus kthreads are desired and on which not. I can't see a
> reason (aside of throughput) why that hardware can't cope with a
> single thread.

Good point!  I have added a third item to the eHCA list:

	Rework the eHCA driver so that its per-CPU kthreads are
	provisioned only on selected CPUs.

							Thanx, Paul


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH documentation 2/2] kthread: Document ways of reducing OS jitter due to per-CPU kthreads
  2013-04-25 15:52           ` Paul E. McKenney
@ 2013-04-25 20:59             ` Thomas Gleixner
  2013-04-25 21:23               ` Paul E. McKenney
  0 siblings, 1 reply; 35+ messages in thread
From: Thomas Gleixner @ 2013-04-25 20:59 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Borislav Petkov, linux-kernel, mingo, sbw, Frederic Weisbecker,
	Steven Rostedt, Arjan van de Ven, Kevin Hilman,
	Christoph Lameter, Olivier Baetz

On Thu, 25 Apr 2013, Paul E. McKenney wrote:
> On Thu, Apr 25, 2013 at 12:23:12PM +0200, Borislav Petkov wrote:
> > On Mon, Apr 22, 2013 at 09:03:29PM -0700, Paul E. McKenney wrote:
> > > > > +Name: ehca_comp/%u
> > > > > +Purpose: Periodically process Infiniband-related work.
> > > > > +To reduce corresponding OS jitter, do any of the following:
> > > > > +1.	Don't use EHCA Infiniband hardware.  This will prevent these
> > > > 
> > > > Sounds like this particular hardware is slow and its IRQ handler/softirq
> > > > needs a lot of time. Yes, no?
> > > > 
> > > > Can we have a reason why people shouldn't use that hw.
> > > 
> > > Because it has per-CPU kthreads that can cause OS jitter.  ;-)
> > 
> > Yeah, I stumbled over this specific brand of Infiniband hw. It looks
> > like this particular Infiniband driver uses per-CPU kthreads and the
> > others in drivers/infiniband/hw/ don't?
> > 
> > I hope this explains my head-scratching moment here...
> 
> Ah!  I rewrote the first sentence to read:
> 
> 	Don't use eHCA Infiniband hardware, instead choosing hardware
> 	that does not require per-CPU kthreads.

Another option would be to teach that eHCA driver to be configurable
on which cpus kthreads are desired and on which not. I can't see a
reason (aside of throughput) why that hardware can't cope with a
single thread.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH documentation 2/2] kthread: Document ways of reducing OS jitter due to per-CPU kthreads
  2013-04-25 10:23         ` Borislav Petkov
@ 2013-04-25 15:52           ` Paul E. McKenney
  2013-04-25 20:59             ` Thomas Gleixner
  0 siblings, 1 reply; 35+ messages in thread
From: Paul E. McKenney @ 2013-04-25 15:52 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: linux-kernel, mingo, sbw, Frederic Weisbecker, Steven Rostedt,
	Arjan van de Ven, Kevin Hilman, Christoph Lameter,
	Thomas Gleixner, Olivier Baetz

On Thu, Apr 25, 2013 at 12:23:12PM +0200, Borislav Petkov wrote:
> On Mon, Apr 22, 2013 at 09:03:29PM -0700, Paul E. McKenney wrote:
> > > > +This document lists per-CPU kthreads in the Linux kernel and presents
> > > > +options to control OS jitter due to these kthreads.  Note that kthreads
> > > 
> > > s/due to/which can be caused by/
> > 
> > Same meaning, but "due to" is probably a bit more arcane.
> 
> Yeah, "due to" kinda didn't read right in the context, arcane could be
> one way to put it.
> 
> > But how about "and presents options to control these kthreads' OS
> > jitter"?
> 
> Yep.
> 
> > > > +that are not per-CPU are not listed here -- to reduce OS jitter from
> > > 
> > > one too many "that"s:
> > > 
> > > s/that/which/
> > 
> > Fair point, but I can shorten it as follows:
> > 
> > 	Note that non-per-CPU kthreads CPU are not listed here --
> 
> that second "CPU" is kinda superfluous...?
> 
> > 	to reduce OS jitter from non-per-CPU kthreads, bind them to a
> > 	"housekeeping" CPU that is dedicated to such work.
> 
> Yep, reads ok, except "that is" but you've removed it in the final
> version below.
> 
> > > > +non-per-CPU kthreads, bind them to a "housekeeping" CPU that is dedicated
> > > 
> > > s/that/which/
> > 
> > Good catch -- I chose s/that is//.
> 
> Yep.
> 
> > > > +Name: ehca_comp/%u
> > > > +Purpose: Periodically process Infiniband-related work.
> > > > +To reduce corresponding OS jitter, do any of the following:
> > > > +1.	Don't use EHCA Infiniband hardware.  This will prevent these
> > > 
> > > Sounds like this particular hardware is slow and its IRQ handler/softirq
> > > needs a lot of time. Yes, no?
> > > 
> > > Can we have a reason why people shouldn't use that hw.
> > 
> > Because it has per-CPU kthreads that can cause OS jitter.  ;-)
> 
> Yeah, I stumbled over this specific brand of Infiniband hw. It looks
> like this particular Infiniband driver uses per-CPU kthreads and the
> others in drivers/infiniband/hw/ don't?
> 
> I hope this explains my head-scratching moment here...

Ah!  I rewrote the first sentence to read:

	Don't use eHCA Infiniband hardware, instead choosing hardware
	that does not require per-CPU kthreads.

> > > This sentence keeps repeating; maybe explain the purpose of this doc in
> > > the beginning once and drop this sentence in the later sections.
> > 
> > There are "any of" and "all of" qualifiers.  Also, I cannot count on
> > someone reading the document beginning to end.  I would instead expect
> > many of them to search for the name of the kthread that is bothering
> > them and read only that part.
> 
> Ha! Very good point. :-)
> 
> > > > +2.	Build with CONFIG_HOTPLUG_CPU=y.  After boot completes, force
> > > > +	the CPU offline, then bring it back online.  This forces
> > > > +	recurring timers to migrate elsewhere.	If you are concerned
> > > 
> > > We don't migrate them back to that CPU when we online it again, do we?
> > 
> > Not unless the CPU it migrated to later is taken offline.  Good point,
> > added words to that effect.
> 
> Yep, good.
> 
> > > > +	to be de-jittered is marked as an adaptive-ticks CPU using the
> > > > +	"nohz_full=" boot parameter.  This reduces the number of
> > > > +	scheduler-clock interrupts that the de-jittered CPU receives,
> > > > +	minimizing its chances of being selected to do load balancing,
> > > 
> > > I don't think there's a "," if the "which... " part refers to the
> > > previous "load balancing" and not to the whole sentence.
> > 
> > Good point -- I can reword to:
> > 
> > 	This reduces the number of scheduler-clock interrupts that the
> > 	de-jittered CPU receives, minimizing its chances of being selected
> > 	to do the load balancing work that runs in SCHED_SOFTIRQ context.
> 
> Yep.
> 
> > > > +	This further reduces the number of scheduler-clock interrupts
> > > > +	that the de-jittered CPU receives.
> > > 
> > > s/that/which/ would suit better here IMHO.
> > 
> > Fair point, but how about this?
> > 
> > 	This further reduces the number of scheduler-clock interrupts
> > 	received by the de-jittered CPU.
> 
> Even better.
> 
> > > > +	b.	To the extent possible, keep the CPU out of the kernel
> > > > +		when it is non-idle, for example, by avoiding system
> > > > +		calls and by forcing both kernel threads and interrupts
> > > > +		to execute elsewhere.
> > > > +2.	Enable RCU to do its processing remotely via dyntick-idle by
> > > > +	doing all of the following:
> > > > +	a.	Build with CONFIG_NO_HZ=y and CONFIG_RCU_FAST_NO_HZ=y.
> > > > +	b.	Ensure that the CPU goes idle frequently, allowing other
> > > 
> > > I'm ensuring that by selecting the proper workload which has idle
> > > breathers?
> > 
> > Yep!  Or, equivalently, by adding enough CPUs so that the workload
> > has idle breathers.
> 
> Yeah, this sentence could be in the text, since we're explaining
> everything! :-)
> 
> > Thank you for the thorough review and comments!  Please see below for
> > an update.
> 
> Sure, thank you for writing this up for others to read.
> 
> Reviewed-by: Borislav Petkov <bp@suse.de>

Thank you, added!

> > ------------------------------------------------------------------------
> > 
> > REDUCING OS JITTER DUE TO PER-CPU KTHREADS
> > 
> > This document lists per-CPU kthreads in the Linux kernel and presents
> > options to control these kthreads' OS jitter.  Note that non-per-CPU
> 
> s /these kthreads'/their/
> 
> Sorry, I can't help it :) I promise I won't read too much in the rest so
> as not to beat it to death again :-)

Good change, though, applied.

> > kthreads CPU are not listed here.  To reduce OS jitter from non-per-CPU
> 
> s/CPU //
> 
> see above.

Good point, fixed

> > kthreads, bind them to a "housekeeping" CPU dedicated to such work.
> 
> [ … ]
> 
> Ok, it looks good, ship it.
> 
> :-)

Will do!  ;-)

							Thanx, Paul


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH documentation 2/2] kthread: Document ways of reducing OS jitter due to per-CPU kthreads
  2013-04-23  4:03       ` Paul E. McKenney
@ 2013-04-25 10:23         ` Borislav Petkov
  2013-04-25 15:52           ` Paul E. McKenney
  0 siblings, 1 reply; 35+ messages in thread
From: Borislav Petkov @ 2013-04-25 10:23 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: linux-kernel, mingo, sbw, Frederic Weisbecker, Steven Rostedt,
	Arjan van de Ven, Kevin Hilman, Christoph Lameter,
	Thomas Gleixner, Olivier Baetz

On Mon, Apr 22, 2013 at 09:03:29PM -0700, Paul E. McKenney wrote:
> > > +This document lists per-CPU kthreads in the Linux kernel and presents
> > > +options to control OS jitter due to these kthreads.  Note that kthreads
> > 
> > s/due to/which can be caused by/
> 
> Same meaning, but "due to" is probably a bit more arcane.

Yeah, "due to" kinda didn't read right in the context, arcane could be
one way to put it.

> But how about "and presents options to control these kthreads' OS
> jitter"?

Yep.

> > > +that are not per-CPU are not listed here -- to reduce OS jitter from
> > 
> > one too many "that"s:
> > 
> > s/that/which/
> 
> Fair point, but I can shorten it as follows:
> 
> 	Note that non-per-CPU kthreads CPU are not listed here --

that second "CPU" is kinda superfluous...?

> 	to reduce OS jitter from non-per-CPU kthreads, bind them to a
> 	"housekeeping" CPU that is dedicated to such work.

Yep, reads ok, except "that is" but you've removed it in the final
version below.

> > > +non-per-CPU kthreads, bind them to a "housekeeping" CPU that is dedicated
> > 
> > s/that/which/
> 
> Good catch -- I chose s/that is//.

Yep.

> > > +Name: ehca_comp/%u
> > > +Purpose: Periodically process Infiniband-related work.
> > > +To reduce corresponding OS jitter, do any of the following:
> > > +1.	Don't use EHCA Infiniband hardware.  This will prevent these
> > 
> > Sounds like this particular hardware is slow and its IRQ handler/softirq
> > needs a lot of time. Yes, no?
> > 
> > Can we have a reason why people shouldn't use that hw.
> 
> Because it has per-CPU kthreads that can cause OS jitter.  ;-)

Yeah, I stumbled over this specific brand of Infiniband hw. It looks
like this particular Infiniband driver uses per-CPU kthreads and the
others in drivers/infiniband/hw/ don't?

I hope this explains my head-scratching moment here...

> > This sentence keeps repeating; maybe explain the purpose of this doc in
> > the beginning once and drop this sentence in the later sections.
> 
> There are "any of" and "all of" qualifiers.  Also, I cannot count on
> someone reading the document beginning to end.  I would instead expect
> many of them to search for the name of the kthread that is bothering
> them and read only that part.

Ha! Very good point. :-)

> > > +2.	Build with CONFIG_HOTPLUG_CPU=y.  After boot completes, force
> > > +	the CPU offline, then bring it back online.  This forces
> > > +	recurring timers to migrate elsewhere.	If you are concerned
> > 
> > We don't migrate them back to that CPU when we online it again, do we?
> 
> Not unless the CPU it migrated to later is taken offline.  Good point,
> added words to that effect.

Yep, good.

> > > +	to be de-jittered is marked as an adaptive-ticks CPU using the
> > > +	"nohz_full=" boot parameter.  This reduces the number of
> > > +	scheduler-clock interrupts that the de-jittered CPU receives,
> > > +	minimizing its chances of being selected to do load balancing,
> > 
> > I don't think there's a "," if the "which... " part refers to the
> > previous "load balancing" and not to the whole sentence.
> 
> Good point -- I can reword to:
> 
> 	This reduces the number of scheduler-clock interrupts that the
> 	de-jittered CPU receives, minimizing its chances of being selected
> 	to do the load balancing work that runs in SCHED_SOFTIRQ context.

Yep.

> > > +	This further reduces the number of scheduler-clock interrupts
> > > +	that the de-jittered CPU receives.
> > 
> > s/that/which/ would suit better here IMHO.
> 
> Fair point, but how about this?
> 
> 	This further reduces the number of scheduler-clock interrupts
> 	received by the de-jittered CPU.

Even better.

> > > +	b.	To the extent possible, keep the CPU out of the kernel
> > > +		when it is non-idle, for example, by avoiding system
> > > +		calls and by forcing both kernel threads and interrupts
> > > +		to execute elsewhere.
> > > +2.	Enable RCU to do its processing remotely via dyntick-idle by
> > > +	doing all of the following:
> > > +	a.	Build with CONFIG_NO_HZ=y and CONFIG_RCU_FAST_NO_HZ=y.
> > > +	b.	Ensure that the CPU goes idle frequently, allowing other
> > 
> > I'm ensuring that by selecting the proper workload which has idle
> > breathers?
> 
> Yep!  Or, equivalently, by adding enough CPUs so that the workload
> has idle breathers.

Yeah, this sentence could be in the text, since we're explaining
everything! :-)

> Thank you for the thorough review and comments!  Please see below for
> an update.

Sure, thank you for writing this up for others to read.

Reviewed-by: Borislav Petkov <bp@suse.de>

> ------------------------------------------------------------------------
> 
> REDUCING OS JITTER DUE TO PER-CPU KTHREADS
> 
> This document lists per-CPU kthreads in the Linux kernel and presents
> options to control these kthreads' OS jitter.  Note that non-per-CPU

s /these kthreads'/their/

Sorry, I can't help it :) I promise I won't read too much in the rest so
as not to beat it to death again :-)

> kthreads CPU are not listed here.  To reduce OS jitter from non-per-CPU

s/CPU //

see above.

> kthreads, bind them to a "housekeeping" CPU dedicated to such work.

[ … ]

Ok, it looks good, ship it.

:-)

-- 
Regards/Gruss,
    Boris.

Sent from a fat crate under my desk. Formatting is fine.
--

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH documentation 2/2] kthread: Document ways of reducing OS jitter due to per-CPU kthreads
  2013-04-21 19:37     ` Borislav Petkov
@ 2013-04-23  4:03       ` Paul E. McKenney
  2013-04-25 10:23         ` Borislav Petkov
  0 siblings, 1 reply; 35+ messages in thread
From: Paul E. McKenney @ 2013-04-23  4:03 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: linux-kernel, mingo, sbw, Frederic Weisbecker, Steven Rostedt,
	Arjan van de Ven, Kevin Hilman, Christoph Lameter,
	Thomas Gleixner, Olivier Baetz

On Sun, Apr 21, 2013 at 09:37:05PM +0200, Borislav Petkov wrote:
> On Tue, Apr 16, 2013 at 09:41:30AM -0700, Paul E. McKenney wrote:
> > From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> > 
> > The Linux kernel uses a number of per-CPU kthreads, any of which might
> > contribute to OS jitter at any time.  The usual approach to normal
> > kthreads, namely to bind them to a "housekeeping" CPU, does not work
> > with these kthreads because they cannot operate correctly if moved to
> > some other CPU.  This commit therefore lists ways of controlling OS
> > jitter from the Linux kernel's per-CPU kthreads.
> > 
> > Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> > Cc: Frederic Weisbecker <fweisbec@gmail.com>
> > Cc: Steven Rostedt <rostedt@goodmis.org>
> > Cc: Borislav Petkov <bp@alien8.de>
> > Cc: Arjan van de Ven <arjan@linux.intel.com>
> > Cc: Kevin Hilman <khilman@linaro.org>
> > Cc: Christoph Lameter <cl@linux.com>
> > Cc: Thomas Gleixner <tglx@linutronix.de>
> > Cc: Olivier Baetz <olivier.baetz@novasparks.com>
> > Reviewed-by: Randy Dunlap <rdunlap@infradead.org>
> > ---
> >  Documentation/kernel-per-CPU-kthreads.txt | 186 ++++++++++++++++++++++++++++++
> >  1 file changed, 186 insertions(+)
> >  create mode 100644 Documentation/kernel-per-CPU-kthreads.txt
> > 
> > diff --git a/Documentation/kernel-per-CPU-kthreads.txt b/Documentation/kernel-per-CPU-kthreads.txt
> > new file mode 100644
> > index 0000000..bfecc1c
> > --- /dev/null
> > +++ b/Documentation/kernel-per-CPU-kthreads.txt
> > @@ -0,0 +1,186 @@
> > +REDUCING OS JITTER DUE TO PER-CPU KTHREADS
> > +
> > +This document lists per-CPU kthreads in the Linux kernel and presents
> > +options to control OS jitter due to these kthreads.  Note that kthreads
> 
> s/due to/which can be caused by/

Same meaning, but "due to" is probably a bit more arcane.  But how
about "and presents options to control these kthreads' OS jitter"?

> > +that are not per-CPU are not listed here -- to reduce OS jitter from
> 
> one too many "that"s:
> 
> s/that/which/

Fair point, but I can shorten it as follows:

	Note that non-per-CPU kthreads CPU are not listed here --
	to reduce OS jitter from non-per-CPU kthreads, bind them to a
	"housekeeping" CPU that is dedicated to such work.

> > +non-per-CPU kthreads, bind them to a "housekeeping" CPU that is dedicated
> 
> s/that/which/

Good catch -- I chose s/that is//.

> > +to such work.
> > +
> > +
> > +REFERENCES
> > +
> > +o	Documentation/IRQ-affinity.txt:  Binding interrupts to sets of CPUs.
> > +
> > +o	Documentation/cgroups:  Using cgroups to bind tasks to sets of CPUs.
> > +
> > +o	man taskset:  Using the taskset command to bind tasks to sets
> > +	of CPUs.
> > +
> > +o	man sched_setaffinity:  Using the sched_setaffinity() system
> > +	call to bind tasks to sets of CPUs.
> > +
> > +
> > +KTHREADS
> > +
> > +Name: ehca_comp/%u
> > +Purpose: Periodically process Infiniband-related work.
> > +To reduce corresponding OS jitter, do any of the following:
> > +1.	Don't use EHCA Infiniband hardware.  This will prevent these
> 
> Sounds like this particular hardware is slow and its IRQ handler/softirq
> needs a lot of time. Yes, no?
> 
> Can we have a reason why people shouldn't use that hw.

Because it has per-CPU kthreads that can cause OS jitter.  ;-)

> > +	kthreads from being created in the first place.  (This will
> > +	work for most people, as this hardware, though important,
> > +	is relatively old and is produced in relatively low unit
> > +	volumes.)
> > +2.	Do all EHCA-Infiniband-related work on other CPUs, including
> > +	interrupts.
> > +
> > +
> > +Name: irq/%d-%s
> > +Purpose: Handle threaded interrupts.
> > +To reduce corresponding OS jitter, do the following:
> 
> This sentence keeps repeating; maybe explain the purpose of this doc in
> the beginning once and drop this sentence in the later sections.

There are "any of" and "all of" qualifiers.  Also, I cannot count on
someone reading the document beginning to end.  I would instead expect
many of them to search for the name of the kthread that is bothering
them and read only that part.

> > +1.	Use irq affinity to force the irq threads to execute on
> > +	some other CPU.
> > +
> > +Name: kcmtpd_ctr_%d
> > +Purpose: Handle Bluetooth work.
> > +To reduce corresponding OS jitter, do one of the following:
> > +1.	Don't use Bluetooth, in which case these kthreads won't be
> > +	created in the first place.
> > +2.	Use irq affinity to force Bluetooth-related interrupts to
> > +	occur on some other CPU and furthermore initiate all
> > +	Bluetooth activity on some other CPU.
> > +
> > +Name: ksoftirqd/%u
> > +Purpose: Execute softirq handlers when threaded or when under heavy load.
> > +To reduce corresponding OS jitter, each softirq vector must be handled
> > +separately as follows:
> > +TIMER_SOFTIRQ:  Do all of the following:
> > +1.	To the extent possible, keep the CPU out of the kernel when it
> > +	is non-idle, for example, by avoiding system calls and by forcing
> > +	both kernel threads and interrupts to execute elsewhere.
> > +2.	Build with CONFIG_HOTPLUG_CPU=y.  After boot completes, force
> > +	the CPU offline, then bring it back online.  This forces
> > +	recurring timers to migrate elsewhere.	If you are concerned
> 
> We don't migrate them back to that CPU when we online it again, do we?

Not unless the CPU it migrated to later is taken offline.  Good point,
added words to that effect.

> > +	with multiple CPUs, force them all offline before bringing the
> > +	first one back online.
> > +NET_TX_SOFTIRQ and NET_RX_SOFTIRQ:  Do all of the following:
> > +1.	Force networking interrupts onto other CPUs.
> > +2.	Initiate any network I/O on other CPUs.
> > +3.	Once your application has started, prevent CPU-hotplug operations
> > +	from being initiated from tasks that might run on the CPU to
> > +	be de-jittered.  (It is OK to force this CPU offline and then
> > +	bring it back online before you start your application.)
> > +BLOCK_SOFTIRQ:  Do all of the following:
> > +1.	Force block-device interrupts onto some other CPU.
> > +2.	Initiate any block I/O on other CPUs.
> > +3.	Once your application has started, prevent CPU-hotplug operations
> > +	from being initiated from tasks that might run on the CPU to
> > +	be de-jittered.  (It is OK to force this CPU offline and then
> > +	bring it back online before you start your application.)
> > +BLOCK_IOPOLL_SOFTIRQ:  Do all of the following:
> > +1.	Force block-device interrupts onto some other CPU.
> > +2.	Initiate any block I/O and block-I/O polling on other CPUs.
> > +3.	Once your application has started, prevent CPU-hotplug operations
> > +	from being initiated from tasks that might run on the CPU to
> > +	be de-jittered.  (It is OK to force this CPU offline and then
> > +	bring it back online before you start your application.)
> 
> more repeated text in brackets, maybe a footnote somewhere instead...

Indeed, it is a bit repetitive, but I expect that people will tend
to look just at the part that seems relevant rather than reading the
whole thing.

> > +TASKLET_SOFTIRQ: Do one or more of the following:
> > +1.	Avoid use of drivers that use tasklets.
> > +2.	Convert all drivers that you must use from tasklets to workqueues.
> > +3.	Force interrupts for drivers using tasklets onto other CPUs,
> > +	and also do I/O involving these drivers on other CPUs.
> 
> How do I check which drivers use tasklets?

Good point -- I added "(Such drivers will contain calls to things like
tasklet_schedule().)"

> > +SCHED_SOFTIRQ: Do all of the following:
> > +1.	Avoid sending scheduler IPIs to the CPU to be de-jittered,
> > +	for example, ensure that at most one runnable kthread is
> 
> To which sentence does "for example" belong to? Depending on the answer,
> you can split that sentence.

It belongs with the first sentence.

> > +	present on that CPU.  If a thread awakens that expects
> > +	to run on the de-jittered CPU, the scheduler will send
> 
> "If a thread expecting to run ont the de-jittered CPU awakens, the
> scheduler..."

Sold!

> > +	an IPI that can result in a subsequent SCHED_SOFTIRQ.
> > +2.	Build with CONFIG_RCU_NOCB_CPU=y, CONFIG_RCU_NOCB_CPU_ALL=y,
> > +	CONFIG_NO_HZ_FULL=y, and in addition ensure that the CPU
> 
> commas:
> 
> 			  , and, in addition, ensure...

Good catch, fixed.

> > +	to be de-jittered is marked as an adaptive-ticks CPU using the
> > +	"nohz_full=" boot parameter.  This reduces the number of
> > +	scheduler-clock interrupts that the de-jittered CPU receives,
> > +	minimizing its chances of being selected to do load balancing,
> 
> I don't think there's a "," if the "which... " part refers to the
> previous "load balancing" and not to the whole sentence.

Good point -- I can reword to:

	This reduces the number of scheduler-clock interrupts that the
	de-jittered CPU receives, minimizing its chances of being selected
	to do the load balancing work that runs in SCHED_SOFTIRQ context.

> > +	which happens in SCHED_SOFTIRQ context.
> > +3.	To the extent possible, keep the CPU out of the kernel when it
> > +	is non-idle, for example, by avoiding system calls and by
> > +	forcing both kernel threads and interrupts to execute elsewhere.
> 
> This time "for example" reads ok.

Glad you like it.  ;-)

> > +	This further reduces the number of scheduler-clock interrupts
> > +	that the de-jittered CPU receives.
> 
> s/that/which/ would suit better here IMHO.

Fair point, but how about this?

	This further reduces the number of scheduler-clock interrupts
	received by the de-jittered CPU.

> > +HRTIMER_SOFTIRQ:  Do all of the following:
> > +1.	To the extent possible, keep the CPU out of the kernel when it
> > +	is non-idle, for example, by avoiding system calls and by forcing
> > +	both kernel threads and interrupts to execute elsewhere.
> 
> Ok, I think I get your "for example" usage pattern.
> 
> "blabablabla. For example, do blabalbal."
> 
> I think that would be a bit more readable.

In this case, agreed:

	To the extent possible, keep the CPU out of the kernel when it
	is non-idle.  For example, avoid system calls and force both
	kernel threads and interrupts to execute elsewhere.

> > +2.	Build with CONFIG_HOTPLUG_CPU=y.  Once boot completes, force the
> > +	CPU offline, then bring it back online.  This forces recurring
> > +	timers to migrate elsewhere.  If you are concerned with multiple
> > +	CPUs, force them all offline before bringing the first one
> > +	back online.
> 
> Same question: do the timers get migrated back when the CPU reappears
> online?

Good point, applied the same change here.

> > +RCU_SOFTIRQ:  Do at least one of the following:
> > +1.	Offload callbacks and keep the CPU in either dyntick-idle or
> > +	adaptive-ticks state by doing all of the following:
> > +	a.	Build with CONFIG_RCU_NOCB_CPU=y, CONFIG_RCU_NOCB_CPU_ALL=y,
> > +		CONFIG_NO_HZ_FULL=y, and in addition ensure that the CPU
> 
> 				   , and, in addition, 
> 
> > +		to be de-jittered is marked as an adaptive-ticks CPU
> > +		using the "nohz_full=" boot parameter.	Bind the rcuo
> > +		kthreads to housekeeping CPUs that can tolerate OS jitter.
> 
> 					      which

Good point, took both.

> > +	b.	To the extent possible, keep the CPU out of the kernel
> > +		when it is non-idle, for example, by avoiding system
> > +		calls and by forcing both kernel threads and interrupts
> > +		to execute elsewhere.
> > +2.	Enable RCU to do its processing remotely via dyntick-idle by
> > +	doing all of the following:
> > +	a.	Build with CONFIG_NO_HZ=y and CONFIG_RCU_FAST_NO_HZ=y.
> > +	b.	Ensure that the CPU goes idle frequently, allowing other
> 
> I'm ensuring that by selecting the proper workload which has idle
> breathers?

Yep!  Or, equivalently, by adding enough CPUs so that the workload
has idle breathers.

> > +		CPUs to detect that it has passed through an RCU quiescent
> > +		state.	If the kernel is built with CONFIG_NO_HZ_FULL=y,
> > +		userspace execution also allows other CPUs to detect that
> > +		the CPU in question has passed through a quiescent state.
> > +	c.	To the extent possible, keep the CPU out of the kernel
> > +		when it is non-idle, for example, by avoiding system
> > +		calls and by forcing both kernel threads and interrupts
> > +		to execute elsewhere.
> > +
> > +Name: rcuc/%u
> > +Purpose: Execute RCU callbacks in CONFIG_RCU_BOOST=y kernels.
> > +To reduce corresponding OS jitter, do at least one of the following:
> > +1.	Build the kernel with CONFIG_PREEMPT=n.  This prevents these
> > +	kthreads from being created in the first place, and also prevents
> > +	RCU priority boosting from ever being required.  This approach
> 
> "... this obviates the need for RCU priority boosting."

Sold!

> > +	is feasible for workloads that do not require high degrees of
> > +	responsiveness.
> > +2.	Build the kernel with CONFIG_RCU_BOOST=n.  This prevents these
> > +	kthreads from being created in the first place.  This approach
> > +	is feasible only if your workload never requires RCU priority
> > +	boosting, for example, if you ensure frequent idle time on all
> > +	CPUs that might execute within the kernel.
> > +3.	Build with CONFIG_RCU_NOCB_CPU=y and CONFIG_RCU_NOCB_CPU_ALL=y,
> > +	which offloads all RCU callbacks to kthreads that can be moved
> > +	off of CPUs susceptible to OS jitter.  This approach prevents the
> > +	rcuc/%u kthreads from having any work to do, so that they are
> > +	never awakened.
> > +4.	Ensure that the CPU never enters the kernel and in particular
> 
> 						   , and, in particular, 

Good, fixed.

> > +	avoid initiating any CPU hotplug operations on this CPU.  This is
> > +	another way of preventing any callbacks from being queued on the
> > +	CPU, again preventing the rcuc/%u kthreads from having any work
> > +	to do.
> > +
> > +Name: rcuob/%d, rcuop/%d, and rcuos/%d
> > +Purpose: Offload RCU callbacks from the corresponding CPU.
> > +To reduce corresponding OS jitter, do at least one of the following:
> > +1.	Use affinity, cgroups, or other mechanism to force these kthreads
> > +	to execute on some other CPU.
> > +2.	Build with CONFIG_RCU_NOCB_CPUS=n, which will prevent these
> > +	kthreads from being created in the first place.  However,
> > +	please note that this will not eliminate the corresponding
> 
> can we drop "corresponding" here?

Yep!  Dropped the preceding "the" as well, just to be on the safe side.

> > +	OS jitter, but will instead shift it to RCU_SOFTIRQ.
> > +
> > +Name: watchdog/%u
> > +Purpose: Detect software lockups on each CPU.
> > +To reduce corresponding OS jitter, do at least one of the following:
> 
> ditto.

I changed "corresponding" to "its" globally for this lead-in sentence.

> > +1.	Build with CONFIG_LOCKUP_DETECTOR=n, which will prevent these
> > +	kthreads from being created in the first place.
> > +2.	Echo a zero to /proc/sys/kernel/watchdog to disable the
> > +	watchdog timer.
> > +3.	Echo a large number of /proc/sys/kernel/watchdog_thresh in
> > +	order to reduce the frequency of OS jitter due to the watchdog
> > +	timer down to a level that is acceptable for your workload.

Thank you for the thorough review and comments!  Please see below for
an update.

							Thanx, Paul

------------------------------------------------------------------------

REDUCING OS JITTER DUE TO PER-CPU KTHREADS

This document lists per-CPU kthreads in the Linux kernel and presents
options to control these kthreads' OS jitter.  Note that non-per-CPU
kthreads CPU are not listed here.  To reduce OS jitter from non-per-CPU
kthreads, bind them to a "housekeeping" CPU dedicated to such work.


REFERENCES

o	Documentation/IRQ-affinity.txt:  Binding interrupts to sets of CPUs.

o	Documentation/cgroups:  Using cgroups to bind tasks to sets of CPUs.

o	man taskset:  Using the taskset command to bind tasks to sets
	of CPUs.

o	man sched_setaffinity:  Using the sched_setaffinity() system
	call to bind tasks to sets of CPUs.


KTHREADS

Name: ehca_comp/%u
Purpose: Periodically process Infiniband-related work.
To reduce its OS jitter, do any of the following:
1.	Don't use eHCA Infiniband hardware.  This will prevent these
	kthreads from being created in the first place.  (This will
	work for most people, as this hardware, though important,
	is relatively old and is produced in relatively low unit
	volumes.)
2.	Do all eHCA-Infiniband-related work on other CPUs, including
	interrupts.


Name: irq/%d-%s
Purpose: Handle threaded interrupts.
To reduce its OS jitter, do the following:
1.	Use irq affinity to force the irq threads to execute on
	some other CPU.

Name: kcmtpd_ctr_%d
Purpose: Handle Bluetooth work.
To reduce its OS jitter, do one of the following:
1.	Don't use Bluetooth, in which case these kthreads won't be
	created in the first place.
2.	Use irq affinity to force Bluetooth-related interrupts to
	occur on some other CPU and furthermore initiate all
	Bluetooth activity on some other CPU.

Name: ksoftirqd/%u
Purpose: Execute softirq handlers when threaded or when under heavy load.
To reduce its OS jitter, each softirq vector must be handled
separately as follows:
TIMER_SOFTIRQ:  Do all of the following:
1.	To the extent possible, keep the CPU out of the kernel when it
	is non-idle, for example, by avoiding system calls and by forcing
	both kernel threads and interrupts to execute elsewhere.
2.	Build with CONFIG_HOTPLUG_CPU=y.  After boot completes, force
	the CPU offline, then bring it back online.  This forces
	recurring timers to migrate elsewhere.	If you are concerned
	with multiple CPUs, force them all offline before bringing the
	first one back online.  Once you have onlined the CPUs in question,
	do not offline any other CPUs, because doing so could force the
	timer back onto one of the CPUs in question.
NET_TX_SOFTIRQ and NET_RX_SOFTIRQ:  Do all of the following:
1.	Force networking interrupts onto other CPUs.
2.	Initiate any network I/O on other CPUs.
3.	Once your application has started, prevent CPU-hotplug operations
	from being initiated from tasks that might run on the CPU to
	be de-jittered.  (It is OK to force this CPU offline and then
	bring it back online before you start your application.)
BLOCK_SOFTIRQ:  Do all of the following:
1.	Force block-device interrupts onto some other CPU.
2.	Initiate any block I/O on other CPUs.
3.	Once your application has started, prevent CPU-hotplug operations
	from being initiated from tasks that might run on the CPU to
	be de-jittered.  (It is OK to force this CPU offline and then
	bring it back online before you start your application.)
BLOCK_IOPOLL_SOFTIRQ:  Do all of the following:
1.	Force block-device interrupts onto some other CPU.
2.	Initiate any block I/O and block-I/O polling on other CPUs.
3.	Once your application has started, prevent CPU-hotplug operations
	from being initiated from tasks that might run on the CPU to
	be de-jittered.  (It is OK to force this CPU offline and then
	bring it back online before you start your application.)
TASKLET_SOFTIRQ: Do one or more of the following:
1.	Avoid use of drivers that use tasklets.  (Such drivers will contain
	calls to things like tasklet_schedule().)
2.	Convert all drivers that you must use from tasklets to workqueues.
3.	Force interrupts for drivers using tasklets onto other CPUs,
	and also do I/O involving these drivers on other CPUs.
SCHED_SOFTIRQ: Do all of the following:
1.	Avoid sending scheduler IPIs to the CPU to be de-jittered,
	for example, ensure that at most one runnable kthread is present
	on that CPU.  If a thread that expects to run on the de-jittered
	CPU awakens, the scheduler will send an IPI that can result in
	a subsequent SCHED_SOFTIRQ.
2.	Build with CONFIG_RCU_NOCB_CPU=y, CONFIG_RCU_NOCB_CPU_ALL=y,
	CONFIG_NO_HZ_FULL=y, and, in addition, ensure that the CPU
	to be de-jittered is marked as an adaptive-ticks CPU using the
	"nohz_full=" boot parameter.  This reduces the number of
	scheduler-clock interrupts that the de-jittered CPU receives,
	minimizing its chances of being selected to do the load balancing
	work that runs in SCHED_SOFTIRQ context.
3.	To the extent possible, keep the CPU out of the kernel when it
	is non-idle, for example, by avoiding system calls and by
	forcing both kernel threads and interrupts to execute elsewhere.
	This further reduces the number of scheduler-clock interrupts
	received by the de-jittered CPU.
HRTIMER_SOFTIRQ:  Do all of the following:
1.	To the extent possible, keep the CPU out of the kernel when it
	is non-idle.  For example, avoid system calls and force both
	kernel threads and interrupts to execute elsewhere.
2.	Build with CONFIG_HOTPLUG_CPU=y.  Once boot completes, force the
	CPU offline, then bring it back online.  This forces recurring
	timers to migrate elsewhere.  If you are concerned with multiple
	CPUs, force them all offline before bringing the first one
	back online.  Once you have onlined the CPUs in question, do not
	offline any other CPUs, because doing so could force the timer
	back onto one of the CPUs in question.
RCU_SOFTIRQ:  Do at least one of the following:
1.	Offload callbacks and keep the CPU in either dyntick-idle or
	adaptive-ticks state by doing all of the following:
	a.	Build with CONFIG_RCU_NOCB_CPU=y, CONFIG_RCU_NOCB_CPU_ALL=y,
		CONFIG_NO_HZ_FULL=y, and, in addition ensure that the CPU
		to be de-jittered is marked as an adaptive-ticks CPU using
		the "nohz_full=" boot parameter.  Bind the rcuo kthreads
		to housekeeping CPUs, which can tolerate OS jitter.
	b.	To the extent possible, keep the CPU out of the kernel
		when it is non-idle, for example, by avoiding system
		calls and by forcing both kernel threads and interrupts
		to execute elsewhere.
2.	Enable RCU to do its processing remotely via dyntick-idle by
	doing all of the following:
	a.	Build with CONFIG_NO_HZ=y and CONFIG_RCU_FAST_NO_HZ=y.
	b.	Ensure that the CPU goes idle frequently, allowing other
		CPUs to detect that it has passed through an RCU quiescent
		state.	If the kernel is built with CONFIG_NO_HZ_FULL=y,
		userspace execution also allows other CPUs to detect that
		the CPU in question has passed through a quiescent state.
	c.	To the extent possible, keep the CPU out of the kernel
		when it is non-idle, for example, by avoiding system
		calls and by forcing both kernel threads and interrupts
		to execute elsewhere.

Name: rcuc/%u
Purpose: Execute RCU callbacks in CONFIG_RCU_BOOST=y kernels.
To reduce its OS jitter, do at least one of the following:
1.	Build the kernel with CONFIG_PREEMPT=n.  This prevents these
	kthreads from being created in the first place, and also obviates
	the need for RCU priority boosting.  This approach is feasible
	for workloads that do not require high degrees of responsiveness.
2.	Build the kernel with CONFIG_RCU_BOOST=n.  This prevents these
	kthreads from being created in the first place.  This approach
	is feasible only if your workload never requires RCU priority
	boosting, for example, if you ensure frequent idle time on all
	CPUs that might execute within the kernel.
3.	Build with CONFIG_RCU_NOCB_CPU=y and CONFIG_RCU_NOCB_CPU_ALL=y,
	which offloads all RCU callbacks to kthreads that can be moved
	off of CPUs susceptible to OS jitter.  This approach prevents the
	rcuc/%u kthreads from having any work to do, so that they are
	never awakened.
4.	Ensure that the CPU never enters the kernel, and, in particular,
	avoid initiating any CPU hotplug operations on this CPU.  This is
	another way of preventing any callbacks from being queued on the
	CPU, again preventing the rcuc/%u kthreads from having any work
	to do.

Name: rcuob/%d, rcuop/%d, and rcuos/%d
Purpose: Offload RCU callbacks from the corresponding CPU.
To reduce its OS jitter, do at least one of the following:
1.	Use affinity, cgroups, or other mechanism to force these kthreads
	to execute on some other CPU.
2.	Build with CONFIG_RCU_NOCB_CPUS=n, which will prevent these
	kthreads from being created in the first place.  However, please
	note that this will not eliminate OS jitter, but will instead
	shift it to RCU_SOFTIRQ.

Name: watchdog/%u
Purpose: Detect software lockups on each CPU.
To reduce its OS jitter, do at least one of the following:
1.	Build with CONFIG_LOCKUP_DETECTOR=n, which will prevent these
	kthreads from being created in the first place.
2.	Echo a zero to /proc/sys/kernel/watchdog to disable the
	watchdog timer.
3.	Echo a large number of /proc/sys/kernel/watchdog_thresh in
	order to reduce the frequency of OS jitter due to the watchdog
	timer down to a level that is acceptable for your workload.


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH documentation 2/2] kthread: Document ways of reducing OS jitter due to per-CPU kthreads
  2013-04-16 16:41   ` [PATCH documentation 2/2] kthread: Document ways of reducing OS jitter due to per-CPU kthreads Paul E. McKenney
@ 2013-04-21 19:37     ` Borislav Petkov
  2013-04-23  4:03       ` Paul E. McKenney
  0 siblings, 1 reply; 35+ messages in thread
From: Borislav Petkov @ 2013-04-21 19:37 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: linux-kernel, mingo, sbw, Frederic Weisbecker, Steven Rostedt,
	Arjan van de Ven, Kevin Hilman, Christoph Lameter,
	Thomas Gleixner, Olivier Baetz

On Tue, Apr 16, 2013 at 09:41:30AM -0700, Paul E. McKenney wrote:
> From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> 
> The Linux kernel uses a number of per-CPU kthreads, any of which might
> contribute to OS jitter at any time.  The usual approach to normal
> kthreads, namely to bind them to a "housekeeping" CPU, does not work
> with these kthreads because they cannot operate correctly if moved to
> some other CPU.  This commit therefore lists ways of controlling OS
> jitter from the Linux kernel's per-CPU kthreads.
> 
> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> Cc: Frederic Weisbecker <fweisbec@gmail.com>
> Cc: Steven Rostedt <rostedt@goodmis.org>
> Cc: Borislav Petkov <bp@alien8.de>
> Cc: Arjan van de Ven <arjan@linux.intel.com>
> Cc: Kevin Hilman <khilman@linaro.org>
> Cc: Christoph Lameter <cl@linux.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Olivier Baetz <olivier.baetz@novasparks.com>
> Reviewed-by: Randy Dunlap <rdunlap@infradead.org>
> ---
>  Documentation/kernel-per-CPU-kthreads.txt | 186 ++++++++++++++++++++++++++++++
>  1 file changed, 186 insertions(+)
>  create mode 100644 Documentation/kernel-per-CPU-kthreads.txt
> 
> diff --git a/Documentation/kernel-per-CPU-kthreads.txt b/Documentation/kernel-per-CPU-kthreads.txt
> new file mode 100644
> index 0000000..bfecc1c
> --- /dev/null
> +++ b/Documentation/kernel-per-CPU-kthreads.txt
> @@ -0,0 +1,186 @@
> +REDUCING OS JITTER DUE TO PER-CPU KTHREADS
> +
> +This document lists per-CPU kthreads in the Linux kernel and presents
> +options to control OS jitter due to these kthreads.  Note that kthreads

s/due to/which can be caused by/

> +that are not per-CPU are not listed here -- to reduce OS jitter from

one too many "that"s:

s/that/which/

> +non-per-CPU kthreads, bind them to a "housekeeping" CPU that is dedicated

s/that/which/

> +to such work.
> +
> +
> +REFERENCES
> +
> +o	Documentation/IRQ-affinity.txt:  Binding interrupts to sets of CPUs.
> +
> +o	Documentation/cgroups:  Using cgroups to bind tasks to sets of CPUs.
> +
> +o	man taskset:  Using the taskset command to bind tasks to sets
> +	of CPUs.
> +
> +o	man sched_setaffinity:  Using the sched_setaffinity() system
> +	call to bind tasks to sets of CPUs.
> +
> +
> +KTHREADS
> +
> +Name: ehca_comp/%u
> +Purpose: Periodically process Infiniband-related work.
> +To reduce corresponding OS jitter, do any of the following:
> +1.	Don't use EHCA Infiniband hardware.  This will prevent these

Sounds like this particular hardware is slow and its IRQ handler/softirq
needs a lot of time. Yes, no?

Can we have a reason why people shouldn't use that hw.

> +	kthreads from being created in the first place.  (This will
> +	work for most people, as this hardware, though important,
> +	is relatively old and is produced in relatively low unit
> +	volumes.)
> +2.	Do all EHCA-Infiniband-related work on other CPUs, including
> +	interrupts.
> +
> +
> +Name: irq/%d-%s
> +Purpose: Handle threaded interrupts.
> +To reduce corresponding OS jitter, do the following:

This sentence keeps repeating; maybe explain the purpose of this doc in
the beginning once and drop this sentence in the later sections.

> +1.	Use irq affinity to force the irq threads to execute on
> +	some other CPU.
> +
> +Name: kcmtpd_ctr_%d
> +Purpose: Handle Bluetooth work.
> +To reduce corresponding OS jitter, do one of the following:
> +1.	Don't use Bluetooth, in which case these kthreads won't be
> +	created in the first place.
> +2.	Use irq affinity to force Bluetooth-related interrupts to
> +	occur on some other CPU and furthermore initiate all
> +	Bluetooth activity on some other CPU.
> +
> +Name: ksoftirqd/%u
> +Purpose: Execute softirq handlers when threaded or when under heavy load.
> +To reduce corresponding OS jitter, each softirq vector must be handled
> +separately as follows:
> +TIMER_SOFTIRQ:  Do all of the following:
> +1.	To the extent possible, keep the CPU out of the kernel when it
> +	is non-idle, for example, by avoiding system calls and by forcing
> +	both kernel threads and interrupts to execute elsewhere.
> +2.	Build with CONFIG_HOTPLUG_CPU=y.  After boot completes, force
> +	the CPU offline, then bring it back online.  This forces
> +	recurring timers to migrate elsewhere.	If you are concerned

We don't migrate them back to that CPU when we online it again, do we?

> +	with multiple CPUs, force them all offline before bringing the
> +	first one back online.
> +NET_TX_SOFTIRQ and NET_RX_SOFTIRQ:  Do all of the following:
> +1.	Force networking interrupts onto other CPUs.
> +2.	Initiate any network I/O on other CPUs.
> +3.	Once your application has started, prevent CPU-hotplug operations
> +	from being initiated from tasks that might run on the CPU to
> +	be de-jittered.  (It is OK to force this CPU offline and then
> +	bring it back online before you start your application.)
> +BLOCK_SOFTIRQ:  Do all of the following:
> +1.	Force block-device interrupts onto some other CPU.
> +2.	Initiate any block I/O on other CPUs.
> +3.	Once your application has started, prevent CPU-hotplug operations
> +	from being initiated from tasks that might run on the CPU to
> +	be de-jittered.  (It is OK to force this CPU offline and then
> +	bring it back online before you start your application.)
> +BLOCK_IOPOLL_SOFTIRQ:  Do all of the following:
> +1.	Force block-device interrupts onto some other CPU.
> +2.	Initiate any block I/O and block-I/O polling on other CPUs.
> +3.	Once your application has started, prevent CPU-hotplug operations
> +	from being initiated from tasks that might run on the CPU to
> +	be de-jittered.  (It is OK to force this CPU offline and then
> +	bring it back online before you start your application.)

more repeated text in brackets, maybe a footnote somewhere instead...

> +TASKLET_SOFTIRQ: Do one or more of the following:
> +1.	Avoid use of drivers that use tasklets.
> +2.	Convert all drivers that you must use from tasklets to workqueues.
> +3.	Force interrupts for drivers using tasklets onto other CPUs,
> +	and also do I/O involving these drivers on other CPUs.

How do I check which drivers use tasklets?

> +SCHED_SOFTIRQ: Do all of the following:
> +1.	Avoid sending scheduler IPIs to the CPU to be de-jittered,
> +	for example, ensure that at most one runnable kthread is

To which sentence does "for example" belong to? Depending on the answer,
you can split that sentence.

> +	present on that CPU.  If a thread awakens that expects
> +	to run on the de-jittered CPU, the scheduler will send

"If a thread expecting to run ont the de-jittered CPU awakens, the
scheduler..."

> +	an IPI that can result in a subsequent SCHED_SOFTIRQ.
> +2.	Build with CONFIG_RCU_NOCB_CPU=y, CONFIG_RCU_NOCB_CPU_ALL=y,
> +	CONFIG_NO_HZ_FULL=y, and in addition ensure that the CPU

commas:

			  , and, in addition, ensure...


> +	to be de-jittered is marked as an adaptive-ticks CPU using the
> +	"nohz_full=" boot parameter.  This reduces the number of
> +	scheduler-clock interrupts that the de-jittered CPU receives,
> +	minimizing its chances of being selected to do load balancing,

I don't think there's a "," if the "which... " part refers to the
previous "load balancing" and not to the whole sentence.

> +	which happens in SCHED_SOFTIRQ context.
> +3.	To the extent possible, keep the CPU out of the kernel when it
> +	is non-idle, for example, by avoiding system calls and by
> +	forcing both kernel threads and interrupts to execute elsewhere.

This time "for example" reads ok.

> +	This further reduces the number of scheduler-clock interrupts
> +	that the de-jittered CPU receives.

s/that/which/ would suit better here IMHO.

> +HRTIMER_SOFTIRQ:  Do all of the following:
> +1.	To the extent possible, keep the CPU out of the kernel when it
> +	is non-idle, for example, by avoiding system calls and by forcing
> +	both kernel threads and interrupts to execute elsewhere.

Ok, I think I get your "for example" usage pattern.

"blabablabla. For example, do blabalbal."

I think that would be a bit more readable.

> +2.	Build with CONFIG_HOTPLUG_CPU=y.  Once boot completes, force the
> +	CPU offline, then bring it back online.  This forces recurring
> +	timers to migrate elsewhere.  If you are concerned with multiple
> +	CPUs, force them all offline before bringing the first one
> +	back online.

Same question: do the timers get migrated back when the CPU reappears
online?

> +RCU_SOFTIRQ:  Do at least one of the following:
> +1.	Offload callbacks and keep the CPU in either dyntick-idle or
> +	adaptive-ticks state by doing all of the following:
> +	a.	Build with CONFIG_RCU_NOCB_CPU=y, CONFIG_RCU_NOCB_CPU_ALL=y,
> +		CONFIG_NO_HZ_FULL=y, and in addition ensure that the CPU

				   , and, in addition, 

> +		to be de-jittered is marked as an adaptive-ticks CPU
> +		using the "nohz_full=" boot parameter.	Bind the rcuo
> +		kthreads to housekeeping CPUs that can tolerate OS jitter.

					      which

> +	b.	To the extent possible, keep the CPU out of the kernel
> +		when it is non-idle, for example, by avoiding system
> +		calls and by forcing both kernel threads and interrupts
> +		to execute elsewhere.
> +2.	Enable RCU to do its processing remotely via dyntick-idle by
> +	doing all of the following:
> +	a.	Build with CONFIG_NO_HZ=y and CONFIG_RCU_FAST_NO_HZ=y.
> +	b.	Ensure that the CPU goes idle frequently, allowing other

I'm ensuring that by selecting the proper workload which has idle
breathers?

> +		CPUs to detect that it has passed through an RCU quiescent
> +		state.	If the kernel is built with CONFIG_NO_HZ_FULL=y,
> +		userspace execution also allows other CPUs to detect that
> +		the CPU in question has passed through a quiescent state.
> +	c.	To the extent possible, keep the CPU out of the kernel
> +		when it is non-idle, for example, by avoiding system
> +		calls and by forcing both kernel threads and interrupts
> +		to execute elsewhere.
> +
> +Name: rcuc/%u
> +Purpose: Execute RCU callbacks in CONFIG_RCU_BOOST=y kernels.
> +To reduce corresponding OS jitter, do at least one of the following:
> +1.	Build the kernel with CONFIG_PREEMPT=n.  This prevents these
> +	kthreads from being created in the first place, and also prevents
> +	RCU priority boosting from ever being required.  This approach

"... this obviates the need for RCU priority boosting."

> +	is feasible for workloads that do not require high degrees of
> +	responsiveness.
> +2.	Build the kernel with CONFIG_RCU_BOOST=n.  This prevents these
> +	kthreads from being created in the first place.  This approach
> +	is feasible only if your workload never requires RCU priority
> +	boosting, for example, if you ensure frequent idle time on all
> +	CPUs that might execute within the kernel.
> +3.	Build with CONFIG_RCU_NOCB_CPU=y and CONFIG_RCU_NOCB_CPU_ALL=y,
> +	which offloads all RCU callbacks to kthreads that can be moved
> +	off of CPUs susceptible to OS jitter.  This approach prevents the
> +	rcuc/%u kthreads from having any work to do, so that they are
> +	never awakened.
> +4.	Ensure that the CPU never enters the kernel and in particular

						   , and, in particular, 

> +	avoid initiating any CPU hotplug operations on this CPU.  This is
> +	another way of preventing any callbacks from being queued on the
> +	CPU, again preventing the rcuc/%u kthreads from having any work
> +	to do.
> +
> +Name: rcuob/%d, rcuop/%d, and rcuos/%d
> +Purpose: Offload RCU callbacks from the corresponding CPU.
> +To reduce corresponding OS jitter, do at least one of the following:
> +1.	Use affinity, cgroups, or other mechanism to force these kthreads
> +	to execute on some other CPU.
> +2.	Build with CONFIG_RCU_NOCB_CPUS=n, which will prevent these
> +	kthreads from being created in the first place.  However,
> +	please note that this will not eliminate the corresponding

can we drop "corresponding" here?

> +	OS jitter, but will instead shift it to RCU_SOFTIRQ.
> +
> +Name: watchdog/%u
> +Purpose: Detect software lockups on each CPU.
> +To reduce corresponding OS jitter, do at least one of the following:

ditto.

> +1.	Build with CONFIG_LOCKUP_DETECTOR=n, which will prevent these
> +	kthreads from being created in the first place.
> +2.	Echo a zero to /proc/sys/kernel/watchdog to disable the
> +	watchdog timer.
> +3.	Echo a large number of /proc/sys/kernel/watchdog_thresh in
> +	order to reduce the frequency of OS jitter due to the watchdog
> +	timer down to a level that is acceptable for your workload.


-- 
Regards/Gruss,
    Boris.

Sent from a fat crate under my desk. Formatting is fine.
--

^ permalink raw reply	[flat|nested] 35+ messages in thread

* [PATCH documentation 2/2] kthread: Document ways of reducing OS jitter due to per-CPU kthreads
  2013-04-16 16:41 ` [PATCH documentation 1/2] nohz_full: Add documentation Paul E. McKenney
@ 2013-04-16 16:41   ` Paul E. McKenney
  2013-04-21 19:37     ` Borislav Petkov
  0 siblings, 1 reply; 35+ messages in thread
From: Paul E. McKenney @ 2013-04-16 16:41 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, sbw, Paul E. McKenney, Frederic Weisbecker,
	Steven Rostedt, Borislav Petkov, Arjan van de Ven, Kevin Hilman,
	Christoph Lameter, Thomas Gleixner, Olivier Baetz

From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

The Linux kernel uses a number of per-CPU kthreads, any of which might
contribute to OS jitter at any time.  The usual approach to normal
kthreads, namely to bind them to a "housekeeping" CPU, does not work
with these kthreads because they cannot operate correctly if moved to
some other CPU.  This commit therefore lists ways of controlling OS
jitter from the Linux kernel's per-CPU kthreads.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Kevin Hilman <khilman@linaro.org>
Cc: Christoph Lameter <cl@linux.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Olivier Baetz <olivier.baetz@novasparks.com>
Reviewed-by: Randy Dunlap <rdunlap@infradead.org>
---
 Documentation/kernel-per-CPU-kthreads.txt | 186 ++++++++++++++++++++++++++++++
 1 file changed, 186 insertions(+)
 create mode 100644 Documentation/kernel-per-CPU-kthreads.txt

diff --git a/Documentation/kernel-per-CPU-kthreads.txt b/Documentation/kernel-per-CPU-kthreads.txt
new file mode 100644
index 0000000..bfecc1c
--- /dev/null
+++ b/Documentation/kernel-per-CPU-kthreads.txt
@@ -0,0 +1,186 @@
+REDUCING OS JITTER DUE TO PER-CPU KTHREADS
+
+This document lists per-CPU kthreads in the Linux kernel and presents
+options to control OS jitter due to these kthreads.  Note that kthreads
+that are not per-CPU are not listed here -- to reduce OS jitter from
+non-per-CPU kthreads, bind them to a "housekeeping" CPU that is dedicated
+to such work.
+
+
+REFERENCES
+
+o	Documentation/IRQ-affinity.txt:  Binding interrupts to sets of CPUs.
+
+o	Documentation/cgroups:  Using cgroups to bind tasks to sets of CPUs.
+
+o	man taskset:  Using the taskset command to bind tasks to sets
+	of CPUs.
+
+o	man sched_setaffinity:  Using the sched_setaffinity() system
+	call to bind tasks to sets of CPUs.
+
+
+KTHREADS
+
+Name: ehca_comp/%u
+Purpose: Periodically process Infiniband-related work.
+To reduce corresponding OS jitter, do any of the following:
+1.	Don't use EHCA Infiniband hardware.  This will prevent these
+	kthreads from being created in the first place.  (This will
+	work for most people, as this hardware, though important,
+	is relatively old and is produced in relatively low unit
+	volumes.)
+2.	Do all EHCA-Infiniband-related work on other CPUs, including
+	interrupts.
+
+
+Name: irq/%d-%s
+Purpose: Handle threaded interrupts.
+To reduce corresponding OS jitter, do the following:
+1.	Use irq affinity to force the irq threads to execute on
+	some other CPU.
+
+Name: kcmtpd_ctr_%d
+Purpose: Handle Bluetooth work.
+To reduce corresponding OS jitter, do one of the following:
+1.	Don't use Bluetooth, in which case these kthreads won't be
+	created in the first place.
+2.	Use irq affinity to force Bluetooth-related interrupts to
+	occur on some other CPU and furthermore initiate all
+	Bluetooth activity on some other CPU.
+
+Name: ksoftirqd/%u
+Purpose: Execute softirq handlers when threaded or when under heavy load.
+To reduce corresponding OS jitter, each softirq vector must be handled
+separately as follows:
+TIMER_SOFTIRQ:  Do all of the following:
+1.	To the extent possible, keep the CPU out of the kernel when it
+	is non-idle, for example, by avoiding system calls and by forcing
+	both kernel threads and interrupts to execute elsewhere.
+2.	Build with CONFIG_HOTPLUG_CPU=y.  After boot completes, force
+	the CPU offline, then bring it back online.  This forces
+	recurring timers to migrate elsewhere.	If you are concerned
+	with multiple CPUs, force them all offline before bringing the
+	first one back online.
+NET_TX_SOFTIRQ and NET_RX_SOFTIRQ:  Do all of the following:
+1.	Force networking interrupts onto other CPUs.
+2.	Initiate any network I/O on other CPUs.
+3.	Once your application has started, prevent CPU-hotplug operations
+	from being initiated from tasks that might run on the CPU to
+	be de-jittered.  (It is OK to force this CPU offline and then
+	bring it back online before you start your application.)
+BLOCK_SOFTIRQ:  Do all of the following:
+1.	Force block-device interrupts onto some other CPU.
+2.	Initiate any block I/O on other CPUs.
+3.	Once your application has started, prevent CPU-hotplug operations
+	from being initiated from tasks that might run on the CPU to
+	be de-jittered.  (It is OK to force this CPU offline and then
+	bring it back online before you start your application.)
+BLOCK_IOPOLL_SOFTIRQ:  Do all of the following:
+1.	Force block-device interrupts onto some other CPU.
+2.	Initiate any block I/O and block-I/O polling on other CPUs.
+3.	Once your application has started, prevent CPU-hotplug operations
+	from being initiated from tasks that might run on the CPU to
+	be de-jittered.  (It is OK to force this CPU offline and then
+	bring it back online before you start your application.)
+TASKLET_SOFTIRQ: Do one or more of the following:
+1.	Avoid use of drivers that use tasklets.
+2.	Convert all drivers that you must use from tasklets to workqueues.
+3.	Force interrupts for drivers using tasklets onto other CPUs,
+	and also do I/O involving these drivers on other CPUs.
+SCHED_SOFTIRQ: Do all of the following:
+1.	Avoid sending scheduler IPIs to the CPU to be de-jittered,
+	for example, ensure that at most one runnable kthread is
+	present on that CPU.  If a thread awakens that expects
+	to run on the de-jittered CPU, the scheduler will send
+	an IPI that can result in a subsequent SCHED_SOFTIRQ.
+2.	Build with CONFIG_RCU_NOCB_CPU=y, CONFIG_RCU_NOCB_CPU_ALL=y,
+	CONFIG_NO_HZ_FULL=y, and in addition ensure that the CPU
+	to be de-jittered is marked as an adaptive-ticks CPU using the
+	"nohz_full=" boot parameter.  This reduces the number of
+	scheduler-clock interrupts that the de-jittered CPU receives,
+	minimizing its chances of being selected to do load balancing,
+	which happens in SCHED_SOFTIRQ context.
+3.	To the extent possible, keep the CPU out of the kernel when it
+	is non-idle, for example, by avoiding system calls and by
+	forcing both kernel threads and interrupts to execute elsewhere.
+	This further reduces the number of scheduler-clock interrupts
+	that the de-jittered CPU receives.
+HRTIMER_SOFTIRQ:  Do all of the following:
+1.	To the extent possible, keep the CPU out of the kernel when it
+	is non-idle, for example, by avoiding system calls and by forcing
+	both kernel threads and interrupts to execute elsewhere.
+2.	Build with CONFIG_HOTPLUG_CPU=y.  Once boot completes, force the
+	CPU offline, then bring it back online.  This forces recurring
+	timers to migrate elsewhere.  If you are concerned with multiple
+	CPUs, force them all offline before bringing the first one
+	back online.
+RCU_SOFTIRQ:  Do at least one of the following:
+1.	Offload callbacks and keep the CPU in either dyntick-idle or
+	adaptive-ticks state by doing all of the following:
+	a.	Build with CONFIG_RCU_NOCB_CPU=y, CONFIG_RCU_NOCB_CPU_ALL=y,
+		CONFIG_NO_HZ_FULL=y, and in addition ensure that the CPU
+		to be de-jittered is marked as an adaptive-ticks CPU
+		using the "nohz_full=" boot parameter.	Bind the rcuo
+		kthreads to housekeeping CPUs that can tolerate OS jitter.
+	b.	To the extent possible, keep the CPU out of the kernel
+		when it is non-idle, for example, by avoiding system
+		calls and by forcing both kernel threads and interrupts
+		to execute elsewhere.
+2.	Enable RCU to do its processing remotely via dyntick-idle by
+	doing all of the following:
+	a.	Build with CONFIG_NO_HZ=y and CONFIG_RCU_FAST_NO_HZ=y.
+	b.	Ensure that the CPU goes idle frequently, allowing other
+		CPUs to detect that it has passed through an RCU quiescent
+		state.	If the kernel is built with CONFIG_NO_HZ_FULL=y,
+		userspace execution also allows other CPUs to detect that
+		the CPU in question has passed through a quiescent state.
+	c.	To the extent possible, keep the CPU out of the kernel
+		when it is non-idle, for example, by avoiding system
+		calls and by forcing both kernel threads and interrupts
+		to execute elsewhere.
+
+Name: rcuc/%u
+Purpose: Execute RCU callbacks in CONFIG_RCU_BOOST=y kernels.
+To reduce corresponding OS jitter, do at least one of the following:
+1.	Build the kernel with CONFIG_PREEMPT=n.  This prevents these
+	kthreads from being created in the first place, and also prevents
+	RCU priority boosting from ever being required.  This approach
+	is feasible for workloads that do not require high degrees of
+	responsiveness.
+2.	Build the kernel with CONFIG_RCU_BOOST=n.  This prevents these
+	kthreads from being created in the first place.  This approach
+	is feasible only if your workload never requires RCU priority
+	boosting, for example, if you ensure frequent idle time on all
+	CPUs that might execute within the kernel.
+3.	Build with CONFIG_RCU_NOCB_CPU=y and CONFIG_RCU_NOCB_CPU_ALL=y,
+	which offloads all RCU callbacks to kthreads that can be moved
+	off of CPUs susceptible to OS jitter.  This approach prevents the
+	rcuc/%u kthreads from having any work to do, so that they are
+	never awakened.
+4.	Ensure that the CPU never enters the kernel and in particular
+	avoid initiating any CPU hotplug operations on this CPU.  This is
+	another way of preventing any callbacks from being queued on the
+	CPU, again preventing the rcuc/%u kthreads from having any work
+	to do.
+
+Name: rcuob/%d, rcuop/%d, and rcuos/%d
+Purpose: Offload RCU callbacks from the corresponding CPU.
+To reduce corresponding OS jitter, do at least one of the following:
+1.	Use affinity, cgroups, or other mechanism to force these kthreads
+	to execute on some other CPU.
+2.	Build with CONFIG_RCU_NOCB_CPUS=n, which will prevent these
+	kthreads from being created in the first place.  However,
+	please note that this will not eliminate the corresponding
+	OS jitter, but will instead shift it to RCU_SOFTIRQ.
+
+Name: watchdog/%u
+Purpose: Detect software lockups on each CPU.
+To reduce corresponding OS jitter, do at least one of the following:
+1.	Build with CONFIG_LOCKUP_DETECTOR=n, which will prevent these
+	kthreads from being created in the first place.
+2.	Echo a zero to /proc/sys/kernel/watchdog to disable the
+	watchdog timer.
+3.	Echo a large number of /proc/sys/kernel/watchdog_thresh in
+	order to reduce the frequency of OS jitter due to the watchdog
+	timer down to a level that is acceptable for your workload.
-- 
1.8.1.5


^ permalink raw reply related	[flat|nested] 35+ messages in thread

end of thread, other threads:[~2013-04-27 13:27 UTC | newest]

Thread overview: 35+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-04-11 16:05 [PATCH documentation 0/2] OS-jitter documentation Paul E. McKenney
2013-04-11 16:05 ` [PATCH documentation 1/2] nohz1: Add documentation Paul E. McKenney
2013-04-11 16:05   ` [PATCH documentation 2/2] kthread: Document ways of reducing OS jitter due to per-CPU kthreads Paul E. McKenney
2013-04-11 17:18     ` Randy Dunlap
2013-04-11 18:40       ` Paul E. McKenney
2013-04-11 20:09         ` Randy Dunlap
2013-04-11 21:00           ` Paul E. McKenney
2013-04-11 16:48   ` [PATCH documentation 1/2] nohz1: Add documentation Randy Dunlap
2013-04-11 17:09     ` Paul E. McKenney
2013-04-11 17:14   ` Arjan van de Ven
2013-04-11 18:27     ` Paul E. McKenney
2013-04-11 18:43       ` Dipankar Sarma
2013-04-11 19:14         ` Paul E. McKenney
2013-04-11 18:25   ` Borislav Petkov
2013-04-11 19:13     ` Paul E. McKenney
2013-04-11 20:19       ` Borislav Petkov
2013-04-11 21:01         ` Paul E. McKenney
2013-04-12  8:05       ` Peter Zijlstra
2013-04-12 17:54         ` Paul E. McKenney
2013-04-12 17:56           ` Arjan van de Ven
2013-04-12 20:39             ` Paul E. McKenney
2013-04-15 16:00             ` Christoph Lameter
2013-04-15 16:41               ` Arjan van de Ven
2013-04-15 16:53                 ` Christoph Lameter
2013-04-15 17:21                   ` Arjan van de Ven
2013-04-19 21:01   ` Kevin Hilman
2013-04-19 21:47     ` Paul E. McKenney
2013-04-27 13:26   ` Frederic Weisbecker
2013-04-16 16:40 PATCH documentation 0/2] OS-jitter documentation Paul E. McKenney
2013-04-16 16:41 ` [PATCH documentation 1/2] nohz_full: Add documentation Paul E. McKenney
2013-04-16 16:41   ` [PATCH documentation 2/2] kthread: Document ways of reducing OS jitter due to per-CPU kthreads Paul E. McKenney
2013-04-21 19:37     ` Borislav Petkov
2013-04-23  4:03       ` Paul E. McKenney
2013-04-25 10:23         ` Borislav Petkov
2013-04-25 15:52           ` Paul E. McKenney
2013-04-25 20:59             ` Thomas Gleixner
2013-04-25 21:23               ` Paul E. McKenney

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).