* [PATCH -tip 0/9] locktorture: Improve and expand lock torturing
@ 2014-09-12  3:40 Davidlohr Bueso
  2014-09-12  3:40 ` [PATCH 1/9] locktorture: Rename locktorture_runnable parameter Davidlohr Bueso
                   ` (8 more replies)
  0 siblings, 9 replies; 27+ messages in thread
From: Davidlohr Bueso @ 2014-09-12  3:40 UTC (permalink / raw)
  To: paulmck; +Cc: peterz, mingo, linux-kernel, dave

This set includes general updates throughout the locktorture code.
In particular, support for reader locks is added, as is torturing of
mutexes and rwsems. With the recent locking changes, it doesn't hurt
to improve our testing infrastructure, and torturing is definitely
part of that. For specific details about each change, please consult
the actual patches.

o patches 1, 4, 9: misc changes.
o patch 2: new doc, based on rcutorture's.
o patches 3, 8: torture new locking primitives.
o patches 5, 7: add support for reader locks.
o patch 6: fix a minor race in the torture cleanup path.

The patches are in no particular order; please consider for v3.18.

Davidlohr Bueso (9):
  locktorture: Rename locktorture_runnable parameter
  locktorture: Add documentation
  locktorture: Support mutexes
  locktorture: Teach about lock debugging
  locktorture: Make statistics generic
  torture: Address race in module cleanup
  locktorture: Add infrastructure for torturing read locks
  locktorture: Support rwsems
  locktorture: Introduce torture context

 Documentation/locking/locktorture.txt | 140 ++++++++++++
 include/linux/torture.h               |   3 +-
 kernel/locking/locktorture.c          | 392 ++++++++++++++++++++++++++++------
 kernel/rcu/rcutorture.c               |   3 +-
 kernel/torture.c                      |  16 +-
 5 files changed, 480 insertions(+), 74 deletions(-)
 create mode 100644 Documentation/locking/locktorture.txt

-- 
1.8.4.5

* [PATCH 1/9] locktorture: Rename locktorture_runnable parameter
  2014-09-12  3:40 [PATCH -tip 0/9] locktorture: Improve and expand lock torturing Davidlohr Bueso
@ 2014-09-12  3:40 ` Davidlohr Bueso
  2014-09-12 17:40   ` Paul E. McKenney
  2014-09-12  3:40 ` [PATCH 2/9] locktorture: Add documentation Davidlohr Bueso
                   ` (7 subsequent siblings)
  8 siblings, 1 reply; 27+ messages in thread
From: Davidlohr Bueso @ 2014-09-12  3:40 UTC (permalink / raw)
  To: paulmck; +Cc: peterz, mingo, linux-kernel, dave, Davidlohr Bueso

... to just 'torture_runnable'. It follows other variable naming
and is shorter.

Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
---
 kernel/locking/locktorture.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/kernel/locking/locktorture.c b/kernel/locking/locktorture.c
index 0955b88..8c770b2 100644
--- a/kernel/locking/locktorture.c
+++ b/kernel/locking/locktorture.c
@@ -87,9 +87,9 @@ static struct lock_writer_stress_stats *lwsa;
 #else
 #define LOCKTORTURE_RUNNABLE_INIT 0
 #endif
-int locktorture_runnable = LOCKTORTURE_RUNNABLE_INIT;
-module_param(locktorture_runnable, int, 0444);
-MODULE_PARM_DESC(locktorture_runnable, "Start locktorture at module init");
+int torture_runnable = LOCKTORTURE_RUNNABLE_INIT;
+module_param(torture_runnable, int, 0444);
+MODULE_PARM_DESC(torture_runnable, "Start locktorture at module init");
 
 /* Forward reference. */
 static void lock_torture_cleanup(void);
@@ -355,7 +355,7 @@ static int __init lock_torture_init(void)
 		&lock_busted_ops, &spin_lock_ops, &spin_lock_irq_ops,
 	};
 
-	if (!torture_init_begin(torture_type, verbose, &locktorture_runnable))
+	if (!torture_init_begin(torture_type, verbose, &torture_runnable))
 		return -EBUSY;
 
 	/* Process args and tell the world that the torturer is on the job. */
-- 
1.8.4.5

* [PATCH 2/9] locktorture: Add documentation
  2014-09-12  3:40 [PATCH -tip 0/9] locktorture: Improve and expand lock torturing Davidlohr Bueso
  2014-09-12  3:40 ` [PATCH 1/9] locktorture: Rename locktorture_runnable parameter Davidlohr Bueso
@ 2014-09-12  3:40 ` Davidlohr Bueso
  2014-09-12  5:28   ` Davidlohr Bueso
  2014-09-13  1:10   ` Randy Dunlap
  2014-09-12  3:40 ` [PATCH 3/9] locktorture: Support mutexes Davidlohr Bueso
                   ` (6 subsequent siblings)
  8 siblings, 2 replies; 27+ messages in thread
From: Davidlohr Bueso @ 2014-09-12  3:40 UTC (permalink / raw)
  To: paulmck; +Cc: peterz, mingo, linux-kernel, dave, Davidlohr Bueso

Just like Documentation/RCU/torture.txt, begin a document for the
locktorture module. This module is still pretty green, so I have
just added some specific sections to the doc (general desc, params,
usage, etc.). Further development should update the file.

Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
---
 Documentation/locking/locktorture.txt | 128 ++++++++++++++++++++++++++++++++++
 1 file changed, 128 insertions(+)
 create mode 100644 Documentation/locking/locktorture.txt

diff --git a/Documentation/locking/locktorture.txt b/Documentation/locking/locktorture.txt
new file mode 100644
index 0000000..c0ab969
--- /dev/null
+++ b/Documentation/locking/locktorture.txt
@@ -0,0 +1,128 @@
+Kernel Lock Torture Test Operation
+
+CONFIG_LOCK_TORTURE_TEST
+
+The CONFIG_LOCK_TORTURE_TEST config option provides a kernel module
+that runs torture tests on core kernel locking primitives. The kernel
+module, 'locktorture', may be built after the fact on the running
+kernel to be tested, if desired. The test periodically outputs status
+messages via printk(), which can be examined via dmesg (perhaps
+grepping for "torture").  The test starts when the module is loaded,
+and stops when the module is unloaded. This program is based on how RCU
+is tortured, via rcutorture.
+
+This torture test consists of creating a number of kernel threads that
+acquire the lock and hold it for a specific amount of time, thus simulating
+different critical region behaviors. The amount of contention on the lock
+can be varied by enlarging this critical region hold time and/or by
+creating more kthreads.
+
+
+MODULE PARAMETERS
+
+This module has the following parameters:
+
+
+	    ** Locktorture-specific **
+
+nwriters_stress   Number of kernel threads that will stress exclusive lock
+		  ownership (writers). The default value is twice the number
+		  of online CPUs.
+
+torture_type	  Type of lock to torture. By default, only spinlocks will
+		  be tortured. This module can torture the following locks,
+		  with string values as follows:
+
+		     o "lock_busted": Simulates a buggy lock implementation.
+
+		     o "spin_lock": spin_lock() and spin_unlock() pairs.
+
+		     o "spin_lock_irq": spin_lock_irq() and spin_unlock_irq()
+					pairs.
+
+torture_runnable  Start locktorture at module init. By default it will begin
+		  once the module is loaded.
+
+
+	    ** Torture-framework (RCU + locking) **
+
+shutdown_secs	  The number of seconds to run the test before terminating
+		  the test and powering off the system.  The default is
+		  zero, which disables test termination and system shutdown.
+		  This capability is useful for automated testing.
+
+onoff_interval	  The number of seconds between each attempt to execute a
+		  randomly selected CPU-hotplug operation.  Defaults to
+		  zero, which disables CPU hotplugging.  In HOTPLUG_CPU=n
+		  kernels, locktorture will silently refuse to do any
+		  CPU-hotplug operations regardless of what value is
+		  specified for onoff_interval.
+
+onoff_holdoff	  The number of seconds to wait until starting CPU-hotplug
+		  operations.  This would normally only be used when
+		  locktorture was built into the kernel and started
+		  automatically at boot time, in which case it is useful
+		  in order to avoid confusing boot-time code with CPUs
+		  coming and going. This parameter is only useful if
+		  CONFIG_HOTPLUG_CPU is enabled.
+
+stat_interval	  Number of seconds between statistics-related printk()s.
+		  By default, locktorture will report stats every 60 seconds.
+		  Setting the interval to zero causes the statistics to
+		  be printed -only- when the module is unloaded, rather
+		  than periodically while the test runs.
+
+stutter		  The length of time to run the test before pausing for this
+		  same period of time.  Defaults to "stutter=5", so as
+		  to run and pause for (roughly) five-second intervals.
+		  Specifying "stutter=0" causes the test to run continuously
+		  without pausing, which is the old default behavior.
+
+shuffle_interval  The number of seconds to keep the test threads affinitied
+		  to a particular subset of the CPUs, defaults to 3 seconds.
+		  Used in conjunction with test_no_idle_hz.
+
+verbose		  Enable verbose debugging printk()s. Enabled
+		  by default. This extra information is mostly related to
+		  high-level errors and reports from the main 'torture'
+		  framework.
+
+
+STATISTICS
+
+Statistics are printed in the following format:
+
+spin_lock-torture: Writes:  Total: 93746064  Max/Min: 0/0   Fail: 0
+   (A)				   (B)		  (C)	       (D)
+
+(A): Lock type that is being tortured -- torture_type parameter.
+
+(B): Number of times the lock was acquired.
+
+(C): Min and max number of times threads failed to acquire the lock.
+
+(D): true/false values if there were errors acquiring the lock. This should
+     -only- be positive if there is a bug in the locking primitive's
+     implementation. Otherwise a lock should never fail (ie: spin_lock()).
+     Of course, the same applies for (C), above. A dummy example of this is
+     the "lock_busted" type.
+
+USAGE
+
+The following script may be used to torture locks:
+
+	#!/bin/sh
+
+	modprobe locktorture
+	sleep 3600
+	rmmod locktorture
+	dmesg | grep torture:
+
+The output can be manually inspected for the error flag of "!!!".
+One could of course create a more elaborate script that automatically
+checked for such errors.  The "rmmod" command forces a "SUCCESS",
+"FAILURE", or "RCU_HOTPLUG" indication to be printk()ed.  The first
+two are self-explanatory, while the last indicates that while there
+were no locking failures, CPU-hotplug problems were detected.
+
+Also see: Documentation/RCU/torture.txt
-- 
1.8.4.5

* [PATCH 3/9] locktorture: Support mutexes
  2014-09-12  3:40 [PATCH -tip 0/9] locktorture: Improve and expand lock torturing Davidlohr Bueso
  2014-09-12  3:40 ` [PATCH 1/9] locktorture: Rename locktorture_runnable parameter Davidlohr Bueso
  2014-09-12  3:40 ` [PATCH 2/9] locktorture: Add documentation Davidlohr Bueso
@ 2014-09-12  3:40 ` Davidlohr Bueso
  2014-09-12 18:02   ` Paul E. McKenney
  2014-09-12  3:40 ` [PATCH 4/9] locktorture: Teach about lock debugging Davidlohr Bueso
                   ` (5 subsequent siblings)
  8 siblings, 1 reply; 27+ messages in thread
From: Davidlohr Bueso @ 2014-09-12  3:40 UTC (permalink / raw)
  To: paulmck; +Cc: peterz, mingo, linux-kernel, dave, Davidlohr Bueso

Add a "mutex_lock" torture test. The main difference from the already
existing spinlock tests is that the latency of the critical region
is much larger: we randomly delay for either 500 ms (rarely, to force
massive contention) or, otherwise, 20 ms. While this can considerably
reduce the number of writes compared to non-blocking locks, if run long
enough it can have the same torturous effect. Furthermore it is more
representative of mutex hold times and can better stress things like
lock thrashing.

Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
---
 Documentation/locking/locktorture.txt |  2 ++
 kernel/locking/locktorture.c          | 41 +++++++++++++++++++++++++++++++++--
 2 files changed, 41 insertions(+), 2 deletions(-)

diff --git a/Documentation/locking/locktorture.txt b/Documentation/locking/locktorture.txt
index c0ab969..6b1e7ca 100644
--- a/Documentation/locking/locktorture.txt
+++ b/Documentation/locking/locktorture.txt
@@ -40,6 +40,8 @@ torture_type	  Type of lock to torture. By default, only spinlocks will
 		     o "spin_lock_irq": spin_lock_irq() and spin_unlock_irq()
 					pairs.
 
+		     o "mutex_lock": mutex_lock() and mutex_unlock() pairs.
+
 torture_runnable  Start locktorture at module init. By default it will begin
 		  once the module is loaded.
 
diff --git a/kernel/locking/locktorture.c b/kernel/locking/locktorture.c
index 8c770b2..414ba45 100644
--- a/kernel/locking/locktorture.c
+++ b/kernel/locking/locktorture.c
@@ -27,6 +27,7 @@
 #include <linux/kthread.h>
 #include <linux/err.h>
 #include <linux/spinlock.h>
+#include <linux/mutex.h>
 #include <linux/smp.h>
 #include <linux/interrupt.h>
 #include <linux/sched.h>
@@ -66,7 +67,7 @@ torture_param(bool, verbose, true,
 static char *torture_type = "spin_lock";
 module_param(torture_type, charp, 0444);
 MODULE_PARM_DESC(torture_type,
-		 "Type of lock to torture (spin_lock, spin_lock_irq, ...)");
+		 "Type of lock to torture (spin_lock, spin_lock_irq, mutex_lock, ...)");
 
 static atomic_t n_lock_torture_errors;
 
@@ -206,6 +207,42 @@ static struct lock_torture_ops spin_lock_irq_ops = {
 	.name		= "spin_lock_irq"
 };
 
+static DEFINE_MUTEX(torture_mutex);
+
+static int torture_mutex_lock(void) __acquires(torture_mutex)
+{
+	mutex_lock(&torture_mutex);
+	return 0;
+}
+
+static void torture_mutex_delay(struct torture_random_state *trsp)
+{
+	const unsigned long longdelay_ms = 100;
+
+	/* We want a long delay occasionally to force massive contention.  */
+	if (!(torture_random(trsp) %
+	      (nrealwriters_stress * 2000 * longdelay_ms)))
+		mdelay(longdelay_ms * 5);
+	else
+		mdelay(longdelay_ms / 5);
+#ifdef CONFIG_PREEMPT
+	if (!(torture_random(trsp) % (nrealwriters_stress * 20000)))
+		preempt_schedule();  /* Allow test to be preempted. */
+#endif
+}
+
+static void torture_mutex_unlock(void) __releases(torture_mutex)
+{
+	mutex_unlock(&torture_mutex);
+}
+
+static struct lock_torture_ops mutex_lock_ops = {
+	.writelock	= torture_mutex_lock,
+	.write_delay	= torture_mutex_delay,
+	.writeunlock	= torture_mutex_unlock,
+	.name		= "mutex_lock"
+};
+
 /*
  * Lock torture writer kthread.  Repeatedly acquires and releases
  * the lock, checking for duplicate acquisitions.
@@ -352,7 +389,7 @@ static int __init lock_torture_init(void)
 	int i;
 	int firsterr = 0;
 	static struct lock_torture_ops *torture_ops[] = {
-		&lock_busted_ops, &spin_lock_ops, &spin_lock_irq_ops,
+		&lock_busted_ops, &spin_lock_ops, &spin_lock_irq_ops, &mutex_lock_ops,
 	};
 
 	if (!torture_init_begin(torture_type, verbose, &torture_runnable))
-- 
1.8.4.5

* [PATCH 4/9] locktorture: Teach about lock debugging
  2014-09-12  3:40 [PATCH -tip 0/9] locktorture: Improve and expand lock torturing Davidlohr Bueso
                   ` (2 preceding siblings ...)
  2014-09-12  3:40 ` [PATCH 3/9] locktorture: Support mutexes Davidlohr Bueso
@ 2014-09-12  3:40 ` Davidlohr Bueso
  2014-09-12  3:40 ` [PATCH 5/9] locktorture: Make statistics generic Davidlohr Bueso
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 27+ messages in thread
From: Davidlohr Bueso @ 2014-09-12  3:40 UTC (permalink / raw)
  To: paulmck; +Cc: peterz, mingo, linux-kernel, dave, Davidlohr Bueso

Regular locks are very different from locks with debugging enabled. For
instance, mutex debugging forces taking only the slowpaths. As such, the
locktorture module should take this into account when printing related
information -- specifically when printing user-passed parameters, which
seems the right place for such info.

Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
---
 kernel/locking/locktorture.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/kernel/locking/locktorture.c b/kernel/locking/locktorture.c
index 414ba45..a6049fa 100644
--- a/kernel/locking/locktorture.c
+++ b/kernel/locking/locktorture.c
@@ -64,6 +64,7 @@ torture_param(int, stutter, 5, "Number of jiffies to run/halt test, 0=disable");
 torture_param(bool, verbose, true,
 	     "Enable verbose debugging printk()s");
 
+static bool debug_lock = false;
 static char *torture_type = "spin_lock";
 module_param(torture_type, charp, 0444);
 MODULE_PARM_DESC(torture_type,
@@ -349,8 +350,9 @@ lock_torture_print_module_parms(struct lock_torture_ops *cur_ops,
 				const char *tag)
 {
 	pr_alert("%s" TORTURE_FLAG
-		 "--- %s: nwriters_stress=%d stat_interval=%d verbose=%d shuffle_interval=%d stutter=%d shutdown_secs=%d onoff_interval=%d onoff_holdoff=%d\n",
-		 torture_type, tag, nrealwriters_stress, stat_interval, verbose,
+		 "--- %s%s: nwriters_stress=%d stat_interval=%d verbose=%d shuffle_interval=%d stutter=%d shutdown_secs=%d onoff_interval=%d onoff_holdoff=%d\n",
+		 torture_type, tag, debug_lock ? " [debug]": "",
+		 nrealwriters_stress, stat_interval, verbose,
 		 shuffle_interval, stutter, shutdown_secs,
 		 onoff_interval, onoff_holdoff);
 }
@@ -418,6 +420,15 @@ static int __init lock_torture_init(void)
 		nrealwriters_stress = nwriters_stress;
 	else
 		nrealwriters_stress = 2 * num_online_cpus();
+
+#ifdef CONFIG_DEBUG_MUTEXES
+	if (strncmp(torture_type, "mutex", 5) == 0)
+		debug_lock = true;
+#endif
+#ifdef CONFIG_DEBUG_SPINLOCK
+	if (strncmp(torture_type, "spin", 4) == 0)
+		debug_lock = true;
+#endif
 	lock_torture_print_module_parms(cur_ops, "Start of test");
 
 	/* Initialize the statistics so that each run gets its own numbers. */
-- 
1.8.4.5

* [PATCH 5/9] locktorture: Make statistics generic
  2014-09-12  3:40 [PATCH -tip 0/9] locktorture: Improve and expand lock torturing Davidlohr Bueso
                   ` (3 preceding siblings ...)
  2014-09-12  3:40 ` [PATCH 4/9] locktorture: Teach about lock debugging Davidlohr Bueso
@ 2014-09-12  3:40 ` Davidlohr Bueso
  2014-09-12  3:40 ` [PATCH 6/9] torture: Address race in module cleanup Davidlohr Bueso
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 27+ messages in thread
From: Davidlohr Bueso @ 2014-09-12  3:40 UTC (permalink / raw)
  To: paulmck; +Cc: peterz, mingo, linux-kernel, dave, Davidlohr Bueso

The statistics structure can serve equally well for both reader and
writer locks, so simply rename the fields that mention 'write' and
keep the lwsa declaration as-is.

Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
---
 kernel/locking/locktorture.c | 32 ++++++++++++++++----------------
 1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/kernel/locking/locktorture.c b/kernel/locking/locktorture.c
index a6049fa..de703a7 100644
--- a/kernel/locking/locktorture.c
+++ b/kernel/locking/locktorture.c
@@ -78,11 +78,11 @@ static struct task_struct **writer_tasks;
 static int nrealwriters_stress;
 static bool lock_is_write_held;
 
-struct lock_writer_stress_stats {
-	long n_write_lock_fail;
-	long n_write_lock_acquired;
+struct lock_stress_stats {
+	long n_lock_fail;
+	long n_lock_acquired;
 };
-static struct lock_writer_stress_stats *lwsa;
+static struct lock_stress_stats *lwsa; /* writer statistics */
 
 #if defined(MODULE)
 #define LOCKTORTURE_RUNNABLE_INIT 1
@@ -250,7 +250,7 @@ static struct lock_torture_ops mutex_lock_ops = {
  */
 static int lock_torture_writer(void *arg)
 {
-	struct lock_writer_stress_stats *lwsp = arg;
+	struct lock_stress_stats *lwsp = arg;
 	static DEFINE_TORTURE_RANDOM(rand);
 
 	VERBOSE_TOROUT_STRING("lock_torture_writer task started");
@@ -261,9 +261,9 @@ static int lock_torture_writer(void *arg)
 			schedule_timeout_uninterruptible(1);
 		cur_ops->writelock();
 		if (WARN_ON_ONCE(lock_is_write_held))
-			lwsp->n_write_lock_fail++;
+			lwsp->n_lock_fail++;
 		lock_is_write_held = 1;
-		lwsp->n_write_lock_acquired++;
+		lwsp->n_lock_acquired++;
 		cur_ops->write_delay(&rand);
 		lock_is_write_held = 0;
 		cur_ops->writeunlock();
@@ -281,17 +281,17 @@ static void lock_torture_printk(char *page)
 	bool fail = 0;
 	int i;
 	long max = 0;
-	long min = lwsa[0].n_write_lock_acquired;
+	long min = lwsa[0].n_lock_acquired;
 	long long sum = 0;
 
 	for (i = 0; i < nrealwriters_stress; i++) {
-		if (lwsa[i].n_write_lock_fail)
+		if (lwsa[i].n_lock_fail)
 			fail = true;
-		sum += lwsa[i].n_write_lock_acquired;
-		if (max < lwsa[i].n_write_lock_fail)
-			max = lwsa[i].n_write_lock_fail;
-		if (min > lwsa[i].n_write_lock_fail)
-			min = lwsa[i].n_write_lock_fail;
+		sum += lwsa[i].n_lock_acquired;
+		if (max < lwsa[i].n_lock_fail)
+			max = lwsa[i].n_lock_fail;
+		if (min > lwsa[i].n_lock_fail)
+			min = lwsa[i].n_lock_fail;
 	}
 	page += sprintf(page, "%s%s ", torture_type, TORTURE_FLAG);
 	page += sprintf(page,
@@ -441,8 +441,8 @@ static int __init lock_torture_init(void)
 		goto unwind;
 	}
 	for (i = 0; i < nrealwriters_stress; i++) {
-		lwsa[i].n_write_lock_fail = 0;
-		lwsa[i].n_write_lock_acquired = 0;
+		lwsa[i].n_lock_fail = 0;
+		lwsa[i].n_lock_acquired = 0;
 	}
 
 	/* Start up the kthreads. */
-- 
1.8.4.5

* [PATCH 6/9] torture: Address race in module cleanup
  2014-09-12  3:40 [PATCH -tip 0/9] locktorture: Improve and expand lock torturing Davidlohr Bueso
                   ` (4 preceding siblings ...)
  2014-09-12  3:40 ` [PATCH 5/9] locktorture: Make statistics generic Davidlohr Bueso
@ 2014-09-12  3:40 ` Davidlohr Bueso
  2014-09-12 18:04   ` Paul E. McKenney
  2014-09-12  4:40 ` [PATCH 7/9] locktorture: Add infrastructure for torturing read locks Davidlohr Bueso
                   ` (2 subsequent siblings)
  8 siblings, 1 reply; 27+ messages in thread
From: Davidlohr Bueso @ 2014-09-12  3:40 UTC (permalink / raw)
  To: paulmck; +Cc: peterz, mingo, linux-kernel, dave, Davidlohr Bueso

When performing module cleanups by calling torture_cleanup(), the
'torture_type' string is nullified. However, callers are not necessarily
done, and might still need to reference the variable. This impacts
both rcutorture and locktorture, causing messages such as:

[   94.226618] (null)-torture: Stopping lock_torture_writer task
[   94.226624] (null)-torture: Stopping lock_torture_stats task

Thus delay this operation until the very end of the cleanup process.
The consequence (which shouldn't matter for this kind of program) is,
of course, that we widen the window between rmmod and modprobing,
for instance in module_torture_begin().
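
The resulting call pattern on the locktorture side then looks roughly
like the following (a condensed sketch of the cleanup path as modified
below, success case only, not the complete function):

static void lock_torture_cleanup(void)
{
	if (torture_cleanup_begin())	/* cleanup already in progress */
		return;

	/* ... stop the writer and stats kthreads, print final stats ... */

	/* torture_type is still valid here, so nothing prints as "(null)" */
	lock_torture_print_module_parms(cur_ops, "End of test: SUCCESS");

	torture_cleanup_end();		/* only now is torture_type nullified */
}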

Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
---
 include/linux/torture.h      |  3 ++-
 kernel/locking/locktorture.c |  3 ++-
 kernel/rcu/rcutorture.c      |  3 ++-
 kernel/torture.c             | 16 +++++++++++++---
 4 files changed, 19 insertions(+), 6 deletions(-)

diff --git a/include/linux/torture.h b/include/linux/torture.h
index 5ca58fc..301b628 100644
--- a/include/linux/torture.h
+++ b/include/linux/torture.h
@@ -77,7 +77,8 @@ int torture_stutter_init(int s);
 /* Initialization and cleanup. */
 bool torture_init_begin(char *ttype, bool v, int *runnable);
 void torture_init_end(void);
-bool torture_cleanup(void);
+bool torture_cleanup_begin(void);
+void torture_cleanup_end(void);
 bool torture_must_stop(void);
 bool torture_must_stop_irq(void);
 void torture_kthread_stopping(char *title);
diff --git a/kernel/locking/locktorture.c b/kernel/locking/locktorture.c
index de703a7..988267c 100644
--- a/kernel/locking/locktorture.c
+++ b/kernel/locking/locktorture.c
@@ -361,7 +361,7 @@ static void lock_torture_cleanup(void)
 {
 	int i;
 
-	if (torture_cleanup())
+	if (torture_cleanup_begin())
 		return;
 
 	if (writer_tasks) {
@@ -384,6 +384,7 @@ static void lock_torture_cleanup(void)
 	else
 		lock_torture_print_module_parms(cur_ops,
 						"End of test: SUCCESS");
+	torture_cleanup_end();
 }
 
 static int __init lock_torture_init(void)
diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
index 948a769..57a2792 100644
--- a/kernel/rcu/rcutorture.c
+++ b/kernel/rcu/rcutorture.c
@@ -1418,7 +1418,7 @@ rcu_torture_cleanup(void)
 	int i;
 
 	rcutorture_record_test_transition();
-	if (torture_cleanup()) {
+	if (torture_cleanup_begin()) {
 		if (cur_ops->cb_barrier != NULL)
 			cur_ops->cb_barrier();
 		return;
@@ -1468,6 +1468,7 @@ rcu_torture_cleanup(void)
 					       "End of test: RCU_HOTPLUG");
 	else
 		rcu_torture_print_module_parms(cur_ops, "End of test: SUCCESS");
+	torture_cleanup_end();
 }
 
 #ifdef CONFIG_DEBUG_OBJECTS_RCU_HEAD
diff --git a/kernel/torture.c b/kernel/torture.c
index d600af2..07a5c3d 100644
--- a/kernel/torture.c
+++ b/kernel/torture.c
@@ -635,8 +635,13 @@ EXPORT_SYMBOL_GPL(torture_init_end);
  *
  * This must be called before the caller starts shutting down its own
  * kthreads.
+ *
+ * Both torture_cleanup_begin() and torture_cleanup_end() must be paired,
+ * in order to correctly perform the cleanup. They are separated because
+ * threads might still need to reference the torture_type variable, thus
+ * nullify it only after completing all other relevant calls.
  */
-bool torture_cleanup(void)
+bool torture_cleanup_begin(void)
 {
 	mutex_lock(&fullstop_mutex);
 	if (ACCESS_ONCE(fullstop) == FULLSTOP_SHUTDOWN) {
@@ -651,12 +656,17 @@ bool torture_cleanup(void)
 	torture_shuffle_cleanup();
 	torture_stutter_cleanup();
 	torture_onoff_cleanup();
+	return false;
+}
+EXPORT_SYMBOL_GPL(torture_cleanup_begin);
+
+void torture_cleanup_end(void)
+{
 	mutex_lock(&fullstop_mutex);
 	torture_type = NULL;
 	mutex_unlock(&fullstop_mutex);
-	return false;
 }
-EXPORT_SYMBOL_GPL(torture_cleanup);
+EXPORT_SYMBOL_GPL(torture_cleanup_end);
 
 /*
  * Is it time for the current torture test to stop?
-- 
1.8.4.5

* [PATCH 7/9] locktorture: Add infrastructure for torturing read locks
  2014-09-12  3:40 [PATCH -tip 0/9] locktorture: Improve and expand lock torturing Davidlohr Bueso
                   ` (5 preceding siblings ...)
  2014-09-12  3:40 ` [PATCH 6/9] torture: Address race in module cleanup Davidlohr Bueso
@ 2014-09-12  4:40 ` Davidlohr Bueso
  2014-09-12 16:06   ` Paul E. McKenney
  2014-09-12  4:41 ` [PATCH 8/9] locktorture: Support rwsems Davidlohr Bueso
  2014-09-12  4:42 ` [PATCH 9/9] locktorture: Introduce torture context Davidlohr Bueso
  8 siblings, 1 reply; 27+ messages in thread
From: Davidlohr Bueso @ 2014-09-12  4:40 UTC (permalink / raw)
  To: paulmck; +Cc: peterz, mingo, linux-kernel, dave, dbueso

Most of it is based on what we already have for writers. This allows
readers to be very independent (and thus configurable), enabling
future module parameters to control things such as rw distribution.
Furthermore, readers have their own delaying function, allowing us
to test different rw critical region latencies, and to stress locking
internals. Similarly, statistics, for now, only track the number of
lock acquisitions -- as opposed to writers, readers have no failure
detection.

In addition, introduce a new nreaders_stress module parameter. The
default number of readers is the same as the number of writer threads;
if neither parameter is specified, both default to the number of online
CPUs. Writer threads are interleaved with readers. Documentation is
updated accordingly.
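
A lock flavor opts in simply by providing the read callbacks in its
lock_torture_ops; flavors that leave them NULL (all existing ones,
updated below) stay writer-only, and no reader kthreads or reader
statistics are created for them. As a minimal sketch of what a
read/write-capable flavor provides -- the "foo" names below are
purely illustrative and do not exist in this series; the rwsem
flavor in the next patch is the real instance:

static int torture_foo_write_lock(void);
static void torture_foo_write_delay(struct torture_random_state *trsp);
static void torture_foo_write_unlock(void);
static int torture_foo_read_lock(void);
static void torture_foo_read_delay(struct torture_random_state *trsp);
static void torture_foo_read_unlock(void);

static struct lock_torture_ops foo_lock_ops = {
	.writelock	= torture_foo_write_lock,
	.write_delay	= torture_foo_write_delay,
	.writeunlock	= torture_foo_write_unlock,
	.readlock	= torture_foo_read_lock,	/* non-NULL: readers will run */
	.read_delay	= torture_foo_read_delay,
	.readunlock	= torture_foo_read_unlock,
	.name		= "foo_lock"
};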

Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
---
 Documentation/locking/locktorture.txt |  16 +++-
 kernel/locking/locktorture.c          | 176 ++++++++++++++++++++++++++++++----
 2 files changed, 168 insertions(+), 24 deletions(-)

diff --git a/Documentation/locking/locktorture.txt b/Documentation/locking/locktorture.txt
index 6b1e7ca..1bdeb71 100644
--- a/Documentation/locking/locktorture.txt
+++ b/Documentation/locking/locktorture.txt
@@ -29,6 +29,11 @@ nwriters_stress   Number of kernel threads that will stress exclusive lock
 		  ownership (writers). The default value is twice the number
 		  of online CPUs.
 
+nreaders_stress   Number of kernel threads that will stress shared lock
+		  ownership (readers). The default is the same number of writer
+		  threads. If the user did not specify nwriters_stress, then
+		  both readers and writers default to the number of online CPUs.
+
 torture_type	  Type of lock to torture. By default, only spinlocks will
 		  be tortured. This module can torture the following locks,
 		  with string values as follows:
@@ -95,15 +100,18 @@ STATISTICS
 Statistics are printed in the following format:
 
 spin_lock-torture: Writes:  Total: 93746064  Max/Min: 0/0   Fail: 0
-   (A)				   (B)		  (C)	       (D)
+   (A)		    (B)		   (C)		  (D)	       (E)
 
 (A): Lock type that is being tortured -- torture_type parameter.
 
-(B): Number of times the lock was acquired.
+(B): Number of writer lock acquisitions. If dealing with a read/write primitive
+     a second "Reads" statistics line is printed.
+
+(C): Number of times the lock was acquired.
 
-(C): Min and max number of times threads failed to acquire the lock.
+(D): Min and max number of times threads failed to acquire the lock.
 
-(D): true/false values if there were errors acquiring the lock. This should
+(E): true/false values if there were errors acquiring the lock. This should
      -only- be positive if there is a bug in the locking primitive's
      implementation. Otherwise a lock should never fail (ie: spin_lock()).
      Of course, the same applies for (C), above. A dummy example of this is
diff --git a/kernel/locking/locktorture.c b/kernel/locking/locktorture.c
index 988267c..c1073d7 100644
--- a/kernel/locking/locktorture.c
+++ b/kernel/locking/locktorture.c
@@ -52,6 +52,8 @@ MODULE_AUTHOR("Paul E. McKenney <paulmck@us.ibm.com>");
 
 torture_param(int, nwriters_stress, -1,
 	     "Number of write-locking stress-test threads");
+torture_param(int, nreaders_stress, -1,
+	     "Number of read-locking stress-test threads");
 torture_param(int, onoff_holdoff, 0, "Time after boot before CPU hotplugs (s)");
 torture_param(int, onoff_interval, 0,
 	     "Time between CPU hotplugs (s), 0=disable");
@@ -74,15 +76,19 @@ static atomic_t n_lock_torture_errors;
 
 static struct task_struct *stats_task;
 static struct task_struct **writer_tasks;
+static struct task_struct **reader_tasks;
 
 static int nrealwriters_stress;
 static bool lock_is_write_held;
+static int nrealreaders_stress;
+static bool lock_is_read_held;
 
 struct lock_stress_stats {
 	long n_lock_fail;
 	long n_lock_acquired;
 };
 static struct lock_stress_stats *lwsa; /* writer statistics */
+static struct lock_stress_stats *lrsa; /* reader statistics */
 
 #if defined(MODULE)
 #define LOCKTORTURE_RUNNABLE_INIT 1
@@ -104,6 +110,9 @@ struct lock_torture_ops {
 	int (*writelock)(void);
 	void (*write_delay)(struct torture_random_state *trsp);
 	void (*writeunlock)(void);
+	int (*readlock)(void);
+	void (*read_delay)(struct torture_random_state *trsp);
+	void (*readunlock)(void);
 	unsigned long flags;
 	const char *name;
 };
@@ -142,6 +151,9 @@ static struct lock_torture_ops lock_busted_ops = {
 	.writelock	= torture_lock_busted_write_lock,
 	.write_delay	= torture_lock_busted_write_delay,
 	.writeunlock	= torture_lock_busted_write_unlock,
+	.readlock       = NULL,
+	.read_delay     = NULL,
+	.readunlock     = NULL,
 	.name		= "lock_busted"
 };
 
@@ -182,6 +194,9 @@ static struct lock_torture_ops spin_lock_ops = {
 	.writelock	= torture_spin_lock_write_lock,
 	.write_delay	= torture_spin_lock_write_delay,
 	.writeunlock	= torture_spin_lock_write_unlock,
+	.readlock       = NULL,
+	.read_delay     = NULL,
+	.readunlock     = NULL,
 	.name		= "spin_lock"
 };
 
@@ -205,6 +220,9 @@ static struct lock_torture_ops spin_lock_irq_ops = {
 	.writelock	= torture_spin_lock_write_lock_irq,
 	.write_delay	= torture_spin_lock_write_delay,
 	.writeunlock	= torture_lock_spin_write_unlock_irq,
+	.readlock       = NULL,
+	.read_delay     = NULL,
+	.readunlock     = NULL,
 	.name		= "spin_lock_irq"
 };
 
@@ -241,6 +259,9 @@ static struct lock_torture_ops mutex_lock_ops = {
 	.writelock	= torture_mutex_lock,
 	.write_delay	= torture_mutex_delay,
 	.writeunlock	= torture_mutex_unlock,
+	.readlock       = NULL,
+	.read_delay     = NULL,
+	.readunlock     = NULL,
 	.name		= "mutex_lock"
 };
 
@@ -274,28 +295,57 @@ static int lock_torture_writer(void *arg)
 }
 
 /*
+ * Lock torture reader kthread.  Repeatedly acquires and releases
+ * the reader lock.
+ */
+static int lock_torture_reader(void *arg)
+{
+	struct lock_stress_stats *lrsp = arg;
+	static DEFINE_TORTURE_RANDOM(rand);
+
+	VERBOSE_TOROUT_STRING("lock_torture_reader task started");
+	set_user_nice(current, MAX_NICE);
+
+	do {
+		if ((torture_random(&rand) & 0xfffff) == 0)
+			schedule_timeout_uninterruptible(1);
+		cur_ops->readlock();
+		lock_is_read_held = 1;
+		lrsp->n_lock_acquired++;
+		cur_ops->read_delay(&rand);
+		lock_is_read_held = 0;
+		cur_ops->readunlock();
+		stutter_wait("lock_torture_reader");
+	} while (!torture_must_stop());
+	torture_kthread_stopping("lock_torture_reader");
+	return 0;
+}
+
+/*
  * Create an lock-torture-statistics message in the specified buffer.
  */
-static void lock_torture_printk(char *page)
+static void __torture_print_stats(char *page,
+				  struct lock_stress_stats *statp, bool write)
 {
 	bool fail = 0;
-	int i;
+	int i, n_stress;
 	long max = 0;
-	long min = lwsa[0].n_lock_acquired;
+	long min = statp[0].n_lock_acquired;
 	long long sum = 0;
 
-	for (i = 0; i < nrealwriters_stress; i++) {
-		if (lwsa[i].n_lock_fail)
+	n_stress = write ? nrealwriters_stress : nrealreaders_stress;
+	for (i = 0; i < n_stress; i++) {
+		if (statp[i].n_lock_fail)
 			fail = true;
-		sum += lwsa[i].n_lock_acquired;
-		if (max < lwsa[i].n_lock_fail)
-			max = lwsa[i].n_lock_fail;
-		if (min > lwsa[i].n_lock_fail)
-			min = lwsa[i].n_lock_fail;
+		sum += statp[i].n_lock_acquired;
+		if (max < statp[i].n_lock_fail)
+			max = statp[i].n_lock_fail;
+		if (min > statp[i].n_lock_fail)
+			min = statp[i].n_lock_fail;
 	}
-	page += sprintf(page, "%s%s ", torture_type, TORTURE_FLAG);
 	page += sprintf(page,
-			"Writes:  Total: %lld  Max/Min: %ld/%ld %s  Fail: %d %s\n",
+			"%s:  Total: %lld  Max/Min: %ld/%ld %s  Fail: %d %s\n",
+			write ? "Writes" : "Reads ",
 			sum, max, min, max / 2 > min ? "???" : "",
 			fail, fail ? "!!!" : "");
 	if (fail)
@@ -315,15 +365,32 @@ static void lock_torture_stats_print(void)
 	int size = nrealwriters_stress * 200 + 8192;
 	char *buf;
 
+	if (cur_ops->readlock)
+		size += nrealreaders_stress * 200 + 8192;
+
 	buf = kmalloc(size, GFP_KERNEL);
 	if (!buf) {
 		pr_err("lock_torture_stats_print: Out of memory, need: %d",
 		       size);
 		return;
 	}
-	lock_torture_printk(buf);
+
+	__torture_print_stats(buf, lwsa, true);
 	pr_alert("%s", buf);
 	kfree(buf);
+
+	if (cur_ops->readlock) {
+		buf = kmalloc(size, GFP_KERNEL);
+		if (!buf) {
+			pr_err("lock_torture_stats_print: Out of memory, need: %d",
+			       size);
+			return;
+		}
+
+		__torture_print_stats(buf, lrsa, false);
+		pr_alert("%s", buf);
+		kfree(buf);
+	}
 }
 
 /*
@@ -350,10 +417,10 @@ lock_torture_print_module_parms(struct lock_torture_ops *cur_ops,
 				const char *tag)
 {
 	pr_alert("%s" TORTURE_FLAG
-		 "--- %s%s: nwriters_stress=%d stat_interval=%d verbose=%d shuffle_interval=%d stutter=%d shutdown_secs=%d onoff_interval=%d onoff_holdoff=%d\n",
+		 "--- %s%s: nwriters_stress=%d nreaders_stress=%d stat_interval=%d verbose=%d shuffle_interval=%d stutter=%d shutdown_secs=%d onoff_interval=%d onoff_holdoff=%d\n",
 		 torture_type, tag, debug_lock ? " [debug]": "",
-		 nrealwriters_stress, stat_interval, verbose,
-		 shuffle_interval, stutter, shutdown_secs,
+		 nrealwriters_stress, nrealreaders_stress, stat_interval,
+		 verbose, shuffle_interval, stutter, shutdown_secs,
 		 onoff_interval, onoff_holdoff);
 }
 
@@ -372,6 +439,14 @@ static void lock_torture_cleanup(void)
 		writer_tasks = NULL;
 	}
 
+	if (reader_tasks) {
+		for (i = 0; i < nrealreaders_stress; i++)
+			torture_stop_kthread(lock_torture_reader,
+					     reader_tasks[i]);
+		kfree(reader_tasks);
+		reader_tasks = NULL;
+	}
+
 	torture_stop_kthread(lock_torture_stats, stats_task);
 	lock_torture_stats_print();  /* -After- the stats thread is stopped! */
 
@@ -389,7 +464,7 @@ static void lock_torture_cleanup(void)
 
 static int __init lock_torture_init(void)
 {
-	int i;
+	int i, j;
 	int firsterr = 0;
 	static struct lock_torture_ops *torture_ops[] = {
 		&lock_busted_ops, &spin_lock_ops, &spin_lock_irq_ops, &mutex_lock_ops,
@@ -430,7 +505,6 @@ static int __init lock_torture_init(void)
 	if (strncmp(torture_type, "spin", 4) == 0)
 		debug_lock = true;
 #endif
-	lock_torture_print_module_parms(cur_ops, "Start of test");
 
 	/* Initialize the statistics so that each run gets its own numbers. */
 
@@ -446,8 +520,37 @@ static int __init lock_torture_init(void)
 		lwsa[i].n_lock_acquired = 0;
 	}
 
-	/* Start up the kthreads. */
+	if (cur_ops->readlock) {
+		if (nreaders_stress >= 0)
+			nrealreaders_stress = nreaders_stress;
+		else {
+			/*
+			 * By default distribute evenly the number of
+			 * readers and writers. We still run the same number
+			 * of threads as the writer-only locks default.
+			 */
+			if (nwriters_stress < 0) /* user doesn't care */
+				nrealwriters_stress = num_online_cpus();
+			nrealreaders_stress = nrealwriters_stress;
+		}
+
+		lock_is_read_held = 0;
+		lrsa = kmalloc(sizeof(*lrsa) * nrealreaders_stress, GFP_KERNEL);
+		if (lrsa == NULL) {
+			VERBOSE_TOROUT_STRING("lrsa: Out of memory");
+			firsterr = -ENOMEM;
+			kfree(lwsa);
+			goto unwind;
+		}
 
+		for (i = 0; i < nrealreaders_stress; i++) {
+			lrsa[i].n_lock_fail = 0;
+			lrsa[i].n_lock_acquired = 0;
+		}
+	}
+	lock_torture_print_module_parms(cur_ops, "Start of test");
+
+	/* Prepare torture context. */
 	if (onoff_interval > 0) {
 		firsterr = torture_onoff_init(onoff_holdoff * HZ,
 					      onoff_interval * HZ);
@@ -478,11 +581,44 @@ static int __init lock_torture_init(void)
 		firsterr = -ENOMEM;
 		goto unwind;
 	}
-	for (i = 0; i < nrealwriters_stress; i++) {
+
+	if (cur_ops->readlock) {
+		reader_tasks = kzalloc(nrealreaders_stress * sizeof(reader_tasks[0]),
+				       GFP_KERNEL);
+		if (reader_tasks == NULL) {
+			VERBOSE_TOROUT_ERRSTRING("reader_tasks: Out of memory");
+			firsterr = -ENOMEM;
+			goto unwind;
+		}
+	}
+
+	/*
+	 * Create the kthreads and start torturing (oh, those poor little locks).
+	 *
+	 * TODO: Note that we interleave writers with readers, giving writers a
+	 * slight advantage, by creating its kthread first. This can be modified
+	 * for very specific needs, or even let the user choose the policy, if
+	 * ever wanted.
+	 */
+	for (i = 0, j = 0; i < nrealwriters_stress ||
+		    j < nrealreaders_stress; i++, j++) {
+		if (i >= nrealwriters_stress)
+			goto create_reader;
+
+		/* Create writer. */
 		firsterr = torture_create_kthread(lock_torture_writer, &lwsa[i],
 						  writer_tasks[i]);
 		if (firsterr)
 			goto unwind;
+
+	create_reader:
+		if (cur_ops->readlock == NULL || (j >= nrealreaders_stress))
+			continue;
+		/* Create reader. */
+		firsterr = torture_create_kthread(lock_torture_reader, &lrsa[j],
+						  reader_tasks[j]);
+		if (firsterr)
+			goto unwind;
 	}
 	if (stat_interval > 0) {
 		firsterr = torture_create_kthread(lock_torture_stats, NULL,
-- 
1.8.4.5

* [PATCH 8/9] locktorture: Support rwsems
  2014-09-12  3:40 [PATCH -tip 0/9] locktorture: Improve and expand lock torturing Davidlohr Bueso
                   ` (6 preceding siblings ...)
  2014-09-12  4:40 ` [PATCH 7/9] locktorture: Add infrastructure for torturing read locks Davidlohr Bueso
@ 2014-09-12  4:41 ` Davidlohr Bueso
  2014-09-12  7:37   ` Peter Zijlstra
  2014-09-12 18:07   ` Paul E. McKenney
  2014-09-12  4:42 ` [PATCH 9/9] locktorture: Introduce torture context Davidlohr Bueso
  8 siblings, 2 replies; 27+ messages in thread
From: Davidlohr Bueso @ 2014-09-12  4:41 UTC (permalink / raw)
  To: paulmck; +Cc: peterz, mingo, linux-kernel, dave, dbueso

We can easily do so with our new reader lock support. Just as an
arbitrary design default, readers have higher (5x) critical region
latencies than writers: 50 ms and 10 ms, respectively, in the common
case (the rare long delays are 200 ms and 1000 ms, respectively).

Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
---
 Documentation/locking/locktorture.txt |  2 ++
 kernel/locking/locktorture.c          | 68 ++++++++++++++++++++++++++++++++++-
 2 files changed, 69 insertions(+), 1 deletion(-)

diff --git a/Documentation/locking/locktorture.txt b/Documentation/locking/locktorture.txt
index 1bdeb71..f7d99e2 100644
--- a/Documentation/locking/locktorture.txt
+++ b/Documentation/locking/locktorture.txt
@@ -47,6 +47,8 @@ torture_type	  Type of lock to torture. By default, only spinlocks will
 
 		     o "mutex_lock": mutex_lock() and mutex_unlock() pairs.
 
+		     o "rwsem_lock": read/write down() and up() semaphore pairs.
+
 torture_runnable  Start locktorture at module init. By default it will begin
 		  once the module is loaded.
 
diff --git a/kernel/locking/locktorture.c b/kernel/locking/locktorture.c
index c1073d7..8480118 100644
--- a/kernel/locking/locktorture.c
+++ b/kernel/locking/locktorture.c
@@ -265,6 +265,71 @@ static struct lock_torture_ops mutex_lock_ops = {
 	.name		= "mutex_lock"
 };
 
+static DECLARE_RWSEM(torture_rwsem);
+static int torture_rwsem_down_write(void) __acquires(torture_rwsem)
+{
+	down_write(&torture_rwsem);
+	return 0;
+}
+
+static void torture_rwsem_write_delay(struct torture_random_state *trsp)
+{
+	const unsigned long longdelay_ms = 100;
+
+	/* We want a long delay occasionally to force massive contention.  */
+	if (!(torture_random(trsp) %
+	      (nrealwriters_stress * 2000 * longdelay_ms)))
+		mdelay(longdelay_ms * 10);
+	else
+		mdelay(longdelay_ms / 10);
+#ifdef CONFIG_PREEMPT
+	if (!(torture_random(trsp) % (nrealwriters_stress * 20000)))
+		preempt_schedule();  /* Allow test to be preempted. */
+#endif
+}
+
+static void torture_rwsem_up_write(void) __releases(torture_rwsem)
+{
+	up_write(&torture_rwsem);
+}
+
+static int torture_rwsem_down_read(void) __acquires(torture_rwsem)
+{
+	down_read(&torture_rwsem);
+	return 0;
+}
+
+static void torture_rwsem_read_delay(struct torture_random_state *trsp)
+{
+	const unsigned long longdelay_ms = 100;
+
+	/* We want a long delay occasionally to force massive contention.  */
+	if (!(torture_random(trsp) %
+	      (nrealwriters_stress * 2000 * longdelay_ms)))
+		mdelay(longdelay_ms * 2);
+	else
+		mdelay(longdelay_ms / 2);
+#ifdef CONFIG_PREEMPT
+	if (!(torture_random(trsp) % (nrealreaders_stress * 20000)))
+		preempt_schedule();  /* Allow test to be preempted. */
+#endif
+}
+
+static void torture_rwsem_up_read(void) __releases(torture_rwsem)
+{
+	up_read(&torture_rwsem);
+}
+
+static struct lock_torture_ops rwsem_lock_ops = {
+	.writelock	= torture_rwsem_down_write,
+	.write_delay	= torture_rwsem_write_delay,
+	.writeunlock	= torture_rwsem_up_write,
+	.readlock       = torture_rwsem_down_read,
+	.read_delay     = torture_rwsem_read_delay,
+	.readunlock     = torture_rwsem_up_read,
+	.name		= "rwsem_lock"
+};
+
 /*
  * Lock torture writer kthread.  Repeatedly acquires and releases
  * the lock, checking for duplicate acquisitions.
@@ -467,7 +532,8 @@ static int __init lock_torture_init(void)
 	int i, j;
 	int firsterr = 0;
 	static struct lock_torture_ops *torture_ops[] = {
-		&lock_busted_ops, &spin_lock_ops, &spin_lock_irq_ops, &mutex_lock_ops,
+		&lock_busted_ops, &spin_lock_ops, &spin_lock_irq_ops,
+		&mutex_lock_ops, &rwsem_lock_ops,
 	};
 
 	if (!torture_init_begin(torture_type, verbose, &torture_runnable))
-- 
1.8.4.5

* [PATCH 9/9] locktorture: Introduce torture context
  2014-09-12  3:40 [PATCH -tip 0/9] locktorture: Improve and expand lock torturing Davidlohr Bueso
                   ` (7 preceding siblings ...)
  2014-09-12  4:41 ` [PATCH 8/9] locktorture: Support rwsems Davidlohr Bueso
@ 2014-09-12  4:42 ` Davidlohr Bueso
  8 siblings, 0 replies; 27+ messages in thread
From: Davidlohr Bueso @ 2014-09-12  4:42 UTC (permalink / raw)
  To: paulmck; +Cc: peterz, mingo, linux-kernel, dave, dbueso

The number of global variables is getting pretty ugly. Group the
variables related to the test's execution (ie: not the module
parameters) in a new context structure.

Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
---
 kernel/locking/locktorture.c | 161 ++++++++++++++++++++++---------------------
 1 file changed, 82 insertions(+), 79 deletions(-)

diff --git a/kernel/locking/locktorture.c b/kernel/locking/locktorture.c
index 8480118..540d5df 100644
--- a/kernel/locking/locktorture.c
+++ b/kernel/locking/locktorture.c
@@ -66,29 +66,22 @@ torture_param(int, stutter, 5, "Number of jiffies to run/halt test, 0=disable");
 torture_param(bool, verbose, true,
 	     "Enable verbose debugging printk()s");
 
-static bool debug_lock = false;
 static char *torture_type = "spin_lock";
 module_param(torture_type, charp, 0444);
 MODULE_PARM_DESC(torture_type,
 		 "Type of lock to torture (spin_lock, spin_lock_irq, mutex_lock, ...)");
 
-static atomic_t n_lock_torture_errors;
-
 static struct task_struct *stats_task;
 static struct task_struct **writer_tasks;
 static struct task_struct **reader_tasks;
 
-static int nrealwriters_stress;
 static bool lock_is_write_held;
-static int nrealreaders_stress;
 static bool lock_is_read_held;
 
 struct lock_stress_stats {
 	long n_lock_fail;
 	long n_lock_acquired;
 };
-static struct lock_stress_stats *lwsa; /* writer statistics */
-static struct lock_stress_stats *lrsa; /* reader statistics */
 
 #if defined(MODULE)
 #define LOCKTORTURE_RUNNABLE_INIT 1
@@ -117,8 +110,18 @@ struct lock_torture_ops {
 	const char *name;
 };
 
-static struct lock_torture_ops *cur_ops;
-
+struct lock_torture_cxt {
+	int nrealwriters_stress;
+	int nrealreaders_stress;
+	bool debug_lock;
+	atomic_t n_lock_torture_errors;
+	struct lock_torture_ops *cur_ops;
+	struct lock_stress_stats *lwsa; /* writer statistics */
+	struct lock_stress_stats *lrsa; /* reader statistics */
+};
+static struct lock_torture_cxt cxt = { 0, 0, false,
+				       ATOMIC_INIT(0),
+				       NULL, NULL};
 /*
  * Definitions for lock torture testing.
  */
@@ -134,10 +137,10 @@ static void torture_lock_busted_write_delay(struct torture_random_state *trsp)
 
 	/* We want a long delay occasionally to force massive contention.  */
 	if (!(torture_random(trsp) %
-	      (nrealwriters_stress * 2000 * longdelay_us)))
+	      (cxt.nrealwriters_stress * 2000 * longdelay_us)))
 		mdelay(longdelay_us);
 #ifdef CONFIG_PREEMPT
-	if (!(torture_random(trsp) % (nrealwriters_stress * 20000)))
+	if (!(torture_random(trsp) % (cxt.nrealwriters_stress * 20000)))
 		preempt_schedule();  /* Allow test to be preempted. */
 #endif
 }
@@ -174,13 +177,13 @@ static void torture_spin_lock_write_delay(struct torture_random_state *trsp)
 	 * we want a long delay occasionally to force massive contention.
 	 */
 	if (!(torture_random(trsp) %
-	      (nrealwriters_stress * 2000 * longdelay_us)))
+	      (cxt.nrealwriters_stress * 2000 * longdelay_us)))
 		mdelay(longdelay_us);
 	if (!(torture_random(trsp) %
-	      (nrealwriters_stress * 2 * shortdelay_us)))
+	      (cxt.nrealwriters_stress * 2 * shortdelay_us)))
 		udelay(shortdelay_us);
 #ifdef CONFIG_PREEMPT
-	if (!(torture_random(trsp) % (nrealwriters_stress * 20000)))
+	if (!(torture_random(trsp) % (cxt.nrealwriters_stress * 20000)))
 		preempt_schedule();  /* Allow test to be preempted. */
 #endif
 }
@@ -206,14 +209,14 @@ __acquires(torture_spinlock_irq)
 	unsigned long flags;
 
 	spin_lock_irqsave(&torture_spinlock, flags);
-	cur_ops->flags = flags;
+	cxt.cur_ops->flags = flags;
 	return 0;
 }
 
 static void torture_lock_spin_write_unlock_irq(void)
 __releases(torture_spinlock)
 {
-	spin_unlock_irqrestore(&torture_spinlock, cur_ops->flags);
+	spin_unlock_irqrestore(&torture_spinlock, cxt.cur_ops->flags);
 }
 
 static struct lock_torture_ops spin_lock_irq_ops = {
@@ -240,12 +243,12 @@ static void torture_mutex_delay(struct torture_random_state *trsp)
 
 	/* We want a long delay occasionally to force massive contention.  */
 	if (!(torture_random(trsp) %
-	      (nrealwriters_stress * 2000 * longdelay_ms)))
+	      (cxt.nrealwriters_stress * 2000 * longdelay_ms)))
 		mdelay(longdelay_ms * 5);
 	else
 		mdelay(longdelay_ms / 5);
 #ifdef CONFIG_PREEMPT
-	if (!(torture_random(trsp) % (nrealwriters_stress * 20000)))
+	if (!(torture_random(trsp) % (cxt.nrealwriters_stress * 20000)))
 		preempt_schedule();  /* Allow test to be preempted. */
 #endif
 }
@@ -278,12 +281,12 @@ static void torture_rwsem_write_delay(struct torture_random_state *trsp)
 
 	/* We want a long delay occasionally to force massive contention.  */
 	if (!(torture_random(trsp) %
-	      (nrealwriters_stress * 2000 * longdelay_ms)))
+	      (cxt.nrealwriters_stress * 2000 * longdelay_ms)))
 		mdelay(longdelay_ms * 10);
 	else
 		mdelay(longdelay_ms / 10);
 #ifdef CONFIG_PREEMPT
-	if (!(torture_random(trsp) % (nrealwriters_stress * 20000)))
+	if (!(torture_random(trsp) % (cxt.nrealwriters_stress * 20000)))
 		preempt_schedule();  /* Allow test to be preempted. */
 #endif
 }
@@ -305,12 +308,12 @@ static void torture_rwsem_read_delay(struct torture_random_state *trsp)
 
 	/* We want a long delay occasionally to force massive contention.  */
 	if (!(torture_random(trsp) %
-	      (nrealwriters_stress * 2000 * longdelay_ms)))
+	      (cxt.nrealwriters_stress * 2000 * longdelay_ms)))
 		mdelay(longdelay_ms * 2);
 	else
 		mdelay(longdelay_ms / 2);
 #ifdef CONFIG_PREEMPT
-	if (!(torture_random(trsp) % (nrealreaders_stress * 20000)))
+	if (!(torture_random(trsp) % (cxt.nrealreaders_stress * 20000)))
 		preempt_schedule();  /* Allow test to be preempted. */
 #endif
 }
@@ -345,14 +348,14 @@ static int lock_torture_writer(void *arg)
 	do {
 		if ((torture_random(&rand) & 0xfffff) == 0)
 			schedule_timeout_uninterruptible(1);
-		cur_ops->writelock();
+		cxt.cur_ops->writelock();
 		if (WARN_ON_ONCE(lock_is_write_held))
 			lwsp->n_lock_fail++;
 		lock_is_write_held = 1;
 		lwsp->n_lock_acquired++;
-		cur_ops->write_delay(&rand);
+		cxt.cur_ops->write_delay(&rand);
 		lock_is_write_held = 0;
-		cur_ops->writeunlock();
+		cxt.cur_ops->writeunlock();
 		stutter_wait("lock_torture_writer");
 	} while (!torture_must_stop());
 	torture_kthread_stopping("lock_torture_writer");
@@ -374,12 +377,12 @@ static int lock_torture_reader(void *arg)
 	do {
 		if ((torture_random(&rand) & 0xfffff) == 0)
 			schedule_timeout_uninterruptible(1);
-		cur_ops->readlock();
+		cxt.cur_ops->readlock();
 		lock_is_read_held = 1;
 		lrsp->n_lock_acquired++;
-		cur_ops->read_delay(&rand);
+		cxt.cur_ops->read_delay(&rand);
 		lock_is_read_held = 0;
-		cur_ops->readunlock();
+		cxt.cur_ops->readunlock();
 		stutter_wait("lock_torture_reader");
 	} while (!torture_must_stop());
 	torture_kthread_stopping("lock_torture_reader");
@@ -398,7 +401,7 @@ static void __torture_print_stats(char *page,
 	long min = statp[0].n_lock_acquired;
 	long long sum = 0;
 
-	n_stress = write ? nrealwriters_stress : nrealreaders_stress;
+	n_stress = write ? cxt.nrealwriters_stress : cxt.nrealreaders_stress;
 	for (i = 0; i < n_stress; i++) {
 		if (statp[i].n_lock_fail)
 			fail = true;
@@ -414,7 +417,7 @@ static void __torture_print_stats(char *page,
 			sum, max, min, max / 2 > min ? "???" : "",
 			fail, fail ? "!!!" : "");
 	if (fail)
-		atomic_inc(&n_lock_torture_errors);
+		atomic_inc(&cxt.n_lock_torture_errors);
 }
 
 /*
@@ -427,11 +430,11 @@ static void __torture_print_stats(char *page,
  */
 static void lock_torture_stats_print(void)
 {
-	int size = nrealwriters_stress * 200 + 8192;
+	int size = cxt.nrealwriters_stress * 200 + 8192;
 	char *buf;
 
-	if (cur_ops->readlock)
-		size += nrealreaders_stress * 200 + 8192;
+	if (cxt.cur_ops->readlock)
+		size += cxt.nrealreaders_stress * 200 + 8192;
 
 	buf = kmalloc(size, GFP_KERNEL);
 	if (!buf) {
@@ -440,11 +443,11 @@ static void lock_torture_stats_print(void)
 		return;
 	}
 
-	__torture_print_stats(buf, lwsa, true);
+	__torture_print_stats(buf, cxt.lwsa, true);
 	pr_alert("%s", buf);
 	kfree(buf);
 
-	if (cur_ops->readlock) {
+	if (cxt.cur_ops->readlock) {
 		buf = kmalloc(size, GFP_KERNEL);
 		if (!buf) {
 			pr_err("lock_torture_stats_print: Out of memory, need: %d",
@@ -452,7 +455,7 @@ static void lock_torture_stats_print(void)
 			return;
 		}
 
-		__torture_print_stats(buf, lrsa, false);
+		__torture_print_stats(buf, cxt.lrsa, false);
 		pr_alert("%s", buf);
 		kfree(buf);
 	}
@@ -483,8 +486,8 @@ lock_torture_print_module_parms(struct lock_torture_ops *cur_ops,
 {
 	pr_alert("%s" TORTURE_FLAG
 		 "--- %s%s: nwriters_stress=%d nreaders_stress=%d stat_interval=%d verbose=%d shuffle_interval=%d stutter=%d shutdown_secs=%d onoff_interval=%d onoff_holdoff=%d\n",
-		 torture_type, tag, debug_lock ? " [debug]": "",
-		 nrealwriters_stress, nrealreaders_stress, stat_interval,
+		 torture_type, tag, cxt.debug_lock ? " [debug]": "",
+		 cxt.nrealwriters_stress, cxt.nrealreaders_stress, stat_interval,
 		 verbose, shuffle_interval, stutter, shutdown_secs,
 		 onoff_interval, onoff_holdoff);
 }
@@ -497,7 +500,7 @@ static void lock_torture_cleanup(void)
 		return;
 
 	if (writer_tasks) {
-		for (i = 0; i < nrealwriters_stress; i++)
+		for (i = 0; i < cxt.nrealwriters_stress; i++)
 			torture_stop_kthread(lock_torture_writer,
 					     writer_tasks[i]);
 		kfree(writer_tasks);
@@ -505,7 +508,7 @@ static void lock_torture_cleanup(void)
 	}
 
 	if (reader_tasks) {
-		for (i = 0; i < nrealreaders_stress; i++)
+		for (i = 0; i < cxt.nrealreaders_stress; i++)
 			torture_stop_kthread(lock_torture_reader,
 					     reader_tasks[i]);
 		kfree(reader_tasks);
@@ -515,14 +518,14 @@ static void lock_torture_cleanup(void)
 	torture_stop_kthread(lock_torture_stats, stats_task);
 	lock_torture_stats_print();  /* -After- the stats thread is stopped! */
 
-	if (atomic_read(&n_lock_torture_errors))
-		lock_torture_print_module_parms(cur_ops,
+	if (atomic_read(&cxt.n_lock_torture_errors))
+		lock_torture_print_module_parms(cxt.cur_ops,
 						"End of test: FAILURE");
 	else if (torture_onoff_failures())
-		lock_torture_print_module_parms(cur_ops,
+		lock_torture_print_module_parms(cxt.cur_ops,
 						"End of test: LOCK_HOTPLUG");
 	else
-		lock_torture_print_module_parms(cur_ops,
+		lock_torture_print_module_parms(cxt.cur_ops,
 						"End of test: SUCCESS");
 	torture_cleanup_end();
 }
@@ -541,8 +544,8 @@ static int __init lock_torture_init(void)
 
 	/* Process args and tell the world that the torturer is on the job. */
 	for (i = 0; i < ARRAY_SIZE(torture_ops); i++) {
-		cur_ops = torture_ops[i];
-		if (strcmp(torture_type, cur_ops->name) == 0)
+		cxt.cur_ops = torture_ops[i];
+		if (strcmp(torture_type, cxt.cur_ops->name) == 0)
 			break;
 	}
 	if (i == ARRAY_SIZE(torture_ops)) {
@@ -555,40 +558,40 @@ static int __init lock_torture_init(void)
 		torture_init_end();
 		return -EINVAL;
 	}
-	if (cur_ops->init)
-		cur_ops->init(); /* no "goto unwind" prior to this point!!! */
+	if (cxt.cur_ops->init)
+		cxt.cur_ops->init(); /* no "goto unwind" prior to this point!!! */
 
 	if (nwriters_stress >= 0)
-		nrealwriters_stress = nwriters_stress;
+		cxt.nrealwriters_stress = nwriters_stress;
 	else
-		nrealwriters_stress = 2 * num_online_cpus();
+		cxt.nrealwriters_stress = 2 * num_online_cpus();
 
 #ifdef CONFIG_DEBUG_MUTEXES
 	if (strncmp(torture_type, "mutex", 5) == 0)
-		debug_lock = true;
+		cxt.debug_lock = true;
 #endif
 #ifdef CONFIG_DEBUG_SPINLOCK
 	if (strncmp(torture_type, "spin", 4) == 0)
-		debug_lock = true;
+		cxt.debug_lock = true;
 #endif
 
 	/* Initialize the statistics so that each run gets its own numbers. */
 
 	lock_is_write_held = 0;
-	lwsa = kmalloc(sizeof(*lwsa) * nrealwriters_stress, GFP_KERNEL);
-	if (lwsa == NULL) {
-		VERBOSE_TOROUT_STRING("lwsa: Out of memory");
+	cxt.lwsa = kmalloc(sizeof(*cxt.lwsa) * cxt.nrealwriters_stress, GFP_KERNEL);
+	if (cxt.lwsa == NULL) {
+		VERBOSE_TOROUT_STRING("cxt.lwsa: Out of memory");
 		firsterr = -ENOMEM;
 		goto unwind;
 	}
-	for (i = 0; i < nrealwriters_stress; i++) {
-		lwsa[i].n_lock_fail = 0;
-		lwsa[i].n_lock_acquired = 0;
+	for (i = 0; i < cxt.nrealwriters_stress; i++) {
+		cxt.lwsa[i].n_lock_fail = 0;
+		cxt.lwsa[i].n_lock_acquired = 0;
 	}
 
-	if (cur_ops->readlock) {
+	if (cxt.cur_ops->readlock) {
 		if (nreaders_stress >= 0)
-			nrealreaders_stress = nreaders_stress;
+			cxt.nrealreaders_stress = nreaders_stress;
 		else {
 			/*
 			 * By default distribute evenly the number of
@@ -596,25 +599,25 @@ static int __init lock_torture_init(void)
 			 * of threads as the writer-only locks default.
 			 */
 			if (nwriters_stress < 0) /* user doesn't care */
-				nrealwriters_stress = num_online_cpus();
-			nrealreaders_stress = nrealwriters_stress;
+				cxt.nrealwriters_stress = num_online_cpus();
+			cxt.nrealreaders_stress = cxt.nrealwriters_stress;
 		}
 
 		lock_is_read_held = 0;
-		lrsa = kmalloc(sizeof(*lrsa) * nrealreaders_stress, GFP_KERNEL);
-		if (lrsa == NULL) {
-			VERBOSE_TOROUT_STRING("lrsa: Out of memory");
+		cxt.lrsa = kmalloc(sizeof(*cxt.lrsa) * cxt.nrealreaders_stress, GFP_KERNEL);
+		if (cxt.lrsa == NULL) {
+			VERBOSE_TOROUT_STRING("cxt.lrsa: Out of memory");
 			firsterr = -ENOMEM;
-			kfree(lwsa);
+			kfree(cxt.lwsa);
 			goto unwind;
 		}
 
-		for (i = 0; i < nrealreaders_stress; i++) {
-			lrsa[i].n_lock_fail = 0;
-			lrsa[i].n_lock_acquired = 0;
+		for (i = 0; i < cxt.nrealreaders_stress; i++) {
+			cxt.lrsa[i].n_lock_fail = 0;
+			cxt.lrsa[i].n_lock_acquired = 0;
 		}
 	}
-	lock_torture_print_module_parms(cur_ops, "Start of test");
+	lock_torture_print_module_parms(cxt.cur_ops, "Start of test");
 
 	/* Prepare torture context. */
 	if (onoff_interval > 0) {
@@ -640,7 +643,7 @@ static int __init lock_torture_init(void)
 			goto unwind;
 	}
 
-	writer_tasks = kzalloc(nrealwriters_stress * sizeof(writer_tasks[0]),
+	writer_tasks = kzalloc(cxt.nrealwriters_stress * sizeof(writer_tasks[0]),
 			       GFP_KERNEL);
 	if (writer_tasks == NULL) {
 		VERBOSE_TOROUT_ERRSTRING("writer_tasks: Out of memory");
@@ -648,8 +651,8 @@ static int __init lock_torture_init(void)
 		goto unwind;
 	}
 
-	if (cur_ops->readlock) {
-		reader_tasks = kzalloc(nrealreaders_stress * sizeof(reader_tasks[0]),
+	if (cxt.cur_ops->readlock) {
+		reader_tasks = kzalloc(cxt.nrealreaders_stress * sizeof(reader_tasks[0]),
 				       GFP_KERNEL);
 		if (reader_tasks == NULL) {
 			VERBOSE_TOROUT_ERRSTRING("reader_tasks: Out of memory");
@@ -666,22 +669,22 @@ static int __init lock_torture_init(void)
 	 * for very specific needs, or even let the user choose the policy, if
 	 * ever wanted.
 	 */
-	for (i = 0, j = 0; i < nrealwriters_stress ||
-		    j < nrealreaders_stress; i++, j++) {
-		if (i >= nrealwriters_stress)
+	for (i = 0, j = 0; i < cxt.nrealwriters_stress ||
+		    j < cxt.nrealreaders_stress; i++, j++) {
+		if (i >= cxt.nrealwriters_stress)
 			goto create_reader;
 
 		/* Create writer. */
-		firsterr = torture_create_kthread(lock_torture_writer, &lwsa[i],
+		firsterr = torture_create_kthread(lock_torture_writer, &cxt.lwsa[i],
 						  writer_tasks[i]);
 		if (firsterr)
 			goto unwind;
 
 	create_reader:
-		if (cur_ops->readlock == NULL || (j >= nrealreaders_stress))
+		if (cxt.cur_ops->readlock == NULL || (j >= cxt.nrealreaders_stress))
 			continue;
 		/* Create reader. */
-		firsterr = torture_create_kthread(lock_torture_reader, &lrsa[j],
+		firsterr = torture_create_kthread(lock_torture_reader, &cxt.lrsa[j],
 						  reader_tasks[j]);
 		if (firsterr)
 			goto unwind;
-- 
1.8.4.5




^ permalink raw reply related	[flat|nested] 27+ messages in thread

* Re: [PATCH 2/9] locktorture: Add documentation
  2014-09-12  3:40 ` [PATCH 2/9] locktorture: Add documentation Davidlohr Bueso
@ 2014-09-12  5:28   ` Davidlohr Bueso
  2014-09-13  1:10   ` Randy Dunlap
  1 sibling, 0 replies; 27+ messages in thread
From: Davidlohr Bueso @ 2014-09-12  5:28 UTC (permalink / raw)
  To: paulmck; +Cc: peterz, mingo, linux-kernel, Randy Dunlap

Cc'ing Randy.

On Thu, 2014-09-11 at 20:40 -0700, Davidlohr Bueso wrote:
> Just like Documentation/RCU/torture.txt, begin a document for the
> locktorture module. This module is still pretty green, so I have
> just added some specific sections to the doc (general desc, params,
> usage, etc.). Further development should update the file.
> 
> Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
> ---
>  Documentation/locking/locktorture.txt | 128 ++++++++++++++++++++++++++++++++++
>  1 file changed, 128 insertions(+)
>  create mode 100644 Documentation/locking/locktorture.txt
> 
> diff --git a/Documentation/locking/locktorture.txt b/Documentation/locking/locktorture.txt
> new file mode 100644
> index 0000000..c0ab969
> --- /dev/null
> +++ b/Documentation/locking/locktorture.txt
> @@ -0,0 +1,128 @@
> +Kernel Lock Torture Test Operation
> +
> +CONFIG_LOCK_TORTURE_TEST
> +
> +The CONFIG_LOCK_TORTURE_TEST config option provides a kernel module
> +that runs torture tests on core kernel locking primitives. The kernel
> +module, 'locktorture', may be built after the fact on the running
> +kernel to be tested, if desired. The test periodically outputs status
> +messages via printk(), which can be examined via dmesg (perhaps
> +grepping for "torture").  The test is started when the module is loaded,
> +and stops when the module is unloaded. This program is based on how RCU
> +is tortured, via rcutorture.
> +
> +This torture test consists of creating a number of kernel threads which
> +acquire the lock and hold it for a specific amount of time, thus simulating
> +different critical region behaviors. The amount of contention on the lock
> +can be increased by enlarging this critical region hold time and/or by
> +creating more kthreads.
> +
> +
> +MODULE PARAMETERS
> +
> +This module has the following parameters:
> +
> +
> +	    ** Locktorture-specific **
> +
> +nwriters_stress   Number of kernel threads that will stress exclusive lock
> +		  ownership (writers). The default value is twice the number
> +		  of online CPUs.
> +
> +torture_type	  Type of lock to torture. By default, only spinlocks will
> +		  be tortured. This module can torture the following locks,
> +		  with string values as follows:
> +
> +		     o "lock_busted": Simulates a buggy lock implementation.
> +
> +		     o "spin_lock": spin_lock() and spin_unlock() pairs.
> +
> +		     o "spin_lock_irq": spin_lock_irq() and spin_unlock_irq()
> +					pairs.
> +
> +torture_runnable  Start locktorture at module init. By default it will begin
> +		  once the module is loaded.
> +
> +
> +	    ** Torture-framework (RCU + locking) **
> +
> +shutdown_secs	  The number of seconds to run the test before terminating
> +		  the test and powering off the system.  The default is
> +		  zero, which disables test termination and system shutdown.
> +		  This capability is useful for automated testing.
> +
> +onoff_interval	  The number of seconds between each attempt to execute a
> +		  randomly selected CPU-hotplug operation.  Defaults to
> +		  zero, which disables CPU hotplugging.  In HOTPLUG_CPU=n
> +		  kernels, locktorture will silently refuse to do any
> +		  CPU-hotplug operations regardless of what value is
> +		  specified for onoff_interval.
> +
> +onoff_holdoff	  The number of seconds to wait until starting CPU-hotplug
> +		  operations.  This would normally only be used when
> +		  locktorture was built into the kernel and started
> +		  automatically at boot time, in which case it is useful
> +		  in order to avoid confusing boot-time code with CPUs
> +		  coming and going. This parameter is only useful if
> +		  CONFIG_HOTPLUG_CPU is enabled.
> +
> +stat_interval	  Number of seconds between statistics-related printk()s.
> +		  By default, locktorture will report stats every 60 seconds.
> +		  Setting the interval to zero causes the statistics to
> +		  be printed -only- when the module is unloaded.
> +
> +stutter		  The length of time to run the test before pausing for this
> +		  same period of time.  Defaults to "stutter=5", so as
> +		  to run and pause for (roughly) five-second intervals.
> +		  Specifying "stutter=0" causes the test to run continuously
> +		  without pausing, which is the old default behavior.
> +
> +shuffle_interval  The number of seconds to keep the test threads affinitied
> +		  to a particular subset of the CPUs, defaults to 3 seconds.
> +		  Used in conjunction with test_no_idle_hz.
> +
> +verbose		  Enable verbose debugging printing, via printk(). Enabled
> +		  by default. This extra information is mostly related to
> +		  high-level errors and reports from the main 'torture'
> +		  framework.
> +
> +
> +STATISTICS
> +
> +Statistics are printed in the following format:
> +
> +spin_lock-torture: Writes:  Total: 93746064  Max/Min: 0/0   Fail: 0
> +   (A)				   (B)		  (C)	       (D)
> +
> +(A): Lock type that is being tortured -- torture_type parameter.
> +
> +(B): Number of times the lock was acquired.
> +
> +(C): Min and max number of times threads failed to acquire the lock.
> +
> +(D): true/false flag indicating whether there were errors acquiring the lock.
> +     This should -only- be positive if there is a bug in the locking
> +     primitive's implementation. Otherwise a lock should never fail
> +     (ie: spin_lock()). Of course, the same applies for (C), above. A dummy
> +     example of this is the "lock_busted" type.
> +
> +USAGE
> +
> +The following script may be used to torture locks:
> +
> +	#!/bin/sh
> +
> +	modprobe locktorture
> +	sleep 3600
> +	rmmod locktorture
> +	dmesg | grep torture:
> +
> +The output can be manually inspected for the error flag of "!!!".
> +One could of course create a more elaborate script that automatically
> +checked for such errors.  The "rmmod" command forces a "SUCCESS",
> +"FAILURE", or "RCU_HOTPLUG" indication to be printk()ed.  The first
> +two are self-explanatory, while the last indicates that while there
> +were no locking failures, CPU-hotplug problems were detected.
> +
> +Also see: Documentation/RCU/torture.txt



^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 8/9] locktorture: Support rwsems
  2014-09-12  4:41 ` [PATCH 8/9] locktorture: Support rwsems Davidlohr Bueso
@ 2014-09-12  7:37   ` Peter Zijlstra
  2014-09-12 14:49     ` Davidlohr Bueso
  2014-09-12 18:07   ` Paul E. McKenney
  1 sibling, 1 reply; 27+ messages in thread
From: Peter Zijlstra @ 2014-09-12  7:37 UTC (permalink / raw)
  To: Davidlohr Bueso; +Cc: paulmck, mingo, linux-kernel, dbueso

On Thu, Sep 11, 2014 at 09:41:30PM -0700, Davidlohr Bueso wrote:
> We can easily do so with our new reader lock support. Just an arbitrary
> design default: readers have higher (5x) critical region latencies than
> writers: 50 ms and 10 ms, respectively.
> 

Nice, could you copy/paste this into a rwlock test as well?

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 8/9] locktorture: Support rwsems
  2014-09-12  7:37   ` Peter Zijlstra
@ 2014-09-12 14:49     ` Davidlohr Bueso
  0 siblings, 0 replies; 27+ messages in thread
From: Davidlohr Bueso @ 2014-09-12 14:49 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: paulmck, mingo, linux-kernel

On Fri, 2014-09-12 at 09:37 +0200, Peter Zijlstra wrote:
> On Thu, Sep 11, 2014 at 09:41:30PM -0700, Davidlohr Bueso wrote:
> > We can easily do so with our new reader lock support. Just an arbitrary
> > design default: readers have higher (5x) critical region latencies than
> > writers: 50 ms and 10 ms, respectively.
> > 
> 
> Nice, could you copy/paste this into a rwlock test as well?

Indeed, if folks like what they see in this patchset then adding more
locks is pretty trivial.
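
For rwlock_t, a rough (and untested) sketch along the lines of the rwsem
ops would be about all it takes -- the names and delay values below are
just placeholders:

	static DEFINE_RWLOCK(torture_rwlock);

	static int torture_rwlock_write_lock(void) __acquires(torture_rwlock)
	{
		write_lock(&torture_rwlock);
		return 0;
	}

	static void torture_rwlock_write_delay(struct torture_random_state *trsp)
	{
		/* rwlock_t spins, so keep the critical region short. */
		udelay(10);
	}

	static void torture_rwlock_write_unlock(void) __releases(torture_rwlock)
	{
		write_unlock(&torture_rwlock);
	}

	static int torture_rwlock_read_lock(void) __acquires(torture_rwlock)
	{
		read_lock(&torture_rwlock);
		return 0;
	}

	static void torture_rwlock_read_delay(struct torture_random_state *trsp)
	{
		udelay(5);
	}

	static void torture_rwlock_read_unlock(void) __releases(torture_rwlock)
	{
		read_unlock(&torture_rwlock);
	}

	static struct lock_torture_ops rwlock_lock_ops = {
		.writelock	= torture_rwlock_write_lock,
		.write_delay	= torture_rwlock_write_delay,
		.writeunlock	= torture_rwlock_write_unlock,
		.readlock	= torture_rwlock_read_lock,
		.read_delay	= torture_rwlock_read_delay,
		.readunlock	= torture_rwlock_read_unlock,
		.name		= "rwlock_lock"
	};

plus the new entry in the torture_ops[] array and a one-line addition to
the documentation's torture_type list.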



^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 7/9] locktorture: Add infrastructure for torturing read locks
  2014-09-12  4:40 ` [PATCH 7/9] locktorture: Add infrastructure for torturing read locks Davidlohr Bueso
@ 2014-09-12 16:06   ` Paul E. McKenney
  2014-09-12 18:02     ` Davidlohr Bueso
  0 siblings, 1 reply; 27+ messages in thread
From: Paul E. McKenney @ 2014-09-12 16:06 UTC (permalink / raw)
  To: Davidlohr Bueso; +Cc: peterz, mingo, linux-kernel, dbueso

On Thu, Sep 11, 2014 at 09:40:41PM -0700, Davidlohr Bueso wrote:
> Most of it is based on what we already have for writers. This allows
> readers to be very independent (and thus configurable), enabling
> future module parameters to control things such as rw distribution.
> Furthermore, readers have their own delaying function, allowing us
> to test different rw critical region latencies, and stress locking
> internals. Similarly, statistics will, for now, only track the
> number of lock acquisitions -- as opposed to writers, readers have
> no failure detection.
> 
> In addition, introduce a new nreaders_stress module parameter. The
> default number of readers will be the same as the number of writer threads.
> Writer threads are interleaved with readers. Documentation is updated
> accordingly.

Nice!!!

Conditional fairness checks in the future?  (As in verifying that if
the rwlock in question claims some degree of fairness, trying to break
that guarantee, and contrariwise, if the lock is unfair, making sure
to avoid starvation during the test?)
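
Something as crude as the following might be a starting point for the
starvation side -- just an untested sketch, and the 8x threshold is
completely arbitrary:

	/*
	 * Flag a run in which the luckiest thread got more than eight
	 * times the acquisitions of the unluckiest one.
	 */
	static bool torture_stats_starved(struct lock_stress_stats *statp,
					  int n_stress)
	{
		long max = statp[0].n_lock_acquired;
		long min = statp[0].n_lock_acquired;
		int i;

		for (i = 1; i < n_stress; i++) {
			if (statp[i].n_lock_acquired > max)
				max = statp[i].n_lock_acquired;
			if (statp[i].n_lock_acquired < min)
				min = statp[i].n_lock_acquired;
		}
		return max > 8 * min;
	}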

And one nit below.

							Thanx, Paul

> Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
> ---
>  Documentation/locking/locktorture.txt |  16 +++-
>  kernel/locking/locktorture.c          | 176 ++++++++++++++++++++++++++++++----
>  2 files changed, 168 insertions(+), 24 deletions(-)
> 
> diff --git a/Documentation/locking/locktorture.txt b/Documentation/locking/locktorture.txt
> index 6b1e7ca..1bdeb71 100644
> --- a/Documentation/locking/locktorture.txt
> +++ b/Documentation/locking/locktorture.txt
> @@ -29,6 +29,11 @@ nwriters_stress   Number of kernel threads that will stress exclusive lock
>  		  ownership (writers). The default value is twice the amount
>  		  of online CPUs.
> 
> +nreaders_stress   Number of kernel threads that will stress shared lock
> +		  ownership (readers). The default is the same number of writer
> +		  threads. If the user did not specify nwriters_stress, then
> +		  both readers and writers default to the number of online CPUs.
> +
>  torture_type	  Type of lock to torture. By default, only spinlocks will
>  		  be tortured. This module can torture the following locks,
>  		  with string values as follows:
> @@ -95,15 +100,18 @@ STATISTICS
>  Statistics are printed in the following format:
> 
>  spin_lock-torture: Writes:  Total: 93746064  Max/Min: 0/0   Fail: 0
> -   (A)				   (B)		  (C)	       (D)
> +   (A)		    (B)		   (C)		  (D)	       (E)
> 
>  (A): Lock type that is being tortured -- torture_type parameter.
> 
> -(B): Number of times the lock was acquired.
> +(B): Number of writer lock acquisitions. If dealing with a read/write primitive,
> +     a second "Reads" statistics line is printed.
> +
> +(C): Number of times the lock was acquired.
> 
> -(C): Min and max number of times threads failed to acquire the lock.
> +(D): Min and max number of times threads failed to acquire the lock.
> 
> -(D): true/false values if there were errors acquiring the lock. This should
> +(E): true/false values if there were errors acquiring the lock. This should
>       -only- be positive if there is a bug in the locking primitive's
>       implementation. Otherwise a lock should never fail (ie: spin_lock()).
>       Of course, the same applies for (C), above. A dummy example of this is
> diff --git a/kernel/locking/locktorture.c b/kernel/locking/locktorture.c
> index 988267c..c1073d7 100644
> --- a/kernel/locking/locktorture.c
> +++ b/kernel/locking/locktorture.c
> @@ -52,6 +52,8 @@ MODULE_AUTHOR("Paul E. McKenney <paulmck@us.ibm.com>");
> 
>  torture_param(int, nwriters_stress, -1,
>  	     "Number of write-locking stress-test threads");
> +torture_param(int, nreaders_stress, -1,
> +	     "Number of read-locking stress-test threads");
>  torture_param(int, onoff_holdoff, 0, "Time after boot before CPU hotplugs (s)");
>  torture_param(int, onoff_interval, 0,
>  	     "Time between CPU hotplugs (s), 0=disable");
> @@ -74,15 +76,19 @@ static atomic_t n_lock_torture_errors;
> 
>  static struct task_struct *stats_task;
>  static struct task_struct **writer_tasks;
> +static struct task_struct **reader_tasks;
> 
>  static int nrealwriters_stress;
>  static bool lock_is_write_held;
> +static int nrealreaders_stress;
> +static bool lock_is_read_held;
> 
>  struct lock_stress_stats {
>  	long n_lock_fail;
>  	long n_lock_acquired;
>  };
>  static struct lock_stress_stats *lwsa; /* writer statistics */
> +static struct lock_stress_stats *lrsa; /* reader statistics */
> 
>  #if defined(MODULE)
>  #define LOCKTORTURE_RUNNABLE_INIT 1
> @@ -104,6 +110,9 @@ struct lock_torture_ops {
>  	int (*writelock)(void);
>  	void (*write_delay)(struct torture_random_state *trsp);
>  	void (*writeunlock)(void);
> +	int (*readlock)(void);
> +	void (*read_delay)(struct torture_random_state *trsp);
> +	void (*readunlock)(void);
>  	unsigned long flags;
>  	const char *name;
>  };
> @@ -142,6 +151,9 @@ static struct lock_torture_ops lock_busted_ops = {
>  	.writelock	= torture_lock_busted_write_lock,
>  	.write_delay	= torture_lock_busted_write_delay,
>  	.writeunlock	= torture_lock_busted_write_unlock,
> +	.readlock       = NULL,
> +	.read_delay     = NULL,
> +	.readunlock     = NULL,

C initialization does this already, no need to add the NULL initializers.
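For instance, members omitted from a designated initializer are guaranteed
to be NULL, so this (sketch only) is equivalent:

	static struct lock_torture_ops lock_busted_ops = {
		.writelock	= torture_lock_busted_write_lock,
		.write_delay	= torture_lock_busted_write_delay,
		.writeunlock	= torture_lock_busted_write_unlock,
		/* .readlock, .read_delay and .readunlock are implicitly NULL. */
		.name		= "lock_busted"
	};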

>  	.name		= "lock_busted"
>  };
> 
> @@ -182,6 +194,9 @@ static struct lock_torture_ops spin_lock_ops = {
>  	.writelock	= torture_spin_lock_write_lock,
>  	.write_delay	= torture_spin_lock_write_delay,
>  	.writeunlock	= torture_spin_lock_write_unlock,
> +	.readlock       = NULL,
> +	.read_delay     = NULL,
> +	.readunlock     = NULL,
>  	.name		= "spin_lock"
>  };
> 
> @@ -205,6 +220,9 @@ static struct lock_torture_ops spin_lock_irq_ops = {
>  	.writelock	= torture_spin_lock_write_lock_irq,
>  	.write_delay	= torture_spin_lock_write_delay,
>  	.writeunlock	= torture_lock_spin_write_unlock_irq,
> +	.readlock       = NULL,
> +	.read_delay     = NULL,
> +	.readunlock     = NULL,
>  	.name		= "spin_lock_irq"
>  };
> 
> @@ -241,6 +259,9 @@ static struct lock_torture_ops mutex_lock_ops = {
>  	.writelock	= torture_mutex_lock,
>  	.write_delay	= torture_mutex_delay,
>  	.writeunlock	= torture_mutex_unlock,
> +	.readlock       = NULL,
> +	.read_delay     = NULL,
> +	.readunlock     = NULL,
>  	.name		= "mutex_lock"
>  };
> 
> @@ -274,28 +295,57 @@ static int lock_torture_writer(void *arg)
>  }
> 
>  /*
> + * Lock torture reader kthread.  Repeatedly acquires and releases
> + * the reader lock.
> + */
> +static int lock_torture_reader(void *arg)
> +{
> +	struct lock_stress_stats *lrsp = arg;
> +	static DEFINE_TORTURE_RANDOM(rand);
> +
> +	VERBOSE_TOROUT_STRING("lock_torture_reader task started");
> +	set_user_nice(current, MAX_NICE);
> +
> +	do {
> +		if ((torture_random(&rand) & 0xfffff) == 0)
> +			schedule_timeout_uninterruptible(1);
> +		cur_ops->readlock();
> +		lock_is_read_held = 1;
> +		lrsp->n_lock_acquired++;
> +		cur_ops->read_delay(&rand);
> +		lock_is_read_held = 0;
> +		cur_ops->readunlock();
> +		stutter_wait("lock_torture_reader");
> +	} while (!torture_must_stop());
> +	torture_kthread_stopping("lock_torture_reader");
> +	return 0;
> +}
> +
> +/*
>   * Create an lock-torture-statistics message in the specified buffer.
>   */
> -static void lock_torture_printk(char *page)
> +static void __torture_print_stats(char *page,
> +				  struct lock_stress_stats *statp, bool write)
>  {
>  	bool fail = 0;
> -	int i;
> +	int i, n_stress;
>  	long max = 0;
> -	long min = lwsa[0].n_lock_acquired;
> +	long min = statp[0].n_lock_acquired;
>  	long long sum = 0;
> 
> -	for (i = 0; i < nrealwriters_stress; i++) {
> -		if (lwsa[i].n_lock_fail)
> +	n_stress = write ? nrealwriters_stress : nrealreaders_stress;
> +	for (i = 0; i < n_stress; i++) {
> +		if (statp[i].n_lock_fail)
>  			fail = true;
> -		sum += lwsa[i].n_lock_acquired;
> -		if (max < lwsa[i].n_lock_fail)
> -			max = lwsa[i].n_lock_fail;
> -		if (min > lwsa[i].n_lock_fail)
> -			min = lwsa[i].n_lock_fail;
> +		sum += statp[i].n_lock_acquired;
> +		if (max < statp[i].n_lock_fail)
> +			max = statp[i].n_lock_fail;
> +		if (min > statp[i].n_lock_fail)
> +			min = statp[i].n_lock_fail;
>  	}
> -	page += sprintf(page, "%s%s ", torture_type, TORTURE_FLAG);
>  	page += sprintf(page,
> -			"Writes:  Total: %lld  Max/Min: %ld/%ld %s  Fail: %d %s\n",
> +			"%s:  Total: %lld  Max/Min: %ld/%ld %s  Fail: %d %s\n",
> +			write ? "Writes" : "Reads ",
>  			sum, max, min, max / 2 > min ? "???" : "",
>  			fail, fail ? "!!!" : "");
>  	if (fail)
> @@ -315,15 +365,32 @@ static void lock_torture_stats_print(void)
>  	int size = nrealwriters_stress * 200 + 8192;
>  	char *buf;
> 
> +	if (cur_ops->readlock)
> +		size += nrealreaders_stress * 200 + 8192;
> +
>  	buf = kmalloc(size, GFP_KERNEL);
>  	if (!buf) {
>  		pr_err("lock_torture_stats_print: Out of memory, need: %d",
>  		       size);
>  		return;
>  	}
> -	lock_torture_printk(buf);
> +
> +	__torture_print_stats(buf, lwsa, true);
>  	pr_alert("%s", buf);
>  	kfree(buf);
> +
> +	if (cur_ops->readlock) {
> +		buf = kmalloc(size, GFP_KERNEL);
> +		if (!buf) {
> +			pr_err("lock_torture_stats_print: Out of memory, need: %d",
> +			       size);
> +			return;
> +		}
> +
> +		__torture_print_stats(buf, lrsa, false);
> +		pr_alert("%s", buf);
> +		kfree(buf);
> +	}
>  }
> 
>  /*
> @@ -350,10 +417,10 @@ lock_torture_print_module_parms(struct lock_torture_ops *cur_ops,
>  				const char *tag)
>  {
>  	pr_alert("%s" TORTURE_FLAG
> -		 "--- %s%s: nwriters_stress=%d stat_interval=%d verbose=%d shuffle_interval=%d stutter=%d shutdown_secs=%d onoff_interval=%d onoff_holdoff=%d\n",
> +		 "--- %s%s: nwriters_stress=%d nreaders_stress=%d stat_interval=%d verbose=%d shuffle_interval=%d stutter=%d shutdown_secs=%d onoff_interval=%d onoff_holdoff=%d\n",
>  		 torture_type, tag, debug_lock ? " [debug]": "",
> -		 nrealwriters_stress, stat_interval, verbose,
> -		 shuffle_interval, stutter, shutdown_secs,
> +		 nrealwriters_stress, nrealreaders_stress, stat_interval,
> +		 verbose, shuffle_interval, stutter, shutdown_secs,
>  		 onoff_interval, onoff_holdoff);
>  }
> 
> @@ -372,6 +439,14 @@ static void lock_torture_cleanup(void)
>  		writer_tasks = NULL;
>  	}
> 
> +	if (reader_tasks) {
> +		for (i = 0; i < nrealreaders_stress; i++)
> +			torture_stop_kthread(lock_torture_reader,
> +					     reader_tasks[i]);
> +		kfree(reader_tasks);
> +		reader_tasks = NULL;
> +	}
> +
>  	torture_stop_kthread(lock_torture_stats, stats_task);
>  	lock_torture_stats_print();  /* -After- the stats thread is stopped! */
> 
> @@ -389,7 +464,7 @@ static void lock_torture_cleanup(void)
> 
>  static int __init lock_torture_init(void)
>  {
> -	int i;
> +	int i, j;
>  	int firsterr = 0;
>  	static struct lock_torture_ops *torture_ops[] = {
>  		&lock_busted_ops, &spin_lock_ops, &spin_lock_irq_ops, &mutex_lock_ops,
> @@ -430,7 +505,6 @@ static int __init lock_torture_init(void)
>  	if (strncmp(torture_type, "spin", 4) == 0)
>  		debug_lock = true;
>  #endif
> -	lock_torture_print_module_parms(cur_ops, "Start of test");
> 
>  	/* Initialize the statistics so that each run gets its own numbers. */
> 
> @@ -446,8 +520,37 @@ static int __init lock_torture_init(void)
>  		lwsa[i].n_lock_acquired = 0;
>  	}
> 
> -	/* Start up the kthreads. */
> +	if (cur_ops->readlock) {
> +		if (nreaders_stress >= 0)
> +			nrealreaders_stress = nreaders_stress;
> +		else {
> +			/*
> +			 * By default distribute evenly the number of
> +			 * readers and writers. We still run the same number
> +			 * of threads as the writer-only locks default.
> +			 */
> +			if (nwriters_stress < 0) /* user doesn't care */
> +				nrealwriters_stress = num_online_cpus();
> +			nrealreaders_stress = nrealwriters_stress;
> +		}
> +
> +		lock_is_read_held = 0;
> +		lrsa = kmalloc(sizeof(*lrsa) * nrealreaders_stress, GFP_KERNEL);
> +		if (lrsa == NULL) {
> +			VERBOSE_TOROUT_STRING("lrsa: Out of memory");
> +			firsterr = -ENOMEM;
> +			kfree(lwsa);
> +			goto unwind;
> +		}
> 
> +		for (i = 0; i < nrealreaders_stress; i++) {
> +			lrsa[i].n_lock_fail = 0;
> +			lrsa[i].n_lock_acquired = 0;
> +		}
> +	}
> +	lock_torture_print_module_parms(cur_ops, "Start of test");
> +
> +	/* Prepare torture context. */
>  	if (onoff_interval > 0) {
>  		firsterr = torture_onoff_init(onoff_holdoff * HZ,
>  					      onoff_interval * HZ);
> @@ -478,11 +581,44 @@ static int __init lock_torture_init(void)
>  		firsterr = -ENOMEM;
>  		goto unwind;
>  	}
> -	for (i = 0; i < nrealwriters_stress; i++) {
> +
> +	if (cur_ops->readlock) {
> +		reader_tasks = kzalloc(nrealreaders_stress * sizeof(reader_tasks[0]),
> +				       GFP_KERNEL);
> +		if (reader_tasks == NULL) {
> +			VERBOSE_TOROUT_ERRSTRING("reader_tasks: Out of memory");
> +			firsterr = -ENOMEM;
> +			goto unwind;
> +		}
> +	}
> +
> +	/*
> +	 * Create the kthreads and start torturing (oh, those poor little locks).
> +	 *
> +	 * TODO: Note that we interleave writers with readers, giving writers a
> +	 * slight advantage, by creating their kthreads first. This can be modified
> +	 * for very specific needs, or even let the user choose the policy, if
> +	 * ever wanted.
> +	 */
> +	for (i = 0, j = 0; i < nrealwriters_stress ||
> +		    j < nrealreaders_stress; i++, j++) {
> +		if (i >= nrealwriters_stress)
> +			goto create_reader;
> +
> +		/* Create writer. */
>  		firsterr = torture_create_kthread(lock_torture_writer, &lwsa[i],
>  						  writer_tasks[i]);
>  		if (firsterr)
>  			goto unwind;
> +
> +	create_reader:
> +		if (cur_ops->readlock == NULL || (j >= nrealreaders_stress))
> +			continue;
> +		/* Create reader. */
> +		firsterr = torture_create_kthread(lock_torture_reader, &lrsa[j],
> +						  reader_tasks[j]);
> +		if (firsterr)
> +			goto unwind;
>  	}
>  	if (stat_interval > 0) {
>  		firsterr = torture_create_kthread(lock_torture_stats, NULL,
> -- 
> 1.8.4.5
> 
> 
> 


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 1/9] locktorture: Rename locktorture_runnable parameter
  2014-09-12  3:40 ` [PATCH 1/9] locktorture: Rename locktorture_runnable parameter Davidlohr Bueso
@ 2014-09-12 17:40   ` Paul E. McKenney
  2014-09-12 17:51     ` Paul E. McKenney
  0 siblings, 1 reply; 27+ messages in thread
From: Paul E. McKenney @ 2014-09-12 17:40 UTC (permalink / raw)
  To: Davidlohr Bueso; +Cc: peterz, mingo, linux-kernel, Davidlohr Bueso

On Thu, Sep 11, 2014 at 08:40:16PM -0700, Davidlohr Bueso wrote:
> ... to just 'torture_runnable'. It follows other variable naming
> and is shorter.
> 
> Signed-off-by: Davidlohr Bueso <dbueso@suse.de>

Looks good -- and please see below for the corresponding change to the
locktorture scripting.  (Which I have queued separately after this change.)

> ---
>  kernel/locking/locktorture.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/kernel/locking/locktorture.c b/kernel/locking/locktorture.c
> index 0955b88..8c770b2 100644
> --- a/kernel/locking/locktorture.c
> +++ b/kernel/locking/locktorture.c
> @@ -87,9 +87,9 @@ static struct lock_writer_stress_stats *lwsa;
>  #else
>  #define LOCKTORTURE_RUNNABLE_INIT 0
>  #endif
> -int locktorture_runnable = LOCKTORTURE_RUNNABLE_INIT;
> -module_param(locktorture_runnable, int, 0444);
> -MODULE_PARM_DESC(locktorture_runnable, "Start locktorture at module init");
> +int torture_runnable = LOCKTORTURE_RUNNABLE_INIT;
> +module_param(torture_runnable, int, 0444);
> +MODULE_PARM_DESC(torture_runnable, "Start locktorture at module init");
> 
>  /* Forward reference. */
>  static void lock_torture_cleanup(void);
> @@ -355,7 +355,7 @@ static int __init lock_torture_init(void)
>  		&lock_busted_ops, &spin_lock_ops, &spin_lock_irq_ops,
>  	};
> 
> -	if (!torture_init_begin(torture_type, verbose, &locktorture_runnable))
> +	if (!torture_init_begin(torture_type, verbose, &torture_runnable))
>  		return -EBUSY;
> 
>  	/* Process args and tell the world that the torturer is on the job. */
> -- 
> 1.8.4.5

locktorture: Make torture scripting account for new _runnable name

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>

diff --git a/tools/testing/selftests/rcutorture/configs/lock/ver_functions.sh b/tools/testing/selftests/rcutorture/configs/lock/ver_functions.sh
index 9746ea1cd6c7..252aae618984 100644
--- a/tools/testing/selftests/rcutorture/configs/lock/ver_functions.sh
+++ b/tools/testing/selftests/rcutorture/configs/lock/ver_functions.sh
@@ -38,6 +38,6 @@ per_version_boot_params () {
 	echo $1 `locktorture_param_onoff "$1" "$2"` \
 		locktorture.stat_interval=15 \
 		locktorture.shutdown_secs=$3 \
-		locktorture.locktorture_runnable=1 \
+		locktorture.torture_runnable=1 \
 		locktorture.verbose=1
 }


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* Re: [PATCH 1/9] locktorture: Rename locktorture_runnable parameter
  2014-09-12 17:40   ` Paul E. McKenney
@ 2014-09-12 17:51     ` Paul E. McKenney
  0 siblings, 0 replies; 27+ messages in thread
From: Paul E. McKenney @ 2014-09-12 17:51 UTC (permalink / raw)
  To: Davidlohr Bueso; +Cc: peterz, mingo, linux-kernel, Davidlohr Bueso

On Fri, Sep 12, 2014 at 10:40:26AM -0700, Paul E. McKenney wrote:

[ . . . ]

> locktorture: Make torture scripting account for new _runnable name
> 
> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> 
> diff --git a/tools/testing/selftests/rcutorture/configs/lock/ver_functions.sh b/tools/testing/selftests/rcutorture/configs/lock/ver_functions.sh
> index 9746ea1cd6c7..252aae618984 100644
> --- a/tools/testing/selftests/rcutorture/configs/lock/ver_functions.sh
> +++ b/tools/testing/selftests/rcutorture/configs/lock/ver_functions.sh
> @@ -38,6 +38,6 @@ per_version_boot_params () {
>  	echo $1 `locktorture_param_onoff "$1" "$2"` \
>  		locktorture.stat_interval=15 \
>  		locktorture.shutdown_secs=$3 \
> -		locktorture.locktorture_runnable=1 \
> +		locktorture.torture_runnable=1 \
>  		locktorture.verbose=1
>  }

And I apparently forgot to document locktorture's kernel parameters...

							Thanx, Paul

------------------------------------------------------------------------

locktorture: Document boot/module parameters
    
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>

diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt
index c8b6beb3edda..c04fb60f4cb3 100644
--- a/Documentation/kernel-parameters.txt
+++ b/Documentation/kernel-parameters.txt
@@ -1704,6 +1704,49 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
 	lockd.nlm_udpport=M	[NFS] Assign UDP port.
 			Format: <integer>
 
+	locktorture.nreaders_stress= [KNL]
+			Set the number of locking read-acquisition kthreads.
+			Defaults to being automatically set based on the
+			number of online CPUs.
+
+	locktorture.nwriters_stress= [KNL]
+			Set the number of locking write-acquisition kthreads.
+
+	locktorture.onoff_holdoff= [KNL]
+			Set time (s) after boot for CPU-hotplug testing.
+
+	locktorture.onoff_interval= [KNL]
+			Set time (s) between CPU-hotplug operations, or
+			zero to disable CPU-hotplug testing.
+
+	locktorture.shuffle_interval= [KNL]
+			Set task-shuffle interval (jiffies).  Shuffling
+			tasks allows some CPUs to go into dyntick-idle
+			mode during the locktorture test.
+
+	locktorture.shutdown_secs= [KNL]
+			Set time (s) after boot for system shutdown.  This
+			is useful for hands-off automated testing.
+
+	locktorture.stat_interval= [KNL]
+			Time (s) between statistics printk()s.
+
+	locktorture.stutter= [KNL]
+			Time (s) to stutter testing, for example,
+			specifying five seconds causes the test to run for
+			five seconds, wait for five seconds, and so on.
+			This tests the locking primitive's ability to
+			transition abruptly to and from idle.
+
+	locktorture.torture_runnable= [BOOT]
+			Start locktorture running at boot time.
+
+	locktorture.torture_type= [KNL]
+			Specify the locking implementation to test.
+
+	locktorture.verbose= [KNL]
+			Enable additional printk() statements.
+
 	logibm.irq=	[HW,MOUSE] Logitech Bus Mouse Driver
 			Format: <irq>
 


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* Re: [PATCH 3/9] locktorture: Support mutexes
  2014-09-12  3:40 ` [PATCH 3/9] locktorture: Support mutexes Davidlohr Bueso
@ 2014-09-12 18:02   ` Paul E. McKenney
  2014-09-12 18:56     ` Davidlohr Bueso
  0 siblings, 1 reply; 27+ messages in thread
From: Paul E. McKenney @ 2014-09-12 18:02 UTC (permalink / raw)
  To: Davidlohr Bueso; +Cc: peterz, mingo, linux-kernel, Davidlohr Bueso

On Thu, Sep 11, 2014 at 08:40:18PM -0700, Davidlohr Bueso wrote:
> Add a "mutex_lock" torture test. The main difference with the already
> existing spinlock tests is that the latency of the critical region
> is much larger. We randomly delay for (arbitrarily) either 500 ms or,
> otherwise, 20 ms. While this can considerably reduce the number of
> writes compared to non-blocking locks, if run long enough it can have
> the same torturous effect. Furthermore it is more representative of
> mutex hold times and can better stress things like lock thrashing.
> 
> Signed-off-by: Davidlohr Bueso <dbueso@suse.de>

One question and one follow-up patch below.

								Thanx, Paul

> ---
>  Documentation/locking/locktorture.txt |  2 ++
>  kernel/locking/locktorture.c          | 41 +++++++++++++++++++++++++++++++++--
>  2 files changed, 41 insertions(+), 2 deletions(-)
> 
> diff --git a/Documentation/locking/locktorture.txt b/Documentation/locking/locktorture.txt
> index c0ab969..6b1e7ca 100644
> --- a/Documentation/locking/locktorture.txt
> +++ b/Documentation/locking/locktorture.txt
> @@ -40,6 +40,8 @@ torture_type	  Type of lock to torture. By default, only spinlocks will
>  		     o "spin_lock_irq": spin_lock_irq() and spin_unlock_irq()
>  					pairs.
> 
> +		     o "mutex_lock": mutex_lock() and mutex_unlock() pairs.
> +
>  torture_runnable  Start locktorture at module init. By default it will begin
>  		  once the module is loaded.
> 
> diff --git a/kernel/locking/locktorture.c b/kernel/locking/locktorture.c
> index 8c770b2..414ba45 100644
> --- a/kernel/locking/locktorture.c
> +++ b/kernel/locking/locktorture.c
> @@ -27,6 +27,7 @@
>  #include <linux/kthread.h>
>  #include <linux/err.h>
>  #include <linux/spinlock.h>
> +#include <linux/mutex.h>
>  #include <linux/smp.h>
>  #include <linux/interrupt.h>
>  #include <linux/sched.h>
> @@ -66,7 +67,7 @@ torture_param(bool, verbose, true,
>  static char *torture_type = "spin_lock";
>  module_param(torture_type, charp, 0444);
>  MODULE_PARM_DESC(torture_type,
> -		 "Type of lock to torture (spin_lock, spin_lock_irq, ...)");
> +		 "Type of lock to torture (spin_lock, spin_lock_irq, mutex_lock, ...)");
> 
>  static atomic_t n_lock_torture_errors;
> 
> @@ -206,6 +207,42 @@ static struct lock_torture_ops spin_lock_irq_ops = {
>  	.name		= "spin_lock_irq"
>  };
> 
> +static DEFINE_MUTEX(torture_mutex);
> +
> +static int torture_mutex_lock(void) __acquires(torture_mutex)
> +{
> +	mutex_lock(&torture_mutex);
> +	return 0;
> +}
> +
> +static void torture_mutex_delay(struct torture_random_state *trsp)
> +{
> +	const unsigned long longdelay_ms = 100;
> +
> +	/* We want a long delay occasionally to force massive contention.  */
> +	if (!(torture_random(trsp) %
> +	      (nrealwriters_stress * 2000 * longdelay_ms)))
> +		mdelay(longdelay_ms * 5);

So let's see...  We wait 500 milliseconds about once per 200,000 operations
per writer.  So if we have 5 writers, we wait 500 milliseconds per million
operations.  So each writer will do about 200,000 operations, then there
will be a half-second gap.  But each short operation holds the lock for
20 milliseconds, which takes several hours to work through the million
operations.

So it looks to me like you are in massive contention state either way,
at least until the next stutter interval shows up.

Is that the intent?  Or am I missing something here?

> +	else
> +		mdelay(longdelay_ms / 5);
> +#ifdef CONFIG_PREEMPT
> +	if (!(torture_random(trsp) % (nrealwriters_stress * 20000)))
> +		preempt_schedule();  /* Allow test to be preempted. */
> +#endif
> +}
> +
> +static void torture_mutex_unlock(void) __releases(torture_mutex)
> +{
> +	mutex_unlock(&torture_mutex);
> +}
> +
> +static struct lock_torture_ops mutex_lock_ops = {
> +	.writelock	= torture_mutex_lock,
> +	.write_delay	= torture_mutex_delay,
> +	.writeunlock	= torture_mutex_unlock,
> +	.name		= "mutex_lock"
> +};
> +
>  /*
>   * Lock torture writer kthread.  Repeatedly acquires and releases
>   * the lock, checking for duplicate acquisitions.
> @@ -352,7 +389,7 @@ static int __init lock_torture_init(void)
>  	int i;
>  	int firsterr = 0;
>  	static struct lock_torture_ops *torture_ops[] = {
> -		&lock_busted_ops, &spin_lock_ops, &spin_lock_irq_ops,
> +		&lock_busted_ops, &spin_lock_ops, &spin_lock_irq_ops, &mutex_lock_ops,
>  	};
> 
>  	if (!torture_init_begin(torture_type, verbose, &torture_runnable))
> -- 

And I queued the following patch to catch up the scripting.

------------------------------------------------------------------------

locktorture: Add test scenario for mutex_lock

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>

diff --git a/tools/testing/selftests/rcutorture/configs/lock/CFLIST b/tools/testing/selftests/rcutorture/configs/lock/CFLIST
index a061b22d1892..901bafde4588 100644
--- a/tools/testing/selftests/rcutorture/configs/lock/CFLIST
+++ b/tools/testing/selftests/rcutorture/configs/lock/CFLIST
@@ -1 +1,2 @@
 LOCK01
+LOCK02
diff --git a/tools/testing/selftests/rcutorture/configs/lock/LOCK02 b/tools/testing/selftests/rcutorture/configs/lock/LOCK02
new file mode 100644
index 000000000000..1d1da1477fc3
--- /dev/null
+++ b/tools/testing/selftests/rcutorture/configs/lock/LOCK02
@@ -0,0 +1,6 @@
+CONFIG_SMP=y
+CONFIG_NR_CPUS=4
+CONFIG_HOTPLUG_CPU=y
+CONFIG_PREEMPT_NONE=n
+CONFIG_PREEMPT_VOLUNTARY=n
+CONFIG_PREEMPT=y
diff --git a/tools/testing/selftests/rcutorture/configs/lock/LOCK02.boot b/tools/testing/selftests/rcutorture/configs/lock/LOCK02.boot
new file mode 100644
index 000000000000..5aa44b4f1b51
--- /dev/null
+++ b/tools/testing/selftests/rcutorture/configs/lock/LOCK02.boot
@@ -0,0 +1 @@
+locktorture.torture_type=mutex_lock


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* Re: [PATCH 7/9] locktorture: Add infrastructure for torturing read locks
  2014-09-12 16:06   ` Paul E. McKenney
@ 2014-09-12 18:02     ` Davidlohr Bueso
  0 siblings, 0 replies; 27+ messages in thread
From: Davidlohr Bueso @ 2014-09-12 18:02 UTC (permalink / raw)
  To: paulmck; +Cc: peterz, mingo, linux-kernel

On Fri, 2014-09-12 at 09:06 -0700, Paul E. McKenney wrote:
> On Thu, Sep 11, 2014 at 09:40:41PM -0700, Davidlohr Bueso wrote:
> > In addition, introduce a new nreaders_stress module parameter. The
> > default number of readers will be the same as the number of writer threads.
> > Writer threads are interleaved with readers. Documentation is updated
> > accordingly.
> 
> Nice!!!
> 
> Conditional fairness checks in the future?  (As in verifying that if
> the rwlock in question claims some degree of fairness, trying to break
> that guarantee, and contrariwise, if the lock is unfair, making sure
> to avoid starvation during the test?)

Yep, there are all sorts of interesting things we can measure in rw
runs. In this set I'm only trying to establish a minimum infrastructure.
Future work will be pretty trivial (at least code wise) to implement
once this sort of thing is in. 

> And one nit below.
[...]
> C initialization does this already, no need to add the NULL initializers.

Sure, I always tend to be explicit when initializing. I guess you won't
like the context initialization in patch 9/9.

Thanks,
Davidlohr


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 6/9] torture: Address race in module cleanup
  2014-09-12  3:40 ` [PATCH 6/9] torture: Address race in module cleanup Davidlohr Bueso
@ 2014-09-12 18:04   ` Paul E. McKenney
  2014-09-12 18:28     ` Davidlohr Bueso
  0 siblings, 1 reply; 27+ messages in thread
From: Paul E. McKenney @ 2014-09-12 18:04 UTC (permalink / raw)
  To: Davidlohr Bueso; +Cc: peterz, mingo, linux-kernel, Davidlohr Bueso

On Thu, Sep 11, 2014 at 08:40:21PM -0700, Davidlohr Bueso wrote:
> When performing module cleanups by calling torture_cleanup() the
> 'torture_type' string is nullified. However, callers are not necessarily
> done, and might still need to reference the variable. This impacts
> both rcutorture and locktorture, causing it to print things like:
> 
> [   94.226618] (null)-torture: Stopping lock_torture_writer task
> [   94.226624] (null)-torture: Stopping lock_torture_stats task
> 
> Thus delay this operation until the very end of the cleanup process.
> The consequence (which shouldn't matter for this kind of program) is,
> of course, that we widen the window between rmmod and modprobing,
> for instance in module_torture_begin().
> 
> Signed-off-by: Davidlohr Bueso <dbueso@suse.de>

Good catch!  I had just been ignoring the (null), and my scripting
doesn't care, but it is better to have it taken care of.

							Thanx, Paul

> ---
>  include/linux/torture.h      |  3 ++-
>  kernel/locking/locktorture.c |  3 ++-
>  kernel/rcu/rcutorture.c      |  3 ++-
>  kernel/torture.c             | 16 +++++++++++++---
>  4 files changed, 19 insertions(+), 6 deletions(-)
> 
> diff --git a/include/linux/torture.h b/include/linux/torture.h
> index 5ca58fc..301b628 100644
> --- a/include/linux/torture.h
> +++ b/include/linux/torture.h
> @@ -77,7 +77,8 @@ int torture_stutter_init(int s);
>  /* Initialization and cleanup. */
>  bool torture_init_begin(char *ttype, bool v, int *runnable);
>  void torture_init_end(void);
> -bool torture_cleanup(void);
> +bool torture_cleanup_begin(void);
> +void torture_cleanup_end(void);
>  bool torture_must_stop(void);
>  bool torture_must_stop_irq(void);
>  void torture_kthread_stopping(char *title);
> diff --git a/kernel/locking/locktorture.c b/kernel/locking/locktorture.c
> index de703a7..988267c 100644
> --- a/kernel/locking/locktorture.c
> +++ b/kernel/locking/locktorture.c
> @@ -361,7 +361,7 @@ static void lock_torture_cleanup(void)
>  {
>  	int i;
> 
> -	if (torture_cleanup())
> +	if (torture_cleanup_begin())
>  		return;
> 
>  	if (writer_tasks) {
> @@ -384,6 +384,7 @@ static void lock_torture_cleanup(void)
>  	else
>  		lock_torture_print_module_parms(cur_ops,
>  						"End of test: SUCCESS");
> +	torture_cleanup_end();
>  }
> 
>  static int __init lock_torture_init(void)
> diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
> index 948a769..57a2792 100644
> --- a/kernel/rcu/rcutorture.c
> +++ b/kernel/rcu/rcutorture.c
> @@ -1418,7 +1418,7 @@ rcu_torture_cleanup(void)
>  	int i;
> 
>  	rcutorture_record_test_transition();
> -	if (torture_cleanup()) {
> +	if (torture_cleanup_begin()) {
>  		if (cur_ops->cb_barrier != NULL)
>  			cur_ops->cb_barrier();
>  		return;
> @@ -1468,6 +1468,7 @@ rcu_torture_cleanup(void)
>  					       "End of test: RCU_HOTPLUG");
>  	else
>  		rcu_torture_print_module_parms(cur_ops, "End of test: SUCCESS");
> +	torture_cleanup_end();
>  }
> 
>  #ifdef CONFIG_DEBUG_OBJECTS_RCU_HEAD
> diff --git a/kernel/torture.c b/kernel/torture.c
> index d600af2..07a5c3d 100644
> --- a/kernel/torture.c
> +++ b/kernel/torture.c
> @@ -635,8 +635,13 @@ EXPORT_SYMBOL_GPL(torture_init_end);
>   *
>   * This must be called before the caller starts shutting down its own
>   * kthreads.
> + *
> + * Calls to torture_cleanup_begin() and torture_cleanup_end() must be paired
> + * in order to correctly perform the cleanup. They are separated because
> + * threads can still need to reference the torture_type variable, so it is
> + * nullified only after completing all other relevant calls.
>   */
> -bool torture_cleanup(void)
> +bool torture_cleanup_begin(void)
>  {
>  	mutex_lock(&fullstop_mutex);
>  	if (ACCESS_ONCE(fullstop) == FULLSTOP_SHUTDOWN) {
> @@ -651,12 +656,17 @@ bool torture_cleanup(void)
>  	torture_shuffle_cleanup();
>  	torture_stutter_cleanup();
>  	torture_onoff_cleanup();
> +	return false;
> +}
> +EXPORT_SYMBOL_GPL(torture_cleanup_begin);
> +
> +void torture_cleanup_end(void)
> +{
>  	mutex_lock(&fullstop_mutex);
>  	torture_type = NULL;
>  	mutex_unlock(&fullstop_mutex);
> -	return false;
>  }
> -EXPORT_SYMBOL_GPL(torture_cleanup);
> +EXPORT_SYMBOL_GPL(torture_cleanup_end);
> 
>  /*
>   * Is it time for the current torture test to stop?
> -- 
> 1.8.4.5
> 


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 8/9] locktorture: Support rwsems
  2014-09-12  4:41 ` [PATCH 8/9] locktorture: Support rwsems Davidlohr Bueso
  2014-09-12  7:37   ` Peter Zijlstra
@ 2014-09-12 18:07   ` Paul E. McKenney
  1 sibling, 0 replies; 27+ messages in thread
From: Paul E. McKenney @ 2014-09-12 18:07 UTC (permalink / raw)
  To: Davidlohr Bueso; +Cc: peterz, mingo, linux-kernel, dbueso

On Thu, Sep 11, 2014 at 09:41:30PM -0700, Davidlohr Bueso wrote:
> We can easily do so with our new reader lock support. Just an arbitrary
> design default: readers have higher (5x) critical region latencies than
> writers: 50 ms and 10 ms, respectively.

Except in the massive contention case, where the writers get longer
delays than the readers, correct?

I again am guessing that you are relying on stutter intervals to allow
the locks to be in any state other than massively contended.

And patch to add this to the default set run by the scripts below.

> Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
> ---
>  Documentation/locking/locktorture.txt |  2 ++
>  kernel/locking/locktorture.c          | 68 ++++++++++++++++++++++++++++++++++-
>  2 files changed, 69 insertions(+), 1 deletion(-)
> 
> diff --git a/Documentation/locking/locktorture.txt b/Documentation/locking/locktorture.txt
> index 1bdeb71..f7d99e2 100644
> --- a/Documentation/locking/locktorture.txt
> +++ b/Documentation/locking/locktorture.txt
> @@ -47,6 +47,8 @@ torture_type	  Type of lock to torture. By default, only spinlocks will
> 
>  		     o "mutex_lock": mutex_lock() and mutex_unlock() pairs.
> 
> +		     o "rwsem_lock": read/write down() and up() semaphore pairs.
> +
>  torture_runnable  Start locktorture at module init. By default it will begin
>  		  once the module is loaded.
> 
> diff --git a/kernel/locking/locktorture.c b/kernel/locking/locktorture.c
> index c1073d7..8480118 100644
> --- a/kernel/locking/locktorture.c
> +++ b/kernel/locking/locktorture.c
> @@ -265,6 +265,71 @@ static struct lock_torture_ops mutex_lock_ops = {
>  	.name		= "mutex_lock"
>  };
> 
> +static DECLARE_RWSEM(torture_rwsem);
> +static int torture_rwsem_down_write(void) __acquires(torture_rwsem)
> +{
> +	down_write(&torture_rwsem);
> +	return 0;
> +}
> +
> +static void torture_rwsem_write_delay(struct torture_random_state *trsp)
> +{
> +	const unsigned long longdelay_ms = 100;
> +
> +	/* We want a long delay occasionally to force massive contention.  */
> +	if (!(torture_random(trsp) %
> +	      (nrealwriters_stress * 2000 * longdelay_ms)))
> +		mdelay(longdelay_ms * 10);
> +	else
> +		mdelay(longdelay_ms / 10);
> +#ifdef CONFIG_PREEMPT
> +	if (!(torture_random(trsp) % (nrealwriters_stress * 20000)))
> +		preempt_schedule();  /* Allow test to be preempted. */
> +#endif
> +}
> +
> +static void torture_rwsem_up_write(void) __releases(torture_rwsem)
> +{
> +	up_write(&torture_rwsem);
> +}
> +
> +static int torture_rwsem_down_read(void) __acquires(torture_rwsem)
> +{
> +	down_read(&torture_rwsem);
> +	return 0;
> +}
> +
> +static void torture_rwsem_read_delay(struct torture_random_state *trsp)
> +{
> +	const unsigned long longdelay_ms = 100;
> +
> +	/* We want a long delay occasionally to force massive contention.  */
> +	if (!(torture_random(trsp) %
> +	      (nrealwriters_stress * 2000 * longdelay_ms)))
> +		mdelay(longdelay_ms * 2);
> +	else
> +		mdelay(longdelay_ms / 2);
> +#ifdef CONFIG_PREEMPT
> +	if (!(torture_random(trsp) % (nrealreaders_stress * 20000)))
> +		preempt_schedule();  /* Allow test to be preempted. */
> +#endif
> +}
> +
> +static void torture_rwsem_up_read(void) __releases(torture_rwsem)
> +{
> +	up_read(&torture_rwsem);
> +}
> +
> +static struct lock_torture_ops rwsem_lock_ops = {
> +	.writelock	= torture_rwsem_down_write,
> +	.write_delay	= torture_rwsem_write_delay,
> +	.writeunlock	= torture_rwsem_up_write,
> +	.readlock       = torture_rwsem_down_read,
> +	.read_delay     = torture_rwsem_read_delay,
> +	.readunlock     = torture_rwsem_up_read,
> +	.name		= "rwsem_lock"
> +};
> +
>  /*
>   * Lock torture writer kthread.  Repeatedly acquires and releases
>   * the lock, checking for duplicate acquisitions.
> @@ -467,7 +532,8 @@ static int __init lock_torture_init(void)
>  	int i, j;
>  	int firsterr = 0;
>  	static struct lock_torture_ops *torture_ops[] = {
> -		&lock_busted_ops, &spin_lock_ops, &spin_lock_irq_ops, &mutex_lock_ops,
> +		&lock_busted_ops, &spin_lock_ops, &spin_lock_irq_ops,
> +		&mutex_lock_ops, &rwsem_lock_ops,
>  	};
> 
>  	if (!torture_init_begin(torture_type, verbose, &torture_runnable))
> -- 
> 1.8.4.5

------------------------------------------------------------------------

locktorture: Add test scenario for rwsem_lock

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>

diff --git a/tools/testing/selftests/rcutorture/configs/lock/CFLIST b/tools/testing/selftests/rcutorture/configs/lock/CFLIST
index 901bafde4588..6108137da770 100644
--- a/tools/testing/selftests/rcutorture/configs/lock/CFLIST
+++ b/tools/testing/selftests/rcutorture/configs/lock/CFLIST
@@ -1,2 +1,3 @@
 LOCK01
 LOCK02
+LOCK03
diff --git a/tools/testing/selftests/rcutorture/configs/lock/LOCK03 b/tools/testing/selftests/rcutorture/configs/lock/LOCK03
new file mode 100644
index 000000000000..1d1da1477fc3
--- /dev/null
+++ b/tools/testing/selftests/rcutorture/configs/lock/LOCK03
@@ -0,0 +1,6 @@
+CONFIG_SMP=y
+CONFIG_NR_CPUS=4
+CONFIG_HOTPLUG_CPU=y
+CONFIG_PREEMPT_NONE=n
+CONFIG_PREEMPT_VOLUNTARY=n
+CONFIG_PREEMPT=y
diff --git a/tools/testing/selftests/rcutorture/configs/lock/LOCK03.boot b/tools/testing/selftests/rcutorture/configs/lock/LOCK03.boot
new file mode 100644
index 000000000000..a67bbe0245c9
--- /dev/null
+++ b/tools/testing/selftests/rcutorture/configs/lock/LOCK03.boot
@@ -0,0 +1 @@
+locktorture.torture_type=rwsem_lock


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* Re: [PATCH 6/9] torture: Address race in module cleanup
  2014-09-12 18:04   ` Paul E. McKenney
@ 2014-09-12 18:28     ` Davidlohr Bueso
  2014-09-12 19:03       ` Paul E. McKenney
  0 siblings, 1 reply; 27+ messages in thread
From: Davidlohr Bueso @ 2014-09-12 18:28 UTC (permalink / raw)
  To: paulmck; +Cc: peterz, mingo, linux-kernel

On Fri, 2014-09-12 at 11:04 -0700, Paul E. McKenney wrote:
> On Thu, Sep 11, 2014 at 08:40:21PM -0700, Davidlohr Bueso wrote:
> > When performing module cleanups by calling torture_cleanup() the
> > 'torture_type' string is nullified. However, callers are not necessarily
> > done, and might still need to reference the variable. This impacts
> > both rcutorture and locktorture, causing it to print things like:
> > 
> > [   94.226618] (null)-torture: Stopping lock_torture_writer task
> > [   94.226624] (null)-torture: Stopping lock_torture_stats task
> > 
> > Thus delay this operation until the very end of the cleanup process.
> > The consequence (which shouldn't matter for this kind of program) is,
> > of course, that we widen the window between rmmod and modprobing,
> > for instance in module_torture_begin().
> > 
> > Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
> 
> Good catch!  I had just been ignoring the (null), and my scripting
> doesn't care, but it is better to have it taken care of.

In addition, for locktorture this issue can cause not only the (null) output
but also printing of the wrong cleanup string when a new module is loaded
with a different torture_type.

Thanks,
Davidlohr


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 3/9] locktorture: Support mutexes
  2014-09-12 18:02   ` Paul E. McKenney
@ 2014-09-12 18:56     ` Davidlohr Bueso
  2014-09-12 19:12       ` Paul E. McKenney
  0 siblings, 1 reply; 27+ messages in thread
From: Davidlohr Bueso @ 2014-09-12 18:56 UTC (permalink / raw)
  To: paulmck; +Cc: peterz, mingo, linux-kernel

On Fri, 2014-09-12 at 11:02 -0700, Paul E. McKenney wrote:
> On Thu, Sep 11, 2014 at 08:40:18PM -0700, Davidlohr Bueso wrote:
> > +static void torture_mutex_delay(struct torture_random_state *trsp)
> > +{
> > +	const unsigned long longdelay_ms = 100;
> > +
> > +	/* We want a long delay occasionally to force massive contention.  */
> > +	if (!(torture_random(trsp) %
> > +	      (nrealwriters_stress * 2000 * longdelay_ms)))
> > +		mdelay(longdelay_ms * 5);
> 
> So let's see...  We wait 500 milliseconds about once per 200,000 operations
> per writer.  So if we have 5 writers, we wait 500 milliseconds per million
> operations.  So each writer will do about 200,000 operations, then there
> will be a half-second gap.  But each short operation holds the lock for
> 20 milliseconds, which takes several hours to work through the million
> operations.
> 
> So it looks to me like you are in massive contention state either way,
> at least until the next stutter interval shows up.
> 
> Is that the intent?  Or am I missing something here?

Ah, nice description. Yes, I am aiming for constant massive contention
(should have mentioned this, sorry). I believe it stresses the more
interesting parts of mutexes -- and rwsems, for that matter. If you
think it's excessive, we could decrease the large wait and/or
increase the short one. I chose the delay as a factor of the default
stutter value -- we could also make it always equal.

> > +	else
> > +		mdelay(longdelay_ms / 5);
> > +#ifdef CONFIG_PREEMPT
> > +	if (!(torture_random(trsp) % (nrealwriters_stress * 20000)))
> > +		preempt_schedule();  /* Allow test to be preempted. */
> > +#endif
> > +}
> > +
> > +static void torture_mutex_unlock(void) __releases(torture_mutex)
> > +{
> > +	mutex_unlock(&torture_mutex);
> > +}
> > +
> > +static struct lock_torture_ops mutex_lock_ops = {
> > +	.writelock	= torture_mutex_lock,
> > +	.write_delay	= torture_mutex_delay,
> > +	.writeunlock	= torture_mutex_unlock,
> > +	.name		= "mutex_lock"
> > +};
> > +
> >  /*
> >   * Lock torture writer kthread.  Repeatedly acquires and releases
> >   * the lock, checking for duplicate acquisitions.
> > @@ -352,7 +389,7 @@ static int __init lock_torture_init(void)
> >  	int i;
> >  	int firsterr = 0;
> >  	static struct lock_torture_ops *torture_ops[] = {
> > -		&lock_busted_ops, &spin_lock_ops, &spin_lock_irq_ops,
> > +		&lock_busted_ops, &spin_lock_ops, &spin_lock_irq_ops, &mutex_lock_ops,
> >  	};
> > 
> >  	if (!torture_init_begin(torture_type, verbose, &torture_runnable))
> > -- 
> 
> And I queued the following patch to catch up the scripting.

Thanks! Completely overlooked the scripting bits. I'll keep it in mind
in the future.


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 6/9] torture: Address race in module cleanup
  2014-09-12 18:28     ` Davidlohr Bueso
@ 2014-09-12 19:03       ` Paul E. McKenney
  0 siblings, 0 replies; 27+ messages in thread
From: Paul E. McKenney @ 2014-09-12 19:03 UTC (permalink / raw)
  To: Davidlohr Bueso; +Cc: peterz, mingo, linux-kernel

On Fri, Sep 12, 2014 at 11:28:36AM -0700, Davidlohr Bueso wrote:
> On Fri, 2014-09-12 at 11:04 -0700, Paul E. McKenney wrote:
> > On Thu, Sep 11, 2014 at 08:40:21PM -0700, Davidlohr Bueso wrote:
> > > When performing module cleanups by calling torture_cleanup() the
> > > 'torture_type' string is nullified. However, callers are not necessarily
> > > done, and might still need to reference the variable. This impacts
> > > both rcutorture and locktorture, causing it to print things like:
> > > 
> > > [   94.226618] (null)-torture: Stopping lock_torture_writer task
> > > [   94.226624] (null)-torture: Stopping lock_torture_stats task
> > > 
> > > Thus delay this operation until the very end of the cleanup process.
> > > The consequence (which shouldn't matter for this kind of program) is,
> > > of course, that we widen the window between rmmod and modprobing,
> > > for instance in module_torture_begin().
> > > 
> > > Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
> > 
> > Good catch!  I had just been ignoring the (null), and my scripting
> > doesn't care, but it is better to have it taken care of.
> 
> In addition, for locktorture this issue can cause not only the (null) output
> but also printing of the wrong cleanup string when a new module is loaded
> with a different torture_type.

That would be even more annoying.  ;-)

							Thanx, Paul


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 3/9] locktorture: Support mutexes
  2014-09-12 18:56     ` Davidlohr Bueso
@ 2014-09-12 19:12       ` Paul E. McKenney
  2014-09-13  2:13         ` Davidlohr Bueso
  0 siblings, 1 reply; 27+ messages in thread
From: Paul E. McKenney @ 2014-09-12 19:12 UTC (permalink / raw)
  To: Davidlohr Bueso; +Cc: peterz, mingo, linux-kernel

On Fri, Sep 12, 2014 at 11:56:31AM -0700, Davidlohr Bueso wrote:
> On Fri, 2014-09-12 at 11:02 -0700, Paul E. McKenney wrote:
> > On Thu, Sep 11, 2014 at 08:40:18PM -0700, Davidlohr Bueso wrote:
> > > +static void torture_mutex_delay(struct torture_random_state *trsp)
> > > +{
> > > +	const unsigned long longdelay_ms = 100;
> > > +
> > > +	/* We want a long delay occasionally to force massive contention.  */
> > > +	if (!(torture_random(trsp) %
> > > +	      (nrealwriters_stress * 2000 * longdelay_ms)))
> > > +		mdelay(longdelay_ms * 5);
> > 
> > So let's see...  We wait 500 milliseconds about once per 200,000 operations
> > per writer.  So if we have 5 writers, we wait 500 milliseconds per million
> > operations.  So each writer will do about 200,000 operations, then there
> > will be a half-second gap.  But each short operation holds the lock for
> > 20 milliseconds, which takes several hours to work through the million
> > operations.
> > 
> > So it looks to me like you are in massive contention state either way,
> > at least until the next stutter interval shows up.
> > 
> > Is that the intent?  Or am I missing something here?
> 
> Ah, nice description. Yes, I am aiming for constant massive contention
> (should have mentioned this, sorry). I believe it stresses the more
> interesting parts of mutexes -- and rwsems, for that matter. If you
> think it's excessive, we could decrease the large wait and/or
> increase the short one. I used the factor of the delay by the default
> stutter value -- we could also make it always equal.

Don't get me wrong -- I am all for massive contention testing.  It is
just that from what I can see, you aren't getting any real additional
benefit out of the 500-millisecond wait.  Having even as few as (say)
three tasks each repeatedly acquiring the lock and blocking for 20
milliseconds ("else" clause below) will give you maximal contention.
I cannot see how occasionally blocking for 500 milliseconds can do much
of anything to increase the contention level.
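
To put rough numbers on it (using the constants from the patch and
assuming nrealwriters_stress == 5, as in the example above):

	long delay:  once per nrealwriters_stress * 2000 * longdelay_ms
	             = 5 * 2000 * 100 = 1,000,000 calls in aggregate
	             (200,000 per writer), costing 500 ms each time
	short delay: the other ~999,999 calls, each holding the lock for
	             20 ms, i.e. ~20,000 seconds (~5.5 hours) of holds

So each 0.5-second pause is buried under hours of 20-millisecond holds,
which is why I don't expect it to move the contention needle.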

Now if the common case was to acquire and then immediately release the
lock, I could see how throwing in the occasional delay would be very
useful.  But for exclusive locks, a few tens of microseconds delay would
probably suffice to give you a maximal contention event.  Yes, you do
have a one-jiffy delay in the lock_torture_writer() loop, but it happens
only one loop out of one million -- and if that is what you are worried
about, a two-jiffy delay in the critical section would -guarantee- you
a maximal contention event in most cases.

So my concern is that the large values you have are mostly slowing down
the test and thus reducing its intensity.  But again, I could easily be
missing something here.

> > > +	else
> > > +		mdelay(longdelay_ms / 5);
> > > +#ifdef CONFIG_PREEMPT
> > > +	if (!(torture_random(trsp) % (nrealwriters_stress * 20000)))
> > > +		preempt_schedule();  /* Allow test to be preempted. */
> > > +#endif
> > > +}
> > > +
> > > +static void torture_mutex_unlock(void) __releases(torture_mutex)
> > > +{
> > > +	mutex_unlock(&torture_mutex);
> > > +}
> > > +
> > > +static struct lock_torture_ops mutex_lock_ops = {
> > > +	.writelock	= torture_mutex_lock,
> > > +	.write_delay	= torture_mutex_delay,
> > > +	.writeunlock	= torture_mutex_unlock,
> > > +	.name		= "mutex_lock"
> > > +};
> > > +
> > >  /*
> > >   * Lock torture writer kthread.  Repeatedly acquires and releases
> > >   * the lock, checking for duplicate acquisitions.
> > > @@ -352,7 +389,7 @@ static int __init lock_torture_init(void)
> > >  	int i;
> > >  	int firsterr = 0;
> > >  	static struct lock_torture_ops *torture_ops[] = {
> > > -		&lock_busted_ops, &spin_lock_ops, &spin_lock_irq_ops,
> > > +		&lock_busted_ops, &spin_lock_ops, &spin_lock_irq_ops, &mutex_lock_ops,
> > >  	};
> > > 
> > >  	if (!torture_init_begin(torture_type, verbose, &torture_runnable))
> > > -- 
> > 
> > And I queued the following patch to catch up the scripting.
> 
> Thanks! Completely overlooked the scripting bits. I'll keep it in mind
> in the future.

No problem, and I look forward to also seeing the scripting pieces in
the future.  ;-)

							Thanx, Paul


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 2/9] locktorture: Add documentation
  2014-09-12  3:40 ` [PATCH 2/9] locktorture: Add documentation Davidlohr Bueso
  2014-09-12  5:28   ` Davidlohr Bueso
@ 2014-09-13  1:10   ` Randy Dunlap
  2014-09-16 19:35     ` Paul E. McKenney
  1 sibling, 1 reply; 27+ messages in thread
From: Randy Dunlap @ 2014-09-13  1:10 UTC (permalink / raw)
  To: Davidlohr Bueso, paulmck; +Cc: peterz, mingo, linux-kernel, Davidlohr Bueso

On 09/11/14 20:40, Davidlohr Bueso wrote:
> Just like Documentation/RCU/torture.txt, begin a document for the
> locktorture module. This module is still pretty green, so I have
> just added some specific sections to the doc (general desc, params,
> usage, etc.). Further development should update the file.
> 
> Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
> ---
>  Documentation/locking/locktorture.txt | 128 ++++++++++++++++++++++++++++++++++
>  1 file changed, 128 insertions(+)
>  create mode 100644 Documentation/locking/locktorture.txt
> 
> diff --git a/Documentation/locking/locktorture.txt b/Documentation/locking/locktorture.txt
> new file mode 100644
> index 0000000..c0ab969
> --- /dev/null
> +++ b/Documentation/locking/locktorture.txt
> @@ -0,0 +1,128 @@
> +Kernel Lock Torture Test Operation
> +
> +CONFIG_LOCK_TORTURE_TEST
> +
> +The CONFIG LOCK_TORTURE_TEST config option provides a kernel module
> +that runs torture tests on core kernel locking primitives. The kernel
> +module, 'locktorture', may be built after the fact on the running
> +kernel to be tested, if desired. The tests periodically outputs status

                                                           output

> +messages via printk(), which can be examined via the dmesg (perhaps
> +grepping for "torture").  The test is started when the module is loaded,
> +and stops when the module is unloaded. This program is based on how RCU
> +is tortured, via rcutorture.
> +
> +This torture test consists of creating a number of kernel threads which
> +acquires the lock and holds it for specific amount of time, thus simulating

   acquire               hold

> +different critical region behaviors. The amount of contention on the lock
> +can be simulated by either enlarging this critical region hold time and/or
> +creating more kthreads.
> +
> +
> +MODULE PARAMETERS
> +
> +This module has the following parameters:
> +
> +
> +	    ** Locktorture-specific **
> +
> +nwriters_stress   Number of kernel threads that will stress exclusive lock
> +		  ownership (writers). The default value is twice the amount

I would s/amount/number/ but that's minor.

> +		  of online CPUs.
> +
> +torture_type	  Type of lock to torture. By default, only spinlocks will
> +		  be tortured. This module can torture the following locks,
> +		  with string values as follows:
> +
> +		     o "lock_busted": Simulates a buggy lock implementation.
> +
> +		     o "spin_lock": spin_lock() and spin_unlock() pairs.
> +
> +		     o "spin_lock_irq": spin_lock_irq() and spin_unlock_irq()
> +					pairs.
> +
> +torture_runnable  Start locktorture at module init. By default it will begin
> +		  once the module is loaded.

What differences would that make?

> +
> +
> +	    ** Torture-framework (RCU + locking) **
> +
> +shutdown_secs	  The number of seconds to run the test before terminating
> +		  the test and powering off the system.  The default is
> +		  zero, which disables test termination and system shutdown.
> +		  This capability is useful for automated testing.
> +
> +onoff_holdoff	  The number of seconds between each attempt to execute a
> +		  randomly selected CPU-hotplug operation.  Defaults to
> +		  zero, which disables CPU hotplugging.  In HOTPLUG_CPU=n

s/HOTPLUG_CPU/CONFIG_HOTPLUG_CPU/ to be consistent.

> +		  kernels, locktorture will silently refuse to do any
> +		  CPU-hotplug operations regardless of what value is
> +		  specified for onoff_interval.

eh?  what is                    onoff_interval ?

Oh, the param name (in leftmost column) above should be onoff_interval since
onoff_holdoff is below.

> +
> +onoff_holdoff	  The number of seconds to wait until starting CPU-hotplug
> +		  operations.  This would normally only be used when
> +		  locktorture was built into the kernel and started
> +		  automatically at boot time, in which case it is useful
> +		  in order to avoid confusing boot-time code with CPUs
> +		  coming and going. This parameter is only useful if
> +		  CONFIG_HOTPLUG_CPU is enabled.
> +
> +stat_interval	  Number of seconds between statistics-related printk()s.
> +		  By default, locktorture will report stats every 60 seconds.
> +		  Setting the interval to zero causes the statistics to
> +		  be printed -only- when the module is unloaded, and this
> +		  is the default.
> +
> +stutter		  The length of time to run the test before pausing for this
> +		  same period of time.  Defaults to "stutter=5", so as
> +		  to run and pause for (roughly) five-second intervals.
> +		  Specifying "stutter=0" causes the test to run continuously
> +		  without pausing, which is the old default behavior.
> +
> +shuffle_interval  The number of seconds to keep the test threads affinitied
> +		  to a particular subset of the CPUs, defaults to 3 seconds.
> +		  Used in conjunction with test_no_idle_hz.
> +
> +verbose		  Enable verbose debugging printking, via printk(). Enabled

			                           printing

> +		  by default. This extra information is mostly related to
> +		  high-level errors and reports from the main 'torture'
> +		  framework.
> +
> +
> +STATISTICS
> +
> +Statistics are printed in the following format:
> +
> +spin_lock-torture: Writes:  Total: 93746064  Max/Min: 0/0   Fail: 0
> +   (A)				   (B)		  (C)	       (D)
> +
> +(A): Lock type that is being tortured -- torture_type parameter.
> +
> +(B): Number of times the lock was acquired.
> +
> +(C): Min and max number of times threads failed to acquire the lock.
> +
> +(D): true/false values if there were errors acquiring the lock. This should
> +     -only- be positive if there is a bug in the locking primitive's
> +     implementation. Otherwise a lock should never fail (ie: spin_lock()).

                                                           (i.e., spin_lock()).

> +     Of course, the same applies for (C), above. A dummy example of this is
> +     the "lock_busted" type.
> +
> +USAGE
> +
> +The following script may be used to torture locks:
> +
> +	#!/bin/sh
> +
> +	modprobe locktorture
> +	sleep 3600
> +	rmmod locktorture
> +	dmesg | grep torture:
> +
> +The output can be manually inspected for the error flag of "!!!".
> +One could of course create a more elaborate script that automatically
> +checked for such errors.  The "rmmod" command forces a "SUCCESS",
> +"FAILURE", or "RCU_HOTPLUG" indication to be printk()ed.  The first
> +two are self-explanatory, while the last indicates that while there
> +were no locking failures, CPU-hotplug problems were detected.
> +
> +Also see: Documentation/RCU/torture.txt
> 


-- 
~Randy

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 3/9] locktorture: Support mutexes
  2014-09-12 19:12       ` Paul E. McKenney
@ 2014-09-13  2:13         ` Davidlohr Bueso
  0 siblings, 0 replies; 27+ messages in thread
From: Davidlohr Bueso @ 2014-09-13  2:13 UTC (permalink / raw)
  To: paulmck; +Cc: peterz, mingo, linux-kernel

On Fri, 2014-09-12 at 12:12 -0700, Paul E. McKenney wrote:
> On Fri, Sep 12, 2014 at 11:56:31AM -0700, Davidlohr Bueso wrote:
> > On Fri, 2014-09-12 at 11:02 -0700, Paul E. McKenney wrote:
> > > On Thu, Sep 11, 2014 at 08:40:18PM -0700, Davidlohr Bueso wrote:
> > > > +static void torture_mutex_delay(struct torture_random_state *trsp)
> > > > +{
> > > > +	const unsigned long longdelay_ms = 100;
> > > > +
> > > > +	/* We want a long delay occasionally to force massive contention.  */
> > > > +	if (!(torture_random(trsp) %
> > > > +	      (nrealwriters_stress * 2000 * longdelay_ms)))
> > > > +		mdelay(longdelay_ms * 5);
> > > 
> > > So let's see...  We wait 500 milliseconds about once per 200,000 operations
> > > per writer.  So if we have 5 writers, we wait 500 milliseconds per million
> > > operations.  So each writer will do about 200,000 operations, then there
> > > will be a half-second gap.  But each short operation holds the lock for
> > > 20 milliseconds, which takes several hours to work through the million
> > > operations.
> > > 
> > > So it looks to me like you are in massive contention state either way,
> > > at least until the next stutter interval shows up.
> > > 
> > > Is that the intent?  Or am I missing something here?
> > 
> > Ah, nice description. Yes, I am aiming for constant massive contention
> > (should have mentioned this, sorry). I believe it stresses the more
> > interesting parts of mutexes -- and rwsems, for that matter. If you
> > think it's excessive, we could decrease the large wait and/or
> > increase the short one. I used the factor of the delay by the default
> > stutter value -- we could also make it always equal.
> 
> Don't get me wrong -- I am all for massive contention testing.  It is
> just that from what I can see, you aren't getting any real additional
> benefit out of the 500-millisecond wait.  Having even as few as (say)
> three tasks each repeatedly acquiring the lock and blocking for 20
> milliseconds ("else" clause below) will give you maximal contention.
> I cannot see how occasionally blocking for 500 milliseconds can do much
> of anything to increase the contention level.
> 
> Now if the common case was to acquire and then immediately release the
> lock, I could see how throwing in the occasional delay would be very
> useful. 

Right, that's what we do in the case of spinlock torturing.

> But for exclusive locks, a few tens of microseconds delay would
> probably suffice to give you a maximal contention event.  Yes, you do
> have a one-jiffy delay in the lock_torture_writer() loop, but it happens
> only one loop out of one million -- and if that is what you are worried
> about, a two-jiffy delay in the critical section would -guarantee- you
> a maximal contention event in most cases.

Ok yeah, no need to increase the jiffy delay.

> So my concern is that the large values you have are mostly slowing down
> the test and thus reducing its intensity.  But again, I could easily be
> missing something here.

You aren't. My rationale was to have long hold times, plus the
occasional very long one. I'm thinking of either removing the 500 ms
delay altogether, or decreasing both delays by ~10x. That should spread
the contention level more evenly between the two delays. Threads
blocking for ~2ms should be quite ok for us.
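
To make that concrete, here is a rough, untested sketch of the ~10x
version (the constants are placeholders, not a proposal; note that
shrinking longdelay_ms also shrinks the modulus, so the long hold would
fire roughly 10x more often as a side effect):

static void torture_mutex_delay(struct torture_random_state *trsp)
{
	/* 1/10th of the current value: ~2 ms short, ~50 ms long holds. */
	const unsigned long longdelay_ms = 10;

	/* We still want a longer delay occasionally to vary contention. */
	if (!(torture_random(trsp) %
	      (nrealwriters_stress * 2000 * longdelay_ms)))
		mdelay(longdelay_ms * 5);	/* ~50 ms */
	else
		mdelay(longdelay_ms / 5);	/* ~2 ms */
#ifdef CONFIG_PREEMPT
	if (!(torture_random(trsp) % (nrealwriters_stress * 20000)))
		preempt_schedule();  /* Allow test to be preempted. */
#endif
}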

Thanks,
Davidlohr


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 2/9] locktorture: Add documentation
  2014-09-13  1:10   ` Randy Dunlap
@ 2014-09-16 19:35     ` Paul E. McKenney
  0 siblings, 0 replies; 27+ messages in thread
From: Paul E. McKenney @ 2014-09-16 19:35 UTC (permalink / raw)
  To: Randy Dunlap
  Cc: Davidlohr Bueso, peterz, mingo, linux-kernel, Davidlohr Bueso

On Fri, Sep 12, 2014 at 06:10:19PM -0700, Randy Dunlap wrote:
> On 09/11/14 20:40, Davidlohr Bueso wrote:
> > Just like Documentation/RCU/torture.txt, begin a document for the
> > locktorture module. This module is still pretty green, so I have
> > just added some specific sections to the doc (general desc, params,
> > usage, etc.). Further development should update the file.
> > 
> > Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
> > ---
> >  Documentation/locking/locktorture.txt | 128 ++++++++++++++++++++++++++++++++++
> >  1 file changed, 128 insertions(+)
> >  create mode 100644 Documentation/locking/locktorture.txt

Thank you for the review, Randy!  I am folding the patch below into
Davidlohr's patch.

							Thanx, Paul

> > diff --git a/Documentation/locking/locktorture.txt b/Documentation/locking/locktorture.txt
> > new file mode 100644
> > index 0000000..c0ab969
> > --- /dev/null
> > +++ b/Documentation/locking/locktorture.txt
> > @@ -0,0 +1,128 @@
> > +Kernel Lock Torture Test Operation
> > +
> > +CONFIG_LOCK_TORTURE_TEST
> > +
> > +The CONFIG LOCK_TORTURE_TEST config option provides a kernel module
> > +that runs torture tests on core kernel locking primitives. The kernel
> > +module, 'locktorture', may be built after the fact on the running
> > +kernel to be tested, if desired. The tests periodically outputs status
> 
>                                                            output
> 
> > +messages via printk(), which can be examined via the dmesg (perhaps
> > +grepping for "torture").  The test is started when the module is loaded,
> > +and stops when the module is unloaded. This program is based on how RCU
> > +is tortured, via rcutorture.
> > +
> > +This torture test consists of creating a number of kernel threads which
> > +acquires the lock and holds it for specific amount of time, thus simulating
> 
>    acquire               hold
> 
> > +different critical region behaviors. The amount of contention on the lock
> > +can be simulated by either enlarging this critical region hold time and/or
> > +creating more kthreads.
> > +
> > +
> > +MODULE PARAMETERS
> > +
> > +This module has the following parameters:
> > +
> > +
> > +	    ** Locktorture-specific **
> > +
> > +nwriters_stress   Number of kernel threads that will stress exclusive lock
> > +		  ownership (writers). The default value is twice the amount
> 
> I would s/amount/number/ but that's minor.
> 
> > +		  of online CPUs.
> > +
> > +torture_type	  Type of lock to torture. By default, only spinlocks will
> > +		  be tortured. This module can torture the following locks,
> > +		  with string values as follows:
> > +
> > +		     o "lock_busted": Simulates a buggy lock implementation.
> > +
> > +		     o "spin_lock": spin_lock() and spin_unlock() pairs.
> > +
> > +		     o "spin_lock_irq": spin_lock_irq() and spin_unlock_irq()
> > +					pairs.
> > +
> > +torture_runnable  Start locktorture at module init. By default it will begin
> > +		  once the module is loaded.
> 
> What differences would that make?
> 
> > +
> > +
> > +	    ** Torture-framework (RCU + locking) **
> > +
> > +shutdown_secs	  The number of seconds to run the test before terminating
> > +		  the test and powering off the system.  The default is
> > +		  zero, which disables test termination and system shutdown.
> > +		  This capability is useful for automated testing.
> > +
> > +onoff_holdoff	  The number of seconds between each attempt to execute a
> > +		  randomly selected CPU-hotplug operation.  Defaults to
> > +		  zero, which disables CPU hotplugging.  In HOTPLUG_CPU=n
> 
> s/HOTPLUG_CPU/CONFIG_HOTPLUG_CPU/ to be consistent.
> 
> > +		  kernels, locktorture will silently refuse to do any
> > +		  CPU-hotplug operations regardless of what value is
> > +		  specified for onoff_interval.
> 
> eh?  what is                    onoff_interval ?
> 
> Oh, the param name (in leftmost column) above should be onoff_interval since
> onoff_holdoff is below.
> 
> > +
> > +onoff_holdoff	  The number of seconds to wait until starting CPU-hotplug
> > +		  operations.  This would normally only be used when
> > +		  locktorture was built into the kernel and started
> > +		  automatically at boot time, in which case it is useful
> > +		  in order to avoid confusing boot-time code with CPUs
> > +		  coming and going. This parameter is only useful if
> > +		  CONFIG_HOTPLUG_CPU is enabled.
> > +
> > +stat_interval	  Number of seconds between statistics-related printk()s.
> > +		  By default, locktorture will report stats every 60 seconds.
> > +		  Setting the interval to zero causes the statistics to
> > +		  be printed -only- when the module is unloaded, and this
> > +		  is the default.
> > +
> > +stutter		  The length of time to run the test before pausing for this
> > +		  same period of time.  Defaults to "stutter=5", so as
> > +		  to run and pause for (roughly) five-second intervals.
> > +		  Specifying "stutter=0" causes the test to run continuously
> > +		  without pausing, which is the old default behavior.
> > +
> > +shuffle_interval  The number of seconds to keep the test threads affinitied
> > +		  to a particular subset of the CPUs, defaults to 3 seconds.
> > +		  Used in conjunction with test_no_idle_hz.
> > +
> > +verbose		  Enable verbose debugging printking, via printk(). Enabled
> 
> 			                           printing
> 
> > +		  by default. This extra information is mostly related to
> > +		  high-level errors and reports from the main 'torture'
> > +		  framework.
> > +
> > +
> > +STATISTICS
> > +
> > +Statistics are printed in the following format:
> > +
> > +spin_lock-torture: Writes:  Total: 93746064  Max/Min: 0/0   Fail: 0
> > +   (A)				   (B)		  (C)	       (D)
> > +
> > +(A): Lock type that is being tortured -- torture_type parameter.
> > +
> > +(B): Number of times the lock was acquired.
> > +
> > +(C): Min and max number of times threads failed to acquire the lock.
> > +
> > +(D): true/false values if there were errors acquiring the lock. This should
> > +     -only- be positive if there is a bug in the locking primitive's
> > +     implementation. Otherwise a lock should never fail (ie: spin_lock()).
> 
>                                                            (i.e., spin_lock()).
> 
> > +     Of course, the same applies for (C), above. A dummy example of this is
> > +     the "lock_busted" type.
> > +
> > +USAGE
> > +
> > +The following script may be used to torture locks:
> > +
> > +	#!/bin/sh
> > +
> > +	modprobe locktorture
> > +	sleep 3600
> > +	rmmod locktorture
> > +	dmesg | grep torture:
> > +
> > +The output can be manually inspected for the error flag of "!!!".
> > +One could of course create a more elaborate script that automatically
> > +checked for such errors.  The "rmmod" command forces a "SUCCESS",
> > +"FAILURE", or "RCU_HOTPLUG" indication to be printk()ed.  The first
> > +two are self-explanatory, while the last indicates that while there
> > +were no locking failures, CPU-hotplug problems were detected.
> > +
> > +Also see: Documentation/RCU/torture.txt

diff --git a/Documentation/locking/locktorture.txt b/Documentation/locking/locktorture.txt
index f7d99e2a5799..be715015e0f7 100644
--- a/Documentation/locking/locktorture.txt
+++ b/Documentation/locking/locktorture.txt
@@ -5,14 +5,14 @@ CONFIG_LOCK_TORTURE_TEST
 The CONFIG LOCK_TORTURE_TEST config option provides a kernel module
 that runs torture tests on core kernel locking primitives. The kernel
 module, 'locktorture', may be built after the fact on the running
-kernel to be tested, if desired. The tests periodically outputs status
+kernel to be tested, if desired. The tests periodically output status
 messages via printk(), which can be examined via the dmesg (perhaps
 grepping for "torture").  The test is started when the module is loaded,
 and stops when the module is unloaded. This program is based on how RCU
 is tortured, via rcutorture.
 
 This torture test consists of creating a number of kernel threads which
-acquires the lock and holds it for specific amount of time, thus simulating
+acquire the lock and hold it for specific amount of time, thus simulating
 different critical region behaviors. The amount of contention on the lock
 can be simulated by either enlarging this critical region hold time and/or
 creating more kthreads.
@@ -26,7 +26,7 @@ This module has the following parameters:
 	    ** Locktorture-specific **
 
 nwriters_stress   Number of kernel threads that will stress exclusive lock
-		  ownership (writers). The default value is twice the amount
+		  ownership (writers). The default value is twice the number
 		  of online CPUs.
 
 nreaders_stress   Number of kernel threads that will stress shared lock
@@ -49,8 +49,10 @@ torture_type	  Type of lock to torture. By default, only spinlocks will
 
 		     o "rwsem_lock": read/write down() and up() semaphore pairs.
 
-torture_runnable  Start locktorture at module init. By default it will begin
-		  once the module is loaded.
+torture_runnable  Start locktorture at boot time in the case where the
+		  module is built into the kernel, otherwise wait for
+		  torture_runnable to be set via sysfs before starting.
+		  By default it will begin once the module is loaded.
 
 
 	    ** Torture-framework (RCU + locking) **
@@ -60,12 +62,12 @@ shutdown_secs	  The number of seconds to run the test before terminating
 		  zero, which disables test termination and system shutdown.
 		  This capability is useful for automated testing.
 
-onoff_holdoff	  The number of seconds between each attempt to execute a
-		  randomly selected CPU-hotplug operation.  Defaults to
-		  zero, which disables CPU hotplugging.  In HOTPLUG_CPU=n
-		  kernels, locktorture will silently refuse to do any
-		  CPU-hotplug operations regardless of what value is
-		  specified for onoff_interval.
+onoff_interval	  The number of seconds between each attempt to execute a
+		  randomly selected CPU-hotplug operation.  Defaults
+		  to zero, which disables CPU hotplugging.  In
+		  CONFIG_HOTPLUG_CPU=n kernels, locktorture will silently
+		  refuse to do any CPU-hotplug operations regardless of
+		  what value is specified for onoff_interval.
 
 onoff_holdoff	  The number of seconds to wait until starting CPU-hotplug
 		  operations.  This would normally only be used when
@@ -91,7 +93,7 @@ shuffle_interval  The number of seconds to keep the test threads affinitied
 		  to a particular subset of the CPUs, defaults to 3 seconds.
 		  Used in conjunction with test_no_idle_hz.
 
-verbose		  Enable verbose debugging printking, via printk(). Enabled
+verbose		  Enable verbose debugging printing, via printk(). Enabled
 		  by default. This extra information is mostly related to
 		  high-level errors and reports from the main 'torture'
 		  framework.
@@ -115,7 +117,7 @@ spin_lock-torture: Writes:  Total: 93746064  Max/Min: 0/0   Fail: 0
 
 (E): true/false values if there were errors acquiring the lock. This should
      -only- be positive if there is a bug in the locking primitive's
-     implementation. Otherwise a lock should never fail (ie: spin_lock()).
+     implementation. Otherwise a lock should never fail (i.e., spin_lock()).
      Of course, the same applies for (C), above. A dummy example of this is
      the "lock_busted" type.
 


^ permalink raw reply related	[flat|nested] 27+ messages in thread

end of thread, other threads:[~2014-09-16 19:35 UTC | newest]

Thread overview: 27+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-09-12  3:40 [PATCH -tip 0/9] locktorture: Improve and expand lock torturing Davidlohr Bueso
2014-09-12  3:40 ` [PATCH 1/9] locktorture: Rename locktorture_runnable parameter Davidlohr Bueso
2014-09-12 17:40   ` Paul E. McKenney
2014-09-12 17:51     ` Paul E. McKenney
2014-09-12  3:40 ` [PATCH 2/9] locktorture: Add documentation Davidlohr Bueso
2014-09-12  5:28   ` Davidlohr Bueso
2014-09-13  1:10   ` Randy Dunlap
2014-09-16 19:35     ` Paul E. McKenney
2014-09-12  3:40 ` [PATCH 3/9] locktorture: Support mutexes Davidlohr Bueso
2014-09-12 18:02   ` Paul E. McKenney
2014-09-12 18:56     ` Davidlohr Bueso
2014-09-12 19:12       ` Paul E. McKenney
2014-09-13  2:13         ` Davidlohr Bueso
2014-09-12  3:40 ` [PATCH 4/9] locktorture: Teach about lock debugging Davidlohr Bueso
2014-09-12  3:40 ` [PATCH 5/9] locktorture: Make statistics generic Davidlohr Bueso
2014-09-12  3:40 ` [PATCH 6/9] torture: Address race in module cleanup Davidlohr Bueso
2014-09-12 18:04   ` Paul E. McKenney
2014-09-12 18:28     ` Davidlohr Bueso
2014-09-12 19:03       ` Paul E. McKenney
2014-09-12  4:40 ` [PATCH 7/9] locktorture: Add infrastructure for torturing read locks Davidlohr Bueso
2014-09-12 16:06   ` Paul E. McKenney
2014-09-12 18:02     ` Davidlohr Bueso
2014-09-12  4:41 ` [PATCH 8/9] locktorture: Support rwsems Davidlohr Bueso
2014-09-12  7:37   ` Peter Zijlstra
2014-09-12 14:49     ` Davidlohr Bueso
2014-09-12 18:07   ` Paul E. McKenney
2014-09-12  4:42 ` [PATCH 9/9] locktorture: Introduce torture context Davidlohr Bueso
