* [PATCH v2 00/17] Support for read-write lock deadlock detection
@ 2019-05-16  7:59 Yuyang Du
  2019-05-16  7:59 ` [PATCH v2 01/17] locking/lockdep: Add lock type enum to explicitly specify read or write locks Yuyang Du
                   ` (16 more replies)
  0 siblings, 17 replies; 21+ messages in thread
From: Yuyang Du @ 2019-05-16  7:59 UTC (permalink / raw)
  To: peterz, will.deacon, mingo
  Cc: bvanassche, ming.lei, frederic, tglx, boqun.feng, paulmck,
	linux-kernel, Yuyang Du

Hi Peter and Ingo,

Historically, read-write locks (recursive-read locks included) have not
been well supported in lockdep. This patchset attempts to solve the
problem soundly and completely.

The bulk of the algorithm is in patch #10, which is actually not complex at
all. Hopefully, it simply works.

Now that read-write locks are supported, all 262 selftest cases pass,
though I had to flip some cases which, I think, were wrong.

P.S. To Boqun, I haven't had time to read your patchset, but I did
carefully read your design doc and learnt a lot from it. It is helpful.
Please give this patchset at least a look.

Change from v1:

 - As Peter pointed out, the assumption that a lock class can never be both
   read and recursive read does not hold. The value of LOCK_TYPE_RECURSIVE is
   changed from 2 to 3, which maintains that if type1 and type2 are exclusive
   then type1 & type2 == 0. And since it is legal for a task to take a read
   lock and then a recursive read lock, we allow this in the deadlock check.

 - Some typo fixes and rewording in comments or changelog.


Thanks,
Yuyang

--

Yuyang Du (17):
  locking/lockdep: Add lock type enum to explicitly specify read or
    write locks
  locking/lockdep: Add read-write type for dependency
  locking/lockdep: Add helper functions to operate on the searched path
  locking/lockdep: Update direct dependency's read-write type if it
    exists
  locking/lockdep: Rename deadlock check functions
  locking/lockdep: Adjust BFS algorithm to support multiple matches
  locking/lockdep: Introduce mark_lock_unaccessed()
  locking/lockdep: Introduce chain_hlocks_type for held lock's
    read-write type
  locking/lockdep: Hash held lock's read-write type into chain key
  locking/lockdep: Support read-write lock's deadlock detection
  locking/lockdep: Adjust lockdep selftest cases
  locking/lockdep: Remove useless lock type assignment
  locking/lockdep: Add nest lock type
  locking/lockdep: Support recursive read locks
  locking/lockdep: Adjust selftest case for recursive read lock
  locking/lockdep: Add more lockdep selftest cases
  locking/lockdep: Remove irq-safe to irq-unsafe read check

 include/linux/lockdep.h            |   57 +-
 include/linux/rcupdate.h           |    2 +-
 kernel/locking/lockdep.c           |  464 +++++++++++----
 kernel/locking/lockdep_internals.h |    4 +
 lib/locking-selftest.c             | 1099 +++++++++++++++++++++++++++++++++++-
 5 files changed, 1489 insertions(+), 137 deletions(-)

-- 
1.8.3.1



* [PATCH v2 01/17] locking/lockdep: Add lock type enum to explicitly specify read or write locks
  2019-05-16  7:59 [PATCH v2 00/17] Support for read-write lock deadlock detection Yuyang Du
@ 2019-05-16  7:59 ` Yuyang Du
  2019-05-16  8:00 ` [PATCH v2 02/17] locking/lockdep: Add read-write type for dependency Yuyang Du
                   ` (15 subsequent siblings)
  16 siblings, 0 replies; 21+ messages in thread
From: Yuyang Du @ 2019-05-16  7:59 UTC (permalink / raw)
  To: peterz, will.deacon, mingo
  Cc: bvanassche, ming.lei, frederic, tglx, boqun.feng, paulmck,
	linux-kernel, Yuyang Du

Add an enum to formalize lock types, as those type values now matter.

In order to ensure that

  LOCK_TYPE_READ & LOCK_TYPE_RECURSIVE != 0

the recursive-read type is set to 3, as opposed to the current 2.
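
Below is a minimal, illustrative sketch (not part of this patch) showing
the property the enum values are chosen for; the lock_types_exclusive()
helper is hypothetical and only named here for illustration:

	enum lock_type {
		LOCK_TYPE_EXCLUSIVE	= 0,
		LOCK_TYPE_WRITE		= 0,
		LOCK_TYPE_READ		= 1,
		LOCK_TYPE_UNUSED	= 2,
		LOCK_TYPE_RECURSIVE	= 3,
	};

	/* Hypothetical helper: two types can deadlock iff their AND is 0. */
	static inline int lock_types_exclusive(int t1, int t2)
	{
		return (t1 & t2) == 0;
	}

	/*
	 * lock_types_exclusive(LOCK_TYPE_WRITE, LOCK_TYPE_WRITE)     -> 1
	 * lock_types_exclusive(LOCK_TYPE_WRITE, LOCK_TYPE_READ)      -> 1
	 * lock_types_exclusive(LOCK_TYPE_READ,  LOCK_TYPE_READ)      -> 0
	 * lock_types_exclusive(LOCK_TYPE_READ,  LOCK_TYPE_RECURSIVE) -> 0
	 */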

Signed-off-by: Yuyang Du <duyuyang@gmail.com>
---
 include/linux/lockdep.h  | 45 +++++++++++++++++++++++++++++++++++++++------
 include/linux/rcupdate.h |  2 +-
 kernel/locking/lockdep.c | 23 +++++++++++++----------
 3 files changed, 53 insertions(+), 17 deletions(-)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 7c2fefa..1009c47 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -339,10 +339,38 @@ static inline int lockdep_match_key(struct lockdep_map *lock,
  *
  * Values for "read":
  *
- *   0: exclusive (write) acquire
- *   1: read-acquire (no recursion allowed)
- *   2: read-acquire with same-instance recursion allowed
+ *   LOCK_TYPE_EXCLUSIVE (LOCK_TYPE_WRITE): exclusive (write) acquire
+ *   LOCK_TYPE_READ: read-acquire (no recursion allowed)
+ *   LOCK_TYPE_RECURSIVE: read-acquire with same-instance recursion allowed
  *
+ * The same lock between tasks has the following exclusiveness (the read
+ * type includes recursive read):
+ *
+ *   read and read    (no )
+ *   read and write   (yes)
+ *   write and write  (yes)
+ *
+ * The enum values for the types are arranged such that:
+ *
+ *   if type1 and type2 are exclusive then type1 & type2 == 0.
+ *
+ * We have a tricky situation concerning read vs. recursive-read: there
+ * can be lock classes that are acquired as both read and recursive-read
+ * locks, for example:
+ *
+ *   kernel/qrwlock.c:queued_read_lock_slowpath()
+ *
+ * The values of lock types are defined as follows:
+ */
+enum lock_type {
+	LOCK_TYPE_EXCLUSIVE	= 0,
+	LOCK_TYPE_WRITE		= 0,
+	LOCK_TYPE_READ,
+	LOCK_TYPE_UNUSED,
+	LOCK_TYPE_RECURSIVE,
+};
+
+/*
  * Values for check:
  *
  *   0: simple checks (freeing, held-at-exit-time, etc.)
@@ -588,9 +616,14 @@ static inline void print_irqtrace_events(struct task_struct *curr)
  * on the per lock-class debug mode:
  */
 
-#define lock_acquire_exclusive(l, s, t, n, i)		lock_acquire(l, s, t, 0, 1, n, i)
-#define lock_acquire_shared(l, s, t, n, i)		lock_acquire(l, s, t, 1, 1, n, i)
-#define lock_acquire_shared_recursive(l, s, t, n, i)	lock_acquire(l, s, t, 2, 1, n, i)
+#define lock_acquire_exclusive(l, s, t, n, i)			\
+	lock_acquire(l, s, t, LOCK_TYPE_EXCLUSIVE, 1, n, i)
+
+#define lock_acquire_shared(l, s, t, n, i)			\
+	lock_acquire(l, s, t, LOCK_TYPE_READ, 1, n, i)
+
+#define lock_acquire_shared_recursive(l, s, t, n, i)		\
+	lock_acquire(l, s, t, LOCK_TYPE_RECURSIVE, 1, n, i)
 
 #define spin_acquire(l, s, t, i)		lock_acquire_exclusive(l, s, t, NULL, i)
 #define spin_acquire_nest(l, s, t, n, i)	lock_acquire_exclusive(l, s, t, n, i)
diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index 922bb68..22b41b1 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -207,7 +207,7 @@ static inline void destroy_rcu_head_on_stack(struct rcu_head *head) { }
 
 static inline void rcu_lock_acquire(struct lockdep_map *map)
 {
-	lock_acquire(map, 0, 0, 2, 0, NULL, _THIS_IP_);
+	lock_acquire(map, 0, 0, LOCK_TYPE_RECURSIVE, 0, NULL, _THIS_IP_);
 }
 
 static inline void rcu_lock_release(struct lockdep_map *map)
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 7275d6c..fb6be63 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -2297,7 +2297,10 @@ static inline void inc_chains(void)
  * (Note that this has to be done separately, because the graph cannot
  * detect such classes of deadlocks.)
  *
- * Returns: 0 on deadlock detected, 1 on OK, 2 on recursive read
+ * Returns:
+ *  0: on deadlock detected;
+ *  1: on OK;
+ *  3: LOCK_TYPE_RECURSIVE on recursive read
  */
 static int
 check_deadlock(struct task_struct *curr, struct held_lock *next)
@@ -2319,15 +2322,15 @@ static inline void inc_chains(void)
 		 * Allow read-after-read recursion of the same
 		 * lock class (i.e. read_lock(lock)+read_lock(lock)):
 		 */
-		if ((next->read == 2) && prev->read)
-			return 2;
+		if ((next->read == LOCK_TYPE_RECURSIVE) && prev->read)
+			return LOCK_TYPE_RECURSIVE;
 
 		/*
 		 * We're holding the nest_lock, which serializes this lock's
 		 * nesting behaviour.
 		 */
 		if (nest)
-			return 2;
+			return LOCK_TYPE_RECURSIVE;
 
 		print_deadlock_bug(curr, prev, next);
 		return 0;
@@ -2407,7 +2410,7 @@ static inline void inc_chains(void)
 	 * write-lock never takes any other locks, then the reads are
 	 * equivalent to a NOP.
 	 */
-	if (next->read == 2 || prev->read == 2)
+	if (next->read == LOCK_TYPE_RECURSIVE || prev->read == LOCK_TYPE_RECURSIVE)
 		return 1;
 	/*
 	 * Is the <prev> -> <next> dependency already present?
@@ -2493,7 +2496,7 @@ static inline void inc_chains(void)
 		 * Only non-recursive-read entries get new dependencies
 		 * added:
 		 */
-		if (hlock->read != 2 && hlock->check) {
+		if (hlock->read != LOCK_TYPE_RECURSIVE && hlock->check) {
 			int ret = check_prev_add(curr, hlock, next, distance,
 						 &trace);
 			if (!ret)
@@ -2877,13 +2880,13 @@ static int validate_chain(struct task_struct *curr,
 		 * building dependencies (just like we jump over
 		 * trylock entries):
 		 */
-		if (ret == 2)
-			hlock->read = 2;
+		if (ret == LOCK_TYPE_RECURSIVE)
+			hlock->read = LOCK_TYPE_RECURSIVE;
 		/*
 		 * Add dependency only if this lock is not the head
 		 * of the chain, and if it's not a secondary read-lock:
 		 */
-		if (!chain_head && ret != 2) {
+		if (!chain_head && ret != LOCK_TYPE_RECURSIVE) {
 			if (!check_prevs_add(curr, hlock))
 				return 0;
 		}
@@ -4105,7 +4108,7 @@ static int __lock_downgrade(struct lockdep_map *lock, unsigned long ip)
 	curr->curr_chain_key = hlock->prev_chain_key;
 
 	WARN(hlock->read, "downgrading a read lock");
-	hlock->read = 1;
+	hlock->read = LOCK_TYPE_READ;
 	hlock->acquire_ip = ip;
 
 	if (reacquire_held_locks(curr, depth, i))
-- 
1.8.3.1



* [PATCH v2 02/17] locking/lockdep: Add read-write type for dependency
  2019-05-16  7:59 [PATCH v2 00/17] Support for read-write lock deadlock detection Yuyang Du
  2019-05-16  7:59 ` [PATCH v2 01/17] locking/lockdep: Add lock type enum to explicitly specify read or write locks Yuyang Du
@ 2019-05-16  8:00 ` Yuyang Du
  2019-05-29 11:37   ` Boqun Feng
  2019-05-16  8:00 ` [PATCH v2 03/17] locking/lockdep: Add helper functions to operate on the searched path Yuyang Du
                   ` (14 subsequent siblings)
  16 siblings, 1 reply; 21+ messages in thread
From: Yuyang Du @ 2019-05-16  8:00 UTC (permalink / raw)
  To: peterz, will.deacon, mingo
  Cc: bvanassche, ming.lei, frederic, tglx, boqun.feng, paulmck,
	linux-kernel, Yuyang Du

A direct dependency needs to keep track of its two locks' read-write
types. A union field is added to the lock_list struct so that the types
are stored there like this:

	lock_type[1] (u16), lock_type[0] (u16)

			or:

	dep_type (int)

where value:

 0: exclusive / write
 1: read
 3: recursive read

Note that the (int) dep_type value may vary with architectural
endianness, so use helper functions to operate on these types.
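
As an aside, here is a small userspace sketch (not part of this patch) of
why the raw dep_type value is endian-dependent while the per-end values
are not; only a write-write dependency (both types 0) is guaranteed to
have dep_type == 0 on every architecture:

	#include <stdint.h>
	#include <stdio.h>

	union dep {
		int		dep_type;
		uint16_t	lock_type[2];
	};

	int main(void)
	{
		/* lock_type[0] = read (1), lock_type[1] = write (0) */
		union dep d = { .lock_type = { 1, 0 } };

		/* Prints 1 on little-endian, 65536 on big-endian. */
		printf("%d\n", d.dep_type);

		/* The per-end values are the same everywhere. */
		printf("%u %u\n", d.lock_type[0], d.lock_type[1]);
		return 0;
	}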

Signed-off-by: Yuyang Du <duyuyang@gmail.com>
---
 include/linux/lockdep.h            | 12 ++++++++++++
 kernel/locking/lockdep.c           | 34 +++++++++++++++++++++++++++++++---
 kernel/locking/lockdep_internals.h |  3 +++
 3 files changed, 46 insertions(+), 3 deletions(-)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 1009c47..bc09c85 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -195,6 +195,18 @@ struct lock_list {
 	struct lock_class		*links_to;
 	struct lock_trace		trace;
 	int				distance;
+	/*
+	 * This field keeps track of the read-write type of this dependency.
+	 *
+	 * With L1 -> L2:
+	 *
+	 * lock_type[0] stores the type of L1, while lock_type[1] stores the
+	 * type of L2.
+	 */
+	union {
+		int	dep_type;
+		u16	lock_type[2];
+	};
 
 	/*
 	 * The parent field is used to implement breadth-first search, and the
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index fb6be63..e7eedbf 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -1225,7 +1225,7 @@ static struct lock_list *alloc_list_entry(void)
 static int add_lock_to_list(struct lock_class *this,
 			    struct lock_class *links_to, struct list_head *head,
 			    unsigned long ip, int distance,
-			    struct lock_trace *trace)
+			    struct lock_trace *trace, int dep_type)
 {
 	struct lock_list *entry;
 	/*
@@ -1240,6 +1240,8 @@ static int add_lock_to_list(struct lock_class *this,
 	entry->links_to = links_to;
 	entry->distance = distance;
 	entry->trace = *trace;
+	entry->dep_type = dep_type;
+
 	/*
 	 * Both allocation and removal are done under the graph lock; but
 	 * iteration is under RCU-sched; see look_up_lock_class() and
@@ -1677,6 +1679,30 @@ unsigned long lockdep_count_backward_deps(struct lock_class *class)
 	return ret;
 }
 
+static inline int get_dep_type(struct held_lock *lock1, struct held_lock *lock2)
+{
+	/*
+	 * With dependency lock1 -> lock2:
+	 *
+	 * lock_type[0] is lock1, while lock_type[1] is lock2.
+	 *
+	 * Avoid architectural endianness differences when composing dep_type.
+	 */
+	u16 type[2] = { lock1->read, lock2->read };
+
+	return *(int *)type;
+}
+
+static inline int get_lock_type1(struct lock_list *lock)
+{
+	return lock->lock_type[0];
+}
+
+static inline int get_lock_type2(struct lock_list *lock)
+{
+	return lock->lock_type[1];
+}
+
 /*
  * Check that the dependency graph starting at <src> can lead to
  * <target> or not. Print an error and return 0 if it does.
@@ -2446,14 +2472,16 @@ static inline void inc_chains(void)
 	 */
 	ret = add_lock_to_list(hlock_class(next), hlock_class(prev),
 			       &hlock_class(prev)->locks_after,
-			       next->acquire_ip, distance, trace);
+			       next->acquire_ip, distance, trace,
+			       get_dep_type(prev, next));
 
 	if (!ret)
 		return 0;
 
 	ret = add_lock_to_list(hlock_class(prev), hlock_class(next),
 			       &hlock_class(next)->locks_before,
-			       next->acquire_ip, distance, trace);
+			       next->acquire_ip, distance, trace,
+			       get_dep_type(next, prev));
 	if (!ret)
 		return 0;
 
diff --git a/kernel/locking/lockdep_internals.h b/kernel/locking/lockdep_internals.h
index 150ec3f..c287bcb 100644
--- a/kernel/locking/lockdep_internals.h
+++ b/kernel/locking/lockdep_internals.h
@@ -26,6 +26,9 @@ enum lock_usage_bit {
 #define LOCK_USAGE_DIR_MASK  2
 #define LOCK_USAGE_STATE_MASK (~(LOCK_USAGE_READ_MASK | LOCK_USAGE_DIR_MASK))
 
+#define LOCK_TYPE_BITS	16
+#define LOCK_TYPE_MASK	0xFFFF
+
 /*
  * Usage-state bitmasks:
  */
-- 
1.8.3.1



* [PATCH v2 03/17] locking/lockdep: Add helper functions to operate on the searched path
  2019-05-16  7:59 [PATCH v2 00/17] Support for read-write lock deadlock detection Yuyang Du
  2019-05-16  7:59 ` [PATCH v2 01/17] locking/lockdep: Add lock type enum to explicitly specify read or write locks Yuyang Du
  2019-05-16  8:00 ` [PATCH v2 02/17] locking/lockdep: Add read-write type for dependency Yuyang Du
@ 2019-05-16  8:00 ` Yuyang Du
  2019-05-16  8:00 ` [PATCH v2 04/17] locking/lockdep: Update direct dependency's read-write type if it exists Yuyang Du
                   ` (13 subsequent siblings)
  16 siblings, 0 replies; 21+ messages in thread
From: Yuyang Du @ 2019-05-16  8:00 UTC (permalink / raw)
  To: peterz, will.deacon, mingo
  Cc: bvanassche, ming.lei, frederic, tglx, boqun.feng, paulmck,
	linux-kernel, Yuyang Du

 - find_lock_in_path() tries to find whether a lock class is in the path.

 - find_next_dep_in_path() returns the next dependency after a
   given dependency in the path.
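
A usage sketch (illustrative only, with simplified structures): given a
BFS result path src -> d1 -> d2, where d2 is the target entry and each
entry links back to its predecessor via ->parent, the helpers behave as
noted in the comments below:

	#include <stddef.h>

	struct lock_class { int dummy; };

	struct lock_list {
		struct lock_class *class;
		struct lock_list  *parent;	/* filled in by the BFS */
	};

	int main(void)
	{
		struct lock_class A, B, C;
		struct lock_list src = { .class = &A, .parent = NULL };
		struct lock_list d1  = { .class = &B, .parent = &src };
		struct lock_list d2  = { .class = &C, .parent = &d1  };

		/*
		 * find_lock_in_path(&B, &d2)       -> &d1  (B is on the path)
		 * find_lock_in_path(&C, &d2)       -> NULL (the target entry
		 *                                          itself is skipped)
		 * find_next_dep_in_path(&src, &d2) -> &d1  (first dependency
		 *                                          after the source)
		 */
		(void)d2;
		return 0;
	}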

Signed-off-by: Yuyang Du <duyuyang@gmail.com>
---
 kernel/locking/lockdep.c | 31 +++++++++++++++++++++++++++++++
 1 file changed, 31 insertions(+)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index e7eedbf..595dc94 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -1365,6 +1365,37 @@ static inline int get_lock_depth(struct lock_list *child)
 }
 
 /*
+ * Return the dependency if @lock is in the path, otherwise NULL.
+ */
+static inline struct lock_list *
+find_lock_in_path(struct lock_class *lock, struct lock_list *target)
+{
+	while ((target = get_lock_parent(target)))
+		if (target->class == lock)
+			return target;
+
+	return NULL;
+}
+
+/*
+ * Walk back to the next dependency after @source from @target. Note
+ * that @source must be in the path, and @source cannot be the same as
+ * @target, otherwise this is going to fail and return NULL.
+ */
+static inline struct lock_list *
+find_next_dep_in_path(struct lock_list *source, struct lock_list *target)
+{
+	while (get_lock_parent(target) != source) {
+		target = get_lock_parent(target);
+
+		if (!target)
+			break;
+	}
+
+	return target;
+}
+
+/*
  * Return the forward or backward dependency list.
  *
  * @lock:   the lock_list to get its class's dependency list
-- 
1.8.3.1



* [PATCH v2 04/17] locking/lockdep: Update direct dependency's read-write type if it exists
  2019-05-16  7:59 [PATCH v2 00/17] Support for read-write lock deadlock detection Yuyang Du
                   ` (2 preceding siblings ...)
  2019-05-16  8:00 ` [PATCH v2 03/17] locking/lockdep: Add helper functions to operate on the searched path Yuyang Du
@ 2019-05-16  8:00 ` Yuyang Du
  2019-05-16  8:00 ` [PATCH v2 05/17] locking/lockdep: Rename deadlock check functions Yuyang Du
                   ` (12 subsequent siblings)
  16 siblings, 0 replies; 21+ messages in thread
From: Yuyang Du @ 2019-05-16  8:00 UTC (permalink / raw)
  To: peterz, will.deacon, mingo
  Cc: bvanassche, ming.lei, frederic, tglx, boqun.feng, paulmck,
	linux-kernel, Yuyang Du

When adding a dependency, if the dependency exists - be it direct or
indirect - the dependency's read-write type may be updated.

We could keep all different-typed dependencies, but for simplicity we just
"upgrade" lock types where possible, making them more exclusive (in the
order: recursive read -> read -> write).

For an indirect dependency, we do the redundancy check only when it has no
read locks (i.e., it is a write-lock ended dependency); this makes the
redundancy check less complicated, which matters only with
CONFIG_LOCKDEP_SMALL.
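
A minimal sketch of the "upgrade" rule (illustrative; the helper name is
hypothetical): since a smaller type value means more exclusive
(write = 0 < read = 1 < recursive read = 3), keeping the minimum of the
existing and the newly observed type upgrades a dependency end towards
more exclusive:

	/* Hypothetical helper, not part of this patch. */
	static inline unsigned short upgrade_lock_type(unsigned short cur,
						       unsigned short seen)
	{
		return seen < cur ? seen : cur;
	}

	/*
	 * E.g. an existing read -> read dependency re-observed as
	 * write -> read keeps type2 == read (1) and upgrades type1 to
	 * write (0).
	 */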

Signed-off-by: Yuyang Du <duyuyang@gmail.com>
---
 kernel/locking/lockdep.c | 61 ++++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 57 insertions(+), 4 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 595dc94..1f1cb21 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -1734,6 +1734,16 @@ static inline int get_lock_type2(struct lock_list *lock)
 	return lock->lock_type[1];
 }
 
+static inline void set_lock_type1(struct lock_list *lock, int read)
+{
+	lock->lock_type[0] = (u16)read;
+}
+
+static inline void set_lock_type2(struct lock_list *lock, int read)
+{
+	lock->lock_type[1] = (u16)read;
+}
+
 /*
  * Check that the dependency graph starting at <src> can lead to
  * <target> or not. Print an error and return 0 if it does.
@@ -1814,6 +1824,32 @@ static inline int get_lock_type2(struct lock_list *lock)
 	ret = check_path(hlock_class(target), &src_entry, &target_entry);
 
 	if (!ret) {
+		struct lock_list *parent;
+
+		/*
+		 * Does this indirect dependency have the same type as the
+		 * direct dependency?
+		 *
+		 * Actually, we keep the more exclusive lock type
+		 * between the indirect and direct dependencies by
+		 * "upgrading" the indirect dependency's lock type if
+		 * needed, and then consider this redundant.
+		 */
+
+		/* Target end lock type: */
+		if (target->read < get_lock_type2(target_entry))
+			set_lock_type2(target_entry, target->read);
+
+		/* Source end lock type: */
+		parent = find_next_dep_in_path(&src_entry, target_entry);
+		if (!parent) {
+			DEBUG_LOCKS_WARN_ON(1);
+			return 0;
+		}
+
+		if (src->read < get_lock_type1(parent))
+			set_lock_type1(parent, src->read);
+
 		debug_atomic_inc(nr_redundant);
 		ret = 2;
 	} else if (ret < 0)
@@ -2479,6 +2515,17 @@ static inline void inc_chains(void)
 	 */
 	list_for_each_entry(entry, &hlock_class(prev)->locks_after, entry) {
 		if (entry->class == hlock_class(next)) {
+			/*
+			 * For direct dependency, smaller type value
+			 * generally means more exclusive; we keep the
+			 * more exclusive ones; in other words, we
+			 * "upgrade" the dependency if we can.
+			 */
+			if (prev->read < get_lock_type1(entry))
+				set_lock_type1(entry, prev->read);
+			if (next->read < get_lock_type2(entry))
+				set_lock_type2(entry, next->read);
+
 			if (distance == 1)
 				entry->distance = 1;
 			return 1;
@@ -2487,11 +2534,17 @@ static inline void inc_chains(void)
 
 #ifdef CONFIG_LOCKDEP_SMALL
 	/*
-	 * Is the <prev> -> <next> link redundant?
+	 * Do the redundancy check only when this dependency has no
+	 * read locks (i.e., it is a write-write dependency).
 	 */
-	ret = check_redundant(prev, next);
-	if (ret != 1)
-		return ret;
+	if (!get_dep_type(prev, next)) {
+		/*
+		 * Is the <prev> -> <next> link redundant?
+		 */
+		ret = check_redundant(prev, next);
+		if (ret != 1)
+			return ret;
+	}
 #endif
 
 	if (!trace->nr_entries && !save_trace(trace))
-- 
1.8.3.1



* [PATCH v2 05/17] locking/lockdep: Rename deadlock check functions
  2019-05-16  7:59 [PATCH v2 00/17] Support for read-write lock deadlock detection Yuyang Du
                   ` (3 preceding siblings ...)
  2019-05-16  8:00 ` [PATCH v2 04/17] locking/lockdep: Update direct dependency's read-write type if it exists Yuyang Du
@ 2019-05-16  8:00 ` Yuyang Du
  2019-05-16  8:00 ` [PATCH v2 06/17] locking/lockdep: Adjust BFS algorithm to support multiple matches Yuyang Du
                   ` (11 subsequent siblings)
  16 siblings, 0 replies; 21+ messages in thread
From: Yuyang Du @ 2019-05-16  8:00 UTC (permalink / raw)
  To: peterz, will.deacon, mingo
  Cc: bvanassche, ming.lei, frederic, tglx, boqun.feng, paulmck,
	linux-kernel, Yuyang Du

Deadlock checks are performed at two places:

 - Within current's held lock stack, check for lock recursion deadlock.
 - Within dependency graph, check for lock inversion deadlock.

Rename the two relevant functions for later use. Also, with read locks, a
dependency circle in the graph is no longer a sufficient condition for a
lock inversion deadlock, so the name check_noncircular() is not entirely
accurate.

No functional change.

Signed-off-by: Yuyang Du <duyuyang@gmail.com>
---
 kernel/locking/lockdep.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 1f1cb21..f4982ad 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -1771,8 +1771,8 @@ static inline void set_lock_type2(struct lock_list *lock, int read)
  * Print an error and return 0 if it does.
  */
 static noinline int
-check_noncircular(struct held_lock *src, struct held_lock *target,
-		  struct lock_trace *trace)
+check_deadlock_graph(struct held_lock *src, struct held_lock *target,
+		     struct lock_trace *trace)
 {
 	int ret;
 	struct lock_list *uninitialized_var(target_entry);
@@ -2385,7 +2385,8 @@ static inline void inc_chains(void)
 }
 
 /*
- * Check whether we are holding such a class already.
+ * Check whether we are holding such a class already in current
+ * context's held lock stack.
  *
  * (Note that this has to be done separately, because the graph cannot
  * detect such classes of deadlocks.)
@@ -2396,7 +2397,7 @@ static inline void inc_chains(void)
  *  3: LOCK_TYPE_RECURSIVE on recursive read
  */
 static int
-check_deadlock(struct task_struct *curr, struct held_lock *next)
+check_deadlock_current(struct task_struct *curr, struct held_lock *next)
 {
 	struct held_lock *prev;
 	struct held_lock *nest = NULL;
@@ -2480,7 +2481,7 @@ static inline void inc_chains(void)
 
 	/*
 	 * Prove that the new <prev> -> <next> dependency would not
-	 * create a circular dependency in the graph. (We do this by
+	 * create a deadlock scenario in the graph. (We do this by
 	 * a breadth-first search into the graph starting at <next>,
 	 * and check whether we can reach <prev>.)
 	 *
@@ -2488,7 +2489,7 @@ static inline void inc_chains(void)
 	 * MAX_CIRCULAR_QUEUE_SIZE) which keeps track of a breadth of nodes
 	 * in the graph whose neighbours are to be checked.
 	 */
-	ret = check_noncircular(next, prev, trace);
+	ret = check_deadlock_graph(next, prev, trace);
 	if (unlikely(ret <= 0))
 		return 0;
 
@@ -2983,7 +2984,7 @@ static int validate_chain(struct task_struct *curr,
 		 * The simple case: does the current hold the same lock
 		 * already?
 		 */
-		int ret = check_deadlock(curr, hlock);
+		int ret = check_deadlock_current(curr, hlock);
 
 		if (!ret)
 			return 0;
-- 
1.8.3.1



* [PATCH v2 06/17] locking/lockdep: Adjust BFS algorithm to support multiple matches
  2019-05-16  7:59 [PATCH v2 00/17] Support for read-write lock deadlock detection Yuyang Du
                   ` (4 preceding siblings ...)
  2019-05-16  8:00 ` [PATCH v2 05/17] locking/lockdep: Rename deadlock check functions Yuyang Du
@ 2019-05-16  8:00 ` Yuyang Du
  2019-05-16  8:00 ` [PATCH v2 07/17] locking/lockdep: Introduce mark_lock_unaccessed() Yuyang Du
                   ` (10 subsequent siblings)
  16 siblings, 0 replies; 21+ messages in thread
From: Yuyang Du @ 2019-05-16  8:00 UTC (permalink / raw)
  To: peterz, will.deacon, mingo
  Cc: bvanassche, ming.lei, frederic, tglx, boqun.feng, paulmck,
	linux-kernel, Yuyang Du

With read locks, a circle is not a sufficient condition for a deadlock. As
a result, we need to be able to continue the search after a match when that
match turns out not to be a deadlock. __bfs() is adjusted to that end: the
basic idea is to enqueue a node's children before matching the node itself.
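
To illustrate the idea, here is a self-contained userspace sketch of such
a resumable search (not the kernel implementation; the graph and the match
function are made up):

	#include <stdbool.h>
	#include <stdio.h>

	#define NR_NODES 5

	/* Tiny adjacency matrix standing in for the dependency graph. */
	static const bool edge[NR_NODES][NR_NODES] = {
		[0] = { [1] = true, [2] = true },
		[1] = { [3] = true },
		[2] = { [3] = true },
		[3] = { [4] = true },
	};

	static int queue[NR_NODES], head, tail;
	static bool visited[NR_NODES];

	/*
	 * Children are enqueued before the node is tested, so after a
	 * match the caller can call again with init == false and the
	 * search resumes exactly where it left off.
	 */
	static int bfs(int src, bool (*match)(int), bool init)
	{
		if (init) {
			head = tail = 0;
			for (int i = 0; i < NR_NODES; i++)
				visited[i] = false;
			visited[src] = true;
			queue[tail++] = src;
		}

		while (head < tail) {
			int node = queue[head++];

			for (int next = 0; next < NR_NODES; next++) {
				if (edge[node][next] && !visited[next]) {
					visited[next] = true;
					queue[tail++] = next;	/* enqueue first ... */
				}
			}
			if (match(node))		/* ... then match */
				return node;
		}
		return -1;
	}

	static bool is_odd(int n) { return n & 1; }

	int main(void)
	{
		printf("first match: %d\n", bfs(0, is_odd, true));	/* 1 */
		printf("next match:  %d\n", bfs(0, is_odd, false));	/* 3 */
		return 0;
	}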

No functional change.

Signed-off-by: Yuyang Du <duyuyang@gmail.com>
---
 kernel/locking/lockdep.c | 54 +++++++++++++++++++++++++-----------------------
 1 file changed, 28 insertions(+), 26 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index f4982ad..54ddf85 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -1417,7 +1417,7 @@ static int __bfs(struct lock_list *source_entry,
 		 void *data,
 		 int (*match)(struct lock_list *entry, void *data),
 		 struct lock_list **target_entry,
-		 int offset)
+		 int offset, bool init)
 {
 	struct lock_list *entry;
 	struct lock_list *lock;
@@ -1425,19 +1425,11 @@ static int __bfs(struct lock_list *source_entry,
 	struct circular_queue *cq = &lock_cq;
 	int ret = 1;
 
-	if (match(source_entry, data)) {
-		*target_entry = source_entry;
-		ret = 0;
-		goto exit;
+	if (init) {
+		__cq_init(cq);
+		__cq_enqueue(cq, source_entry);
 	}
 
-	head = get_dep_list(source_entry, offset);
-	if (list_empty(head))
-		goto exit;
-
-	__cq_init(cq);
-	__cq_enqueue(cq, source_entry);
-
 	while ((lock = __cq_dequeue(cq))) {
 
 		if (!lock->class) {
@@ -1449,25 +1441,34 @@ static int __bfs(struct lock_list *source_entry,
 
 		DEBUG_LOCKS_WARN_ON(!irqs_disabled());
 
+		/*
+		 * Enqueue a node's children before match() so that even if
+		 * this node matches but is not wanted, we can continue
+		 * the search.
+		 */
 		list_for_each_entry_rcu(entry, head, entry) {
 			if (!lock_accessed(entry)) {
 				unsigned int cq_depth;
+
 				mark_lock_accessed(entry, lock);
-				if (match(entry, data)) {
-					*target_entry = entry;
-					ret = 0;
-					goto exit;
-				}
 
 				if (__cq_enqueue(cq, entry)) {
 					ret = -1;
 					goto exit;
 				}
+
 				cq_depth = __cq_get_elem_count(cq);
 				if (max_bfs_queue_depth < cq_depth)
 					max_bfs_queue_depth = cq_depth;
 			}
 		}
+
+		if (match(lock, data)) {
+			*target_entry = lock;
+			ret = 0;
+			goto exit;
+		}
+
 	}
 exit:
 	return ret;
@@ -1476,10 +1477,10 @@ static int __bfs(struct lock_list *source_entry,
 static inline int __bfs_forwards(struct lock_list *src_entry,
 			void *data,
 			int (*match)(struct lock_list *entry, void *data),
-			struct lock_list **target_entry)
+			struct lock_list **target_entry, bool init)
 {
 	return __bfs(src_entry, data, match, target_entry,
-		     offsetof(struct lock_class, locks_after));
+		     offsetof(struct lock_class, locks_after), init);
 
 }
 
@@ -1489,7 +1490,7 @@ static inline int __bfs_backwards(struct lock_list *src_entry,
 			struct lock_list **target_entry)
 {
 	return __bfs(src_entry, data, match, target_entry,
-		     offsetof(struct lock_class, locks_before));
+		     offsetof(struct lock_class, locks_before), true);
 
 }
 
@@ -1662,7 +1663,7 @@ static unsigned long __lockdep_count_forward_deps(struct lock_list *this)
 	unsigned long  count = 0;
 	struct lock_list *uninitialized_var(target_entry);
 
-	__bfs_forwards(this, (void *)&count, noop_count, &target_entry);
+	__bfs_forwards(this, (void *)&count, noop_count, &target_entry, true);
 
 	return count;
 }
@@ -1750,12 +1751,12 @@ static inline void set_lock_type2(struct lock_list *lock, int read)
  */
 static noinline int
 check_path(struct lock_class *target, struct lock_list *src_entry,
-	   struct lock_list **target_entry)
+	   struct lock_list **target_entry, bool init)
 {
 	int ret;
 
 	ret = __bfs_forwards(src_entry, (void *)target, class_equal,
-			     target_entry);
+			     target_entry, init);
 
 	if (unlikely(ret < 0))
 		print_bfs_bug(ret);
@@ -1783,7 +1784,7 @@ static inline void set_lock_type2(struct lock_list *lock, int read)
 
 	debug_atomic_inc(nr_cyclic_checks);
 
-	ret = check_path(hlock_class(target), &src_entry, &target_entry);
+	ret = check_path(hlock_class(target), &src_entry, &target_entry, true);
 
 	if (unlikely(!ret)) {
 		if (!trace->nr_entries) {
@@ -1821,7 +1822,7 @@ static inline void set_lock_type2(struct lock_list *lock, int read)
 
 	debug_atomic_inc(nr_redundant_checks);
 
-	ret = check_path(hlock_class(target), &src_entry, &target_entry);
+	ret = check_path(hlock_class(target), &src_entry, &target_entry, true);
 
 	if (!ret) {
 		struct lock_list *parent;
@@ -1897,7 +1898,8 @@ static inline int usage_match(struct lock_list *entry, void *mask)
 
 	debug_atomic_inc(nr_find_usage_forwards_checks);
 
-	result = __bfs_forwards(root, &usage_mask, usage_match, target_entry);
+	result = __bfs_forwards(root, &usage_mask, usage_match,
+				target_entry, true);
 
 	return result;
 }
-- 
1.8.3.1



* [PATCH v2 07/17] locking/lockdep: Introduce mark_lock_unaccessed()
  2019-05-16  7:59 [PATCH v2 00/17] Support for read-write lock deadlock detection Yuyang Du
                   ` (5 preceding siblings ...)
  2019-05-16  8:00 ` [PATCH v2 06/17] locking/lockdep: Adjust BFS algorithm to support multiple matches Yuyang Du
@ 2019-05-16  8:00 ` Yuyang Du
  2019-05-16  8:00 ` [PATCH v2 08/17] locking/lockdep: Introduce chain_hlocks_type for held lock's read-write type Yuyang Du
                   ` (9 subsequent siblings)
  16 siblings, 0 replies; 21+ messages in thread
From: Yuyang Du @ 2019-05-16  8:00 UTC (permalink / raw)
  To: peterz, will.deacon, mingo
  Cc: bvanassche, ming.lei, frederic, tglx, boqun.feng, paulmck,
	linux-kernel, Yuyang Du

Since multiple matches may be needed in a graph search, a matched lock
needs to be able to rejoin the search for another match; therefore,
introduce mark_lock_unaccessed().

Signed-off-by: Yuyang Du <duyuyang@gmail.com>
---
 kernel/locking/lockdep.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 54ddf85..617c0f4 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -1338,6 +1338,15 @@ static inline void mark_lock_accessed(struct lock_list *lock,
 	lock->class->dep_gen_id = lockdep_dependency_gen_id;
 }
 
+static inline void mark_lock_unaccessed(struct lock_list *lock)
+{
+	unsigned long nr;
+
+	nr = lock - list_entries;
+	WARN_ON(nr >= ARRAY_SIZE(list_entries)); /* Out-of-bounds, input fail */
+	lock->class->dep_gen_id--;
+}
+
 static inline unsigned long lock_accessed(struct lock_list *lock)
 {
 	unsigned long nr;
-- 
1.8.3.1



* [PATCH v2 08/17] locking/lockdep: Introduce chain_hlocks_type for held lock's read-write type
  2019-05-16  7:59 [PATCH v2 00/17] Support for read-write lock deadlock detection Yuyang Du
                   ` (6 preceding siblings ...)
  2019-05-16  8:00 ` [PATCH v2 07/17] locking/lockdep: Introduce mark_lock_unaccessed() Yuyang Du
@ 2019-05-16  8:00 ` Yuyang Du
  2019-05-16  8:00 ` [PATCH v2 09/17] locking/lockdep: Hash held lock's read-write type into chain key Yuyang Du
                   ` (8 subsequent siblings)
  16 siblings, 0 replies; 21+ messages in thread
From: Yuyang Du @ 2019-05-16  8:00 UTC (permalink / raw)
  To: peterz, will.deacon, mingo
  Cc: bvanassche, ming.lei, frederic, tglx, boqun.feng, paulmck,
	linux-kernel, Yuyang Du

A lock chain needs to include information about its held locks' read-write
types. To that end, introduce:

	chain_hlocks_type[MAX_LOCKDEP_CHAIN_HLOCKS]

in addition to:

	chain_hlocks[MAX_LOCKDEP_CHAIN_HLOCKS]

Signed-off-by: Yuyang Du <duyuyang@gmail.com>
---
 kernel/locking/lockdep.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 617c0f4..f11d018d 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -861,6 +861,7 @@ static bool class_lock_list_valid(struct lock_class *c, struct list_head *h)
 
 #ifdef CONFIG_PROVE_LOCKING
 static u16 chain_hlocks[MAX_LOCKDEP_CHAIN_HLOCKS];
+static u16 chain_hlocks_type[MAX_LOCKDEP_CHAIN_HLOCKS];
 #endif
 
 static bool check_lock_chain_key(struct lock_chain *chain)
@@ -2668,6 +2669,7 @@ static inline void inc_chains(void)
 static DECLARE_BITMAP(lock_chains_in_use, MAX_LOCKDEP_CHAINS);
 int nr_chain_hlocks;
 static u16 chain_hlocks[MAX_LOCKDEP_CHAIN_HLOCKS];
+static u16 chain_hlocks_type[MAX_LOCKDEP_CHAIN_HLOCKS];
 
 struct lock_class *lock_chain_get_class(struct lock_chain *chain, int i)
 {
@@ -2872,9 +2874,12 @@ static inline int add_chain_cache(struct task_struct *curr,
 		chain->base = nr_chain_hlocks;
 		for (j = 0; j < chain->depth - 1; j++, i++) {
 			int lock_id = curr->held_locks[i].class_idx;
+			int lock_type = curr->held_locks[i].read;
 			chain_hlocks[chain->base + j] = lock_id;
+			chain_hlocks_type[chain->base + j] = lock_type;
 		}
 		chain_hlocks[chain->base + j] = class - lock_classes;
+		chain_hlocks_type[chain->base + j] = hlock->read;
 		nr_chain_hlocks += chain->depth;
 	} else {
 		if (!debug_locks_off_graph_unlock())
@@ -4824,6 +4829,9 @@ static void remove_class_from_lock_chain(struct pending_free *pf,
 			memmove(&chain_hlocks[i], &chain_hlocks[i + 1],
 				(chain->base + chain->depth - i) *
 				sizeof(chain_hlocks[0]));
+			memmove(&chain_hlocks_type[i], &chain_hlocks_type[i + 1],
+				(chain->base + chain->depth - i) *
+				sizeof(chain_hlocks_type[0]));
 		}
 		/*
 		 * Each lock class occurs at most once in a lock chain so once
@@ -5268,6 +5276,7 @@ void __init lockdep_init(void)
 		+ sizeof(lock_chains)
 		+ sizeof(lock_chains_in_use)
 		+ sizeof(chain_hlocks)
+		+ sizeof(chain_hlocks_type)
 #endif
 		) / 1024
 		);
-- 
1.8.3.1



* [PATCH v2 09/17] locking/lockdep: Hash held lock's read-write type into chain key
  2019-05-16  7:59 [PATCH v2 00/17] Support for read-write lock deadlock detection Yuyang Du
                   ` (7 preceding siblings ...)
  2019-05-16  8:00 ` [PATCH v2 08/17] locking/lockdep: Introduce chain_hlocks_type for held lock's read-write type Yuyang Du
@ 2019-05-16  8:00 ` Yuyang Du
  2019-05-16  8:00 ` [PATCH v2 10/17] locking/lockdep: Support read-write lock's deadlock detection Yuyang Du
                   ` (7 subsequent siblings)
  16 siblings, 0 replies; 21+ messages in thread
From: Yuyang Du @ 2019-05-16  8:00 UTC (permalink / raw)
  To: peterz, will.deacon, mingo
  Cc: bvanassche, ming.lei, frederic, tglx, boqun.feng, paulmck,
	linux-kernel, Yuyang Du

When computing a chain's hash key, we need to consider a held lock's type,
so the additional data fed to the Jenkins hash algorithm is a composite of
the new held lock's lock class index (lower 16 bits) and its read-write
type (upper 16 bits), as opposed to just the class index before:

        held lock type (u16) : lock class index (u16)
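
A minimal sketch of how that composite word is built (the helper name is
hypothetical; LOCK_TYPE_BITS is 16 in this series):

	#define LOCK_TYPE_BITS	16

	/* Hypothetical helper, for illustration only. */
	static inline unsigned int chain_hlock_data(unsigned int class_idx,
						    unsigned int hlock_type)
	{
		return class_idx + (hlock_type << LOCK_TYPE_BITS);
	}

	/* e.g. class index 7 held for read (type 1) -> 0x00010007 */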

Signed-off-by: Yuyang Du <duyuyang@gmail.com>
---
 kernel/locking/lockdep.c | 46 ++++++++++++++++++++++++++++++++--------------
 1 file changed, 32 insertions(+), 14 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index f11d018d..019099e 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -359,11 +359,19 @@ struct pending_free {
  * it's a hash of all locks taken up to that lock, including that lock.
  * It's a 64-bit hash, because it's important for the keys to be
  * unique.
+ *
+ * The additional u32 data to hash is a composite of the new held lock's
+ * lock class index (lower 16 bits) and its read-write type (higher 16
+ * bits):
+ *
+ *     hlock type (u16) : lock class index (u16)
  */
-static inline u64 iterate_chain_key(u64 key, u32 idx)
+static inline u64 iterate_chain_key(u64 key, u32 idx, u16 hlock_type)
 {
 	u32 k0 = key, k1 = key >> 32;
 
+	idx += hlock_type << LOCK_TYPE_BITS;
+
 	__jhash_mix(idx, k0, k1); /* Macro that modifies arguments! */
 
 	return k0 | (u64)k1 << 32;
@@ -871,7 +879,8 @@ static bool check_lock_chain_key(struct lock_chain *chain)
 	int i;
 
 	for (i = chain->base; i < chain->base + chain->depth; i++)
-		chain_key = iterate_chain_key(chain_key, chain_hlocks[i]);
+		chain_key = iterate_chain_key(chain_key, chain_hlocks[i],
+					      chain_hlocks_type[i]);
 	/*
 	 * The 'unsigned long long' casts avoid that a compiler warning
 	 * is reported when building tools/lib/lockdep.
@@ -2699,9 +2708,9 @@ static inline int get_first_held_lock(struct task_struct *curr,
 /*
  * Returns the next chain_key iteration
  */
-static u64 print_chain_key_iteration(int class_idx, u64 chain_key)
+static u64 print_chain_key_iteration(int class_idx, u64 chain_key, int lock_type)
 {
-	u64 new_chain_key = iterate_chain_key(chain_key, class_idx);
+	u64 new_chain_key = iterate_chain_key(chain_key, class_idx, lock_type);
 
 	printk(" class_idx:%d -> chain_key:%016Lx",
 		class_idx,
@@ -2721,12 +2730,15 @@ static u64 print_chain_key_iteration(int class_idx, u64 chain_key)
 		hlock_next->irq_context);
 	for (; i < depth; i++) {
 		hlock = curr->held_locks + i;
-		chain_key = print_chain_key_iteration(hlock->class_idx, chain_key);
+		chain_key = print_chain_key_iteration(hlock->class_idx,
+						      chain_key,
+						      hlock->read);
 
 		print_lock(hlock);
 	}
 
-	print_chain_key_iteration(hlock_next->class_idx, chain_key);
+	print_chain_key_iteration(hlock_next->class_idx, chain_key,
+				  hlock_next->read);
 	print_lock(hlock_next);
 }
 
@@ -2734,12 +2746,14 @@ static void print_chain_keys_chain(struct lock_chain *chain)
 {
 	int i;
 	u64 chain_key = INITIAL_CHAIN_KEY;
-	int class_id;
+	int class_id, lock_type;
 
 	printk("depth: %u\n", chain->depth);
 	for (i = 0; i < chain->depth; i++) {
 		class_id = chain_hlocks[chain->base + i];
-		chain_key = print_chain_key_iteration(class_id, chain_key);
+		lock_type = chain_hlocks_type[chain->base + i];
+		chain_key = print_chain_key_iteration(class_id, chain_key,
+						      lock_type);
 
 		print_lock_name(lock_classes + class_id);
 		printk("\n");
@@ -2780,7 +2794,7 @@ static int check_no_collision(struct task_struct *curr,
 			struct lock_chain *chain)
 {
 #ifdef CONFIG_DEBUG_LOCKDEP
-	int i, j, id;
+	int i, j, id, type;
 
 	i = get_first_held_lock(curr, hlock);
 
@@ -2789,10 +2803,12 @@ static int check_no_collision(struct task_struct *curr,
 		return 0;
 	}
 
-	for (j = 0; j < chain->depth - 1; j++, i++) {
+	for (j = chain->base; j < chain->base + chain->depth - 1; j++, i++) {
 		id = curr->held_locks[i].class_idx;
+		type = curr->held_locks[i].read;
 
-		if (DEBUG_LOCKS_WARN_ON(chain_hlocks[chain->base + j] != id)) {
+		if (DEBUG_LOCKS_WARN_ON((chain_hlocks[j] != id) ||
+					(chain_hlocks_type[j] != type))) {
 			print_collision(curr, hlock, chain);
 			return 0;
 		}
@@ -3078,7 +3094,8 @@ static void check_chain_key(struct task_struct *curr)
 		if (prev_hlock && (prev_hlock->irq_context !=
 							hlock->irq_context))
 			chain_key = INITIAL_CHAIN_KEY;
-		chain_key = iterate_chain_key(chain_key, hlock->class_idx);
+		chain_key = iterate_chain_key(chain_key, hlock->class_idx,
+					      hlock->read);
 		prev_hlock = hlock;
 	}
 	if (chain_key != curr->curr_chain_key) {
@@ -4001,7 +4018,7 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass,
 		chain_key = INITIAL_CHAIN_KEY;
 		chain_head = 1;
 	}
-	chain_key = iterate_chain_key(chain_key, class_idx);
+	chain_key = iterate_chain_key(chain_key, class_idx, read);
 
 	if (nest_lock && !__lock_is_held(nest_lock, -1)) {
 		print_lock_nested_lock_not_held(curr, hlock, ip);
@@ -4845,7 +4862,8 @@ static void remove_class_from_lock_chain(struct pending_free *pf,
 recalc:
 	chain_key = INITIAL_CHAIN_KEY;
 	for (i = chain->base; i < chain->base + chain->depth; i++)
-		chain_key = iterate_chain_key(chain_key, chain_hlocks[i]);
+		chain_key = iterate_chain_key(chain_key, chain_hlocks[i],
+					      chain_hlocks_type[i]);
 	if (chain->depth && chain->chain_key == chain_key)
 		return;
 	/* Overwrite the chain key for concurrent RCU readers. */
-- 
1.8.3.1



* [PATCH v2 10/17] locking/lockdep: Support read-write lock's deadlock detection
  2019-05-16  7:59 [PATCH v2 00/17] Support for read-write lock deadlock detection Yuyang Du
                   ` (8 preceding siblings ...)
  2019-05-16  8:00 ` [PATCH v2 09/17] locking/lockdep: Hash held lock's read-write type into chain key Yuyang Du
@ 2019-05-16  8:00 ` Yuyang Du
  2019-05-16  8:00 ` [PATCH v2 11/17] locking/lockdep: Adjust lockdep selftest cases Yuyang Du
                   ` (6 subsequent siblings)
  16 siblings, 0 replies; 21+ messages in thread
From: Yuyang Du @ 2019-05-16  8:00 UTC (permalink / raw)
  To: peterz, will.deacon, mingo
  Cc: bvanassche, ming.lei, frederic, tglx, boqun.feng, paulmck,
	linux-kernel, Yuyang Du

A read-write lock is different from an exclusive lock in that read locks
can be concurrent, while a write lock is essentially the same as an
exclusive lock.

Read-write locks have not been well supported for deadlock detection so
far. This patch series designs and implements an algorithm to add this
important missing feature to lockdep.

To articulate the algorithm plainly, let's first give an abstract of the
problem to solve. And, to make the problem simple and easy to describe,
recursive-read locks are not considered; they will be covered later, and
actually adding recursive-read locks is very easy once the read-lock
problem is solved.

Problem statement
-----------------

Waiting relations in a circular fashion are at the heart of a deadlock:
a circle is universally *necessary* for any deadlock, albeit not
*sufficient* with read-write locks. A deadlock circle can be arbitrarily
long with many tasks contributing some arcs. But no matter how long the
circle is, it has to complete with a final arc, so the problem to solve
can be stated as:

Lemma #1
--------

  A deadlock can be and can only be detected at the *final* arc when it
  completes a circle.

Assume n tasks contribute to that circle, denoted as T_1, T_2, ..., T_n.
These tasks can be grouped as (T_1, ..., T_n-1) and T_n, and by virtually
combining the former group we get a big T1 and T2 (with task numbers
adjusted). This essentially means:

Lemma #2
--------

  Two tasks can virtually represent any situation with any number of
  tasks to check for deadlock.

Actually, how long the circle is does not really matter, since the
problem as stated is what difference the final missing arc can make:
deadlock or not. Therefore, we need only a few locks, just enough to
represent all the possibilities. And this leads to:

Lemma #3
--------

  Any deadlock scenario can be converted to an ABBA deadlock.

where AB comes from T1 and BA from T2 (T1 and T2 are made by Lemma #2),
which says any other associated locks in the graph are not critical or
necessary and thus may be ignored. For example:

    T1        T2
    --        --

    L1        L7
    L2        L2

    A         B
    L3        L3
    L4        L8
    B         A    [Deadlock]

    L5
    L6

from deadlock perspective is equivalent to an ABBA:

    T1        T2
    --        --

    A         B
    B         A    [Deadlock]

Despite the lemmas, three facts are relevant to the problem: (a) with a
new arc, determining whether it completes a circle is an easy task, (b)
a new direct arc (direct dependency) can introduce a number of indirect
arcs (indirect dependencies), and (c) checking all the additional arcs
(direct and indirect) efficiently is not so easy, since lock operations
are frequent and the lock graph can be gigantic. Actually, if checking
any number of arcs were free, deadlock detection even with read-write
locks would be fairly easy. That said, performance is the real difficulty,
so a good algorithm should not only demonstrably solve the problem but
also do so at low cost.

Here we make a start at solving it!

Having grasped the problem statement, we are good to proceed to a
divide-and-conquer approach to the solution: the entire problem is
broken down into a comprehensive list of simple and abstract problem
cases to solve, and if each and every one of them is solved, the entire
problem is solved.

The division is based on the type of the final arc or dependency in T2.
Based on Lemma #2, we use only two tasks in the following discussion.
And based on Lemma #3, these cases are all ABBAs. To be concise, in what
follows the symbol R stands for a read lock, W stands for a write
(exclusive) lock, and X stands for either R or W.

---------------------------------------------------------------
When the final dependency is ended with read lock and read lock
---------------------------------------------------------------

Case #1:

    T1        T2
    --        --

    W1        R2
    W2        R1   [Deadlock]

Case #2:

    T1        T2

    X1        R2
    R2        R1   [No deadlock]

------------------------------------------------------
When the final dependency is write lock and write lock
------------------------------------------------------

Case #3:

    T1        T2
    --        --

    X1        W2
    X2        W1   [Deadlock]

-----------------------------------------------------
When the final dependency is read lock and write lock
-----------------------------------------------------

Note that the case where the final dependency ends with write lock and
read lock is essentially the same as this one, so that case is omitted.

Case #4:

    T1        T2
    --        --

    X1        R2
    W2        W1   [Deadlock]

Case #5:

    T1        T2
    --        --

    X1        R2
    R2        W1   [No deadlock]

Solving the above cases (with no false positives or false negatives) is
actually fairly easy; we therefore have our first attempt, the *Simple
Algorithm*:

----------------
Simple Algorithm
----------------

Step 1
------

Keep track of each dependency's read or write ends. There is a
combination of four types:

   - read -> read
   - read -> write
   - write -> read
   - write -> write

Step 2
------

The redundancy check (which decides whether to add a dependency into the
graph) for a direct dependency needs to be beefed up to consider the
dependency's read- or write-ended types: a direct dependency is redundant
with respect to an indirect dependency only if their ends have the same
types. However, for simplicity, direct dependencies can be added right
away, and if the dependency to add already exists, the existing dependency
can be "upgraded": upgrade the end types towards more exclusive
(exclusiveness increases from recursive read -> read -> write).

Step 3
------

Traverse the entire dependency graph (there may be more than one circle)
to find whether a deadlock circle can be formed by adding the new direct
dependency. A circle may or may not be a deadlock. The deciding rule is
simple: it is a deadlock if the new dependency to add has an inverse
dependency with exclusive lock types in the graph (i.e., an ABBA according
to Lemma #3). Lock exclusiveness relations are listed below:

   -  read and read       (no )
   -  read and write      (yes)
   - write and read       (yes)
   - write and write      (yes)

In order to find a deadlock circle efficiently, the search from the new
dependency's locks can *start* with write-ended dependencies if
dependencies of all types are kept (this is not our case, since our
read-ended dependencies are upgraded when needed). Note that Step 3
easily avoids the following false negative case:

Case #6:

    T1        T2
    --        --

    R1
    R2

   (R1 R2 released)

    W1        R2
    W2        R1   [Deadlock]

*Simple Algorithm* done loosely described.
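
Here is a compilable sketch of the Simple Algorithm's deciding rule,
reduced to the four lock types involved (function and parameter names are
illustrative, not the kernel API):

	#include <stdbool.h>

	enum lock_type {
		LOCK_TYPE_WRITE		= 0,
		LOCK_TYPE_READ		= 1,
		LOCK_TYPE_RECURSIVE	= 3,
	};

	/* Two lock types can deadlock against each other iff their AND is 0. */
	static bool types_exclusive(int t1, int t2)
	{
		return (t1 & t2) == 0;
	}

	/*
	 * When adding prev -> next and a path next -> ... -> prev is
	 * found in the graph:
	 *
	 *   next_type      - type with which next is being acquired now
	 *   first_dep_type - type at the next end of the path's first
	 *                    dependency out of next
	 *   prev_type      - type with which prev is already held
	 *   last_dep_type  - type at the prev end of the path's last
	 *                    dependency into prev
	 */
	static bool circle_is_deadlock(int next_type, int first_dep_type,
				       int prev_type, int last_dep_type)
	{
		return types_exclusive(next_type, first_dep_type) &&
		       types_exclusive(prev_type, last_dep_type);
	}

	/*
	 * Worked against the cases above (READ/WRITE shorthand for the
	 * enum values), taking X = W where a case has an X:
	 *
	 * Case #1: circle_is_deadlock(READ, WRITE, READ, WRITE)
	 *          -> true   [Deadlock]
	 * Case #2: circle_is_deadlock(READ, WRITE, READ, READ)
	 *          -> false  [No deadlock]
	 */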

I wish we lived in a fairy-tale world where the problem could be solved
so easily, but the reality is different. Huge numbers of false negatives
owing to indirect dependencies can appear, as illustrated by the following
case, which we still have to solve:

Case #7:

    T1        T2
    --        --

    X1        X3
    R2        R2
    X3        X1   [Deadlock]

where the X1's and X3's in the two tasks create a deadlock scenario (they
can be any one of the deadlock cases above). When checking the direct
dependency R2 -> X1, there is no obvious deadlock using our *Simple
Algorithm*; however, under the hood the actual deadlock is formed after R2
introduces an indirect dependency X3 -> X1, which can easily be missed.

To detect a deadlock scenario like Case #7, a naive option is to check all
additional indirect dependencies, but that would be too inefficient and
thus is simply passed over. To find an efficient solution instead, let's
first contrive a no-deadlock Case #8 for comparison (which is essentially
rewritten from Case #5).

Case #8:

    T1        T2
    --        --

    X1
    X3        R2
    R2        X1   [No deadlock]

Having considered Case #7 and Case #8, a final working algorithm can be
formulated:

---------------
Final Algorithm
---------------

This *Final Algorithm* is beefed up from Simple Algorithm using the
following lemmas:

Lemma #4
--------

  The direct dependency R2 -> X1 that serves in the path from X3 -> X1 is
  *critical*.

Although the actual deadlock in Case #7 cannot easily be found by our
*Simple Algorithm*, by strengthening the algorithm the deadlock
*definitely* can be found when adding the direct dependency
(i.e., R2 -> X1 in Case #7); see Lemma #1. In other words, a *critical*
direct dependency (i.e., the final arc) suffices to find any deadlock if
there is one; otherwise there is no deadlock. As a matter of fact, after
a false deadlock (R2 -> X1 -> R2), if the search continues, the true
deadlock (R2 -> X1 -> R2 -> X3 -> R2) will eventually be brought out from
under the hood.

Lemma #5
--------

  The game changer in Case #7, missing from Case #8, is that Case #7 has
  X3 in T2 whereas Case #8 does not.

Having considered this, our *Final Algorithm* can be adjusted from the
*Simple Algorithm* by adding:

(a). Continue searching the graph to find a new circle.

(b). In the new circle, if previous locks in T2's stack (or chain) are in
     it, check whether the circle is indeed a deadlock. This is done by
     checking each newly introduced indirect dependency (such as
     X3 -> X1 in Case #7) according to our Simple Algorithm as before.

(c). If a deadlock is found, the algorithm is done; otherwise go back to
     (a) until the entire graph is traversed.

Why does Lemma #5 work?

Lemma #6
--------

  Lemma #5 nicely raises the question whether considering indirect
  dependencies that involve a previously held lock in T2 is *necessary*.
  The answer is yes; otherwise our *Simple Algorithm* would already have
  solved the problem.

Lemma #7
--------

  It is also natural to ask whether considering only indirect dependencies
  with a starting lock in T2 is *sufficient*: what if the indirect
  dependency has a starting lock from T1? The answer is yes too.

Because Lemma #2 and Lemma #3 say that any deadlock is an ABBA, T1 can
only contribute an AB and T2 must have a BA. As a result, since we assumed
T1 alone has no deadlock and Lemma #4 says the new dependency is
*critical*, any deadlock formed by the new direct or indirect dependencies
introduced in T2 (which is the BA part) will definitely be found by the
*Simple Algorithm* or the *Final Algorithm*, respectively.

This is perhaps the most subtle and difficult part of this algorithm. To
test that Lemma #7 holds true, one may try to contrive a case based on
Case #8, or freely generate a deadlock case if possible.

Anyway, we welcome any new cases. Cases matter in this algorithm because,
as stated before, this algorithm solves the read-write lock deadlock
detection problem by having solved all the contrived cases (be they
deadlock or no deadlock). If anyone comes up with a new case that is not
covered here, it may well defeat this algorithm; otherwise, the algorithm
works sound and complete.

*Final Algorithm* done loosely described.
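
And a compilable sketch of the extra check the Final Algorithm adds on top
of the Simple Algorithm, again reduced to classes and types (all names are
illustrative; details such as irq context and stopping at prev itself are
omitted):

	#include <stdbool.h>
	#include <stddef.h>

	struct path_dep {		/* one arc on the found circle	    */
		int class_id;		/* class this arc arrives at	    */
		int type_at_class;	/* read-write type at that end	    */
	};

	struct held {			/* a lock T2 already holds	    */
		int class_id;
		int type;
	};

	/*
	 * Step (b): for every lock T2 already holds, if its class occurs
	 * on the found circle, treat that as a newly introduced indirect
	 * dependency and apply the Simple Algorithm's exclusiveness test
	 * ((a & b) == 0 means exclusive, hence deadlock).
	 */
	static bool indirect_deadlock(const struct path_dep *path, size_t path_len,
				      const struct held *held, size_t held_len)
	{
		for (size_t i = 0; i < held_len; i++) {
			for (size_t j = 0; j < path_len; j++) {
				if (held[i].class_id != path[j].class_id)
					continue;
				if ((held[i].type & path[j].type_at_class) == 0)
					return true;
			}
		}
		return false;
	}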

Signed-off-by: Yuyang Du <duyuyang@gmail.com>
---
 kernel/locking/lockdep.c | 143 +++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 127 insertions(+), 16 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 019099e..d5a874b 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -85,6 +85,7 @@
  */
 static arch_spinlock_t lockdep_lock = (arch_spinlock_t)__ARCH_SPIN_LOCK_UNLOCKED;
 static struct task_struct *lockdep_selftest_task_struct;
+static bool inside_selftest(void);
 
 static int graph_lock(void)
 {
@@ -1784,17 +1785,24 @@ static inline void set_lock_type2(struct lock_list *lock, int read)
 }
 
 /*
- * Prove that the dependency graph starting at <src> can not
- * lead to <target>. If it can, there is a circle when adding
- * <target> -> <src> dependency.
+ * Prove that when adding <target> -> <src> dependency into the dependency
+ * graph, there is no deadlock. This is done by:
+ *
+ * 1. Determine whether the graph starting at <src> can lead to <target>.
+ *    If it can, there is a circle.
+ *
+ * 2. If there is a circle, there may or may not be a deadlock, which is
+ *    then comprehensively checked according to the general read-write
+ *    lock deadlock detection algorithm.
  *
  * Print an error and return 0 if it does.
  */
 static noinline int
-check_deadlock_graph(struct held_lock *src, struct held_lock *target,
-		     struct lock_trace *trace)
+check_deadlock_graph(struct task_struct *curr, struct held_lock *src,
+		     struct held_lock *target, struct lock_trace *trace)
 {
-	int ret;
+	int ret, i;
+	bool init = true;
 	struct lock_list *uninitialized_var(target_entry);
 	struct lock_list src_entry = {
 		.class = hlock_class(src),
@@ -1803,19 +1811,122 @@ static inline void set_lock_type2(struct lock_list *lock, int read)
 
 	debug_atomic_inc(nr_cyclic_checks);
 
-	ret = check_path(hlock_class(target), &src_entry, &target_entry, true);
+	while (true) {
+		ret = check_path(hlock_class(target), &src_entry,
+				 &target_entry, init);
+		init = false;
+
+		/* Found a circle, is it deadlock? */
+		if (unlikely(!ret)) {
+			struct lock_list *parent;
 
-	if (unlikely(!ret)) {
-		if (!trace->nr_entries) {
 			/*
-			 * If save_trace fails here, the printing might
-			 * trigger a WARN but because of the !nr_entries it
-			 * should not do bad things.
+			 * Note that we have an assumption that no lock
+			 * class can be both read and recursive-read.
+			 *
+			 * Check this direct dependency.
+			 *
+			 * Step 1: next's lock type and source dependency's
+			 * lock type are exclusive, no?
+			 *
+			 * Find the first dependency after source dependency.
 			 */
-			save_trace(trace);
-		}
+			parent = find_next_dep_in_path(&src_entry, target_entry);
+			if (!parent) {
+				DEBUG_LOCKS_WARN_ON(1);
+				return -3;
+			}
+
+			if (src->read & get_lock_type1(parent))
+				goto cont;
+
+			/*
+			 * Step 2: prev's lock type and target dependency's
+			 * lock type are exclusive, yes?
+			 */
+			if (!(target->read & get_lock_type2(target_entry)))
+				goto print;
+
+			/*
+			 * Check indirect dependency from even further
+			 * previous lock.
+			 */
+			for (i = 0; i < curr->lockdep_depth; i++) {
+				struct held_lock *prev = curr->held_locks + i;
+
+				if (prev->irq_context != src->irq_context)
+					continue;
 
-		print_circular_bug(&src_entry, target_entry, src, target);
+				/*
+				 * We arrived at the same prev lock in this
+				 * direct dependency checking.
+				 */
+				if (prev == target)
+					break;
+
+				/*
+				 * Since the src lock (the next lock to
+				 * acquire) is neither recursive nor nested
+				 * lock, so this prev class cannot be src
+				 * class, then does the path have this
+				 * previous lock?
+				 *
+				 * With read locks it would be possible a
+				 * lock can reoccur in a path. For example:
+				 *
+				 * -> RL1 -> RL2 -> RL3 -> RL1 -> ...
+				 *
+				 * and for another three examples:
+				 *
+				 * Ex1: -> RL1 -> WL2 -> RL3 -> RL1
+				 * Ex2: -> WL1 -> RL2 -> RL3 -> WL1
+				 * Ex3: -> RL1 -> RL2 -> RL3 -> WL1
+				 *
+				 * This can result in that a path may
+				 * encounter a lock twice or more, but we
+				 * used the breadth-first search algorithm
+				 * that will find the shortest path,
+				 * which means that this path can not have
+				 * the same (middle) lock multiple times.
+				 * However, is Ex3 a problem?
+				 */
+				parent = find_lock_in_path(hlock_class(prev),
+							   target_entry);
+				if (parent) {
+					/*
+					 * Yes, we have a candidate indirect
+					 * dependency to check.
+					 *
+					 * Again step 2: new prev's lock
+					 * type and its dependency in graph
+					 * are exclusive, yes?
+					 *
+					 * Note that we don't need step 1
+					 * again.
+					 */
+					if (!(prev->read & get_lock_type2(parent)))
+						goto print;
+				}
+			}
+cont:
+			mark_lock_unaccessed(target_entry);
+			continue;
+print:
+			if (!trace->nr_entries) {
+				/*
+				 * If save_trace fails here, the printing
+				 * might trigger a WARN but because of the
+				 * !nr_entries it should not do bad things.
+				 */
+				save_trace(trace);
+			}
+
+			print_circular_bug(&src_entry, target_entry,
+					   src, target);
+			break;
+		} else
+			/* The graph is all traversed or an error occurred */
+			break;
 	}
 
 	return ret;
@@ -2510,7 +2621,7 @@ static inline void inc_chains(void)
 	 * MAX_CIRCULAR_QUEUE_SIZE) which keeps track of a breadth of nodes
 	 * in the graph whose neighbours are to be checked.
 	 */
-	ret = check_deadlock_graph(next, prev, trace);
+	ret = check_deadlock_graph(curr, next, prev, trace);
 	if (unlikely(ret <= 0))
 		return 0;
 
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH v2 11/17] locking/lockdep: Adjust lockdep selftest cases
  2019-05-16  7:59 [PATCH v2 00/17] Support for read-write lock deadlock detection Yuyang Du
                   ` (9 preceding siblings ...)
  2019-05-16  8:00 ` [PATCH v2 10/17] locking/lockdep: Support read-write lock's deadlock detection Yuyang Du
@ 2019-05-16  8:00 ` Yuyang Du
  2019-05-29 11:44   ` Boqun Feng
  2019-05-16  8:00 ` [PATCH v2 12/17] locking/lockdep: Remove useless lock type assignment Yuyang Du
                   ` (5 subsequent siblings)
  16 siblings, 1 reply; 21+ messages in thread
From: Yuyang Du @ 2019-05-16  8:00 UTC (permalink / raw)
  To: peterz, will.deacon, mingo
  Cc: bvanassche, ming.lei, frederic, tglx, boqun.feng, paulmck,
	linux-kernel, Yuyang Du

With read-write lock support, some of the read-write lock selftest cases need
to be updated; specifically, some deadlock cases involving read locks are
actually not deadlocks. Hope I am not wildly wrong.

Signed-off-by: Yuyang Du <duyuyang@gmail.com>
---
 lib/locking-selftest.c | 44 +++++++++++++++++++++++++++++++++-----------
 1 file changed, 33 insertions(+), 11 deletions(-)

diff --git a/lib/locking-selftest.c b/lib/locking-selftest.c
index a170554..f83f047 100644
--- a/lib/locking-selftest.c
+++ b/lib/locking-selftest.c
@@ -424,7 +424,7 @@ static void rwsem_ABBA2(void)
 	ML(Y1);
 	RSL(X1);
 	RSU(X1);
-	MU(Y1); // should fail
+	MU(Y1); // should NOT fail
 }
 
 
@@ -462,12 +462,13 @@ static void rwsem_ABBA3(void)
 
 /*
  * ABBA deadlock:
+ *
+ * Should fail except for either A or B is read lock.
  */
-
 #define E()					\
 						\
 	LOCK_UNLOCK_2(A, B);			\
-	LOCK_UNLOCK_2(B, A); /* fail */
+	LOCK_UNLOCK_2(B, A);
 
 /*
  * 6 testcases:
@@ -494,13 +495,15 @@ static void rwsem_ABBA3(void)
 
 /*
  * AB BC CA deadlock:
+ *
+ * Should fail except for read or recursive-read locks.
  */
 
 #define E()					\
 						\
 	LOCK_UNLOCK_2(A, B);			\
 	LOCK_UNLOCK_2(B, C);			\
-	LOCK_UNLOCK_2(C, A); /* fail */
+	LOCK_UNLOCK_2(C, A);
 
 /*
  * 6 testcases:
@@ -527,13 +530,15 @@ static void rwsem_ABBA3(void)
 
 /*
  * AB CA BC deadlock:
+ *
+ * Should fail except for read or recursive-read locks.
  */
 
 #define E()					\
 						\
 	LOCK_UNLOCK_2(A, B);			\
 	LOCK_UNLOCK_2(C, A);			\
-	LOCK_UNLOCK_2(B, C); /* fail */
+	LOCK_UNLOCK_2(B, C);
 
 /*
  * 6 testcases:
@@ -560,6 +565,8 @@ static void rwsem_ABBA3(void)
 
 /*
  * AB BC CD DA deadlock:
+ *
+ * Should fail except for read or recursive-read locks.
  */
 
 #define E()					\
@@ -567,7 +574,7 @@ static void rwsem_ABBA3(void)
 	LOCK_UNLOCK_2(A, B);			\
 	LOCK_UNLOCK_2(B, C);			\
 	LOCK_UNLOCK_2(C, D);			\
-	LOCK_UNLOCK_2(D, A); /* fail */
+	LOCK_UNLOCK_2(D, A);
 
 /*
  * 6 testcases:
@@ -594,13 +601,15 @@ static void rwsem_ABBA3(void)
 
 /*
  * AB CD BD DA deadlock:
+ *
+ * Should fail except for read or recursive-read locks.
  */
 #define E()					\
 						\
 	LOCK_UNLOCK_2(A, B);			\
 	LOCK_UNLOCK_2(C, D);			\
 	LOCK_UNLOCK_2(B, D);			\
-	LOCK_UNLOCK_2(D, A); /* fail */
+	LOCK_UNLOCK_2(D, A);
 
 /*
  * 6 testcases:
@@ -627,13 +636,15 @@ static void rwsem_ABBA3(void)
 
 /*
  * AB CD BC DA deadlock:
+ *
+ * Should fail except for read or recursive-read locks.
  */
 #define E()					\
 						\
 	LOCK_UNLOCK_2(A, B);			\
 	LOCK_UNLOCK_2(C, D);			\
 	LOCK_UNLOCK_2(B, C);			\
-	LOCK_UNLOCK_2(D, A); /* fail */
+	LOCK_UNLOCK_2(D, A);
 
 /*
  * 6 testcases:
@@ -1238,7 +1249,7 @@ static inline void print_testname(const char *testname)
 /*
  * 'read' variant: rlocks must not trigger.
  */
-#define DO_TESTCASE_6R(desc, name)				\
+#define DO_TESTCASE_6AA(desc, name)				\
 	print_testname(desc);					\
 	dotest(name##_spin, FAILURE, LOCKTYPE_SPIN);		\
 	dotest(name##_wlock, FAILURE, LOCKTYPE_RWLOCK);		\
@@ -1249,6 +1260,17 @@ static inline void print_testname(const char *testname)
 	dotest_rt(name##_rtmutex, FAILURE, LOCKTYPE_RTMUTEX);	\
 	pr_cont("\n");
 
+#define DO_TESTCASE_6R(desc, name)				\
+	print_testname(desc);					\
+	dotest(name##_spin, FAILURE, LOCKTYPE_SPIN);		\
+	dotest(name##_wlock, FAILURE, LOCKTYPE_RWLOCK);		\
+	dotest(name##_rlock, SUCCESS, LOCKTYPE_RWLOCK);		\
+	dotest(name##_mutex, FAILURE, LOCKTYPE_MUTEX);		\
+	dotest(name##_wsem, FAILURE, LOCKTYPE_RWSEM);		\
+	dotest(name##_rsem, SUCCESS, LOCKTYPE_RWSEM);		\
+	dotest_rt(name##_rtmutex, FAILURE, LOCKTYPE_RTMUTEX);	\
+	pr_cont("\n");
+
 #define DO_TESTCASE_2I(desc, name, nr)				\
 	DO_TESTCASE_1("hard-"desc, name##_hard, nr);		\
 	DO_TESTCASE_1("soft-"desc, name##_soft, nr);
@@ -1991,7 +2013,7 @@ void locking_selftest(void)
 	debug_locks_silent = !debug_locks_verbose;
 	lockdep_set_selftest_task(current);
 
-	DO_TESTCASE_6R("A-A deadlock", AA);
+	DO_TESTCASE_6AA("A-A deadlock", AA);
 	DO_TESTCASE_6R("A-B-B-A deadlock", ABBA);
 	DO_TESTCASE_6R("A-B-B-C-C-A deadlock", ABBCCA);
 	DO_TESTCASE_6R("A-B-C-A-B-C deadlock", ABCABC);
@@ -2048,7 +2070,7 @@ void locking_selftest(void)
 	pr_cont("             |");
 	dotest(rlock_ABBA2, SUCCESS, LOCKTYPE_RWLOCK);
 	pr_cont("             |");
-	dotest(rwsem_ABBA2, FAILURE, LOCKTYPE_RWSEM);
+	dotest(rwsem_ABBA2, SUCCESS, LOCKTYPE_RWSEM);
 
 	print_testname("mixed write-lock/lock-write ABBA");
 	pr_cont("             |");
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH v2 12/17] locking/lockdep: Remove useless lock type assignment
  2019-05-16  7:59 [PATCH v2 00/17] Support for read-write lock deadlock detection Yuyang Du
                   ` (10 preceding siblings ...)
  2019-05-16  8:00 ` [PATCH v2 11/17] locking/lockdep: Adjust lockdep selftest cases Yuyang Du
@ 2019-05-16  8:00 ` Yuyang Du
  2019-05-16  8:00 ` [PATCH v2 13/17] locking/lockdep: Add nest lock type Yuyang Du
                   ` (4 subsequent siblings)
  16 siblings, 0 replies; 21+ messages in thread
From: Yuyang Du @ 2019-05-16  8:00 UTC (permalink / raw)
  To: peterz, will.deacon, mingo
  Cc: bvanassche, ming.lei, frederic, tglx, boqun.feng, paulmck,
	linux-kernel, Yuyang Du

The next lock to acquire already has its lock type set, so there is no need
to reassign it when it is a recursive read. No functional change.

Signed-off-by: Yuyang Du <duyuyang@gmail.com>
---
 kernel/locking/lockdep.c | 7 -------
 1 file changed, 7 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index d5a874b..ce79b5a 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -3132,13 +3132,6 @@ static int validate_chain(struct task_struct *curr,
 		if (!ret)
 			return 0;
 		/*
-		 * Mark recursive read, as we jump over it when
-		 * building dependencies (just like we jump over
-		 * trylock entries):
-		 */
-		if (ret == LOCK_TYPE_RECURSIVE)
-			hlock->read = LOCK_TYPE_RECURSIVE;
-		/*
 		 * Add dependency only if this lock is not the head
 		 * of the chain, and if it's not a secondary read-lock:
 		 */
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH v2 13/17] locking/lockdep: Add nest lock type
  2019-05-16  7:59 [PATCH v2 00/17] Support for read-write lock deadlock detection Yuyang Du
                   ` (11 preceding siblings ...)
  2019-05-16  8:00 ` [PATCH v2 12/17] locking/lockdep: Remove useless lock type assignment Yuyang Du
@ 2019-05-16  8:00 ` Yuyang Du
  2019-05-16  8:00 ` [PATCH v2 14/17] locking/lockdep: Support recursive read locks Yuyang Du
                   ` (3 subsequent siblings)
  16 siblings, 0 replies; 21+ messages in thread
From: Yuyang Du @ 2019-05-16  8:00 UTC (permalink / raw)
  To: peterz, will.deacon, mingo
  Cc: bvanassche, ming.lei, frederic, tglx, boqun.feng, paulmck,
	linux-kernel, Yuyang Du

Add a macro LOCK_TYPE_NEST for the nest lock type and mark nested locks in
check_deadlock_current(). Unlike the other LOCK_TYPE_* values, this lock type
is used only inside lockdep.

No functional change.
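
For reference, a rough sketch of how the type values relate after this
patch; the write/read names and values are assumed from earlier patches in
the series (they live in an enum there), and only LOCK_TYPE_NEST is what
this patch adds:

	/* Illustration only -- not the literal kernel definitions: */
	enum lock_type_sketch {
		LOCK_TYPE_WRITE     = 0,	/* exclusive / write */
		LOCK_TYPE_READ      = 1,	/* read */
		LOCK_TYPE_RECURSIVE = 3,	/* recursive read */
	};
	#define LOCK_TYPE_NEST	(LOCK_TYPE_RECURSIVE + 1)	/* 4, lockdep-internal */

check_deadlock_current() may now return 0 (deadlock), 1 (OK),
LOCK_TYPE_RECURSIVE or LOCK_TYPE_NEST, and validate_chain() skips adding
dependencies for the latter two.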

Signed-off-by: Yuyang Du <duyuyang@gmail.com>
---
 kernel/locking/lockdep.c           | 7 +++++--
 kernel/locking/lockdep_internals.h | 1 +
 2 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index ce79b5a..d8e484d 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -2527,6 +2527,7 @@ static inline void inc_chains(void)
  *  0: on deadlock detected;
  *  1: on OK;
  *  3: LOCK_TYPE_RECURSIVE on recursive read
+ *  4: LOCK_TYPE_NEST on nest lock
  */
 static int
 check_deadlock_current(struct task_struct *curr, struct held_lock *next)
@@ -2556,7 +2557,7 @@ static inline void inc_chains(void)
 		 * nesting behaviour.
 		 */
 		if (nest)
-			return LOCK_TYPE_RECURSIVE;
+			return LOCK_TYPE_NEST;
 
 		print_deadlock_bug(curr, prev, next);
 		return 0;
@@ -3131,11 +3132,13 @@ static int validate_chain(struct task_struct *curr,
 
 		if (!ret)
 			return 0;
+
 		/*
 		 * Add dependency only if this lock is not the head
 		 * of the chain, and if it's not a secondary read-lock:
 		 */
-		if (!chain_head && ret != LOCK_TYPE_RECURSIVE) {
+		if (!chain_head && ret != LOCK_TYPE_RECURSIVE &&
+		    ret != LOCK_TYPE_NEST) {
 			if (!check_prevs_add(curr, hlock))
 				return 0;
 		}
diff --git a/kernel/locking/lockdep_internals.h b/kernel/locking/lockdep_internals.h
index c287bcb..759e740 100644
--- a/kernel/locking/lockdep_internals.h
+++ b/kernel/locking/lockdep_internals.h
@@ -28,6 +28,7 @@ enum lock_usage_bit {
 
 #define LOCK_TYPE_BITS	16
 #define LOCK_TYPE_MASK	0xFFFF
+#define LOCK_TYPE_NEST	(LOCK_TYPE_RECURSIVE + 1)
 
 /*
  * Usage-state bitmasks:
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH v2 14/17] locking/lockdep: Support recursive read locks
  2019-05-16  7:59 [PATCH v2 00/17] Support for read-write lock deadlock detection Yuyang Du
                   ` (12 preceding siblings ...)
  2019-05-16  8:00 ` [PATCH v2 13/17] locking/lockdep: Add nest lock type Yuyang Du
@ 2019-05-16  8:00 ` Yuyang Du
  2019-05-16  8:00 ` [PATCH v2 15/17] locking/lockdep: Adjust selftest case for recursive read lock Yuyang Du
                   ` (2 subsequent siblings)
  16 siblings, 0 replies; 21+ messages in thread
From: Yuyang Du @ 2019-05-16  8:00 UTC (permalink / raw)
  To: peterz, will.deacon, mingo
  Cc: bvanassche, ming.lei, frederic, tglx, boqun.feng, paulmck,
	linux-kernel, Yuyang Du

Now we are finally ready to support recursive read locks.

This is done simply by adding dependencies that involve recursive read
locks to the graph, which were previously skipped.

The reason this works is plain and simple:

  (a). Recursive read lock differs from read lock only inside a task.
  (b). check_deadlock_current() already handles recursive-read lock pretty
       well.
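
To illustrate (a), a minimal selftest-style sketch (a hypothetical case
written with the RL()/RU() macros from lib/locking-selftest.c, not part of
this patch): re-acquiring the same rwlock for read within one task is
treated as legitimate and must not be reported, which
check_deadlock_current() already takes care of:

	static void rlock_recursive_self(void)
	{
		RL(X1);
		RL(X1);		/* recursive read of the same lock: no report */
		RU(X1);
		RU(X1);
	}

What changes with this patch is only that such recursive-read acquisitions
now also leave dependencies in the graph, so they can be checked against
other tasks' lock orderings.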

Now that the bulk of the implementation of the read-write lock deadlock
detection algorithm is done, the lockdep internal performance statistics can
be collected (the workload used as usual is: make clean; reboot; make vmlinux -j8):

Before
------
 direct dependencies:                  8980 [max: 32768]
 indirect dependencies:               41065
 all direct dependencies:            211016
 dependency chains:                   11592 [max: 65536]
 dependency chain hlocks:             42869 [max: 327680]
 in-hardirq chains:                      85
 in-softirq chains:                     385
 in-process chains:                   10250
 stack-trace entries:                123121 [max: 524288]
 combined max dependencies:       340292196
 max bfs queue depth:                   244
 chain lookup misses:                 12703
 chain lookup hits:               666620822
 cyclic checks:                       11392
 redundant checks:                        0

After
-----

 direct dependencies:                  9624 [max: 32768]
 indirect dependencies:               43759
 all direct dependencies:            216024
 dependency chains:                   12119 [max: 65536]
 dependency chain hlocks:             44654 [max: 327680]
 in-hardirq chains:                      88
 in-softirq chains:                     383
 in-process chains:                   10544
 stack-trace entries:                132362 [max: 524288]
 combined max dependencies:       360385920
 max bfs queue depth:                   250
 chain lookup misses:                 13528
 chain lookup hits:               664721852
 cyclic checks:                       12053
 redundant checks:                        0

Most noticeably, we have slightly more dependencies, chains, and chain
lookup misses, but none of them raises concerns.

Signed-off-by: Yuyang Du <duyuyang@gmail.com>
---
 kernel/locking/lockdep.c | 47 +++++++++++++++++++++++++++--------------------
 1 file changed, 27 insertions(+), 20 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index d8e484d..cd1d515 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -1809,6 +1809,9 @@ static inline void set_lock_type2(struct lock_list *lock, int read)
 		.parent = NULL,
 	};
 
+	if (DEBUG_LOCKS_WARN_ON(hlock_class(src) == hlock_class(target)))
+		return 0;
+
 	debug_atomic_inc(nr_cyclic_checks);
 
 	while (true) {
@@ -1865,9 +1868,17 @@ static inline void set_lock_type2(struct lock_list *lock, int read)
 					break;
 
 				/*
+				 * This previous lock has the same class as
+				 * the src (the next lock to acquire); this
+				 * must be a recursive read case. Skip.
+				 */
+				if (hlock_class(prev) == hlock_class(src))
+					continue;
+
+				/*
 				 * Since the src lock (the next lock to
-				 * acquire) is neither recursive nor nested
-				 * lock, so this prev class cannot be src
+				 * acquire) is not nested lock, so up to
+				 * now this prev class cannot be the src
 				 * class, then does the path have this
 				 * previous lock?
 				 *
@@ -2613,6 +2624,17 @@ static inline void inc_chains(void)
 	}
 
 	/*
+	 * Filter out the direct dependency with the same lock class, which
+	 * is legitimate only if the next lock is the recursive-read type,
+	 * otherwise we should not have been here in the first place.
+	 */
+	if (hlock_class(prev) == hlock_class(next)) {
+		WARN_ON_ONCE(next->read != LOCK_TYPE_RECURSIVE);
+		WARN_ON_ONCE(prev->read == LOCK_TYPE_WRITE);
+		return 2;
+	}
+
+	/*
 	 * Prove that the new <prev> -> <next> dependency would not
 	 * create a deadlock scenario in the graph. (We do this by
 	 * a breadth-first search into the graph starting at <next>,
@@ -2630,16 +2652,6 @@ static inline void inc_chains(void)
 		return 0;
 
 	/*
-	 * For recursive read-locks we do all the dependency checks,
-	 * but we dont store read-triggered dependencies (only
-	 * write-triggered dependencies). This ensures that only the
-	 * write-side dependencies matter, and that if for example a
-	 * write-lock never takes any other locks, then the reads are
-	 * equivalent to a NOP.
-	 */
-	if (next->read == LOCK_TYPE_RECURSIVE || prev->read == LOCK_TYPE_RECURSIVE)
-		return 1;
-	/*
 	 * Is the <prev> -> <next> dependency already present?
 	 *
 	 * (this may occur even though this is a new chain: consider
@@ -2738,11 +2750,7 @@ static inline void inc_chains(void)
 		int distance = curr->lockdep_depth - depth + 1;
 		hlock = curr->held_locks + depth - 1;
 
-		/*
-		 * Only non-recursive-read entries get new dependencies
-		 * added:
-		 */
-		if (hlock->read != LOCK_TYPE_RECURSIVE && hlock->check) {
+		if (hlock->check) {
 			int ret = check_prev_add(curr, hlock, next, distance,
 						 &trace);
 			if (!ret)
@@ -3135,10 +3143,9 @@ static int validate_chain(struct task_struct *curr,
 
 		/*
 		 * Add dependency only if this lock is not the head
-		 * of the chain, and if it's not a secondary read-lock:
+		 * of the chain, and if it's not a nest lock:
 		 */
-		if (!chain_head && ret != LOCK_TYPE_RECURSIVE &&
-		    ret != LOCK_TYPE_NEST) {
+		if (!chain_head && ret != LOCK_TYPE_NEST) {
 			if (!check_prevs_add(curr, hlock))
 				return 0;
 		}
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH v2 15/17] locking/lockdep: Adjust selftest case for recursive read lock
  2019-05-16  7:59 [PATCH v2 00/17] Support for read-write lock deadlock detection Yuyang Du
                   ` (13 preceding siblings ...)
  2019-05-16  8:00 ` [PATCH v2 14/17] locking/lockdep: Support recursive read locks Yuyang Du
@ 2019-05-16  8:00 ` Yuyang Du
  2019-05-16  8:00 ` [PATCH v2 16/17] locking/lockdep: Add more lockdep selftest cases Yuyang Du
  2019-05-16  8:00 ` [PATCH v2 17/17] locking/lockdep: Remove irq-safe to irq-unsafe read check Yuyang Du
  16 siblings, 0 replies; 21+ messages in thread
From: Yuyang Du @ 2019-05-16  8:00 UTC (permalink / raw)
  To: peterz, will.deacon, mingo
  Cc: bvanassche, ming.lei, frederic, tglx, boqun.feng, paulmck,
	linux-kernel, Yuyang Du

Now that we support recursive read locks, a previously failed case:

 ----------------------------------------------------------------------------
                                  | spin |wlock |rlock |mutex | wsem | rsem |
   --------------------------------------------------------------------------
   mixed read-lock/lock-write ABBA:             |FAILED|             |  ok  |

can be added back. Now we have:

	Good, all 262 testcases passed!

See the case in: e91498589746065e3a ("Add mixed read-write ABBA tests")

Signed-off-by: Yuyang Du <duyuyang@gmail.com>
---
 lib/locking-selftest.c | 7 -------
 1 file changed, 7 deletions(-)

diff --git a/lib/locking-selftest.c b/lib/locking-selftest.c
index f83f047..4c6dd8a 100644
--- a/lib/locking-selftest.c
+++ b/lib/locking-selftest.c
@@ -2055,13 +2055,6 @@ void locking_selftest(void)
 	print_testname("mixed read-lock/lock-write ABBA");
 	pr_cont("             |");
 	dotest(rlock_ABBA1, FAILURE, LOCKTYPE_RWLOCK);
-#ifdef CONFIG_PROVE_LOCKING
-	/*
-	 * Lockdep does indeed fail here, but there's nothing we can do about
-	 * that now.  Don't kill lockdep for it.
-	 */
-	unexpected_testcase_failures--;
-#endif
 
 	pr_cont("             |");
 	dotest(rwsem_ABBA1, FAILURE, LOCKTYPE_RWSEM);
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH v2 16/17] locking/lockdep: Add more lockdep selftest cases
  2019-05-16  7:59 [PATCH v2 00/17] Support for read-write lock deadlock detection Yuyang Du
                   ` (14 preceding siblings ...)
  2019-05-16  8:00 ` [PATCH v2 15/17] locking/lockdep: Adjust selftest case for recursive read lock Yuyang Du
@ 2019-05-16  8:00 ` Yuyang Du
  2019-05-16  8:00 ` [PATCH v2 17/17] locking/lockdep: Remove irq-safe to irq-unsafe read check Yuyang Du
  16 siblings, 0 replies; 21+ messages in thread
From: Yuyang Du @ 2019-05-16  8:00 UTC (permalink / raw)
  To: peterz, will.deacon, mingo
  Cc: bvanassche, ming.lei, frederic, tglx, boqun.feng, paulmck,
	linux-kernel, Yuyang Du

Let's make sure our 8 cases can be correctly handled. In contrast, before
this patchset, these 8 cases had 24 failures:

 ----------------------------------------------------------------------------
                                  | spin |wlock |rlock |mutex | wsem | rsem |
   --------------------------------------------------------------------------
      read-write lock ABBA Case #1:             |FAILED|             |  ok  |
     read-write lock ABBA Case #2a:             |  ok  |             |FAILED|
     read-write lock ABBA Case #2b:             |  ok  |             |FAILED|
     read-write lock ABBA Case #3a:             |FAILED|             |  ok  |
     read-write lock ABBA Case #3b:             |FAILED|             |  ok  |
     read-write lock ABBA Case #3c:             |FAILED|             |  ok  |
     read-write lock ABBA Case #3d:             |  ok  |             |  ok  |
     read-write lock ABBA Case #4a:             |FAILED|             |  ok  |
     read-write lock ABBA Case #4b:             |FAILED|             |  ok  |
     read-write lock ABBA Case #5a:             |  ok  |             |FAILED|
     read-write lock ABBA Case #5b:             |  ok  |             |FAILED|
     read-write lock ABBA Case #6a:             |FAILED|             |  ok  |
     read-write lock ABBA Case #6b:             |FAILED|             |  ok  |
     read-write lock ABBA Case #6c:             |FAILED|             |  ok  |
     read-write lock ABBA Case #7a:             |  ok  |             |  ok  |
     read-write lock ABBA Case #7b:             |FAILED|             |  ok  |
     read-write lock ABBA Case #7c:             |FAILED|             |  ok  |
     read-write lock ABBA Case #7d:             |FAILED|             |  ok  |
   read-write lock ABBA Case #8.1a:             |  ok  |             |FAILED|
   read-write lock ABBA Case #8.1b:             |  ok  |             |FAILED|
   read-write lock ABBA Case #8.1c:             |  ok  |             |FAILED|
   read-write lock ABBA Case #8.1d:             |  ok  |             |FAILED|
   read-write lock ABBA Case #8.2a:             |  ok  |             |FAILED|
   read-write lock ABBA Case #8.2b:             |  ok  |             |FAILED|
   read-write lock ABBA Case #8.2c:             |  ok  |             |FAILED|
   read-write lock ABBA Case #8.2d:             |  ok  |             |FAILED|
   --------------------------------------------------------------------------

Note that even though many of the cases passed, that is simply because
recursive-read locks were *not* considered.

This patch marks the completion of the read-write lock deadlock detection
algorithm. Looking forward, there may be some ramifications:

(1) Some previously reported false positives involving read locks should
    (hopefully) no longer be false positives, so wherever such a false
    positive was worked around before, it is worth a second look.

(2) With recursive-read lock dependencies in the graph, there may be new
    deadlock scenarios that could never be discovered before. Admittedly,
    they include both true and possibly false positives.

Have fun and brace for impact!

Signed-off-by: Yuyang Du <duyuyang@gmail.com>
---
 lib/locking-selftest.c | 1022 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 1022 insertions(+)

diff --git a/lib/locking-selftest.c b/lib/locking-selftest.c
index 4c6dd8a..52d5494 100644
--- a/lib/locking-selftest.c
+++ b/lib/locking-selftest.c
@@ -461,6 +461,872 @@ static void rwsem_ABBA3(void)
 }
 
 /*
+ * Read-write lock ABBA cases.
+ *
+ * Notation:
+ *  R: read lock
+ *  W: write lock
+ *  X: either read or write lock
+ *
+ * Case #1:
+ *
+ *     T1        T2
+ *     --        --
+ *
+ *     W1        R2
+ *     W2        R1   [Deadlock]
+ */
+static void rlock_ABBA_case1(void)
+{
+	WL(X1);
+	WL(Y1);
+	WU(Y1);
+	WU(X1);
+
+	RL(Y1);
+	RL(X1);
+	RU(X1);
+	RU(Y1);
+}
+
+static void rwsem_ABBA_case1(void)
+{
+	WSL(X1);
+	WSL(Y1);
+	WSU(Y1);
+	WSU(X1);
+
+	RSL(Y1);
+	RSL(X1);
+	RSU(X1);
+	RSU(Y1);
+}
+
+/*
+ * Case #2:
+ *
+ *     T1        T2
+ *
+ *     X1        R2
+ *     R2        R1   [No deadlock]
+ */
+static void rlock_ABBA_case2a(void)
+{
+	WL(X1);
+	RL(Y1);
+	RU(Y1);
+	WU(X1);
+
+	RL(Y1);
+	RL(X1);
+	RU(X1);
+	RU(Y1);
+}
+
+static void rwsem_ABBA_case2a(void)
+{
+	WSL(X1);
+	RSL(Y1);
+	RSU(Y1);
+	WSU(X1);
+
+	RSL(Y1);
+	RSL(X1);
+	RSU(X1);
+	RSU(Y1);
+}
+
+static void rlock_ABBA_case2b(void)
+{
+	RL(X1);
+	RL(Y1);
+	RU(Y1);
+	RU(X1);
+
+	RL(Y1);
+	RL(X1);
+	RU(X1);
+	RU(Y1);
+}
+
+static void rwsem_ABBA_case2b(void)
+{
+	RSL(X1);
+	RSL(Y1);
+	RSU(Y1);
+	RSU(X1);
+
+	RSL(Y1);
+	RSL(X1);
+	RSU(X1);
+	RSU(Y1);
+}
+
+/*
+ * Case #3:
+ *
+ *     T1        T2
+ *     --        --
+ *
+ *     X1        W2
+ *     X2        W1   [Deadlock]
+ */
+static void rlock_ABBA_case3a(void)
+{
+	RL(X1);
+	RL(Y1);
+	RU(Y1);
+	RU(X1);
+
+	WL(Y1);
+	WL(X1);
+	WU(X1);
+	WU(Y1);
+}
+
+static void rwsem_ABBA_case3a(void)
+{
+	RSL(X1);
+	RSL(Y1);
+	RSU(Y1);
+	RSU(X1);
+
+	WSL(Y1);
+	WSL(X1);
+	WSU(X1);
+	WSU(Y1);
+}
+
+static void rlock_ABBA_case3b(void)
+{
+	WL(X1);
+	RL(Y1);
+	RU(Y1);
+	WU(X1);
+
+	WL(Y1);
+	WL(X1);
+	WU(X1);
+	WU(Y1);
+}
+
+static void rwsem_ABBA_case3b(void)
+{
+	WSL(X1);
+	RSL(Y1);
+	RSU(Y1);
+	WSU(X1);
+
+	WSL(Y1);
+	WSL(X1);
+	WSU(X1);
+	WSU(Y1);
+}
+
+static void rlock_ABBA_case3c(void)
+{
+	RL(X1);
+	WL(Y1);
+	WU(Y1);
+	RU(X1);
+
+	WL(Y1);
+	WL(X1);
+	WU(X1);
+	WU(Y1);
+}
+
+static void rwsem_ABBA_case3c(void)
+{
+	RSL(X1);
+	WSL(Y1);
+	WSU(Y1);
+	RSU(X1);
+
+	WSL(Y1);
+	WSL(X1);
+	WSU(X1);
+	WSU(Y1);
+}
+
+static void rlock_ABBA_case3d(void)
+{
+	WL(X1);
+	WL(Y1);
+	WU(Y1);
+	WU(X1);
+
+	WL(Y1);
+	WL(X1);
+	WU(X1);
+	WU(Y1);
+}
+
+static void rwsem_ABBA_case3d(void)
+{
+	WSL(X1);
+	WSL(Y1);
+	WSU(Y1);
+	WSU(X1);
+
+	WSL(Y1);
+	WSL(X1);
+	WSU(X1);
+	WSU(Y1);
+}
+
+/*
+ * Case #4:
+ *
+ *     T1        T2
+ *     --        --
+ *
+ *     X1        R2
+ *     W2        W1   [Deadlock]
+ */
+static void rlock_ABBA_case4a(void)
+{
+	WL(X1);
+	WL(Y1);
+	WU(Y1);
+	WU(X1);
+
+	RL(Y1);
+	WL(X1);
+	WU(X1);
+	RU(Y1);
+}
+
+static void rwsem_ABBA_case4a(void)
+{
+	WSL(X1);
+	WSL(Y1);
+	WSU(Y1);
+	WSU(X1);
+
+	RSL(Y1);
+	WSL(X1);
+	WSU(X1);
+	RSU(Y1);
+}
+
+static void rlock_ABBA_case4b(void)
+{
+	RL(X1);
+	WL(Y1);
+	WU(Y1);
+	RU(X1);
+
+	RL(Y1);
+	WL(X1);
+	WU(X1);
+	RU(Y1);
+}
+
+static void rwsem_ABBA_case4b(void)
+{
+	RSL(X1);
+	WSL(Y1);
+	WSU(Y1);
+	RSU(X1);
+
+	RSL(Y1);
+	WSL(X1);
+	WSU(X1);
+	RSU(Y1);
+}
+
+/*
+ * Case #5:
+ *
+ *     T1        T2
+ *     --        --
+ *
+ *     X1        R2
+ *     R2        W1   [No deadlock]
+ */
+static void rlock_ABBA_case5a(void)
+{
+	RL(X1);
+	RL(Y1);
+	RU(Y1);
+	RU(X1);
+
+	RL(Y1);
+	WL(X1);
+	WU(X1);
+	RU(Y1);
+}
+
+static void rwsem_ABBA_case5a(void)
+{
+	RSL(X1);
+	RSL(Y1);
+	RSU(Y1);
+	RSU(X1);
+
+	RSL(Y1);
+	WSL(X1);
+	WSU(X1);
+	RSU(Y1);
+}
+
+static void rlock_ABBA_case5b(void)
+{
+	WL(X1);
+	RL(Y1);
+	RU(Y1);
+	WU(X1);
+
+	RL(Y1);
+	WL(X1);
+	WU(X1);
+	RU(Y1);
+}
+
+static void rwsem_ABBA_case5b(void)
+{
+	WSL(X1);
+	RSL(Y1);
+	RSU(Y1);
+	WSU(X1);
+
+	RSL(Y1);
+	WSL(X1);
+	WSU(X1);
+	RSU(Y1);
+}
+
+/*
+ * Case #6:
+ *
+ *     T1        T2
+ *     --        --
+ *
+ *     R1
+ *     R2
+ *
+ *    (R1 R2 released)
+ *
+ *     W1        R2
+ *     W2        R1   [Deadlock]
+ */
+static void rlock_ABBA_case6a(void)
+{
+	RL(X1);
+	RL(Y1);
+	RU(Y1);
+	RU(X1);
+
+	WL(X1);
+	WL(Y1);
+	WU(Y1);
+	WU(X1);
+
+	RL(Y1);
+	RL(X1);
+	RU(X1);
+	RU(Y1);
+}
+
+static void rwsem_ABBA_case6a(void)
+{
+	RSL(X1);
+	RSL(Y1);
+	RSU(Y1);
+	RSU(X1);
+
+	WSL(X1);
+	WSL(Y1);
+	WSU(Y1);
+	WSU(X1);
+
+	RSL(Y1);
+	RSL(X1);
+	RSU(X1);
+	RSU(Y1);
+}
+
+static void rlock_ABBA_case6b(void)
+{
+	WL(X1);
+	RL(Y1);
+	RU(Y1);
+	WU(X1);
+
+	WL(X1);
+	WL(Y1);
+	WU(Y1);
+	WU(X1);
+
+	RL(Y1);
+	RL(X1);
+	RU(X1);
+	RU(Y1);
+}
+
+static void rwsem_ABBA_case6b(void)
+{
+	WSL(X1);
+	RSL(Y1);
+	RSU(Y1);
+	WSU(X1);
+
+	WSL(X1);
+	WSL(Y1);
+	WSU(Y1);
+	WSU(X1);
+
+	RSL(Y1);
+	RSL(X1);
+	RSU(X1);
+	RSU(Y1);
+}
+
+static void rlock_ABBA_case6c(void)
+{
+	RL(X1);
+	WL(Y1);
+	WU(Y1);
+	RU(X1);
+
+	WL(X1);
+	WL(Y1);
+	WU(Y1);
+	WU(X1);
+
+	RL(Y1);
+	RL(X1);
+	RU(X1);
+	RU(Y1);
+}
+
+static void rwsem_ABBA_case6c(void)
+{
+	RSL(X1);
+	WSL(Y1);
+	WSU(Y1);
+	RSU(X1);
+
+	WSL(X1);
+	WSL(Y1);
+	WSU(Y1);
+	WSU(X1);
+
+	RSL(Y1);
+	RSL(X1);
+	RSU(X1);
+	RSU(Y1);
+}
+
+/*
+ * Case #7:
+ *
+ *     T1        T2
+ *     --        --
+ *
+ *     X1        X3
+ *     R2        R2
+ *     X3        X1   [Deadlock]
+ */
+static void rlock_ABBA_case7a(void)
+{
+	WL(X1);
+	RL(Y1);
+	WL(Z1);
+	WU(Z1);
+	RU(Y1);
+	WU(X1);
+
+	WL(Z1);
+	RL(Y1);
+	WL(X1);
+	WU(X1);
+	RU(Y1);
+	WU(Z1);
+}
+
+static void rwsem_ABBA_case7a(void)
+{
+	WSL(X1);
+	RSL(Y1);
+	WSL(Z1);
+	WSU(Z1);
+	RSU(Y1);
+	WSU(X1);
+
+	WSL(Z1);
+	RSL(Y1);
+	WSL(X1);
+	WSU(X1);
+	RSU(Y1);
+	WSU(Z1);
+}
+
+static void rlock_ABBA_case7b(void)
+{
+	RL(X1);
+	RL(Y1);
+	WL(Z1);
+	WU(Z1);
+	RU(Y1);
+	RU(X1);
+
+	WL(Z1);
+	RL(Y1);
+	WL(X1);
+	WU(X1);
+	RU(Y1);
+	WU(Z1);
+}
+
+static void rwsem_ABBA_case7b(void)
+{
+	RSL(X1);
+	RSL(Y1);
+	WSL(Z1);
+	WSU(Z1);
+	RSU(Y1);
+	RSU(X1);
+
+	WSL(Z1);
+	RSL(Y1);
+	WSL(X1);
+	WSU(X1);
+	RSU(Y1);
+	WSU(Z1);
+}
+
+static void rlock_ABBA_case7c(void)
+{
+	WL(X1);
+	RL(Y1);
+	RL(Z1);
+	RU(Z1);
+	RU(Y1);
+	WU(X1);
+
+	WL(Z1);
+	RL(Y1);
+	WL(X1);
+	WU(X1);
+	RU(Y1);
+	WU(Z1);
+}
+
+static void rwsem_ABBA_case7c(void)
+{
+	WSL(X1);
+	RSL(Y1);
+	RSL(Z1);
+	RSU(Z1);
+	RSU(Y1);
+	WSU(X1);
+
+	WSL(Z1);
+	RSL(Y1);
+	WSL(X1);
+	WSU(X1);
+	RSU(Y1);
+	WSU(Z1);
+}
+
+static void rlock_ABBA_case7d(void)
+{
+	RL(X1);
+	RL(Y1);
+	RL(Z1);
+	RU(Z1);
+	RU(Y1);
+	RU(X1);
+
+	WL(Z1);
+	RL(Y1);
+	WL(X1);
+	WU(X1);
+	RU(Y1);
+	WU(Z1);
+}
+
+static void rwsem_ABBA_case7d(void)
+{
+	RSL(X1);
+	RSL(Y1);
+	RSL(Z1);
+	RSU(Z1);
+	RSU(Y1);
+	RSU(X1);
+
+	WSL(Z1);
+	RSL(Y1);
+	WSL(X1);
+	WSU(X1);
+	RSU(Y1);
+	WSU(Z1);
+}
+
+/*
+ * Case #8.1:
+ *
+ *     T1        T2
+ *     --        --
+ *
+ *     X1
+ *     X3        R2
+ *     R2        X1   [No deadlock]
+ */
+static void rlock_ABBA_case81a(void)
+{
+	WL(X1);
+	WL(Z1);
+	RL(Y1);
+	RU(Y1);
+	WU(Z1);
+	WU(X1);
+
+	RL(Y1);
+	WL(X1);
+	WU(X1);
+	RU(Y1);
+}
+
+static void rwsem_ABBA_case81a(void)
+{
+	WSL(X1);
+	WSL(Z1);
+	RSL(Y1);
+	RSU(Y1);
+	WSU(Z1);
+	WSU(X1);
+
+	RSL(Y1);
+	WSL(X1);
+	WSU(X1);
+	RSU(Y1);
+}
+
+static void rlock_ABBA_case81b(void)
+{
+	RL(X1);
+	WL(Z1);
+	RL(Y1);
+	RU(Y1);
+	WU(Z1);
+	RU(X1);
+
+	RL(Y1);
+	WL(X1);
+	WU(X1);
+	RU(Y1);
+}
+
+static void rwsem_ABBA_case81b(void)
+{
+	RSL(X1);
+	WSL(Z1);
+	RSL(Y1);
+	RSU(Y1);
+	WSU(Z1);
+	RSU(X1);
+
+	RSL(Y1);
+	WSL(X1);
+	WSU(X1);
+	RSU(Y1);
+}
+
+static void rlock_ABBA_case81c(void)
+{
+	WL(X1);
+	RL(Z1);
+	RL(Y1);
+	RU(Y1);
+	RU(Z1);
+	WU(X1);
+
+	RL(Y1);
+	WL(X1);
+	WU(X1);
+	RU(Y1);
+}
+
+static void rwsem_ABBA_case81c(void)
+{
+	WSL(X1);
+	RSL(Z1);
+	RSL(Y1);
+	RSU(Y1);
+	RSU(Z1);
+	WSU(X1);
+
+	RSL(Y1);
+	WSL(X1);
+	WSU(X1);
+	RSU(Y1);
+}
+
+static void rlock_ABBA_case81d(void)
+{
+	RL(X1);
+	RL(Z1);
+	RL(Y1);
+	RU(Y1);
+	RU(Z1);
+	RU(X1);
+
+	RL(Y1);
+	WL(X1);
+	WU(X1);
+	RU(Y1);
+}
+
+static void rwsem_ABBA_case81d(void)
+{
+	RSL(X1);
+	RSL(Z1);
+	RSL(Y1);
+	RSU(Y1);
+	RSU(Z1);
+	RSU(X1);
+
+	RSL(Y1);
+	WSL(X1);
+	WSU(X1);
+	RSU(Y1);
+}
+
+
+/*
+ * Case #8.2:
+ *
+ *     T1        T2
+ *     --        --
+ *
+ *     X1
+ *     R2        R2
+ *     X3        X1   [No deadlock]
+ */
+static void rlock_ABBA_case82a(void)
+{
+	WL(X1);
+	RL(Y1);
+	WL(Z1);
+	WU(Z1);
+	RU(Y1);
+	WU(X1);
+
+	RL(Y1);
+	WL(X1);
+	WU(X1);
+	RU(Y1);
+}
+
+static void rwsem_ABBA_case82a(void)
+{
+	WSL(X1);
+	RSL(Y1);
+	WSL(Z1);
+	WSU(Z1);
+	RSU(Y1);
+	WSU(X1);
+
+	RSL(Y1);
+	WSL(X1);
+	WSU(X1);
+	RSU(Y1);
+}
+
+static void rlock_ABBA_case82b(void)
+{
+	RL(X1);
+	RL(Y1);
+	WL(Z1);
+	WU(Z1);
+	RU(Y1);
+	RU(X1);
+
+	RL(Y1);
+	WL(X1);
+	WU(X1);
+	RU(Y1);
+}
+
+static void rwsem_ABBA_case82b(void)
+{
+	RSL(X1);
+	RSL(Y1);
+	WSL(Z1);
+	WSU(Z1);
+	RSU(Y1);
+	RSU(X1);
+
+	RSL(Y1);
+	WSL(X1);
+	WSU(X1);
+	RSU(Y1);
+}
+
+static void rlock_ABBA_case82c(void)
+{
+	WL(X1);
+	RL(Y1);
+	RL(Z1);
+	RU(Z1);
+	RU(Y1);
+	WU(X1);
+
+	RL(Y1);
+	WL(X1);
+	WU(X1);
+	RU(Y1);
+}
+
+static void rwsem_ABBA_case82c(void)
+{
+	WSL(X1);
+	RSL(Y1);
+	RSL(Z1);
+	RSU(Z1);
+	RSU(Y1);
+	WSU(X1);
+
+	RSL(Y1);
+	WSL(X1);
+	WSU(X1);
+	RSU(Y1);
+}
+
+static void rlock_ABBA_case82d(void)
+{
+	RL(X1);
+	RL(Y1);
+	RL(Z1);
+	RU(Z1);
+	RU(Y1);
+	RU(X1);
+
+	RL(Y1);
+	WL(X1);
+	WU(X1);
+	RU(Y1);
+}
+
+static void rwsem_ABBA_case82d(void)
+{
+	RSL(X1);
+	RSL(Y1);
+	RSL(Z1);
+	RSU(Z1);
+	RSU(Y1);
+	RSU(X1);
+
+	RSL(Y1);
+	WSL(X1);
+	WSU(X1);
+	RSU(Y1);
+}
+
+/*
  * ABBA deadlock:
  *
  * Should fail except for either A or B is read lock.
@@ -2071,6 +2937,162 @@ void locking_selftest(void)
 	pr_cont("             |");
 	dotest(rwsem_ABBA3, FAILURE, LOCKTYPE_RWSEM);
 
+	print_testname("read-write lock ABBA Case #1");
+	pr_cont("             |");
+	dotest(rlock_ABBA_case1, FAILURE, LOCKTYPE_RWLOCK);
+	pr_cont("             |");
+	dotest(rwsem_ABBA_case1, FAILURE, LOCKTYPE_RWSEM);
+
+	print_testname("read-write lock ABBA Case #2a");
+	pr_cont("             |");
+	dotest(rlock_ABBA_case2a, SUCCESS, LOCKTYPE_RWLOCK);
+	pr_cont("             |");
+	dotest(rwsem_ABBA_case2a, SUCCESS, LOCKTYPE_RWSEM);
+
+	print_testname("read-write lock ABBA Case #2b");
+	pr_cont("             |");
+	dotest(rlock_ABBA_case2b, SUCCESS, LOCKTYPE_RWLOCK);
+	pr_cont("             |");
+	dotest(rwsem_ABBA_case2b, SUCCESS, LOCKTYPE_RWSEM);
+
+	print_testname("read-write lock ABBA Case #3a");
+	pr_cont("             |");
+	dotest(rlock_ABBA_case3a, FAILURE, LOCKTYPE_RWLOCK);
+	pr_cont("             |");
+	dotest(rwsem_ABBA_case3a, FAILURE, LOCKTYPE_RWSEM);
+
+	print_testname("read-write lock ABBA Case #3b");
+	pr_cont("             |");
+	dotest(rlock_ABBA_case3b, FAILURE, LOCKTYPE_RWLOCK);
+	pr_cont("             |");
+	dotest(rwsem_ABBA_case3b, FAILURE, LOCKTYPE_RWSEM);
+
+	print_testname("read-write lock ABBA Case #3c");
+	pr_cont("             |");
+	dotest(rlock_ABBA_case3c, FAILURE, LOCKTYPE_RWLOCK);
+	pr_cont("             |");
+	dotest(rwsem_ABBA_case3c, FAILURE, LOCKTYPE_RWSEM);
+
+	print_testname("read-write lock ABBA Case #3d");
+	pr_cont("             |");
+	dotest(rlock_ABBA_case3d, FAILURE, LOCKTYPE_RWLOCK);
+	pr_cont("             |");
+	dotest(rwsem_ABBA_case3d, FAILURE, LOCKTYPE_RWSEM);
+
+	print_testname("read-write lock ABBA Case #4a");
+	pr_cont("             |");
+	dotest(rlock_ABBA_case4a, FAILURE, LOCKTYPE_RWLOCK);
+	pr_cont("             |");
+	dotest(rwsem_ABBA_case4a, FAILURE, LOCKTYPE_RWSEM);
+
+	print_testname("read-write lock ABBA Case #4b");
+	pr_cont("             |");
+	dotest(rlock_ABBA_case4b, FAILURE, LOCKTYPE_RWLOCK);
+	pr_cont("             |");
+	dotest(rwsem_ABBA_case4b, FAILURE, LOCKTYPE_RWSEM);
+
+	print_testname("read-write lock ABBA Case #5a");
+	pr_cont("             |");
+	dotest(rlock_ABBA_case5a, SUCCESS, LOCKTYPE_RWLOCK);
+	pr_cont("             |");
+	dotest(rwsem_ABBA_case5a, SUCCESS, LOCKTYPE_RWSEM);
+
+	print_testname("read-write lock ABBA Case #5b");
+	pr_cont("             |");
+	dotest(rlock_ABBA_case5b, SUCCESS, LOCKTYPE_RWLOCK);
+	pr_cont("             |");
+	dotest(rwsem_ABBA_case5b, SUCCESS, LOCKTYPE_RWSEM);
+
+	print_testname("read-write lock ABBA Case #6a");
+	pr_cont("             |");
+	dotest(rlock_ABBA_case6a, FAILURE, LOCKTYPE_RWLOCK);
+	pr_cont("             |");
+	dotest(rwsem_ABBA_case6a, FAILURE, LOCKTYPE_RWSEM);
+
+	print_testname("read-write lock ABBA Case #6b");
+	pr_cont("             |");
+	dotest(rlock_ABBA_case6b, FAILURE, LOCKTYPE_RWLOCK);
+	pr_cont("             |");
+	dotest(rwsem_ABBA_case6b, FAILURE, LOCKTYPE_RWSEM);
+
+	print_testname("read-write lock ABBA Case #6c");
+	pr_cont("             |");
+	dotest(rlock_ABBA_case6c, FAILURE, LOCKTYPE_RWLOCK);
+	pr_cont("             |");
+	dotest(rwsem_ABBA_case6c, FAILURE, LOCKTYPE_RWSEM);
+
+	print_testname("read-write lock ABBA Case #7a");
+	pr_cont("             |");
+	dotest(rlock_ABBA_case7a, FAILURE, LOCKTYPE_RWLOCK);
+	pr_cont("             |");
+	dotest(rwsem_ABBA_case7a, FAILURE, LOCKTYPE_RWSEM);
+
+	print_testname("read-write lock ABBA Case #7b");
+	pr_cont("             |");
+	dotest(rlock_ABBA_case7b, FAILURE, LOCKTYPE_RWLOCK);
+	pr_cont("             |");
+	dotest(rwsem_ABBA_case7b, FAILURE, LOCKTYPE_RWSEM);
+
+	print_testname("read-write lock ABBA Case #7c");
+	pr_cont("             |");
+	dotest(rlock_ABBA_case7c, FAILURE, LOCKTYPE_RWLOCK);
+	pr_cont("             |");
+	dotest(rwsem_ABBA_case7c, FAILURE, LOCKTYPE_RWSEM);
+
+	print_testname("read-write lock ABBA Case #7d");
+	pr_cont("             |");
+	dotest(rlock_ABBA_case7d, FAILURE, LOCKTYPE_RWLOCK);
+	pr_cont("             |");
+	dotest(rwsem_ABBA_case7d, FAILURE, LOCKTYPE_RWSEM);
+
+	print_testname("read-write lock ABBA Case #8.1a");
+	pr_cont("             |");
+	dotest(rlock_ABBA_case81a, SUCCESS, LOCKTYPE_RWLOCK);
+	pr_cont("             |");
+	dotest(rwsem_ABBA_case81a, SUCCESS, LOCKTYPE_RWSEM);
+
+	print_testname("read-write lock ABBA Case #8.1b");
+	pr_cont("             |");
+	dotest(rlock_ABBA_case81b, SUCCESS, LOCKTYPE_RWLOCK);
+	pr_cont("             |");
+	dotest(rwsem_ABBA_case81b, SUCCESS, LOCKTYPE_RWSEM);
+
+	print_testname("read-write lock ABBA Case #8.1c");
+	pr_cont("             |");
+	dotest(rlock_ABBA_case81c, SUCCESS, LOCKTYPE_RWLOCK);
+	pr_cont("             |");
+	dotest(rwsem_ABBA_case81c, SUCCESS, LOCKTYPE_RWSEM);
+
+	print_testname("read-write lock ABBA Case #8.1d");
+	pr_cont("             |");
+	dotest(rlock_ABBA_case81d, SUCCESS, LOCKTYPE_RWLOCK);
+	pr_cont("             |");
+	dotest(rwsem_ABBA_case81d, SUCCESS, LOCKTYPE_RWSEM);
+
+	print_testname("read-write lock ABBA Case #8.2a");
+	pr_cont("             |");
+	dotest(rlock_ABBA_case82a, SUCCESS, LOCKTYPE_RWLOCK);
+	pr_cont("             |");
+	dotest(rwsem_ABBA_case82a, SUCCESS, LOCKTYPE_RWSEM);
+
+	print_testname("read-write lock ABBA Case #8.2b");
+	pr_cont("             |");
+	dotest(rlock_ABBA_case82b, SUCCESS, LOCKTYPE_RWLOCK);
+	pr_cont("             |");
+	dotest(rwsem_ABBA_case82b, SUCCESS, LOCKTYPE_RWSEM);
+
+	print_testname("read-write lock ABBA Case #8.2c");
+	pr_cont("             |");
+	dotest(rlock_ABBA_case82c, SUCCESS, LOCKTYPE_RWLOCK);
+	pr_cont("             |");
+	dotest(rwsem_ABBA_case82c, SUCCESS, LOCKTYPE_RWSEM);
+
+	print_testname("read-write lock ABBA Case #8.2d");
+	pr_cont("             |");
+	dotest(rlock_ABBA_case82d, SUCCESS, LOCKTYPE_RWLOCK);
+	pr_cont("             |");
+	dotest(rwsem_ABBA_case82d, SUCCESS, LOCKTYPE_RWSEM);
+
 	printk("  --------------------------------------------------------------------------\n");
 
 	/*
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH v2 17/17] locking/lockdep: Remove irq-safe to irq-unsafe read check
  2019-05-16  7:59 [PATCH v2 00/17] Support for read-write lock deadlock detection Yuyang Du
                   ` (15 preceding siblings ...)
  2019-05-16  8:00 ` [PATCH v2 16/17] locking/lockdep: Add more lockdep selftest cases Yuyang Du
@ 2019-05-16  8:00 ` Yuyang Du
  16 siblings, 0 replies; 21+ messages in thread
From: Yuyang Du @ 2019-05-16  8:00 UTC (permalink / raw)
  To: peterz, will.deacon, mingo
  Cc: bvanassche, ming.lei, frederic, tglx, boqun.feng, paulmck,
	linux-kernel, Yuyang Du

We have a lockdep warning:

  ========================================================
  WARNING: possible irq lock inversion dependency detected
  5.1.0-rc7+ #141 Not tainted
  --------------------------------------------------------
  kworker/8:2/328 just changed the state of lock:
  0000000007f1a95b (&(&host->lock)->rlock){-...}, at: ata_bmdma_interrupt+0x27/0x1c0 [libata]
  but this lock took another, HARDIRQ-READ-unsafe lock in the past:
   (&trig->leddev_list_lock){.+.?}

and interrupts could create inverse lock ordering between them.

other info that might help us debug this:
   Possible interrupt unsafe locking scenario:

         CPU0                    CPU1
         ----                    ----
    lock(&trig->leddev_list_lock);
                                 local_irq_disable();
                                 lock(&(&host->lock)->rlock);
                                 lock(&trig->leddev_list_lock);
    <Interrupt>
      lock(&(&host->lock)->rlock);

 *** DEADLOCK ***

This splat is a false positive, enabled by the addition of recursive read
locks to the graph. Specifically, trig->leddev_list_lock is an rwlock_t,
which was not in the graph before recursive read lock support was added to
lockdep.

This false positive is caused by a check in the IRQ usage checking that
itself produces false positives.

In mark_lock_irq(), the following checks are currently performed:

   ----------------------------------
  |   ->      | unsafe | read unsafe |
  |----------------------------------|
  | safe      |  F  B  |    F* B*    |
  |----------------------------------|
  | read safe |  F* B* |      -      |
   ----------------------------------

Where:
F: check_usage_forwards
B: check_usage_backwards
*: check enabled by STRICT_READ_CHECKS

But actually, a safe -> read-unsafe dependency does not create a deadlock
scenario.

Fix this by simply removing those two checks. And since read-safe -> unsafe
is indeed a problem, the remaining checks are not really "strict" per se, so
remove the STRICT_READ_CHECKS macro as well. We are left with the following
checks:

   ----------------------------------
  |   ->      | unsafe | read unsafe |
  |----------------------------------|
  | safe      |  F  B  |      -      |
  |----------------------------------|
  | read safe |  F  B  |      -      |
   ----------------------------------

Signed-off-by: Yuyang Du <duyuyang@gmail.com>
---
 kernel/locking/lockdep.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index cd1d515..bc36fbf 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -3450,8 +3450,6 @@ static int SOFTIRQ_verbose(struct lock_class *class)
 	return 0;
 }
 
-#define STRICT_READ_CHECKS	1
-
 static int (*state_verbose_f[])(struct lock_class *class) = {
 #define LOCKDEP_STATE(__STATE) \
 	__STATE##_verbose,
@@ -3497,7 +3495,7 @@ typedef int (*check_usage_f)(struct task_struct *, struct held_lock *,
 	 * Validate that the lock dependencies don't have conflicting usage
 	 * states.
 	 */
-	if ((!read || STRICT_READ_CHECKS) &&
+	if ((!read || !dir) &&
 			!usage(curr, this, excl_bit, state_name(new_bit & ~LOCK_USAGE_READ_MASK)))
 		return 0;
 
@@ -3508,7 +3506,7 @@ typedef int (*check_usage_f)(struct task_struct *, struct held_lock *,
 		if (!valid_state(curr, this, new_bit, excl_bit + LOCK_USAGE_READ_MASK))
 			return 0;
 
-		if (STRICT_READ_CHECKS &&
+		if (dir &&
 			!usage(curr, this, excl_bit + LOCK_USAGE_READ_MASK,
 				state_name(new_bit + LOCK_USAGE_READ_MASK)))
 			return 0;
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 02/17] locking/lockdep: Add read-write type for dependency
  2019-05-16  8:00 ` [PATCH v2 02/17] locking/lockdep: Add read-write type for dependency Yuyang Du
@ 2019-05-29 11:37   ` Boqun Feng
  0 siblings, 0 replies; 21+ messages in thread
From: Boqun Feng @ 2019-05-29 11:37 UTC (permalink / raw)
  To: Yuyang Du
  Cc: peterz, will.deacon, mingo, bvanassche, ming.lei, frederic, tglx,
	paulmck, linux-kernel

On Thu, May 16, 2019 at 04:00:00PM +0800, Yuyang Du wrote:
> Direct dependency needs to keep track of its locks' read-write types. A
> union field is added to lock_list struct so the type is stored there as
> this:
> 
> 	lock_type[1] (u16), lock_type[0] (u16)
> 
> 			or:
> 
> 	dep_type (int)
> 
> where value:
> 
>  0: exclusive / write
>  1: read
>  2: recursive read
> 
> Note that (int) dep_type value may vary with different architectural
> endianness, so use helper functions to operate on these types.
> 
> Signed-off-by: Yuyang Du <duyuyang@gmail.com>
> ---
>  include/linux/lockdep.h            | 12 ++++++++++++
>  kernel/locking/lockdep.c           | 34 +++++++++++++++++++++++++++++++---
>  kernel/locking/lockdep_internals.h |  3 +++
>  3 files changed, 46 insertions(+), 3 deletions(-)
> 
> diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
> index 1009c47..bc09c85 100644
> --- a/include/linux/lockdep.h
> +++ b/include/linux/lockdep.h
> @@ -195,6 +195,18 @@ struct lock_list {
>  	struct lock_class		*links_to;
>  	struct lock_trace		trace;
>  	int				distance;
> +	/*
> +	 * This field keeps track of the read-write type of this dependency.
> +	 *
> +	 * With L1 -> L2:
> +	 *
> +	 * lock_type[0] stores the type of L1, while lock_type[1] stores the
> +	 * type of L2.
> +	 */
> +	union {
> +		int	dep_type;
> +		u16	lock_type[2];

This takes extra space for storing the type of dependencies. In my
previous patchset, only 2 bits are used, because you only need to care
about four types of dependencies:

1.	Exclusive -> Recursive
2.	Shared (read or recursive read) -> Recursive
3.	Exclusive -> Non-Recursive (write or non-recursive readers)
4.	Shared (read or recursive read) -> Non-Recursive

To understand why, here are my slides from the Linux Plumbers Conference
last year:

	https://www.linuxplumbersconf.org/event/2/contributions/74/attachments/67/78/Recursive_read_deadlocks_and_Where_to_find_them.pdf

And we can steal two bits from "distance", since 2^30 would be enough.
I think it's important to save the size of lock_list since we will have
a lot of them.
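
A rough sketch of what this packing could look like (the names below are
made up purely for illustration; neither patchset defines them):

	/*
	 * Illustration only: the four dependency kinds above fit in two
	 * bits, which can be stolen from ->distance instead of growing
	 * struct lock_list by a whole union.
	 */
	enum dep_kind {
		DEP_EXCL_TO_RECURSIVE,		/* 1. Exclusive -> Recursive */
		DEP_SHARED_TO_RECURSIVE,	/* 2. Shared -> Recursive */
		DEP_EXCL_TO_NONRECURSIVE,	/* 3. Exclusive -> Non-Recursive */
		DEP_SHARED_TO_NONRECURSIVE,	/* 4. Shared -> Non-Recursive */
	};

	struct lock_list_sketch {
		unsigned int dep_type  : 2;	/* enum dep_kind */
		unsigned int distance  : 30;	/* 2^30 is still plenty */
	};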

Regards,
Boqun

> +	};
>  
>  	/*
>  	 * The parent field is used to implement breadth-first search, and the
> diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
> index fb6be63..e7eedbf 100644
> --- a/kernel/locking/lockdep.c
> +++ b/kernel/locking/lockdep.c
> @@ -1225,7 +1225,7 @@ static struct lock_list *alloc_list_entry(void)
>  static int add_lock_to_list(struct lock_class *this,
>  			    struct lock_class *links_to, struct list_head *head,
>  			    unsigned long ip, int distance,
> -			    struct lock_trace *trace)
> +			    struct lock_trace *trace, int dep_type)
>  {
>  	struct lock_list *entry;
>  	/*
> @@ -1240,6 +1240,8 @@ static int add_lock_to_list(struct lock_class *this,
>  	entry->links_to = links_to;
>  	entry->distance = distance;
>  	entry->trace = *trace;
> +	entry->dep_type = dep_type;
> +
>  	/*
>  	 * Both allocation and removal are done under the graph lock; but
>  	 * iteration is under RCU-sched; see look_up_lock_class() and
> @@ -1677,6 +1679,30 @@ unsigned long lockdep_count_backward_deps(struct lock_class *class)
>  	return ret;
>  }
>  
> +static inline int get_dep_type(struct held_lock *lock1, struct held_lock *lock2)
> +{
> +	/*
> +	 * With dependency lock1 -> lock2:
> +	 *
> +	 * lock_type[0] is lock1, while lock_type[1] is lock2.
> +	 *
> +	 * Avoid architectural endianness difference composing dep_type.
> +	 */
> +	u16 type[2] = { lock1->read, lock2->read };
> +
> +	return *(int *)type;
> +}
> +
> +static inline int get_lock_type1(struct lock_list *lock)
> +{
> +	return lock->lock_type[0];
> +}
> +
> +static inline int get_lock_type2(struct lock_list *lock)
> +{
> +	return lock->lock_type[1];
> +}
> +
>  /*
>   * Check that the dependency graph starting at <src> can lead to
>   * <target> or not. Print an error and return 0 if it does.
> @@ -2446,14 +2472,16 @@ static inline void inc_chains(void)
>  	 */
>  	ret = add_lock_to_list(hlock_class(next), hlock_class(prev),
>  			       &hlock_class(prev)->locks_after,
> -			       next->acquire_ip, distance, trace);
> +			       next->acquire_ip, distance, trace,
> +			       get_dep_type(prev, next));
>  
>  	if (!ret)
>  		return 0;
>  
>  	ret = add_lock_to_list(hlock_class(prev), hlock_class(next),
>  			       &hlock_class(next)->locks_before,
> -			       next->acquire_ip, distance, trace);
> +			       next->acquire_ip, distance, trace,
> +			       get_dep_type(next, prev));
>  	if (!ret)
>  		return 0;
>  
> diff --git a/kernel/locking/lockdep_internals.h b/kernel/locking/lockdep_internals.h
> index 150ec3f..c287bcb 100644
> --- a/kernel/locking/lockdep_internals.h
> +++ b/kernel/locking/lockdep_internals.h
> @@ -26,6 +26,9 @@ enum lock_usage_bit {
>  #define LOCK_USAGE_DIR_MASK  2
>  #define LOCK_USAGE_STATE_MASK (~(LOCK_USAGE_READ_MASK | LOCK_USAGE_DIR_MASK))
>  
> +#define LOCK_TYPE_BITS	16
> +#define LOCK_TYPE_MASK	0xFFFF
> +
>  /*
>   * Usage-state bitmasks:
>   */
> -- 
> 1.8.3.1
> 


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 11/17] locking/lockdep: Adjust lockdep selftest cases
  2019-05-16  8:00 ` [PATCH v2 11/17] locking/lockdep: Adjust lockdep selftest cases Yuyang Du
@ 2019-05-29 11:44   ` Boqun Feng
  2019-05-30  7:37     ` Yuyang Du
  0 siblings, 1 reply; 21+ messages in thread
From: Boqun Feng @ 2019-05-29 11:44 UTC (permalink / raw)
  To: Yuyang Du
  Cc: peterz, will.deacon, mingo, bvanassche, ming.lei, frederic, tglx,
	paulmck, linux-kernel

On Thu, May 16, 2019 at 04:00:09PM +0800, Yuyang Du wrote:
> With read-write lock support, some read-write lock cases need to be updated,
> specifically, some read-lock involved deadlocks are actually not deadlocks.
> Hope I am not wildly wrong.
> 
> Signed-off-by: Yuyang Du <duyuyang@gmail.com>
> ---
>  lib/locking-selftest.c | 44 +++++++++++++++++++++++++++++++++-----------
>  1 file changed, 33 insertions(+), 11 deletions(-)
> 
> diff --git a/lib/locking-selftest.c b/lib/locking-selftest.c
> index a170554..f83f047 100644
> --- a/lib/locking-selftest.c
> +++ b/lib/locking-selftest.c
> @@ -424,7 +424,7 @@ static void rwsem_ABBA2(void)
>  	ML(Y1);
>  	RSL(X1);
>  	RSU(X1);
> -	MU(Y1); // should fail
> +	MU(Y1); // should NOT fail

I'm afraid you got this wrong ;-) readers of rwsem are non-recursive if I
understand correctly, so a case like:

	Task 0			Task 1

	down_read(A);
				mutex_lock(B);

				down_read(A);
	mutex_lock(B);

can be a deadlock, if we consider a third independent task:

	Task 0			Task 1			Task 2

	down_read(A);
				mutex_lock(B);
							down_write(A);
				down_read(A);
	mutex_lock(B);

in this case, Task 1 cannot get its lock for A; therefore, deadlock.

Regards,
Boqun

>  }
>  
>  
> @@ -462,12 +462,13 @@ static void rwsem_ABBA3(void)
>  
>  /*
>   * ABBA deadlock:
> + *
> + * Should fail except for either A or B is read lock.
>   */
> -
>  #define E()					\
>  						\
>  	LOCK_UNLOCK_2(A, B);			\
> -	LOCK_UNLOCK_2(B, A); /* fail */
> +	LOCK_UNLOCK_2(B, A);
>  
>  /*
>   * 6 testcases:
> @@ -494,13 +495,15 @@ static void rwsem_ABBA3(void)
>  
>  /*
>   * AB BC CA deadlock:
> + *
> + * Should fail except for read or recursive-read locks.
>   */
>  
>  #define E()					\
>  						\
>  	LOCK_UNLOCK_2(A, B);			\
>  	LOCK_UNLOCK_2(B, C);			\
> -	LOCK_UNLOCK_2(C, A); /* fail */
> +	LOCK_UNLOCK_2(C, A);
>  
>  /*
>   * 6 testcases:
> @@ -527,13 +530,15 @@ static void rwsem_ABBA3(void)
>  
>  /*
>   * AB CA BC deadlock:
> + *
> + * Should fail except for read or recursive-read locks.
>   */
>  
>  #define E()					\
>  						\
>  	LOCK_UNLOCK_2(A, B);			\
>  	LOCK_UNLOCK_2(C, A);			\
> -	LOCK_UNLOCK_2(B, C); /* fail */
> +	LOCK_UNLOCK_2(B, C);
>  
>  /*
>   * 6 testcases:
> @@ -560,6 +565,8 @@ static void rwsem_ABBA3(void)
>  
>  /*
>   * AB BC CD DA deadlock:
> + *
> + * Should fail except for read or recursive-read locks.
>   */
>  
>  #define E()					\
> @@ -567,7 +574,7 @@ static void rwsem_ABBA3(void)
>  	LOCK_UNLOCK_2(A, B);			\
>  	LOCK_UNLOCK_2(B, C);			\
>  	LOCK_UNLOCK_2(C, D);			\
> -	LOCK_UNLOCK_2(D, A); /* fail */
> +	LOCK_UNLOCK_2(D, A);
>  
>  /*
>   * 6 testcases:
> @@ -594,13 +601,15 @@ static void rwsem_ABBA3(void)
>  
>  /*
>   * AB CD BD DA deadlock:
> + *
> + * Should fail except for read or recursive-read locks.
>   */
>  #define E()					\
>  						\
>  	LOCK_UNLOCK_2(A, B);			\
>  	LOCK_UNLOCK_2(C, D);			\
>  	LOCK_UNLOCK_2(B, D);			\
> -	LOCK_UNLOCK_2(D, A); /* fail */
> +	LOCK_UNLOCK_2(D, A);
>  
>  /*
>   * 6 testcases:
> @@ -627,13 +636,15 @@ static void rwsem_ABBA3(void)
>  
>  /*
>   * AB CD BC DA deadlock:
> + *
> + * Should fail except for read or recursive-read locks.
>   */
>  #define E()					\
>  						\
>  	LOCK_UNLOCK_2(A, B);			\
>  	LOCK_UNLOCK_2(C, D);			\
>  	LOCK_UNLOCK_2(B, C);			\
> -	LOCK_UNLOCK_2(D, A); /* fail */
> +	LOCK_UNLOCK_2(D, A);
>  
>  /*
>   * 6 testcases:
> @@ -1238,7 +1249,7 @@ static inline void print_testname(const char *testname)
>  /*
>   * 'read' variant: rlocks must not trigger.
>   */
> -#define DO_TESTCASE_6R(desc, name)				\
> +#define DO_TESTCASE_6AA(desc, name)				\
>  	print_testname(desc);					\
>  	dotest(name##_spin, FAILURE, LOCKTYPE_SPIN);		\
>  	dotest(name##_wlock, FAILURE, LOCKTYPE_RWLOCK);		\
> @@ -1249,6 +1260,17 @@ static inline void print_testname(const char *testname)
>  	dotest_rt(name##_rtmutex, FAILURE, LOCKTYPE_RTMUTEX);	\
>  	pr_cont("\n");
>  
> +#define DO_TESTCASE_6R(desc, name)				\
> +	print_testname(desc);					\
> +	dotest(name##_spin, FAILURE, LOCKTYPE_SPIN);		\
> +	dotest(name##_wlock, FAILURE, LOCKTYPE_RWLOCK);		\
> +	dotest(name##_rlock, SUCCESS, LOCKTYPE_RWLOCK);		\
> +	dotest(name##_mutex, FAILURE, LOCKTYPE_MUTEX);		\
> +	dotest(name##_wsem, FAILURE, LOCKTYPE_RWSEM);		\
> +	dotest(name##_rsem, SUCCESS, LOCKTYPE_RWSEM);		\
> +	dotest_rt(name##_rtmutex, FAILURE, LOCKTYPE_RTMUTEX);	\
> +	pr_cont("\n");
> +
>  #define DO_TESTCASE_2I(desc, name, nr)				\
>  	DO_TESTCASE_1("hard-"desc, name##_hard, nr);		\
>  	DO_TESTCASE_1("soft-"desc, name##_soft, nr);
> @@ -1991,7 +2013,7 @@ void locking_selftest(void)
>  	debug_locks_silent = !debug_locks_verbose;
>  	lockdep_set_selftest_task(current);
>  
> -	DO_TESTCASE_6R("A-A deadlock", AA);
> +	DO_TESTCASE_6AA("A-A deadlock", AA);
>  	DO_TESTCASE_6R("A-B-B-A deadlock", ABBA);
>  	DO_TESTCASE_6R("A-B-B-C-C-A deadlock", ABBCCA);
>  	DO_TESTCASE_6R("A-B-C-A-B-C deadlock", ABCABC);
> @@ -2048,7 +2070,7 @@ void locking_selftest(void)
>  	pr_cont("             |");
>  	dotest(rlock_ABBA2, SUCCESS, LOCKTYPE_RWLOCK);
>  	pr_cont("             |");
> -	dotest(rwsem_ABBA2, FAILURE, LOCKTYPE_RWSEM);
> +	dotest(rwsem_ABBA2, SUCCESS, LOCKTYPE_RWSEM);
>  
>  	print_testname("mixed write-lock/lock-write ABBA");
>  	pr_cont("             |");
> -- 
> 1.8.3.1
> 

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 11/17] locking/lockdep: Adjust lockdep selftest cases
  2019-05-29 11:44   ` Boqun Feng
@ 2019-05-30  7:37     ` Yuyang Du
  0 siblings, 0 replies; 21+ messages in thread
From: Yuyang Du @ 2019-05-30  7:37 UTC (permalink / raw)
  To: Boqun Feng
  Cc: Peter Zijlstra, will.deacon, Ingo Molnar, Bart Van Assche,
	ming.lei, Frederic Weisbecker, tglx, paulmck, LKML

Thanks for review.

On Wed, 29 May 2019 at 19:44, Boqun Feng <boqun.feng@gmail.com> wrote:
>
> > @@ -424,7 +424,7 @@ static void rwsem_ABBA2(void)
> >       ML(Y1);
> >       RSL(X1);
> >       RSU(X1);
> > -     MU(Y1); // should fail
> > +     MU(Y1); // should NOT fail
>
> I'm afraid you get this wrong ;-) reader of rwsem is non-recursive if I
> understand correctly, so case like:
>
>         Task 0                  Task 1
>
>         down_read(A);
>                                 mutex_lock(B);
>
>                                 down_read(A);
>         mutex_lock(B);
>
> can be a deadlock, if we consider a third independent task:
>
>         Task 0                  Task 1                  Task 2
>
>         down_read(A);
>                                 mutex_lock(B);
>                                                         down_write(A);
>                                 down_read(A);
>         mutex_lock(B);
>
> in this case, Task 1 can not get it's lock for A, therefore, deadlock.

Well, yes. This situation is damn counterintuitive and looks
suboptimal, but I guess I can understand why this is done so. It is a
shame read locks are not 100% concurrent. I wish I were bright enough
to have figured this out on my own.

Ok, now this perhaps can be easily remedied. It is merely a matter of
finally setting the lock exclusiveness table straight, and from there
the only change seems to be that now only recursive-read locks do not
deadlock.

Thanks,
Yuyang

^ permalink raw reply	[flat|nested] 21+ messages in thread

end of thread, other threads:[~2019-05-30  7:38 UTC | newest]

Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-05-16  7:59 [PATCH v2 00/17] Support for read-write lock deadlock detection Yuyang Du
2019-05-16  7:59 ` [PATCH v2 01/17] locking/lockdep: Add lock type enum to explicitly specify read or write locks Yuyang Du
2019-05-16  8:00 ` [PATCH v2 02/17] locking/lockdep: Add read-write type for dependency Yuyang Du
2019-05-29 11:37   ` Boqun Feng
2019-05-16  8:00 ` [PATCH v2 03/17] locking/lockdep: Add helper functions to operate on the searched path Yuyang Du
2019-05-16  8:00 ` [PATCH v2 04/17] locking/lockdep: Update direct dependency's read-write type if it exists Yuyang Du
2019-05-16  8:00 ` [PATCH v2 05/17] locking/lockdep: Rename deadlock check functions Yuyang Du
2019-05-16  8:00 ` [PATCH v2 06/17] locking/lockdep: Adjust BFS algorithm to support multiple matches Yuyang Du
2019-05-16  8:00 ` [PATCH v2 07/17] locking/lockdep: Introduce mark_lock_unaccessed() Yuyang Du
2019-05-16  8:00 ` [PATCH v2 08/17] locking/lockdep: Introduce chain_hlocks_type for held lock's read-write type Yuyang Du
2019-05-16  8:00 ` [PATCH v2 09/17] locking/lockdep: Hash held lock's read-write type into chain key Yuyang Du
2019-05-16  8:00 ` [PATCH v2 10/17] locking/lockdep: Support read-write lock's deadlock detection Yuyang Du
2019-05-16  8:00 ` [PATCH v2 11/17] locking/lockdep: Adjust lockdep selftest cases Yuyang Du
2019-05-29 11:44   ` Boqun Feng
2019-05-30  7:37     ` Yuyang Du
2019-05-16  8:00 ` [PATCH v2 12/17] locking/lockdep: Remove useless lock type assignment Yuyang Du
2019-05-16  8:00 ` [PATCH v2 13/17] locking/lockdep: Add nest lock type Yuyang Du
2019-05-16  8:00 ` [PATCH v2 14/17] locking/lockdep: Support recursive read locks Yuyang Du
2019-05-16  8:00 ` [PATCH v2 15/17] locking/lockdep: Adjust selftest case for recursive read lock Yuyang Du
2019-05-16  8:00 ` [PATCH v2 16/17] locking/lockdep: Add more lockdep selftest cases Yuyang Du
2019-05-16  8:00 ` [PATCH v2 17/17] locking/lockdep: Remove irq-safe to irq-unsafe read check Yuyang Du
